WorldWideScience

Sample records for maximum producing time

  1. Optimum design of exploding pusher target to produce maximum neutrons

    International Nuclear Information System (INIS)

    Kitagawa, Y.; Miyanaga, N.; Kato, Y.; Nakatsuka, M.; Nishiguchi, A.; Yabe, T.; Yamanaka, C.

    1985-03-01

    Exploding pusher target experiments have been conducted with the 1.052-μm GEKKO MII two-beam glass laser system to design an optimum target, one that couples to the incident laser light most effectively and produces the maximum neutron yield. Since hot electrons preheat the shell entirely in spite of strongly nonuniform irradiation, a simple model can design the optimum target, whose shell/fuel interface is accelerated to 0.5 to 0.7 times the initial radius within a laser pulse. A two-dimensional computer simulation supports this target design. The scaling of the neutron yield N with the laser power P is N ∝ P^(2.4±0.4). (author)
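As a quick numerical illustration of what the fitted exponent implies (the helper below is illustrative, not from the paper), doubling the laser power raises the yield by roughly a factor of five:

```python
# Illustration of the reported scaling N ∝ P^(2.4±0.4).
# Only yield ratios are meaningful; the proportionality constant cancels.

def yield_ratio(p2, p1, exponent=2.4):
    """Factor by which the neutron yield changes when laser power goes p1 -> p2."""
    return (p2 / p1) ** exponent

print(round(yield_ratio(2.0, 1.0), 2))  # 5.28, i.e. ~5x per power doubling
```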

  2. Multiperiod Maximum Loss is time unit invariant.

    Science.gov (United States)

    Kovacevic, Raimund M; Breuer, Thomas

    2016-01-01

    Time unit invariance is introduced as an additional requirement for multiperiod risk measures: for a constant portfolio under an i.i.d. risk factor process, the multiperiod risk should equal the one period risk of the aggregated loss, for an appropriate choice of parameters and independent of the portfolio and its distribution. Multiperiod Maximum Loss over a sequence of Kullback-Leibler balls is time unit invariant. This is also the case for the entropic risk measure. On the other hand, multiperiod Value at Risk and multiperiod Expected Shortfall are not time unit invariant.

  3. Maximum likelihood window for time delay estimation

    International Nuclear Information System (INIS)

    Lee, Young Sup; Yoon, Dong Jin; Kim, Chi Yup

    2004-01-01

    Time delay estimation for the detection of leak location in underground pipelines is critically important. Because the exact leak location depends upon the precision of the time delay between sensor signals due to leak noise and the speed of elastic waves, the estimation of time delay has been one of the key issues in leak locating with the arrival time difference method. In this study, an optimal maximum likelihood window is considered to obtain a better estimation of the time delay. Experiments have shown that this method provides much clearer and more precise peaks in the cross-correlation functions of leak signals. The leak location error has been less than 1% of the distance between sensors; for example, the error was not greater than 3 m for 300 m long underground pipelines. Apart from the experiment, an intensive theoretical analysis in terms of signal processing is also given. The improved leak locating with the suggested method is due to the windowing effect in the frequency domain, which applies a weighting to the significant frequencies.
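The underlying arrival-time-difference step, here without the maximum likelihood window weighting the study proposes, can be sketched with a plain cross-correlation peak search (the signal, sampling rate, and delay below are synthetic):

```python
import numpy as np

def estimate_delay(x, y, fs):
    """Delay of y relative to x, in seconds, from the cross-correlation peak."""
    corr = np.correlate(y, x, mode="full")
    lag = np.argmax(corr) - (len(x) - 1)  # lag in samples
    return lag / fs

# Synthetic leak-like noise arriving 50 samples later at the second sensor.
rng = np.random.default_rng(0)
n, delay = 1000, 50
s = rng.standard_normal(n)
x = s
y = np.concatenate([np.zeros(delay), s[:-delay]])
print(estimate_delay(x, y, fs=1000.0))  # 0.05 s
```

With the time delay and the elastic wave speed known, the leak position follows from the arrival time difference method described above.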

  4. Extending the maximum operation time of the MNSR reactor.

    Science.gov (United States)

    Dawahra, S; Khattab, K; Saba, G

    2016-09-01

    An effective modification to extend the maximum operation time of the Miniature Neutron Source Reactor (MNSR) and thereby enhance the utilization of the reactor has been tested using the MCNP4C code. The modification consists of manually inserting into each of the reactor's inner irradiation tubes a chain of three connected polyethylene containers filled with water; the total height of the chain is 11.5 cm. Replacement of the existing cadmium absorber with a ¹⁰B absorber was needed as well. The rest of the core structure materials and dimensions remained unchanged. A 3-D neutronic model with the new modifications was developed to compare the neutronic parameters of the old and modified cores. The excess reactivities (ρex) of the old and modified cores were 3.954 and 6.241 mk, the maximum reactor operation times were 428 and 1025 min, and the safety reactivity factors were 1.654 and 1.595, respectively. Therefore, a 139% increase in the maximum reactor operation time was observed for the modified core. This increase enhances the utilization of the MNSR for long irradiations of unknown samples with the NAA technique and increases the amount of radioisotope production in the reactor. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Maximum time-dependent space-charge limited diode currents

    Energy Technology Data Exchange (ETDEWEB)

    Griswold, M. E. [Tri Alpha Energy, Inc., Rancho Santa Margarita, California 92688 (United States); Fisch, N. J. [Princeton Plasma Physics Laboratory, Princeton University, Princeton, New Jersey 08543 (United States)

    2016-01-15

    Recent papers claim that a one dimensional (1D) diode with a time-varying voltage drop can transmit current densities that exceed the Child-Langmuir (CL) limit on average, apparently contradicting a previous conjecture that there is a hard limit on the average current density across any 1D diode, as t → ∞, that is equal to the CL limit. However, these claims rest on a different definition of the CL limit, namely, a comparison between the time-averaged diode current and the adiabatic average of the expression for the stationary CL limit. If the current were considered as a function of the maximum applied voltage, rather than the average applied voltage, then the original conjecture would not have been refuted.
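For reference, the stationary Child-Langmuir limit at the center of this discussion is the textbook planar-diode expression; the sketch below evaluates it for an arbitrary example voltage and gap (not values from the paper):

```python
from math import sqrt

EPS0 = 8.8541878128e-12     # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19  # elementary charge, C
M_E = 9.1093837015e-31      # electron mass, kg

def child_langmuir(voltage, gap):
    """Stationary Child-Langmuir current density limit (A/m^2) for a planar
    diode with voltage drop `voltage` (V) across gap `gap` (m)."""
    return (4.0 * EPS0 / 9.0) * sqrt(2.0 * E_CHARGE / M_E) * voltage**1.5 / gap**2

# Example: 100 kV across a 1 cm gap.
print(f"{child_langmuir(1e5, 1e-2):.3e} A/m^2")  # ≈ 7.4e5 A/m^2
```

The claims discussed above hinge on whether the time-varying voltage in this formula is taken as the adiabatic average or the maximum applied voltage.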

  6. Linear Time Local Approximation Algorithm for Maximum Stable Marriage

    Directory of Open Access Journals (Sweden)

    Zoltán Király

    2013-08-01

    We consider a two-sided market under incomplete preference lists with ties, where the goal is to find a maximum-size stable matching. The problem is APX-hard, and a 3/2-approximation was given by McDermid [1]. This algorithm has a non-linear running time and, more importantly, needs global knowledge of all preference lists. We present a very natural, economically reasonable, local, linear-time algorithm with the same ratio, using some ideas of Paluch [2]. In this algorithm every person makes decisions using only their own list and some information requested from members of that list (as in the case of the famous algorithm of Gale and Shapley). Some consequences for the Hospitals/Residents problem are also discussed.
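For background, the Gale-Shapley proposal mechanism referred to above can be sketched as follows; this is the classical algorithm for complete strict lists, not the record's 3/2-approximation for ties and incomplete lists:

```python
def gale_shapley(men_prefs, women_prefs):
    """Classic Gale-Shapley stable matching (men-proposing).
    Each preference list is ordered from most to least preferred."""
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
    free = list(men_prefs)           # men not yet matched
    next_choice = {m: 0 for m in men_prefs}
    engaged = {}                     # woman -> man
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])  # w trades up; her old partner is free again
            engaged[w] = m
        else:
            free.append(m)           # w rejects m
    return {m: w for w, m in engaged.items()}

men = {"a": ["x", "y"], "b": ["x", "y"]}
women = {"x": ["b", "a"], "y": ["a", "b"]}
print(gale_shapley(men, women))  # a stable matching
```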

  7. The maximum specific hydrogen-producing activity of anaerobic mixed cultures: definition and determination.

    Science.gov (United States)

    Mu, Yang; Yang, Hou-Yun; Wang, Ya-Zhou; He, Chuan-Shu; Zhao, Quan-Bao; Wang, Yi; Yu, Han-Qing

    2014-06-10

    Fermentative hydrogen production from wastes has many advantages compared to various chemical methods. A methodology for characterizing the hydrogen-producing activity of anaerobic mixed cultures is essential for monitoring reactor operation in fermentative hydrogen production; however, such standardized methodologies are lacking. In the present study, a new index, the maximum specific hydrogen-producing activity (SHAm) of anaerobic mixed cultures, was proposed, and a reliable and simple method, named the SHAm test, was developed to determine it. Furthermore, the influences of various parameters on the determination of the SHAm value of anaerobic mixed cultures were evaluated. Additionally, the SHAm assay was tested with different types of substrates and bacterial inocula. Our results demonstrate that this novel SHAm assay is a rapid, accurate and simple methodology for determining the hydrogen-producing activity of anaerobic mixed cultures. Application of this approach is thus beneficial for establishing a stable anaerobic hydrogen-producing system.

  9. On the maximum-entropy/autoregressive modeling of time series

    Science.gov (United States)

    Chao, B. F.

    1984-01-01

    The autoregressive (AR) model of a random process is interpreted in the light of Prony's relation, which relates a complex conjugate pair of poles of the AR process in the z-plane (or z domain) on the one hand to the complex frequency of one complex harmonic function in the time domain on the other. Thus the AR model of a time series is one that models the time series as a linear combination of complex harmonic functions, which include pure sinusoids and real exponentials as special cases. An AR model is completely determined by its z-domain pole configuration. The maximum-entropy/autoregressive (ME/AR) spectrum, defined on the unit circle of the z-plane (or the frequency domain), is a convenient but ambiguous visual representation. It is asserted that the position and shape of a spectral peak are determined by the corresponding complex frequency, and the height of the spectral peak contains little information about the complex amplitude of the complex harmonic functions.
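A minimal numerical illustration of the pole/complex-frequency correspondence, assuming a hypothetical AR(2) fit via the Yule-Walker equations (not the article's own procedure): the z-plane pole angle of an AR model fitted to a sampled sinusoid recovers its frequency.

```python
import numpy as np

fs, f0 = 100.0, 5.0                      # sample rate and true frequency (Hz)
t = np.arange(500) / fs
x = np.sin(2 * np.pi * f0 * t)

# Biased autocorrelation estimates r[0..2] and the Yule-Walker system
r = np.array([np.dot(x[:len(x) - k], x[k:]) / len(x) for k in range(3)])
R = np.array([[r[0], r[1]], [r[1], r[0]]])
a = np.linalg.solve(R, r[1:])            # AR coefficients a1, a2

poles = np.roots([1.0, -a[0], -a[1]])    # poles of 1/(1 - a1 z^-1 - a2 z^-2)
freq = np.angle(poles[0]) * fs / (2 * np.pi)
print(round(abs(freq), 2))               # prints a value near the true 5 Hz
```

The small bias away from exactly 5 Hz comes from the finite-sample autocorrelation estimates; the pole angle, not the spectral peak height, carries the frequency information, as the abstract emphasizes.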

  10. 50 CFR 259.34 - Minimum and maximum deposits; maximum time to deposit.

    Science.gov (United States)

    2010-10-01

    ... B objective. A time longer than 10 years, either by original scheduling or by subsequent extension... OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE AID TO FISHERIES CAPITAL CONSTRUCTION FUND...) Minimum annual deposit. The minimum annual (based on each party's taxable year) deposit required by the...

  11. The maximum percentage of fly ash to replace part of ordinary Portland cement (OPC) in producing high strength concrete

    Science.gov (United States)

    Mallisa, Harun; Turuallo, Gidion

    2017-11-01

    This research investigates the maximum percentage of fly ash that can replace part of ordinary Portland cement (OPC) in producing high-strength concrete. Many researchers have found that the incorporation of industrial by-products such as fly ash in producing concrete can improve its properties in both the fresh and hardened states. The water-binder ratio used was 0.30. The sand used was medium sand, and the maximum size of the coarse aggregate was 20 mm. The cement was Type I Bosowa Cement, produced by PT Bosowa. The percentages of fly ash to total binder used in this research were 0, 10, 15, 20, 25 and 30%, while the superplasticizer used was Naphtha 511P. The results showed that replacing up to 25% of the total weight of binder with fly ash resulted in a compressive strength higher than the minimum one-day strength of high-strength concrete.

  12. Statistics of the first passage time of Brownian motion conditioned by maximum value or area

    International Nuclear Information System (INIS)

    Kearney, Michael J; Majumdar, Satya N

    2014-01-01

    We derive the moments of the first passage time for Brownian motion conditioned by either the maximum value or the area swept out by the motion. These quantities are the natural counterparts to the moments of the maximum value and area of Brownian excursions of fixed duration, which we also derive for completeness within the same mathematical framework. Various applications are indicated. (paper)

  13. Computing the Maximum Detour of a Plane Graph in Subquadratic Time

    DEFF Research Database (Denmark)

    Wulff-Nilsen, Christian

    Let G be a plane graph where each edge is a line segment. We consider the problem of computing the maximum detour of G, defined as the maximum over all pairs of distinct points p and q of G of the ratio between the distance between p and q in G and the distance |pq|. The fastest known algorithm for this problem has O(n^2) running time. We show how to obtain O(n^{3/2}*(log n)^3) expected running time. We also show that if G has bounded treewidth, its maximum detour can be computed in O(n*(log n)^3) expected time.

  14. Regular transport dynamics produce chaotic travel times.

    Science.gov (United States)

    Villalobos, Jorge; Muñoz, Víctor; Rogan, José; Zarama, Roberto; Johnson, Neil F; Toledo, Benjamín; Valdivia, Juan Alejandro

    2014-06-01

    In the hope of making passenger travel times shorter and more reliable, many cities are introducing dedicated bus lanes (e.g., Bogota, London, Miami). Here we show that chaotic travel times are actually a natural consequence of individual bus function, and hence of public transport systems more generally, i.e., chaotic dynamics emerge even when the route is empty and straight, stops and lights are equidistant and regular, and loading times are negligible. More generally, our findings provide a novel example of chaotic dynamics emerging from a single object following Newton's laws of motion in a regularized one-dimensional system.

  15. Linearized semiclassical initial value time correlation functions with maximum entropy analytic continuation.

    Science.gov (United States)

    Liu, Jian; Miller, William H

    2008-09-28

    The maximum entropy analytic continuation (MEAC) method is used to extend the range of accuracy of the linearized semiclassical initial value representation (LSC-IVR)/classical Wigner approximation for real time correlation functions. LSC-IVR provides a very effective "prior" for the MEAC procedure since it is very good for short times, exact for all time and temperature for harmonic potentials (even for correlation functions of nonlinear operators), and becomes exact in the classical high temperature limit. This combined MEAC+LSC/IVR approach is applied here to two highly nonlinear dynamical systems, a pure quartic potential in one dimension and liquid para-hydrogen at two thermal state points (25 and 14 K under nearly zero external pressure). The former example shows the MEAC procedure to be a very significant enhancement of the LSC-IVR for correlation functions of both linear and nonlinear operators, and especially at low temperature where semiclassical approximations are least accurate. For liquid para-hydrogen, the LSC-IVR is seen already to be excellent at T=25 K, but the MEAC procedure produces a significant correction at the lower temperature (T=14 K). Comparisons are also made as to how the MEAC procedure is able to provide corrections for other trajectory-based dynamical approximations when used as priors.

  16. 49 CFR 398.6 - Hours of service of drivers; maximum driving time.

    Science.gov (United States)

    2010-10-01

    ... REGULATIONS TRANSPORTATION OF MIGRANT WORKERS § 398.6 Hours of service of drivers; maximum driving time. No person shall drive nor shall any motor carrier permit or require a driver employed or used by it to drive...

  17. Trading Time with Space - Development of subduction zone parameter database for a maximum magnitude correlation assessment

    Science.gov (United States)

    Schaefer, Andreas; Wenzel, Friedemann

    2017-04-01

    technically trades time with space, considering subduction zones where we have likely not observed the maximum possible event yet. However, by identifying sources of the same class, the not-yet observed temporal behavior can be replaced by spatial similarity among different subduction zones. This database aims to enhance the research and understanding of subduction zones and to quantify their potential in producing mega earthquakes considering potential strong motion impact on nearby cities and their tsunami potential.

  18. Local Times of Galactic Cosmic Ray Intensity Maximum and Minimum in the Diurnal Variation

    Directory of Open Access Journals (Sweden)

    Su Yeon Oh

    2006-06-01

    The diurnal variation of galactic cosmic ray (GCR) flux intensity observed by ground neutron monitors (NM) shows a sinusoidal pattern with an amplitude of 1–2% of the daily mean. We carried out a statistical study of tendencies in the local times of the GCR intensity daily maximum and minimum. To test the influences of solar activity and location (cut-off rigidity) on the distribution of the local times of maximum and minimum GCR intensity, we examined the data of 1996 (solar minimum) and 2000 (solar maximum) at the low-latitude Haleakala (latitude: 20.72°N, cut-off rigidity: 12.91 GV) and the high-latitude Oulu (latitude: 65.05°N, cut-off rigidity: 0.81 GV) NM stations. The most frequent local times of the GCR intensity daily maximum and minimum occur about 2–3 hours later in the solar activity maximum year 2000 than in the solar activity minimum year 1996. The Oulu NM station, whose cut-off rigidity is smaller, has its most frequent local times of GCR intensity maximum and minimum 2–3 hours later than those of the Haleakala station. This feature is more evident at solar maximum. The phase of the daily variation in GCR is dependent upon the interplanetary magnetic field, which varies with solar activity, and the cut-off rigidity, which varies with geographic latitude.

  19. A maximum principle for time dependent transport in systems with voids

    International Nuclear Information System (INIS)

    Schofield, S.L.; Ackroyd, R.T.

    1996-01-01

    A maximum principle is developed for the first-order time dependent Boltzmann equation. The maximum principle is a generalization of Schofield's κ(θ) principle for the first-order steady state Boltzmann equation, and provides a treatment of time dependent transport in systems with void regions. The formulation comprises a direct least-squares minimization allied with a suitable choice of bilinear functional, and gives rise to a maximum principle whose functional is free of terms that have previously led to difficulties in treating void regions. (Author)

  20. Stochastic behavior of a cold standby system with maximum repair time

    Directory of Open Access Journals (Sweden)

    Ashish Kumar

    2015-09-01

    The main aim of the present paper is to analyze the stochastic behavior of a cold standby system with the concepts of preventive maintenance, priority and maximum repair time. For this purpose, a stochastic model is developed in which initially one unit is operative and the other is kept as a cold standby. A single server visits the system immediately as and when required. The server takes the unit under preventive maintenance after a maximum operation time in normal mode if a standby unit is available for operation. If the repair of the failed unit is not possible within a maximum repair time, the failed unit is replaced by a new one. The failure time, maximum operation time and maximum repair time of the unit are exponentially distributed, while the repair and maintenance time distributions are arbitrary. All random variables are statistically independent and repairs are perfect. Various measures of system effectiveness are obtained using semi-Markov processes and the regenerative point technique (RPT). To highlight the importance of the study, numerical results are also obtained for MTSF, availability and the profit function.

  1. Maximum Likelihood Blind Channel Estimation for Space-Time Coding Systems

    Directory of Open Access Journals (Sweden)

    Hakan A. Çırpan

    2002-05-01

    Sophisticated signal processing techniques have to be developed for capacity enhancement of future wireless communication systems. In recent years, space-time coding has been proposed to provide significant capacity gains over traditional communication systems in fading wireless channels. Space-time codes are obtained by combining channel coding, modulation, transmit diversity, and optional receive diversity in order to provide diversity at the receiver and coding gain without sacrificing bandwidth. In this paper, we consider the problem of blind estimation of space-time coded signals along with the channel parameters. Both conditional and unconditional maximum likelihood approaches are developed and iterative solutions are proposed. The conditional maximum likelihood algorithm is based on iterative least squares with projection, whereas the unconditional maximum likelihood approach is developed by means of finite-state Markov process modelling. The performance analysis issues of the proposed methods are studied. Finally, some simulation results are presented.

  2. Prime Time Power: Women Producers, Writers and Directors on TV.

    Science.gov (United States)

    Steenland, Sally

    This report analyzes the number of women working in the following six decision making jobs in prime time television: (1) executive producer; (2) supervising producer; (3) producer; (4) co-producer; (5) writer; and (6) director. The women who hold these positions are able to influence the portrayal of women on television as well as to improve the…

  3. A polynomial time algorithm for solving the maximum flow problem in directed networks

    International Nuclear Information System (INIS)

    Tlas, M.

    2015-01-01

    An efficient polynomial-time algorithm for solving maximum flow problems is proposed in this paper. The algorithm is based on the binary representation of capacities; it solves the maximum flow problem as a sequence of O(m) shortest path problems on residual networks with n nodes and m arcs. It runs in O(m^2 r) time, where r is the smallest integer greater than or equal to log B, and B is the largest arc capacity of the network. A numerical example is illustrated using the proposed algorithm. (author)
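For contrast with the capacity-scaling scheme described above, a plain shortest-augmenting-path (Edmonds-Karp) maximum flow, the standard polynomial-time baseline, can be sketched as:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp maximum flow on an adjacency-matrix capacity graph.
    (A simple baseline; the record's algorithm instead scales over the
    binary representation of the capacities.)"""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for a shortest augmenting path in the residual network
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return total
        # Find the bottleneck along the path, then augment
        v, bottleneck = t, float("inf")
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v] - flow[u][v])
            v = u
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck

cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 3],
       [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))  # 5
```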

  4. Real time estimation of photovoltaic modules characteristics and its application to maximum power point operation

    Energy Technology Data Exchange (ETDEWEB)

    Garrigos, Ausias; Blanes, Jose M.; Carrasco, Jose A. [Area de Tecnologia Electronica, Universidad Miguel Hernandez de Elche, Avda. de la Universidad s/n, 03202 Elche, Alicante (Spain); Ejea, Juan B. [Departamento de Ingenieria Electronica, Universidad de Valencia, Avda. Dr Moliner 50, 46100 Valencia, Valencia (Spain)

    2007-05-15

    In this paper, an approximate curve-fitting method for photovoltaic modules is presented. The operation is based on solving a simple solar cell electrical model with a microcontroller in real time. Only four voltage and current coordinates are needed to obtain the solar module parameters and set its operation at maximum power under any conditions of illumination and temperature. Despite its simplicity, this method is suitable for low-cost real-time applications, such as the control-loop reference generator in photovoltaic maximum power point circuits. The theory that supports the estimator, together with simulations and experimental results, is presented. (author)
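A rough sketch of the same idea under strong simplifying assumptions: a single-diode model with series and shunt resistance neglected, invented module values, and a brute-force scan for the maximum power point (the paper's four-coordinate estimator is more elaborate):

```python
import math

ISC = 5.0    # short-circuit current, A (illustrative)
VOC = 21.6   # open-circuit voltage, V (illustrative)
N_VT = 36 * 0.0257 * 1.3  # cells x thermal voltage x ideality factor

# Dark current chosen so that I(VOC) = 0 in this simplified model
I0 = ISC / (math.exp(VOC / N_VT) - 1.0)

def current(v):
    """I(V) = Isc - I0*(exp(V/(n*Vt)) - 1), ignoring Rs and Rsh."""
    return ISC - I0 * (math.exp(v / N_VT) - 1.0)

# Scan the I-V curve for the maximum power point (MPP)
vs = [i * VOC / 1000 for i in range(1001)]
vmpp, pmax = max(((v, v * current(v)) for v in vs), key=lambda t: t[1])
print(f"MPP near {vmpp:.1f} V, {pmax:.1f} W")
```

In a real tracker the scan would be replaced by the fitted-parameter solution, with the MPP voltage used as the control-loop reference the abstract mentions.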

  5. A theory of timing in scintillation counters based on maximum likelihood estimation

    International Nuclear Information System (INIS)

    Tomitani, Takehiro

    1982-01-01

    A theory of timing in scintillation counters based on the maximum likelihood estimation is presented. An optimum filter that minimizes the variance of timing is described. A simple formula to estimate the variance of timing is presented as a function of photoelectron number, scintillation decay constant and the single electron transit time spread in the photomultiplier. The present method was compared with the theory by E. Gatti and V. Svelto. The proposed method was applied to two simple models and rough estimations of potential time resolution of several scintillators are given. The proposed method is applicable to the timing in Cerenkov counters and semiconductor detectors as well. (author)

  6. Space-Time Chip Equalization for Maximum Diversity Space-Time Block Coded DS-CDMA Downlink Transmission

    Directory of Open Access Journals (Sweden)

    Petré Frederik

    2004-01-01

    In the downlink of DS-CDMA, frequency-selectivity destroys the orthogonality of the user signals and introduces multiuser interference (MUI). Space-time chip equalization is an efficient tool to restore the orthogonality of the user signals and suppress the MUI. Furthermore, multiple-input multiple-output (MIMO) communication techniques can result in a significant increase in capacity. This paper focuses on space-time block coding (STBC) techniques, and aims at combining STBC techniques with the original single-antenna DS-CDMA downlink scheme. This results in the so-called space-time block coded DS-CDMA downlink schemes, many of which have been presented in the past. We focus on a new scheme that enables both the maximum multiantenna diversity and the maximum multipath diversity. Although this maximum diversity can only be collected by maximum likelihood (ML) detection, we pursue suboptimal detection by means of space-time chip equalization, which lowers the computational complexity significantly. To design the space-time chip equalizers, we also propose efficient pilot-based methods. Simulation results show improved performance over the space-time RAKE receiver for the space-time block coded DS-CDMA downlink schemes that have been proposed for the UMTS and IS-2000 W-CDMA standards.

  7. The Maximum Entropy Method for Optical Spectrum Analysis of Real-Time TDDFT

    International Nuclear Information System (INIS)

    Toogoshi, M; Kano, S S; Zempo, Y

    2015-01-01

    The maximum entropy method (MEM) is one of the key techniques for spectral analysis. Its major feature is that the low-frequency part of a spectrum can be described from short time-series data. We therefore applied the MEM to analyse spectra from the time-dependent dipole moment obtained from real-time time-dependent density functional theory (TDDFT) calculations, which are intensively studied for computing optical properties. In the MEM analysis, however, the maximum lag of the autocorrelation is restricted by the total number of time-series data. As an improved MEM analysis, we proposed using a concatenated data set made by repeating the raw data several times. We applied this technique to the spectral analysis of the TDDFT dipole moment of ethylene and of oligo-fluorene with n = 8. As a result, a higher resolution can be obtained, closer to that of a Fourier transform of actually time-evolved data with the same total number of time steps. The efficiency and the characteristic features of this technique are presented in this paper. (paper)

  8. Short-time maximum entropy method analysis of molecular dynamics simulation: Unimolecular decomposition of formic acid

    Science.gov (United States)

    Takahashi, Osamu; Nomura, Tetsuo; Tabayashi, Kiyohiko; Yamasaki, Katsuyoshi

    2008-07-01

    We performed spectral analysis using the maximum entropy method instead of the traditional Fourier transform technique to investigate short-time behavior in molecular systems, such as energy transfer between vibrational modes and chemical reactions. This procedure was applied to direct ab initio molecular dynamics calculations of the decomposition of formic acid. More reactive trajectories for dehydration than for decarboxylation were obtained for Z-formic acid, consistent with the predictions of previous theoretical and experimental studies. Short-time maximum entropy method analyses were performed for typical reactive and non-reactive trajectories. Spectrograms of a reactive trajectory were obtained; these clearly showed the reactant, transient, and product regions, especially for the dehydration path.

  9. Maximum-likelihood methods for array processing based on time-frequency distributions

    Science.gov (United States)

    Zhang, Yimin; Mu, Weifeng; Amin, Moeness G.

    1999-11-01

    This paper proposes a novel time-frequency maximum likelihood (t-f ML) method for direction-of-arrival (DOA) estimation for non-stationary signals, and compares this method with conventional maximum likelihood DOA estimation techniques. Time-frequency distributions localize the signal power in the time-frequency domain, and as such enhance the effective SNR, leading to improved DOA estimation. The localization of signals with different t-f signatures permits the division of the time-frequency domain into smaller regions, each containing fewer signals than those incident on the array. The reduction of the number of signals within different time-frequency regions not only reduces the required number of sensors, but also decreases the computational load in multi-dimensional optimizations. Compared to the recently proposed time-frequency MUSIC (t-f MUSIC), the proposed t-f ML method can be applied in coherent environments, without the need to perform any type of preprocessing that is subject to both array geometry and array aperture.

  10. Time at which the maximum of a random acceleration process is reached

    International Nuclear Information System (INIS)

    Majumdar, Satya N; Rosso, Alberto; Zoia, Andrea

    2010-01-01

    We study the random acceleration model, which is perhaps one of the simplest, yet nontrivial, non-Markov stochastic processes, and is key to many applications. For this non-Markov process, we present exact analytical results for the probability density p(t_m|T) of the time t_m at which the process reaches its maximum, within a fixed time interval [0, T]. We study two different boundary conditions, which correspond to the process representing respectively (i) the integral of a Brownian bridge and (ii) the integral of a free Brownian motion. Our analytical results are also verified by numerical simulations.
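A small Monte Carlo sketch of case (ii), the integral of free Brownian motion: simulate discretized paths and record the time at which each attains its maximum (step counts and sample sizes below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_steps, T = 5000, 400, 1.0
dt = T / n_steps
dv = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
v = np.cumsum(dv, axis=1)              # velocity: a Brownian motion
x = np.cumsum(v, axis=1) * dt          # position: the random acceleration process
t_m = (np.argmax(x, axis=1) + 1) * dt  # time of the maximum on each path
print(f"mean t_m/T = {t_m.mean() / T:.2f}")
```

A histogram of t_m then gives a numerical estimate of p(t_m|T) to compare against the exact density.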

  11. STATIONARITY OF ANNUAL MAXIMUM DAILY STREAMFLOW TIME SERIES IN SOUTH-EAST BRAZILIAN RIVERS

    Directory of Open Access Journals (Sweden)

    Jorge Machado Damázio

    2015-08-01

    DOI: 10.12957/cadest.2014.18302. The paper presents a statistical analysis of annual maximum daily streamflows between 1931 and 2013 in South-East Brazil, focused on detecting and modelling non-stationarity. Flood protection for the large valleys in South-East Brazil is provided by multiple-purpose reservoir systems built during the 20th century, whose design and operation plans assumed stationarity of the historical flood time series. Land cover changes and the rapidly increasing level of atmospheric greenhouse gases over the last century may be affecting flood regimes in these valleys, so nonstationary modelling may be needed to re-assess dam safety and flood control operation rules in the existing reservoir systems. Six annual maximum daily streamflow time series are analysed. The time series are plotted together with fitted smooth loess functions, and non-parametric statistical tests are performed to check the significance of the apparent trends shown by the plots. Non-stationarity is modelled by fitting univariate extreme value distribution functions whose location parameter varies linearly with time. Stationary and non-stationary modelling are compared with the likelihood ratio statistic. In four of the six analysed time series, non-stationary modelling outperformed stationary modelling. Keywords: Stationarity; Extreme Value Distributions; Flood Frequency Analysis; Maximum Likelihood Method.
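The non-parametric trend check mentioned above can be illustrated with a Mann-Kendall test on synthetic annual maxima (the trend, Gumbel parameters, and record length below are invented, not the paper's data):

```python
import numpy as np

def mann_kendall_z(x):
    """Mann-Kendall trend test statistic (normal approximation, no ties)."""
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:                       # continuity correction
        return (s - 1) / np.sqrt(var_s)
    if s < 0:
        return (s + 1) / np.sqrt(var_s)
    return 0.0

rng = np.random.default_rng(3)
years = np.arange(60)
# Gumbel noise around a linearly rising location: a nonstationary flood regime
maxima = rng.gumbel(loc=100 + 0.8 * years, scale=15)
z = mann_kendall_z(maxima)
print(z > 1.96)  # True here: the synthetic upward trend is strong
```

When such a test rejects stationarity, the next step in the paper's workflow is fitting an extreme value distribution with a time-varying location and comparing fits via the likelihood ratio.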

  12. Spectral density analysis of time correlation functions in lattice QCD using the maximum entropy method

    International Nuclear Information System (INIS)

    Fiebig, H. Rudolf

    2002-01-01

    We study various aspects of extracting spectral information from time correlation functions of lattice QCD by means of Bayesian inference with an entropic prior, the maximum entropy method (MEM). Correlator functions of a heavy-light meson-meson system serve as a repository for lattice data with diverse statistical quality. Attention is given to spectral mass density functions, inferred from the data, and their dependence on the parameters of the MEM. We propose to employ simulated annealing, or cooling, to solve the Bayesian inference problem, and discuss the practical issues of the approach.

  13. Determination of maximum isolation times in case of internal flooding due to pipe break

    International Nuclear Information System (INIS)

    Varas, M. I.; Orteu, E.; Laserna, J. A.

    2014-01-01

    This paper describes the process followed in preparing the flooding manual of Cofrentes NPP to identify the maximum time available to the plant to isolate a moderate- or high-energy pipe break before it affects the safety-related (1E) equipment involved in safe reactor shutdown or in spent fuel pool cooling, and to determine the recommended isolation mode from the point of view of the location of the break, the location of the 1E equipment, and human factors. (Author)

  14. The Research of Car-Following Model Based on Real-Time Maximum Deceleration

    Directory of Open Access Journals (Sweden)

    Longhai Yang

    2015-01-01

    Full Text Available This paper is concerned with the effect of real-time maximum deceleration in car-following. The real-time maximum deceleration is estimated from vehicle dynamics. It is known that the intelligent driver model (IDM) can control adaptive cruise control (ACC) well; its disadvantages at high, constant speed are analyzed here. A new car-following model for ACC is established accordingly by modifying the desired minimum gap and the structure of the IDM. We simulated the new car-following model and the IDM under two different road conditions. In the first, the vehicles drive on a single road surface, taking dry asphalt as the example in this paper. In the second, vehicles drive onto a different surface, and we analyzed the situation in which vehicles drive from a dry asphalt road onto an icy one. The simulations show that the new car-following model not only ensures driving safety and comfort but also keeps the vehicle driving steadily with a smaller time headway than the IDM.
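The baseline IDM that the paper modifies computes acceleration from the ego speed, the approach rate and the gap to the leader. A standard-form sketch (the parameter values are typical textbook choices, not the paper's calibration, and the paper's real-time deceleration estimate is not reproduced here):

```python
import math

def idm_acceleration(v, delta_v, gap, v0=33.3, T=1.5, a=1.0, b=2.0, s0=2.0, delta=4):
    """Standard intelligent driver model (IDM) acceleration [m/s^2].
    v: ego speed [m/s]; delta_v: approach rate v - v_lead [m/s]; gap: bumper-to-bumper gap [m].
    v0: desired speed; T: desired time headway; a: max acceleration;
    b: comfortable deceleration; s0: minimum standstill gap."""
    s_star = s0 + max(0.0, v * T + v * delta_v / (2.0 * math.sqrt(a * b)))
    return a * (1.0 - (v / v0) ** delta - (s_star / gap) ** 2)
```

In the paper's modification, the comfortable deceleration term is replaced by a real-time maximum deceleration that depends on the road surface, which is what lets the model adapt when driving from dry asphalt onto ice.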

  15. Optimal protocol for maximum work extraction in a feedback process with a time-varying potential

    Science.gov (United States)

    Kwon, Chulan

    2017-12-01

    The nonequilibrium nature of information thermodynamics is characterized by the inequality or non-negativity of the total entropy change of the system, memory, and reservoir. Mutual information change plays a crucial role in the inequality, in particular if work is extracted and the paradox of Maxwell's demon is raised. We consider the Brownian information engine where the protocol set of the harmonic potential is initially chosen by the measurement and varies in time. We confirm the inequality of the total entropy change by calculating, in detail, the entropic terms including the mutual information change. We rigorously find the optimal values of the time-dependent protocol for maximum extraction of work both for the finite-time and the quasi-static process.

  16. Time Reversal Migration for Passive Sources Using a Maximum Variance Imaging Condition

    KAUST Repository

    Wang, H.; Alkhalifah, Tariq Ali

    2017-01-01

    The conventional time-reversal imaging approach for micro-seismic or passive source location is based on focusing the back-propagated wavefields from each recorded trace in a source image. It suffers from strong background noise and limited acquisition aperture, which may create unexpected artifacts and cause errors in the source location. To overcome this problem, we propose a new imaging condition for microseismic imaging based on comparing the amplitude variance within certain windows, and use it to suppress the artifacts as well as find the correct location of passive sources. Instead of simply searching for the maximum-energy point in the back-propagated wavefield, we calculate the amplitude variance over a window moving along both the space and time axes to create a highly resolved passive-event image. The variance operation has negligible cost compared with the forward/backward modeling operations, so the maximum variance imaging condition is efficient and effective. We test our approach numerically on a simple three-layer model and on a piece of the Marmousi model as well, both of which show reasonably good results.
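The imaging condition itself reduces to a local variance over a moving space-time window of the back-propagated wavefield. A schematic sketch (window sizes and the 2-D layout are illustrative assumptions, not tied to the KAUST implementation):

```python
import numpy as np

def max_variance_image(wavefield, win_t=3, win_x=3):
    """wavefield: back-propagated wavefield, shape (nt, nx).
    Returns an image whose value at each point is the amplitude variance
    over a local space-time window; the passive source is then picked
    at the image maximum, per the maximum-variance idea."""
    nt, nx = wavefield.shape
    image = np.zeros((nt, nx))
    for it in range(nt):
        for ix in range(nx):
            t0, t1 = max(0, it - win_t), min(nt, it + win_t + 1)
            x0, x1 = max(0, ix - win_x), min(nx, ix + win_x + 1)
            image[it, ix] = wavefield[t0:t1, x0:x1].var()
    return image

# Toy check: a single localized spike yields the largest local variance nearby.
wf = np.zeros((40, 40))
wf[20, 27] = 1.0
img = max_variance_image(wf)
```

Because the variance is computed per window rather than per sample, diffuse background noise contributes little, which is how the condition suppresses aperture and noise artifacts.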

  18. Producing complex spoken numerals for time and space

    NARCIS (Netherlands)

    Meeuwissen, M.H.W.

    2004-01-01

    This thesis addressed the spoken production of complex numerals for time and space. The production of complex numerical expressions like those involved in telling time (e.g., 'quarter to four') or producing house numbers (e.g., 'two hundred forty-five') has been almost completely ignored. Yet, adult

  19. Polar coordinated fuzzy controller based real-time maximum-power point control of photovoltaic system

    Energy Technology Data Exchange (ETDEWEB)

    Syafaruddin; Hiyama, Takashi [Department of Computer Science and Electrical Engineering of Kumamoto University, 2-39-1 Kurokami, Kumamoto 860-8555 (Japan); Karatepe, Engin [Department of Electrical and Electronics Engineering of Ege University, 35100 Bornova-Izmir (Turkey)

    2009-12-15

    It is crucial to improve photovoltaic (PV) system efficiency and to develop reliable PV generation control systems. There are two ways to increase the efficiency of a PV power generation system. The first is to develop materials offering high conversion efficiency at low cost. The second is to operate PV systems optimally. However, a PV system can be operated optimally only at a specific output voltage, and its output power fluctuates under intermittent weather conditions. Moreover, it is very difficult to test the performance of a maximum-power point tracking (MPPT) controller under identical weather conditions during development, and field testing is costly and time-consuming. This paper presents a novel real-time simulation technique for PV generation systems using a dSPACE real-time interface system. The proposed system includes an Artificial Neural Network (ANN) and a fuzzy logic controller scheme using polar information; this type of fuzzy logic rule is implemented for the first time to operate the PV module at its optimum operating point. The ANN is utilized to determine the optimum operating voltage for monocrystalline silicon, thin-film cadmium telluride and triple-junction amorphous silicon solar cells. Verification of the availability and stability of the proposed system through the real-time simulator shows that it responds accurately for different scenarios and different solar cell technologies. (author)

  20. Age-Related Differences of Maximum Phonation Time in Patients after Cardiac Surgery

    Directory of Open Access Journals (Sweden)

    Kazuhiro P. Izawa

    2017-12-01

    Full Text Available Background and aims: Maximum phonation time (MPT), which is related to respiratory function, is widely used to evaluate maximum vocal capabilities, because its use is non-invasive, quick, and inexpensive. We aimed to examine differences in MPT by age, following recovery phase II cardiac rehabilitation (CR). Methods: This longitudinal observational study assessed 50 consecutive cardiac patients, who were divided into a middle-aged group (<65 years, n = 29) and an older-aged group (≥65 years, n = 21). MPTs were measured at 1 and 3 months after cardiac surgery and compared. Results: The duration of MPT increased more significantly from month 1 to month 3 in the middle-aged group (19.2 ± 7.8 to 27.1 ± 11.6 s, p < 0.001) than in the older-aged group (12.6 ± 3.5 to 17.9 ± 6.0 s, p < 0.001). However, no statistically significant difference occurred in the % change of MPT from 1 month to 3 months after cardiac surgery between the middle-aged and older-aged groups (41.1% vs. 42.1%, respectively). In addition, there were no significant interactions of MPT in the two groups for 1 versus 3 months (F = 1.65, p = 0.20). Conclusion: Phase II CR improved MPT for all cardiac surgery patients.

  2. Maximum Likelihood Time-of-Arrival Estimation of Optical Pulses via Photon-Counting Photodetectors

    Science.gov (United States)

    Erkmen, Baris I.; Moision, Bruce E.

    2010-01-01

    Many optical imaging, ranging, and communications systems rely on the estimation of the arrival time of an optical pulse. Recently, such systems have been increasingly employing photon-counting photodetector technology, which changes the statistics of the observed photocurrent. This requires time-of-arrival estimators to be developed and their performances characterized. The statistics of the output of an ideal photodetector, which are well modeled as a Poisson point process, were considered. An analytical model was developed for the mean-square error of the maximum likelihood (ML) estimator, demonstrating two phenomena that cause deviations from the minimum achievable error at low signal power. An approximation was derived to the threshold at which the ML estimator essentially fails to provide better than a random guess of the pulse arrival time. Comparing the analytic model performance predictions to those obtained via simulations, it was verified that the model accurately predicts the ML performance over all regimes considered. There is little prior art that attempts to understand the fundamental limitations to time-of-arrival estimation from Poisson statistics. This work establishes both a simple mathematical description of the error behavior, and the associated physical processes that yield this behavior. Previous work on mean-square error characterization for ML estimators has predominantly focused on additive Gaussian noise. This work demonstrates that the discrete nature of the Poisson noise process leads to a distinctly different error behavior.
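For an ideal photon-counting detector, the ML estimator maximizes the Poisson log-likelihood of the recorded photon arrival times over the candidate delay. A toy sketch with an assumed Gaussian pulse shape (all parameters, the thinning simulation, and the grid search are our illustrations, not the paper's system model):

```python
import numpy as np

rng = np.random.default_rng(2)

def pulse_rate(t, tau, amp=50.0, width=0.1, dark=1.0):
    """Photon arrival rate: dark-count background plus a Gaussian pulse delayed by tau."""
    return dark + amp * np.exp(-0.5 * ((t - tau) / width) ** 2)

# Simulate photon arrivals on [0, 1] by thinning a homogeneous Poisson process.
true_tau, rate_max = 0.6, 51.0
t_cand = rng.uniform(0.0, 1.0, rng.poisson(rate_max))
arrivals = t_cand[rng.uniform(0.0, rate_max, t_cand.size) < pulse_rate(t_cand, true_tau)]

# ML estimate: for delays well inside the interval the integral term of the
# Poisson likelihood is essentially constant in tau, so it suffices to maximize
# the sum of log-rates at the observed arrival times over a delay grid.
taus = np.linspace(0.2, 0.8, 601)
loglik = [np.log(pulse_rate(arrivals, tau)).sum() for tau in taus]
tau_hat = taus[int(np.argmax(loglik))]
```

At low signal power the log-likelihood surface develops spurious local maxima driven by dark counts, which is the threshold behaviour the abstract analyzes.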

  3. Relative timing of last glacial maximum and late-glacial events in the central tropical Andes

    Science.gov (United States)

    Bromley, Gordon R. M.; Schaefer, Joerg M.; Winckler, Gisela; Hall, Brenda L.; Todd, Claire E.; Rademaker, Kurt M.

    2009-11-01

    Whether or not tropical climate fluctuated in synchrony with global events during the Late Pleistocene is a key problem in climate research. However, the timing of past climate changes in the tropics remains controversial, with a number of recent studies reporting that tropical ice age climate is out of phase with global events. Here, we present geomorphic evidence and an in-situ cosmogenic 3He surface-exposure chronology from Nevado Coropuna, southern Peru, showing that glaciers underwent at least two significant advances during the Late Pleistocene prior to Holocene warming. Comparison of our glacial-geomorphic map at Nevado Coropuna to mid-latitude reconstructions yields a striking similarity between Last Glacial Maximum (LGM) and Late-Glacial sequences in tropical and temperate regions. Exposure ages constraining the maximum and end of the older advance at Nevado Coropuna range between 24.5 and 25.3 ka, and between 16.7 and 21.1 ka, respectively, depending on the cosmogenic production rate scaling model used. Similarly, the mean age of the younger event ranges from 10 to 13 ka. This implies that (1) the LGM and the onset of deglaciation in southern Peru occurred no earlier than at higher latitudes and (2) that a significant Late-Glacial event occurred, most likely prior to the Holocene, coherent with the glacial record from mid and high latitudes. The time elapsed between the end of the LGM and the Late-Glacial event at Nevado Coropuna is independent of scaling model and matches the period between the LGM termination and Late-Glacial reversal in classic mid-latitude records, suggesting that these events in both tropical and temperate regions were in phase.

  4. Educating Farmers' Market Consumers on Best Practices for Retaining Maximum Nutrient and Phytonutrient Levels in Local Produce

    Science.gov (United States)

    Ralston, Robin A.; Orr, Morgan; Goard, Linnette M.; Taylor, Christopher A.; Remley, Dan

    2016-01-01

    Few farmers' market consumers are aware of how to retain optimal nutritional quality of produce following purchase. Our objective was to develop and evaluate educational materials intended to inform market consumers about best practices for storing, preserving, and consuming local produce to maximize nutrients and phytonutrients. Printed…

  5. Real-time implementation of optimized maximum noise fraction transform for feature extraction of hyperspectral images

    Science.gov (United States)

    Wu, Yuanfeng; Gao, Lianru; Zhang, Bing; Zhao, Haina; Li, Jun

    2014-01-01

    We present a parallel implementation of the optimized maximum noise fraction (G-OMNF) transform algorithm for feature extraction of hyperspectral images on commodity graphics processing units (GPUs). The proposed approach explored the algorithm data-level concurrency and optimized the computing flow. We first defined a three-dimensional grid, in which each thread calculates a sub-block data to easily facilitate the spatial and spectral neighborhood data searches in noise estimation, which is one of the most important steps involved in OMNF. Then, we optimized the processing flow and computed the noise covariance matrix before computing the image covariance matrix to reduce the original hyperspectral image data transmission. These optimization strategies can greatly improve the computing efficiency and can be applied to other feature extraction algorithms. The proposed parallel feature extraction algorithm was implemented on an Nvidia Tesla GPU using the compute unified device architecture and basic linear algebra subroutines library. Through the experiments on several real hyperspectral images, our GPU parallel implementation provides a significant speedup of the algorithm compared with the CPU implementation, especially for highly data parallelizable and arithmetically intensive algorithm parts, such as noise estimation. In order to further evaluate the effectiveness of G-OMNF, we used two different applications: spectral unmixing and classification for evaluation. Considering the sensor scanning rate and the data acquisition time, the proposed parallel implementation met the on-board real-time feature extraction.
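The core OMNF steps — a noise covariance estimated from neighbouring pixels followed by a generalized eigen-decomposition against the image covariance — can be sketched on the CPU as a simplified stand-in for the GPU pipeline. The difference-based noise estimator below is one common choice for MNF-type transforms, not necessarily the paper's exact estimator:

```python
import numpy as np
from scipy.linalg import eigh

def mnf(cube):
    """Minimal maximum-noise-fraction-style transform for a hyperspectral cube
    of shape (rows, cols, bands). Noise covariance is estimated from
    horizontal neighbour differences; components are ordered by noise fraction."""
    r, c, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    X -= X.mean(axis=0)
    # Neighbour differences cancel smooth signal, leaving (approximately) noise.
    noise = (cube[:, 1:, :] - cube[:, :-1, :]).reshape(-1, b) / np.sqrt(2.0)
    cov_n = np.cov(noise, rowvar=False)
    cov_x = np.cov(X, rowvar=False)
    # Generalized eigenproblem cov_n v = w cov_x v: small w = high-SNR directions.
    vals, vecs = eigh(cov_n, cov_x)
    return X @ vecs, vals

rng = np.random.default_rng(3)
cube = rng.normal(size=(10, 10, 4))  # toy stand-in for a hyperspectral cube
comps, vals = mnf(cube)
```

The neighbour search in the noise step is exactly the part the paper parallelizes per-thread on the GPU, since each pixel's difference is independent of the others.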

  6. Increasing the maximum daily operation time of MNSR reactor by modifying its cooling system

    International Nuclear Information System (INIS)

    Khamis, I.; Hainoun, A.; Al Halbi, W.; Al Isa, S.

    2006-08-01

    thermal-hydraulic natural convection correlations have been formulated based on a thorough analysis and modeling of the MNSR reactor. The model considers detailed description of the thermal and hydraulic aspects of cooling in the core and vessel. In addition, determination of pressure drop was made through an elaborate balancing of the overall pressure drop in the core against the sum of all individual channel pressure drops employing an iterative scheme. Using this model, an accurate estimation of various timely core-averaged hydraulic parameters such as generated power, hydraulic diameters, flow cross area, ... etc. for each one of the ten-fuel circles in the core can be made. Furthermore, distribution of coolant and fuel temperatures, including maximum fuel temperature and its location in the core, can now be determined. Correlation among core-coolant average temperature, reactor power, and core-coolant inlet temperature, during both steady and transient cases, have been established and verified against experimental data. Simulating various operating condition of MNSR, good agreement is obtained for at different power levels. Various schemes of cooling have been investigated for the purpose of assessing potential benefits on the operational characteristics of the syrian MNSR reactor. A detailed thermal hydraulic model for the analysis of MNSR has been developed. The analysis shows that an auxiliary cooling system, for the reactor vessel or installed in the pool which surrounds the lower section of the reactor vessel, will significantly offset the consumption of excess reactivity due to the negative reactivity temperature coefficient. Hence, the maximum operating time of the reactor is extended. The model considers detailed description of the thermal and hydraulic aspects of cooling the core and its surrounding vessel. Natural convection correlations have been formulated based on a thorough analysis and modeling of the MNSR reactor. The suggested 'micro model

  7. Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors

    Science.gov (United States)

    Langbein, John

    2017-08-01

    Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem, computational efficiency, using maximum likelihood estimation (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, with spectrum 1/f^α at frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013, doi:10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet it is only an approximate solution for power-law indices >1.0 since it requires the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
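The white-plus-power-law covariance that the MLE must invert can be built directly from the fractional-integration filter, which is the "adding noise processes via a data filter" idea in miniature. A sketch with illustrative amplitudes (this is a naive O(n³) evaluation for exposition, not the paper's efficient scheme):

```python
import numpy as np

def pl_filter(alpha, n):
    """Impulse response of the fractional-integration filter that turns white
    noise into 1/f^alpha power-law noise (Hosking-type recursion, d = alpha/2)."""
    h = np.ones(n)
    for k in range(1, n):
        h[k] = h[k - 1] * (k - 1 + alpha / 2.0) / k
    return h

def neg_log_likelihood(x, sigma_w, sigma_pl, alpha):
    """Gaussian negative log-likelihood of residuals x under white + power-law
    noise, via the full data covariance matrix (no Toeplitz restriction)."""
    n = x.size
    h = pl_filter(alpha, n)
    T = np.zeros((n, n))
    for i in range(n):
        T[i, : i + 1] = h[i::-1]           # lower-triangular Toeplitz filter matrix
    C = sigma_w ** 2 * np.eye(n) + sigma_pl ** 2 * (T @ T.T)
    sign, logdet = np.linalg.slogdet(C)
    return 0.5 * (logdet + x @ np.linalg.solve(C, x) + n * np.log(2 * np.pi))

x = np.random.default_rng(4).normal(size=50)   # white residuals for illustration
nll_good = neg_log_likelihood(x, 1.0, 0.1, 2.0)
nll_bad = neg_log_likelihood(x, 10.0, 0.1, 2.0)
```

Minimizing this negative log-likelihood over the noise amplitudes and index is the MLE step whose cost the paper reduces.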

  8. The timing of the maximum extent of the Rhone Glacier at Wangen a.d. Aare

    Energy Technology Data Exchange (ETDEWEB)

    Ivy-Ochs, S.; Schluechter, C. [Bern Univ. (Switzerland); Kubik, P.W. [Paul Scherrer Inst. (PSI), Villigen (Switzerland); Beer, J. [EAWAG, Duebendorf (Switzerland)

    1997-09-01

    Erratic blocks found in the region of Wangen a.d. Aare delineate the maximum position of the Solothurn lobe of the Rhone Glacier. 10Be and 26Al exposure ages of three of these blocks show that the glacier withdrew from its maximum position at or slightly before 20,000 ± 1800 years ago. (author) 1 fig., 5 refs.

  9. Computing the Maximum Detour of a Plane Graph in Subquadratic Time

    DEFF Research Database (Denmark)

    Wulff-Nilsen, Christian

    2008-01-01

    Let G be a plane graph where each edge is a line segment. We consider the problem of computing the maximum detour of G, defined as the maximum over all pairs of distinct points p and q of G of the ratio between the distance between p and q in G and the distance |pq|. The fastest known algorithm...
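For intuition, the detour restricted to vertex pairs (a lower bound on the true maximum, which ranges over all points of the segments) can be computed by brute force with Dijkstra from every vertex; the subquadratic algorithm of the paper improves on exactly this kind of all-pairs computation. A sketch, with a hypothetical point/edge representation:

```python
import heapq
import math
from collections import defaultdict

def dijkstra(adj, src):
    """Shortest path lengths from src in a weighted undirected graph."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, math.inf):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def vertex_detour(points, edges):
    """Maximum over vertex pairs of (graph distance) / (Euclidean distance),
    where each edge is weighted by the length of its line segment."""
    adj = defaultdict(list)
    for u, v in edges:
        w = math.dist(points[u], points[v])
        adj[u].append((v, w))
        adj[v].append((u, w))
    best = 1.0
    for s in points:
        dist = dijkstra(adj, s)
        for t, d in dist.items():
            if t != s:
                best = max(best, d / math.dist(points[s], points[t]))
    return best

# An L-shaped path: going 0 -> 1 -> 2 versus the straight line 0 -> 2.
points = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (1.0, 1.0)}
edges = [(0, 1), (1, 2)]
ratio = vertex_detour(points, edges)
```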

  10. Maximum leaf conductance driven by CO2 effects on stomatal size and density over geologic time.

    Science.gov (United States)

    Franks, Peter J; Beerling, David J

    2009-06-23

    Stomatal pores are microscopic structures on the epidermis of leaves formed by 2 specialized guard cells that control the exchange of water vapor and CO2 between plants and the atmosphere. Stomatal size (S) and density (D) determine maximum leaf diffusive (stomatal) conductance of CO2 (gc(max)) to sites of assimilation. Although large variations in D observed in the fossil record have been correlated with atmospheric CO2, the crucial significance of similarly large variations in S has been overlooked. Here, we use physical diffusion theory to explain why large changes in S necessarily accompanied the changes in D and atmospheric CO2 over the last 400 million years. In particular, we show that high densities of small stomata are the only way to attain the highest gc(max) values required to counter CO2 "starvation" at low atmospheric CO2 concentrations. This explains cycles of increasing D and decreasing S evident in the fossil history of stomata under the CO2-impoverished atmospheres of the Permo-Carboniferous and Cenozoic glaciations. The pattern was reversed under rising atmospheric CO2 regimes. Selection for small S was crucial for attaining high gc(max) under falling atmospheric CO2 and, therefore, may represent a mechanism linking CO2 and the increasing gas-exchange capacity of land plants over geologic time.
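The diffusion-theory relationship behind the argument expresses gc(max) in terms of density D, maximum pore area and pore depth. In the sketch below the conversions from stomatal size S to pore area and pore depth (and the gas constants) are illustrative assumptions, not the paper's values; only the form of the formula is the standard one:

```python
import math

def g_max(D, S, pore_frac=0.12, depth_factor=0.5, d=2.49e-5, v=2.24e-2):
    """Anatomical maximum stomatal conductance [mol m^-2 s^-1] from stomatal
    density D [m^-2] and stomatal size S [m^2], via
        g_max = (d/v) * D * a_max / (l + (pi/2) * sqrt(a_max/pi)),
    where a_max is maximum pore area, l is pore depth, d is gas diffusivity
    in air and v is the molar volume of air. pore_frac and depth_factor are
    hypothetical scalings of pore geometry with stomatal size."""
    a_max = pore_frac * S                 # pore area as a fraction of stomatal size
    l = depth_factor * math.sqrt(S)       # pore depth scales with stomatal dimension
    return (d / v) * D * a_max / (l + (math.pi / 2.0) * math.sqrt(a_max / math.pi))

g_small = g_max(D=1e8, S=1e-9)      # many small stomata
g_large = g_max(D=0.25e8, S=4e-9)   # same total stomatal area, fewer/larger stomata
```

Because pore depth and the end-correction term both scale with the linear dimension of the stoma, conserving total stomatal area while shrinking S and raising D increases g_max — which is the abstract's point that only high densities of small stomata reach the highest gc(max).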

  11. Bayesian Maximum Entropy space/time estimation of surface water chloride in Maryland using river distances.

    Science.gov (United States)

    Jat, Prahlad; Serre, Marc L

    2016-12-01

    Widespread contamination of surface water chloride is an emerging environmental concern. Consequently accurate and cost-effective methods are needed to estimate chloride along all river miles of potentially contaminated watersheds. Here we introduce a Bayesian Maximum Entropy (BME) space/time geostatistical estimation framework that uses river distances, and we compare it with Euclidean BME to estimate surface water chloride from 2005 to 2014 in the Gunpowder-Patapsco, Severn, and Patuxent subbasins in Maryland. River BME improves the cross-validation R2 by 23.67% over Euclidean BME, and river BME maps are significantly different from Euclidean BME maps, indicating that it is important to use river BME maps to assess water quality impairment. The river BME maps of chloride concentration show wide contamination throughout Baltimore and Columbia-Ellicott cities, the disappearance of a clean buffer separating these two large urban areas, and the emergence of multiple localized pockets of contamination in surrounding areas. The number of impaired river miles increased by 0.55% per year in 2005-2009 and by 1.23% per year in 2011-2014, corresponding to a marked acceleration of the rate of impairment. Our results support the need for control measures and increased monitoring of unassessed river miles.

  12. Fast Maximum-Likelihood Decoder for Quasi-Orthogonal Space-Time Block Code

    Directory of Open Access Journals (Sweden)

    Adel Ahmadi

    2015-01-01

    Full Text Available Motivated by the decompositions of sphere- and QR-based methods, in this paper we present an extremely fast maximum-likelihood (ML) detection approach for quasi-orthogonal space-time block codes (QOSTBC). The proposed algorithm, with a relatively simple design, exploits the structure of quadrature amplitude modulation (QAM) constellations to achieve its goal and can be extended to any arbitrary constellation. Our decoder utilizes a new decomposition technique for the ML metric which divides the metric into independent positive parts and a positive interference part. The search spaces of the symbols are substantially reduced by employing the independent parts and the statistics of the noise. Symbols within the search spaces are successively evaluated until the metric is minimized. Simulation results confirm that the proposed decoder is superior to many recently published state-of-the-art solutions in terms of complexity. More specifically, applying the new algorithm with 1024-QAM decreases the computational complexity compared with a state-of-the-art solution using 16-QAM.

  13. Novel Maximum-based Timing Acquisition for Spread-Spectrum Communications

    Energy Technology Data Exchange (ETDEWEB)

    Sibbett, Taylor; Moradi, Hussein; Farhang-Boroujeny, Behrouz

    2016-12-01

    This paper proposes and analyzes a new packet detection and timing acquisition method for spread spectrum systems. The proposed method provides an enhancement over the typical thresholding techniques that have been proposed for direct sequence spread spectrum (DS-SS). The effective implementation of thresholding methods typically require accurate knowledge of the received signal-to-noise ratio (SNR), which is particularly difficult to estimate in spread spectrum systems. Instead, we propose a method which utilizes a consistency metric of the location of maximum samples at the output of a filter matched to the spread spectrum waveform to achieve acquisition, and does not require knowledge of the received SNR. Through theoretical study, we show that the proposed method offers a low probability of missed detection over a large range of SNR with a corresponding probability of false alarm far lower than other methods. Computer simulations that corroborate our theoretical results are also presented. Although our work here has been motivated by our previous study of a filter bank multicarrier spread-spectrum (FB-MC-SS) system, the proposed method is applicable to DS-SS systems as well.
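The consistency idea can be illustrated by checking whether the argmax position within successive spreading periods of the matched-filter output stays put, instead of comparing the peak against an SNR-dependent threshold. Everything below (window length, tolerance, signal model) is an illustrative assumption, not the authors' design:

```python
import numpy as np

def detect(mf_out, period, n_blocks=8, tol=1):
    """Declare acquisition when the argmax position within n_blocks successive
    length-`period` blocks of matched-filter output is consistent to within
    `tol` samples. Returns (detected, coarse timing estimate)."""
    n = mf_out.size // period
    peaks = [int(np.argmax(np.abs(mf_out[i * period:(i + 1) * period])))
             for i in range(n)]
    for i in range(n - n_blocks + 1):
        window = peaks[i:i + n_blocks]
        if max(window) - min(window) <= tol:
            return True, i * period + window[0]   # block start + peak offset
    return False, None

rng = np.random.default_rng(5)
period = 64
mf = 0.1 * rng.normal(size=period * 20)   # matched-filter output: mostly noise
mf[37::period] += 5.0                     # one correlation peak per period
found, est = detect(mf, period)
```

Under noise alone the argmax position is roughly uniform over the period, so runs of consistent locations are rare; this is why the false-alarm rate stays low without any knowledge of the received SNR.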

  14. A Maximum Entropy-Based Chaotic Time-Variant Fragile Watermarking Scheme for Image Tampering Detection

    Directory of Open Access Journals (Sweden)

    Guo-Jheng Yang

    2013-08-01

    Full Text Available The fragile watermarking technique is used to protect intellectual property rights while also providing security and rigorous protection. In order to protect the copyright of creators, a watermark can be embedded in some representative text or totem. Because all of the media on the Internet are digital, protection has become a critical issue, and determining how to use digital watermarks to protect digital media is thus the topic of our research. This paper uses the Logistic map with parameter u = 4 to generate chaotic dynamic behavior with the maximum entropy 1. This approach increases the security and rigor of the protection. The main research target of information hiding is determining how to hide confidential data so that the naked eye cannot see the difference. Next, we introduce one method of information hiding. Generally speaking, if the image only goes through Arnold's cat map and the Logistic map, it seems to lack sufficient security. Therefore, our emphasis is on controlling Arnold's cat map and the initial value of the chaos system to undergo small changes and generate different chaos sequences. Thus, the current time is used not only to make encryption more stringent but also to enhance the security of the digital media.
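The chaotic generator referred to is the logistic map at u = 4. A sketch of the property the scheme relies on — extreme sensitivity to the initial value, so a slightly different seed (e.g. one derived from the current time) yields an entirely different sequence:

```python
def logistic_sequence(x0, n, u=4.0):
    """Iterate the logistic map x -> u*x*(1-x). With u = 4 the orbit is
    chaotic on [0, 1] and the map attains its maximum (unit) entropy."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(u * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two seeds differing by 1e-9 decorrelate after a few dozen iterations,
# which is what makes a time-perturbed initial value act like a fresh key.
a = logistic_sequence(0.123456789, 60)
b = logistic_sequence(0.123456790, 60)
```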

  15. Time-varying block codes for synchronisation errors: maximum a posteriori decoder and practical issues

    Directory of Open Access Journals (Sweden)

    Johann A. Briffa

    2014-06-01

    Full Text Available In this study, the authors consider time-varying block (TVB codes, which generalise a number of previous synchronisation error-correcting codes. They also consider various practical issues related to maximum a posteriori (MAP decoding of these codes. Specifically, they give an expression for the expected distribution of drift between transmitter and receiver because of synchronisation errors. They determine an appropriate choice for state space limits based on the drift probability distribution. In turn, they obtain an expression for the decoder complexity under given channel conditions in terms of the state space limits used. For a given state space, they also give a number of optimisations that reduce the algorithm complexity with no further loss of decoder performance. They also show how the MAP decoder can be used in the absence of known frame boundaries, and demonstrate that an appropriate choice of decoder parameters allows the decoder to approach the performance when frame boundaries are known, at the expense of some increase in complexity. Finally, they express some existing constructions as TVB codes, comparing performance with published results and showing that improved performance is possible by taking advantage of the flexibility of TVB codes.

  16. Maximum permissible continuous release rates of phosphorus-32 and sulphur-35 to atmosphere in a milk producing area

    Energy Technology Data Exchange (ETDEWEB)

    Bryant, P M

    1963-01-01

    A method is given for calculating, for design purposes, the maximum permissible continuous release rates of phosphorus-32 and sulphur-35 to atmosphere with respect to milk contamination. In the absence of authoritative advice from the Medical Research Council, provisional working levels for the concentration of phosphorus-32 and sulphur-35 in milk are derived, and details are given of the agricultural assumptions involved in the calculation of the relationship between the amount of the nuclide deposited on grassland and that to be found in milk. The agricultural and meteorological conditions assumed are applicable as an annual average to England and Wales. The results (in mc/day) for phosphorus-32 and sulphur-35 for a number of stack heights and distances are shown graphically; typical values, quoted in a table, include 20 mc/day of phosphorus-32 and 30 mc/day of sulphur-35 as the maximum permissible continuous release rates with respect to ground level releases at a distance of 200 metres from pastureland.

  17. Finite mixture model: A maximum likelihood estimation approach on time series data

    Science.gov (United States)

    Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-09-01

    Statisticians have recently emphasized fitting finite mixture models by maximum likelihood estimation because of its desirable asymptotic properties: the estimator is consistent as the sample size increases to infinity, is asymptotically unbiased, and attains the smallest variance among competing statistical methods in large samples. Maximum likelihood estimation is therefore adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative relationship between rubber price and exchange rate for all selected countries.
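Maximum likelihood fitting of a two-component mixture is commonly done with the EM algorithm. The sketch below is a minimal equal-variance Gaussian version on synthetic data; the data, component parameters and shared-variance simplification are illustrative assumptions, not the paper's rubber-price model:

```python
import math
import random

def em_two_gaussians(data, iters=200):
    """Minimal EM sketch for a two-component Gaussian mixture with a
    shared variance.  Returns (weight of component 1, mean 1, mean 2)."""
    mu1, mu2 = min(data), max(data)        # crude initialisation
    sigma = (mu2 - mu1) / 4 or 1.0
    w = 0.5
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point
        r1 = []
        for x in data:
            p1 = w * math.exp(-(x - mu1) ** 2 / (2 * sigma ** 2))
            p2 = (1 - w) * math.exp(-(x - mu2) ** 2 / (2 * sigma ** 2))
            r1.append(p1 / (p1 + p2))
        n1 = sum(r1)
        n2 = len(data) - n1
        # M-step: re-estimate weight, means and the shared variance
        w = n1 / len(data)
        mu1 = sum(r * x for r, x in zip(r1, data)) / n1
        mu2 = sum((1 - r) * x for r, x in zip(r1, data)) / n2
        var = sum(r * (x - mu1) ** 2 + (1 - r) * (x - mu2) ** 2
                  for r, x in zip(r1, data)) / len(data)
        sigma = math.sqrt(var)
    return w, mu1, mu2

rng = random.Random(0)
data = ([rng.gauss(-2.0, 0.5) for _ in range(300)]
        + [rng.gauss(3.0, 0.5) for _ in range(300)])
w, mu1, mu2 = em_two_gaussians(data)
```

Each EM iteration cannot decrease the likelihood, which is what connects this procedure to the asymptotic properties the abstract cites.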

  18. Strong Maximum Principle for Multi-Term Time-Fractional Diffusion Equations and its Application to an Inverse Source Problem

    OpenAIRE

    Liu, Yikan

    2015-01-01

    In this paper, we establish a strong maximum principle for fractional diffusion equations with multiple Caputo derivatives in time, and investigate a related inverse problem of practical importance. Exploiting the solution properties and the involved multinomial Mittag-Leffler functions, we improve the weak maximum principle for the multi-term time-fractional diffusion equation to a stronger one, which is parallel to that for its single-term counterpart as expected. As a direct application, w...

  19. FlowMax: A Computational Tool for Maximum Likelihood Deconvolution of CFSE Time Courses.

    Directory of Open Access Journals (Sweden)

    Maxim Nikolaievich Shokhirev

    Full Text Available The immune response is a concerted dynamic multi-cellular process. Upon infection, the dynamics of lymphocyte populations are an aggregate of molecular processes that determine the activation, division, and longevity of individual cells. The timing of these single-cell processes is remarkably widely distributed with some cells undergoing their third division while others undergo their first. High cell-to-cell variability and technical noise pose challenges for interpreting popular dye-dilution experiments objectively. It remains an unresolved challenge to avoid under- or over-interpretation of such data when phenotyping gene-targeted mouse models or patient samples. Here we develop and characterize a computational methodology to parameterize a cell population model in the context of noisy dye-dilution data. To enable objective interpretation of model fits, our method estimates fit sensitivity and redundancy by stochastically sampling the solution landscape, calculating parameter sensitivities, and clustering to determine the maximum-likelihood solution ranges. Our methodology accounts for both technical and biological variability by using a cell fluorescence model as an adaptor during population model fitting, resulting in improved fit accuracy without the need for ad hoc objective functions. We have incorporated our methodology into an integrated phenotyping tool, FlowMax, and used it to analyze B cells from two NFκB knockout mice with distinct phenotypes; we not only confirm previously published findings at a fraction of the expended effort and cost, but reveal a novel phenotype of nfkb1/p105/50 in limiting the proliferative capacity of B cells following B-cell receptor stimulation. In addition to complementing experimental work, FlowMax is suitable for high throughput analysis of dye dilution studies within clinical and pharmacological screens with objective and quantitative conclusions.

  20. The Sidereal Time Variations of the Lorentz Force and Maximum Attainable Speed of Electrons

    Science.gov (United States)

    Nowak, Gabriel; Wojtsekhowski, Bogdan; Roblin, Yves; Schmookler, Barak

    2016-09-01

    The Continuous Electron Beam Accelerator Facility (CEBAF) at Jefferson Lab produces electrons that orbit through a known magnetic system. The electron beam's momentum can be determined from the radius of the beam's orbit. This project compares the beam orbit's radius while travelling in a transverse magnetic field with theoretical predictions from special relativity, which predict a constant beam orbit radius. Variations in the beam orbit's radius are found by comparing the beam's momentum entering and exiting a magnetic arc. Beam position monitors (BPMs) provide the information needed to calculate the beam momentum. Multiple BPMs are included in the analysis and fitted using the method of least squares to decrease statistical uncertainty. Preliminary results from data collected over a 24 hour period show that the relative momentum change was less than 10⁻⁴. Further study will be conducted including larger time spans and stricter cuts applied to the BPM data. The data from this analysis will be used in a larger experiment attempting to verify special relativity. While the project is not traditionally nuclear physics, it involves the same technology (the CEBAF accelerator) and the same methods (ROOT) as a nuclear physics experiment. DOE SULI Program.
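The least-squares fit over multiple BPM readings can be sketched as a straight-line orbit fit. The BPM positions, noise level and orbit parameters below are invented for illustration, not CEBAF values:

```python
import numpy as np

# Hypothetical sketch: fit a straight-line orbit through noisy
# beam-position-monitor (BPM) readings by least squares.  s holds
# assumed BPM locations along the beamline, x the measured offsets.
rng = np.random.default_rng(42)
s = np.linspace(0.0, 30.0, 8)               # BPM positions (m), assumed
true_offset, true_slope = 1.2, 0.05         # assumed true orbit
x = true_offset + true_slope * s + rng.normal(0.0, 0.01, s.size)

# Least squares for the model x(s) = offset + slope * s
A = np.vstack([np.ones_like(s), s]).T
(offset, slope), residuals, rank, _ = np.linalg.lstsq(A, x, rcond=None)
```

Because the fitted parameters average over all monitors, their statistical uncertainty shrinks relative to any single BPM reading, which is the motivation stated in the abstract.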

  1. Surface of Maximums of AR(2 Process Spectral Densities and its Application in Time Series Statistics

    Directory of Open Access Journals (Sweden)

    Alexander V. Ivanov

    2017-09-01

    Conclusions. The obtained formula of surface of maximums of noise spectral densities gives an opportunity to realize for which values of AR(2 process characteristic polynomial coefficients it is possible to look for greater rate of convergence to zero of the probabilities of large deviations of the considered estimates.

  2. Monte Carlo Maximum Likelihood Estimation for Generalized Long-Memory Time Series Models

    NARCIS (Netherlands)

    Mesters, G.; Koopman, S.J.; Ooms, M.

    2016-01-01

    An exact maximum likelihood method is developed for the estimation of parameters in a non-Gaussian nonlinear density function that depends on a latent Gaussian dynamic process with long-memory properties. Our method relies on the method of importance sampling and on a linear Gaussian approximating

  3. Maximum-principle-satisfying space-time conservation element and solution element scheme applied to compressible multifluids

    KAUST Repository

    Shen, Hua

    2016-10-19

    A maximum-principle-satisfying space-time conservation element and solution element (CE/SE) scheme is constructed to solve a reduced five-equation model coupled with the stiffened equation of state for compressible multifluids. We first derive a sufficient condition for CE/SE schemes to satisfy the maximum principle when solving a general conservation law. We then introduce a slope limiter to enforce the sufficient condition, applicable to both central and upwind CE/SE schemes. Finally, we implement the upwind maximum-principle-satisfying CE/SE scheme to solve the volume-fraction-based five-equation model for compressible multifluids. Several numerical examples are carried out to carefully examine the accuracy, efficiency, conservativeness and maximum-principle-satisfying property of the proposed approach.

  4. Maximum-principle-satisfying space-time conservation element and solution element scheme applied to compressible multifluids

    KAUST Repository

    Shen, Hua; Wen, Chih-Yung; Parsani, Matteo; Shu, Chi-Wang

    2016-01-01

    A maximum-principle-satisfying space-time conservation element and solution element (CE/SE) scheme is constructed to solve a reduced five-equation model coupled with the stiffened equation of state for compressible multifluids. We first derive a sufficient condition for CE/SE schemes to satisfy the maximum principle when solving a general conservation law. We then introduce a slope limiter to enforce the sufficient condition, applicable to both central and upwind CE/SE schemes. Finally, we implement the upwind maximum-principle-satisfying CE/SE scheme to solve the volume-fraction-based five-equation model for compressible multifluids. Several numerical examples are carried out to carefully examine the accuracy, efficiency, conservativeness and maximum-principle-satisfying property of the proposed approach.

  5. Maximum Lateness Scheduling on Two-Person Cooperative Games with Variable Processing Times and Common Due Date

    OpenAIRE

    Liu, Peng; Wang, Xiaoli

    2017-01-01

    A new maximum lateness scheduling model in which both cooperative games and variable processing times exist simultaneously is considered in this paper. The job variable processing time is described by an increasing or a decreasing function dependent on the position of a job in the sequence. Two persons have to cooperate in order to process a set of jobs. Each of them has a single machine and their processing cost is defined as the minimum value of maximum lateness. All jobs have a common due ...

  6. Estimation of daily maximum and minimum air temperatures in urban landscapes using MODIS time series satellite data

    Science.gov (United States)

    Yoo, Cheolhee; Im, Jungho; Park, Seonyoung; Quackenbush, Lindi J.

    2018-03-01

    Urban air temperature is considered a significant variable for a variety of urban issues, and analyzing the spatial patterns of air temperature is important for urban planning and management. However, insufficient weather stations limit accurate spatial representation of temperature within a heterogeneous city. This study used a random forest machine learning approach to estimate daily maximum and minimum air temperatures (Tmax and Tmin) for two megacities with different climate characteristics: Los Angeles, USA, and Seoul, South Korea. This study used eight time series of land surface temperature (LST) data from the Moderate Resolution Imaging Spectroradiometer (MODIS), with seven auxiliary variables: elevation, solar radiation, normalized difference vegetation index, latitude, longitude, aspect, and the percentage of impervious area. We found different relationships between the eight time-series LSTs and Tmax/Tmin for the two cities, and designed eight schemes with different input LST variables. The schemes were evaluated using the coefficient of determination (R²) and root mean square error (RMSE) from 10-fold cross-validation. The best schemes produced R² of 0.850 and 0.777 and RMSE of 1.7 °C and 1.2 °C for Tmax and Tmin in Los Angeles, and R² of 0.728 and 0.767 and RMSE of 1.1 °C and 1.2 °C for Tmax and Tmin in Seoul, respectively. LSTs obtained the day before were crucial for estimating daily urban air temperature. Estimated air temperature patterns showed that Tmax was highly dependent on the geographic factors (e.g., sea breeze, mountains) of the two cities, while Tmin showed marginally distinct temperature differences between built-up and vegetated areas in the two cities.
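The two scores used to evaluate the schemes can be computed directly. The toy temperature values below are assumptions standing in for observed and estimated Tmax, not the study's data:

```python
import math

def rmse(y_true, y_pred):
    """Root mean square error."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

def r_squared(y_true, y_pred):
    """Coefficient of determination R^2."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Assumed toy values standing in for observed and predicted Tmax (deg C)
y_true = [20.0, 25.0, 30.0, 22.0, 28.0]
y_pred = [21.0, 24.0, 29.5, 23.0, 27.0]
error = rmse(y_true, y_pred)
score = r_squared(y_true, y_pred)
```

In k-fold cross-validation these scores are computed on each held-out fold and then aggregated, which is how the per-city R² and RMSE figures above were obtained.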

  7. An implementation of the maximum-caliber principle by replica-averaged time-resolved restrained simulations.

    Science.gov (United States)

    Capelli, Riccardo; Tiana, Guido; Camilloni, Carlo

    2018-05-14

    Inferential methods can be used to integrate experimental information and molecular simulations. The maximum entropy principle provides a framework for using equilibrium experimental data, and it has been shown that replica-averaged simulations, restrained using a static potential, are a practical and powerful implementation of such a principle. Here we show that replica-averaged simulations restrained using a time-dependent potential are equivalent to the principle of maximum caliber, the dynamic version of the principle of maximum entropy, and thus may allow us to integrate time-resolved data into molecular dynamics simulations. We provide an analytical proof of the equivalence as well as a computational validation making use of simple models and synthetic data. Some limitations and possible solutions are also discussed.
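The replica-averaged restraint can be sketched as a harmonic penalty on the observable averaged over replicas. The observable values, target and force constant below are illustrative assumptions, not taken from the paper:

```python
def replica_restraint_energy(observables, target, k):
    """Harmonic restraint applied to the replica-averaged observable,
    the practical device described in the abstract (illustrative form;
    the per-replica values and force constant are assumptions)."""
    avg = sum(observables) / len(observables)
    return 0.5 * k * (avg - target) ** 2

# Four replicas whose average (1.1) is restrained toward target 1.0
e = replica_restraint_energy([1.0, 1.2, 0.8, 1.4], target=1.0, k=10.0)
```

In the time-resolved (maximum caliber) setting described above, `target` would become a function of time, so the same penalty tracks the experimental signal over the trajectory.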

  8. Detection of surface electromyography recording time interval without muscle fatigue effect for biceps brachii muscle during maximum voluntary contraction.

    Science.gov (United States)

    Soylu, Abdullah Ruhi; Arpinar-Avsar, Pinar

    2010-08-01

    The effects of fatigue on maximum voluntary contraction (MVC) parameters were examined by using force and surface electromyography (sEMG) signals of the biceps brachii muscles (BBM) of 12 subjects. The purpose of the study was to find the sEMG time interval of the MVC recordings which is not affected by muscle fatigue. At least 10 s of force and sEMG signals of the BBM were recorded simultaneously during MVC. The subjects reached the maximum force level within 2 s by gradually increasing the force, and then contracted the BBM maximally. The time index of each sEMG and force signal was labeled with respect to the time index of the maximum force (i.e. after time normalization, the 0 s time index of each sEMG or force signal corresponds to the maximum force point). Then, the first 8 s of sEMG and force signals were divided into 0.5 s intervals. Mean force, median frequency (MF) and integrated EMG (iEMG) values were calculated for each interval. Amplitude normalization was performed by dividing the force signals by their mean values over the 0 s time interval (i.e. -0.25 to 0.25 s). A similar amplitude normalization procedure was repeated for the iEMG and MF signals. Statistical analysis (Friedman test with Dunn's post hoc test) was performed on the time- and amplitude-normalized signals (MF, iEMG). Although the ANOVA results did not give statistically significant information about the onset of muscle fatigue, linear regression (mean force vs. time) showed a decreasing slope (Pearson r = 0.9462), indicating that fatigue starts after the 0 s time interval as the muscles cannot attain their peak force levels. This implies that the most reliable interval for MVC calculation which is not affected by muscle fatigue is from the onset of EMG activity to the peak force time. The mean, SD, and range of this interval (excluding the 2 s gradual increase time) for the 12 subjects were 2353 ms, 1258 ms and 536-4186 ms, respectively. Exceeding this interval introduces estimation errors in the maximum amplitude calculations.
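The median frequency (MF) used in this analysis is the frequency that splits the one-sided power spectrum into two halves of equal power. A sketch on a synthetic narrow-band signal (the sampling rate and test frequency are assumptions, and a pure sine is only a stand-in for a real sEMG burst):

```python
import numpy as np

def median_frequency(signal, fs):
    """Median frequency: the frequency that splits the one-sided
    power spectrum into two halves of equal power."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    cumulative = np.cumsum(power)
    return float(freqs[np.searchsorted(cumulative, cumulative[-1] / 2.0)])

fs = 1000.0                               # assumed sampling rate (Hz)
t = np.arange(0.0, 1.0, 1.0 / fs)
surrogate = np.sin(2 * np.pi * 50.0 * t)  # narrow-band stand-in for sEMG
mf = median_frequency(surrogate, fs)
```

For a pure 50 Hz tone all spectral power sits in one bin, so the computed MF equals the tone frequency; in fatiguing muscle the power spectrum shifts toward lower frequencies and the MF drops with it.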

  9. Maximum Kolmogorov-Sinai Entropy Versus Minimum Mixing Time in Markov Chains

    Science.gov (United States)

    Mihelich, M.; Dubrulle, B.; Paillard, D.; Kral, Q.; Faranda, D.

    2018-01-01

    We establish a link between the maximization of Kolmogorov Sinai entropy (KSE) and the minimization of the mixing time for general Markov chains. Since the maximisation of KSE is analytical and easier to compute in general than mixing time, this link provides a new faster method to approximate the minimum mixing time dynamics. It could be interesting in computer sciences and statistical physics, for computations that use random walks on graphs that can be represented as Markov chains.
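For a small chain the two quantities the abstract links can be computed directly. The 2-state transition matrix below is an illustrative assumption: it is the case where the Kolmogorov-Sinai entropy is maximal (log 2) and mixing is immediate, consistent with the claimed connection:

```python
import numpy as np

# Illustrative 2-state chain; all transition probabilities positive.
P = np.array([[0.5, 0.5],
              [0.5, 0.5]])

# Stationary distribution: left eigenvector of P for eigenvalue 1
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi = pi / pi.sum()

# Kolmogorov-Sinai entropy  h = -sum_i pi_i sum_j P_ij log P_ij
h = -float(np.sum(pi[:, None] * P * np.log(P)))

# Mixing speed is governed by the second-largest eigenvalue modulus:
# the smaller it is, the faster the chain forgets its initial state.
slem = float(np.sort(np.abs(vals))[-2])
```

Here h = log 2 and the second-largest eigenvalue modulus is 0: maximal entropy coincides with the fastest possible mixing, the pattern the authors generalize.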

  10. Optimization of NANOGrav's time allocation for maximum sensitivity to single sources

    International Nuclear Information System (INIS)

    Christy, Brian; Anella, Ryan; Lommen, Andrea; Camuccio, Richard; Handzo, Emma; Finn, Lee Samuel

    2014-01-01

    Pulsar timing arrays (PTAs) are a collection of precisely timed millisecond pulsars (MSPs) that can search for gravitational waves (GWs) in the nanohertz frequency range by observing characteristic signatures in the timing residuals. The sensitivity of a PTA depends on the direction of the propagating GW source, the timing accuracy of the pulsars, and the allocation of the available observing time. The goal of this paper is to determine the optimal time allocation strategy among the MSPs in the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) for a single source of GW under a particular set of assumptions. We consider both an isotropic distribution of sources across the sky and a specific source in the Virgo cluster. This work improves on previous efforts by modeling the effect of intrinsic spin noise for each pulsar. We find that, in general, the array is optimized by maximizing time spent on the best-timed pulsars, with sensitivity improvements typically ranging from a factor of 1.5 to 4.

  11. Separation of Stochastic and Deterministic Information from Seismological Time Series with Nonlinear Dynamics and Maximum Entropy Methods

    International Nuclear Information System (INIS)

    Gutierrez, Rafael M.; Useche, Gina M.; Buitrago, Elias

    2007-01-01

    We present a procedure developed to detect stochastic and deterministic information contained in empirical time series, useful for characterizing and modelling different aspects of the complex phenomena represented by such data. This procedure is applied to a seismological time series to obtain new information for studying and understanding geological phenomena. We use concepts and methods from nonlinear dynamics and maximum entropy. The method allows an optimal analysis of the available information.

  12. The effects of disjunct sampling and averaging time on maximum mean wind speeds

    DEFF Research Database (Denmark)

    Larsén, Xiaoli Guo; Mann, J.

    2006-01-01

    Conventionally, the 50-year wind is calculated on basis of the annual maxima of consecutive 10-min averages. Very often, however, the averages are saved with a temporal spacing of several hours. We call it disjunct sampling. It may also happen that the wind speeds are averaged over a longer time...
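The effect of disjunct sampling can be reproduced on a surrogate series: the maximum over a disjunctly sampled subset can never exceed, and typically underestimates, the maximum over all consecutive averages. The series length, spacing and Gaussian wind model below are invented for illustration:

```python
import random

rng = random.Random(7)
# Surrogate "year" of consecutive 10-min mean wind speeds (m/s); the
# Gaussian model and numbers are assumptions for illustration only.
series = [10.0 + rng.gauss(0.0, 2.0) for _ in range(6 * 24 * 365)]
annual_max_consecutive = max(series)

# Disjunct sampling: keep one 10-min average every 3 hours (every 18th)
disjunct = series[::18]
annual_max_disjunct = max(disjunct)
# The disjunct maximum is taken over a subset, so it can only be
# smaller than or equal to the consecutive maximum: annual maxima
# estimated from disjunct records are biased low.
```

Repeating this over many surrogate years would quantify the bias that a 50-year wind estimate inherits from disjunct archives.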

  13. ANALYTICAL ESTIMATION OF MINIMUM AND MAXIMUM TIME EXPENDITURES OF PASSENGERS AT AN URBAN ROUTE STOP

    Directory of Open Access Journals (Sweden)

    Gorbachov, P.

    2013-01-01

    Full Text Available This scientific paper deals with the problem related to the definition of average time spent by passengers while waiting for transport vehicles at urban stops as well as the results of analytical modeling of this value at traffic schedule unknown to the passengers and of two options of the vehicle traffic management on the given route.

  14. Maximum Lateness Scheduling on Two-Person Cooperative Games with Variable Processing Times and Common Due Date

    Directory of Open Access Journals (Sweden)

    Peng Liu

    2017-01-01

    Full Text Available A new maximum lateness scheduling model in which both cooperative games and variable processing times exist simultaneously is considered in this paper. The job variable processing time is described by an increasing or a decreasing function dependent on the position of a job in the sequence. Two persons have to cooperate in order to process a set of jobs. Each of them has a single machine and their processing cost is defined as the minimum value of maximum lateness. All jobs have a common due date. The objective is to maximize the multiplication of their rational positive cooperative profits. A division of those jobs should be negotiated to yield a reasonable cooperative profit allocation scheme acceptable to them. We propose the sufficient and necessary conditions for the problems to have positive integer solution.

  15. Timing A Pulsed Thin Film Pyroelectric Generator For Maximum Power Density

    International Nuclear Information System (INIS)

    Smith, A.N.; Hanrahan, B.M.; Neville, C.J.; Jankowski, N.R.

    2016-01-01

    Pyroelectric thermal-to-electric energy conversion is accomplished by a cyclic process of thermally-inducing polarization changes in the material under an applied electric field. The pyroelectric MEMS device investigated consisted of a thin film PZT capacitor with platinum bottom and iridium oxide top electrodes. Electric fields between 1-20 kV/cm with a 30% duty cycle and frequencies from 0.1 - 100 Hz were tested with a modulated continuous wave IR laser with a duty cycle of 20% creating temperature swings from 0.15 - 26 °C on the pyroelectric receiver. The net output power of the device was highly sensitive to the phase delay between the laser power and the applied electric field. A thermal model was developed to predict and explain the power loss associated with finite charge and discharge times. Excellent agreement was achieved between the theoretical model and the experiment results for the measured power density versus phase delay. Limitations on the charging and discharging rates result in reduced power and lower efficiency due to a reduced net work per cycle. (paper)

  16. Use of queue modelling in the analysis of elective patient treatment governed by a maximum waiting time policy

    DEFF Research Database (Denmark)

    Kozlowski, Dawid; Worthington, Dave

    2015-01-01

    This paper illustrates the use of a queue modelling approach in the analysis of elective patient treatment governed by the maximum waiting time policy. Drawing upon the combined strengths of analytic and simulation approaches, we develop both continuous-time Markov chain and discrete event simulation models to provide an insightful analysis of the public hospital performance under the policy rules. The aim of this paper is to support the enhancement of the quality of elective patient care, to be brought about by better understanding of the policy implications on the utilization of public hospital resources.
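One of the two modelling approaches mentioned, a continuous-time Markov chain, can be sketched for a toy queue. The birth-death generator below (assumed arrival rate, service rate and capacity; not the paper's hospital model) is solved for its steady state:

```python
import numpy as np

# Toy continuous-time Markov chain (birth-death) for a queue with
# assumed arrival rate lam, service rate mu and capacity K.
lam, mu, K = 1.0, 2.0, 10
n = K + 1
Q = np.zeros((n, n))                 # generator matrix
for i in range(n):
    if i < K:
        Q[i, i + 1] = lam            # arrival
    if i > 0:
        Q[i, i - 1] = mu             # service completion
    Q[i, i] = -Q[i].sum()

# Steady state: solve pi Q = 0 subject to sum(pi) = 1
A = np.vstack([Q.T, np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
```

From the stationary distribution `pi` one can read off queue-length probabilities and hence, with suitable extensions, the chance a patient's wait would breach a maximum-waiting-time limit.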

  17. Space-Time Chip Equalization for Maximum Diversity Space-Time Block Coded DS-CDMA Downlink Transmission

    NARCIS (Netherlands)

    Leus, G.; Petré, F.; Moonen, M.

    2004-01-01

    In the downlink of DS-CDMA, frequency-selectivity destroys the orthogonality of the user signals and introduces multiuser interference (MUI). Space-time chip equalization is an efficient tool to restore the orthogonality of the user signals and suppress the MUI. Furthermore, multiple-input

  18. Distinct timing mechanisms produce discrete and continuous movements.

    Directory of Open Access Journals (Sweden)

    Raoul Huys

    2008-04-01

    Full Text Available The differentiation of discrete and continuous movement is one of the pillars of motor behavior classification. Discrete movements have a definite beginning and end, whereas continuous movements do not have such discriminable end points. In the past decade there has been vigorous debate about whether this classification implies different control processes. Up until the present, this debate has been empirically based. Here, we present an unambiguous non-empirical classification based on theorems in dynamical system theory that sets discrete and continuous movements apart. Through computational simulations of representative modes of each class and topological analysis of the flow in state space, we show that distinct control mechanisms underwrite discrete and fast rhythmic movements. In particular, we demonstrate that discrete movements require a time keeper while fast rhythmic movements do not. We validate our computational findings experimentally using a behavioral paradigm in which human participants performed finger flexion-extension movements at various movement paces and under different instructions. Our results demonstrate that the human motor system employs different timing control mechanisms (presumably via differential recruitment of neural subsystems) to accomplish varying behavioral functions such as speed constraints.

  19. Binary versus non-binary information in real time series: empirical results and maximum-entropy matrix models

    Science.gov (United States)

    Almog, Assaf; Garlaschelli, Diego

    2014-09-01

    The dynamics of complex systems, from financial markets to the brain, can be monitored in terms of multiple time series of activity of the constituent units, such as stocks or neurons, respectively. While the main focus of time series analysis is on the magnitude of temporal increments, a significant piece of information is encoded into the binary projection (i.e. the sign) of such increments. In this paper we provide further evidence of this by showing strong nonlinear relations between binary and non-binary properties of financial time series. These relations are a novel quantification of the fact that extreme price increments occur more often when most stocks move in the same direction. We then introduce an information-theoretic approach to the analysis of the binary signature of single and multiple time series. Through the definition of maximum-entropy ensembles of binary matrices and their mapping to spin models in statistical physics, we quantify the information encoded into the simplest binary properties of real time series and identify the most informative property given a set of measurements. Our formalism is able to accurately replicate, and mathematically characterize, the observed binary/non-binary relations. We also obtain a phase diagram allowing us to identify, based only on the instantaneous aggregate return of a set of multiple time series, a regime where the so-called ‘market mode’ has an optimal interpretation in terms of collective (endogenous) effects, a regime where it is parsimoniously explained by pure noise, and a regime where it can be regarded as a combination of endogenous and exogenous factors. Our approach allows us to connect spin models, simple stochastic processes, and ensembles of time series inferred from partial information.

  20. Binary versus non-binary information in real time series: empirical results and maximum-entropy matrix models

    International Nuclear Information System (INIS)

    Almog, Assaf; Garlaschelli, Diego

    2014-01-01

    The dynamics of complex systems, from financial markets to the brain, can be monitored in terms of multiple time series of activity of the constituent units, such as stocks or neurons, respectively. While the main focus of time series analysis is on the magnitude of temporal increments, a significant piece of information is encoded into the binary projection (i.e. the sign) of such increments. In this paper we provide further evidence of this by showing strong nonlinear relations between binary and non-binary properties of financial time series. These relations are a novel quantification of the fact that extreme price increments occur more often when most stocks move in the same direction. We then introduce an information-theoretic approach to the analysis of the binary signature of single and multiple time series. Through the definition of maximum-entropy ensembles of binary matrices and their mapping to spin models in statistical physics, we quantify the information encoded into the simplest binary properties of real time series and identify the most informative property given a set of measurements. Our formalism is able to accurately replicate, and mathematically characterize, the observed binary/non-binary relations. We also obtain a phase diagram allowing us to identify, based only on the instantaneous aggregate return of a set of multiple time series, a regime where the so-called ‘market mode’ has an optimal interpretation in terms of collective (endogenous) effects, a regime where it is parsimoniously explained by pure noise, and a regime where it can be regarded as a combination of endogenous and exogenous factors. Our approach allows us to connect spin models, simple stochastic processes, and ensembles of time series inferred from partial information. (paper)

  1. Spectrophotometric analysis of tomato plants produced from seeds exposed under space flight conditions for a long time

    Science.gov (United States)

    Nechitailo, Galina S.; Yurov, S.; Cojocaru, A.; Revin, A.

    The analysis of lycopene and other carotenoids in tomatoes produced from seeds exposed to space flight conditions at the orbital station MIR for six years is presented in this work. Our previous experiments with tomato plants showed the germination of seeds to be 32%. Genetic investigations revealed 18% in the experiment and 8% in the control. Experiments were conducted to study the capacity of various stimulating factors to increase germination of seeds exposed for a long time to the action of space flight factors. An increase of 20% was achieved, but at the same time mutants having no analogues in the control variants were detected. For the present investigations of the third generation of plants produced from seeds stored for a long time under space flight conditions, 80 tomatoes from forty plants were selected. The concentration of lycopene in the experimental specimens was 2.5-3 times higher than in the control variants. The spectrophotometric analysis of ripe tomatoes revealed typical three-peaked carotenoid spectra with a high maximum of lycopene (474 nm), a moderate maximum of its precursor, phytoene (267 nm), and a low maximum of carotenes. In green tomatoes, on the contrary, a high maximum of phytoene, a moderate maximum of lycopene and a low maximum of carotenes were observed. The results of the spectral analysis point to the retardation of biosynthesis of carotenes while the production of lycopene is increased, and to the synthesis of lycopene from phytoene. The electric conduction of tomato juice in the experimental samples is increased, suggesting higher amounts of carotenoids, including lycopene, and of electrolytes. The higher the value of electric conduction of a specimen, the higher the spectral maxima of lycopene. The hydrogen ion exponent (pH) of the juice of ripe tomatoes increases, due to which the efficiency of ATP biosynthesis in cell mitochondria is likely to increase, too. The results demonstrating an increase in the content

  2. 'Just-in-Time' Battery Charge Depletion Control for PHEVs and E-REVs for Maximum Battery Life

    Energy Technology Data Exchange (ETDEWEB)

    DeVault, Robert C [ORNL

    2009-01-01

    Conventional methods of vehicle operation for Plug-in Hybrid Vehicles first discharge the battery to a minimum State of Charge (SOC) before switching to charge sustaining operation. This is very demanding on the battery, maximizing the number of trips ending with a depleted battery and maximizing the distance driven on a depleted battery over the vehicle's life. Several methods have been proposed to reduce the number of trips ending with a deeply discharged battery and also eliminate the need for extended driving on a depleted battery. An optimum SOC can be maintained for long battery life before discharging the battery so that the vehicle reaches an electric plug-in destination just as the battery reaches the minimum operating SOC. These Just-in-Time methods provide maximum effective battery life while getting virtually the same electricity from the grid.
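The Just-in-Time idea can be sketched as computing how far to drive at the sustained SOC before beginning depletion, so that the minimum SOC is reached exactly at the plug-in destination. All names, units and numbers below are assumptions for illustration, not ORNL's control law:

```python
def jit_depletion_start(trip_km, soc_per_km, soc_now, soc_min):
    """Hedged sketch of the 'Just-in-Time' idea: hold the battery at a
    life-friendly SOC and begin charge depletion only when the charge
    available above the minimum SOC exactly covers the remaining
    distance.  Names, units and rates are illustrative assumptions."""
    usable_soc = soc_now - soc_min            # % SOC available to spend
    depletion_km = usable_soc / soc_per_km    # distance that SOC covers
    return max(trip_km - depletion_km, 0.0)   # km to drive before depleting

# Example: 60 km trip, 1 %SOC per km, sustain at 80 %, floor at 30 %
start_km = jit_depletion_start(60.0, 1.0, 80.0, 30.0)   # -> 10.0 km
```

On this toy trip the vehicle would charge-sustain for the first 10 km and deplete over the final 50 km, arriving at the plug exactly at the 30 % floor instead of driving extended distances on a depleted battery.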

  3. Joint Maximum Likelihood Time Delay Estimation of Unknown Event-Related Potential Signals for EEG Sensor Signal Quality Enhancement

    Science.gov (United States)

    Kim, Kyungsoo; Lim, Sung-Ho; Lee, Jaeseok; Kang, Won-Seok; Moon, Cheil; Choi, Ji-Woong

    2016-01-01

    Electroencephalograms (EEGs) measure a brain signal that contains abundant information about the human brain function and health. For this reason, recent clinical brain research and brain computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal to noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of event related potential (ERP) signal that represents a brain’s response to a particular stimulus or a task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate the uncertain delays which may be different in each trial. We evaluate the performance for different types of signals such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes employing averaged signal as a reference, e.g., up to 4 dB gain at the expected delay error of 10°. PMID:27322267
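The core idea, estimating each trial's delay against a common reference and realigning before averaging, can be sketched as follows. This is a simplified iterative cross-correlation alignment, not the authors' joint-ML estimator, and the signal shapes and delays below are synthetic.

```python
import numpy as np

def align_and_average(trials, n_iter=3):
    """Iteratively estimate per-trial delays by cross-correlating each
    trial with the running average, realign, and re-average."""
    trials = np.asarray(trials, dtype=float)
    avg = trials.mean(axis=0)
    delays = np.zeros(len(trials), dtype=int)
    for _ in range(n_iter):
        for i, tr in enumerate(trials):
            xc = np.correlate(tr, avg, mode="full")
            delays[i] = int(xc.argmax()) - (len(avg) - 1)  # lag of best match
        aligned = np.array([np.roll(tr, -d) for tr, d in zip(trials, delays)])
        avg = aligned.mean(axis=0)
    return delays, avg
```

Realigning before averaging restores the peak amplitude that plain averaging smears out, which is exactly the SNR loss the abstract attributes to variable trial delays.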

  4. Joint Maximum Likelihood Time Delay Estimation of Unknown Event-Related Potential Signals for EEG Sensor Signal Quality Enhancement

    Directory of Open Access Journals (Sweden)

    Kyungsoo Kim

    2016-06-01

    Full Text Available Electroencephalograms (EEGs) measure brain signals that contain abundant information about human brain function and health. For this reason, recent clinical brain research and brain-computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal-to-noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of the event-related potential (ERP) signal that represents a brain's response to a particular stimulus or task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate for the uncertain delays, which may be different in each trial. We evaluate the performance for different types of signals such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes employing the averaged signal as a reference, e.g., up to 4 dB gain at an expected delay error of 10°.

  5. Non-Invasive Rapid Harvest Time Determination of Oil-Producing Microalgae Cultivations for Biodiesel Production by Using Chlorophyll Fluorescence

    Energy Technology Data Exchange (ETDEWEB)

    Qiao, Yaqin [Key Laboratory of Algal Biology, Institute of Hydrobiology, Chinese Academy of Sciences, Wuhan (China); University of Chinese Academy of Sciences, Beijing (China); Rong, Junfeng [SINOPEC Research Institute of Petroleum Processing, Beijing (China); Chen, Hui; He, Chenliu; Wang, Qiang, E-mail: wangqiang@ihb.ac.cn [Key Laboratory of Algal Biology, Institute of Hydrobiology, Chinese Academy of Sciences, Wuhan (China)

    2015-10-05

    For the large-scale cultivation of microalgae for biodiesel production, one of the key problems is determining the optimum time for algal harvest, when the algae cells are saturated with neutral lipids. In this study, a method to determine the optimum harvest time in oil-producing microalgal cultivations by measuring the maximum photochemical efficiency of photosystem II, known as Fv/Fm, was established. When oil-producing Chlorella strains were cultivated and then subjected to nitrogen starvation, this not only stimulated neutral lipid accumulation but also affected the photosynthetic system, with the neutral lipid contents in all four algae strains – Chlorella sorokiniana C1, Chlorella sp. C2, C. sorokiniana C3, and C. sorokiniana C7 – correlating negatively with the Fv/Fm values. Thus, for a given oil-producing alga in which a significant relationship between neutral lipid content and the Fv/Fm value under nutrient stress can be established, the optimum harvest time can be determined by measuring the value of Fv/Fm. It is hoped that this method can provide an efficient way to determine the harvest time rapidly and expediently in large-scale oil-producing microalgae cultivations for biodiesel production.
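The measurement itself is a one-line formula: Fv/Fm = (Fm − F0)/Fm, computed from dark-adapted minimal (F0) and maximal (Fm) chlorophyll fluorescence. A minimal sketch of the resulting harvest rule follows; the 0.45 threshold is an invented placeholder, since a real threshold must be calibrated per strain against measured neutral lipid content, as the study does.

```python
def fv_fm(f0, fm):
    """Maximum photochemical efficiency of PSII from dark-adapted
    minimal (F0) and maximal (Fm) chlorophyll fluorescence."""
    if fm <= 0 or fm < f0:
        raise ValueError("require 0 < f0 <= fm")
    return (fm - f0) / fm

def ready_to_harvest(f0, fm, threshold=0.45):
    """Harvest once Fv/Fm has fallen to a strain-specific threshold
    (0.45 is an illustrative value, not one from the study)."""
    return fv_fm(f0, fm) <= threshold
```

A healthy culture typically sits near Fv/Fm ≈ 0.7-0.8; under nitrogen starvation the value declines as neutral lipids accumulate, which is what makes the threshold rule usable as a non-invasive harvest trigger.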

  6. INFRARED STUDIES OF HUMAN SALIVA. IDENTIFICATION OF A FACTOR IN HUMAN SALIVA PRODUCING AN INFRARED ABSORBANCE MAXIMUM AT 4.9 MICRONS

    Science.gov (United States)

    An absorption maximum was observed at 4.9 microns in infrared spectra of human parotid saliva. The factor causing this absorbance was found to be a...nitrate, and heat stability. Thiocyanate was then determined in 16 parotid saliva samples by a spectrophotometric method, which involved formation of

  7. Studying DDT Susceptibility at Discriminating Time Intervals Focusing on Maximum Limit of Exposure Time Survived by DDT Resistant Phlebotomus argentipes (Diptera: Psychodidae): an Investigative Report.

    Science.gov (United States)

    Rama, Aarti; Kesari, Shreekant; Das, Pradeep; Kumar, Vijay

    2017-07-24

    Extensive application of the routine insecticide dichlorodiphenyltrichloroethane (DDT) to control Phlebotomus argentipes (Diptera: Psychodidae), the proven vector of visceral leishmaniasis in India, has evoked the problem of resistance/tolerance to DDT, eventually nullifying DDT-dependent strategies to control this vector. Because tolerating an hour-long exposure to DDT is not challenging enough for resistant P. argentipes, estimating susceptibility by exposing sand flies to the insecticide for just an hour becomes a trivial and futile task. Therefore, this bioassay study was carried out to investigate the maximum limit of exposure time for which DDT-resistant P. argentipes can endure the effect of DDT and survive. The mortality rate of a laboratory-reared DDT-resistant strain of P. argentipes exposed to DDT was studied at discriminating time intervals of 60 min, and it was concluded that highly resistant sand flies could withstand up to 420 min of exposure to this insecticide. Additionally, the lethal time for female P. argentipes was observed to be higher than for males, suggesting that females are highly resistant to DDT's toxicity. Our results support monitoring the tolerance limit with respect to time and hence point towards an urgent need to change the World Health Organization's protocol for susceptibility identification in resistant P. argentipes.

  8. It is time to abandon "expected bladder capacity." Systematic review and new models for children's normal maximum voided volumes.

    Science.gov (United States)

    Martínez-García, Roberto; Ubeda-Sansano, Maria Isabel; Díez-Domingo, Javier; Pérez-Hoyos, Santiago; Gil-Salom, Manuel

    2014-09-01

    There is agreement to use simple formulae (expected bladder capacity and other age-based linear formulae) as the bladder capacity benchmark, but the real normal child's bladder capacity is unknown. The aims were to offer a systematic review of children's normal bladder capacity, to measure children's normal maximum voided volumes (MVVs), to construct models of MVVs and to compare them with the usual formulae. Computerized, manual and grey literature were reviewed until February 2013 in an epidemiological, observational, transversal, multicenter study. A consecutive sample of healthy children aged 5-14 years, attending Primary Care centres with no urologic abnormality, was selected. Participants filled in a 3-day frequency-volume chart. Variables were the MVVs (24-hr, nocturnal, and daytime maximum voided volumes); diuresis and its daytime and nighttime fractions; body-measure data; and gender. The consecutive-steps method was used in a multivariate regression model. Twelve articles met the systematic review's criteria. Five hundred and fourteen cases were analysed. Three models, one for each of the MVVs, were built. All of them were better fitted by exponential equations. Diuresis (not age) was the most significant factor. There was poor agreement between MVVs and the usual formulae. Nocturnal and daytime maximum voided volumes depend on several factors, are different, and should be used with different meanings in the clinical setting. Diuresis is the main factor for bladder capacity. This is the first model for benchmarking normal MVVs with diuresis as its main factor. Current formulae are not suitable for clinical use. © 2013 Wiley Periodicals, Inc.
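The abstract reports that the MVV models were exponential in form, with diuresis as the dominant factor. A generic sketch of fitting such a one-factor exponential model by log-linear least squares follows; the published models' covariates and coefficients are not reproduced here, and the data are synthetic.

```python
import math

def fit_exponential(x, y):
    """Least-squares fit of y = a*exp(b*x) via the log-linear form
    ln y = ln a + b*x (requires y > 0)."""
    n = len(x)
    lny = [math.log(v) for v in y]
    sx, sy = sum(x), sum(lny)
    sxx = sum(v * v for v in x)
    sxy = sum(u * v for u, v in zip(x, lny))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = math.exp((sy - b * sx) / n)
    return a, b
```

For benchmarking, such a model would be evaluated at a child's measured diuresis rather than at their age, which is the abstract's central point against the age-based linear formulae.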

  9. Effect of temperature and hydraulic retention time on hydrogen producing granules: Homoacetogenesis and morphological characteristics

    International Nuclear Information System (INIS)

    Abreu, A. A.; Danko, A. S.; Alves, M. M.

    2009-01-01

    The effect of temperature and hydraulic retention time (HRT) on homoacetogenesis and on the morphological characteristics of hydrogen-producing granules was investigated. Hydrogen was produced using an expanded granular sludge blanket (EGSB) reactor, fed with glucose and L-arabinose, under mesophilic (37 °C), thermophilic (55 °C), and hyperthermophilic (70 °C) conditions. (Author)

  10. Time to reach tacrolimus maximum blood concentration, mean residence time, and acute renal allograft rejection: an open-label, prospective, pharmacokinetic study in adult recipients.

    Science.gov (United States)

    Kuypers, Dirk R J; Vanrenterghem, Yves

    2004-11-01

    The aims of this study were to determine whether disposition-related pharmacokinetic parameters such as T(max) and mean residence time (MRT) could be used as predictors of clinical efficacy of tacrolimus in renal transplant recipients, and to what extent these parameters would be influenced by clinical variables. We previously demonstrated, in a prospective pharmacokinetic study in de novo renal allograft recipients, that patients who experienced early acute rejection did not differ from patients free from rejection in terms of tacrolimus pharmacokinetic exposure parameters (dose-interval AUC, preadministration trough blood concentration, C(max), dose). However, recipients with acute rejection reached mean (SD) tacrolimus T(max) significantly faster than those who were free from rejection (0.96 [0.56] hour vs 1.77 [1.06] hours). Because neither clearance nor T(1/2) could explain this unusual finding, we used data from the previous study to calculate MRT from the concentration-time curves. As part of the previous study, 100 patients (59 male, 41 female; mean [SD] age, 51.4 [13.8] years; age range, 20-75 years) were enrolled. The calculated MRT was significantly shorter in recipients with acute allograft rejection (11.32 [0.31] hours vs 11.52 [0.28] hours; P = 0.02), and, like T(max), it was an independent risk factor for acute rejection in a multivariate logistic regression model (odds ratio, 0.092 [95% CI, 0.014-0.629]; P = 0.01). Analyzing the impact of demographic, transplantation-related, and biochemical variables on MRT, we found that increasing serum albumin and hematocrit concentrations were associated with a prolonged MRT. A faster T(max) and a shorter calculated MRT were associated with a higher incidence of early acute graft rejection. These findings suggest that a shorter transit time of tacrolimus in certain tissue compartments, rather than failure to obtain a maximum absolute tacrolimus blood concentration, might lead to inadequate immunosuppression early after transplantation.

  11. Timing of glacier advances and climate in the High Tatra Mountains (Western Carpathians) during the Last Glacial Maximum

    Science.gov (United States)

    Makos, Michał; Dzierżek, Jan; Nitychoruk, Jerzy; Zreda, Marek

    2014-07-01

    During the Last Glacial Maximum (LGM), long valley glaciers developed on the northern and southern sides of the High Tatra Mountains, Poland and Slovakia. Chlorine-36 exposure dating of moraine boulders suggests two major phases of moraine stabilization, at 26-21 ka (LGM I - maximum) and at 18 ka (LGM II). The dates suggest a significantly earlier maximum advance on the southern side of the range. Reconstructing the geometry of four glaciers in the Sucha Woda, Pańszczyca, Mlynicka and Velicka valleys allowed their equilibrium-line altitudes (ELAs) to be determined at 1460, 1460, 1650 and 1700 m asl, respectively. Based on a positive degree-day model, the mass balance and the climatic parameter anomalies (temperature and precipitation) have been constrained for the LGM I advance. Modeling results indicate slightly different conditions between the northern and southern slopes. The N-S ELA gradient is consistent with a slightly higher temperature (at least 1 °C) or lower precipitation (15%) on the south-facing glaciers during LGM I. The precipitation distribution over the High Tatra Mountains indicates a potentially different LGM atmospheric circulation than at the present day, with reduced northwesterly inflow and increased southerly and westerly inflows of moist air masses.
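The positive degree-day approach used for the mass-balance modelling is simple enough to sketch: melt is proportional to the sum of positive daily mean temperatures, and net balance is solid accumulation minus melt. The degree-day factor and the synthetic climate below are illustrative values, not the study's calibration.

```python
import math

def pdd_melt(daily_temps_c, ddf=4.0):
    """Positive degree-day melt (mm w.e.): DDF times the sum of positive
    daily mean temperatures. DDF = 4 mm/(deg C * day) is a typical snow
    value, not the study's calibrated factor."""
    return ddf * sum(t for t in daily_temps_c if t > 0.0)

def annual_balance(daily_temps_c, daily_precip_mm):
    """Net balance = solid accumulation (precipitation on days <= 0 deg C,
    a crude snow/rain split) minus PDD melt."""
    acc = sum(p for t, p in zip(daily_temps_c, daily_precip_mm) if t <= 0.0)
    return acc - pdd_melt(daily_temps_c)
```

Raising the whole temperature series by 1 °C simultaneously increases melt and converts some snowfall to rain, so the balance drops; this is the same lever the authors use when constraining the LGM I temperature anomaly against the reconstructed ELAs.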

  12. Formation time of hadrons and density of matter produced in relativistic heavy-ion collisions

    International Nuclear Information System (INIS)

    Pisut, J.; Zavada, P.

    1994-06-01

    Densities of interacting hadronic matter produced in Oxygen-Lead and Sulphur-Lead collisions at 200 GeV/nucleon are estimated as a function of the formation time of hadrons. Uncertainties in our knowledge of the critical temperature T c and of the formation time of hadrons τ 0 permit at present three scenarios: an optimistic one (QGP has already been produced in collisions of Oxygen and Sulphur with heavy ions and will be copiously produced in Lead collisions), a pessimistic one (QGP cannot be produced at 200 GeV/nucleon) and an intermediate one (QGP has not been produced in Oxygen and Sulphur interactions with heavy ions and will at best be produced only marginally in Pb collisions). The last option is found to be the most probable. (author)

  13. Effects of the Maximum Luminance in a Medical-grade Liquid-crystal Display on the Recognition Time of a Test Pattern: Observer Performance Using Landolt Rings.

    Science.gov (United States)

    Doi, Yasuhiro; Matsuyama, Michinobu; Ikeda, Ryuji; Hashida, Masahiro

    2016-07-01

    This study was conducted to measure the recognition time of a test pattern and to investigate the effects of the maximum luminance of a medical-grade liquid-crystal display (LCD) on the recognition time. Landolt rings with four random orientations were used as signals of the test pattern, one on each of the eight gray-scale steps. Ten observers input the orientation of the gap in the Landolt rings using cursor keys on the keyboard. The recognition times were automatically measured from the display of the test pattern on the medical-grade LCD to the input of the orientation of the gap. The maximum luminance in this study was set to one of four values (100, 170, 250, and 400 cd/m²), for which the corresponding recognition times were measured. As a result, the average recognition times across observers at maximum luminances of 100, 170, 250, and 400 cd/m² ranged from 3.96 to 7.12 s, 3.72 to 6.35 s, 3.53 to 5.97 s, and 3.37 to 5.98 s, respectively. The results indicate that the observer's recognition time decreases as the luminance of the medical-grade LCD increases. Therefore, it is evident that the maximum luminance of the medical-grade LCD affects the test pattern recognition time.

  14. A Time- and Cost-Saving Method of Producing Rat Polyclonal Antibodies

    International Nuclear Information System (INIS)

    Wakayama, Tomohiko; Kato, Yukio; Utsumi, Rie; Tsuji, Akira; Iseki, Shoichi

    2006-01-01

    Producing antibodies usually takes more than three months. In the present study, we introduce a faster way of producing polyclonal antibodies, based on preparation of a recombinant oligopeptide as the antigen followed by immunization of rats. Using this method, we produced antisera against two mouse proteins, ERGIC-53 and c-Kit. An expression vector ligated with a pair of complementary synthetic oligodeoxyribonucleotides encoding the protein was introduced into bacteria, and the recombinant oligopeptide fused with the carrier protein glutathione-S-transferase was purified. Wistar rats were immunized by injecting the emulsified antigen subcutaneously into the hind footpads, followed by a booster injection after 2 weeks. One week after the booster, the sera were collected and examined for antibody titer by immunohistochemistry. Antisera with titers of up to 1600-fold were obtained for both antigens, and their specificity was confirmed by Western blotting. Anti-ERGIC-53 antisera recognized acinar cells in the sublingual gland, and anti-c-Kit antisera recognized spermatogenic and Leydig cells in the testis. These antisera were applicable to fluorescent double immunostaining with mouse monoclonal or rabbit polyclonal antibodies. Consequently, this method enabled us to produce specific rat polyclonal antisera suitable for immunohistochemistry in less than one month at relatively low cost.

  15. Timing of maximum glacial extent and deglaciation from HualcaHualca volcano (southern Peru), obtained with cosmogenic 36Cl.

    Science.gov (United States)

    Alcalá, Jesus; Palacios, David; Vazquez, Lorenzo; Juan Zamorano, Jose

    2015-04-01

    Andean glacial deposits are key records of climate fluctuations in the southern hemisphere. During the last decades, in situ cosmogenic nuclides have provided fresh and significant dates for determining past glacier behavior in this region. But there are still many important discrepancies, such as the impact of the Last Glacial Maximum or the influence of Late Glacial climatic events on glacial mass balances. Furthermore, glacial chronologies from many sites are still missing, such as HualcaHualca (15° 43' S; 71° 52' W; 6,025 masl), a high volcano of the Peruvian Andes located 70 km northwest of Arequipa. The goal of this study is to establish the age of the Maximum Glacier Extent (MGE) and deglaciation at HualcaHualca volcano. To achieve this objective, we focused on four valleys (Huayuray, Pujro Huayjo, Mollebaya and Mucurca) characterized by well-preserved sequences of moraines and roches moutonnées. The method is based on geomorphological analysis supported by cosmogenic 36Cl surface exposure dating. 36Cl ages have been estimated with the CHLOE calculator and were compared with other central Andean glacial chronologies as well as paleoclimatological proxies. In Huayuray valley, exposure ages indicate that the MGE occurred ~ 18 - 16 ka. Later, the ice mass gradually retreated, but this process was interrupted by at least two readvances; the last one has been dated at ~ 12 ka. On the other hand, the 36Cl results reflect an MGE age of ~ 13 ka in Mollebaya valley. Two samples obtained in the Pujro Huayjo and Mucurca valleys associated with the MGE have an exposure age of 10-9 ka, but these are likely moraine boulders affected by exhumation or erosion processes. Deglaciation at HualcaHualca volcano began abruptly ~ 11.5 ka ago according to a 36Cl age from polished and striated bedrock in Pujro Huayjo valley, presumably as a result of reduced precipitation as well as a global increase in temperatures. The glacier evolution at HualcaHualca volcano presents a high correlation with

  16. An inventory model of purchase quantity for fully-loaded vehicles with maximum trips in consecutive transport time

    Directory of Open Access Journals (Sweden)

    Chen Poyu

    2013-01-01

    Full Text Available Products made overseas but sold in Taiwan are very common. For the cross-border or interregional production and marketing of goods, inventory decision-makers often have to determine the amount of purchases per cycle, the number of transport vehicles, the working hours of each transport vehicle, and whether delivery to sales offices is by ground or air transport, in order to minimize the total inventory cost per unit time. This model assumes that the amount of purchases in each order cycle should allow all rented vehicles to be fully loaded and the number of transport trips to reach its upper limit within the time period. The main research findings of this study include the optimal solution of the model's integer program and the results of a sensitivity analysis.

  17. K time & maximum amplitude of thromboelastogram predict post-central venous cannulation bleeding in patients with cirrhosis: A pilot study

    Directory of Open Access Journals (Sweden)

    Chandra K Pandey

    2017-01-01

    Interpretation & conclusions: Our results show that cut-off values of INR ≥2.6 and K time ≥3.05 min predict bleeding, and MA ≥48.8 mm predicts non-bleeding, in patients with cirrhosis undergoing central venous pressure catheter cannulation.

  18. The cooling time of white dwarfs produced from type Ia supernovae

    International Nuclear Information System (INIS)

    Meng Xiangcun; Yang Wuming; Li Zhongmu

    2010-01-01

    Type Ia supernovae (SNe Ia) play a key role in measuring cosmological parameters, for which the Phillips relation is adopted. However, the origin of the relation is still unclear. Several parameters have been suggested, e.g. the relative content of carbon to oxygen (C/O) and the central density of the white dwarf (WD) at ignition. These parameters are mainly determined by the WD's initial mass and its cooling time, respectively. Using the progenitor model developed by Meng and Yang, we present the distributions of the initial WD mass and the cooling time. We do not find any correlation between these parameters. However, we notice that as the range of the WD's mass decreases, its average value increases with the cooling time. These results could provide a constraint for simulations of the SN Ia explosion, i.e. WDs with a high C/O ratio usually have a lower central density at ignition, while those with the highest central density at ignition generally have a lower C/O ratio. The cooling time is mainly determined by the evolutionary age of the secondaries, and the scatter of the cooling time decreases with the evolutionary age. Our results may indicate that WDs with a long cooling time have more uniform properties than those with a short cooling time, which may help explain why SNe Ia in elliptical galaxies have a more uniform maximum luminosity than those in spiral galaxies. (research papers)

  19. Quasi-Maximum Likelihood Estimation and Bootstrap Inference in Fractional Time Series Models with Heteroskedasticity of Unknown Form

    DEFF Research Database (Denmark)

    Cavaliere, Giuseppe; Nielsen, Morten Ørregaard; Taylor, Robert

    We consider the problem of conducting estimation and inference on the parameters of univariate heteroskedastic fractionally integrated time series models. We first extend existing results in the literature, developed for conditional sum-of-squares estimators in the context of parametric fractional time series models driven by conditionally homoskedastic shocks, to allow for conditional and unconditional heteroskedasticity both of a quite general and unknown form. Global consistency and asymptotic normality are shown to still obtain; however, the covariance matrix of the limiting distribution of the estimator now depends on nuisance parameters derived both from the weak dependence and the heteroskedasticity present in the shocks. We then investigate classical methods of inference based on the Wald, likelihood ratio and Lagrange multiplier tests for linear hypotheses on either or both of the long and short...

  20. Probability distributions of bed load particle velocities, accelerations, hop distances, and travel times informed by Jaynes's principle of maximum entropy

    Science.gov (United States)

    Furbish, David; Schmeeckle, Mark; Schumer, Rina; Fathel, Siobhan

    2016-01-01

    We describe the most likely forms of the probability distributions of bed load particle velocities, accelerations, hop distances, and travel times, in a manner that formally appeals to inferential statistics while honoring mechanical and kinematic constraints imposed by equilibrium transport conditions. The analysis is based on E. Jaynes's elaboration of the implications of the similarity between the Gibbs entropy in statistical mechanics and the Shannon entropy in information theory. By maximizing the information entropy of a distribution subject to known constraints on its moments, our choice of the form of the distribution is unbiased. The analysis suggests that particle velocities and travel times are exponentially distributed and that particle accelerations follow a Laplace distribution with zero mean. Particle hop distances, viewed alone, ought to be distributed exponentially. However, the covariance between hop distances and travel times precludes this result. Instead, the covariance structure suggests that hop distances follow a Weibull distribution. These distributions are consistent with high-resolution measurements obtained from high-speed imaging of bed load particle motions. The analysis brings us closer to choosing distributions based on our mechanical insight.
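Jaynes's construction can be checked numerically: maximizing Shannon entropy subject to a fixed mean yields a distribution of Gibbs form p_i ∝ exp(−λx_i), i.e. exponential in x, which is the result the abstract invokes for particle velocities and travel times. The sketch below solves for λ by bisection on a discrete support; the support and target mean are arbitrary illustrations.

```python
import math

def maxent_pmf(values, target_mean, lo=-5.0, hi=5.0, iters=200):
    """Maximum-entropy pmf on `values` subject to a fixed mean: the
    solution has the Gibbs form p_i ~ exp(-lam*x_i); bisect on lam
    until the implied mean matches target_mean (values kept small
    enough that exp() cannot overflow)."""
    def implied_mean(lam):
        w = [math.exp(-lam * x) for x in values]
        z = sum(w)
        return sum(x * wi for x, wi in zip(values, w)) / z
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if implied_mean(mid) > target_mean:
            lo = mid          # mean too high -> need a larger lam
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(-lam * x) for x in values]
    z = sum(w)
    return [wi / z for wi in w]
```

On {0, …, 10} with mean 2 the result has a constant successor ratio p(i+1)/p(i) = exp(−λ), i.e. a discrete exponential, and its entropy exceeds that of any other mean-2 distribution on that support, e.g. the uniform distribution on {0, …, 4} (entropy ln 5).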

  1. Late-time particle emission from laser-produced graphite plasma

    Energy Technology Data Exchange (ETDEWEB)

    Harilal, S. S.; Hassanein, A.; Polek, M. [School of Nuclear Engineering, Center for Materials Under Extreme Environment, Purdue University, West Lafayette, Indiana 47907 (United States)

    2011-09-01

    We report a late-time "fireworks-like" particle emission from laser-produced graphite plasma during its evolution. Plasmas were produced using graphite targets excited with a 1064 nm Nd:yttrium aluminum garnet (YAG) laser in vacuum. The time evolution of the graphite plasma was investigated using fast gated imaging and visible emission spectroscopy. The emission dynamics of the plasma change rapidly with time, and the delayed firework-like emission from the graphite target followed a black-body curve. Our studies indicated that such firework-like emission is strongly dependent on target material properties and is explained by material spallation caused by overheating of the trapped gases through thermal diffusion along the layer structures of graphite.

  2. Late-time particle emission from laser-produced graphite plasma

    International Nuclear Information System (INIS)

    Harilal, S. S.; Hassanein, A.; Polek, M.

    2011-01-01

    We report a late-time "fireworks-like" particle emission from laser-produced graphite plasma during its evolution. Plasmas were produced using graphite targets excited with a 1064 nm Nd:yttrium aluminum garnet (YAG) laser in vacuum. The time evolution of the graphite plasma was investigated using fast gated imaging and visible emission spectroscopy. The emission dynamics of the plasma change rapidly with time, and the delayed firework-like emission from the graphite target followed a black-body curve. Our studies indicated that such firework-like emission is strongly dependent on target material properties and is explained by material spallation caused by overheating of the trapped gases through thermal diffusion along the layer structures of graphite.

  3. The relationship between the parameters (Heart rate, Ejection fraction and BMI) and the maximum enhancement time of ascending aorta

    International Nuclear Information System (INIS)

    Jang, Young Ill; June, Woon Kwan; Dong, Kyeong Rae

    2007-01-01

    In this study, the bolus tracking method was used to investigate the parameters affecting the time at which the contrast medium reaches 100 HU (T100), and to study the relationship between these parameters and T100, because the time for contrast medium injected through the antecubital vein to reach the aorta differs from person to person. Using 64-MDCT cardiac CT, data were obtained from 100 patients (male: 50, female: 50; age distribution: 21-81; average age: 57.5) during July through September 2007 by injecting the contrast medium at 4 ml/sec through the antecubital vein, excluding patients who had difficulty holding their breath and those with arrhythmia. Using a Siemens Somatom Sensation Cardiac 64, patients' height and weight were measured to obtain their mean Heart Rate and BMI. Ejection Fraction was measured using the Argus program on a Wizard workstation. Variations of each parameter were analyzed against variations in T100 with multiple comparisons, and the correlations of Heart Rate, Ejection Fraction and BMI were analyzed as well. Regarding the variation of T100 caused by Heart Rate, Ejection Fraction and BMI, the higher the patients' Heart Rate and Ejection Fraction, the faster T100 was reached; the lower the Heart Rate and Ejection Fraction, the slower T100 was reached; T100 was not affected by BMI. In the correlation between T100 and the parameters, Heart Rate (p<0.01) and Ejection Fraction (p<0.05) were significant, but BMI was not (p>0.05). Grouping T100 into Fast (17 sec and less), Medium (18-21 sec) and Slow (22 sec and over), Heart Rate was significant between Fast and Slow, and Ejection Fraction between Fast and Slow as well as Medium and Slow (p<0.05), but BMI was not statistically significant. Of the parameters (Heart Rate, Ejection Fraction and BMI) which would affect T100, Heart

  4. pplacer: linear time maximum-likelihood and Bayesian phylogenetic placement of sequences onto a fixed reference tree

    Directory of Open Access Journals (Sweden)

    Kodner Robin B

    2010-10-01

    Full Text Available Abstract Background Likelihood-based phylogenetic inference is generally considered to be the most reliable classification method for unknown sequences. However, traditional likelihood-based phylogenetic methods cannot be applied to large volumes of short reads from next-generation sequencing due to computational complexity issues and lack of phylogenetic signal. "Phylogenetic placement," where a reference tree is fixed and the unknown query sequences are placed onto the tree via a reference alignment, is a way to bring the inferential power offered by likelihood-based approaches to large data sets. Results This paper introduces pplacer, a software package for phylogenetic placement and subsequent visualization. The algorithm can place twenty thousand short reads on a reference tree of one thousand taxa per hour per processor, has essentially linear time and memory complexity in the number of reference taxa, and is easy to run in parallel. Pplacer features calculation of the posterior probability of a placement on an edge, which is a statistically rigorous way of quantifying uncertainty on an edge-by-edge basis. It also can inform the user of the positional uncertainty for query sequences by calculating expected distance between placement locations, which is crucial in the estimation of uncertainty with a well-sampled reference tree. The software provides visualizations using branch thickness and color to represent number of placements and their uncertainty. A simulation study using reads generated from 631 COG alignments shows a high level of accuracy for phylogenetic placement over a wide range of alignment diversity, and the power of edge uncertainty estimates to measure placement confidence. Conclusions Pplacer enables efficient phylogenetic placement and subsequent visualization, making likelihood-based phylogenetics methodology practical for large collections of reads; it is freely available as source code, binaries, and a web service.

  5. Features of the use of time-frequency distributions for controlling the mixture-producing aggregate

    Science.gov (United States)

    Fedosenkov, D. B.; Simikova, A. A.; Fedosenkov, B. A.

    2018-05-01

    The paper presents and substantiates the filtering properties of the mixing unit as a part of the mixture-producing aggregate. Relevant theoretical results on the channel transfer function of the mixing unit and on multidimensional material flow signals are adduced. Ordinary one-dimensional material flow signals are represented in terms of Cohen's-class time-frequency distributions operating with Gabor wavelet functions. Two time-frequency signal representations, the Rihaczek and Wigner-Ville distributions, are used to show how control problems in mixture-producing systems can be solved. In particular, the latter illustrates the low-pass filtering properties that are present in practically any low-pass element of a physical system.

  6. THE MAXIMUM AMOUNTS OF RAINFALL FALLEN IN SHORT PERIODS OF TIME IN THE HILLY AREA OF CLUJ COUNTY - GENESIS, DISTRIBUTION AND PROBABILITY OF OCCURRENCE

    Directory of Open Access Journals (Sweden)

    BLAGA IRINA

    2014-03-01

    The maximum amounts of rainfall are usually characterized by high intensity, and their effects on the substrate are revealed, at slope level, by the deepening of existing forms of torrential erosion, the formation of new ones, and landslide processes. For the 1971-2000 period, the highest rainfall amounts fallen in 24, 48 and 72 hours were extracted for the weather stations in the hilly area of Cluj County (Cluj-Napoca, Dej, Huedin and Turda), and the variation and the spatial and temporal distribution of the precipitation were analyzed. The annual probability of exceedance of maximum rainfall amounts fallen in short time intervals (24, 48 and 72 hours), based on thresholds and class values, was determined using climatological practices and the Hyfran program facilities.
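
    The abstract mentions annual exceedance probabilities derived from "climatological practices" without naming a formula. A common empirical choice is the Weibull plotting position P = m/(n+1); the sketch below assumes that convention and uses invented rainfall values, not the Cluj County data.

```python
import numpy as np

def annual_exceedance_prob(annual_maxima):
    """Empirical annual exceedance probability for a series of annual
    maximum rainfall amounts, using the Weibull plotting position
    P = m / (n + 1), where m is the rank in descending order.
    (Assumed convention; the study does not specify the formula.)"""
    x = np.sort(np.asarray(annual_maxima, dtype=float))[::-1]  # descending
    n = x.size
    ranks = np.arange(1, n + 1)
    return x, ranks / (n + 1)

# 30 years of annual maximum 24 h rainfall (mm); illustrative values only
maxima = [42.0, 55.3, 61.8, 38.9, 70.2, 47.5, 52.1, 66.4, 44.0, 58.7,
          49.9, 73.5, 40.2, 63.0, 51.4, 45.8, 68.9, 56.2, 48.3, 60.1,
          39.5, 54.0, 62.7, 46.1, 71.8, 43.3, 57.9, 50.6, 65.2, 41.7]
values, probs = annual_exceedance_prob(maxima)
# The largest observed amount has the lowest exceedance probability.
print(values[0], round(probs[0], 3))  # 73.5 0.032
```

    The same ranking underlies threshold-based class analysis: the return period of each class boundary is simply 1/P.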

  7. Time-resolved energy spectrum of a pseudospark-produced high-brightness electron beam

    International Nuclear Information System (INIS)

    Myers, T.J.; Ding, B.N.; Rhee, M.J.

    1992-01-01

    The pseudospark, a fast low-pressure gas discharge between a hollow cathode and a planar anode, is found to be an interesting high-brightness electron beam source. Typically, an electron beam produced in the pseudospark has a peak current of ∼1 kA, a pulse duration of ∼50 ns, and an effective emittance of ∼100 mm-mrad. The energy distribution of this beam, however, is the least understood, owing to the difficulty of measuring a high-current-density beam that is partially space-charge neutralized by the background ions produced in the gas. In this paper, an experimental study of the time-resolved energy spectrum is presented. The pseudospark-produced electron beam is injected into a vacuum through a small pinhole so that the electrons, free of background ions, follow single-particle motion; the beam is then sent through a negatively biased electrode, so that only the portion of the beam whose energy exceeds the bias voltage passes through, and its current is measured by a Faraday cup. The Faraday cup signals at various bias voltages are recorded on a digital oscilloscope, and the recorded waveforms are numerically analyzed to construct a time-resolved energy spectrum. Preliminary results are presented.
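
    The numerical step described here is a standard retarding-potential analysis: the cup current at bias V integrates the spectrum above eV, so the spectrum follows by differentiation. The sketch below illustrates that inversion on a synthetic Gaussian beam; the numbers are not from the experiment.

```python
import numpy as np

# Retarding-potential analysis: at bias V the Faraday cup collects only
# electrons with kinetic energy E > eV, so the measured current I(V) is the
# integral of the energy spectrum above eV, and the spectrum is recovered
# as f(E) ∝ -dI/dV. The Gaussian "beam" below is synthetic.
bias_kV = np.linspace(0.0, 30.0, 61)          # bias-voltage sweep (kV)
dV = bias_kV[1] - bias_kV[0]
true_spectrum = np.exp(-0.5 * ((bias_kV - 15.0) / 3.0) ** 2)

# I(V) = sum of the spectrum above V (discrete stand-in for the integral)
current = np.cumsum(true_spectrum[::-1])[::-1] * dV

spectrum = -np.gradient(current, bias_kV)     # recover f(E) = -dI/dV
peak_kV = bias_kV[np.argmax(spectrum)]
print(f"reconstructed peak energy = {peak_kV:.1f} keV")  # ≈ 15 keV
```

    Repeating the differentiation at each time slice of the streak record yields the time-resolved spectrum.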

  8. Producing accurate wave propagation time histories using the global matrix method

    International Nuclear Information System (INIS)

    Obenchain, Matthew B; Cesnik, Carlos E S

    2013-01-01

    This paper presents a reliable method for producing accurate displacement time histories for wave propagation in laminated plates using the global matrix method. The existence of inward and outward propagating waves in the general solution is highlighted while examining the axisymmetric case of a circular actuator on an aluminum plate. Problems with previous attempts to isolate the outward wave for anisotropic laminates are shown. The updated method develops a correction signal that can be added to the original time history solution to cancel the inward wave and leave only the outward propagating wave. The paper demonstrates the effectiveness of the new method for circular and square actuators bonded to the surface of isotropic laminates, and these results are compared with exact solutions. Results for circular actuators on cross-ply laminates are also presented and compared with experimental results, showing the ability of the new method to successfully capture the displacement time histories for composite laminates. (paper)

  9. Maximum swimming speeds of sailfish and three other large marine predatory fish species based on muscle contraction time and stride length: a myth revisited

    Directory of Open Access Journals (Sweden)

    Morten B. S. Svendsen

    2016-10-01

    Billfishes are considered to be among the fastest swimmers in the oceans. Previous studies have estimated the maximum speed of sailfish and black marlin at around 35 m s−1, but theoretical work on cavitation predicts that such extreme speeds are unlikely. Here we investigated the maximum speed of sailfish, and of three other large marine pelagic predatory fish species, by measuring the twitch contraction time of anaerobic swimming muscle. The highest estimated maximum swimming speeds were found in sailfish (8.3±1.4 m s−1), followed by barracuda (6.2±1.0 m s−1), little tunny (5.6±0.2 m s−1) and dorado (4.0±0.9 m s−1), although size-corrected performance was highest in little tunny and lowest in sailfish. Contrary to previously reported estimates, our results suggest that sailfish are incapable of exceeding swimming speeds of 10-15 m s−1, which corresponds to the speed at which cavitation is predicted to occur, with destructive consequences for fin tissues.
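
    The twitch-time method referenced in the title works because one tail beat requires a contraction on each side of the body, bounding the beat frequency at 1/(2 × twitch time); speed is then frequency times stride length. The sketch below applies that logic with hypothetical numbers, not the study's measurements.

```python
def max_swim_speed(twitch_time_s, stride_length_m):
    """Upper-bound swimming speed from anaerobic muscle twitch time.
    A full tail-beat cycle needs one contraction per side, so the maximum
    tail-beat frequency is 1 / (2 * twitch_time); speed is frequency times
    stride length (distance travelled per beat). Illustrative only."""
    max_beat_freq_hz = 1.0 / (2.0 * twitch_time_s)
    return max_beat_freq_hz * stride_length_m

# Hypothetical sailfish: 25 ms twitch, 0.42 m stride per tail beat
speed = max_swim_speed(0.025, 0.42)
print(round(speed, 1))  # ≈ 8.4 m/s, in the range reported for sailfish
```

    Slower twitch muscle (larger twitch time) directly caps the attainable speed, which is why the measured values fall well below the 35 m s−1 folklore figure.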

  10. Larger Neural Responses Produce BOLD Signals That Begin Earlier in Time

    Directory of Open Access Journals (Sweden)

    Serena eThompson

    2014-06-01

    Functional MRI analyses commonly rely on the assumption that the temporal dynamics of hemodynamic response functions (HRFs) are independent of the amplitude of the neural signals that give rise to them. The validity of this assumption is particularly important for techniques that use fMRI to resolve sub-second timing distinctions between responses, in order to make inferences about the ordering of neural processes. Whether or not the detailed shape of the HRF is independent of neural response amplitude remains an open question, however. We performed experiments in which we measured responses in primary visual cortex (V1) to large, contrast-reversing checkerboards at a range of contrast levels, which should produce varying amounts of neural activity. Ten subjects (ages 22-52) were studied in each of two experiments using 3 Tesla scanners. We used rapid (250 ms) temporal sampling (repetition time, or TR) and both short and long inter-stimulus-interval (ISI) stimulus presentations. We tested for a systematic relationship between the onset of the HRF and its amplitude across conditions, and found a strong negative correlation between the two measures when stimuli were separated in time (the long- and medium-ISI experiments), but not in the short-ISI experiment. Thus, stimuli that produce larger neural responses, as indexed by HRF amplitude, also produce HRFs with shorter onsets. The relationship between amplitude and latency was strongest in voxels with the lowest mean-normalized variance (i.e., parenchymal voxels). The onset differences observed in the longer-ISI experiments are likely attributable to mechanisms of neurovascular coupling, since they are substantially larger than reported differences in the onset of action potentials in V1 as a function of response amplitude.

  11. A maximum entropy method to compute the 13NH3 pulmonary transit time from right to left ventricle in cardiac PET studies

    DEFF Research Database (Denmark)

    Steenstrup, Stig; Hove, Jens D; Kofoed, Klaus

    2002-01-01

    The distribution function of pulmonary transit times (fPTTs) contains information on the transit time of blood through the lungs and the dispersion in transit times. Most of the previous studies have used specific functional forms with adjustable parameters to characterize the fPTT. It is the pur......, we were able to accurately identify a two-peaked transfer function, which may theoretically be seen in patients with pulmonary disease confined to one lung. Transit time values for [13N]-ammonia were produced by applying the algorithm to PET studies from normal volunteers....

  12. Producing Coordinate Time Series for Iraq's CORS Site for Detection Geophysical Phenomena

    Directory of Open Access Journals (Sweden)

    Oday Yaseen Mohamed Zeki Alhamadani

    2018-01-01

    Global Navigation Satellite Systems (GNSS) have become an integral part of a wide range of applications. One of these applications is the use of cellular phones to locate the position of users, a technology that has been employed in social media applications. GNSS have also been effectively employed in transportation, GIS, and mobile satellite communications, while the geomatics sciences use GNSS for many practical and scientific applications such as surveying, mapping, and monitoring. In this study, the GNSS raw data of the ISER CORS, located in the north of Iraq, are processed and analyzed to build coordinate time series for the purpose of detecting Arabian tectonic plate motion over seven and a half years. Such coordinate time series have been produced very efficiently using GNSS Precise Point Positioning (PPP). The daily PPP results were processed, analyzed, and presented as coordinate time series using GPS Interactive Time Series Analysis (GITSA). Furthermore, MATLAB (V.2013a) was used to computerize GITSA with a Graphic User Interface (GUI). The objective of this study was to investigate both the homogeneity and the consistency of the Iraqi CORS GNSS raw data for detecting geophysical changes over a long period of time, and additionally to employ free online PPP services, such as the CSRS_PPP software, for processing GNSS raw data to generate GNSS coordinate time series. The coordinate time series of the ISER station showed +20.9 mm per year, +27.2 mm per year, and -11.3 mm per year in the East, North, and up-down components, respectively. These findings show a remarkable similarity with those obtained by long-term monitoring of the Earth's crust deformation and movement in global studies, which highlights the importance of using GNSS for monitoring tectonic plate motion based on CORS and online GNSS data processing services over long periods of time.
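
    The station velocities quoted (mm per year per component) are the slopes of linear fits to the daily coordinate time series. A minimal sketch of that step, on synthetic data standing in for the ISER East component (the ~+20.9 mm/yr rate is taken from the abstract; the noise level is an assumption):

```python
import numpy as np

# Estimate a station velocity (mm/yr) from a daily PPP coordinate time
# series by ordinary least squares, the usual first step before comparing
# components with published plate-motion rates.
rng = np.random.default_rng(0)
years = np.arange(0.0, 7.5, 1.0 / 365.25)                  # 7.5 years, daily
east_mm = 20.9 * years + rng.normal(0.0, 3.0, years.size)  # trend + noise

slope_mm_per_yr, intercept = np.polyfit(years, east_mm, 1)
print(f"East velocity = {slope_mm_per_yr:.1f} mm/yr")
```

    In practice the fit would also model annual/semi-annual terms and offsets, but the secular slope is what is compared against plate-motion models.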

  13. Time-resolved spectroscopy of nonequilibrium ionization in laser-produced plasmas

    International Nuclear Information System (INIS)

    Marjoribanks, R.S.

    1988-01-01

    The highly transient ionization characteristic of laser-produced plasmas at high energy densities has been investigated experimentally, using x-ray spectroscopy with time resolution of less than 20 ps. Spectroscopic diagnostics of plasma density and temperature, including line ratios, line profile broadening, and continuum emission, were used to characterize the plasma conditions without relying directly on ionization modeling. The experimentally measured plasma parameters were then used as independent variables driving an ionization code, as a test of ionization modeling divorced from hydrodynamic calculations. Several state-of-the-art streak spectrographs, each recording a fiducial of the laser peak along with the time-resolved spectrum, characterized the laser heating of thin signature layers of different atomic numbers embedded in plastic targets. A novel crystal spectrograph design, with a conically curved crystal, was developed. Coupled with a streak camera, it provided high resolution (λ/Δλ > 1000) and a collection efficiency roughly 20-50 times that of planar crystal spectrographs, affording improved spectra for quantitative reduction and greater sensitivity for the diagnosis of weak emitters. Experimental results were compared with hydrocode and ionization-code simulations, with poor agreement. The conclusions question the appropriateness of describing electron velocity distributions by a temperature parameter during the time of laser illumination and emphasize the importance of characterizing the distribution more generally.

  14. Simultaneous measurement of the maximum oscillation amplitude and the transient decay time constant of the QCM reveals stiffness changes of the adlayer.

    Science.gov (United States)

    Marxer, C Galli; Coen, M Collaud; Bissig, H; Greber, U F; Schlapbach, L

    2003-10-01

    Interpretation of adsorption kinetics measured with a quartz crystal microbalance (QCM) can be difficult for adlayers undergoing modification of their mechanical properties. We have studied the behavior of the oscillation amplitude, A(0), and the decay time constant, tau, of quartz during adsorption of proteins and cells, by use of a home-made QCM. We are able to measure simultaneously the frequency, f, the dissipation factor, D, the maximum amplitude, A(0), and the transient decay time constant, tau, every 300 ms in liquid, gaseous, or vacuum environments. This analysis enables adsorption and modification of liquid/mass properties to be distinguished. Moreover the surface coverage and the stiffness of the adlayer can be estimated. These improvements promise to increase the appeal of QCM methodology for any applications measuring intimate contact of a dynamic material with a solid surface.
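
    The dissipation factor and the transient decay time measured here are linked by the standard ring-down relation D = 1/(π f τ): after the drive stops, the crystal amplitude decays as exp(−t/τ), and a soft (dissipative) adlayer shortens τ. A minimal sketch of that relation (the 5 MHz frequency and τ values are illustrative, not the paper's data):

```python
import math

def dissipation_factor(freq_hz, tau_s):
    """Dissipation factor of a quartz resonator from its transient decay:
    amplitude decays as exp(-t/tau) after drive switch-off, giving
    D = 1 / (pi * f * tau). A rigid adlayer barely changes tau, while a
    soft layer shortens it, which is how simultaneous f, D, A0 and tau
    readings separate mass loading from stiffness changes."""
    return 1.0 / (math.pi * freq_hz * tau_s)

# Illustrative values: a 5 MHz crystal before/after adsorbing a soft layer
d_bare = dissipation_factor(5.0e6, 7.0e-4)    # longer decay, lower D
d_loaded = dissipation_factor(5.0e6, 4.0e-4)  # shorter tau -> higher D
print(d_loaded > d_bare)  # True
```

    Tracking D (or τ) alongside f every few hundred milliseconds is what lets the method distinguish added mass from a change in adlayer stiffness.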

  15. A simulation study of Linsley's approach to infer elongation rate and fluctuations of the EAS maximum depth from muon arrival time distributions

    International Nuclear Information System (INIS)

    Badea, A.F.; Brancus, I.M.; Rebel, H.; Haungs, A.; Oehlschlaeger, J.; Zazyan, M.

    1999-01-01

    The average depth of the maximum, X_m, of the EAS (Extensive Air Shower) development depends on the energy E_0 and the mass of the primary particle, and its energy dependence is traditionally expressed by the so-called elongation rate D_e, defined as the change in the average depth of the maximum per decade of E_0, i.e. D_e = dX_m/d(log10 E_0). Invoking the superposition-model approximation, i.e. assuming that a heavy primary of mass A has the same shower elongation rate as a proton but scaled to energy E_0/A, one can write X_m = X_init + D_e·log10(E_0/A). In 1977 an indirect approach to studying D_e was suggested by Linsley. This approach can be applied to shower parameters which do not depend explicitly on the energy of the primary particle, but do depend on the depth of observation X and on the depth X_m of the shower maximum. The distribution of EAS muon arrival times, measured at a certain observation level relative to the arrival time of the shower core, reflects the path-length distribution of the muons travelling from the locus of production (near the axis) to the locus of observation. The basic a priori assumption is that the mean value or median T of the time distribution can be associated with the height of the EAS maximum X_m, so that T = f(X, X_m). In order to derive information about the elongation rate from the energy variation of the arrival-time quantities, some knowledge is required about F = -(∂T/∂X_m)_X / (∂T/∂X)_{X_m}, in addition to the variations with the depth of observation and the zenith-angle (θ) dependence, respectively. Thus ∂T/∂log10 E_0 |_X = -F·D_e·(1/X_v)·∂T/∂sec θ |_{E_0}. In a similar way the fluctuations σ(X_m) of X_m may be related to the fluctuations σ(T) of T, i.e. σ(T) = -σ(X_m)·F_σ·(1/X_v)·∂T/∂sec θ |_{E_0}, with F_σ being the corresponding scaling factor for the fluctuations of F. By simulations of the EAS development using the Monte Carlo code CORSIKA the energy and angle
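
    Linsley's relation ∂T/∂log10(E_0)|_X = −F·D_e·(1/X_v)·∂T/∂sec θ|_{E_0} can be inverted for D_e once the two arrival-time slopes are known. The sketch below does that algebra with placeholder slopes; the F, X_v and slope values are assumptions for illustration, not CORSIKA results.

```python
def elongation_rate(dT_dlogE, dT_dsec_theta, F, X_v):
    """Linsley's indirect elongation-rate estimate: from
    dT/dlog10(E0)|_X = -F * D_e * (1/X_v) * dT/dsec(theta)|_E0,
    solve for D_e. The slope inputs would come from fits to measured or
    simulated muon arrival-time medians."""
    return -dT_dlogE * X_v / (F * dT_dsec_theta)

# Hypothetical inputs: T decreases with energy (deeper X_m -> shorter muon
# path), increases with zenith angle; X_v = 1020 g/cm^2, F = 0.4 assumed.
D_e = elongation_rate(dT_dlogE=-1.1, dT_dsec_theta=35.0, F=0.4, X_v=1020.0)
print(round(D_e, 1))  # ≈ 80.1 g/cm^2 per decade
```

    The attraction of the method is that both slopes are measurable without knowing the primary energy scale explicitly; only the scaling factor F has to be taken from simulation.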

  16. Reduction of time for producing and acclimatizing two bamboo species in a greenhouse

    Directory of Open Access Journals (Sweden)

    Giovanni Aquino Gasparetto

    2013-03-01

    China has been investing in bamboo cultivation on Brazilian lands. However, there is a significant deficit of seedling production for civil construction and for the charcoal and cellulose sectors, which compromises part of the forestry sector. In order to help the bamboo production chain solve this problem, this study aimed to check whether the application of indole acetic acid (IAA) could promote plant growth in a shorter cultivation time. In the study, Bambusa vulgaris and B. vulgaris var. vitatta stakes underwent two treatments (0.25% and 5.0% IAA) and were grown on washed sand in a greenhouse. Number of leaves, stem growth, rooting, and chlorophyll content were investigated. There was no difference in stem growth, root length, or number of leaves for either species between the two treatments (0.25% and 5% IAA). The chlorophyll content variation between the two species may constitute a quality parameter of forest seedlings when compared with other bamboo species. After 43 days, the seedlings were ready for planting in areas of full sun. For the species studied here, the average time to seedling sale is 4 to 6 months with no addition of auxin. Using this simple and low-cost technique, nurserymen can produce bamboo seedlings with reduced time, costs, and manpower.

  17. Time-resolved soft x-ray spectra from laser-produced Cu plasma

    International Nuclear Information System (INIS)

    Cone, K.V.; Dunn, J.; Baldis, H.A.; May, M.J.; Purvis, M.A.; Scott, H.A.; Schneider, M.B.

    2012-01-01

    The volumetric heating of a thin copper target has been studied with time-resolved x-ray spectroscopy. The copper target was heated by a plasma produced using the Lawrence Livermore National Laboratory's Compact Multipulse Terawatt (COMET) laser. A variable-spaced-grating spectrometer coupled to an x-ray streak camera measured soft x-ray emission (800-1550 eV) from the back of the copper target to characterize its bulk heating. Radiation-hydrodynamic simulations were performed in two dimensions using the HYDRA code. The target conditions calculated by HYDRA were post-processed with the atomic kinetics code CRETIN to generate synthetic emission spectra. A comparison between the experimental and simulated spectra indicates the presence of specific ionization states of copper and the corresponding electron temperatures and ion densities throughout the laser-heated copper target.

  18. Time-resolved probing of electron thermal conduction in femtosecond-laser-pulse-produced plasmas

    International Nuclear Information System (INIS)

    Vue, B.T.V.

    1993-06-01

    We present time-resolved measurements of reflectivity, transmissivity, and frequency shifts of probe light interacting with the rear of a disk-like plasma produced by irradiation of a transparent solid target with 0.1 ps FWHM laser pulses at a peak intensity of 5 × 10^14 W/cm^2. Experimental results show a large increase in reflection, revealing the rapid formation of a steep-gradient, overdense surface plasma layer during the first picosecond after irradiation. Frequency shifts due to a moving ionization front created by thermal conduction into the solid target are recorded. Calculations using a nonlinear thermal heat-wave model show good agreement with the measured frequency shifts, further confirming the strong thermal transport effect.

  19. Quantitative Real-time PCR detection of putrescine-producing Gram-negative bacteria

    Directory of Open Access Journals (Sweden)

    Kristýna Maršálková

    2017-01-01

    Biogenic amines are indispensable components of living cells; nevertheless, at higher concentrations these compounds can be toxic to human health. Putrescine is thought to be the major biogenic amine associated with microbial food spoilage. The development of reliable, fast, and culture-independent molecular methods to detect bacteria producing biogenic amines deserves attention, especially from the food industry, in order to protect health. The objective of this study was to verify newly designed primer sets for the detection of two inducible genes, adiA and speF, in the Salmonella enterica and Escherichia coli genomes by Real-time PCR. These genes encode enzymes in the metabolic pathway that leads to the production of putrescine in Gram-negative bacteria. Moreover, the relative expression of these genes was studied in the E. coli CCM 3954 strain using Real-time PCR. Sets of new primers for the detection of the two inducible genes (speF and adiA) in Salmonella enterica and E. coli were designed and tested. The amplification efficiency of the Real-time PCR was calculated from the slope of the standard curves (adiA, speF, gapA); an efficiency in the range of 95 to 105% was achieved for all tested reactions. The expression (R) of the adiA and speF genes in E. coli varied depending on culture conditions. The highest expression of adiA and speF was observed at 6, 24 and 36 h (RadiA ~ 3, 5, 9; RspeF ~ 11, 10, 9, respectively) after initiation of growth in nutrient broth medium enriched with amino acids. The results show that these primers can be used for relative quantification analysis of E. coli.
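
    The efficiency-from-slope calculation mentioned in the abstract follows the standard qPCR formula: fit Ct against log10 template amount and take efficiency = (10^(−1/slope) − 1) × 100, where a perfect doubling per cycle gives a slope of about −3.32 and 100%. A minimal sketch on a synthetic dilution series (the Ct values are invented):

```python
import numpy as np

def pcr_efficiency(log10_copies, ct_values):
    """Amplification efficiency from a qPCR standard curve: fit Ct against
    log10 template amount; efficiency (%) = (10**(-1/slope) - 1) * 100."""
    slope, _ = np.polyfit(log10_copies, ct_values, 1)
    return (10.0 ** (-1.0 / slope) - 1.0) * 100.0

# Synthetic dilution series with a slope of -3.3219 (perfect doubling)
dilutions = np.array([3.0, 4.0, 5.0, 6.0, 7.0])   # log10 template copies
ct = 35.0 - 3.3219 * (dilutions - 3.0)            # invented Ct values
print(round(pcr_efficiency(dilutions, ct), 1))    # ≈ 100.0 (percent)
```

    The 95-105% window reported for the adiA, speF and gapA curves corresponds to slopes between roughly -3.44 and -3.17.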

  20. Non-invasive rapid harvest time determination of oil-producing microalgae cultivations for bio-diesel production by using Chlorophyll fluorescence

    Directory of Open Access Journals (Sweden)

    Yaqin eQiao

    2015-10-01

    For the large-scale cultivation of microalgae for biodiesel production, one of the key problems is determining the optimum time for algal harvest, when algae cells are saturated with neutral lipids. In this study, a method to determine the optimum harvest time in oil-producing microalgal cultivations by measuring the maximum photochemical efficiency of photosystem II (PSII), also called Fv/Fm, was established. When oil-producing Chlorella strains were cultivated and then subjected to nitrogen starvation, this not only stimulated neutral lipid accumulation but also affected the photosynthetic system, with the neutral lipid contents in all four algae strains (Chlorella sorokiniana C1, Chlorella sp. C2, C. sorokiniana C3, and C. sorokiniana C7) correlating negatively with the Fv/Fm values. Thus, for a given oil-producing alga in which a significant relationship between neutral lipid content and Fv/Fm value under nutrient stress can be established, the optimum harvest time can be determined by measuring the value of Fv/Fm. It is hoped that this method provides an efficient way to determine the harvest time rapidly and expediently in large-scale oil-producing microalgae cultivations for biodiesel production.
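
    Operationally, the method amounts to calibrating the negative lipid-vs-Fv/Fm relation once, then reading off the Fv/Fm value at which the target lipid content is reached. The sketch below assumes a simple linear calibration with invented data points, not the strains' measured values.

```python
import numpy as np

# For a strain with an established negative linear relation between neutral
# lipid content and Fv/Fm, the Fv/Fm value at the target lipid content gives
# a harvest trigger. Calibration points below are invented for illustration.
fv_fm = np.array([0.70, 0.62, 0.55, 0.47, 0.40])   # measured Fv/Fm
lipid = np.array([12.0, 19.0, 26.0, 33.0, 40.0])   # neutral lipid (% DW)

slope, intercept = np.polyfit(fv_fm, lipid, 1)     # lipid = a*FvFm + b
target_lipid = 35.0                                 # harvest target (% DW)
harvest_fv_fm = (target_lipid - intercept) / slope  # Fv/Fm trigger value

print(round(harvest_fv_fm, 2))  # ≈ 0.45
```

    Because Fv/Fm is measured non-invasively in seconds, the trigger can be checked daily on a large pond without sampling biomass for lipid assays.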

  1. Evaluation of adaptation to visually induced motion sickness based on the maximum cross-correlation between pulse transmission time and heart rate

    Directory of Open Access Journals (Sweden)

    Chiba Shigeru

    2007-09-01

    Background: Computer graphics and virtual reality techniques are useful for developing automatic and effective rehabilitation systems. However, virtual environments that include unstable visual images presented on a wide-field screen or a head-mounted display tend to induce motion sickness. The motion sickness induced while using a rehabilitation system not only inhibits effective training but may also harm patients' health. Few studies have objectively evaluated the effects of repetitive exposure to these stimuli on humans. The purpose of this study is to investigate adaptation to visually induced motion sickness using physiological data. Methods: An experiment was carried out in which the same video image was presented to human subjects three times. We evaluated changes in the intensity of motion sickness using a subjective score and the physiological index ρmax, defined as the maximum cross-correlation coefficient between heart rate and pulse wave transmission time, which is considered to reflect autonomic nervous activity. Results: The results showed adaptation to visually induced motion sickness with repeated presentation of the same image in both the subjective and the objective indices. However, there were some subjects whose intensity of sickness increased. It was also possible to identify the parts of the video image related to motion sickness by analyzing changes in ρmax over time. Conclusion: The physiological index ρmax should be a good index for assessing the adaptation process to visually induced motion sickness and may be useful in checking the safety of rehabilitation systems using new image technologies.
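
    The index ρmax is the maximum, over a range of lags, of the cross-correlation coefficient between the two physiological series. A minimal sketch of that computation on synthetic signals; the window length, lag range and test signals are illustrative assumptions, not the study's processing parameters.

```python
import numpy as np

def rho_max(hr, ptt, max_lag=10):
    """Maximum cross-correlation coefficient between heart rate and pulse
    wave transmission time over a range of lags (in samples). Both series
    are z-scored so each lagged product is a Pearson-style coefficient."""
    hr = (hr - hr.mean()) / hr.std()
    ptt = (ptt - ptt.mean()) / ptt.std()
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = hr[lag:], ptt[:len(ptt) - lag]
        else:
            a, b = hr[:lag], ptt[-lag:]
        r = np.mean(a * b)
        best = max(best, abs(r))
    return best

# Synthetic check: PTT as a delayed, inverted copy of HR plus noise
rng = np.random.default_rng(1)
hr = np.sin(np.linspace(0, 20, 400)) + rng.normal(0, 0.1, 400)
ptt = -np.roll(hr, 3) + rng.normal(0, 0.1, 400)
print(rho_max(hr, ptt) > 0.9)  # True: strongly coupled signals
```

    Computing ρmax in a sliding window over the session is what allows its drop or rise to be aligned with specific segments of the video.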

  2. Maximum swimming speeds of sailfish and three other large marine predatory fish species based on muscle contraction time and stride length

    DEFF Research Database (Denmark)

    Svendsen, Morten Bo Søndergaard; Domenici, Paolo; Marras, Stefano

    2016-01-01

    Billfishes are considered to be among the fastest swimmers in the oceans. Previous studies have estimated maximum speed of sailfish and black marlin at around 35 m s(-1) but theoretical work on cavitation predicts that such extreme speed is unlikely. Here we investigated maximum speed of sailfish...

  3. 24 CFR 203.18c - One-time or up-front mortgage insurance premium excluded from limitations on maximum mortgage...

    Science.gov (United States)

    2010-04-01

    ... insurance premium excluded from limitations on maximum mortgage amounts. 203.18c Section 203.18c Housing and...-front mortgage insurance premium excluded from limitations on maximum mortgage amounts. After... LOAN INSURANCE PROGRAMS UNDER NATIONAL HOUSING ACT AND OTHER AUTHORITIES SINGLE FAMILY MORTGAGE...

  4. ARMA-Based SEM When the Number of Time Points T Exceeds the Number of Cases N: Raw Data Maximum Likelihood.

    Science.gov (United States)

    Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.

    2003-01-01

    Demonstrated, through simulation, that stationary autoregressive moving average (ARMA) models may be fitted readily when T>N, using normal theory raw maximum likelihood structural equation modeling. Also provides some illustrations based on real data. (SLD)

  5. Contribution of National near Real Time MODIS Forest Maximum Percentage NDVI Change Products to the U.S. ForWarn System

    Science.gov (United States)

    Spruce, Joseph P.; Hargrove, William; Gasser, Gerald; Smoot, James; Kuper, Philip D.

    2012-01-01

    This presentation reviews the development, integration, and testing of Near Real Time (NRT) MODIS forest % maximum NDVI change products resident in the USDA Forest Service (USFS) ForWarn system. ForWarn is an Early Warning System (EWS) tool for detecting and tracking regionally evident forest change, and includes the U.S. Forest Change Assessment Viewer (FCAV), a publicly available online geospatial data viewer for visualizing and assessing the context of apparent forest change. NASA Stennis Space Center (SSC) is working collaboratively with the USFS, ORNL, and USGS to contribute MODIS forest change products to ForWarn. These change products compare current NDVI, derived from expedited eMODIS data, to historical NDVI products derived from MODIS MOD13 data. A new suite of forest change products is computed every 8 days and posted to the ForWarn system, including three different forest change products computed against three different historical baselines: 1) the previous year; 2) the previous three years; and 3) all previous years in the MODIS record going back to 2000. The change-product inputs are maximum-value NDVI composites built over a 24-day interval and refreshed every 8 days, so that the resulting images for the conterminous U.S. are predominantly cloud-free yet still retain temporally relevant, fresh information on changes in forest canopy greenness. These forest change products are computed at the native nominal resolution of the input reflectance bands of 231.66 meters, which equates to approximately 5.4 hectares or 13.3 acres per pixel. The Time Series Product Tool, a MATLAB-based software package developed at NASA SSC, is used to temporally process, fuse, reduce noise in, interpolate data voids in, and re-aggregate the historical NDVI into 24-day composites, and custom MATLAB scripts are then used to temporally process the eMODIS NDVI so that it is in sync with the historical NDVI products. 
    Prior to posting, an in-house snow mask classification product
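
    The core per-pixel metric is a percent change of current maximum-value NDVI relative to a historical maximum NDVI baseline for the same compositing window. The sketch below illustrates that arithmetic on a made-up 3×3 pixel grid; the −20% flagging threshold is an assumption for illustration, not ForWarn's operational rule.

```python
import numpy as np

# Percent NDVI change against a historical baseline; strongly negative
# values flag apparent canopy disturbance. The 3x3 "images" are invented
# pixels, not MODIS data.
baseline_ndvi = np.array([[0.80, 0.78, 0.82],
                          [0.75, 0.79, 0.81],
                          [0.77, 0.80, 0.76]])
current_ndvi = np.array([[0.79, 0.50, 0.81],
                         [0.74, 0.78, 0.40],
                         [0.76, 0.79, 0.75]])

pct_change = 100.0 * (current_ndvi - baseline_ndvi) / baseline_ndvi
disturbed = pct_change < -20.0            # illustrative threshold
print(int(disturbed.sum()))               # 2 pixels flagged
```

    Running this comparison against the three baselines (previous year, previous three years, full record) is what separates short-lived anomalies from multi-year departures.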

  6. Time required to achieve maximum concentration of amikacin in synovial fluid of the distal interphalangeal joint after intravenous regional limb perfusion in horses.

    Science.gov (United States)

    Kilcoyne, Isabelle; Nieto, Jorge E; Knych, Heather K; Dechant, Julie E

    2018-03-01

    OBJECTIVE To determine the maximum concentration (Cmax) of amikacin and time to Cmax (Tmax) in the distal interphalangeal (DIP) joint in horses after IV regional limb perfusion (IVRLP) by use of the cephalic vein. ANIMALS 9 adult horses. PROCEDURES Horses were sedated and restrained in a standing position and then subjected to IVRLP (2 g of amikacin sulfate diluted to 60 mL with saline [0.9% NaCl] solution) by use of the cephalic vein. A pneumatic tourniquet was placed 10 cm proximal to the accessory carpal bone. Perfusate was instilled with a peristaltic pump over a 3-minute period. Synovial fluid was collected from the DIP joint 5, 10, 15, 20, 25, and 30 minutes after IVRLP; the tourniquet was removed after the 20-minute sample was collected. Blood samples were collected from the jugular vein 5, 10, 15, 19, 21, 25, and 30 minutes after IVRLP. Amikacin was quantified with a fluorescence polarization immunoassay. Median Cmax of amikacin and Tmax in the DIP joint were determined. RESULTS 2 horses were excluded because an insufficient volume of synovial fluid was collected. Median Cmax for the DIP joint was 600 μg/mL (range, 37 to 2,420 μg/mL). Median Tmax for the DIP joint was 15 minutes. CONCLUSIONS AND CLINICAL RELEVANCE Tmax of amikacin was 15 minutes after IVRLP in horses and Cmax did not increase > 15 minutes after IVRLP despite maintenance of the tourniquet. Application of a tourniquet for 15 minutes should be sufficient for completion of IVRLP when attempting to achieve an adequate concentration of amikacin in the synovial fluid of the DIP joint.

  7. Incidental colonic focal FDG uptake on PET/CT: can the maximum standardized uptake value (SUVmax) guide us in the timing of colonoscopy?

    NARCIS (Netherlands)

    van Hoeij, F. B.; Keijsers, R. G. M.; Loffeld, B. C. A. J.; Dun, G.; Stadhouders, P. H. G. M.; Weusten, B. L. A. M.

    2015-01-01

    In patients undergoing F-18-FDG PET/CT, incidental colonic focal lesions can be indicative of inflammatory, premalignant or malignant lesions. The maximum standardized uptake value (SUVmax) of these lesions, representing the FDG uptake intensity, might be helpful in differentiating malignant from

  10. Chronic ethanol exposure produces time- and brain region-dependent changes in gene coexpression networks.

    Directory of Open Access Journals (Sweden)

    Elizabeth A Osterndorff-Kahanek

    Full Text Available Repeated ethanol exposure and withdrawal in mice increases voluntary drinking and represents an animal model of physical dependence. We examined time- and brain region-dependent changes in gene coexpression networks in amygdala (AMY), nucleus accumbens (NAC), prefrontal cortex (PFC), and liver after four weekly cycles of chronic intermittent ethanol (CIE) vapor exposure in C57BL/6J mice. Microarrays were used to compare gene expression profiles at 0, 8, and 120 hours following the last ethanol exposure. Each brain region exhibited a large number of differentially expressed genes (2,000-3,000) at the 0- and 8-hour time points, but fewer changes were detected at the 120-hour time point (400-600). Within each region, there was little gene overlap across time (~20%). All brain regions were significantly enriched with differentially expressed immune-related genes at the 8-hour time point. Weighted gene correlation network analysis identified modules that were highly enriched with differentially expressed genes at the 0- and 8-hour time points with virtually no enrichment at 120 hours. Modules enriched for both ethanol-responsive and cell-specific genes were identified in each brain region. These results indicate that chronic alcohol exposure causes global 'rewiring' of coexpression systems involving glial and immune signaling as well as neuronal genes.

  11. Time-to-ejaculation and the quality of semen produced by masturbation at a clinic.

    Science.gov (United States)

    Elzanaty, Saad

    2008-05-01

    To investigate the association between the length of time-to-ejaculation and semen parameters. Ejaculates from 142 men under infertility assessment were analyzed according to the World Health Organization guidelines. Seminal neutral alpha-glucosidase (NAG), prostate-specific antigen (PSA), zinc, and fructose were also measured. Three groups were defined according to the length of the time-to-ejaculation, the longest being G(>15) (greater than 15 minutes). Time to ejaculation showed significant negative correlations with sperm concentration (rho = -0.20, P = 0.02), total sperm count (rho = -0.20, P = 0.04), NAG (rho = -0.20, P = 0.01), and fructose (rho = -0.30, P = 0.02). No significant correlations existed among the time-to-ejaculation and age, sexual abstinence, semen volume, sperm motility, PSA, and zinc. There were significant negative associations among time-to-ejaculation and sperm concentration (beta = -3.0; P = 0.004), total sperm count (beta = -10; P = 0.02), total count of progressive motility (beta = -7.0; P = 0.02), and fructose (beta = -0.30; P = 0.02). No significant associations existed among the time-to-ejaculation and semen volume, motility grades, NAG, PSA, and zinc. Sperm concentration, total sperm count, and total count of progressive motility were significantly higher in the shortest group than in G(>15) (mean difference = 50 x 10(6)/mL, P = 0.01; 176 x 10(6)/ejaculate, P = 0.02; and 110 x 10(6)/ejaculate, P = 0.03, respectively). Fructose was likewise significantly higher in the shorter group (mean difference = 5.0 mmol/L; P = 0.03). The time-to-ejaculation length was associated with semen parameters. These results might reflect the negative effect of acute stress during semen collection via masturbation at a clinic on semen parameters.

  12. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.
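    The MP criterion scores a candidate tree by the minimum number of character-state changes it requires. For a fixed tree, that small-parsimony score is computable exactly with Fitch's algorithm; the NP-hard part the abstract refers to is the search over trees, which the authors approximate via Steiner trees. A toy Python sketch on a made-up four-leaf example (not the paper's formulation):

```python
# Toy illustration of the maximum parsimony (MP) criterion, not the paper's
# Steiner-tree approach: Fitch's algorithm computes the small-parsimony
# score (minimum number of state changes) of one site on a fixed binary tree.
def fitch(tree):
    """Return (root_state_set, change_count) for a nested-tuple tree
    whose leaves are single-character states."""
    if isinstance(tree, str):              # leaf: its own state, no changes
        return {tree}, 0
    left_set, left_cost = fitch(tree[0])
    right_set, right_cost = fitch(tree[1])
    common = left_set & right_set
    if common:                             # children agree: no extra change
        return common, left_cost + right_cost
    return left_set | right_set, left_cost + right_cost + 1

# Made-up four-leaf tree ((A,C),(A,G)): two state changes are necessary.
states, score = fitch((('A', 'C'), ('A', 'G')))
print(states, score)  # {'A'} 2
```

Fitch only solves the scoring subproblem; MP itself must minimize this score over all tree topologies, which is why approximation algorithms are needed for the full search.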

  13. Striatal lesions produce distinctive impairments in reaction time performance in two different operant chambers.

    Science.gov (United States)

    Brasted, P J; Döbrössy, M D; Robbins, T W; Dunnett, S B

    1998-08-01

    The dorsal striatum plays a crucial role in mediating voluntary movement. Excitotoxic striatal lesions in rats have previously been shown to impair the initiation but not the execution of movement in a choice reaction time task in an automated lateralised nose-poke apparatus (the "nine-hole box"). Conversely, when a conceptually similar reaction time task has been applied in a conventional operant chamber (or "Skinner box"), striatal lesions have been seen to impair the execution rather than the initiation of the lateralised movement. The present study was undertaken to compare directly these two results by training the same group of rats to perform a choice reaction time task in the two chambers and then comparing the effects of a unilateral excitotoxic striatal lesion in both chambers in parallel. Particular attention was paid to adopting similar parameters and contingencies in the control of the task in the two test chambers. After striatal lesions, the rats showed predominantly contralateral impairments in both tasks. However, they showed a deficit in reaction time in the nine-hole box but an apparent deficit in response execution in the Skinner box. This finding confirms the previous studies and indicates that differences in outcome are not simply attributable to procedural differences in the lesions, training conditions or task parameters. Rather, the pattern of reaction time deficit after striatal lesions depends critically on the apparatus used and the precise response requirements for each task.

  14. Maximum entropy beam diagnostic tomography

    International Nuclear Information System (INIS)

    Mottershead, C.T.

    1985-01-01

    This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore. 11 refs., 4 figs

  16. Time-resolved x-ray line diagnostics of laser-produced plasmas

    International Nuclear Information System (INIS)

    Kauffman, R.L.; Matthews, D.L.; Kilkenny, J.D.; Lee, R.W.

    1982-11-01

    We have examined the underdense plasma conditions of laser irradiated disks using K x-rays from highly ionized ions. A 900 ps laser pulse of 0.532 μm light is used to irradiate various Z disks which have been doped with low concentrations of tracer materials. The tracers, whose Z's range from 13 to 22, are chosen so that their K x-ray spectrum is sensitive to typical underdense plasma temperatures and densities. Spectra are measured using a time-resolved crystal spectrograph recording the time history of the x-ray spectrum. A spatially-resolved, time-integrated crystal spectrograph also monitors the x-ray lines. Large differences in Al spectra are observed when the host plasma is changed from SiO2 to PbO or In. Spectra will be presented along with preliminary analysis of the data.

  18. The last glacial maximum

    Science.gov (United States)

    Clark, P.U.; Dyke, A.S.; Shakun, J.D.; Carlson, A.E.; Clark, J.; Wohlfarth, B.; Mitrovica, J.X.; Hostetler, S.W.; McCabe, A.M.

    2009-01-01

    We used 5704 14C, 10Be, and 3He ages that span the interval from 10,000 to 50,000 years ago (10 to 50 ka) to constrain the timing of the Last Glacial Maximum (LGM) in terms of global ice-sheet and mountain-glacier extent. Growth of the ice sheets to their maximum positions occurred between 33.0 and 26.5 ka in response to climate forcing from decreases in northern summer insolation, tropical Pacific sea surface temperatures, and atmospheric CO2. Nearly all ice sheets were at their LGM positions from 26.5 ka to 19 to 20 ka, corresponding to minima in these forcings. The onset of Northern Hemisphere deglaciation 19 to 20 ka was induced by an increase in northern summer insolation, providing the source for an abrupt rise in sea level. The onset of deglaciation of the West Antarctic Ice Sheet occurred between 14 and 15 ka, consistent with evidence that this was the primary source for an abrupt rise in sea level ~14.5 ka.

  19. Maximum permissible dose

    International Nuclear Information System (INIS)

    Anon.

    1979-01-01

    This chapter presents a historic overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed

  20. Microbiological, nutritional and sensory evaluation of long-time stored amaranth biscuits produced from irradiated-treated amaranth grains

    International Nuclear Information System (INIS)

    Hozová, B.; Buchtová, V.; Dodok, L.

    2000-01-01

    The paper presents results of the evaluation of the microbiological (total bacterial count, coliform bacteria, aerobic sporeforming bacteria, yeasts and moulds), nutritional (lysine) and sensory (shape, surface, colour, consistency, taste, odour, profiling of tastiness) quality, and of the aw values, of amaranth-based biscuits produced from amaranth grain irradiated with various ionizing radiation doses (1.5, 3 and 5 kGy; source 60Co) and stored for a period of 12 months at laboratory temperature (20–25°C). The irradiation dose that maintained the biscuits' maximum hygienic, nutritional and sensory quality to the end of the one-year storage was 5 kGy.

  1. WINTOF - A program to produce neutron spectra from Zebra time-of-flight experiments

    International Nuclear Information System (INIS)

    Marshall, J.

    1969-06-01

    This report describes a computer program, written for the Winfrith KDF9 computer, which is used to calculate the neutron energy spectrum in the Zebra reactor from neutron time-of-flight measurements using the Zebra Linac. The data requirements for the program are specified and an illustration of the final spectrum is included. (author)

  2. Maternal alcohol consumption producing fetal alcohol spectrum disorders (FASD): quantity, frequency, and timing of drinking.

    Science.gov (United States)

    May, Philip A; Blankenship, Jason; Marais, Anna-Susan; Gossage, J Phillip; Kalberg, Wendy O; Joubert, Belinda; Cloete, Marise; Barnard, Ronel; De Vries, Marlene; Hasken, Julie; Robinson, Luther K; Adnams, Colleen M; Buckley, David; Manning, Melanie; Parry, Charles D H; Hoyme, H Eugene; Tabachnick, Barbara; Seedat, Soraya

    2013-12-01

    Concise, accurate measures of maternal prenatal alcohol use are needed to better understand fetal alcohol spectrum disorders (FASD). Measures of drinking by mothers of children with specific FASD diagnoses and mothers of randomly-selected controls are compared and also correlated with physical and cognitive/behavioral outcomes. Measures of maternal alcohol use can differentiate maternal drinking associated with FASD from that of controls and some from mothers of alcohol-exposed normals. Six variables that combine quantity and frequency concepts distinguish mothers of FASD children from normal controls. Alcohol use variables, when applied to each trimester and three months prior to pregnancy, provide insight on critical timing of exposure as well. Measures of drinking, especially bingeing, correlate significantly with increased child dysmorphology and negative cognitive/behavioral outcomes in children, especially low non-verbal IQ, poor attention, and behavioral problems. Logistic regression links (p<.001) first trimester drinking (vs. no drinking) with FASD, elevating FASD likelihood 12 times; first and second trimester drinking increases FASD outcomes 61 times; and drinking in all trimesters 65 times. Conversely, a similar regression (p=.008) indicates that drinking only in the first trimester makes the birth of a child with an FASD 5 times less likely than drinking in all trimesters. There is significant variation in alcohol consumption both within and between diagnostic groupings of mothers bearing children diagnosed within the FASD continuum. Drinking measures are empirically identified and correlated with specific child outcomes. Alcohol use, especially heavy use, should be avoided throughout pregnancy. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  3. Review of current GPS methodologies for producing accurate time series and their error sources

    Science.gov (United States)

    He, Xiaoxing; Montillet, Jean-Philippe; Fernandes, Rui; Bos, Machiel; Yu, Kegen; Hua, Xianghong; Jiang, Weiping

    2017-05-01

    The Global Positioning System (GPS) is an important tool to observe and model geodynamic processes such as plate tectonics and post-glacial rebound. In the last three decades, GPS has seen tremendous advances in the precision of the measurements, which allow researchers to study geophysical signals through a careful analysis of daily time series of GPS receiver coordinates. However, the GPS observations contain errors and the time series can be described as the sum of a real signal and noise. The signal itself can again be divided into station displacements due to geophysical causes and to disturbing factors. Examples of the latter are errors in the realization and stability of the reference frame and corrections due to ionospheric and tropospheric delays and GPS satellite orbit errors. There is an increasing demand for detecting millimeter to sub-millimeter level ground displacement signals in order to further understand regional-scale geodetic phenomena, hence requiring further improvements in the sensitivity of the GPS solutions. This paper provides a review spanning over 25 years of advances in processing strategies, error mitigation methods and noise modeling for the processing and analysis of GPS daily position time series. The processing of the observations is described step by step, mainly with three different strategies, in order to explain the weaknesses and strengths of the existing methodologies. In particular, we focus on the choice of the stochastic model in the GPS time series, which directly affects the estimation of the functional model including, for example, tectonic rates, seasonal signals and co-seismic offsets. Moreover, the geodetic community continues to develop computational methods to fully automate all phases of the analysis of GPS time series. This idea is greatly motivated by the large number of GPS receivers installed around the world for diverse applications, ranging from surveying small deformations of civil engineering structures (e.g.
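    As a toy illustration of the functional model the review describes (a tectonic rate plus seasonal terms, fit to a daily coordinate series) rather than any of the reviewed processing strategies, the sketch below simulates an invented 10-year series and recovers the rate by ordinary least squares:

```python
import numpy as np

# Invented example: daily position series = linear rate + annual signal
# + noise; the functional model is then fit by ordinary least squares.
rng = np.random.default_rng(0)
t = np.arange(3650) / 365.25                 # 10 years of daily epochs, in years
rate_true, amp_true = 5.0, 2.0               # mm/yr rate, mm annual amplitude (assumed)
y = (rate_true * t + amp_true * np.sin(2 * np.pi * t)
     + rng.normal(0.0, 1.0, t.size))         # white noise only, for simplicity

# Design matrix: intercept, linear rate, annual sine and cosine.
A = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
rate_est = coef[1]
print(round(rate_est, 2))                    # close to the true 5.0 mm/yr
```

The white-noise stochastic model here is the simplification; the review's central point is that real GPS series contain colored noise, and ignoring it makes the formal uncertainty of the estimated rate unrealistically small.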

  4. Simultaneously time- and space-resolved spectroscopic characterization of laser-produced plasmas

    International Nuclear Information System (INIS)

    Charatis, G.; Young, B.K.F.; Busch, G.E.

    1988-01-01

    The CHROMA laser facility at KMS Fusion has been used to irradiate a variety of microdot targets. These include aluminum dots and mixed bromine dots doped with K-shell (magnesium) emitters. Simultaneously time- and space-resolved K-shell and L-shell spectra have been measured and compared to dynamic model predictions. The electron density profiles are measured using holographic interferometry. Temperatures, densities, and ionization distributions are determined using K-shell and L-shell spectral techniques. Time and spatial gradients are resolved simultaneously using three diagnostics: a framing crystal x-ray spectrometer, an x-ray streaked crystal spectrometer with a spatial imaging slit, and a 4-frame holographic interferometer. Significant differences have been found between the interferometric and the model-dependent spectral measurements of plasma density. Predictions by new non-stationary L-shell models currently being developed are also presented. 14 refs., 10 figs

  5. Maximum a posteriori Bayesian estimation of mycophenolic Acid area under the concentration-time curve: is this clinically useful for dosage prediction yet?

    Science.gov (United States)

    Staatz, Christine E; Tett, Susan E

    2011-12-01

    This review seeks to summarize the available data about Bayesian estimation of area under the plasma concentration-time curve (AUC) and dosage prediction for mycophenolic acid (MPA) and evaluate whether sufficient evidence is available for routine use of Bayesian dosage prediction in clinical practice. A literature search identified 14 studies that assessed the predictive performance of maximum a posteriori Bayesian estimation of MPA AUC and one report that retrospectively evaluated how closely dosage recommendations based on Bayesian forecasting achieved targeted MPA exposure. Studies to date have mostly been undertaken in renal transplant recipients, with limited investigation in patients treated with MPA for autoimmune disease or haematopoietic stem cell transplantation. All of these studies have involved use of the mycophenolate mofetil (MMF) formulation of MPA, rather than the enteric-coated mycophenolate sodium (EC-MPS) formulation. Bias associated with estimation of MPA AUC using Bayesian forecasting was generally less than 10%. However, some difficulties with imprecision were evident, with values ranging from 4% to 34% (based on estimation involving two or more concentration measurements). Evaluation of whether MPA dosing decisions based on Bayesian forecasting (by the free website service https://pharmaco.chu-limoges.fr) achieved target drug exposure has only been undertaken once. When MMF dosage recommendations were applied by clinicians, a higher proportion (72-80%) of subsequent estimated MPA AUC values were within the 30-60 mg · h/L target range, compared with when dosage recommendations were not followed (only 39-57% within target range). Such findings provide evidence that Bayesian dosage prediction is clinically useful for achieving target MPA AUC. This study, however, was retrospective and focussed only on adult renal transplant recipients. Furthermore, in this study, Bayesian-generated AUC estimations and dosage predictions were not compared
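    To illustrate the maximum a posteriori idea in a drastically simplified setting — a one-compartment IV-bolus model with known volume and invented hyperparameters, not the MPA population models or the Limoges service discussed above — the posterior over clearance combines a population prior with sparse concentration data, and AUC then follows as dose/clearance:

```python
import numpy as np

# Toy maximum a posteriori (MAP) sketch; all parameter values are invented.
dose, V = 1000.0, 30.0                    # mg, L (volume assumed known)
t_obs = np.array([1.0, 4.0, 8.0])         # sparse sampling times, h
cl_true = 12.0                            # L/h, used only to simulate data
c_obs = dose / V * np.exp(-cl_true / V * t_obs)   # noise-free "measurements"

cl_grid = np.linspace(2.0, 30.0, 2801)    # candidate clearance values
# Log-normal population prior on clearance (illustrative: median 10 L/h).
log_prior = -0.5 * ((np.log(cl_grid) - np.log(10.0)) / 0.3) ** 2
# Gaussian residual error on concentrations (illustrative sd = 1 mg/L).
pred = dose / V * np.exp(-np.outer(cl_grid, t_obs) / V)
log_lik = -0.5 * np.sum((c_obs - pred) ** 2, axis=1)
cl_map = float(cl_grid[np.argmax(log_prior + log_lik)])
auc_map = dose / cl_map                   # AUC = dose / clearance, linear PK
print(round(cl_map, 2), round(auc_map, 1))
```

The MAP estimate falls between the prior median and the value implied by the data; with richer sampling the likelihood dominates the prior, which is the behaviour the predictive-performance studies summarized above quantify as bias and imprecision.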

  6. Tempo máximo de fonação de crianças pré-escolares Maximum phonation time in pre-school children

    Directory of Open Access Journals (Sweden)

    Carla Aparecida Cielo

    2008-08-01

    Full Text Available Studies of maximum phonation time (MPT) in children have reported differing results; the measure can reflect the neuromuscular and aerodynamic control of voice production and may be used as an indicator for other forms of assessment, both qualitative and objective. AIM: To verify the MPT measures of 23 pre-school children aged between four years and six years and eight months. MATERIALS AND METHODS: The sampling process comprised a questionnaire sent to parents, auditory screening, and perceptual-auditory voice evaluation using the RASAT scale; data collection consisted of the MPT measures. STUDY DESIGN: Prospective cross-sectional. RESULTS: Mean MPT for /a/, /s/ and /z/ was 7.42 s, 6.35 s and 7.19 s, respectively; MPT for /a/ at six years was significantly longer than at four years; all MPT measures increased with age; and the s/z ratio was close to one at all ages. CONCLUSIONS: The MPT values were higher than those reported in Brazilian studies and lower than those reported in international studies. Furthermore, the age range analyzed in the present study lies in a period of neural and muscular maturation, with immaturity most evident at the age of four.

  7. Use of Real Time Satellite Infrared and Ocean Color to Produce Ocean Products

    Science.gov (United States)

    Roffer, M. A.; Muller-Karger, F. E.; Westhaver, D.; Gawlikowski, G.; Upton, M.; Hall, C.

    2014-12-01

    Real-time data products derived from infrared and ocean color satellites are useful for several types of users around the world. Highly relevant applications include recreational and commercial fisheries, commercial towing vessel and other maritime and navigation operations, and other scientific and applied marine research. Uses of the data include developing sampling strategies for research programs, tracking of water masses and ocean fronts, optimizing ship routes, evaluating water quality conditions (coastal, estuarine, oceanic), and developing fisheries and essential fish habitat indices. Important considerations for users are data access and delivery mechanisms, and data formats. At this time, the data are being generated in formats increasingly available on mobile computing platforms, and are delivered through popular interfaces including social media (Facebook, Linkedin, Twitter and others), Google Earth and other online Geographical Information Systems, or are simply distributed via subscription by email. We review 30 years of applications and describe how we develop customized products and delivery mechanisms working directly with users. We review benefits and issues of access to government databases (NOAA, NASA, ESA), standard data products, and the conversion to tailored products for our users. We discuss advantages of different product formats and of the platforms used to display and to manipulate the data.

  8. Space-time resolved measurements of spontaneous magnetic fields in laser-produced plasma

    Czech Academy of Sciences Publication Activity Database

    Pisarczyk, T.; Gus’kov, S.Yu.; Dudžák, Roman; Chodukowski, T.; Dostál, Jan; Demchenko, N. N.; Korneev, Ph.; Kalinowska, Z.; Kalal, M.; Renner, Oldřich; Šmíd, Michal; Borodziuk, S.; Krouský, Eduard; Ullschmied, Jiří; Hřebíček, Jan; Medřík, Tomáš; Golasowski, Jiří; Pfeifer, Miroslav; Skála, Jiří; Pisarczyk, P.

    2015-01-01

    Roč. 22, č. 10 (2015), č. článku 102706. ISSN 1070-664X R&D Projects: GA MŠk LM2010014; GA MŠk(CZ) LD14089; GA ČR GPP205/11/P712 Grant - others:FP7(XE) 284464 Program:FP7 Institutional support: RVO:61389021 ; RVO:68378271 Keywords : space-time resolved spontaneous magnetic field (SMF) * Laser System Subject RIV: BL - Plasma and Gas Discharge Physics; BL - Plasma and Gas Discharge Physics (FZU-D) OBOR OECD: Fluids and plasma physics (including surface physics); Fluids and plasma physics (including surface physics) (FZU-D) Impact factor: 2.207, year: 2015 http://scitation.aip.org/content/aip/journal/pop/22/10/10.1063/1.4933364

  9. Sustained visual-spatial attention produces costs and benefits in response time and evoked neural activity.

    Science.gov (United States)

    Mangun, G R; Buck, L A

    1998-03-01

    This study investigated the simple reaction time (RT) and event-related potential (ERP) correlates of biasing attention towards a location in the visual field. RTs and ERPs were recorded to stimuli flashed randomly and with equal probability to the left and right visual hemifields in the three blocked, covert attention conditions: (i) attention divided equally to left and right hemifield locations; (ii) attention biased towards the left location; or (iii) attention biased towards the right location. Attention was biased towards left or right by instructions to the subjects, and responses were required to all stimuli. Relative to the divided attention condition, RTs were significantly faster for targets occurring where more attention was allocated (benefits), and slower to targets where less attention was allocated (costs). The early P1 (100-140 msec) component over the lateral occipital scalp regions showed attentional benefits. There were no amplitude modulations of the occipital N1 (125-180 msec) component with attention. Between 200 and 500 msec latency, a late positive deflection (LPD) showed both attentional costs and benefits. The behavioral findings show that when sufficiently induced to bias attention, human observers demonstrate RT benefits as well as costs. The corresponding P1 benefits suggest that the RT benefits of spatial attention may arise as the result of modulations of visual information processing in the extrastriate visual cortex.

  10. The Influence of Variation in Time and HCl Concentration to the Glucose Produced from Kepok Banana

    Science.gov (United States)

    Widodo M, Rohman; Noviyanto, Denny; RM, Faisal

    2016-01-01

    Kepok banana (Musa paradisiaca) is a plant with many uses for its fruit, stems, leaves, flowers and cob. However, we tend to take benefit only from the fruit; we grow and harvest the fruit without taking advantage of the other parts, so they become waste, or a nest for pests, if not used. The idea of exploiting the banana crop residues, especially the cob, is rarely explored. This study is an introduction to the use of the banana cob, in particular the glucose it contains. The study uses hydrolysis with HCl as a catalyst, varying the concentration (0.4 N, 0.6 N and 0.8 N) and the hydrolysis time (20 minutes, 25 minutes and 30 minutes). The stages of the hydrolysis include preparation of materials, the hydrolysis process itself, and analysis of the results using Fehling's solution and titration against a standard glucose solution. HCl is used as the catalyst because it is cheaper than an enzyme with the same function. NaOH (60% is used to neutralize the pH of the hydrolysis filtrate. The analysis showed that the largest glucose yield was obtained at a concentration of 0.8 N and a reaction time of 30 minutes: 6.25 gram glucose / 20 gram dry sample, a conversion of 27.22% at 20 gram dry sample.

  12. Mid-ocean ridges produced thicker crust in the Jurassic than in Recent times

    Science.gov (United States)

    Van Avendonk, H. J.; Harding, J.; Davis, J. K.; Lawver, L. A.

    2016-12-01

    We present a compilation of published marine seismic refraction data to show that oceanic crust was 1.7 km thicker on average in the mid-Jurassic (170 Ma) than along the present-day mid-ocean ridge system. Plate reconstructions in a fixed hotspot framework show that the thickness of oceanic crust does not correlate with proximity to mantle hotspots, so it is likely that mid-plate volcanism is not the cause of this global trend. We propose that more melt was extracted from the upper mantle beneath mid-ocean ridges in the Jurassic than in recent times. Numerical studies show that temperature increase of 1 degree C in the mantle can lead to approximately 50-70 m thicker crust, so the upper mantle may have cooled 15-20 degrees C/100 Myr since 170 Ma. This average temperature decrease is larger than the secular cooling rate of the Earth's mantle, which is roughly 10 degrees C/100 Myr since the Archean. Apparently, the present-day configuration and dynamics of continental and oceanic plates removes heat more efficiently from the Earth's mantle than in its earlier history. The increase of ocean crustal thickness with plate age is also stronger in the Indian and Atlantic oceans than in the Pacific Ocean basin. This confirms that thermal insulation by the supercontinent Pangaea raised the temperature of the underlying asthenospheric mantle, which in turn led to more magmatic output at the Jurassic mid-ocean ridges of the Indian and Atlantic oceans.
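    The abstract's arithmetic can be checked directly: 1.7 km of excess Jurassic crust, at the quoted sensitivity of 50-70 m of crust per degree C, spread over 170 Myr, reproduces a cooling rate in the stated range of roughly 15-20 degrees C per 100 Myr. A back-of-the-envelope sketch using only the numbers quoted above:

```python
# Back-of-the-envelope check of the abstract's numbers (no new data).
excess_crust_m = 1700.0                       # ~1.7 km thicker Jurassic crust
age_100myr = 170.0 / 100.0                    # elapsed time in units of 100 Myr
for m_per_degC in (50.0, 70.0):               # quoted crustal sensitivity range
    delta_T = excess_crust_m / m_per_degC     # total mantle cooling since 170 Ma
    rate = delta_T / age_100myr               # degrees C per 100 Myr
    print(m_per_degC, round(delta_T, 1), round(rate, 1))
```

The two ends of the sensitivity range give about 24-34 degrees C of total cooling, i.e. roughly 14-20 degrees C per 100 Myr, bracketing the abstract's figure and exceeding the ~10 degrees C per 100 Myr secular rate it cites.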

  13. The dynamics of suspended particulate matter (SPM) and chlorophyll-a from intratidal to annual time scales in a coastal turbidity maximum

    Science.gov (United States)

    van der Hout, C. M.; Witbaard, R.; Bergman, M. J. N.; Duineveld, G. C. A.; Rozemeijer, M. J. C.; Gerkema, T.

    2017-09-01

    The analysis of 1.8 years of data gives an understanding of the response to varying forcing of suspended particulate matter (SPM) and chlorophyll-a (CHL-a) in a coastal turbidity maximum zone (TMZ). Both temporal and vertical concentration variations in the near-bed layer (0-2 m) in the shallow (11 m deep) coastal zone at 1 km off the Dutch coast are shown. Temporal variations in the concentration of both parameters are found on tidal and seasonal scales, and a marked response to episodic events (e.g. storms). The seasonal cycle in the near-bed CHL-a concentration is determined by the spring bloom. The role of the wave climate as the primary forcing in the SPM seasonal cycle is discussed. The tidal current provides a background signal, generated predominantly by local resuspension and settling, with a minor role for advection in the cross-shore and alongshore directions. We fitted the logarithmic Rouse profile to the vertical profiles of both the SPM and the CHL-a data, with 84% and only 2% success, respectively. The resulting large percentage of low Rouse numbers for the SPM profiles suggests a mixed suspension is dominant in the TMZ, i.e. surface SPM concentrations are of the same order of magnitude as near-bed concentrations.

  14. Maximum Acceleration Recording Circuit

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1995-01-01

    Coarsely digitized maximum acceleration levels are recorded in blown fuses. The circuit feeds power to an accelerometer and makes a nonvolatile record of the maximum level to which the accelerometer output rises during the measurement interval. In comparison with inertia-type single-preset-trip-point mechanical maximum-acceleration-recording devices, the circuit weighs less, occupies less space, and records accelerations within narrower bands of uncertainty. In comparison with prior electronic data-acquisition systems designed for the same purpose, the circuit is simpler, less bulky, consumes less power, and costs less, without requiring later analysis of data recorded in magnetic or electronic memory devices. The circuit can be used, for example, to record accelerations to which commodities are subjected during transportation on trucks.

  15. Multi-Train Energy Saving for Maximum Usage of Regenerative Energy by Dwell Time Optimization in Urban Rail Transit Using Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Fei Lin

    2016-03-01

    Full Text Available Because urban rail transit carries large passenger volumes, its total energy consumption is very high; thus, energy-saving operations are quite meaningful. The effective use of regenerative braking energy is the mainstream method for improving the efficiency of energy saving. This paper examines the optimization of train dwell time and builds a multiple train operation model for energy conservation of a power supply system. By changing the dwell time, the braking energy can be absorbed and utilized by other traction trains as efficiently as possible. The application of genetic algorithms is proposed for the optimization, based on the current schedule. Next, to validate the correctness and effectiveness of the optimization, a real case is studied. Actual data from the Beijing subway Yizhuang Line are employed to perform the simulation, and the results indicate that the optimization method of the dwell time is effective.
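    The search procedure the abstract describes can be sketched as a small elitist genetic algorithm over per-station dwell times. The objective function below is a made-up surrogate (it just rewards dwell times near an alternating target pattern); the paper's real objective comes from simulating regenerative-energy exchange on the power-supply network:

    ```python
    import random

    # Toy genetic algorithm over per-station dwell times (seconds).
    # overlap_reward is a hypothetical stand-in for the paper's simulated
    # regenerative-energy objective; everything else is the generic GA loop.

    random.seed(0)
    N_STATIONS = 5
    BOUNDS = (20, 60)  # allowed dwell-time range per station, seconds

    def overlap_reward(dwell):
        # Hypothetical surrogate: reward peaks when dwell times alternate
        # near 30 s and 45 s (imagined braking/traction alignment).
        target = [30 if i % 2 == 0 else 45 for i in range(len(dwell))]
        return -sum((d - t) ** 2 for d, t in zip(dwell, target))

    def mutate(ind):
        child = list(ind)
        i = random.randrange(len(child))
        child[i] = min(BOUNDS[1], max(BOUNDS[0], child[i] + random.randint(-5, 5)))
        return child

    def crossover(a, b):
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    pop = [[random.randint(*BOUNDS) for _ in range(N_STATIONS)] for _ in range(30)]
    for _ in range(200):
        pop.sort(key=overlap_reward, reverse=True)
        elite = pop[:10]                      # keep the 10 best schedules
        pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                       for _ in range(20)]

    best = max(pop, key=overlap_reward)
    ```

    With elitism and a bounded mutation step, the population converges toward the surrogate optimum while every candidate stays a feasible timetable.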

  16. Maximum Quantum Entropy Method

    OpenAIRE

    Sim, Jae-Hoon; Han, Myung Joon

    2018-01-01

    Maximum entropy method for analytic continuation is extended by introducing quantum relative entropy. This new method is formulated in terms of matrix-valued functions and therefore invariant under arbitrary unitary transformation of input matrix. As a result, the continuation of off-diagonal elements becomes straightforward. Without introducing any further ambiguity, the Bayesian probabilistic interpretation is maintained just as in the conventional maximum entropy method. The applications o...

  17. Maximum power demand cost

    International Nuclear Information System (INIS)

    Biondi, L.

    1998-01-01

    The charging for a service is a supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While no problem arises about the former, the issue is more complicated for the latter and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some.

  18. An increased rectal maximum tolerable volume and long anal canal are associated with poor short-term response to biofeedback therapy for patients with anismus with decreased bowel frequency and normal colonic transit time.

    Science.gov (United States)

    Rhee, P L; Choi, M S; Kim, Y H; Son, H J; Kim, J J; Koh, K C; Paik, S W; Rhee, J C; Choi, K W

    2000-10-01

    Biofeedback is an effective therapy for a majority of patients with anismus. However, a significant proportion of patients still failed to respond to biofeedback, and little has been known about the factors that predict response to biofeedback. We evaluated the factors associated with poor response to biofeedback. Biofeedback therapy was offered to 45 patients with anismus with decreased bowel frequency (less than three times per week) and normal colonic transit time. Any differences in demographics, symptoms, and parameters of anorectal physiologic tests were sought between responders (in whom bowel frequency increased up to three times or more per week after biofeedback) and nonresponders (in whom bowel frequency remained less than three times per week). Thirty-one patients (68.9 percent) responded to biofeedback and 14 patients (31.1 percent) did not. Anal canal length was longer in nonresponders than in responders (4.53 +/- 0.5 vs. 4.08 +/- 0.56 cm; P = 0.02), and rectal maximum tolerable volume was larger in nonresponders than in responders (361 +/- 87 vs. 302 +/- 69 ml; P = 0.02). Anal canal length and rectal maximum tolerable volume showed significant differences between responders and nonresponders on multivariate analysis (P = 0.027 and P = 0.034, respectively). This study showed that a long anal canal and increased rectal maximum tolerable volume are associated with poor short-term response to biofeedback for patients with anismus with decreased bowel frequency and normal colonic transit time.

  19. A portable storage maximum thermometer

    International Nuclear Information System (INIS)

    Fayart, Gerard.

    1976-01-01

    A clinical thermometer storing the voltage corresponding to the maximum temperature in an analog memory is described. The end of the measurement is indicated by a lamp switching off. The measurement time is shortened by means of a low-thermal-inertia platinum probe. This portable thermometer is fitted with a cell-test and calibration system.

  20. Establishment of a real-time PCR method for quantification of geosmin-producing Streptomyces spp. in recirculating aquaculture systems.

    Science.gov (United States)

    Auffret, Marc; Pilote, Alexandre; Proulx, Emilie; Proulx, Daniel; Vandenberg, Grant; Villemur, Richard

    2011-12-15

    Geosmin and 2-methylisoborneol (MIB) have been associated with off-flavour problems in fish and seafood products, generating a strong negative impact for aquaculture industries. Although most of the producers of geosmin and MIB have been identified as Streptomyces species or cyanobacteria, Streptomyces spp. are thought to be responsible for the synthesis of these compounds in indoor recirculating aquaculture systems (RAS). The detection of genes involved in the synthesis of geosmin and MIB can be a relevant indicator of the beginning of off-flavour events in RAS. Here, we report a real-time polymerase chain reaction (qPCR) protocol targeting geoA sequences that encode a germacradienol synthase involved in geosmin synthesis. New geoA-related sequences were retrieved from eleven geosmin-producing Actinomycete strains, among them two Streptomyces strains isolated from two RAS. Combined with geoA-related sequences available in gene databases, we designed primers and standards suitable for qPCR assays targeting mainly Streptomyces geoA. Using our qPCR protocol, we succeeded in measuring the level of geoA copies in sand filter and biofilters in two RAS. This study is the first to apply qPCR assays to detect and quantify the geosmin synthesis gene (geoA) in RAS. Quantification of geoA in RAS could permit the monitoring of the level of geosmin producers prior to the occurrence of geosmin production. This information will be most valuable for fish producers to manage further development of off-flavour events.
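    Absolute quantification of gene copies in a qPCR assay like this rests on a standard curve: Ct is linear in log10(copies) for serial dilutions of a standard. The sketch below uses made-up Ct values (a real run would use dilutions of a geoA plasmid standard):

    ```python
    import math

    # Standard-curve quantification sketch. Each pair is (copies, Ct);
    # the Ct values are illustrative, not measured data.
    standards = [(1e6, 15.1), (1e5, 18.4), (1e4, 21.8), (1e3, 25.2), (1e2, 28.6)]

    # Least-squares fit: Ct = slope * log10(copies) + intercept
    xs = [math.log10(c) for c, _ in standards]
    ys = [ct for _, ct in standards]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar

    # Amplification efficiency from the slope: E = 10**(-1/slope) - 1
    # (slope of about -3.32 corresponds to 100% efficiency)
    efficiency = 10 ** (-1.0 / slope) - 1.0

    def copies_from_ct(ct):
        # Invert the standard curve for an unknown sample
        return 10 ** ((ct - intercept) / slope)
    ```

    With these numbers the fitted slope is close to -3.4 and the efficiency close to 100%, the usual acceptance range for a qPCR assay.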

  1. Evaluation of Time-Temperature Integrators (TTIs) with Microorganism-Entrapped Microbeads Produced Using Homogenization and SPG Membrane Emulsification Techniques.

    Science.gov (United States)

    Rahman, A T M Mijanur; Lee, Seung Ju; Jung, Seung Won

    2015-12-28

    A comparative study was conducted to evaluate precision and accuracy in controlling the temperature dependence of encapsulated microbial time-temperature integrators (TTIs) developed using two different emulsification techniques. Weissella cibaria CIFP 009 cells, immobilized within 2% Na-alginate gel microbeads using homogenization (5,000, 7,000, and 10,000 rpm) and Shirasu porous glass (SPG) membrane technologies (10 μm), were applied to microbial TTIs. The prepared microbeads were characterized with respect to their size, size distribution, shape and morphology, entrapment efficiency, and bead production yield. Additionally, fermentation process parameters including growth rate were investigated. The TTI responses (changes in pH and titratable acidity (TA)) were evaluated as a function of temperature (20°C, 25°C, and 30°C). In comparison with conventional methods, SPG membrane technology not only produced highly uniform, small-sized beads with the narrowest size distribution, but also gave a bead production yield nearly 3.0 to 4.5 times higher. However, among the TTIs produced using the homogenization technique, poor linearity (R(2)) in terms of TA was observed for the 5,000 and 7,000 rpm treatments. Consequently, microbeads produced by the SPG membrane and by homogenization at 10,000 rpm were selected for adjusting the temperature dependence. The Ea values of TTIs containing 0.5, 1.0, and 1.5 g microbeads, prepared by SPG membrane and conventional methods, were estimated to be 86.0, 83.5, and 76.6 kJ/mol, and 85.5, 73.5, and 62.2 kJ/mol, respectively. Therefore, microbial TTIs developed using SPG membrane technology are much more efficient in controlling temperature dependence.
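    The Ea values quoted above come from an Arrhenius fit: the TTI response rate k at each test temperature obeys ln k = ln A - Ea/(R T), so Ea is read off the slope of ln k versus 1/T. A minimal sketch, with illustrative rate constants rather than the paper's data:

    ```python
    import math

    # Arrhenius activation-energy fit. The rates below are made up; a real
    # analysis would use the measured TTI response rates (e.g. d(TA)/dt).
    R = 8.314                                     # gas constant, J/(mol K)
    rates = {20.0: 0.010, 25.0: 0.018, 30.0: 0.031}  # temperature (C) -> rate

    xs = [1.0 / (t + 273.15) for t in rates]      # 1/T in 1/K
    ys = [math.log(k) for k in rates.values()]

    # Least-squares slope of ln k versus 1/T; then Ea = -slope * R
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    Ea_kJ_per_mol = -slope * R / 1000.0
    ```

    With these illustrative rates the fit lands in the low-80s kJ/mol, the same scale as the Ea values reported in the abstract.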

  2. Maximum likely scale estimation

    DEFF Research Database (Denmark)

    Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo

    2005-01-01

    A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...

  3. Robust Maximum Association Estimators

    NARCIS (Netherlands)

    A. Alfons (Andreas); C. Croux (Christophe); P. Filzmoser (Peter)

    2017-01-01

    The maximum association between two multivariate variables X and Y is defined as the maximal value that a bivariate association measure between one-dimensional projections αX and αY can attain. Taking the Pearson correlation as projection index results in the first canonical correlation

  4. A New Battery Energy Storage Charging/Discharging Scheme for Wind Power Producers in Real-Time Markets

    Directory of Open Access Journals (Sweden)

    Minh Y Nguyen

    2012-12-01

    Full Text Available Under a deregulated environment, wind power producers are subject to many regulation costs due to the intermittence of natural resources and the accuracy limits of existing prediction tools. This paper addresses the operation (charging/discharging) problem of battery energy storage installed in a wind generation system in order to improve the value of wind power in the real-time market. Depending on the prediction of market prices and the probabilistic information of wind generation, wind power producers can schedule the battery energy storage for the next day in order to maximize the profit. In addition, by taking into account the expenses of using batteries, the proposed charging/discharging scheme is able to avoid the detrimental operation of battery energy storage which can lead to a significant reduction of battery lifetime, i.e., uneconomical operation. The problem is formulated in a dynamic programming framework and solved by a dynamic programming backward algorithm. The proposed scheme is then applied to case studies, and the results of simulation show its effectiveness.
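    The backward dynamic program can be sketched in a deterministic, discretized form: states are state-of-charge levels, actions are charge/discharge/idle, and the stage reward nets market revenue against a battery wear cost. Prices, efficiency and wear figures below are all assumed, and the paper's stochastic wind/price model is omitted:

    ```python
    # Backward DP sketch for battery charging/discharging against prices.
    prices = [30, 20, 55, 70, 40]     # $/MWh per hour, hypothetical
    CAP, STEP = 4, 1                  # SoC levels 0..CAP, 1 MWh per step
    EFF = 0.9                         # efficiency applied on discharge
    WEAR = 5.0                        # $ per MWh cycled (battery expense)

    T = len(prices)
    # value[t][s]: best profit from hour t onward with state of charge s
    value = [[0.0] * (CAP + 1) for _ in range(T + 1)]
    policy = [[0] * (CAP + 1) for _ in range(T)]

    for t in range(T - 1, -1, -1):
        for s in range(CAP + 1):
            best, best_a = float("-inf"), 0
            for a in (-STEP, 0, STEP):          # discharge, idle, charge
                s2 = s + a
                if not 0 <= s2 <= CAP:
                    continue
                if a > 0:    # charging: buy energy, pay wear
                    cash = -prices[t] * a - WEAR * a
                elif a < 0:  # discharging: sell with losses, pay wear
                    cash = prices[t] * (-a) * EFF - WEAR * (-a)
                else:
                    cash = 0.0
                v = cash + value[t + 1][s2]
                if v > best:
                    best, best_a = v, a
            value[t][s] = best
            policy[t][s] = best_a
    ```

    Reading the policy forward from an empty battery reproduces the intuitive schedule: charge during the two cheap hours, discharge into the two price peaks.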

  5. ORLIB: a computer code that produces one-energy group, time- and spatially-averaged neutron cross sections

    International Nuclear Information System (INIS)

    Blink, J.A.; Dye, R.E.; Kimlinger, J.R.

    1981-12-01

    Calculation of neutron activation of proposed fusion reactors requires a library of neutron-activation cross sections. One such library is ACTL, which is being updated and expanded by Howerton. If the energy-dependent neutron flux is also known as a function of location and time, the buildup and decay of activation products can be calculated. In practice, hand calculation is impractical without energy-averaged cross sections because of the large number of energy groups. A widely used activation computer code, ORIGEN2, also requires energy-averaged cross sections. Accordingly, we wrote the ORLIB code to collapse the ACTL library, using the flux as a weighting function. The ORLIB code runs on the LLNL Cray computer network. We have also modified ORIGEN2 to accept the expanded activation libraries produced by ORLIB

  6. Maximum Power from a Solar Panel

    Directory of Open Access Journals (Sweden)

    Michael Miller

    2010-01-01

    Full Text Available Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current at maximum power. These quantities are determined by finding the maximum value of the equation for power using differentiation. After the maximum values are found for each time of day, each individual quantity (voltage at maximum power, current at maximum power, and maximum power) is plotted as a function of the time of day.
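    The procedure (maximize P(V) = V·I(V) by setting dP/dV = 0) can be sketched on a simple single-diode panel model. The model parameters below are illustrative, not the article's panel:

    ```python
    import math

    # Toy single-diode model: I(V) = ISC - I0*(exp(V/VT) - 1).
    # Parameters are illustrative (a ~27 V open-circuit, ~5 A panel).
    ISC, I0, VT = 5.0, 5e-9, 1.3

    def current(v):
        return ISC - I0 * (math.exp(v / VT) - 1.0)

    def dP_dV(v, h=1e-6):
        # Central-difference derivative of P(V) = V * I(V)
        return ((v + h) * current(v + h) - (v - h) * current(v - h)) / (2 * h)

    # dP/dV is positive near V=0 and negative near open circuit, so the
    # maximum power point is the root of dP/dV; find it by bisection.
    lo, hi = 0.1, VT * math.log(ISC / I0)   # hi is roughly Voc
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if dP_dV(mid) > 0:
            lo = mid
        else:
            hi = mid

    v_mp = 0.5 * (lo + hi)
    p_max = v_mp * current(v_mp)
    ```

    Repeating this for the irradiance and temperature at each time of day gives the plotted curves of maximum-power voltage, current, and power described above.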

  7. Maximum power point tracking

    International Nuclear Information System (INIS)

    Enslin, J.H.R.

    1990-01-01

    A well-engineered renewable remote energy system, utilizing the principle of Maximum Power Point Tracking, can be more cost-effective, has higher reliability, and can improve the quality of life in remote areas. This paper reports that a high-efficiency power electronic converter, for converting the output voltage of a solar panel or wind generator to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximization of the output current in a battery charging regulator, using an optimized hill-climbing, inexpensive microprocessor-based algorithm. Through practical field measurements it is shown that a minimum input source saving of 15% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost of these systems is achievable for relatively small Remote Area Power Supply systems. The advantages are greater for larger temperature variations and higher-power-rated systems. Other advantages include optimal sizing and system monitoring and control.
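    The hill-climbing idea is the classic perturb-and-observe loop: nudge the operating point, keep going if the measured power rose, reverse if it fell. A minimal sketch on a toy P-V curve (the real regulator maximizes battery charging current, and the curve below is invented):

    ```python
    # Perturb-and-observe hill climbing on a toy P-V curve with its
    # maximum at 17 V (illustrative stand-in for a real panel).
    def panel_power(v):
        return max(0.0, 100.0 - 0.5 * (v - 17.0) ** 2)

    v, step = 10.0, 0.5          # starting voltage and perturbation size
    p_prev = panel_power(v)
    for _ in range(100):
        v += step
        p = panel_power(v)
        if p < p_prev:           # power fell: reverse the perturbation
            step = -step
        p_prev = p
    ```

    The operating point climbs to the maximum and then oscillates within one step of it, which is the characteristic steady-state behaviour (and residual loss) of this algorithm.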

  8. Receiver function estimated by maximum entropy deconvolution

    Institute of Scientific and Technical Information of China (English)

    吴庆举; 田小波; 张乃铃; 李卫平; 曾融生

    2003-01-01

    Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule to determine the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method to measure receiver functions in the time domain.
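    The Toeplitz/Levinson machinery the authors cite is the standard Levinson-Durbin recursion for the prediction-error filter; a textbook sketch (not the authors' code) is:

    ```python
    # Levinson-Durbin recursion: solve the Toeplitz normal equations for
    # the order-p prediction-error filter from autocorrelations r[0..p].
    # Returns filter coefficients a (a[0] == 1), reflection coefficients
    # ks (stability requires |k| < 1), and the final prediction error.
    def levinson_durbin(r, p):
        a = [1.0]
        err = r[0]
        ks = []
        for m in range(1, p + 1):
            acc = sum(a[j] * r[m - j] for j in range(m))
            k = -acc / err
            ks.append(k)
            # a_new[j] = a[j] + k * a[m - j]  (a[m] of the old filter is 0)
            a = [a[j] + k * (a[m - j] if m - j < len(a) else 0.0)
                 for j in range(m)] + [k]
            err *= (1.0 - k * k)
        return a, ks, err
    ```

    For an AR(1)-like autocorrelation r[k] = 0.5**k the recursion recovers the filter [1, -0.5] exactly, with the second reflection coefficient vanishing.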

  9. Rapid and Accurate Identification by Real-Time PCR of Biotoxin-Producing Dinoflagellates from the Family Gymnodiniaceae

    Directory of Open Access Journals (Sweden)

    Kirsty F. Smith

    2014-03-01

    Full Text Available The identification of toxin-producing dinoflagellates for monitoring programmes and bio-compound discovery requires considerable taxonomic expertise. It can also be difficult to morphologically differentiate toxic and non-toxic species or strains. Various molecular methods have been used for dinoflagellate identification and detection, and this study describes the development of eight real-time polymerase chain reaction (PCR) assays targeting the large subunit ribosomal RNA (LSU rRNA) gene of species from the genera Gymnodinium, Karenia, Karlodinium, and Takayama. Assays proved to be highly specific and sensitive, and the assay for G. catenatum was further developed for quantification in response to a bloom in Manukau Harbour, New Zealand. The assay estimated cell densities from environmental samples as low as 0.07 cells per PCR reaction, which equated to three cells per litre. This assay not only enabled conclusive species identification but also detected the presence of cells below the limit of detection for light microscopy. This study demonstrates the usefulness of real-time PCR as a sensitive and rapid molecular technique for the detection and quantification of micro-algae from environmental samples.

  10. Rapid and accurate identification by real-time PCR of biotoxin-producing dinoflagellates from the family gymnodiniaceae.

    Science.gov (United States)

    Smith, Kirsty F; de Salas, Miguel; Adamson, Janet; Rhodes, Lesley L

    2014-03-07

    The identification of toxin-producing dinoflagellates for monitoring programmes and bio-compound discovery requires considerable taxonomic expertise. It can also be difficult to morphologically differentiate toxic and non-toxic species or strains. Various molecular methods have been used for dinoflagellate identification and detection, and this study describes the development of eight real-time polymerase chain reaction (PCR) assays targeting the large subunit ribosomal RNA (LSU rRNA) gene of species from the genera Gymnodinium, Karenia, Karlodinium, and Takayama. Assays proved to be highly specific and sensitive, and the assay for G. catenatum was further developed for quantification in response to a bloom in Manukau Harbour, New Zealand. The assay estimated cell densities from environmental samples as low as 0.07 cells per PCR reaction, which equated to three cells per litre. This assay not only enabled conclusive species identification but also detected the presence of cells below the limit of detection for light microscopy. This study demonstrates the usefulness of real-time PCR as a sensitive and rapid molecular technique for the detection and quantification of micro-algae from environmental samples.

  11. Maximum entropy methods

    International Nuclear Information System (INIS)

    Ponman, T.J.

    1984-01-01

    For some years now two different expressions have been in use for maximum entropy image restoration and there has been some controversy over which one is appropriate for a given problem. Here two further entropies are presented and it is argued that there is no single correct algorithm. The properties of the four different methods are compared using simple 1D simulations with a view to showing how they can be used together to gain as much information as possible about the original object. (orig.)

  12. Determination of maximum isolation times in case of internal flooding due to pipe break; Determinacion de los tiempos maximos de aislamiento en caso de inundacion interna por rotura de tuberia

    Energy Technology Data Exchange (ETDEWEB)

    Varas, M. I.; Orteu, E.; Laserna, J. A.

    2014-07-01

    This paper describes the process followed in preparing the flooding manual of Cofrentes NPP: identifying the maximum time available to the plant to isolate a moderate- or high-energy pipe break before the flood affects safety-related (1E) equipment involved in the safe shutdown of the reactor or in spent fuel pool cooling, and determining the recommended isolation mode considering the location of the break, the location of the 1E equipment, and human factors. (Author)

  13. Powered bone marrow biopsy procedures produce larger core specimens, with less pain, in less time than with standard manual devices

    Directory of Open Access Journals (Sweden)

    Larry J. Miller

    2011-07-01

    Full Text Available Bone marrow sampling remains essential in the evaluation of hematopoietic and many non-hematopoietic disorders. One common limitation to these procedures is the discomfort experienced by patients. To address whether a Powered biopsy system could reduce discomfort while providing equivalent or better results, we performed a randomized trial in adult volunteers. Twenty-six subjects underwent bilateral biopsies with each device. Core samples were obtained in 66.7% of Manual insertions; 100% of Powered insertions (P=0.002. Initial mean biopsy core lengths were 11.1±4.5 mm for the Manual device; 17.0±6.8 mm for the Powered device (P<0.005. Pathology assessment for the Manual device showed a mean length of 6.1±5.6 mm, width of 1.0±0.7 mm, and volume of 11.0±10.8 mm3. Powered device measurements were mean length of 15.3±6.1 mm, width of 2.0±0.3 mm, and volume of 49.1±21.5 mm3 (P<0.001. The mean time to core ejection was 86 seconds for Manual device; 47 seconds for the Powered device (P<0.001. The mean second look overall pain score was 33.3 for the Manual device; 20.9 for the Powered (P=0.039. We conclude that the Powered biopsy device produces superior sized specimens, with less overall pain, in less time.

  14. Maximum Entropy Fundamentals

    Directory of Open Access Journals (Sweden)

    F. Topsøe

    2001-09-01

    Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow-up on here. It leads to the consideration of a certain game, the Code Length Game and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature with hundreds of applications pertaining to several different fields and will also here serve as important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. 
These results have tempted us to speculate over
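    The Mean Energy Model mentioned above has a concrete solution: over finitely many states with a prescribed mean "energy", the maximum-entropy distribution is the Gibbs/Boltzmann family p_i proportional to exp(-beta*E_i), with beta chosen to meet the moment constraint. A minimal sketch with illustrative energies and target mean:

    ```python
    import math

    # Maximum entropy subject to a mean-energy constraint: the solution is
    # p_i = exp(-beta * E_i) / Z, with beta fixed by the constraint.
    # Energies and target mean are illustrative.
    E = [0.0, 1.0, 2.0, 3.0]
    target_mean = 1.2

    def mean_energy(beta):
        w = [math.exp(-beta * e) for e in E]
        z = sum(w)
        return sum(wi * e for wi, e in zip(w, E)) / z

    # mean_energy(beta) is strictly decreasing in beta, so bisect for the
    # beta that meets the constraint.
    lo, hi = -50.0, 50.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean_energy(mid) > target_mean:
            lo = mid
        else:
            hi = mid
    beta = 0.5 * (lo + hi)

    p = [math.exp(-beta * e) for e in E]
    Z = sum(p)
    p = [x / Z for x in p]
    ```

    Since the target mean (1.2) lies below the uniform-distribution mean (1.5), the solver returns beta > 0 and the resulting distribution decays with energy, as expected.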

  15. Probable maximum flood control

    International Nuclear Information System (INIS)

    DeGabriele, C.E.; Wu, C.L.

    1991-11-01

    This study proposes preliminary design concepts to protect the waste-handling facilities and all shaft and ramp entries to the underground from the probable maximum flood (PMF) in the current design configuration for the proposed Nevada Nuclear Waste Storage Investigation (NNWSI) repository. Protection provisions were furnished by the United States Bureau of Reclamation (USBR) or developed from USBR data. Proposed flood protection provisions include site grading, drainage channels, and diversion dikes. Figures are provided to show these proposed flood protection provisions at each area investigated. These areas are the central surface facilities (including the waste-handling building and waste treatment building), tuff ramp portal, waste ramp portal, men-and-materials shaft, emplacement exhaust shaft, and exploratory shafts facility

  16. Introduction to maximum entropy

    International Nuclear Information System (INIS)

    Sivia, D.S.

    1988-01-01

    The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. We review the need for such methods in data analysis and show, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. We conclude with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab

  17. Solar maximum observatory

    International Nuclear Information System (INIS)

    Rust, D.M.

    1984-01-01

    The successful retrieval and repair of the Solar Maximum Mission (SMM) satellite by Shuttle astronauts in April 1984 permitted continuance of solar flare observations that began in 1980. The SMM carries a soft X ray polychromator, gamma ray, UV and hard X ray imaging spectrometers, a coronagraph/polarimeter and particle counters. The data gathered thus far indicated that electrical potentials of 25 MeV develop in flares within 2 sec of onset. X ray data show that flares are composed of compressed magnetic loops that have come too close together. Other data have been taken on mass ejection, impacts of electron beams and conduction fronts with the chromosphere and changes in the solar radiant flux due to sunspots. 13 references

  18. Introduction to maximum entropy

    International Nuclear Information System (INIS)

    Sivia, D.S.

    1989-01-01

    The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. The author reviews the need for such methods in data analysis and shows, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. He concludes with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab

  19. Functional Maximum Autocorrelation Factors

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg

    2005-01-01

    Purpose. We aim at data where samples of an underlying function are observed in a spatial or temporal layout. Examples of underlying functions are reflectance spectra and biological shapes. We apply functional models based on smoothing splines and generalize the functional PCA (ramsay97) to functional maximum autocorrelation factors (MAF) (switzer85, larsen2001d). We apply the method to biological shapes as well as reflectance spectra. Methods. MAF seeks linear combinations of the original variables that maximize autocorrelation between... MAF outperforms the functional PCA in concentrating the 'interesting' spectra/shape variation in one end of the eigenvalue spectrum and allows for easier interpretation of effects. Conclusions. Functional MAF analysis is a useful method for extracting low-dimensional models of temporally or spatially...

  20. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin

    2015-01-01

    In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.
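    The core MCC idea can be illustrated on a one-dimensional linear fit: maximize a Gaussian-kernel similarity between predictions and labels (plus a ridge penalty), so that a grossly mislabeled point gets an exponentially small weight. Data, kernel width and step size below are made up, and plain gradient ascent stands in for the paper's alternating optimization:

    ```python
    import math

    # Maximum Correntropy Criterion sketch for a 1-D linear predictor.
    xs = [0.2, 0.4, 0.6, 0.8, 1.0]
    ys = [0.4, 0.8, 1.2, 1.6, 10.0]   # y = 2x, with one gross label outlier
    SIGMA, LAM, LR = 1.0, 0.01, 0.1   # kernel width, ridge weight, step size

    def objective(w):
        corr = sum(math.exp(-(y - w * x) ** 2 / (2 * SIGMA ** 2))
                   for x, y in zip(xs, ys))
        return corr - LAM * w * w

    w = 0.0
    for _ in range(500):
        # Gradient of the correntropy term: kernel-weighted residuals; the
        # outlier's huge residual receives a negligible weight.
        g = sum(math.exp(-(y - w * x) ** 2 / (2 * SIGMA ** 2)) * (y - w * x) * x
                / SIGMA ** 2 for x, y in zip(xs, ys)) - 2 * LAM * w
        w += LR * g

    # Ordinary least squares, by contrast, is dragged toward the outlier.
    w_ls = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    ```

    The MCC fit stays near the clean slope of 2 while the least-squares slope is pulled far above it, which is exactly the robustness-to-noisy-labels argument the abstract makes.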

  1. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan

    2015-02-12

    In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.

  2. Maximum entropy deconvolution of low count nuclear medicine images

    International Nuclear Information System (INIS)

    McGrath, D.M.

    1998-12-01

    Maximum entropy is applied to the problem of deconvolving nuclear medicine images, with special consideration for very low count data. The physics of the formation of scintigraphic images is described, illustrating the phenomena which degrade planar estimates of the tracer distribution. Various techniques which are used to restore these images are reviewed, outlining the relative merits of each. The development and theoretical justification of maximum entropy as an image processing technique is discussed. Maximum entropy is then applied to the problem of planar deconvolution, highlighting the question of the choice of error parameters for low count data. A novel iterative version of the algorithm is suggested which allows the errors to be estimated from the predicted Poisson mean values. This method is shown to produce the exact results predicted by combining Poisson statistics and a Bayesian interpretation of the maximum entropy approach. A facility for total count preservation has also been incorporated, leading to improved quantification. In order to evaluate this iterative maximum entropy technique, two comparable methods, Wiener filtering and a novel Bayesian maximum likelihood expectation maximisation technique, were implemented. The comparison of results obtained indicated that this maximum entropy approach may produce equivalent or better measures of image quality than the compared methods, depending upon the accuracy of the system model used. The novel Bayesian maximum likelihood expectation maximisation technique was shown to be preferable over many existing maximum a posteriori methods due to its simplicity of implementation. A single parameter is required to define the Bayesian prior, which suppresses noise in the solution and may reduce the processing time substantially. Finally, maximum entropy deconvolution was applied as a pre-processing step in single photon emission computed tomography reconstruction of low count data. Higher contrast results were
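    The maximum likelihood expectation maximisation comparison mentioned above can be sketched with the classic Richardson-Lucy / MLEM multiplicative update for Poisson data. This toy 1-D example (Gaussian blur matrix, noiseless counts) is purely illustrative and is not the thesis's implementation.

```python
import numpy as np

def mlem_deconvolve(d, H, iters=200):
    # Richardson-Lucy / MLEM update: f <- f * H^T(d / (H f)) / H^T 1.
    # Multiplicative, so f stays nonnegative; it preserves total counts
    # exactly when the columns of H sum to one.
    f = np.full(H.shape[1], d.mean())
    norm = H.sum(axis=0)
    for _ in range(iters):
        f = f * (H.T @ (d / np.maximum(H @ f, 1e-12))) / norm
    return f

# blur two point sources with a 1-D Gaussian kernel, then restore
n = 32
idx = np.arange(n)
H = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 1.5) ** 2)
H /= H.sum(axis=1, keepdims=True)              # rows sum to one
truth = np.zeros(n); truth[10], truth[20] = 100.0, 60.0
data = H @ truth
restored = mlem_deconvolve(data, H)
```

    The restored estimate sharpens the blurred peaks back toward the point sources while remaining nonnegative, which is the behaviour the MEM and MLEM methods are being compared on.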

  3. Remarks on the maximum luminosity

    Science.gov (United States)

    Cardoso, Vitor; Ikeda, Taishi; Moore, Christopher J.; Yoo, Chul-Moon

    2018-04-01

    The quest for fundamental limitations on physical processes is old and venerable. Here, we investigate the maximum possible power, or luminosity, that any event can produce. We show, via full nonlinear simulations of Einstein's equations, that there exist initial conditions which give rise to arbitrarily large luminosities. However, the requirement that there is no past horizon in the spacetime seems to limit the luminosity to below the Planck value, L_P = c^5/G. Numerical relativity simulations of critical collapse yield the largest luminosities observed to date, ≈ 0.2 L_P. We also present an analytic solution to the Einstein equations which seems to give an unboundedly large luminosity; this will guide future numerical efforts to investigate super-Planckian luminosities.
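    The Planck value quoted above is straightforward to evaluate numerically (SI constants rounded to four figures):

```python
# Back-of-envelope check of the Planck luminosity L_P = c^5 / G.
c = 2.998e8        # speed of light, m/s
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
L_P = c**5 / G     # ~3.6e52 W
```

    At roughly 3.6 × 10^52 W this is an enormous figure, which is why the quoted critical-collapse simulations reaching ≈ 0.2 L_P are notable.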

  4. Solar maximum mission

    International Nuclear Information System (INIS)

    Ryan, J.

    1981-01-01

    By understanding the sun, astrophysicists hope to expand this knowledge to understanding other stars. To study the sun, NASA launched a satellite on February 14, 1980. The project is named the Solar Maximum Mission (SMM). The satellite conducted detailed observations of the sun in collaboration with other satellites and ground-based optical and radio observations until its failure 10 months into the mission. The main objective of the SMM was to investigate one aspect of solar activity: solar flares. A brief description of the flare mechanism is given. The SMM satellite was valuable in providing information on where and how a solar flare occurs. A sequence of photographs of a solar flare taken from the SMM satellite shows how a solar flare develops in a particular layer of the solar atmosphere. Two flares especially suitable for detailed observations by a joint effort occurred on April 30 and May 21 of 1980. These flares and observations of the flares are discussed. Also discussed are significant discoveries made by individual experiments.

  5. Sequence polymorphism can produce serious artefacts in real-time PCR assays: hard lessons from Pacific oysters

    Directory of Open Access Journals (Sweden)

    Camara Mark D

    2008-05-01

    Abstract Background Since it was first described in the mid-1990s, quantitative real-time PCR (Q-PCR) has been widely used in many fields of biomedical research and molecular diagnostics. This method is routinely used to validate whole-transcriptome analyses such as DNA microarrays, suppressive subtractive hybridization (SSH) or differential display techniques such as cDNA-AFLP (Amplification Fragment Length Polymorphism). Despite efforts to optimize the methodology, misleading results are still possible, even when standard optimization approaches are followed. Results As part of a larger project aimed at elucidating transcriptome-level responses of Pacific oysters (Crassostrea gigas) to various environmental stressors, we used microarrays and cDNA-AFLP to identify Expressed Sequence Tag (EST) fragments that are differentially expressed in response to bacterial challenge in two heat-shock-tolerant and two heat-shock-sensitive full-sib oyster families. We then designed primers for these differentially expressed ESTs in order to validate the results using Q-PCR. For two of these ESTs we tested fourteen primer pairs each and, using standard optimization methods (i.e. melt-curve analysis to ensure amplification of a single product), determined that six and nine pairs respectively amplified a single product and were thus acceptable for further testing. However, when we used these primers, we obtained different statistical outcomes among primer pairs, raising unexpected but serious questions about their reliability. We hypothesize that, as a consequence of high levels of sequence polymorphism in Pacific oysters, Q-PCR amplification is sub-optimal in some individuals because sequence variants in priming sites result in poor primer binding and amplification. This issue is similar to the high frequency of null alleles observed for microsatellite markers in Pacific oysters. Conclusion This study highlights
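    For context, the relative-quantification arithmetic such Q-PCR validations typically rely on is the standard 2^(-ΔΔCt) method. The Ct values below are made up for illustration; note that the method assumes near-100% primer efficiency, which is exactly the assumption that priming-site polymorphism can violate.

```python
# Standard 2^(-ddCt) relative quantification with hypothetical Ct values.
ct_target_treated, ct_ref_treated = 24.0, 18.0
ct_target_control, ct_ref_control = 26.5, 18.2

d_ct_treated = ct_target_treated - ct_ref_treated   # normalise to reference gene
d_ct_control = ct_target_control - ct_ref_control
dd_ct = d_ct_treated - d_ct_control                 # -2.3
fold_change = 2.0 ** (-dd_ct)                       # ~4.9-fold up-regulation
```

    If a sequence variant degrades primer binding in some individuals, the effective amplification efficiency drops below 2 per cycle and the computed fold change is biased, which is the artefact the study warns about.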

  6. Self-produced Time Intervals Are Perceived as More Variable and/or Shorter Depending on Temporal Context in Subsecond and Suprasecond Ranges

    Directory of Open Access Journals (Sweden)

    Keita eMitani

    2016-06-01

    The processing of time intervals is fundamental for sensorimotor and cognitive functions. Perceptual and motor timing are often performed concurrently (e.g., playing a musical instrument). Although previous studies have shown the influence of body movements on time perception, how we perceive self-produced time intervals has remained unclear. Furthermore, it has been suggested that the timing mechanisms are distinct for the sub- and suprasecond ranges. Here, we compared perceptual performance for self-produced and passively presented time intervals in random contexts (i.e., multiple target intervals presented in a session), across the sub- and suprasecond ranges (Experiment 1) and within the sub- (Experiment 2) and suprasecond (Experiment 3) ranges, and in a constant context (i.e., a single target interval presented in a session) in the sub- and suprasecond ranges (Experiment 4). We show that self-produced time intervals were perceived as shorter and more variable across the sub- and suprasecond ranges and within the suprasecond range, but not within the subsecond range, in a random context. In a constant context, the self-produced time intervals were perceived as more variable in the suprasecond range but not in the subsecond range. The impairing effects indicate that motor timing interferes with perceptual timing. The dependence of impairment on temporal context suggests multiple timing mechanisms for the subsecond and suprasecond ranges. In addition, violation of the scalar property (i.e., a constant variability-to-target-interval ratio) was observed between the sub- and suprasecond ranges. The violation was clearer for motor timing than for perceptual timing. This suggests that the multiple timing mechanisms for the sub- and suprasecond ranges overlap more for perception than for motor timing. Moreover, the central tendency effect (i.e., where shorter base intervals are overestimated and longer base intervals are underestimated) disappeared with subsecond
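    The scalar property invoked above (variability growing in proportion to the target interval) amounts to a constant coefficient of variation. The check is sketched below on synthetic reproductions that assume a 10% Weber fraction; these are illustrative numbers, not the study's data.

```python
import numpy as np

# Synthetic reproductions of a 0.5 s and a 2.0 s target interval,
# generated so that SD = 10% of the target (i.e. the scalar property holds).
rng = np.random.default_rng(3)
rep_05 = rng.normal(0.5, 0.05, size=200)   # seconds
rep_20 = rng.normal(2.0, 0.20, size=200)

cv_05 = rep_05.std() / rep_05.mean()       # coefficient of variation
cv_20 = rep_20.std() / rep_20.mean()
```

    Under the scalar property the two CVs agree; a systematic difference between sub- and suprasecond CVs, as reported above, is evidence for distinct timing mechanisms.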

  7. Results from transcranial Doppler examination on children and adolescents with sickle cell disease and correlation between the time-averaged maximum mean velocity and hematological characteristics: a cross-sectional analytical study

    Directory of Open Access Journals (Sweden)

    Mary Hokazono

    CONTEXT AND OBJECTIVE: Transcranial Doppler (TCD) detects stroke risk among children with sickle cell anemia (SCA). Our aim was to evaluate TCD findings in patients with different sickle cell disease (SCD) genotypes and correlate the time-averaged maximum mean (TAMM) velocity with hematological characteristics. DESIGN AND SETTING: Cross-sectional analytical study in the Pediatric Hematology sector, Universidade Federal de São Paulo. METHODS: 85 SCD patients of both sexes, aged 2-18 years, were evaluated, divided into: group I (62 patients with SCA/Sß0 thalassemia) and group II (23 patients with SC hemoglobinopathy/Sß+ thalassemia). TCD was performed and reviewed by a single investigator using Doppler ultrasonography with a 2 MHz transducer, in accordance with the Stroke Prevention Trial in Sickle Cell Anemia (STOP) protocol. The hematological parameters evaluated were: hematocrit, hemoglobin, reticulocytes, leukocytes, platelets and fetal hemoglobin. Univariate analysis was performed and Pearson's coefficient was calculated for hematological parameters and TAMM velocities (P < 0.05). RESULTS: TAMM velocities were 137 ± 28 and 103 ± 19 cm/s in groups I and II, respectively, and correlated negatively with hematocrit and hemoglobin in group I. There was one abnormal result (1.6%) and five conditional results (8.1%) in group I. All results were normal in group II. Middle cerebral arteries were the only vessels affected. CONCLUSION: There was a low prevalence of abnormal Doppler results in patients with sickle cell disease. Time-averaged maximum mean velocity differed significantly between the genotypes and correlated with hematological characteristics.
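    The correlation analysis reported here reduces to Pearson's coefficient between TAMM velocity and each hematological parameter. The sketch below uses synthetic values with a built-in negative trend; they are illustrative, not the study's measurements.

```python
import numpy as np

# Synthetic TAMM velocity vs hematocrit with a negative relationship,
# mimicking the reported direction of the correlation in group I.
rng = np.random.default_rng(1)
hematocrit = rng.uniform(20, 35, size=30)                  # %
tamm = 180 - 2.0 * hematocrit + rng.normal(0, 8, size=30)  # cm/s

r = np.corrcoef(hematocrit, tamm)[0, 1]   # Pearson's coefficient
```

    A negative r reproduces the qualitative finding that lower hematocrit (more severe anemia) goes with higher cerebral flow velocity.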

  8. Maximum permissible voltage of YBCO coated conductors

    Energy Technology Data Exchange (ETDEWEB)

    Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)

    2014-06-15

    Highlights: • We examine the maximum permissible voltage of three kinds of tapes. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I_c degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • We examine the relationship between maximum permissible voltage, resistance and temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important tasks in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (I_c) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I_c degradation or burnout occurs. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on these sample results, the total length of CC needed in the design of an SFCL can be determined.
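    The closing remark, sizing the conductor from the per-length limit, is simple arithmetic: the tape must be long enough that the fault voltage divided by its length stays below the limit. The fault-voltage figure below is hypothetical; the V/cm limits are the ones quoted above.

```python
# Minimum conductor length so the voltage per unit length stays below
# the quoted maximum permissible values (quenching duration 100 ms).
v_limit_per_cm = {"SJTU": 0.72, "AMSC_12mm": 0.52, "AMSC_4mm": 1.2}  # V/cm
v_across_sfcl = 1000.0  # V, hypothetical voltage across the SFCL during a fault

min_length_m = {tape: v_across_sfcl / v_cm / 100.0   # cm -> m
                for tape, v_cm in v_limit_per_cm.items()}
```

    A higher permissible V/cm directly translates into a shorter (and cheaper) limiting element, which is why the 4 mm AMSC tape needs the least length here.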

  9. Achieving maximum baryon densities

    International Nuclear Information System (INIS)

    Gyulassy, M.

    1984-01-01

    In continuing work on nuclear stopping power in the energy range E_lab ≈ 10 GeV/nucleon, calculations were made of the energy and baryon densities that could be achieved in uranium-uranium collisions. Results are shown. The energy density reached could exceed 2 GeV/fm³ and baryon densities could reach as high as ten times normal nuclear density.

  10. Maximum entropy tokamak configurations

    International Nuclear Information System (INIS)

    Minardi, E.

    1989-01-01

    The new entropy concept for the collective magnetic equilibria is applied to the description of the states of a tokamak subject to ohmic and auxiliary heating. The condition for the existence of steady state plasma states with vanishing entropy production implies, on one hand, the resilience of specific current density profiles and, on the other, severe restrictions on the scaling of the confinement time with power and current. These restrictions are consistent with Goldston scaling and with the existence of a heat pinch. (author)

  11. Effect of the curing time on the stiffness of mortars produced with Portland cement

    Directory of Open Access Journals (Sweden)

    G. C. R. Garcia

    2011-03-01

    Concrete produced with Portland cement is one of the building materials most widely used worldwide. However, due to its highly complex structure, its properties require in-depth study. Concrete is a mortar consisting of a mixture of cement, sand and coarse aggregates, and its properties rest basically on this mortar base. The aim of this work was to study the change in stiffness of two mortar compositions, cured at 25 °C with cement-to-sand ratios of 1:2 and 1:3, as a function of curing time, using the variation of the Young modulus as the measuring parameter. The results showed that the Young modulus increases up to a maximum value on the 8th day, and that this increase is more pronounced during the first three days. An analysis of the results indicates that a large part of the cement hydration process, involving the formation of the chemical bonds responsible for the mortar stiffness, takes place in the early days of curing.

  12. Maximum Credible Incidents

    CERN Document Server

    Strait, J

    2009-01-01

    Following the incident in sector 34, considerable effort has been made to improve the systems for detecting similar faults and to improve the safety systems to limit the damage if a similar incident should occur. Nevertheless, even after the consolidation and repairs are completed, other faults may still occur in the superconducting magnet systems, which could result in damage to the LHC. Such faults include both direct failures of a particular component or system, or an incorrect response to a “normal” upset condition, for example a quench. I will review a range of faults which could be reasonably expected to occur in the superconducting magnet systems, and which could result in substantial damage and down-time to the LHC. I will evaluate the probability and the consequences of such faults, and suggest what mitigations, if any, are possible to protect against each.

  13. Space-time scenarios of wind power generation produced using a Gaussian copula with parametrized precision matrix

    DEFF Research Database (Denmark)

    Tastu, Julija; Pinson, Pierre; Madsen, Henrik

    The emphasis in this work is placed on generating space-time trajectories (also referred to as scenarios) of wind power generation. This calls for prediction of multivariate densities describing wind power generation at a number of distributed locations and for a number of successive lead times. ...

  14. Attributing impacts to emissions traced to major fossil energy and cement producers over specific historical time periods

    Science.gov (United States)

    Ekwurzel, B.; Frumhoff, P. C.; Allen, M. R.; Boneham, J.; Heede, R.; Dalton, M. W.; Licker, R.

    2017-12-01

    Given the progress in climate change attribution research over the last decade, attribution studies can inform policymakers guided by the UNFCCC principle of "common but differentiated responsibilities." Historically this has focused primarily on nations, yet requests for information on the relative role of the fossil energy sector are growing. We present an approach that relies on annual CH4 and CO2 emissions, from production through to the sale of products, compiled from the records of the largest industrial fossil fuel and cement producers from the mid-nineteenth century to the present (Heede 2014). Analysis of global trends with all natural and human drivers, compared with scenarios that exclude the emissions traced to major carbon producers over the full historical record versus selected periods of recent history, can be policy relevant. This approach can be applied with simple climate models and earth system models, depending on the type of climate impacts being investigated. For example, results from a simple climate model, using best-estimate parameters and emissions traced to the 90 largest carbon producers, illustrate the relative difference in global mean surface temperature increase over 1880-2010 after removing these emissions from 1980-2010 (29-35%) compared with removing them over 1880-2010 (42-50%). The changing relative contributions of the largest climate drivers can be important in helping stakeholders assess the changing risks as they adapt to and reduce exposure and vulnerability to regional climate change impacts.
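    The simplest version of the scaling idea above is the near-linear relation between warming and cumulative CO2 emissions (TCRE): under it, the share of warming traced to a set of producers equals their share of cumulative emissions. All numbers below are illustrative placeholders, not Heede (2014) values or the study's results.

```python
# Linear TCRE-style attribution sketch with hypothetical numbers.
tcre = 0.45e-3              # deg C of warming per GtCO2 (illustrative)
total_emissions = 2400.0    # GtCO2 cumulative over the full period (illustrative)
traced_emissions = 1100.0   # GtCO2 traced to major producers (illustrative)

warming_total = tcre * total_emissions
warming_traced = tcre * traced_emissions
share = warming_traced / warming_total   # fraction of warming attributed
```

    Real attribution studies replace this one-line scaling with simple climate models or ESM ensembles, which is why the quoted ranges (29-35% vs 42-50%) depend on the period over which the traced emissions are removed.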

  15. Space-time scenarios of wind power generation produced using a Gaussian copula with parametrized precision matrix

    Energy Technology Data Exchange (ETDEWEB)

    Tastu, J.; Pinson, P.; Madsen, Henrik

    2013-09-01

    The emphasis in this work is placed on generating space-time trajectories (also referred to as scenarios) of wind power generation. This calls for prediction of multivariate densities describing wind power generation at a number of distributed locations and for a number of successive lead times. A modelling approach taking advantage of sparsity of precision matrices is introduced for the description of the underlying space-time dependence structure. The proposed parametrization of the dependence structure accounts for such important process characteristics as non-constant conditional precisions and direction-dependent cross-correlations. Accounting for the space-time effects is shown to be crucial for generating high quality scenarios. (Author)
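    The core mechanism, sampling a Gaussian copula through a sparse precision matrix, can be sketched directly: draw x ~ N(0, Q^{-1}) using the Cholesky factor of Q, then map the standardised margins to uniforms (which the marginal predictive distributions would turn into wind-power values). The tridiagonal Q below is an illustrative stand-in, not the paper's fitted parametrization.

```python
import math
import numpy as np

T = 6                                    # successive lead times
Q = 2.0 * np.eye(T)                      # sparse (tridiagonal) precision matrix
for t in range(T - 1):
    Q[t, t + 1] = Q[t + 1, t] = -0.9     # neighbouring lead times interact

L = np.linalg.cholesky(Q)                # Q = L L^T
rng = np.random.default_rng(7)
z = rng.normal(size=(T, 1000))           # 1000 scenario draws
x = np.linalg.solve(L.T, z)              # x ~ N(0, Q^{-1}): Cov = (L L^T)^{-1}

sd = np.sqrt(np.diag(np.linalg.inv(Q)))  # standardise the margins
u = 0.5 * (1 + np.vectorize(math.erf)((x / sd[:, None]) / math.sqrt(2)))
```

    Negative off-diagonal precision entries induce positive correlation between adjacent lead times, and the sparsity of Q is what keeps the approach tractable for many sites and horizons.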

  16. Time-dependent H-like and He-like Al lines produced by ultra-short pulse laser

    Energy Technology Data Exchange (ETDEWEB)

    Kato, Takako; Kato, Masatoshi [National Inst. for Fusion Science, Nagoya (Japan); Shepherd, R; Young, B; More, R; Osterheld, Al

    1998-03-01

    We have performed numerical modeling of time-resolved x-ray spectra from thin foil targets heated by the LLNL Ultra-short pulse (USP) laser. The targets were aluminum foils of thickness ranging from 250 Å to 1250 Å, heated with 120 fs pulses of 400 nm light from the USP laser. The laser energy was approximately 0.2 J, focused to a 3 micron spot size for a peak intensity near 2 × 10^19 W/cm^2. Lyα and Heα lines were recorded using a 900 fs x-ray streak camera. We calculate the effective ionization, recombination and emission rate coefficients including density effects for H-like and He-like aluminum ions using a collisional radiative model. We calculate time-dependent ion abundances using these effective ionization and recombination rate coefficients. The time-dependent electron temperature and density used in the calculation are based on an analytical model for the hydrodynamic expansion of the target foils. During the laser pulse the target is ionized. After the laser heating stops, the plasma begins to recombine. Using the calculated time-dependent ion abundances and the effective emission rate coefficients, we calculate the time-dependent Lyα and Heα lines. The calculations reproduce the main qualitative features of the experimental spectra. (author)
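    The time-dependent abundance calculation can be caricatured with just two charge states coupled by an effective ionization rate S(t) that quenches as the foil expands and cools, and a recombination rate α. The rates below are illustrative round numbers, not the collisional-radiative coefficients used in the modeling.

```python
import numpy as np

# Toy two-state rate equations for He-like vs H-like fractional abundance:
#   d f_He/dt = -S(t) f_He + alpha f_H,   f_H = 1 - f_He
dt, steps = 1e-15, 4000                 # 1 fs steps, 4 ps total
f_he, f_h = 1.0, 0.0                    # start fully He-like

for i in range(steps):
    t = i * dt
    S = 1e13 * np.exp(-t / 1e-12)       # ionization rate quenches after heating
    alpha = 2e12                        # recombination rate, 1/s (illustrative)
    d_he = -S * f_he + alpha * f_h
    f_he += d_he * dt                   # forward-Euler step
    f_h -= d_he * dt
```

    The trajectory reproduces the qualitative picture in the abstract: the H-like fraction rises during the pulse and then decays back as recombination takes over, which is what shapes the late-time Lyα emission.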

  17. High resolution soft X-Ray spectrometer with 5-picosecond time-resolution for laser-produced plasma diagnostics

    International Nuclear Information System (INIS)

    Mexmain, J.M.; Bourgade, J.L.; Louis-Jacquet, M.; Mascureau, J. de; Sauneuf, R.; Schwob, J.L.

    1987-01-01

    A new XUV spectrometer designed to have a time-resolution of 3 ps and a spectral resolution of 0.1 Å is described. It is basically a modified version of a Schwob-Fraenkel spectrometer, which is coupled to a new ultrafast electronic streak camera

  18. A new method for obtaining time resolved optical spectra of transients produced by a single pulse of electrons

    International Nuclear Information System (INIS)

    Gordon, S.; Schmidt, K.H.; Martin, J.E.

    1975-01-01

    The essential features of the kinetic spectroscopic method and the kinetic spectrophotometric method are summarized. It is stated that the new method embodies some of the advantages of both. A diagram of the apparatus is shown. This is essentially a version of a conventional pulse radiolysis experimental arrangement with the modification that the usual monochromator is replaced by a spectrograph equipped with a horizontal and a vertical slit, and the usual photomultiplier-amplifier detector is replaced by a streak camera (TRW) incorporating an image converter tube (ICT) and a TV camera interfaced to a 2000-channel Biomation transient recorder. The time-resolved absorption spectrum (or emission spectrum) is displayed on the P-11 phosphor of the ICT. This image is focussed on the photoelements of the TV tube. The TV camera scans the image of the spectrum stored on these elements and the output of this scan is stored in the Biomation. This recorder is in turn interfaced to a Sigma 5 computer. Results are presented for several experiments, from which it is concluded that with the present equipment absorbances down to 0.02 can be measured, and a time resolution of 1 ns can be achieved. It is stated that with improved equipment it should be possible to extend the time resolution of the method to less than 50 picoseconds. (U.K.)

  19. Algorithms of maximum likelihood data clustering with applications

    Science.gov (United States)

    Giada, Lorenzo; Marsili, Matteo

    2002-12-01

    We address the problem of data clustering by introducing an unsupervised, parameter-free approach based on maximum likelihood principle. Starting from the observation that data sets belonging to the same cluster share a common information, we construct an expression for the likelihood of any possible cluster structure. The likelihood in turn depends only on the Pearson's coefficient of the data. We discuss clustering algorithms that provide a fast and reliable approximation to maximum likelihood configurations. Compared to standard clustering methods, our approach has the advantages that (i) it is parameter free, (ii) the number of clusters need not be fixed in advance and (iii) the interpretation of the results is transparent. In order to test our approach and compare it with standard clustering algorithms, we analyze two very different data sets: time series of financial market returns and gene expression data. We find that different maximization algorithms produce similar cluster structures whereas the outcome of standard algorithms has a much wider variability.
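    Since the likelihood in the approach above depends only on Pearson's coefficients, the flavour of the result can be illustrated, though not with the paper's exact likelihood, by hierarchical clustering on a correlation distance. The two-factor toy data below stand in for the financial or gene-expression series.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

# Two latent factors drive two groups of noisy time series (0-2 and 3-5).
rng = np.random.default_rng(42)
n_obs = 500
factors = rng.normal(size=(2, n_obs))
series = np.vstack([factors[i // 3] + 0.4 * rng.normal(size=n_obs)
                    for i in range(6)])

C = np.corrcoef(series)              # Pearson's coefficients: the only input
D = 1.0 - C                          # correlation distance
np.fill_diagonal(D, 0.0)
labels = fcluster(linkage(squareform(D, checks=False), method="average"),
                  t=2, criterion="maxclust")
```

    Like the maximum likelihood approach, this uses only the correlation matrix; unlike it, the number of clusters is fixed here (t=2), whereas the likelihood-based method determines it from the data.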

  20. Credal Networks under Maximum Entropy

    OpenAIRE

    Lukasiewicz, Thomas

    2013-01-01

    We apply the principle of maximum entropy to select a unique joint probability distribution from the set of all joint probability distributions specified by a credal network. In detail, we start by showing that the unique joint distribution of a Bayesian tree coincides with the maximum entropy model of its conditional distributions. This result, however, does not hold anymore for general Bayesian networks. We thus present a new kind of maximum entropy model, which is computed sequentially. ...

  1. Estimating the maximum potential revenue for grid connected electricity storage

    Energy Technology Data Exchange (ETDEWEB)

    Byrne, Raymond Harry; Silva Monroy, Cesar Augusto.

    2012-12-01

    The valuation of an electricity storage device is based on the expected future cash flow generated by the device. Two potential sources of income for an electricity storage system are energy arbitrage and participation in the frequency regulation market. Energy arbitrage refers to purchasing (storing) energy when electricity prices are low, and selling (discharging) energy when electricity prices are high. Frequency regulation is an ancillary service geared towards maintaining system frequency, and is typically procured by the independent system operator in some type of market. This paper outlines the calculations required to estimate the maximum potential revenue from participating in these two activities. First, a mathematical model is presented for the state of charge as a function of the storage device parameters and the quantities of electricity purchased/sold as well as the quantities offered into the regulation market. Using this mathematical model, we present a linear programming optimization approach to calculating the maximum potential revenue from an electricity storage device. The calculation of the maximum potential revenue is critical in developing an upper bound on the value of storage, as a benchmark for evaluating potential trading strategies, and a tool for capital finance risk assessment. Then, we use historical California Independent System Operator (CAISO) data from 2010-2011 to evaluate the maximum potential revenue from the Tehachapi wind energy storage project, an American Recovery and Reinvestment Act of 2009 (ARRA) energy storage demonstration project. We investigate the maximum potential revenue from two different scenarios: arbitrage only and arbitrage combined with the regulation market. Our analysis shows that participation in the regulation market produces four times the revenue compared to arbitrage in the CAISO market using 2010 and 2011 data. Then we evaluate several trading strategies to illustrate how they compare to the
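    The arbitrage-only piece of the linear program described above can be sketched in a few lines: choose hourly charge and discharge quantities to maximise price-weighted net sales, subject to state-of-charge limits expressed as cumulative-sum constraints. Prices and device parameters below are illustrative, not CAISO data or the report's model.

```python
import numpy as np
from scipy.optimize import linprog

# Hourly prices and a small storage device (all values illustrative).
p = np.array([20.0, 15.0, 10.0, 30.0, 45.0, 40.0])   # $/MWh
T = len(p)
e_max, p_max, eta = 4.0, 1.0, 0.9                    # MWh, MW, charge efficiency

# Decision vector x = [c_1..c_T, d_1..d_T] >= 0 (charge/discharge, MW).
# Maximise sum_t p_t (d_t - c_t)  <=>  minimise cost . x below.
cost = np.concatenate([p, -p])

# State of charge after hour t: SOC_t = sum_{s<=t} (eta c_s - d_s);
# require 0 <= SOC_t <= e_max for every t.
lower = np.tril(np.ones((T, T)))
A_soc = np.hstack([eta * lower, -lower])
A_ub = np.vstack([A_soc, -A_soc])
b_ub = np.concatenate([np.full(T, e_max), np.zeros(T)])

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, p_max)] * (2 * T))
revenue = -res.fun
```

    The optimum buys through the three cheap hours and sells the stored (efficiency-reduced) energy into the three expensive hours; the LP value is exactly the "maximum potential revenue" upper bound the report describes.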

  2. Adsorption of Pb(II) and Cu(II) by Ginkgo-Leaf-Derived Biochar Produced under Various Carbonization Temperatures and Times

    Directory of Open Access Journals (Sweden)

    Myoung-Eun Lee

    2017-12-01

    Ginkgo trees are common street trees in Korea, and the large amounts of leaves that fall onto the streets annually need to be collected and treated. Therefore, fallen ginkgo leaves have been used as a raw material to produce biochar for the removal of heavy metals from solutions. Ginkgo-leaf-derived biochar was produced under various carbonization temperatures and times. This study evaluated the physicochemical properties and adsorption characteristics, with respect to Pb(II) and Cu(II), of ginkgo-leaf-derived biochar samples produced under different carbonization conditions. The biochar samples produced at 800 °C for 90 and 120 min contained the highest proportions of oxygen- and nitrogen-substituted carbons, which might contribute to a high metal-adsorption rate. The intensity of the phosphate bond increased with carbonization temperature up to 800 °C and after 90 min of carbonization. The Pb(II) and Cu(II) adsorption capacities were highest when the ginkgo-leaf-derived biochar was produced at 800 °C, with removal rates of 99.2% and 34.2%, respectively. The highest removal rate was achieved when the intensity of the phosphate functional group in the biochar was highest. Therefore, ginkgo-leaf-derived biochar produced at 800 °C for 90 min can be used as an effective bio-adsorbent for the removal of metals from solutions.
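    The removal rates quoted above come from standard batch-adsorption bookkeeping: percent removal from initial and equilibrium concentrations, and sorption capacity per gram of biochar. The concentrations below are hypothetical values chosen only to reproduce a 99.2% removal.

```python
# Standard batch-adsorption calculations (hypothetical concentrations).
c0_pb, ce_pb = 50.0, 0.4     # mg/L Pb(II) before / at equilibrium
v_solution = 0.05            # L of solution
m_biochar = 0.1              # g of biochar

removal_pct = 100.0 * (c0_pb - ce_pb) / c0_pb       # percent removal
q_e = (c0_pb - ce_pb) * v_solution / m_biochar      # mg adsorbed per g biochar
```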

  3. Evaluation of a real-time PCR assay for rectal screening of OXA-48-producing Enterobacteriaceae in a general intensive care unit of an endemic hospital.

    Science.gov (United States)

    Fernández, J; Cunningham, S A; Fernández-Verdugo, A; Viña-Soria, L; Martín, L; Rodicio, M R; Escudero, D; Vazquez, F; Mandrekar, J N; Patel, R

    2017-07-01

    Carbapenemase-producing Enterobacteriaceae are increasing worldwide. Rectal screening for these bacteria can inform the management of infected and colonized patients, especially those admitted to intensive care units (ICUs). A laboratory-developed, qualitative duplex real-time polymerase chain reaction assay for rapid detection of OXA-48-like and VIM producing Enterobacteriaceae, performed on rectal swabs, was designed and evaluated in an intensive care unit with endemic presence of OXA-48. During analytical assay validation, no cross-reactivity was observed and 100% sensitivity and specificity were obtained for both bla OXA-48-like and bla VIM in all spiked clinical samples. During the clinical part of the study, the global sensitivity and specificity of the real-time PCR assay for OXA-48 detection were 95.7% and 100% (P=0.1250), respectively, in comparison with culture; no VIM-producing Enterobacteriaceae were detected. Clinical features of patients in the ICU who were colonized or infected with OXA-48 producing Enterobacteriaceae, including outcome, were analyzed. Most had severe underlying conditions, and had risk factors for colonization with carbapenemase-producing Enterobacteriaceae before or during ICU admission, such as receiving previous antimicrobial therapy, prior healthcare exposure (including long-term care), chronic disease, immunosuppression and/or the presence of an intravascular catheter and/or mechanical ventilation device. The described real-time PCR assay is fast (~2-3 hours, if DNA extraction is included), simple to perform and results are easy to interpret, features which make it applicable in the routine of clinical microbiology laboratories. Implementation in endemic hospitals could contribute to early detection of patients colonized by OXA-48 producing Enterobacteriaceae and prevention of their spread. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Broadband time-resolved elliptical crystal spectrometer for X-ray spectroscopic measurements in laser-produced plasmas

    International Nuclear Information System (INIS)

    Wang Rui-Rong; Jia Guo; Fang Zhi-Heng; Wang Wei; Meng Xiang-Fu; Xie Zhi-Yong; Zhang Fan

    2014-01-01

    The X-ray spectrometer used in high-energy-density plasma experiments generally requires both broad X-ray energy coverage and high temporal, spatial, and spectral resolutions for overcoming the difficulties imposed by the X-ray background, debris, and mechanical shocks. By using an elliptical crystal together with a streak camera, we resolve this issue at the SG-II laser facility. The carefully designed elliptical crystal has a broad spectral coverage with high resolution, strong rejection of the diffuse and/or fluorescent background radiation, and negligible source broadening for extended sources. The spectra that are Bragg reflected (23° < θ < 38°) from the crystal are focused onto a streak camera slit 18 mm long and about 80 μm wide, to obtain a time-resolved spectrum. With experimental measurements, we demonstrate that the quartz(1011) elliptical analyzer at the SG-II laser facility has a single-shot spectral range of (4.64–6.45) keV, a typical spectral resolution of E/ΔE = 560, and an enhanced focusing power in the spectral dimension. For titanium (Ti) data, the lines of interest show a distribution as a function of time and the temporal variations of the He-α and Li-like Ti satellite lines and their spatial profiles show intensity peak red shifts. The spectrometer sensitivity is illustrated with a temporal resolution of better than 25 ps, which satisfies the near-term requirements of high-energy-density physics experiments. (atomic and molecular physics)

  5. 2D Time-lapse Resistivity Monitoring of an Organic Produced Gas Plume in a Landfill using ERT.

    Science.gov (United States)

    Amaral, N. D.; Mendonça, C. A.; Doherty, R.

    2014-12-01

    This project studies a landfill located on the margins of the Tietê River in São Paulo, Brazil, using the electrical resistivity tomography (ERT) method. Due to the high organic matter concentrations in the quaternary sediments of the São Paulo Basin, biogas (CH4 and CO2) produced by anaerobic degradation of the organic matter accumulates at depth. 2D resistivity sections have been acquired from a test area since March 2012, seven datasets in total, the most recent dated October 2013. The surveyed line is 56 m long with a 2 m electrode interval. In addition, two boreholes along the line (one with 3 electrodes, the other with 2) improve data quality and precision. The boreholes also host a multi-level sampling system that indicates the fluid (gas or water) present as a function of depth. With these results it was possible to map the position and extent of the gas plume in the sections, which appears as a positive resistivity anomaly, with the gas level at approximately 5 m depth. Time-lapse analysis (a Matlab script) of the 2D resistivity sections made it possible to map how the biogas volume and position change in the landfill over time. Preliminary results show a preferential gas pathway through the studied subsurface. A consistent relation between gas depth and microbiological data on archaea and bacteria populations was also observed.
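    The time-lapse comparison described above (performed with a Matlab script in the study) amounts to computing the percent resistivity change between co-located inversion models and flagging positive anomalies. A minimal Python sketch with hypothetical model arrays and an assumed anomaly threshold:

    ```python
    import numpy as np

    def resistivity_change(rho_t0, rho_t1):
        """Percent resistivity change between two co-located inversion models."""
        return 100.0 * (rho_t1 - rho_t0) / rho_t0

    # Hypothetical 2-D models (ohm·m): rows = depth cells, cols = positions.
    rho_march_2012 = np.array([[50.0, 55.0, 52.0],
                               [40.0, 42.0, 41.0]])
    rho_oct_2013   = np.array([[50.0, 80.0, 53.0],
                               [40.0, 43.0, 41.0]])

    change = resistivity_change(rho_march_2012, rho_oct_2013)
    gas_mask = change > 20.0  # assumed threshold: a positive anomaly flags biogas
    print(gas_mask)
    ```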

  6. Wood pellets, what else? Greenhouse gas parity times of European electricity from wood pellets produced in the south-eastern United States using different softwood feedstocks

    Energy Technology Data Exchange (ETDEWEB)

    Hanssen, Steef V. [Radboud Univ., Nijmegen (Netherlands). Dept. of Environmental Science, Faculty of Science; Utrecht Univ., Utrecht (The Netherlands). Copernicus Inst. of Sustainable Development, Faculty of Geosciences; Duden, Anna S. [Utrecht Univ., Utrecht (The Netherlands). Copernicus Inst. of Sustainable Development, Faculty of Geosciences; Junginger, Martin [Utrecht Univ., Utrecht (The Netherlands). Copernicus Inst. of Sustainable Development, Faculty of Geosciences; Dale, Virginia H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Environmental Sciences Division, Center for BioEnergy Sustainability; van der Hilst, Floor [Utrecht Univ., Utrecht (The Netherlands). Copernicus Inst. of Sustainable Development, Faculty of Geosciences

    2016-12-29

    Several EU countries import wood pellets from the south-eastern United States. The imported wood pellets are (co-)fired in power plants with the aim of reducing overall greenhouse gas (GHG) emissions from electricity and meeting EU renewable energy targets. To assess whether GHG emissions are reduced and on what timescale, we construct the GHG balance of wood-pellet electricity. This GHG balance consists of supply chain and combustion GHG emissions, carbon sequestration during biomass growth, and avoided GHG emissions through replacing fossil electricity. We investigate wood pellets from four softwood feedstock types: small roundwood, commercial thinnings, harvest residues, and mill residues. Per feedstock, the GHG balance of wood-pellet electricity is compared against those of alternative scenarios. Alternative scenarios are combinations of alternative fates of the feedstock material, such as in-forest decomposition, or the production of paper or wood panels like oriented strand board (OSB). Alternative scenario composition depends on feedstock type and local demand for this feedstock. Results indicate that the GHG balance of wood-pellet electricity equals that of alternative scenarios within 0 to 21 years (the GHG parity time), after which wood-pellet electricity has sustained climate benefits. Parity times increase by a maximum of twelve years when varying key variables (emissions associated with paper and panels, soil carbon increase via feedstock decomposition, wood-pellet electricity supply chain emissions) within maximum plausible ranges. Using commercial thinnings, harvest residues or mill residues as feedstock leads to the shortest GHG parity times (0-6 years) and fastest GHG benefits from wood-pellet electricity. 
Here, we find shorter GHG parity times than previous studies because we use a novel approach that differentiates feedstocks and considers alternative scenarios based on (combinations of) alternative feedstock fates, rather than on alternative land
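    The GHG parity time defined above is the first year in which cumulative emissions of the wood-pellet scenario no longer exceed those of the alternative scenario. A minimal sketch with hypothetical annual emission series (not figures from the study):

    ```python
    def parity_time(pellet_annual, alternative_annual):
        """First year (0-indexed) at which cumulative pellet-scenario GHG
        emissions no longer exceed the alternative's; None if never reached."""
        cum_p = cum_a = 0.0
        for year, (p, a) in enumerate(zip(pellet_annual, alternative_annual)):
            cum_p += p
            cum_a += a
            if cum_p <= cum_a:
                return year
        return None

    # Hypothetical series (t CO2-eq/yr): pellets emit more up front (supply
    # chain + combustion), the alternative emits steadily (fossil electricity
    # + feedstock decomposition).
    pellets     = [10, 2, 2, 2, 2, 2, 2]
    alternative = [4, 4, 4, 4, 4, 4, 4]
    print(parity_time(pellets, alternative))  # 3
    ```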

  7. Effect of milling time and CNT concentration on hardness of CNT/Al2024 composites produced by mechanical alloying

    International Nuclear Information System (INIS)

    Pérez-Bustamante, R.; Pérez-Bustamante, F.; Estrada-Guel, I.; Licea-Jiménez, L.; Miki-Yoshida, M.; Martínez-Sánchez, R.

    2013-01-01

    Carbon nanotube/2024 aluminum alloy (CNT/Al2024) composites were fabricated by a combination of mechanical alloying (MA) and powder metallurgy routes. Composites were microstructurally and mechanically evaluated in the sintered condition. A homogeneous dispersion of CNTs in the Al matrix was observed by field emission scanning electron microscopy. High-resolution transmission electron microscopy confirmed not only the presence of well-dispersed CNTs but also needle-like aluminum carbide (Al4C3) crystals in the Al matrix. The formation of Al4C3 was attributed to the interaction between the outer shells of the CNTs and the Al matrix during the MA process, with crystallization taking place during sintering. The mechanical behavior of the composites was evaluated by Vickers microhardness measurements, indicating a significant improvement in hardness as a function of CNT content. This improvement was associated with the homogeneous dispersion of CNTs and the presence of Al4C3 in the aluminum alloy matrix. - Highlights: ► The 2024 aluminum alloy was reinforced with CNTs by mechanical alloying. ► Composites were microstructurally and mechanically evaluated after sintering. ► The greater the CNT concentration, the greater the hardness of the composites. ► The highest hardness is achieved at 20 h of milling. ► The formation of Al4C3 shows no direct relationship with milling time.

  8. Time-dependent of characteristics of Cu/CuS/n-GaAs/In structure produced by SILAR method

    Energy Technology Data Exchange (ETDEWEB)

    Sağlam, M.; Güzeldir, B., E-mail: msaglam@atauni.edu.tr

    2016-09-15

    Highlights: • The CuS thin film used in the Cu/n-GaAs structure is grown by the SILAR method. • There has been no report on ageing of the characteristics of this junction in the literature. • The properties of the Cu/CuS/n-GaAs/In structure are examined with different methods. • It has been shown that the Cu/CuS/n-GaAs/In structure has a stable interface. - Abstract: The aim of this study is to explain the effects of ageing on the electrical properties of a Cu/n-GaAs Schottky barrier diode with a copper sulphide (CuS) interfacial layer. CuS thin films are deposited on n-type GaAs substrates by the Successive Ionic Layer Adsorption and Reaction (SILAR) method at room temperature. The structural and morphological properties of the films have been investigated by Scanning Electron Microscopy (SEM) and X-Ray Diffraction (XRD). The XRD analysis of the as-grown films showed single-phase covellite with a hexagonal crystal structure built around two preferred orientations corresponding to the (102) and (108) atomic planes. The ageing effects on the electrical properties of the Cu/CuS/n-GaAs/In structure have been investigated. Current–voltage (I–V) measurements at room temperature have been carried out to study the change in the electrical characteristics of the devices as a function of ageing time. The main electrical parameters, such as the ideality factor (n), barrier height (Φ{sub b}), series resistance (R{sub s}), leakage current (I{sub 0}), and interface states (N{sub ss}), have been calculated. The results show that the main electrical parameters of the device remained virtually unchanged.

  9. Time-dependent of characteristics of Cu/CuS/n-GaAs/In structure produced by SILAR method

    International Nuclear Information System (INIS)

    Sağlam, M.; Güzeldir, B.

    2016-01-01

    Highlights: • The CuS thin film used in the Cu/n-GaAs structure is grown by the SILAR method. • There has been no report on ageing of the characteristics of this junction in the literature. • The properties of the Cu/CuS/n-GaAs/In structure are examined with different methods. • It has been shown that the Cu/CuS/n-GaAs/In structure has a stable interface. - Abstract: The aim of this study is to explain the effects of ageing on the electrical properties of a Cu/n-GaAs Schottky barrier diode with a copper sulphide (CuS) interfacial layer. CuS thin films are deposited on n-type GaAs substrates by the Successive Ionic Layer Adsorption and Reaction (SILAR) method at room temperature. The structural and morphological properties of the films have been investigated by Scanning Electron Microscopy (SEM) and X-Ray Diffraction (XRD). The XRD analysis of the as-grown films showed single-phase covellite with a hexagonal crystal structure built around two preferred orientations corresponding to the (102) and (108) atomic planes. The ageing effects on the electrical properties of the Cu/CuS/n-GaAs/In structure have been investigated. Current–voltage (I–V) measurements at room temperature have been carried out to study the change in the electrical characteristics of the devices as a function of ageing time. The main electrical parameters, such as the ideality factor (n), barrier height (Φ_b), series resistance (R_s), leakage current (I_0), and interface states (N_ss), have been calculated. The results show that the main electrical parameters of the device remained virtually unchanged.

  10. Can DEM time series produced by UAV be used to quantify diffuse erosion in an agricultural watershed?

    Science.gov (United States)

    Pineux, N.; Lisein, J.; Swerts, G.; Bielders, C. L.; Lejeune, P.; Colinet, G.; Degré, A.

    2017-03-01

    Erosion and deposition modelling should rely on field data. Currently these data are seldom available at large spatial scales and/or at high spatial resolution. In addition, conventional erosion monitoring approaches are labour intensive and costly. This calls for the development of new approaches for field erosion data acquisition. As a result of rapid technological developments and low cost, unmanned aerial vehicles (UAV) have recently become an attractive means of generating high resolution digital elevation models (DEMs). The use of UAV to observe and quantify gully erosion is now widely established. However, in some agro-pedological contexts, soil erosion results from multiple processes, including sheet and rill erosion, tillage erosion and erosion due to harvest of root crops. These diffuse erosion processes often represent a particular challenge because of the limited elevation changes they induce. In this study, we assess the reliability and development perspectives of UAV for locating and quantifying erosion and deposition in an agricultural watershed with silt loam soils and a smooth relief. Erosion and deposition rates derived from high resolution DEM time series are compared to field measurements. The UAV technique demonstrates a high level of flexibility and can be used, for instance, after a major erosive event. It delivers a very high resolution DEM (pixel size: 6 cm) which allows us to compute high resolution runoff pathways. This could enable us to precisely locate runoff management practices such as fascines. Furthermore, the DEMs can be used diachronically to extract elevation differences before and after a strongly erosive rainfall and be validated by field measurements. Over the two years of analysis, we observed a tendency along the slope from erosion to deposition. Erosion and deposition patterns detected at the watershed scale are also promising. Nevertheless, further development in the

  11. The ideal harvest time for seeds of hybrid maize (Zea mays L.) XY335 and ZD958 produced in multiple environments.

    Science.gov (United States)

    Gu, Riliang; Li, Li; Liang, Xiaolin; Wang, Yanbo; Fan, Tinglu; Wang, Ying; Wang, Jianhua

    2017-12-13

    To identify the ideal harvest time (IHT) for seed production of XY335 and ZD958, six seed-related traits were evaluated in seeds harvested at 11 harvest stages in 8 environments. Standard germination (SG), accelerated aging germination (AAG) and cold test germination (CTG) were vigor traits; hundred-seed weight (HSW) and seed moisture content (SMC) were physiological traits; and the ≥10 °C accumulated temperature from pollination to harvest (AT10ph) was an ecological trait. All traits were significantly affected by harvest stage. The responses of SG, AAG, CTG and HSW to a delayed harvest fit quadratic models, while SMC and AT10ph fit linear models. The IHT (indicated by the last date to reach maximum SG, AAG and CTG) was 57.97 DAP for XY335 and 56.80 DAP for ZD958. SMC and AT10ph at the IHT were 33.15% and 1234 °C for XY335, and 34.98% and 1226 °C for ZD958, respectively. Maximum HSW was reached 5 days after the IHT. Compared to HSW and SMC, AT10ph was more closely related to the seed vigor traits. Together with the fact that AT10ph is less affected by environment, these results suggest that AT10ph may be a novel indicator for determining the IHT.
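    The quadratic response of germination to harvest date described above can be fitted with a short script, with the vertex of the fitted parabola giving the date of maximum germination. A sketch with hypothetical standard-germination data (not the study's measurements), in days after pollination (DAP):

    ```python
    import numpy as np

    # Hypothetical standard germination (%) at successive harvest dates (DAP).
    dap = np.array([35, 40, 45, 50, 55, 60, 65], dtype=float)
    sg  = np.array([62, 75, 85, 92, 96, 96, 93], dtype=float)

    a, b, c = np.polyfit(dap, sg, 2)  # fit SG = a*DAP^2 + b*DAP + c
    dap_max = -b / (2 * a)            # vertex: harvest date of maximum SG
    print(round(dap_max, 1))
    ```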

  12. An Improved Method for Producing High Spatial-Resolution NDVI Time Series Datasets with Multi-Temporal MODIS NDVI Data and Landsat TM/ETM+ Images

    OpenAIRE

    Rao, Yuhan; Zhu, Xiaolin; Chen, Jin; Wang, Jianmin

    2015-01-01

    Due to technical limitations, it is impossible to have high resolution in both spatial and temporal dimensions for current NDVI datasets. Therefore, several methods are developed to produce high resolution (spatial and temporal) NDVI time-series datasets, which face some limitations including high computation loads and unreasonable assumptions. In this study, an unmixing-based method, NDVI Linear Mixing Growth Model (NDVI-LMGM), is proposed to achieve the goal of accurately and efficiently bl...

  13. On the possibility of producing true real-time retinal cross-sectional images using a graphics processing unit enhanced master-slave optical coherence tomography system.

    Science.gov (United States)

    Bradu, Adrian; Kapinchev, Konstantin; Barnes, Frederick; Podoleanu, Adrian

    2015-07-01

    In a previous report, we demonstrated master-slave optical coherence tomography (MS-OCT), an OCT method that does not need resampling of data and can be used to deliver en face images from several depths simultaneously. In a separate report, we have also demonstrated MS-OCT's capability of producing cross-sectional images of a quality similar to those provided by the traditional Fourier domain (FD) OCT technique, but at a much slower rate. Here, we demonstrate that by taking advantage of the parallel processing capabilities offered by the MS-OCT method, cross-sectional OCT images of the human retina can be produced in real time. We analyze the conditions that ensure a true real-time B-scan imaging operation and demonstrate in vivo real-time images from human fovea and the optic nerve, with resolution and sensitivity comparable to those produced using the traditional FD-based method, however, without the need of data resampling.

  14. Maximum entropy principle for transportation

    International Nuclear Information System (INIS)

    Bilich, F.; Da Silva, R.

    2008-01-01

    In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
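    For contrast with the dependence formulation above, the standard constrained maximum-entropy trip-distribution model (the doubly-constrained gravity model) can be solved by iterative proportional fitting. A minimal sketch with hypothetical zone totals and an exponential travel-time deterrence function:

    ```python
    import numpy as np

    def gravity_ipf(origins, destinations, deterrence, iters=100):
        """Doubly-constrained max-entropy trip distribution:
        T_ij = A_i B_j O_i D_j f(c_ij), balanced by iterative
        proportional fitting until row/column totals match."""
        T = np.outer(origins, destinations) * deterrence
        for _ in range(iters):
            T *= (origins / T.sum(axis=1))[:, None]       # match origin totals
            T *= (destinations / T.sum(axis=0))[None, :]  # match destination totals
        return T

    O = np.array([100.0, 200.0])  # hypothetical trips produced per origin zone
    D = np.array([150.0, 150.0])  # hypothetical trips attracted per destination
    f = np.exp(-0.1 * np.array([[5.0, 15.0],
                                [10.0, 5.0]]))  # exp deterrence of travel time
    T = gravity_ipf(O, D, f)
    print(T.round(1))
    ```

    The entropy-maximizing solution is exactly this product form; the article's contribution is to encode the constraint information in dependence coefficients instead.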

  15. Evaluation of an Improved U.S. Food and Drug Administration Method for the Detection of Cyclospora cayetanensis in Produce Using Real-Time PCR.

    Science.gov (United States)

    Murphy, Helen R; Lee, Seulgi; da Silva, Alexandre J

    2017-07-01

    Cyclospora cayetanensis is a protozoan parasite that causes human diarrheal disease associated with the consumption of fresh produce or water contaminated with C. cayetanensis oocysts. In the United States, foodborne outbreaks of cyclosporiasis have been linked to various types of imported fresh produce, including cilantro and raspberries. An improved method was developed for identification of C. cayetanensis in produce at the U.S. Food and Drug Administration. The method relies on a 0.1% Alconox produce wash solution for efficient recovery of oocysts, a commercial kit for DNA template preparation, and an optimized TaqMan real-time PCR assay with an internal amplification control for molecular detection of the parasite. A single laboratory validation study was performed to assess the method's performance and compare the optimized TaqMan real-time PCR assay and a reference nested PCR assay by examining 128 samples. The samples consisted of 25 g of cilantro or 50 g of raspberries seeded with 0, 5, 10, or 200 C. cayetanensis oocysts. Detection rates for cilantro seeded with 5 and 10 oocysts were 50.0 and 87.5%, respectively, with the real-time PCR assay and 43.7 and 94.8%, respectively, with the nested PCR assay. Detection rates for raspberries seeded with 5 and 10 oocysts were 25.0 and 75.0%, respectively, with the real-time PCR assay and 18.8 and 68.8%, respectively, with the nested PCR assay. All unseeded samples were negative, and all samples seeded with 200 oocysts were positive. Detection rates using the two PCR methods were statistically similar, but the real-time PCR assay is less laborious and less prone to amplicon contamination and allows monitoring of amplification and analysis of results, making it more attractive to diagnostic testing laboratories. 
The improved sample preparation steps and the TaqMan real-time PCR assay provide a robust, streamlined, and rapid analytical procedure for surveillance, outbreak response, and regulatory testing of foods for

  16. Quantification of ochratoxin A-producing molds in food products by SYBR Green and TaqMan real-time PCR methods

    DEFF Research Database (Denmark)

    Rodríguez, Alicia; Rodríguez, Mar; Luque, M. Isabel

    2011-01-01

    Ochratoxin A (OTA) is a mycotoxin synthesized by a variety of different fungi, most of them from the genera Penicillium and Aspergillus. Early detection and quantification of OTA-producing species is crucial to improve food safety. In the present work, two protocols of real-time qPCR based on SYBR […] usually reported in food products, were used as references. All strains were tested for OTA production by micellar electrokinetic capillary electrophoresis (MECE) and high-pressure liquid chromatography-mass spectrometry (HPLC-MS). The ability of the optimized qPCR protocols to quantify OTA-producing molds was evaluated in different artificially inoculated foods. A good linear correlation was obtained over the range 1 × 10^4 to 10 conidia/g per reaction for all qPCR assays in the different food matrices (cooked and cured products and fruits). The detection limit in all inoculated foods ranged between […]

  17. Maximum Entropy in Drug Discovery

    Directory of Open Access Journals (Sweden)

    Chih-Yuan Tseng

    2014-07-01

    Full Text Available Drug discovery applies multidisciplinary approaches either experimentally, computationally or both ways to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes’ pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.

  18. PTree: pattern-based, stochastic search for maximum parsimony phylogenies

    OpenAIRE

    Gregor, Ivan; Steinbrück, Lars; McHardy, Alice C.

    2013-01-01

    Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we ...

  19. Development and Use of a Real-Time Quantitative PCR Method for Detecting and Quantifying Equol-Producing Bacteria in Human Faecal Samples and Slurry Cultures

    Directory of Open Access Journals (Sweden)

    Lucía Vázquez

    2017-06-01

    Full Text Available This work introduces a novel real-time quantitative PCR (qPCR) protocol for detecting and quantifying equol-producing bacteria. To this end, two sets of primers targeting the dihydrodaidzein reductase (ddr) and tetrahydrodaidzein reductase (tdr) genes, which are involved in the synthesis of equol, were designed. The primers showed high specificity and sensitivity when used to examine DNA from control bacteria, such as Slackia isoflavoniconvertens, Slackia equolifaciens, Asaccharobacter celatus, Adlercreutzia equolifaciens, and Enterorhabdus mucosicola. To demonstrate the validity and reliability of the protocol, it was used to detect and quantify equol-producing bacteria in human faecal samples and their derived slurry cultures. These samples were provided by 18 menopausal women under treatment for menopause symptoms with a soy isoflavone concentrate, among whom three were known equol producers given the prior detection of the molecule in their urine. The tdr gene was detected in the faeces of all these equol-producing women at about 4–5 log10 copies per gram of faeces. In contrast, the ddr gene was only amplified in the faecal samples of two of these three women, suggesting the presence in the non-amplified sample of reductase genes unrelated to those known to be involved in equol formation and used for primer design in this study. When tdr and ddr were present in the same sample, similar copy numbers of the two genes were recorded. However, no significant increase in the copy number of equol-related genes during isoflavone treatment was observed. Surprisingly, positive amplification of both tdr and ddr was obtained in faecal samples and derived slurry cultures from two non-equol-producing women, suggesting the genes could be non-functional or the daidzein metabolized to other compounds in samples from these two women. This novel qPCR tool provides a technique for monitoring gut microbes that produce equol in humans. 
Monitoring equol-producing
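    Absolute quantification in qPCR assays of this kind is read off a standard curve relating the cycle threshold (Ct) to log10 copy number. A minimal sketch with a hypothetical standard curve (slope, intercept, and Ct values are illustrative, not from the study):

    ```python
    import math

    def copies_per_gram(ct, slope=-3.32, intercept=38.0, dilution_factor=1.0):
        """Copies/g from a hypothetical standard curve
        Ct = slope * log10(copies) + intercept.
        A slope of -3.32 corresponds to ~100% amplification efficiency."""
        log10_copies = (ct - intercept) / slope
        return (10 ** log10_copies) * dilution_factor

    # With this curve, a Ct of 23.06 maps to 4.5 log10 copies,
    # i.e. within the 4-5 log10 copies/g range reported above.
    print(round(math.log10(copies_per_gram(23.06)), 2))  # 4.5
    ```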

  20. Maximum Entropy Closure of Balance Equations for Miniband Semiconductor Superlattices

    Directory of Open Access Journals (Sweden)

    Luis L. Bonilla

    2016-07-01

    Full Text Available Charge transport in nanosized electronic systems is described by semiclassical or quantum kinetic equations that are often costly to solve numerically and difficult to reduce systematically to macroscopic balance equations for densities, currents, temperatures and other moments of macroscopic variables. The maximum entropy principle can be used to close the system of equations for the moments but its accuracy or range of validity are not always clear. In this paper, we compare numerical solutions of balance equations for nonlinear electron transport in semiconductor superlattices. The equations have been obtained from Boltzmann–Poisson kinetic equations very far from equilibrium for strong fields, either by the maximum entropy principle or by a systematic Chapman–Enskog perturbation procedure. Both approaches produce the same current-voltage characteristic curve for uniform fields. When the superlattices are DC voltage biased in a region where there are stable time periodic solutions corresponding to recycling and motion of electric field pulses, the differences between the numerical solutions produced by numerically solving both types of balance equations are smaller than the expansion parameter used in the perturbation procedure. These results and possible new research venues are discussed.

  1. Maximum Power Point Tracking in Variable Speed Wind Turbine Based on Permanent Magnet Synchronous Generator Using Maximum Torque Sliding Mode Control Strategy

    Institute of Scientific and Technical Information of China (English)

    Esmaeil Ghaderi; Hossein Tohidi; Behnam Khosrozadeh

    2017-01-01

    The present study was carried out in order to track the maximum power point in a variable speed wind turbine by minimizing electromechanical torque changes using a sliding mode control strategy. In this strategy, first, the rotor speed is set at an optimal point for different wind speeds. As a result, the tip speed ratio reaches an optimal point, the mechanical power coefficient is maximized, and the wind turbine produces its maximum power and mechanical torque. Then, the maximum mechanical torque is tracked using electromechanical torque. In this technique, the integral of the tracking error of maximum mechanical torque, the error, and the derivative of the error are used as state variables. During changes in wind speed, the sliding mode control is designed to absorb the maximum energy from the wind and minimize the response time of maximum power point tracking (MPPT). In this method, the actual control input signal is formed from a second order integral operation of the original sliding mode control input signal. The result of the second order integral in this model includes control signal integrity, full chattering attenuation, and prevention of large fluctuations in the generator output power. The simulation results, obtained using MATLAB/m-file software, have shown the effectiveness of the proposed control strategy for wind energy systems based on the permanent magnet synchronous generator (PMSG).
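    The optimal operating point referred to above follows from the turbine power equation P = ½ρπR²Cp(λ)v³ with the tip speed ratio λ = ωR/v held at its optimum. A minimal sketch of the MPPT setpoint computation, with hypothetical turbine parameters (R, λ_opt, Cp_max are assumptions, not values from the study):

    ```python
    import math

    def mppt_setpoint(v_wind, R=2.0, rho=1.225, lambda_opt=8.1, cp_max=0.48):
        """Optimal rotor speed (rad/s) and maximum extractable power (W)
        for a given wind speed, assuming hypothetical turbine parameters."""
        omega_opt = lambda_opt * v_wind / R                      # keeps lambda at lambda_opt
        p_max = 0.5 * rho * math.pi * R**2 * cp_max * v_wind**3  # swept-area power * Cp_max
        return omega_opt, p_max

    omega, p = mppt_setpoint(v_wind=10.0)
    print(round(omega, 1), round(p, 1))
    ```

    The sliding mode controller in the abstract drives the generator torque so that the rotor settles at this speed despite wind variations.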

  2. Maximum stellar iron core mass

    Indian Academy of Sciences (India)

    Vol. 60, No. 3, March 2003, pp. 415–422. F W Giacobbe, Chicago Research Center/American Air Liquide. […] iron core compression due to the weight of non-ferrous matter overlying the iron cores within large […] thermal equilibrium velocities will tend to be non-relativistic.

  3. Minimal length, Friedmann equations and maximum density

    Energy Technology Data Exchange (ETDEWEB)

    Awad, Adel [Center for Theoretical Physics, British University of Egypt,Sherouk City 11837, P.O. Box 43 (Egypt); Department of Physics, Faculty of Science, Ain Shams University,Cairo, 11566 (Egypt); Ali, Ahmed Farag [Centre for Fundamental Physics, Zewail City of Science and Technology,Sheikh Zayed, 12588, Giza (Egypt); Department of Physics, Faculty of Science, Benha University,Benha, 13518 (Egypt)

    2014-06-16

    Inspired by Jacobson’s thermodynamic approach, Cai et al. have shown the emergence of the Friedmann equations from the first law of thermodynamics. We extend the Akbar–Cai derivation (http://dx.doi.org/10.1103/PhysRevD.75.084003) of the Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure p(ρ,a) leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature k. As an example we study the evolution of the equation of state p=ωρ through its phase-space diagram to show the existence of a maximum energy density which is reached in a finite time.
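    For orientation, a bounded-density Friedmann equation of the kind the abstract describes can be written schematically as follows (illustrative form only; the paper's actual coefficients follow from its specific GUP-motivated entropy-area law):

    ```latex
    % Illustrative bounded-density modification of the Friedmann equation:
    % the correction term suppresses the expansion rate as \rho \to \rho_{\max}.
    H^2 = \frac{8\pi G}{3}\,\rho\left(1 - \frac{\rho}{\rho_{\max}}\right) - \frac{k}{a^2}
    ```

    In such a form, for k = 0 the Hubble rate H vanishes as ρ → ρ_max, so the density is bounded and the evolution is nonsingular, which is how the big bang singularity becomes inaccessible.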

  4. Neutron spectra unfolding with maximum entropy and maximum likelihood

    International Nuclear Information System (INIS)

    Itoh, Shikoh; Tsunoda, Toshiharu

    1989-01-01

    A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection, and always yields a positive solution over the whole energy range. Moreover, the theory unifies both the overdetermined and the underdetermined problems. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, is removed by virtue of the principle. An approximate expression of the covariance matrix for the resultant spectra is also presented. An efficient algorithm to solve the nonlinear system that appears in the present study has been established. Results of computer simulation showed the effectiveness of the present theory. (author)
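
    The abstract does not give the update equations, but a standard multiplicative scheme for Poisson maximum-likelihood unfolding (the Richardson-Lucy / EM iteration, which by construction keeps the solution positive over the whole energy range, as the record requires) can sketch the idea; the response matrix and spectra here are illustrative, not the authors' data:

```python
import numpy as np

def poisson_ml_unfold(counts, response, iterations=200):
    """Multiplicative EM update for Poisson maximum-likelihood unfolding.

    counts:   measured counts per detector channel (length m)
    response: m x n response matrix mapping spectrum bins to channels
    Returns a non-negative spectrum estimate (length n).
    """
    m, n = response.shape
    phi = np.full(n, counts.sum() / n)        # flat initial guess (the Bayesian prior)
    sens = response.sum(axis=0)               # per-bin detection sensitivity
    for _ in range(iterations):
        pred = response @ phi                 # expected counts per channel
        ratio = np.where(pred > 0, counts / pred, 0.0)
        phi *= (response.T @ ratio) / np.where(sens > 0, sens, 1.0)
    return phi
```

    Because every update is a multiplication of a positive vector by non-negative factors, positivity is preserved at each step, mirroring the property claimed for the authors' theory.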

  5. GREX/COVER-PLASTEX: an experiment to analyze the space-time structure of extensive air showers produced by primary cosmic rays of 10¹⁵ eV

    International Nuclear Information System (INIS)

    Agnetta, G.; Ambrosio, M.; Beaman, J.; Barbarino, G.C.; Biondo, B.; Catalano, O.; Colesanti, L.; Dali, G.; Guarino, F.; Lauro, A.; Lloyd-Evans, J.; Mangano, A.; Popova, L.; Watson, A.A.

    1995-01-01

    A novel experimental installation is described in which the traditional method of detecting extensive air showers with scintillation counters is significantly extended by the addition of limited streamer tube hodoscopes (LST) and layers of resistive plate counters (RPC). Runs with the scintillator array, GREX, at Haverah Park have demonstrated the power of the LST hodoscopes to determine the direction of arrival of muons, electrons and photons in air showers while the RPC system permits the relative arrival time of individual particles and the temporal thickness and structure of the shower disc to be obtained. The potential of these technical advances for studying the longitudinal profile of air showers produced by primaries of about 1000 TeV is briefly discussed. First measurements of thickness and time profile of EAS front are also reported. (orig.)

  6. A maximum power point tracking for photovoltaic-SPE system using a maximum current controller

    Energy Technology Data Exchange (ETDEWEB)

    Muhida, Riza [Osaka Univ., Dept. of Physical Science, Toyonaka, Osaka (Japan); Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Park, Minwon; Dakkak, Mohammed; Matsuura, Kenji [Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Tsuyoshi, Akira; Michira, Masakazu [Kobe City College of Technology, Nishi-ku, Kobe (Japan)

    2003-02-01

    Processes to produce hydrogen from solar photovoltaic (PV)-powered water electrolysis using solid polymer electrolysis (SPE) are reported. An alternative control of maximum power point tracking (MPPT) in the PV-SPE system, based on maximum current searching methods, has been designed and implemented. Based on the voltage-current characteristics and theoretical analysis of SPE, it can be shown that tracking the maximum current output of the DC-DC converter on the SPE side will simultaneously track the maximum power point of the photovoltaic panel. This method uses a proportional-integral controller to control the duty factor of the DC-DC converter with a pulse-width modulator (PWM). The MPPT performance and hydrogen production performance of this method have been evaluated and discussed based on the results of the experiment. (Author)
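
    As a rough sketch of a maximum-current-searching MPPT loop (here a simple perturb-and-observe hill-climb on the duty factor rather than the authors' PI scheme; `measure_current` is a hypothetical measurement callback, not part of the paper):

```python
def track_maximum_current(measure_current, duty=0.5, step=0.01,
                          d_min=0.05, d_max=0.95, n_steps=200):
    """Hill-climbing search for the duty factor that maximizes converter current.

    measure_current(duty) -> output current of the DC-DC converter at that duty.
    Perturb-and-observe: keep stepping in the direction that increases current;
    reverse direction whenever the measured current drops.
    """
    i_prev = measure_current(duty)
    direction = 1
    for _ in range(n_steps):
        duty = min(d_max, max(d_min, duty + direction * step))
        i_now = measure_current(duty)
        if i_now < i_prev:       # current dropped: reverse the search direction
            direction = -direction
        i_prev = i_now
    return duty
```

    At convergence the duty factor oscillates within one step of the maximum-current point, which, per the abstract's argument, coincides with the panel's maximum power point.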

  7. Improving timing sensitivity in the microhertz frequency regime: limits from PSR J1713+0747 on gravitational waves produced by super-massive black-hole binaries

    Science.gov (United States)

    Perera, B. B. P.; Stappers, B. W.; Babak, S.; Keith, M. J.; Antoniadis, J.; Bassa, C. G.; Caballero, R. N.; Champion, D. J.; Cognard, I.; Desvignes, G.; Graikou, E.; Guillemot, L.; Janssen, G. H.; Karuppusamy, R.; Kramer, M.; Lazarus, P.; Lentati, L.; Liu, K.; Lyne, A. G.; McKee, J. W.; Osłowski, S.; Perrodin, D.; Sanidas, S. A.; Sesana, A.; Shaifullah, G.; Theureau, G.; Verbiest, J. P. W.; Taylor, S. R.

    2018-05-01

    We search for continuous gravitational waves (CGWs) produced by individual super-massive black-hole binaries (SMBHBs) in circular orbits using high-cadence timing observations of PSR J1713+0747. We observe this millisecond pulsar using the telescopes in the European Pulsar Timing Array (EPTA) with an average cadence of approximately 1.6 days over the period between April 2011 and July 2015, including an approximately daily average between February 2013 and April 2014. The high-cadence observations are used to improve the pulsar timing sensitivity across the GW frequency range of 0.008 - 5 μHz. We use two algorithms in the analysis, including a spectral fitting method and a Bayesian approach. For an independent comparison, we also use a previously published Bayesian algorithm. We find that the Bayesian approaches provide optimal results and the timing observations of the pulsar place a 95 per cent upper limit on the sky-averaged strain amplitude of CGWs to be ≲ 3.5 × 10⁻¹³ at a reference frequency of 1 μHz. We also find a 95 per cent upper limit on the sky-averaged strain amplitude of low-frequency CGWs to be ≲ 1.4 × 10⁻¹⁴ at a reference frequency of 20 nHz.

  8. On Maximum Entropy and Inference

    Directory of Open Access Journals (Sweden)

    Luigi Gresele

    2017-11-01

    Full Text Available Maximum entropy is a powerful concept that entails a sharp separation between relevant and irrelevant variables. It is typically invoked in inference, once an assumption is made on what the relevant variables are, in order to estimate a model from data that affords predictions on all other (dependent) variables. Conversely, maximum entropy can be invoked to retrieve the relevant variables (sufficient statistics) directly from the data, once a model is identified by Bayesian model selection. We explore this approach in the case of spin models with interactions of arbitrary order, and we discuss how relevant interactions can be inferred. In this perspective, the dimensionality of the inference problem is set not by the number of parameters in the model, but by the frequency distribution of the data. We illustrate the method by showing its ability to recover the correct model in a few prototype cases and discuss its application on a real dataset.

  9. Maximum Water Hammer Sensitivity Analysis

    OpenAIRE

    Jalil Emadi; Abbas Solemani

    2011-01-01

    Pressure waves and water hammer occur in a pumping system when valves are closed or opened suddenly, or in the case of sudden pump failure. Determination of the maximum water hammer is one of the most important technical and economic issues that engineers and designers of pumping stations and conveyance pipelines must address. Hammer Software is a recent application used to simulate water hammer. The present study focuses on determining the significance of ...

  10. Maximum Gene-Support Tree

    Directory of Open Access Journals (Sweden)

    Yunfeng Shan

    2008-01-01

    Full Text Available Genomes and genes diversify during evolution; however, it is unclear to what extent genes still retain the relationship among species. Model species for molecular phylogenetic studies include yeasts and viruses whose genomes have been sequenced, as well as plants that have fossil-supported true phylogenetic trees available. In this study, we generated single gene trees of seven yeast species as well as single gene trees of nine baculovirus species using all the orthologous genes among the species compared. Homologous genes among seven known plants were used for validation of the finding. Four algorithms—maximum parsimony (MP), minimum evolution (ME), maximum likelihood (ML), and neighbor-joining (NJ)—were used. Trees were reconstructed before and after weighting the DNA and protein sequence lengths among genes. Rarely can a single gene generate the “true tree” under all four algorithms. However, the most frequent gene tree, termed the “maximum gene-support tree” (MGS tree, or WMGS tree for the weighted one), in yeasts, baculoviruses, or plants was consistently found to be the “true tree” among the species. The results provide insights into the overall degree of divergence of orthologous genes of the genomes analyzed and suggest the following: (1) the true tree relationship among the species studied is still maintained by the largest group of orthologous genes; (2) there are usually more orthologous genes with higher similarities between genetically closer species than between genetically more distant ones; and (3) the maximum gene-support tree reflects the phylogenetic relationship among the species in the comparison.
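
    The core of the MGS-tree idea, picking the most frequent single-gene topology, can be sketched as follows, assuming each gene tree has already been reduced to a canonical topology string so that identical trees compare equal:

```python
from collections import Counter

def maximum_gene_support_tree(gene_tree_topologies):
    """Return the most frequent single-gene tree topology (the MGS tree),
    its support (number of genes yielding it), and the support fraction.

    gene_tree_topologies: iterable of canonical topology strings,
    one per orthologous gene.
    """
    counts = Counter(gene_tree_topologies)
    topology, support = counts.most_common(1)[0]
    return topology, support, support / sum(counts.values())
```

    Canonicalizing the Newick strings (e.g., sorting children at every node) is assumed to have happened upstream; without it, identical topologies written in different rotations would be counted separately.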

  11. LCLS Maximum Credible Beam Power

    International Nuclear Information System (INIS)

    Clendenin, J.

    2005-01-01

    The maximum credible beam power is defined as the highest credible average beam power that the accelerator can deliver to the point in question, given the laws of physics, the beam line design, and assuming all protection devices have failed. For a new accelerator project, the official maximum credible beam power is determined by project staff in consultation with the Radiation Physics Department, after examining the arguments and evidence presented by the appropriate accelerator physicist(s) and beam line engineers. The definitive parameter becomes part of the project's safety envelope. This technical note will first review the studies that were done for the Gun Test Facility (GTF) at SSRL, where a photoinjector similar to the one proposed for the LCLS is being tested. In Section 3 the maximum charge out of the gun for a single rf pulse is calculated. In Section 4, PARMELA simulations are used to track the beam from the gun to the end of the photoinjector. Finally, in Section 5 the beam through the matching section and injected into Linac-1 is discussed.
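
    The average-power arithmetic behind such an envelope is simple: charge per pulse times beam energy (in eV, numerically equal to joules per unit charge) times repetition rate. The numbers below are illustrative only, not official LCLS parameters:

```python
def average_beam_power(charge_c, energy_ev, rep_rate_hz):
    """Average beam power in watts.

    charge_c:    charge delivered per pulse, in coulombs
    energy_ev:   beam energy in eV (1 eV per elementary charge = 1 J per coulomb)
    rep_rate_hz: pulses per second
    """
    return charge_c * energy_ev * rep_rate_hz

# Illustrative: 1 nC per pulse at 14 GeV and 120 Hz -> 1.68 kW average power.
p_avg = average_beam_power(1e-9, 14e9, 120)
```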

  12. Timing of last deglaciation in the Cantabrian Mountains (Iberian Peninsula; North Atlantic Region) based on in situ-produced 10Be exposure dating

    Science.gov (United States)

    Rodríguez-Rodríguez, Laura; Jiménez-Sánchez, Montserrat; Domínguez-Cuesta, María José; Rinterknecht, Vincent; Pallàs, Raimon; Aumaître, Georges; Bourlès, Didier L.; Keddadouche, Karim; Aster Team

    2017-09-01

    The Last Glacial Termination led to major changes in ice sheet coverage that disrupted global patterns of atmosphere and ocean circulation. Paleoclimate records from Iberia suggest that westerly episodes played a key role in driving heterogeneous climate in the North Atlantic Region. We used 10Be Cosmic Ray Exposure (CRE) dating to explore the glacier response of small mountain glaciers (ca. 5 km²) that developed on the northern slope of the Cantabrian Mountains (Iberian Peninsula), an area directly under the influence of the Atlantic westerly winds. We analyzed twenty boulders from three moraines and one rock glacier arranged as a recessional sequence preserved between 1150 and 1540 m above sea level (a.s.l.) in the Monasterio valley (Redes Natural Park). Results complement previous chronologic data based on radiocarbon and optically stimulated luminescence from the Monasterio valley, which suggest a local Glacial Maximum (local GM) prior to 33 ka BP and a long-standing glacier advance at 24 ka coeval with the global Last Glacial Maximum (LGM). The resultant 10Be CRE ages suggest a progressive retreat and thinning of the Monasterio glacier over the time interval 18.1-16.7 ka. This response is coeval with Heinrich Stadial 1, an extremely cold and dry climate episode initiated by a weakening of the Atlantic Meridional Overturning Circulation (AMOC). Glacier recession continued through the Bølling/Allerød period, as indicated by the minimum exposure ages obtained from a cirque moraine and a rock glacier nested within this moraine, which yielded ages of 14.0 and 13.0 ka, respectively. Together, they suggest that the Monasterio glacier experienced a gradual transition from glacier to rock glacier activity as the AMOC started to strengthen again. Glacial evidence ascribable to the Younger Dryas cooling was not dated in the Monasterio valley, but might have occurred at higher elevations than the evidence dated in this work. The evolution of former glaciers documented in the

  13. METHOD FOR DETERMINING THE MAXIMUM ARRANGEMENT FACTOR OF FOOTWEAR PARTS

    Directory of Open Access Journals (Sweden)

    DRIŞCU Mariana

    2014-05-01

    Full Text Available By classic methodology, designing footwear is a very complex and laborious activity, because it requires many graphic executions by manual means, which consume a great deal of the producer’s time. Moreover, the results of this classic methodology may contain many inaccuracies with most unpleasant consequences for the footwear producer. Thus, a customer who buys a footwear product on the basis of the characteristics written on it (size, width may notice after a period of use that the product has flaws caused by inadequate design. To avoid such situations, the strictest scientific criteria must be followed when designing a footwear product; the decisive step in this direction was made some time ago, as a result of powerful technical development and the large-scale adoption of electronic computing systems and informatics. This paper presents a software product for determining all possible arrangements of a footwear product’s reference points, in order to automatically obtain the maximum arrangement factor. The user multiplies the pattern in order to find the economic arrangement of the reference points. For this purpose, the user must test a few arrangement variants in the translation and rotation-translation systems. The same process is used in establishing the arrangement factor for the two reference points of the designed footwear product. After testing several arrangement variants in the translation and rotation-translation systems, the maximum arrangement factors are chosen. This allows the user to estimate the material waste.
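
    For a periodic translation arrangement, the arrangement (material utilization) factor is simply the pattern area divided by the area of the repeating cell; a minimal sketch with illustrative numbers (not taken from the paper's software):

```python
def arrangement_factor(pattern_area, cell_width, cell_height, parts_per_cell=1):
    """Material utilization of a periodic arrangement of pattern pieces.

    pattern_area:   area of one pattern piece
    cell_width/height: dimensions of the repeating cell of the layout
    parts_per_cell: 1 for a pure translation system; 2 when a second,
                    rotated copy is nested into the same cell
                    (the rotation-translation system).
    Returns the fraction of material actually covered by pieces.
    """
    return parts_per_cell * pattern_area / (cell_width * cell_height)
```

    Testing several candidate cells and keeping the layout with the largest factor is exactly the "probe variants, choose the maximum" procedure the record describes.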

  14. An Improved Method for Producing High Spatial-Resolution NDVI Time Series Datasets with Multi-Temporal MODIS NDVI Data and Landsat TM/ETM+ Images

    Directory of Open Access Journals (Sweden)

    Yuhan Rao

    2015-06-01

    Full Text Available Due to technical limitations, it is impossible to have high resolution in both the spatial and temporal dimensions for current NDVI datasets. Several methods have therefore been developed to produce NDVI time-series datasets with high spatial and temporal resolution, but they face limitations including high computation loads and unreasonable assumptions. In this study, an unmixing-based method, the NDVI Linear Mixing Growth Model (NDVI-LMGM), is proposed to accurately and efficiently blend MODIS NDVI time-series data and multi-temporal Landsat TM/ETM+ images. This method first unmixes the NDVI temporal changes in the MODIS time-series into different land cover types and then uses the unmixed NDVI temporal changes to predict a Landsat-like NDVI dataset. A test over a forest site shows the high accuracy (average difference: −0.0070; average absolute difference: 0.0228; average absolute relative difference: 4.02%) and computational efficiency (31 seconds on a personal computer) of NDVI-LMGM. Experiments over more complex landscapes and long-term time-series demonstrated that NDVI-LMGM performs well in each stage of the vegetation growing season and is robust in regions with contrasting spatial and temporal variations. Comparisons between NDVI-LMGM and current methods, i.e., the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), Enhanced STARFM (ESTARFM) and the Weighted Linear Model (WLM), show that NDVI-LMGM is more accurate and efficient than current methods. The proposed method will benefit land surface process research, which requires a dense NDVI time-series dataset with high spatial resolution.
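
    A minimal sketch of the unmixing step, assuming known land-cover fractions per coarse pixel and solving the linear mixing model by least squares; this is an illustration of the general idea, not the full NDVI-LMGM implementation:

```python
import numpy as np

def unmix_ndvi_change(fractions, coarse_delta):
    """Solve the linear mixing model  coarse_delta ≈ fractions @ class_delta
    for the per-class NDVI temporal change (ordinary least squares).

    fractions:    (n_coarse_pixels, n_classes) land-cover fractions per pixel
    coarse_delta: (n_coarse_pixels,) NDVI change of each coarse (MODIS) pixel
    """
    class_delta, *_ = np.linalg.lstsq(fractions, coarse_delta, rcond=None)
    return class_delta

def predict_fine_ndvi(base_fine_ndvi, fine_class_map, class_delta):
    """Add the unmixed class-level change to each fine (Landsat-like) pixel,
    according to the land-cover class of that pixel."""
    return base_fine_ndvi + class_delta[fine_class_map]
```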

  15. Generic maximum likely scale selection

    DEFF Research Database (Denmark)

    Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo

    2007-01-01

    The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus in this work is on applying this selection principle under a Brownian image model. This image model provides a simple scale invariant prior for natural images and we provide illustrative examples of the behavior of our scale estimation on such images. In these illustrative examples, estimation is based ...

  16. Automatic maximum entropy spectral reconstruction in NMR

    International Nuclear Information System (INIS)

    Mobli, Mehdi; Maciejewski, Mark W.; Gryk, Michael R.; Hoch, Jeffrey C.

    2007-01-01

    Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system

  17. Extreme Maximum Land Surface Temperatures.

    Science.gov (United States)

    Garratt, J. R.

    1992-09-01

    There are numerous reports in the literature of observations of land surface temperatures. Some of these, almost all made in situ, reveal maximum values in the 50°-70°C range, with a few, made in desert regions, near 80°C. Consideration of a simplified form of the surface energy balance equation, utilizing likely upper values of absorbed shortwave flux (1000 W m⁻²) and screen air temperature (55°C), suggests that surface temperatures in the vicinity of 90°-100°C may occur for dry, darkish soils of low thermal conductivity (0.1-0.2 W m⁻¹ K⁻¹). Numerical simulations confirm this and suggest that temperature gradients in the first few centimeters of soil may reach 0.5°-1°C mm⁻¹ under these extreme conditions. The study bears upon the intrinsic interest of identifying extreme maximum temperatures and yields interesting information regarding the comfort zone of animals (including man).
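
    The simplified balance described above can be solved numerically. This sketch balances absorbed shortwave against net longwave emission and sensible heat loss, neglecting ground heat flux and evaporation; the emissivity and transfer coefficient are illustrative assumptions, not values from the paper:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def surface_temperature(absorbed_sw=1000.0, t_air=328.15,
                        emissivity=0.95, h_sens=10.0):
    """Solve a simplified surface energy balance for a dry soil surface:
    absorbed shortwave = net longwave emission + sensible heat loss.
    Returns the equilibrium surface temperature in kelvin."""
    def residual(ts):
        net_lw = emissivity * SIGMA * (ts**4 - t_air**4)
        sensible = h_sens * (ts - t_air)
        return absorbed_sw - net_lw - sensible
    lo, hi = t_air, t_air + 150.0      # residual is positive at lo, negative at hi
    for _ in range(100):               # bisection on the monotone residual
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    With these parameter choices the solution lands near 380 K (roughly 105 °C); modestly larger transfer coefficients bring it into the 90°-100°C range quoted in the abstract.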

  18. Maximum-entropy description of animal movement.

    Science.gov (United States)

    Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M

    2015-03-01

    We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.
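
    For concreteness, one member of this family, the Ornstein-Uhlenbeck process, can be simulated with its exact discretization; this is a generic textbook scheme, not code from the paper:

```python
import numpy as np

def simulate_ou(n, dt, theta, sigma, mu=0.0, x0=0.0, rng=None):
    """Exact-discretization sample path of an Ornstein-Uhlenbeck process
    dX = theta*(mu - X) dt + sigma dW, one of the movement models in the
    maximum-entropy family discussed above.

    The AR(1) coefficient exp(-theta*dt) and the matching innovation
    variance make the scheme exact for any step size dt."""
    rng = np.random.default_rng(0) if rng is None else rng
    a = np.exp(-theta * dt)
    s = sigma * np.sqrt((1.0 - a * a) / (2.0 * theta))
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = mu + (x[i - 1] - mu) * a + s * rng.standard_normal()
    return x

path = simulate_ou(20000, 0.1, theta=1.0, sigma=0.5)
```

    The stationary variance of this process is sigma²/(2·theta), a useful sanity check on any simulation.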

  19. Maximum likelihood estimation for integrated diffusion processes

    DEFF Research Database (Denmark)

    Baltazar-Larios, Fernando; Sørensen, Michael

    We propose a method for obtaining maximum likelihood estimates of parameters in diffusion models when the data is a discrete time sample of the integral of the process, while no direct observations of the process itself are available. The data are, moreover, assumed to be contaminated by measurement errors. Integrated volatility is an example of this type of observations. Another example is ice-core data on oxygen isotopes used to investigate paleo-temperatures. The data can be viewed as incomplete observations of a model with a tractable likelihood function. Therefore we propose a simulated EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works well.

  20. Analysis of time- and space-resolved Na-, Ne-, and F-like emission from a laser-produced bromine plasma

    International Nuclear Information System (INIS)

    Goldstein, W.H.; Young, B.K.F.; Osterheld, A.L.; Stewart, R.E.; Walling, R.S.; Bar-Shalom, A.

    1991-01-01

    Advances in the efficiency and accuracy of computational atomic physics and collisional radiative modeling promise to place the analysis and diagnostic application of L-shell emission on a par with the simpler K-shell regime. Coincident improvements in spectroscopic plasma measurements yield optically thin emission spectra from small, homogeneous regions of plasma, localized both in space and time. Together, these developments can severely test models for high-density, high-temperature plasma formation and evolution, and non-LTE atomic kinetics. In this paper we present highly resolved measurements of n=3 to n=2 X-ray line emission from a laser-produced bromine microplasma. The emission is both space- and time-resolved, allowing us to apply simple, steady-state, 0-dimensional spectroscopic models to the analysis. These relativistic, multi-configurational, distorted wave collisional-radiative models were created using the HULLAC atomic physics package. Using these models, we have analyzed the F-like, Ne-like and Na-like (satellite) spectra with respect to temperature, density and charge-state distribution. This procedure leads to a full characterization of the plasma conditions. 9 refs., 3 figs

  1. A Comparison of Pseudo-Maximum Likelihood and Asymptotically Distribution-Free Dynamic Factor Analysis Parameter Estimation in Fitting Covariance-Structure Models to Block-Toeplitz Representing Single-Subject Multivariate Time-Series

    NARCIS (Netherlands)

    Molenaar, P.C.M.; Nesselroade, J.R.

    1998-01-01

    The study of intraindividual variability pervades empirical inquiry in virtually all subdisciplines of psychology. The statistical analysis of multivariate time-series data - a central product of intraindividual investigations - requires special modeling techniques. The dynamic factor model (DFM),

  2. Multiplex real-time PCR assays for detection of eight Shiga toxin-producing Escherichia coli in food samples by melting curve analysis.

    Science.gov (United States)

    Singh, Prashant; Mustapha, Azlin

    2015-12-23

    Shiga toxin-producing Escherichia coli (STEC) are pathogenic strains of E. coli that can cause bloody diarrhea and kidney failure. Seven STEC serogroups, O157, O26, O45, O103, O111, O121 and O145, are responsible for more than 71% of the total infections caused by this group of pathogens. All seven serogroups are currently considered as adulterants in non-intact beef products in the U.S. In this study, two multiplex melt curve real-time PCR assays with internal amplification controls (IACs) were standardized for the detection of eight STEC serogroups. The first multiplex assay targeted E. coli serogroups O145, O121, O104, and O157, while the second set detected E. coli serogroups O26, O45, O103 and O111. The applicability of the assays was tested using 11 different meat and produce samples. For food samples spiked with a cocktail of four STEC serogroups with a combined count of 10 CFU/25 g food, all targets of the multiplex assays were detected after an enrichment period of 6 h. The assays also worked efficiently when 325 g of food samples were spiked with 10 CFU of STECs. The assays are not dependent on fluorescent-labeled probes or immunomagnetic beads, and can be used for the detection of eight STEC serogroups in less than 11 h. Routine preliminary screening of STECs in food samples is performed by testing for the presence of STEC virulence genes. The assays developed in this study can be useful as a first- or second-tier test for the identification of the eight O serogroup-specific genes in suspected food samples. Copyright © 2015. Published by Elsevier B.V.
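
    Melt curve identification ultimately reduces to matching an observed melt peak against reference Tm values for each target. A toy sketch of that matching step, with purely hypothetical reference temperatures (not the assay's published values):

```python
def assign_serogroup(observed_tm, reference_tms, tolerance=0.5):
    """Assign a melt peak to the serogroup with the nearest reference Tm.

    reference_tms: dict mapping serogroup name -> expected melting temp (°C).
    Returns the serogroup name, or None when no reference Tm lies within
    the tolerance window (an unresolved or off-target peak)."""
    name, ref = min(reference_tms.items(), key=lambda kv: abs(kv[1] - observed_tm))
    return name if abs(ref - observed_tm) <= tolerance else None

# Illustrative reference temperatures only -- NOT the assay's published values.
REFS = {"O157": 81.2, "O26": 84.0, "O45": 86.5, "O103": 88.9}
```

    In a real multiplex assay the reference Tm table would come from the validated amplicons, and well-separated Tm values per channel are what make probe-free melt-curve multiplexing possible.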

  3. An optimized work-flow to reduce time-to-detection of carbapenemase-producing Enterobacteriaceae (CPE) using direct testing from rectal swabs.

    Science.gov (United States)

    O'Connor, C; Kiernan, M G; Finnegan, C; O'Hara, M; Power, L; O'Connell, N H; Dunne, C P

    2017-05-04

    Rapid detection of patients with carbapenemase-producing Enterobacteriaceae (CPE) is essential for the prevention of nosocomial cross-transmission, allocation of isolation facilities and to protect patient safety. Here, we aimed to design a new laboratory work-flow, utilizing existing laboratory resources, in order to reduce the time-to-diagnosis of CPE. A review of the current CPE testing processes and of the literature was performed to identify a real-time commercial polymerase chain reaction (PCR) assay that could facilitate batch testing of CPE clinical specimens, with adequate CPE gene coverage. Stool specimens (210) were collected: CPE-positive inpatients (n = 10) and anonymized community stool specimens (n = 200). Rectal swabs (eSwab™) were inoculated from collected stool specimens and a manual DNA extraction method (QIAamp® DNA Stool Mini Kit) was employed. Extracted DNA was then processed on the Check-Direct CPE® assay. The three-step process of making the eSwab™, extracting DNA manually and running the Check-Direct CPE® assay took substantially less time than the current method of CPE screening, which has an average time-to-diagnosis of 48/72 h. Utilizing this CPE work-flow would allow a 'same-day' result. Antimicrobial susceptibility testing results, as is current practice, would remain a 'next-day' result. In conclusion, the Check-Direct CPE® assay was easily integrated into a local laboratory work-flow and could facilitate a large volume of CPE screening specimens in a single batch, making it cost-effective and convenient for daily CPE testing.

  4. X-ray short-time lags in the Fe-K energy band produced by scattering clouds in active galactic nuclei

    Science.gov (United States)

    Mizumoto, Misaki; Done, Chris; Hagino, Kouichi; Ebisawa, Ken; Tsujimoto, Masahiro; Odaka, Hirokazu

    2018-05-01

    X-rays illuminating the accretion disc in active galactic nuclei give rise to an iron K line and its associated reflection spectrum which are lagged behind the continuum variability by the light-travel time from the source to the disc. The measured lag timescales in the iron band can be as short as ∼Rg/c, where Rg is the gravitational radius, which is often interpreted as evidence for a very small continuum source close to the event horizon of a rapidly spinning black hole. However, the short lags can also be produced by reflection from more distant material, because the primary photons with no time-delay dilute the time-lags caused by the reprocessed photons. We perform a Monte-Carlo simulation to calculate the dilution effect in the X-ray reverberation lags from a half-shell of neutral material placed at 100 Rg from the central source. This gives lags of ∼2 Rg/c, but the iron line is a distinctly narrow feature in the lag-energy plot, whereas the data often show a broader line. We show that both the short lag and the line broadening can be reproduced if the scattering material is outflowing at ∼0.1c. The velocity structure in the wind can also give shifts in the line profile in the lag-energy plot calculated at different frequencies. Hence we propose that the observed broad iron reverberation lags and shifts in profile as a function of frequency of variability can arise from a disc wind at fairly large distances from the X-ray source.
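
    The dilution effect has a simple back-of-envelope form in the small-phase limit: if the lagged (reflected) component carries a flux ratio f relative to the zero-lag continuum, the measured lag shrinks to about f/(1+f) of the intrinsic light-travel lag. The paper itself uses a Monte-Carlo calculation; the formula below is only this standard approximation:

```python
def diluted_lag(intrinsic_lag, flux_ratio):
    """Small-phase dilution approximation for reverberation lags.

    intrinsic_lag: light-travel lag of the reflected component (e.g. in Rg/c)
    flux_ratio:    reflected flux divided by direct (zero-lag) continuum flux
    Returns the lag one would measure from the summed light curve."""
    f = flux_ratio
    return intrinsic_lag * f / (1.0 + f)
```

    For example, a reflector at 100 Rg contributing only a few per cent of the continuum flux yields a measured lag of order a couple of Rg/c, which is the sense of the dilution argument in the abstract.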

  5. System for memorizing maximum values

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1992-08-01

    The invention discloses a system capable of memorizing maximum sensed values. The system includes conditioning circuitry which receives the analog output signal from a sensor transducer. The conditioning circuitry rectifies and filters the analog signal and provides an input signal to a digital driver, which may be either linear or logarithmic. The driver converts the analog signal to discrete digital values, which in turn trigger an output signal on one of a plurality of n driver output lines. The particular output line selected depends on the converted digital value. A microfuse memory device connects across the driver output lines, with n segments. Each segment is associated with one driver output line, and includes a microfuse that is blown when a signal appears on the associated driver output line.

  6. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-09-07

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
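
    The regularizer rests on estimating the mutual information between classifier responses and labels. A plug-in histogram estimate of that quantity (a generic sketch, not the paper's entropy-estimation procedure) looks like:

```python
import numpy as np

def mutual_information(responses, labels, n_bins=10):
    """Plug-in estimate of I(response; label): discretize the continuous
    responses into quantile bins, build the joint histogram with the class
    labels, and sum p * log(p / (p_x * p_y)) over non-empty cells (nats)."""
    bins = np.quantile(responses, np.linspace(0, 1, n_bins + 1))
    r_idx = np.clip(np.digitize(responses, bins[1:-1]), 0, n_bins - 1)
    classes = np.unique(labels)
    joint = np.zeros((n_bins, len(classes)))
    for j, c in enumerate(classes):
        joint[:, j] = np.bincount(r_idx[labels == c], minlength=n_bins)
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)   # marginal over responses
    py = joint.sum(axis=0, keepdims=True)   # marginal over labels
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())
```

    A perfectly separating response attains I = H(label), i.e. log 2 nats for balanced binary labels, which is the quantity such a regularizer pushes the classifier toward.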

  7. Scintillation counter, maximum gamma aspect

    International Nuclear Information System (INIS)

    Thumim, A.D.

    1975-01-01

    A scintillation counter, particularly for counting gamma ray photons, includes a massive lead radiation shield surrounding a sample-receiving zone. The shield can be disassembled into a plurality of segments to allow facile installation and removal of a photomultiplier tube assembly, the segments being so constructed as to prevent straight-line access of external radiation through the shield into radiation-responsive areas. Provisions are made for accurately aligning the photomultiplier tube with respect to one or more sample-transmitting bores extending through the shield to the sample receiving zone. A sample elevator, used in transporting samples into the zone, is designed to provide a maximum gamma-receiving aspect to maximize the gamma detecting efficiency. (U.S.)

  8. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yi; Zhao, Shiguang; Gao, Xin

    2014-01-01

In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize the mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced as much as possible by knowing its classification response. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling the mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
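The quantity being maximized can be made concrete with a simple plug-in estimate: the mutual information between true labels and classification responses computed from their empirical joint distribution. This is only an illustrative sketch (the paper models the mutual information with its own entropy estimator inside the learning objective); the function name below is hypothetical.

```python
import numpy as np

def mutual_information(y_true, y_pred):
    """Estimate I(Y; Yhat) in nats from the empirical joint distribution."""
    labels = sorted(set(y_true) | set(y_pred))
    idx = {c: i for i, c in enumerate(labels)}
    joint = np.zeros((len(labels), len(labels)))
    for t, p in zip(y_true, y_pred):
        joint[idx[t], idx[p]] += 1
    joint /= joint.sum()
    pt = joint.sum(axis=1, keepdims=True)   # marginal of true labels
    pp = joint.sum(axis=0, keepdims=True)   # marginal of responses
    nz = joint > 0                          # skip zero cells in the sum
    return float((joint[nz] * np.log(joint[nz] / (pt @ pp)[nz])).sum())

# A perfect classifier attains I = H(Y); a constant classifier gives I = 0,
# i.e. knowing the response removes none of the label uncertainty.
y = [0, 0, 1, 1]
print(mutual_information(y, y))        # = H(Y) = log 2 ≈ 0.693
print(mutual_information(y, [0] * 4))  # 0.0
```

A regularizer of this form rewards classifiers whose responses are maximally informative about the labels, which is exactly the intuition stated in the abstract.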

  9. Maximum margin semi-supervised learning with irrelevant data.

    Science.gov (United States)

    Yang, Haiqin; Huang, Kaizhu; King, Irwin; Lyu, Michael R

    2015-10-01

Semi-supervised learning (SSL) is a typical learning paradigm that trains a model from both labeled and unlabeled data. Traditional SSL models usually assume that the unlabeled data are relevant to the labeled data, i.e., that they follow the same distribution as the targeted labeled data. In this paper, we address a different, yet formidable scenario in semi-supervised classification, where the unlabeled data may contain data irrelevant to the labeled data. To tackle this problem, we develop a maximum margin model, named the tri-class support vector machine (3C-SVM), to utilize the available training data while seeking a hyperplane that separates the targeted data well. Our 3C-SVM exhibits several characteristics and advantages. First, it does not need any prior knowledge or explicit assumption on the data relatedness. On the contrary, it can relieve the effect of irrelevant unlabeled data based on the logistic principle and the maximum entropy principle. That is, 3C-SVM approaches an ideal classifier: one that relies heavily on labeled data and is confident on the relevant data lying far away from the decision hyperplane, while maximally ignoring the irrelevant data, which are hardly distinguishable. Second, theoretical analysis is provided to establish under what conditions the irrelevant data can help to seek the hyperplane. Third, 3C-SVM is a generalized model that unifies several popular maximum margin models, including standard SVMs, semi-supervised SVMs (S3VMs), and SVMs learned from the universum (U-SVMs), as its special cases. More importantly, we deploy a concave-convex procedure to solve the proposed 3C-SVM, transforming the original mixed integer programming to a semi-definite programming relaxation, and finally to a sequence of quadratic programming subproblems, which yields the same worst-case time complexity as that of S3VMs. Finally, we demonstrate the effectiveness and efficiency of our proposed 3C-SVM through systematic experimental comparisons.

  10. Acceptability and feasibility of a low-cost, theory-based and co-produced intervention to reduce workplace sitting time in desk-based university employees.

    Science.gov (United States)

    Mackenzie, Kelly; Goyder, Elizabeth; Eves, Francis

    2015-12-24

    Prolonged sedentary time is linked with poor health, independent of physical activity levels. Workplace sitting significantly contributes to sedentary time, but there is limited research evaluating low-cost interventions targeting reductions in workplace sitting. Current evidence supports the use of multi-modal interventions developed using participative approaches. This study aimed to explore the acceptability and feasibility of a low-cost, co-produced, multi-modal intervention to reduce workplace sitting. The intervention was developed with eleven volunteers from a large university department in the UK using participative approaches and "brainstorming" techniques. Main components of the intervention included: emails suggesting ways to "sit less" e.g. walking and standing meetings; free reminder software to install onto computers; social media to increase awareness; workplace champions; management support; and point-of-decision prompts e.g. by lifts encouraging stair use. All staff (n = 317) were invited to take part. Seventeen participated in all aspects of the evaluation, completing pre- and post-intervention sitting logs and questionnaires. The intervention was delivered over four weeks from 7th July to 3rd August 2014. Pre- and post-intervention difference in daily workplace sitting time was presented as a mean ± standard deviation. Questionnaires were used to establish awareness of the intervention and its various elements, and to collect qualitative data regarding intervention acceptability and feasibility. Mean baseline sitting time of 440 min/workday was reported with a mean reduction of 26 ± 54 min/workday post-intervention (n = 17, 95 % CI = -2 to 53). All participants were aware of the intervention as a whole, although there was a range of awareness for individual elements of the intervention. The intervention was generally felt to be both acceptable and feasible. Management support was perceived to be a strength, whilst specific

  11. MODEL PREDICTIVE CONTROL FOR PHOTOVOLTAIC STATION MAXIMUM POWER POINT TRACKING SYSTEM

    Directory of Open Access Journals (Sweden)

    I. Elzein

    2015-01-01

Full Text Available The purpose of this paper is to present an alternative maximum power point tracking (MPPT) algorithm for a photovoltaic module (PVM) to produce the maximum power, Pmax, using the optimal duty ratio, D, for different types of converters and load matching. We present a state-based approach to the design of the maximum power point tracker for a stand-alone photovoltaic power generation system. The system under consideration consists of a solar array with nonlinear time-varying characteristics and a step-up converter with an appropriate filter. The proposed algorithm has the advantages of maximizing the efficiency of the power utilization, can be integrated into other MPPT algorithms without affecting the PVM performance, is excellent for real-time applications, and is a robust analytical method, unlike traditional MPPT algorithms, which are based more on trial and error or on comparisons between present and past states. The procedure to calculate the optimal duty ratio for buck, boost and buck-boost converters, to transfer the maximum power from a PVM to a load, is presented in the paper. Additionally, the existence and uniqueness of the optimal internal impedance, to transfer the maximum power from a photovoltaic module using load matching, is proved.
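The load-matching idea can be illustrated with the ideal continuous-conduction-mode input-resistance formulas for the three converter topologies. This is a sketch under textbook ideal-converter assumptions, not the paper's own derivation; the function name and the module values are hypothetical.

```python
import math

def optimal_duty(r_mpp, r_load, topology):
    """Ideal CCM duty ratio D that makes the converter's input resistance
    equal the module's maximum-power-point resistance r_mpp = Vmpp / Impp.
      buck:       R_in = r_load / D**2          (needs r_load <= r_mpp)
      boost:      R_in = r_load * (1 - D)**2    (needs r_load >= r_mpp)
      buck-boost: R_in = r_load * ((1-D)/D)**2  (any ratio)
    """
    if topology == "buck":
        return math.sqrt(r_load / r_mpp)
    if topology == "boost":
        return 1.0 - math.sqrt(r_mpp / r_load)
    if topology == "buck-boost":
        return 1.0 / (1.0 + math.sqrt(r_mpp / r_load))
    raise ValueError(topology)

# Hypothetical module: Vmpp = 17.2 V, Impp = 4.65 A -> r_mpp ≈ 3.70 ohm.
r_mpp = 17.2 / 4.65
print(round(optimal_duty(r_mpp, 10.0, "boost"), 3))        # ≈ 0.392
print(round(optimal_duty(r_mpp, r_mpp, "buck-boost"), 3))  # 0.5
```

When the formula leaves the interval [0, 1], the chosen topology cannot reflect that particular load onto the module's maximum power point, which is why the buck and boost cases carry opposite validity conditions.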

  12. Maximum Principles for Discrete and Semidiscrete Reaction-Diffusion Equation

    Directory of Open Access Journals (Sweden)

    Petr Stehlík

    2015-01-01

Full Text Available We study reaction-diffusion equations with a general reaction function f on one-dimensional lattices with continuous or discrete time: ux′ (or Δt ux) = k(ux−1 − 2ux + ux+1) + f(ux), x ∈ Z. We prove weak and strong maximum and minimum principles for the corresponding initial-boundary value problems. Whereas the maximum principles in the semidiscrete case (continuous time) exhibit similar features to those of the fully continuous reaction-diffusion model, in the discrete case the weak maximum principle holds for a smaller class of functions and the strong maximum principle is valid in a weaker sense. We describe in detail how the validity of maximum principles depends on the nonlinearity and the time step. We illustrate our results on the Nagumo equation with the bistable nonlinearity.
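The dependence on the time step can be seen in a small numerical experiment: an explicit-Euler (discrete-time) iteration of the lattice Nagumo equation with the bistable nonlinearity f(u) = u(1 − u)(u − a) keeps the solution inside the bounds [0, 1] of the initial-boundary data only when the step is small enough. This is a sketch with hypothetical parameter values, not the paper's precise conditions.

```python
import numpy as np

def nagumo_step(u, k, a, tau):
    """One explicit-Euler step of the lattice Nagumo equation
    u_x' = k (u_{x-1} - 2 u_x + u_{x+1}) + u_x (1 - u_x)(u_x - a)
    with fixed (Dirichlet) boundary values at both ends."""
    lap = u[:-2] - 2 * u[1:-1] + u[2:]
    f = u[1:-1] * (1 - u[1:-1]) * (u[1:-1] - a)
    un = u.copy()
    un[1:-1] = u[1:-1] + tau * (k * lap + f)
    return un

# Random initial data in [0, 1]; with a small step the iteration inherits
# the bounds of the initial-boundary data (weak maximum principle).
rng = np.random.default_rng(0)
u = rng.uniform(0, 1, 50)
u[0], u[-1] = 0.0, 1.0
k, a = 1.0, 0.3
small = u.copy()
for _ in range(200):
    small = nagumo_step(small, k, a, tau=0.1)
print(small.min() >= 0.0 and small.max() <= 1.0)  # True
```

With the same k and a step of tau = 1.0, a site at 0 flanked by sites near 1 is pushed above 1 in a single update, illustrating why the discrete-time maximum principle only holds under a restriction on the time step.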

  13. Real-time PCR and enzyme-linked fluorescent assay methods for detecting Shiga-toxin-producing Escherichia coli in mincemeat samples.

    Science.gov (United States)

    Stefan, A; Scaramagli, S; Bergami, R; Mazzini, C; Barbanera, M; Perelle, S; Fach, P

    2007-03-01

    This work aimed to compare real-time polymerase chain reaction (PCR) with the commercially available enzyme-linked fluorescent assay (ELFA) VIDAS ECOLI O157 for detecting Escherichia coli O157 in mincemeat. In addition, a PCR-based survey on Shiga-toxin-producing E. coli (STEC) in mincemeat collected in Italy is presented. Real-time PCR assays targeting the stx genes and a specific STEC O157 sequence (SILO157, a small inserted locus of STEC O157) were tested for their sensitivity on spiked mincemeat samples. After overnight enrichment, the presence of STEC cells could be clearly determined in the 25 g samples containing 10 bacterial cells, while the addition of five bacteria provided equivocal PCR results with Ct values very close to or above the threshold of 40. The PCR tests proved to be more sensitive than the ELFA-VIDAS ECOLI O157, whose detection level started from 50 bacterial cells/25 g of mincemeat. The occurrence of STEC in 106 mincemeat (bovine, veal) samples collected from September to November 2004 at five different points of sale in Italy (one point of sale in Arezzo, Tuscany, central Italy, two in Mantova, Lombardy, Northern Italy, and two in Bologna, Emilia-Romagna, upper-central Italy) was less than 1%. Contamination by the main STEC O-serogroups representing a major public health concern, including O26, O91, O111, O145, and O157, was not detected. This survey indicates that STEC present in these samples are probably not associated with pathogenesis in humans.

  14. Influence of milling time on microstructure and magnetic properties of Fe{sub 80}P{sub 11}C{sub 9} alloy produced by mechanical alloying

    Energy Technology Data Exchange (ETDEWEB)

    Taghvaei, A.H. [Department of Materials Science and Engineering, Shiraz University of Technology, Shiraz (Iran, Islamic Republic of); Ghajari, F., E-mail: fati.ghajari@gmail.com [Department of Materials Science and Engineering, Shiraz University, Shiraz (Iran, Islamic Republic of); Markó, D. [IFW Dresden, Institute for Complex Materials, Helmholtzstr. 20, 01069 Dresden (Germany); Prashanth, K.G. [IFW Dresden, Institute for Complex Materials, Helmholtzstr. 20, 01069 Dresden (Germany); Additive manufacturing Center, Sandvik AB, 81181 Sandviken (Sweden)

    2015-12-01

Fe{sub 80}P{sub 11}C{sub 9} alloy with an amorphous/nanocrystalline microstructure has been synthesized by mechanical alloying of the elemental powders. The microstructure, thermal behavior and morphology of the produced powders have been studied by X-ray diffraction (XRD), differential scanning calorimetry (DSC) and scanning electron microscopy (SEM), respectively. The crystallite size, lattice strain and fraction of the amorphous phase have been calculated by the Rietveld refinement method. The results indicate that the powder microstructure consists of α-Fe(P,C) nanocrystals with an average diameter of 9±1 nm dispersed in the amorphous matrix after 90 h of milling. Moreover, the fraction of the amorphous phase initially increases up to 90 h of milling and then decreases after 120 h of milling, as a result of mechanical crystallization and formation of the Fe{sub 2}P phase. The magnetic measurements show that while the saturation magnetization decreases continuously with the milling time, the coercivity exhibits a complicated trend. The correlation between microstructural changes and magnetic properties is discussed in detail. - Highlights: • Glass formation was investigated in Fe{sub 80}P{sub 11}C{sub 9} by mechanical alloying. • Structural parameters were calculated by the Rietveld refinement method. • Milling first increased and then decreased the fraction of amorphous phase. • Magnetic properties were significantly changed upon milling.

  15. Real-time genomic investigation underlying the public health response to a Shiga toxin-producing Escherichia coli O26:H11 outbreak in a nursery.

    Science.gov (United States)

    Moran-Gilad, J; Rokney, A; Danino, D; Ferdous, M; Alsana, F; Baum, M; Dukhan, L; Agmon, V; Anuka, E; Valinsky, L; Yishay, R; Grotto, I; Rossen, J W A; Gdalevich, M

    2017-10-01

    Shiga toxin-producing Escherichia coli (STEC) is a significant cause of gastrointestinal infection and the haemolytic-uremic syndrome (HUS). STEC outbreaks are commonly associated with food but animal contact is increasingly being implicated in its transmission. We report an outbreak of STEC affecting young infants at a nursery in a rural community (three HUS cases, one definite case, one probable case, three possible cases and five carriers, based on the combination of clinical, epidemiological and laboratory data) identified using culture-based and molecular techniques. The investigation identified repeated animal contact (animal farming and petting) as a likely source of STEC introduction followed by horizontal transmission. Whole genome sequencing (WGS) was used for real-time investigation of the incident and revealed a unique strain of STEC O26:H11 carrying stx2a and intimin. Following a public health intervention, no additional cases have occurred. This is the first STEC outbreak reported from Israel. WGS proved as a useful tool for rapid laboratory characterization and typing of the outbreak strain and informed the public health response at an early stage of this unusual outbreak.

  16. Irreversible and endoreversible behaviors of the LD-model for heat devices: the role of the time constraints and symmetries on the performance at maximum χ figure of merit

    Science.gov (United States)

    Gonzalez-Ayala, Julian; Calvo Hernández, A.; Roco, J. M. M.

    2016-07-01

The main unified energetic properties of low-dissipation heat engines and refrigerators allow for both endoreversible and irreversible configurations. This is accomplished by means of the constraints imposed on the characteristic global operation time or on the contact times between the working system and the external heat baths, modulated by the dissipation symmetries. A suited unified figure of merit (which becomes the power output for heat engines) is analyzed and the influence of the symmetries on the optimum performance is discussed. The obtained results, independent of any heat transfer law, are compared with those obtained from Carnot-like heat models, where specific heat transfer laws are needed. Thus, it is shown that only the inverse phenomenological law, often used in linear irreversible thermodynamics, correctly reproduces all optimized values for both the efficiency and the coefficient of performance.

  17. Maximum entropy and Bayesian methods

    International Nuclear Information System (INIS)

    Smith, C.R.; Erickson, G.J.; Neudorfer, P.O.

    1992-01-01

Bayesian probability theory and Maximum Entropy methods are at the core of a new view of scientific inference. These 'new' ideas, along with the revolution in computational methods afforded by modern computers, allow astronomers, electrical engineers, image processors of any type, NMR chemists and physicists, and anyone at all who has to deal with incomplete and noisy data, to take advantage of methods that, in the past, have been applied only in some areas of theoretical physics. The title workshops have been the focus of a group of researchers from many different fields, and this diversity is evident in this book. There are tutorial and theoretical papers, and applications in a very wide variety of fields. Almost any instance of dealing with incomplete and noisy data can be usefully treated by these methods, and many areas of theoretical research are being enhanced by the thoughtful application of Bayes' theorem. The contributions contained in this volume present a state-of-the-art overview that will be influential and useful for many years to come.

  18. Modelling maximum likelihood estimation of availability

    International Nuclear Information System (INIS)

    Waller, R.A.; Tietjen, G.L.; Rock, G.W.

    1975-01-01

Suppose the performance of a nuclear powered electrical generating plant is continuously monitored to record the sequence of failures and repairs during sustained operation. The purpose of this study is to assess one method of estimating the performance of the power plant when the measure of performance is availability, that is, the probability that the plant is operational at time t. To study the availability of a power plant, we first assume statistical models for the variables X and Y, which denote the time-to-failure and time-to-repair variables, respectively. Once those statistical models are specified, the availability, A(t), can be expressed as a function of some or all of their parameters. Usually those parameters are unknown in practice and so A(t) is unknown. This paper discusses the maximum likelihood estimator of A(t) when the time-to-failure model for X is an exponential density with parameter lambda, and the time-to-repair model for Y is an exponential density with parameter theta. Under the assumption of exponential models for X and Y, it follows that the instantaneous availability at time t is A(t) = lambda/(lambda+theta) + [theta/(lambda+theta)]·exp[−((1/lambda) + (1/theta))t] for t > 0. Also, the steady-state availability is A(∞) = lambda/(lambda+theta). We use the observations from n failure-repair cycles of the power plant, say X1, X2, ..., Xn, Y1, Y2, ..., Yn, to present the maximum likelihood estimators of A(t) and A(∞). The exact sampling distributions of those estimators and some statistical properties are discussed before a simulation model is used to determine 95% simulation intervals for A(t). The methodology is applied to two examples which approximate the operating history of two nuclear power plants. (author)
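Note that in these formulas lambda and theta play the role of the exponential means (mean time-to-failure and mean time-to-repair), consistent with A(∞) = lambda/(lambda+theta). By the invariance property of maximum likelihood, the MLE of A(t) is obtained by plugging the MLEs of the means, the sample averages, into the formula. A minimal sketch with hypothetical failure/repair data:

```python
import math

def availability_mle(x_fail, y_repair, t):
    """MLE of instantaneous availability A(t) under exponential models,
    where lambda and theta denote the mean time-to-failure and mean
    time-to-repair; the MLEs of the means are the sample averages."""
    lam = sum(x_fail) / len(x_fail)        # estimated mean time to failure
    theta = sum(y_repair) / len(y_repair)  # estimated mean time to repair
    s = lam + theta
    return lam / s + (theta / s) * math.exp(-(1 / lam + 1 / theta) * t)

# As t grows the estimate settles at the steady-state value lambda/(lambda+theta).
x = [90.0, 110.0, 100.0]   # hours up (hypothetical observations)
y = [8.0, 12.0, 10.0]      # hours down
print(availability_mle(x, y, 0.0))   # 1.0: the plant starts in the up state
print(availability_mle(x, y, 1e6))   # ≈ 100/110 ≈ 0.909
```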

  19. Multiplex real-time PCR assay for detection of Escherichia coli O157:H7 and screening for non-O157 Shiga toxin-producing E. coli.

    Science.gov (United States)

    Li, Baoguang; Liu, Huanli; Wang, Weimin

    2017-11-09

Shiga toxin-producing Escherichia coli (STEC), including E. coli O157:H7, are responsible for numerous foodborne outbreaks annually worldwide. E. coli O157:H7, as well as pathogenic non-O157:H7 STECs, can cause life-threatening complications, such as bloody diarrhea (hemorrhagic colitis) and hemolytic-uremic syndrome (HUS). Previously, we developed a real-time PCR assay to detect E. coli O157:H7 in foods by targeting a unique putative fimbriae protein, Z3276. To extend the detection spectrum of the assay, we report a multiplex real-time PCR assay to specifically detect E. coli O157:H7 and screen for non-O157 STEC by targeting Z3276 and the Shiga toxin genes (stx1 and stx2). Also, an internal amplification control (IAC) was incorporated into the assay to monitor amplification efficiency. The multiplex real-time PCR assay was developed on the Life Technologies ABI 7500 System platform with the standard chemistry. The optimal amplification mixture of the assay contains 12.5 μl of 2× Universal Master Mix (Life Technologies), 200 nM forward and reverse primers, appropriate concentrations of the four probes [Z3276 (80 nM), stx1 (80 nM), stx2 (20 nM), and IAC (40 nM)], 2 μl of template DNA, and water (to a total volume of 25 μl). The amplification conditions of the assay were set as follows: activation of TaqMan at 95 °C for 10 min, then 40 cycles of denaturation at 95 °C for 10 s and annealing/extension at 60 °C for 60 s. The multiplex assay was optimized for amplification conditions. The limit of detection (LOD) for the multiplex assay was determined to be 200 fg of bacterial DNA, equivalent to 40 CFU per reaction, which is similar to the LOD obtained in the single-target PCRs. Inclusivity and exclusivity testing was performed with 196 bacterial strains. All E. coli O157:H7 strains (n = 135) were detected as positive, and all STEC strains (n = 33) were positive for stx1, stx2, or both stx1 and stx2 (Table 1). No cross-reactivity was detected with Salmonella
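The stated equivalence of 200 fg of DNA to roughly 40 CFU can be sanity-checked from genome mass alone. The calculation below assumes an average double-stranded base-pair mass of about 650 g/mol and an E. coli O157:H7 genome of roughly 5.5 Mb; both are assumed round figures, not values from the study.

```python
AVOGADRO = 6.022e23   # molecules per mole
BP_MASS = 650.0       # g/mol per double-stranded base pair (average, assumed)
GENOME_BP = 5.5e6     # assumed E. coli O157:H7 genome size, ~5.5 Mb

genome_fg = GENOME_BP * BP_MASS / AVOGADRO * 1e15   # femtograms per genome
copies = 200.0 / genome_fg                          # genome copies in the 200 fg LOD
print(round(genome_fg, 1), round(copies))           # 5.9 34
```

About 34 genome copies per reaction, in good agreement with the ~40 CFU figure quoted for the LOD.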

  20. Selection of Suitable Internal Control Genes for Accurate Normalization of Real-Time Quantitative PCR Data of Buffalo (Bubalus bubalis) Blastocysts Produced by SCNT and IVF.

    Science.gov (United States)

    Sood, Tanushri Jerath; Lagah, Swati Viviyan; Sharma, Ankita; Singla, Suresh Kumar; Mukesh, Manishi; Chauhan, Manmohan Singh; Manik, Radheysham; Palta, Prabhat

    2017-10-01

    We evaluated the suitability of 10 candidate internal control genes (ICGs), belonging to different functional classes, namely ACTB, EEF1A1, GAPDH, HPRT1, HMBS, RPS15, RPS18, RPS23, SDHA, and UBC for normalizing the real-time quantitative polymerase chain reaction (qPCR) data of blastocyst-stage buffalo embryos produced by hand-made cloning and in vitro fertilization (IVF). Total RNA was isolated from three pools, each of cloned and IVF blastocysts (n = 50/pool) for cDNA synthesis. Two different statistical algorithms geNorm and NormFinder were used for evaluating the stability of these genes. Based on gene stability measure (M value) and pairwise variation (V value), calculated by geNorm analysis, the most stable ICGs were RPS15, HPRT1, and ACTB for cloned blastocysts, HMBS, UBC, and HPRT1 for IVF blastocysts and RPS15, GAPDH, and HPRT1 for both the embryo types analyzed together. RPS18 was the least stable gene for both cloned and IVF blastocysts. Following NormFinder analysis, the order of stability was RPS15 = HPRT1>GAPDH for cloned blastocysts, HMBS = UBC>RPS23 for IVF blastocysts, and HPRT1>GAPDH>RPS15 for cloned and IVF blastocysts together. These results suggest that despite overlapping of the three most stable ICGs between cloned and IVF blastocysts, the panel of ICGs selected for normalization of qPCR data of cloned and IVF blastocyst-stage embryos should be different.
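geNorm's gene-stability measure M, used above to rank the candidate ICGs, can be reproduced in a few lines: for each candidate gene it is the mean standard deviation of the pairwise log2 expression ratios with all other candidates, and genes are ranked by ascending M. A sketch; the toy expression matrix is hypothetical.

```python
import numpy as np

def genorm_m(expr):
    """geNorm gene-stability measure M for each candidate reference gene.
    expr: (n_samples, n_genes) array of relative expression quantities.
    M[j] is the average, over all other genes k, of the standard deviation
    across samples of log2(expr[:, j] / expr[:, k]); lower M = more stable."""
    log_expr = np.log2(expr)
    n_genes = expr.shape[1]
    m = np.empty(n_genes)
    for j in range(n_genes):
        ratios = log_expr[:, j:j + 1] - log_expr   # log ratios vs every gene
        sd = ratios.std(axis=0, ddof=1)
        m[j] = np.delete(sd, j).mean()             # exclude the self-ratio
    return m

# Toy matrix: genes 0 and 1 co-vary perfectly, gene 2 fluctuates independently.
expr = np.array([[1., 2., 1.],
                 [2., 4., 8.],
                 [4., 8., 2.]])
m = genorm_m(expr)
print(m.argmax())  # 2: the fluctuating gene gets the highest (worst) M
```

geNorm additionally drops the worst gene and recomputes M iteratively, and uses the pairwise variation V to decide how many ICGs are needed; the sketch covers only the core M statistic.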

  1. Last Glacial Maximum Salinity Reconstruction

    Science.gov (United States)

    Homola, K.; Spivack, A. J.

    2016-12-01

It has previously been demonstrated that salinity can be reconstructed from sediment porewater. The goal of our study is to reconstruct high-precision salinity during the Last Glacial Maximum (LGM). Salinity is usually determined at high precision via conductivity, which requires a larger volume of water than can be extracted from a sediment core, or via chloride titration, which yields lower than ideal precision. It has been demonstrated for water column samples that high-precision density measurements can be used to determine salinity at the precision of a conductivity measurement using the equation of state of seawater. However, water column seawater has a relatively constant composition, in contrast to porewater, where variations from standard seawater composition occur. These deviations, which affect the equation of state, must be corrected for through precise measurements of each ion's concentration and knowledge of its apparent partial molar density in seawater. We have developed a density-based method for determining porewater salinity that requires only 5 mL of sample, achieving density precisions of 10⁻⁶ g/mL. We have applied this method to porewater samples extracted from long cores collected along a N-S transect across the western North Atlantic (R/V Knorr cruise KN223). Density was determined to a precision of 2.3×10⁻⁶ g/mL, which translates to a salinity uncertainty of 0.002 g/kg if the effect of differences in composition is well constrained. Concentrations of anions (Cl⁻ and SO₄²⁻) and cations (Na⁺, Mg²⁺, Ca²⁺, and K⁺) were measured. To correct salinities at the precision required to unravel LGM Meridional Overturning Circulation, our ion precisions must be better than 0.1% for SO₄²⁻/Cl⁻ and Mg²⁺/Na⁺, and 0.4% for Ca²⁺/Na⁺ and K⁺/Na⁺. Alkalinity, pH and dissolved inorganic carbon of the porewater were determined to precisions better than 4% when ratioed to Cl⁻, and used to calculate HCO₃⁻ and CO₃²⁻. Apparent partial molar densities in seawater were
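The link between density precision and salinity precision is, to first order, linear. The sketch below propagates the quoted density precision through an assumed haline density derivative of roughly 0.76 kg m⁻³ per g/kg, a typical near-surface seawater value that is not taken from the study.

```python
# First-order propagation of density precision into salinity precision.
# Assumed haline density derivative (not a value from the study):
# d(rho)/dS ≈ 0.76 kg m^-3 per (g/kg) near surface seawater conditions.
DRHO_DS = 0.76

density_precision_g_per_ml = 2.3e-6                          # as achieved on the KN223 cores
density_precision_kg_per_m3 = density_precision_g_per_ml * 1000.0
delta_s = density_precision_kg_per_m3 / DRHO_DS
print(round(delta_s, 4))  # 0.003
```

This back-of-envelope estimate lands at ~0.003 g/kg, the same order as the 0.002 g/kg quoted above; the exact figure depends on the equation-of-state derivative used.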

  2. Maximum Parsimony on Phylogenetic networks

    Science.gov (United States)

    2012-01-01

Background Phylogenetic networks are generalizations of phylogenetic trees that are used to model evolutionary events in various contexts. Several different methods and criteria have been introduced for reconstructing phylogenetic trees. Maximum Parsimony is a character-based approach that infers a phylogenetic tree by minimizing the total number of evolutionary steps required to explain a given set of data assigned on the leaves. Exact solutions for optimizing parsimony scores on phylogenetic trees have been introduced in the past. Results In this paper, we define the parsimony score on networks as the sum of the substitution costs along all the edges of the network, and show that certain well-known algorithms that calculate the optimum parsimony score on trees, such as the Sankoff and Fitch algorithms, extend naturally to networks, barring conflicting assignments at the reticulate vertices. We provide heuristics for finding the optimum parsimony scores on networks. Our algorithms can be applied for any cost matrix that may contain unequal substitution costs of transforming between different characters along different edges of the network. We analyzed this for experimental data on 10 leaves or fewer with at most 2 reticulations and found that for almost all networks, the bounds returned by the heuristics matched the exhaustively determined optimum parsimony scores. Conclusion The parsimony score we define here does not directly reflect the cost of the best tree in the network that displays the evolution of the character. However, when searching for the most parsimonious network that describes a collection of characters, it becomes necessary to add additional cost considerations to prefer simpler structures, such as trees over networks. The parsimony score on a network that we describe here takes into account the substitution costs along the additional edges incident on each reticulate vertex, in addition to the substitution costs along the other edges, which are
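On trees, the Fitch algorithm mentioned above computes the optimum parsimony score for unit substitution costs in a single bottom-up pass: take the intersection of the children's state sets when it is nonempty, otherwise take their union at a cost of one substitution. A minimal sketch for a rooted binary tree (the dict-based tree encoding is hypothetical); extending it to networks requires the extra handling of reticulate vertices that the abstract describes.

```python
def fitch(tree, leaf_states):
    """Fitch small-parsimony count on a rooted binary tree.
    tree: dict node -> (left_child, right_child); leaves appear only in
    leaf_states. Returns (candidate state set at the root, #substitutions)."""
    def visit(node):
        if node in leaf_states:
            return {leaf_states[node]}, 0
        left, right = tree[node]
        ls, lc = visit(left)
        rs, rc = visit(right)
        inter = ls & rs
        if inter:
            return inter, lc + rc
        return ls | rs, lc + rc + 1   # union step costs one substitution

    return visit("root")

# The leaf pattern ((A,C),(C,G)) has three distinct states, so Fitch
# parsimony needs two substitutions.
tree = {"root": ("n1", "n2"), "n1": ("a", "b"), "n2": ("c", "d")}
states, cost = fitch(tree, {"a": "A", "b": "C", "c": "C", "d": "G"})
print(cost)  # 2
```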

  3. Narrow band interference cancelation in OFDM: A structured maximum likelihood approach

    KAUST Repository

    Sohail, Muhammad Sadiq; Al-Naffouri, Tareq Y.; Al-Ghadhban, Samir N.

    2012-01-01

    This paper presents a maximum likelihood (ML) approach to mitigate the effect of narrow band interference (NBI) in a zero-padded orthogonal frequency division multiplexing (ZP-OFDM) system. The NBI is assumed to be time-variant and asynchronous

  4. Development of Quantitative Competitive PCR and Absolute Based Real-Time PCR Assays for Quantification of The Butyrate Producing Bacterium: Butyrivibrio fibrisolvens

    Directory of Open Access Journals (Sweden)

    Mojtaba Tahmoorespur

    2016-04-01

Full Text Available Introduction Butyrivibrio fibrisolvens strains are presently recognized as the major butyrate-producing bacteria found in the rumen and digestive tract of many animals and also in the human gut. In this study we report the development of two DNA-based techniques, quantitative competitive PCR (QC-PCR) and absolute real-time PCR, for enumerating Butyrivibrio fibrisolvens strains. Despite the recent introduction of the real-time PCR method for rapid quantification of target DNA sequences, the QC-PCR technique continues to play an important role in nucleic acid quantification since it is more cost effective. The procedure relies on the co-amplification of the sequence of interest with a serially diluted synthetic DNA fragment of known concentration (the competitor), using a single set of primers. Real-time PCR is a laboratory technique of molecular biology based on the polymerase chain reaction (PCR); it monitors the amplification of a targeted DNA molecule during the PCR. Materials and Methods At first, previously reported species-specific primers targeting the 16S rDNA region of the bacterium Butyrivibrio fibrisolvens were used for amplifying a 213 bp fragment. A DNA competitor differing by 50 bp in length from the 213 bp fragment was constructed and cloned into the pTZ57R/T vector. The competitor was quantified with a NanoDrop spectrophotometer, serially diluted and co-amplified by PCR with total DNA extracted from rumen fluid samples. PCR products were quantified by photographing agarose gels and analyzing them with ImageJ software, and the amount of amplified target DNA was plotted on a log scale against the amount of amplified competitor. The coefficient of determination (R2) was used as a criterion of methodological precision. For developing the real-time PCR technique, the 213 bp fragment was amplified, cloned into pTZ57R/T, and used to draw a standard curve. Results and Discussion The specific primers of Butyrivibrio
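The quantification step of QC-PCR reduces to finding the equivalence point of the dilution series: regress the log band-intensity ratio on the log competitor input and solve for where the ratio equals one, since at that point the target copy number equals the known competitor input. An illustrative sketch with simulated band intensities; all names and numbers are hypothetical.

```python
import numpy as np

def qcpcr_equivalence(competitor_ng, target_band, competitor_band):
    """Regress log10(target/competitor band intensity) on log10(competitor
    input) and solve for the equivalence point (intensity ratio = 1),
    where the target amount equals the competitor input."""
    x = np.log10(np.asarray(competitor_ng))
    y = np.log10(np.asarray(target_band) / np.asarray(competitor_band))
    slope, intercept = np.polyfit(x, y, 1)
    return 10.0 ** (-intercept / slope), slope

# Simulated dilution series with a fixed target of 0.1 ng-equivalents:
# co-amplification with one primer set splits the signal between templates.
comp = np.array([0.01, 0.03, 0.1, 0.3, 1.0])
target_band = 0.1 / (0.1 + comp)
comp_band = comp / (0.1 + comp)
amount, slope = qcpcr_equivalence(comp, target_band, comp_band)
print(round(amount, 3))  # 0.1
```

In an ideal series the fitted slope is −1; strong departures from that value signal unequal amplification efficiencies between target and competitor.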

  5. Does combined strength training and local vibration improve isometric maximum force? A pilot study.

    Science.gov (United States)

    Goebel, Ruben; Haddad, Monoem; Kleinöder, Heinz; Yue, Zengyuan; Heinen, Thomas; Mester, Joachim

    2017-01-01

The aim of the study was to determine whether a combination of strength training (ST) and local vibration (LV) improved the isometric maximum force of the arm flexor muscles. ST was applied to the left arm of the subjects; LV was applied to the right arm of the same subjects. The main aim was to examine the effect of LV during a dumbbell biceps curl (Scott curl) on the isometric maximum force of the opposite-arm muscles in the same subjects. It was hypothesized that the intervention with LV produces a greater gain in isometric force of the arm flexors than ST. Twenty-seven collegiate students participated in the study. The training load was 70% of the individual 1 RM. Four sets of 12 repetitions were performed three times per week for four weeks. The right arm of all subjects represented the vibration-trained side (VS) and the left arm served as the traditionally trained side (TTS). A significant increase in isometric maximum force occurred in both arms. The VS, however, significantly increased isometric maximum force by about 43%, in contrast to 22% for the TTS. The combined intervention of ST and LV improves the isometric maximum force of the arm flexor muscles. III.

  6. Validation of the Thermo Scientific SureTect Escherichia coli O157:H7 Real-Time PCR Assay for Raw Beef and Produce Matrixes.

    Science.gov (United States)

    Cloke, Jonathan; Crowley, Erin; Bird, Patrick; Bastin, Ben; Flannery, Jonathan; Agin, James; Goins, David; Clark, Dorn; Radcliff, Roy; Wickstrand, Nina; Kauppinen, Mikko

    2015-01-01

The Thermo Scientific™ SureTect™ Escherichia coli O157:H7 Assay is a new real-time PCR assay which has been validated through the AOAC Research Institute (RI) Performance Tested Methods(SM) program for raw beef and produce matrixes. This validation study specifically validated the assay with 375 g samples of raw ground beef and raw beef trim at 1:4 and 1:5 ratios in comparison to the U.S. Department of Agriculture, Food Safety and Inspection Service, Microbiology Laboratory Guidebook (USDA-FSIS/MLG) reference method, and with 25 g samples of bagged spinach and fresh apple juice at a ratio of 1:10 in comparison to the reference method detailed in International Organization for Standardization (ISO) 16654:2001. For raw beef matrixes, the validation of both the 1:4 and 1:5 ratios allows user flexibility with the enrichment protocol; which of the two ratios a laboratory chooses should be based on its specific test requirements. All matrixes were analyzed by Thermo Fisher Scientific, Microbiology Division, Vantaa, Finland, and Q Laboratories Inc., Cincinnati, Ohio, in the method developer study. Two of the matrixes (raw ground beef at both 1:4 and 1:5 ratios) and bagged spinach were additionally analyzed in the AOAC-RI controlled independent laboratory study, which was conducted by Marshfield Food Safety, Marshfield, Wisconsin. Using probability of detection statistical analysis, no significant difference was demonstrated by the SureTect kit in comparison to the USDA-FSIS reference method for raw beef matrixes, or to the ISO reference method for bagged spinach and apple juice. Inclusivity and exclusivity testing was conducted with 58 E. coli O157:H7 and 54 non-E. coli O157:H7 isolates, respectively, which demonstrated that the SureTect assay was able to detect all isolates of E. coli O157:H7 analyzed. In addition, all but one of the nontarget isolates were correctly interpreted as negative by the SureTect software. The single isolate giving a positive result was an E

  7. Combining Experiments and Simulations Using the Maximum Entropy Principle

    DEFF Research Database (Denmark)

    Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten

    2014-01-01

    Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We review the maximum entropy approach in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges.
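    The reweighting step at the heart of the maximum entropy approach can be sketched in a few lines. In the following Python example, the function name, the samples and the target value are all invented for illustration (not taken from the papers above): an ensemble of "simulation" samples is exponentially tilted so that the ensemble average of one observable matches an "experimental" target.

```python
import math
import random

def maxent_reweight(samples, observable, target, lo=-50.0, hi=50.0, tol=1e-10):
    """Find the Lagrange multiplier lam such that weights w_i ~ exp(lam*f(x_i))
    reproduce the target average of f. This is the minimal, one-observable
    form of maximum-entropy reweighting of a simulation ensemble."""
    f = [observable(x) for x in samples]

    def tilted_mean(lam):
        m = max(lam * fi for fi in f)              # log-sum-exp for stability
        w = [math.exp(lam * fi - m) for fi in f]
        z = sum(w)
        return sum(wi * fi for wi, fi in zip(w, f)) / z

    # The tilted mean is monotone in lam (its derivative is a variance),
    # so plain bisection converges.
    assert tilted_mean(lo) <= target <= tilted_mean(hi), "target outside sample range"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if tilted_mean(mid) < target:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    m = max(lam * fi for fi in f)
    w = [math.exp(lam * fi - m) for fi in f]
    z = sum(w)
    return lam, [wi / z for wi in w]

random.seed(0)
samples = [random.gauss(0.0, 1.0) for _ in range(2000)]   # mock "simulation" ensemble
lam, weights = maxent_reweight(samples, lambda x: x, target=0.5)
reweighted = sum(w * x for w, x in zip(weights, samples))
print(round(reweighted, 3))  # ≈ 0.5 by construction
```

    In practice the same tilting is applied to many observables at once, and the multipliers are found jointly; the one-dimensional case keeps the structure visible.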

  9. Efficient heuristics for maximum common substructure search.

    Science.gov (United States)

    Englert, Péter; Kovács, Péter

    2015-05-26

    Maximum common substructure search is a computationally hard optimization problem with diverse applications in the field of cheminformatics, including similarity search, lead optimization, molecule alignment, and clustering. Most of these applications have strict constraints on running time, so heuristic methods are often preferred. However, the development of an algorithm that is both fast enough and accurate enough for most practical purposes is still a challenge. Moreover, in some applications, the quality of a common substructure depends not only on its size but also on various topological features of the one-to-one atom correspondence it defines. Two state-of-the-art heuristic algorithms for finding maximum common substructures have been implemented at ChemAxon Ltd., and effective heuristics have been developed to improve both their efficiency and the relevance of the atom mappings they provide. The implementations have been thoroughly evaluated and compared with existing solutions (KCOMBU and Indigo). The heuristics have been found to greatly improve the performance and applicability of the algorithms. The purpose of this paper is to introduce the applied methods and present the experimental results.

  10. Two-dimensional maximum entropy image restoration

    International Nuclear Information System (INIS)

    Brolley, J.E.; Lazarus, R.B.; Suydam, B.R.; Trussell, H.J.

    1977-07-01

    An optical check problem was constructed to test P LOG P maximum entropy restoration of an extremely distorted image. Useful recovery of the original image was obtained. Comparison with maximum a posteriori restoration is made. 7 figures

  11. Bistability, non-ergodicity, and inhibition in pairwise maximum-entropy models.

    Science.gov (United States)

    Rostami, Vahid; Porta Mana, PierGianLuca; Grün, Sonja; Helias, Moritz

    2017-10-01

    Pairwise maximum-entropy models have been used in neuroscience to predict the activity of neuronal populations, given only the time-averaged correlations of the neuron activities. This paper provides evidence that the pairwise model, applied to experimental recordings, would produce a bimodal distribution for the population-averaged activity, and for some population sizes the second mode would peak at high activities, which experimentally would be equivalent to 90% of the neuron population being active within time windows of a few milliseconds. Several problems are connected with this bimodality: 1. The presence of the high-activity mode is unrealistic in view of observed neuronal activity and on neurobiological grounds. 2. Boltzmann learning becomes non-ergodic, hence the pairwise maximum-entropy distribution cannot be found: in fact, Boltzmann learning would produce an incorrect distribution; similarly, common variants of mean-field approximations also produce an incorrect distribution. 3. The Glauber dynamics associated with the model is unrealistically bistable and cannot be used to generate realistic surrogate data. This bimodality problem is first demonstrated for an experimental dataset from 159 neurons in the motor cortex of a macaque monkey. Evidence is then provided that this problem affects typical neural recordings of population sizes of a couple of hundred or more neurons. The cause of the bimodality problem is identified as the inability of standard maximum-entropy distributions with a uniform reference measure to model neuronal inhibition. To eliminate this problem a modified maximum-entropy model is presented, which reflects a basic effect of inhibition in the form of a simple but non-uniform reference measure. This model does not lead to unrealistic bimodalities, can be found with Boltzmann learning, and has an associated Glauber dynamics which incorporates a minimal asymmetric inhibition.
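    The Glauber dynamics discussed above can be reproduced with a toy homogeneous pairwise model. In this hedged sketch, the field h and coupling j are illustrative values chosen so that the mean-field equation m = σ(h + jm) has both a low-activity (≈0.07) and a high-activity (≈0.93) fixed point; they are not parameters fitted to the recordings analyzed in the paper.

```python
import math
import random

def glauber_population_activity(n=100, h=-3.0, j=6.0, steps=200_000, seed=1):
    """Glauber dynamics for a homogeneous pairwise maximum-entropy (Ising-like)
    model with 0/1 units and energy E(s) = -h*sum(s) - (j/n)*sum_{i<k} s_i s_k.
    Returns periodic samples of the population-averaged activity."""
    rng = random.Random(seed)
    s = [0] * n
    active = 0                     # running count of active units
    trace = []
    for t in range(steps):
        i = rng.randrange(n)
        # local field on unit i from all other units (exclude self-coupling)
        field = h + (j / n) * (active - s[i])
        p_on = 1.0 / (1.0 + math.exp(-field))   # P(s_i = 1 | rest of network)
        new = 1 if rng.random() < p_on else 0
        active += new - s[i]
        s[i] = new
        if t % 100 == 0:
            trace.append(active / n)
    return trace

trace = glauber_population_activity()
print(min(trace), max(trace))
```

    With bistable parameters the chain dwells near one mode for long stretches, which is exactly why Boltzmann learning driven by such sampling becomes non-ergodic in practice.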

  12. The radial distribution of cosmic rays in the heliosphere at solar maximum

    Science.gov (United States)

    McDonald, F. B.; Fujii, Z.; Heikkila, B.; Lal, N.

    2003-08-01

    To obtain a more detailed profile of the radial distribution of galactic (GCRs) and anomalous (ACRs) cosmic rays, a unique time in the 11-year solar activity cycle has been selected - that of solar maximum. At this time of minimum cosmic ray intensity a simple, straightforward normalization technique has been found that allows the cosmic ray data from IMP 8, Pioneer 10 (P-10) and Voyagers 1 and 2 (V1, V2) to be combined for the solar maxima of cycles 21, 22 and 23. This combined distribution reveals a functional form of the radial gradient that varies as G_0/r with G_0 being constant and relatively small in the inner heliosphere. After a transition region between ˜10 and 20 AU, G_0 increases to a much larger value that remains constant between ˜25 and 82 AU. This implies that at solar maximum the changes that produce the 11-year modulation cycle are mainly occurring in the outer heliosphere between ˜15 AU and the termination shock. These observations are not inconsistent with the concept that Global Merged Interaction Regions (GMIRs) are the principal agent of modulation between solar minimum and solar maximum. There does not appear to be a significant change in the amount of heliosheath modulation occurring between the 1997 solar minimum and the cycle 23 solar maximum.

  13. Evaluation of probable maximum snow accumulation: Development of a methodology for climate change studies

    Science.gov (United States)

    Klein, Iris M.; Rousseau, Alain N.; Frigon, Anne; Freudiger, Daphné; Gagnon, Patrick

    2016-06-01

    Probable maximum snow accumulation (PMSA) is one of the key variables used to estimate the spring probable maximum flood (PMF). A robust methodology for evaluating the PMSA is imperative so the ensuing spring PMF is a reasonable estimation. This is of particular importance in times of climate change (CC) since it is known that solid precipitation in Nordic landscapes will in all likelihood change over the next century. In this paper, a PMSA methodology based on simulated data from regional climate models is developed. Moisture maximization represents the core concept of the proposed methodology; precipitable water being the key variable. Results of stationarity tests indicate that CC will affect the monthly maximum precipitable water and, thus, the ensuing ratio to maximize important snowfall events. Therefore, a non-stationary approach is used to describe the monthly maximum precipitable water. Outputs from three simulations produced by the Canadian Regional Climate Model were used to give first estimates of potential PMSA changes for southern Quebec, Canada. A sensitivity analysis of the computed PMSA was performed with respect to the number of time-steps used (so-called snowstorm duration) and the threshold for a snowstorm to be maximized or not. The developed methodology is robust and a powerful tool to estimate the relative change of the PMSA. Absolute results are in the same order of magnitude as those obtained with the traditional method and observed data; but are also found to depend strongly on the climate projection used and show spatial variability.
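    The moisture-maximization core of the methodology reduces, for a single storm, to scaling the storm's precipitation by a precipitable-water ratio. A minimal sketch follows; all numbers are hypothetical, and the cap on the ratio is a common convention rather than the threshold used in the paper.

```python
def maximized_snowfall(storm_snowfall_mm, storm_pw_mm, max_pw_mm, ratio_cap=2.0):
    """Moisture maximization: scale a storm's snowfall (snow water equivalent)
    by the ratio of the (monthly) maximum precipitable water to the storm's
    precipitable water. A cap on the ratio is commonly applied; the cap value
    here is illustrative."""
    ratio = min(max_pw_mm / storm_pw_mm, ratio_cap)
    return storm_snowfall_mm * ratio

# Hypothetical storm: 45 mm snow water equivalent with 12 mm precipitable
# water, against a monthly maximum precipitable water of 18 mm -> ratio 1.5.
print(maximized_snowfall(45.0, 12.0, 18.0))  # 67.5
```

    Under a non-stationary approach, `max_pw_mm` would itself be a function of time, which is how climate-change signals enter the maximized events.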

  14. Producing cement

    Energy Technology Data Exchange (ETDEWEB)

    Stone, E G

    1923-09-12

    A process and apparatus are described for producing Portland cement in which pulverized shale is successively heated in a series of inclined rotary retorts having internal stirrers and oil gas outlets, which are connected to condensers. The partially treated shale is removed from the lowermost retort by a conveyor, then fed separately or conjointly into pipes and thence into a number of vertically disposed retorts. Each of these retorts may be fitted interiorly with vertical arranged conveyors which elevate the shale and discharge it over a lip, from whence it falls to the bottom of the retorts. The lower end of each casing is furnished with an adjustable discharge door through which the spent shale is fed to a hopper, thence into separate trucks. The oil gases generated in the retorts are exhausted through pipes to condensers. The spent shale is conveyed to a bin and mixed while hot with ground limestone. The admixed materials are then ground and fed to a rotary kiln which is fired by the incondensible gases derived from the oil gases obtained in the previous retorting of the shale. The calcined materials are then delivered from the rotary kiln to rotary coolers. The waste gases from the kiln are utilized for heating the retorts in which the ground shale is heated for the purpose of extracting therefrom the contained hydrocarbon oils and gases.

  15. Using MathWorks' Simulink® and Real-Time Workshop® Code Generator to Produce Attitude Control Test and Flight Code

    OpenAIRE

    Salada, Mark; Dellinger, Wayne

    1998-01-01

    This paper describes the use of a commercial product, MathWorks' RealTime Workshop® (RTW), to generate actual flight code for NASA's Thermosphere, Ionosphere, Mesosphere Energetics and Dynamics (TIMED) mission. The Johns Hopkins University Applied Physics Laboratory is handling the design and construction of this satellite for NASA. As TIMED is scheduled to launch in May of the year 2000, software development for both ground and flight systems is well under way. However, based on experien...

  16. Hydraulic Limits on Maximum Plant Transpiration

    Science.gov (United States)

    Manzoni, S.; Vico, G.; Katul, G. G.; Palmroth, S.; Jackson, R. B.; Porporato, A. M.

    2011-12-01

    Photosynthesis occurs at the expense of water losses through transpiration. As a consequence of this basic carbon-water interaction at the leaf level, plant growth and ecosystem carbon exchanges are tightly coupled to transpiration. In this contribution, the hydraulic constraints that limit transpiration rates under well-watered conditions are examined across plant functional types and climates. The potential water flow through plants is proportional to both xylem hydraulic conductivity (which depends on plant carbon economy) and the difference in water potential between the soil and the atmosphere (the driving force that pulls water from the soil). Differently from previous works, we study how this potential flux changes with the amplitude of the driving force (i.e., we focus on xylem properties and not on stomatal regulation). Xylem hydraulic conductivity decreases as the driving force increases due to cavitation of the tissues. As a result of this negative feedback, more negative leaf (and xylem) water potentials would provide a stronger driving force for water transport, while at the same time limiting xylem hydraulic conductivity due to cavitation. Here, the leaf water potential value that allows an optimum balance between driving force and xylem conductivity is quantified, thus defining the maximum transpiration rate that can be sustained by the soil-to-leaf hydraulic system. To apply the proposed framework at the global scale, a novel database of xylem conductivity and cavitation vulnerability across plant types and biomes is developed. Conductivity and water potential at 50% cavitation are shown to be complementary (in particular between angiosperms and conifers), suggesting a tradeoff between transport efficiency and hydraulic safety. Plants from warmer and drier biomes tend to achieve larger maximum transpiration than plants growing in environments with lower atmospheric water demand. The predicted maximum transpiration and the corresponding leaf water
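    The supply limit described above can be sketched numerically: integrate a vulnerability curve k(ψ) from the leaf water potential up to the soil water potential, and locate the plateau of the resulting supply function. The Weibull parameters below are illustrative placeholders, not values from the database mentioned in the abstract.

```python
import math

def k_xylem(psi, k_max=5.0, d=2.0, c=3.0):
    """Weibull-type vulnerability curve: hydraulic conductivity declines as
    the water potential psi (MPa, negative) becomes more negative due to
    cavitation. All parameter values are illustrative."""
    return k_max * math.exp(-((-psi / d) ** c))

def supply(psi_leaf, psi_soil=-0.1, n=500):
    """Transpiration sustainable at a given leaf water potential: the integral
    of k(psi) from psi_leaf up to psi_soil (trapezoid rule)."""
    h = (psi_soil - psi_leaf) / n
    s = 0.5 * (k_xylem(psi_leaf) + k_xylem(psi_soil))
    s += sum(k_xylem(psi_leaf + i * h) for i in range(1, n))
    return s * h

# Lowering psi_leaf strengthens the driving force but destroys conductivity,
# so the supply function saturates at a maximum sustainable transpiration.
psis = [-0.1 - 0.02 * i for i in range(1, 400)]
E = [supply(p) for p in psis]
E_max = max(E)
print(round(E_max, 2))
```

    The plateau value plays the role of the maximum transpiration rate in the text: driving the leaf water potential any lower yields essentially no additional flux, only additional cavitation.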

  17. TRENDS IN ESTIMATED MIXING DEPTH DAILY MAXIMUMS

    Energy Technology Data Exchange (ETDEWEB)

    Buckley, R.; DuPont, A.; Kurzeja, R.; Parker, M.

    2007-11-12

    Mixing depth is an important quantity in the determination of air pollution concentrations. Fire-weather forecasts depend strongly on estimates of the mixing depth as a means of determining the altitude and dilution (ventilation rates) of smoke plumes. The Savannah River United States Forest Service (USFS) routinely conducts prescribed fires at the Savannah River Site (SRS), a heavily wooded Department of Energy (DOE) facility located in southwest South Carolina. For many years, the Savannah River National Laboratory (SRNL) has provided forecasts of weather conditions in support of the fire program, including an estimated mixing depth using potential temperature and turbulence change with height at a given location. This paper examines trends in the average estimated mixing depth daily maximum at the SRS over an extended period of time (4.75 years) derived from numerical atmospheric simulations using two versions of the Regional Atmospheric Modeling System (RAMS). This allows for differences to be seen between the model versions, as well as trends on a multi-year time frame. In addition, comparisons of predicted mixing depth for individual days in which special balloon soundings were released are also discussed.

  18. Mixed integer linear programming for maximum-parsimony phylogeny inference.

    Science.gov (United States)

    Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell

    2008-01-01

    Reconstruction of phylogenetic trees is a fundamental problem in computational biology. While excellent heuristic methods are available for many variants of this problem, new advances in phylogeny inference will be required if we are to be able to continue to make effective use of the rapidly growing stores of variation data now being gathered. In this paper, we present two integer linear programming (ILP) formulations to find the most parsimonious phylogenetic tree from a set of binary variation data. One method uses a flow-based formulation that can produce exponential numbers of variables and constraints in the worst case. The method has, however, proven extremely efficient in practice on datasets that are well beyond the reach of the available provably efficient methods, solving several large mtDNA and Y-chromosome instances within a few seconds and giving provably optimal results in times competitive with fast heuristics that cannot guarantee optimality. An alternative formulation establishes that the problem can be solved with a polynomial-sized ILP. We further present a web server developed based on the exponential-sized ILP that performs fast maximum parsimony inferences and serves as a front end to a database of precomputed phylogenies spanning the human genome.
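    The ILP formulations themselves are too large for a short example, but the objective they optimize, the minimum number of state changes, is easy to illustrate. The following sketch applies Fitch's small-parsimony pass to score one fixed tree for one binary character; this is a related classical routine, not the paper's method, which searches over trees.

```python
def fitch_score(tree, leaf_states):
    """Fitch small parsimony for one binary character: the minimum number of
    state changes on a fixed rooted binary tree. `tree` is a nested tuple of
    leaf names; `leaf_states` maps each leaf name to 0 or 1. (Illustration
    only: the paper's ILPs optimize over tree topologies as well.)"""
    changes = 0

    def state_sets(node):
        nonlocal changes
        if isinstance(node, str):              # leaf
            return {leaf_states[node]}
        left, right = node
        a, b = state_sets(left), state_sets(right)
        inter = a & b
        if inter:
            return inter                       # intersection: no change needed
        changes += 1                           # union step costs one mutation
        return a | b

    state_sets(tree)
    return changes

tree = (("A", "B"), ("C", "D"))
print(fitch_score(tree, {"A": 0, "B": 0, "C": 1, "D": 1}))  # 1 change
```

    Summing this score over all variable sites gives the parsimony length of a candidate tree; the most parsimonious tree minimizes that sum.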

  19. The capability of alfalfa grazing- and concentrate-based feeding systems to produce homogeneous carcass quality in light lambs over time

    Energy Technology Data Exchange (ETDEWEB)

    Ripoll, G.; Alvarez-Rodriguez, J.; Sanz, A.; Joy, M.

    2014-06-01

    The effects of grazing on the carcasses and meat of light lambs are unclear, mainly due to variations in weather conditions and pasture production, which affect the growth of lambs and the quality of their carcasses. The aim of this study was to evaluate the effect of feeding systems, which varied in intensification due to the use of concentrate, on the growth and carcass traits of light lambs, and the capability of these feeding systems to produce homogeneous lamb carcasses over the course of several years. The average daily weight gain of grazing lambs, but not of lambs fed indoors, varied among years. The colour of the Rectus abdominis muscle and the amount of fat were more variable in grazing lambs (from 2.7 to 6.3) than in indoor lambs (from 4.5 to 5.1). Grazing feeding systems without concentrate supplementation are more dependent on the year than indoor feeding systems. This climatologic dependence could lead to the slaughter of older grazing lambs (77 days) to achieve the target slaughter weight when temperatures are low or rainfall is great. All the feeding systems evaluated produced light lamb carcasses with the conformation score from O to R that is required by the market. Even the potential change in fat colour found in both grazing treatments was not enough to change the subjective evaluation of fat colour. (Author)

  20. Study of the X-ray binary AM Herculis. II - Spectrophotometry at maximum light

    International Nuclear Information System (INIS)

    Voikhanskaia, N.F.

    1980-01-01

    The spectrum of the AM Her system at maximum light is analyzed, and a comparison is made between the spectra when the system is at different levels of brightness. At maximum light the equivalent line widths fluctuate rapidly on a time scale of about 1 min at all phases of the orbit period. As the brightness drops, the system becomes less strongly excited; consequently, the high-excitation elements represented in the spectrum first fade and then vanish. At maximum light the bulk of the radiation comes from the hottest and densest parts of the luminous region. As the light wanes, the contribution of their radiation to the total light of the system diminishes, and the radiation of the cooler, more tenuous parts of the emission region becomes perceptible. In addition, the pronounced change in the shape of the emission-line profiles during the orbital period at minimum light implies a considerable amount of irregularity in the region producing the lines, unlike the uniform emission region at maximum light.

  1. Producing the Spielberg Brand

    OpenAIRE

    Russell, J.

    2016-01-01

    This chapter looks at the manufacture of Spielberg’s brand, and the limits of its usage. Spielberg’s directorial work is well known, but Spielberg’s identity has also been established in other ways, and I focus particularly on his work as a producer. At the time of writing, Spielberg had produced (or executive produced) 148 movies and television series across a range of genres that takes in high budget blockbusters and low budget documentaries, with many more to come. In these texts, Spielber...

  2. Quality, precision and accuracy of the maximum No. 40 anemometer

    Energy Technology Data Exchange (ETDEWEB)

    Obermeir, J. [Otech Engineering, Davis, CA (United States); Blittersdorf, D. [NRG Systems Inc., Hinesburg, VT (United States)

    1996-12-31

    This paper synthesizes available calibration data for the Maximum No. 40 anemometer. Despite its long history in the wind industry, controversy surrounds the choice of transfer function for this anemometer. Many users are unaware that recent changes in default transfer functions in data loggers are producing output wind speed differences as large as 7.6%. Comparison of two calibration methods used for large samples of Maximum No. 40 anemometers shows a consistent difference of 4.6% in output speeds. This difference is significantly larger than estimated uncertainty levels. Testing, initially performed to investigate related issues, reveals that Gill and Maximum cup anemometers change their calibration transfer functions significantly when calibrated in the open atmosphere compared with calibration in a laminar wind tunnel. This indicates that atmospheric turbulence changes the calibration transfer function of cup anemometers. These results call into question the suitability of standard wind tunnel calibration testing for cup anemometers. 6 refs., 10 figs., 4 tabs.
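    The effect of switching default transfer functions can be illustrated with the standard linear cup-anemometer model, speed = slope × frequency + offset. The coefficient pairs below are hypothetical, chosen only to show how a few-percent output difference arises; they are not the Maximum No. 40 calibrations discussed in the paper.

```python
def wind_speed(freq_hz, slope, offset):
    """Linear cup-anemometer transfer function: wind speed (m/s) from the
    cup rotation frequency (Hz). Slope/offset pairs below are hypothetical."""
    return slope * freq_hz + offset

# Two candidate transfer functions applied to the same logged frequency:
f = 20.0
v_a = wind_speed(f, 0.398, 0.30)   # e.g. a wind-tunnel calibration
v_b = wind_speed(f, 0.380, 0.45)   # e.g. an alternative default
diff_pct = 100.0 * (v_a - v_b) / v_b
print(round(diff_pct, 1))
```

    Because wind-energy estimates scale roughly with the cube of wind speed, even the ~2.6% speed difference in this toy case propagates to a much larger energy error, which is why the 4.6-7.6% differences reported above matter.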

  3. Beat the Deviations in Estimating Maximum Power of Thermoelectric Modules

    DEFF Research Database (Denmark)

    Gao, Junling; Chen, Min

    2013-01-01

    Under a certain temperature difference, the maximum power of a thermoelectric module can be estimated by the open-circuit voltage and the short-circuit current. In practical measurement, there exist two switch modes, either from open to short or from short to open, but the two modes can give different estimations of the maximum power. Using TEG-127-2.8-3.5-250 and TEG-127-1.4-1.6-250 as two examples, the difference is about 10%, leading to some deviations with the temperature change. This paper analyzes such differences by means of a nonlinear numerical model of thermoelectricity, and finds that the main cause is the influence of various currents on the produced electromotive potential. A simple and effective calibration method is proposed to minimize the deviations in specifying the maximum power. Experimental results validate the method with improved estimation accuracy.
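    Under the linear (constant-EMF, constant internal resistance) model implied above, the open-circuit voltage and short-circuit current give the matched-load maximum power as Voc·Isc/4. A sketch with hypothetical readings for the two switch modes follows; the ~10% spread mirrors the magnitude reported in the paper, not its measured data.

```python
def max_power_estimate(v_oc, i_sc):
    """Linear thermoelectric model (constant EMF, constant internal
    resistance): maximum power into a matched load is Voc * Isc / 4."""
    return v_oc * i_sc / 4.0

# Hypothetical open->short vs short->open readings for the same module:
p_os = max_power_estimate(4.20, 1.05)   # open-to-short switch mode
p_so = max_power_estimate(4.00, 1.00)   # short-to-open switch mode
spread_pct = 100.0 * (p_os - p_so) / p_so
print(round(spread_pct, 1))  # percent disagreement between the two modes
```

    The paper's point is that this disagreement is not measurement noise but a systematic effect of the current-dependent electromotive potential, hence the need for a calibration step.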

  4. Post optimization paradigm in maximum 3-satisfiability logic programming

    Science.gov (United States)

    Mansor, Mohd. Asyraf; Sathasivam, Saratha; Kasihmuddin, Mohd Shareduwan Mohd

    2017-08-01

    Maximum 3-Satisfiability (MAX-3SAT) is a counterpart of the Boolean satisfiability problem that can be treated as a constraint optimization problem. It deals with the problem of finding the maximum number of satisfied clauses in a particular 3-SAT formula. This paper presents the implementation of an enhanced Hopfield network in hastening Maximum 3-Satisfiability (MAX-3SAT) logic programming. Four post optimization techniques are investigated: the Elliot symmetric activation function, the Gaussian activation function, the Wavelet activation function and the Hyperbolic tangent activation function. The performances of these post optimization techniques in accelerating MAX-3SAT logic programming are discussed in terms of the ratio of maximum satisfied clauses, Hamming distance and computation time. Dev-C++ was used as the platform for training, testing and validating our proposed techniques. The results show that the Hyperbolic tangent activation function and the Elliot symmetric activation function can be used for MAX-3SAT logic programming.
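    The objective being accelerated, the number (or ratio) of satisfied clauses, can be computed directly. A minimal sketch follows, written in Python rather than the Dev-C++ setting of the paper, with brute-force enumeration standing in for the Hopfield network on a tiny instance.

```python
import itertools

def satisfied(clauses, assignment):
    """Count clauses satisfied by a 0/1 assignment. A clause is a tuple of
    literals: +i means variable i is true, -i means variable i is false
    (variables are 1-indexed)."""
    def lit_true(lit):
        v = assignment[abs(lit) - 1]
        return v == 1 if lit > 0 else v == 0
    return sum(any(lit_true(l) for l in clause) for clause in clauses)

def brute_force_max3sat(clauses, n_vars):
    """Exact MAX-3SAT by enumeration; feasible only for tiny instances.
    A Hopfield-network approach targets the same objective heuristically."""
    return max(satisfied(clauses, a)
               for a in itertools.product((0, 1), repeat=n_vars))

clauses = [(1, 2, 3), (-1, 2, -3), (1, -2, 3), (-1, -2, -3)]
print(brute_force_max3sat(clauses, 3))  # 4: all clauses satisfiable here
```

    The "ratio of maximum satisfied clauses" reported in the paper is simply the heuristic's clause count divided by this exact optimum.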

  5. Assessment of the Čerenkov light produced in a PbWO4 crystal by means of the study of the time structure of the signal

    CERN Document Server

    Cavallini, L; Mecca, A; Pinci, D; Akchurin, N; Berntzon, L; Kim, H; Roh, Y; Wigmans, R; Cardini, A; Ferrari, R; Franchino, S; Gaudio, G; Livan, M; Hauptman, J; La Rotonda, L; Meoni, E; Policicchio, A; Susinno, G; Paar, H; Penzo, A; Popescu, S; Vandelli, W

    2008-01-01

    Beam tests were carried out on PbWO4 crystals. One of the aims of this work was to evaluate the contribution of the Čerenkov component to the total light yield. The difference in the timing characteristics of the fast Čerenkov signals with respect to the scintillation ones, which are emitted with a decay time of about 10 ns, can be exploited in order to separate the two components. In this paper we present the results of an analysis performed on the time structure of the signals, showing how it is possible to detect and assess the presence and the amount of Čerenkov light. Since Čerenkov light is emitted only by the electromagnetic component of a hadronic shower, its precise measurement would make it possible to account for one of the dominant sources of fluctuations in hadronic showers and to achieve an improvement in the energy resolution of a hadronic calorimeter.
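    The separation idea can be illustrated with a toy pulse model: a prompt (Čerenkov-like) spike plus a 10 ns scintillation decay. Comparing the charge in an early gate with the total charge exposes the fast component. The pulse shape, gate widths and the 15% Čerenkov fraction below are invented for illustration, not the paper's measured signal.

```python
import math

def pulse(t, f_cher=0.15, tau_scint=10.0, sigma=0.5):
    """Toy signal shape in ns: a prompt Gaussian (Cherenkov-like) component
    plus an exponential scintillation decay with a ~10 ns time constant.
    Both components are normalized to unit area; f_cher sets the mix."""
    cher = math.exp(-0.5 * (t / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    scint = math.exp(-t / tau_scint) / tau_scint if t >= 0 else 0.0
    return f_cher * cher + (1 - f_cher) * scint

def charge(t0, t1, dt=0.01):
    """Integrated charge in a gate [t0, t1] (midpoint rule)."""
    n = int((t1 - t0) / dt)
    return sum(pulse(t0 + (i + 0.5) * dt) for i in range(n)) * dt

early = charge(-2.0, 2.0)    # narrow gate around the prompt component
total = charge(-2.0, 80.0)   # essentially the full pulse
print(round(early / total, 3))
```

    The early-gate fraction exceeds what the 10 ns scintillation tail alone would put in that window, and the excess measures the Čerenkov content; this is the same time-structure information the paper extracts from real waveforms.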

  6. Revealing the Maximum Strength in Nanotwinned Copper

    DEFF Research Database (Denmark)

    Lu, L.; Chen, X.; Huang, Xiaoxu

    2009-01-01

    ...boundary-related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced strain hardening and tensile ductility.

  7. Modelling maximum canopy conductance and transpiration in ...

    African Journals Online (AJOL)

    There is much current interest in predicting the maximum amount of water that can be transpired by Eucalyptus trees. It is possible that industrial waste water may be applied as irrigation water to eucalypts and it is important to predict the maximum transpiration rates of these plantations in an attempt to dispose of this ...

  8. What controls the maximum magnitude of injection-induced earthquakes?

    Science.gov (United States)

    Eaton, D. W. S.

    2017-12-01

    Three different approaches for estimation of maximum magnitude are considered here, along with their implications for managing risk. The first approach is based on a deterministic limit for seismic moment proposed by McGarr (1976), which was originally designed for application to mining-induced seismicity. This approach has since been reformulated for earthquakes induced by fluid injection (McGarr, 2014). In essence, this method assumes that the upper limit for seismic moment release is constrained by the pressure-induced stress change. A deterministic limit is given by the product of shear modulus and the net injected fluid volume. This method is based on the assumptions that the medium is fully saturated and in a state of incipient failure. An alternative geometrical approach was proposed by Shapiro et al. (2011), who postulated that the rupture area for an induced earthquake falls entirely within the stimulated volume. This assumption reduces the maximum-magnitude problem to one of estimating the largest potential slip surface area within a given stimulated volume. Finally, van der Elst et al. (2016) proposed that the maximum observed magnitude, statistically speaking, is the expected maximum value for a finite sample drawn from an unbounded Gutenberg-Richter distribution. These three models imply different approaches for risk management. The deterministic method proposed by McGarr (2014) implies that a ceiling on the maximum magnitude can be imposed by limiting the net injected volume, whereas the approach developed by Shapiro et al. (2011) implies that the time-dependent maximum magnitude is governed by the spatial size of the microseismic event cloud. Finally, the sample-size hypothesis of Van der Elst et al. (2016) implies that the best available estimate of the maximum magnitude is based upon observed seismicity rate. The latter two approaches suggest that real-time monitoring is essential for effective management of risk. A reliable estimate of maximum
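    The deterministic McGarr bound can be turned into a magnitude estimate in two lines: cap the seismic moment at the product of shear modulus and net injected volume, then convert with the Hanks-Kanamori relation. The injected volume and shear modulus below are hypothetical inputs for illustration.

```python
import math

def mcgarr_max_moment(injected_volume_m3, shear_modulus_pa=3.0e10):
    """McGarr (2014) deterministic limit: the maximum seismic moment (N*m)
    is bounded by shear modulus times net injected fluid volume."""
    return shear_modulus_pa * injected_volume_m3

def moment_magnitude(m0_nm):
    """Hanks-Kanamori moment magnitude from seismic moment in N*m."""
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

# Hypothetical injection of 10,000 m^3 with G = 30 GPa:
m0 = mcgarr_max_moment(1.0e4)
print(round(moment_magnitude(m0), 2))  # ≈ 3.58
```

    The risk-management implication in the text follows directly: since the bound scales with injected volume, limiting the net volume imposes a ceiling on the magnitude, whereas the Shapiro et al. and van der Elst et al. approaches require monitoring the evolving event cloud or seismicity rate instead.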

  9. PNNL: A Supervised Maximum Entropy Approach to Word Sense Disambiguation

    Energy Technology Data Exchange (ETDEWEB)

    Tratz, Stephen C.; Sanfilippo, Antonio P.; Gregory, Michelle L.; Chappell, Alan R.; Posse, Christian; Whitney, Paul D.

    2007-06-23

    In this paper, we describe the PNNL Word Sense Disambiguation system as applied to the English All-Words task in SemEval 2007. We use a supervised learning approach, employing a large number of features and using Information Gain for dimension reduction. Our Maximum Entropy approach combined with a rich set of features produced results that are significantly better than baseline and are the highest F-score for the fine-grained English All-Words subtask.

  10. Elemental composition of cosmic rays using a maximum likelihood method

    International Nuclear Information System (INIS)

    Ruddick, K.

    1996-01-01

    We present a progress report on our attempts to determine the composition of cosmic rays in the knee region of the energy spectrum. We have used three different devices to measure properties of the extensive air showers produced by primary cosmic rays: the Soudan 2 underground detector measures the muon flux deep underground, a proportional tube array samples shower density at the surface of the earth, and a Cherenkov array observes light produced high in the atmosphere. We have begun maximum likelihood fits to these measurements with the hope of determining the nuclear mass number A on an event by event basis. (orig.)

  11. Minimal recovery time needed to return to social engagement following nasolabial fold correction with hyaluronic acid fillers produced with XpresHAn technology.

    Science.gov (United States)

    Swift, Arthur; von Grote, Erika; Jonas, Brandie; Nogueira, Alessandra

    2017-01-01

    The appeal of hyaluronic acid fillers for facial soft tissue augmentation is attributable to both an immediate aesthetic effect and relatively short recovery time. Although recovery time is an important posttreatment variable, as it impacts comfort with appearance and perceived treatment benefit, it is not routinely evaluated. Natural-looking aesthetic outcomes are also a primary concern for many patients. A single-center, noncomparative study evaluated the time (in hours) until subjects return to social engagement (RtSE) following correction of moderate and severe nasolabial folds (NLFs) with RR (Restylane® Refyne) and RD (Restylane® Defyne), respectively. Twenty subjects (aged 35-57 years) who received bilateral NLF correction documented their RtSE and injection-related events posttreatment. Treatment efficacy was evaluated by improvements in Wrinkle Severity Rating Scale (WSRS) and subject satisfaction questionnaire at days 14 and 30, and by Global Aesthetic Improvement Scale (GAIS) at day 30. Safety was evaluated by injection-related events and treatment-emergent adverse events. Fifty percent of subjects reported RtSE within 2 hours posttreatment. WSRS for the RR group improved significantly from baseline at day 14 (-1.45±0.42) and day 30 (-1.68±0.46). Subjects experienced 3 related treatment-emergent adverse events; 1 RR subject experienced severe bruising, and 1 RD subject experienced severe erythema and mild telangiectasia. Subject satisfaction was high regarding aesthetic outcomes and natural-looking results. Optimal correction of moderate NLFs with RR and severe NLFs with RD involved minimal time to RtSE for most subjects. Treatments significantly improved WSRS and GAIS, were generally well-tolerated, and provided natural-looking aesthetic outcomes.

  12. Screening of endophytic sources of exopolysaccharides: Preliminary characterization of crude exopolysaccharide produced by submerged culture of Diaporthe sp. JF766998 under different cultivation time

    Directory of Open Access Journals (Sweden)

    Ravely Casarotti Orlandelli

    2016-06-01

    Endophytic fungi have been described as producers of important bioactive compounds; however, they remain under-exploited as exopolysaccharide (EPS) sources. Therefore, this work reports on EPS production by submerged cultures of eight endophytes isolated from Piper hispidum Sw., belonging to the genera Diaporthe, Marasmius, Phlebia, Phoma, Phyllosticta and Schizophyllum. After fermentation for 96 h, four endophytes secreted EPS: Diaporthe sp. JF767000, Diaporthe sp. JF766998, Diaporthe sp. JF767007 and Phoma herbarum JF766995. The EPS from Diaporthe sp. JF766998 differed statistically from the others, with a higher percentage of carbohydrate (91%) and a lower amount of protein (8%). Subsequently, this fungus was grown under submerged culture for 72, 96 and 168 h (these EPS were designated EPSD1-72, EPSD1-96 and EPSD1-168) and the differences in production, monosaccharide composition and apparent molecular weight were compared. The EPS yields in mg/100 mL of culture medium were: 3.0 ± 0.4 (EPSD1-72), 15.4 ± 2.2 (EPSD1-96) and 14.8 ± 1.8 (EPSD1-168). The EPSD1-72 had a high protein content (28.5%) and only 71% carbohydrate, while EPSD1-96 and EPSD1-168 were composed mainly of carbohydrate (≈95 and 100%, respectively), with a low protein content (≈5%) detected at 96 h. Galactose was the main monosaccharide component (30%) of EPSD1-168. Differently, EPSD1-96 was rich in glucose (51%), with a molecular weight of 46.6 kDa. This is an important feature for future investigations, because glucan-rich EPS are reported to be effective antitumor agents.

  13. Multifield stochastic particle production: beyond a maximum entropy ansatz

    Energy Technology Data Exchange (ETDEWEB)

    Amin, Mustafa A.; Garcia, Marcos A.G.; Xie, Hong-Yi; Wen, Osmond, E-mail: mustafa.a.amin@gmail.com, E-mail: marcos.garcia@rice.edu, E-mail: hxie39@wisc.edu, E-mail: ow4@rice.edu [Physics and Astronomy Department, Rice University, 6100 Main Street, Houston, TX 77005 (United States)

    2017-09-01

    We explore non-adiabatic particle production for N_f coupled scalar fields in a time-dependent background with stochastically varying effective masses, cross-couplings and intervals between interactions. Under the assumption of weak scattering per interaction, we provide a framework for calculating the typical particle production rates after a large number of interactions. After setting up the framework, for analytic tractability, we consider interactions (effective masses and cross-couplings) characterized by series of Dirac-delta functions in time with amplitudes and locations drawn from different distributions. Without assuming that the fields are statistically equivalent, we present closed-form results (up to quadratures) for the asymptotic particle production rates for the N_f = 1 and N_f = 2 cases. We also present results for the general N_f > 2 case, but with more restrictive assumptions. We find agreement between our analytic results and direct numerical calculations of the total occupation number of the produced particles, with departures that can be explained in terms of violation of our assumptions. We elucidate the precise connection between the maximum entropy ansatz (MEA) used in Amin and Baumann (2015) and the underlying statistical distribution of the self- and cross-couplings. We provide and justify a simple-to-use (MEA-inspired) expression for the particle production rate, which agrees with our more detailed treatment when the parameters characterizing the effective mass and cross-couplings between fields are all comparable to each other. However, deviations are seen when some parameters differ significantly from others. We show that such deviations become negligible for a broad range of parameters when N_f >> 1.

  14. Transient stresses at Parkfield, California, produced by the M 7.4 Landers earthquake of June 28, 1992: implications for the time-dependence of fault friction

    Directory of Open Access Journals (Sweden)

    J. B. Fletcher

    1994-06-01

    The M 7.4 Landers earthquake triggered widespread seismicity in the Western U.S. Because the transient dynamic stresses induced at regional distances by the Landers surface waves are much larger than the expected static stresses, the magnitude and the characteristics of the dynamic stresses may bear upon the earthquake triggering mechanism. The Landers earthquake was recorded on the UPSAR array, a group of 14 triaxial accelerometers located within a 1-square-km region 10 km southwest of the town of Parkfield, California, 412 km northwest of the Landers epicenter. We used a standard geodetic inversion procedure to determine the surface strain and stress tensors as functions of time from the observed dynamic displacements. Peak dynamic strains and stresses at the Earth's surface are about 7 microstrain and 0.035 MPa, respectively, and they have a flat amplitude spectrum between 2 s and 15 s period. These stresses agree well with stresses predicted from a simple rule of thumb based upon the ground velocity spectrum observed at a single station. Peak stresses ranged from about 0.035 MPa at the surface to about 0.12 MPa between 2 and 14 km depth, with the sharp increase of stress away from the surface resulting from the rapid increase of rigidity with depth and from the influence of surface wave mode shapes. Comparison of Landers-induced static and dynamic stresses at the hypocenter of the Big Bear aftershock provides a clear example that faults are stronger on time scales of tens of seconds than on time scales of hours or longer.
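    A "simple rule of thumb" relating dynamic stress to ground velocity is commonly written as τ ≈ Gv/c, with shear modulus G and phase velocity c. The abstract does not state the exact formula or parameter values used, so the sketch below is illustrative only, with assumed crustal values:

    ```python
    def dynamic_stress(peak_velocity, shear_modulus, phase_velocity):
        """Rule-of-thumb peak dynamic shear stress (Pa): tau ~ G * v / c."""
        return shear_modulus * peak_velocity / phase_velocity

    # Illustrative assumptions: G = 30 GPa for crustal rock, a surface-wave
    # phase velocity of 3.5 km/s, and 1 cm/s peak ground velocity.
    tau = dynamic_stress(peak_velocity=0.01, shear_modulus=30e9, phase_velocity=3500.0)
    print(f"peak dynamic stress ~ {tau / 1e6:.3f} MPa")  # → peak dynamic stress ~ 0.086 MPa
    ```

    With these assumed values the estimate lands in the same 0.035-0.12 MPa range the abstract reports.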

  15. The ideal harvest time for seeds of hybrid maize (Zea mays L.) XY335 and ZD958 produced in multiple environments

    OpenAIRE

    Gu, Riliang; Li, Li; Liang, Xiaolin; Wang, Yanbo; Fan, Tinglu; Wang, Ying; Wang, Jianhua

    2017-01-01

    To identify the ideal harvest time (IHT) for the seed production of XY335 and ZD958, six seed-related traits were evaluated in seeds harvested at 11 harvest stages in 8 environments. Standard germination (SG), accelerated aging germination (AAG) and cold test germination (CTG) were vigor traits; hundred-seed weight (HSW) and seed moisture content (SMC) were physiological traits; and ≥10 °C accumulated temperature from pollination to harvest (AT10ph) was an ecological trait. All the traits were...

  16. Rapid calculation of maximum particle lifetime for diffusion in complex geometries

    Science.gov (United States)

    Carr, Elliot J.; Simpson, Matthew J.

    2018-03-01

    Diffusion of molecules within biological cells and tissues is strongly influenced by crowding. A key quantity to characterize diffusion is the particle lifetime, which is the time taken for a diffusing particle to exit by hitting an absorbing boundary. Calculating the particle lifetime provides valuable information, for example, by allowing us to compare the timescale of diffusion and the timescale of the reaction, thereby helping us to develop appropriate mathematical models. Previous methods to quantify particle lifetimes focus on the mean particle lifetime. Here, we take a different approach and present a simple method for calculating the maximum particle lifetime. This is the time after which only a small specified proportion of particles in an ensemble remain in the system. Our approach produces accurate estimates of the maximum particle lifetime, whereas the mean particle lifetime always underestimates this value compared with data from stochastic simulations. Furthermore, we find that differences between the mean and maximum particle lifetimes become increasingly important when considering diffusion hindered by obstacles.
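    The distinction between the mean particle lifetime and the maximum (here: 99th-percentile) particle lifetime can be illustrated with a toy one-dimensional random walk between absorbing boundaries. This is only a stochastic-simulation sketch, not the authors' method:

    ```python
    import random

    def exit_time(length=20, start=10):
        """Steps until an unbiased walker starting at `start` hits 0 or `length`."""
        x, t = start, 0
        while 0 < x < length:
            x += random.choice((-1, 1))
            t += 1
        return t

    random.seed(0)
    times = sorted(exit_time() for _ in range(2000))
    mean_lifetime = sum(times) / len(times)
    max_lifetime = times[int(0.99 * len(times))]  # 99% of particles have exited by now
    print(mean_lifetime, max_lifetime)
    ```

    For this walk the mean exit time is start × (length − start) = 100 steps, while the 99th percentile is several times larger; that gap is precisely why the mean underestimates the time needed for nearly all particles to leave the system.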

  17. MXLKID: a maximum likelihood parameter identifier

    International Nuclear Information System (INIS)

    Gavel, D.T.

    1980-07-01

    MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables
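    The general recipe the abstract describes (compute a likelihood function from noisy measurements, then maximize it over the unknown parameters) can be sketched for a one-parameter exponential-decay system. The toy system and the crude grid search are assumptions for illustration; they are not MXLKID itself:

    ```python
    import math
    import random

    def log_likelihood(k, times, data, sigma=0.1):
        """Gaussian log-likelihood of decay rate k for the model y(t) = exp(-k t)."""
        return sum(-0.5 * ((y - math.exp(-k * t)) / sigma) ** 2
                   for t, y in zip(times, data))

    # Synthetic noisy measurements from a system with true decay rate 0.7.
    random.seed(1)
    t_grid = [0.1 * i for i in range(50)]
    data = [math.exp(-0.7 * t) + random.gauss(0.0, 0.05) for t in t_grid]

    # Maximize the likelihood over candidate parameter values (grid search here;
    # a real identifier would use a proper numerical optimizer).
    k_hat = max((0.01 * j for j in range(1, 200)),
                key=lambda k: log_likelihood(k, t_grid, data))
    print(f"estimated decay rate: {k_hat:.2f}")
    ```

    The maximizer of the log-likelihood recovers the true parameter from the noisy data, which is the core idea behind the program.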

  18. Use of Tissue Culture Techniques for Producing Virus-Free Plant in Garlic and Their Identification through Real-Time PCR

    Directory of Open Access Journals (Sweden)

    Hatıra Taşkın

    2013-01-01

    This study was performed to compare the meristem culture technique with the shoot tip culture technique for obtaining virus-free plants, to compare the micropropagation success of two different nutrient media, and to determine the effectiveness of a real-time PCR assay for the detection of viruses. Two different garlic species (Allium sativum and Allium tuncelianum) and two different nutrient media were used in this experiment. Results showed that Medium 2 was more successful than Medium 1 for both A. tuncelianum and A. sativum (Kastamonu garlic clone). In vitro plants obtained via meristem and shoot tip cultures were tested for onion yellow dwarf virus (OYDV) and leek yellow stripe virus (LYSV) by real-time PCR assay. In garlic plants propagated via meristem culture, we could not detect any virus. OYDV and LYSV were detected in plants obtained via shoot tip culture. OYDV was observed in 80% and 73% of tested plants for A. tuncelianum and A. sativum, respectively. LYSV was found in 67% of tested plants of A. tuncelianum and in 87% of tested plants of A. sativum in this study.

  19. Maximum entropy production rate in quantum thermodynamics

    Energy Technology Data Exchange (ETDEWEB)

    Beretta, Gian Paolo, E-mail: beretta@ing.unibs.i [Universita di Brescia, via Branze 38, 25123 Brescia (Italy)

    2010-06-01

    In the framework of the recent quest for well-behaved nonlinear extensions of the traditional Schroedinger-von Neumann unitary dynamics that could provide fundamental explanations of recent experimental evidence of loss of quantum coherence at the microscopic level, a recent paper [Gheorghiu-Svirschevski 2001 Phys. Rev. A 63 054102] reproposes the nonlinear equation of motion proposed by the present author [see Beretta G P 1987 Found. Phys. 17 365 and references therein] for quantum (thermo)dynamics of a single isolated indivisible constituent system, such as a single particle, qubit, qudit, spin or atomic system, or a Bose-Einstein or Fermi-Dirac field. As already proved, such nonlinear dynamics entails a fundamental unifying microscopic proof and extension of Onsager's reciprocity and Callen's fluctuation-dissipation relations to all nonequilibrium states, close and far from thermodynamic equilibrium. In this paper we propose a brief but self-contained review of the main results already proved, including the explicit geometrical construction of the equation of motion from the steepest-entropy-ascent ansatz and its exact mathematical and conceptual equivalence with the maximal-entropy-generation variational-principle formulation presented in Gheorghiu-Svirschevski S 2001 Phys. Rev. A 63 022105. Moreover, we show how it can be extended to the case of a composite system to obtain the general form of the equation of motion, consistent with the demanding requirements of strong separability and of compatibility with general thermodynamics principles. The irreversible term in the equation of motion describes the spontaneous attraction of the state operator in the direction of steepest entropy ascent, thus implementing the maximum entropy production principle in quantum theory. The time rate at which the path of steepest entropy ascent is followed has so far been left unspecified. As a step towards the identification of such rate, here we propose a possible

  20. Coronary ligation reduces maximum sustained swimming speed in Chinook salmon, Oncorhynchus tshawytscha

    DEFF Research Database (Denmark)

    Farrell, A P; Steffensen, J F

    1987-01-01

    The maximum aerobic swimming speed of Chinook salmon (Oncorhynchus tshawytscha) was measured before and after ligation of the coronary artery. Coronary artery ligation prevented blood flow to the compact layer of the ventricular myocardium, which represents 30% of the ventricular mass, and produced a statistically significant 35.5% reduction in maximum swimming speed. We conclude that the coronary circulation is important for maximum aerobic swimming, and implicit in this conclusion is that maximum cardiac performance is probably necessary for maximum aerobic swimming performance.

  1. Maximum power point tracker based on fuzzy logic

    International Nuclear Information System (INIS)

    Daoud, A.; Midoun, A.

    2006-01-01

    Solar energy is used as the power source in photovoltaic power systems, and an intelligent power management system is important for obtaining the maximum power from the limited solar panels. As the sun's illumination changes, due to variation of the angle of incidence of solar radiation and of the temperature of the panels, a Maximum Power Point Tracker (MPPT) enables optimization of solar power generation. The MPPT is a sub-system designed to extract the maximum power from a power source. In the case of a solar panel power source, the maximum power point varies as a result of changes in its electrical characteristics, which in turn are functions of radiation dose, temperature, ageing and other effects. The MPPT maximizes the power output from the panels for a given set of conditions by detecting the best working point of the power characteristic and then controlling the current through the panels or the voltage across them. Many MPPT methods have been reported in the literature. These techniques can be classified into three main categories: lookup table methods, hill climbing methods and computational methods. The techniques vary according to the degree of sophistication, processing time and memory requirements. The perturbation and observation algorithm (a hill climbing technique) is commonly used due to its ease of implementation and relative tracking efficiency. However, it has been shown that when the insolation changes rapidly, the perturbation and observation method is slow to track the maximum power point. In recent years, fuzzy controllers have been used for maximum power point tracking. This method only requires linguistic control rules for the maximum power point; a mathematical model is not required, and the control method is therefore easy to implement in a real control system. In this paper, we present a simple robust MPPT using fuzzy set theory, where the hardware consists of the microchip's microcontroller unit control card and
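    The perturbation and observation (hill climbing) algorithm mentioned above can be sketched as follows. The PV characteristic `solar_power` is a toy assumption for illustration, not a real panel model:

    ```python
    def solar_power(v, v_oc=20.0, i_sc=5.0):
        """Toy PV characteristic: current collapses near the open-circuit voltage."""
        if not 0.0 <= v <= v_oc:
            return 0.0
        return v * i_sc * (1.0 - (v / v_oc) ** 8)

    def perturb_and_observe(v=5.0, step=0.1, iters=500):
        """Hill climbing: keep perturbing the operating voltage in the direction
        that last increased the output power; reverse when power drops."""
        p_prev, direction = solar_power(v), 1
        for _ in range(iters):
            v += direction * step
            p = solar_power(v)
            if p < p_prev:
                direction = -direction  # overshot the peak: reverse the perturbation
            p_prev = p
        return v, p_prev

    v_mpp, p_mpp = perturb_and_observe()
    print(f"operating point ~ {v_mpp:.1f} V, {p_mpp:.1f} W")
    ```

    The steady-state oscillation around the maximum power point (near 15.2 V for this toy curve) and the slow response to fast insolation changes are exactly the drawbacks of this method that motivate the fuzzy controller.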

  2. Relation between size-distribution of Si nanoparticles and oscillation-stabilization time of the mixed region produced during laser ablation

    International Nuclear Information System (INIS)

    Wang Yinglong; Li Yanli; Fu Guangsheng

    2006-01-01

    Assuming Si particles and ambient atoms are elastic hard spheres, the transport in ambient gas of Si particles obtained by single-pulse laser ablation is numerically simulated via a Monte Carlo method to investigate the influence of the ambient species and the target-to-substrate distance on the oscillation-stabilization time (OST) of the mixed region. It is found that an ambient gas whose atomic weight is close to that of the Si atom induces the shortest OST; with increasing target-to-substrate distance, the OST at first decreases to a minimum and then begins to increase. Incorporating some experimental results on the size consistency of Si nanoparticles in pulsed laser ablation, it may be concluded that the shorter the OST of the mixed region, the more uniform in size the as-formed Si nanoparticles
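    The finding that an ambient gas with atomic weight close to Si gives the shortest OST is consistent with elementary hard-sphere kinematics: the fraction of kinetic energy transferred in a head-on elastic collision, 4mM/(m+M)², is maximal when the masses are equal. A quick check (gas masses are standard atomic weights, rounded; this is an illustration of the kinematics, not the paper's Monte Carlo code):

    ```python
    def energy_transfer_fraction(m, M):
        """Fraction of kinetic energy lost by a particle of mass m in a head-on
        elastic collision with a resting particle of mass M: 4mM / (m + M)^2."""
        return 4.0 * m * M / (m + M) ** 2

    # Fraction transferred by a Si atom (mass ~28) to various ambient gas atoms.
    for gas, mass in [("He", 4), ("Ne", 20), ("Ar", 40), ("Kr", 84), ("Xe", 131)]:
        print(f"{gas:>2}: {energy_transfer_fraction(28, mass):.3f}")
    ```

    The transfer fraction peaks at 1 for equal masses, so gases near Si's mass (Ne, Ar) damp the ablation plume fastest, which is consistent with the shortest OST.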

  3. Measurement of feline cytokines interleukin-12 and interferon-γ produced by a heat-inducible gene therapy adenoviral vector using real-time PCR

    International Nuclear Information System (INIS)

    Siddiqui, F.; Avery, P.R.; Ullrich, R.L.; LaRue, S.M.; Dewhirst, M.W.; Li, C.-Y.

    2003-01-01

    Biologic tumor therapy using interleukin-12 (IL-12) has shown promise as an adjuvant to radiation therapy. The goals for cancer gene immunotherapy include effective eradication of established tumors and generation of a lasting systemic immune response. Among the cytokines, IL-12 has been found to be the most effective gene in eradicating experimental tumors, preventing the development of metastases, and eliciting long-term antitumor immunity. Depending on the tumor model, IL-12 can exert antitumor activities via T cells, NK cells or NKT cells. It induces the production of IFN-γ and IFN-inducible protein-10. It is also postulated to have antiangiogenic effects, thus inhibiting tumor formation and metastases. However, its use in clinical trials has been restricted largely owing to its systemic hematologic toxicity and hepatotoxicity. We tested the efficacy of adenovirus-mediated expression of the feline IL-12 gene placed under the control of an inducible promoter, the heat shock protein promoter (hsp70B). This places gene expression under the control of an external physical agent (hyperthermia), thus offering an 'on-off' switch and potentially reducing systemic toxicity by restricting expression locally to the tumor. Crandell feline kidney (CrFK) cells were infected using the construct, and the supernatant was then used to stimulate production of interferon-γ (IFN-γ) in feline peripheral blood mononuclear cells (PBMC). As there is no commercially available ELISA kit to detect or measure feline cytokines, we used real-time PCR to measure cytokine mRNA. These results will be used to initiate a clinical trial in cats with soft tissue sarcomas examining hyperthermia-induced gene therapy in conjunction with radiation therapy. The real-time PCR techniques developed here will be used to quantitatively measure cytokine mRNA levels in punch biopsy samples obtained from the cats during the clinical trial. Support for this study was provided in part by NCI grant CA72745

  4. Identification for the First Time of Cyclo(d-Pro-l-Leu) Produced by Bacillus amyloliquefaciens Y1 as a Nematocide for Control of Meloidogyne incognita

    Directory of Open Access Journals (Sweden)

    Qaiser Jamal

    2017-10-01

    The aim of the current study was to describe the role and mechanism of Bacillus amyloliquefaciens Y1 against the root-knot nematode, Meloidogyne incognita, under in vitro and in vivo conditions. Initially, exposure of the bacterial culture supernatant and crude extract of Y1 to M. incognita significantly inhibited the hatching of eggs and caused mortality of second-stage juveniles (J2), with these inhibitory effects depending on the length of incubation time and the concentration of the treatment. The dipeptide cyclo(d-Pro-l-Leu) was identified in B. amyloliquefaciens culture for the first time using chromatographic techniques and nuclear magnetic resonance (NMR: 1H, 13C, H-H COSY, HSQC, and HMBC) and recognized to have nematocidal activity. Various concentrations of cyclo(d-Pro-l-Leu) were investigated for their effect on the hatching of eggs and J2 mortality. Moreover, the in vivo nematocidal activity of the Y1 strain was investigated by conducting pot experiments in which tomato plants were inoculated with M. incognita. Each pot was amended with 50 mL of fertilizer media (F), Y1 culture, nematicide (N; only once), or fertilizer media with N (FN) at 1, 2, 3, 4 and 5 weeks after transplantation. The results of the pot experiments demonstrated the antagonistic effect of B. amyloliquefaciens Y1 against M. incognita, as it significantly decreased the counts of eggs and galls per tomato root as well as the population of J2 in the soil. In addition, growth parameters such as shoot length and shoot fresh and dry weights of the tomato plants were significantly higher in Y1-treated plants compared to F-, FN- and N-treated plants. Therefore, the biocontrol repertoire of this bacterium opens new insight into applications in crop pest control.

  5. Maximum neutron flux in thermal reactors

    International Nuclear Information System (INIS)

    Strugar, P.V.

    1968-12-01

    A direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core directly, using the condition of maximum neutron flux while complying with thermal limitations. This paper proves that the problem can be solved by applying the variational calculus, i.e. by using the maximum principle of Pontryagin. The mathematical model of the reactor core is based on two-group neutron diffusion theory with some simplifications, which make it suitable for application of the maximum principle. The solution for the optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples

  6. Maximum allowable load on wheeled mobile manipulators

    International Nuclear Information System (INIS)

    Habibnejad Korayem, M.; Ghariblu, H.

    2003-01-01

    This paper develops a computational technique for finding the maximum allowable load of a mobile manipulator during a given trajectory. The maximum allowable loads which can be achieved by a mobile manipulator during a given trajectory are limited by a number of factors; the dynamic properties of the mobile base and the mounted manipulator, their actuator limitations, and the additional constraints applied to resolve the redundancy are probably the most important. To resolve the extra degrees of freedom introduced by the base mobility, additional constraint functions are proposed directly in the task space of the mobile manipulator. Finally, in two numerical examples involving a two-link planar manipulator mounted on a differentially driven mobile base, application of the method to determining the maximum allowable load is verified. The simulation results demonstrate that the maximum allowable load on a desired trajectory does not have a unique value and depends directly on the additional constraint functions applied to resolve the motion redundancy

  7. Maximum phytoplankton concentrations in the sea

    DEFF Research Database (Denmark)

    Jackson, G.A.; Kiørboe, Thomas

    2008-01-01

    A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collected...

  8. Maximum-Likelihood Detection Of Noncoherent CPM

    Science.gov (United States)

    Divsalar, Dariush; Simon, Marvin K.

    1993-01-01

    Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depends only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.

  9. Fractal Dimension and Maximum Sunspot Number in Solar Cycle

    Directory of Open Access Journals (Sweden)

    R.-S. Kim

    2006-09-01

    The fractal dimension is a quantitative parameter describing the characteristics of irregular time series. In this study, we use this parameter to analyze the irregular aspects of solar activity and to predict the maximum sunspot number in the following solar cycle by examining time series of the sunspot number. For this, we considered the daily sunspot number since 1850 from SIDC (Solar Influences Data analysis Center) and then estimated the cycle variation of the fractal dimension by using Higuchi's method. We examined the relationship between this fractal dimension and the maximum monthly sunspot number in each solar cycle. As a result, we found that there is a strong inverse relationship between the fractal dimension and the maximum monthly sunspot number. Using this relation, we predicted the maximum sunspot number in the solar cycle from the fractal dimension of the sunspot numbers during the increasing phase of solar activity. The successful prediction is proven by a good correlation (r=0.89) between the observed and predicted maximum sunspot numbers in the solar cycles.
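    Higuchi's method, used above to estimate the fractal dimension of the sunspot series, fits the scaling of the average curve length L(k) ∝ k^(−D) over coarse-graining intervals k. A self-contained sketch following the standard 1988 formulation (not the authors' code):

    ```python
    import math

    def higuchi_fd(x, k_max=8):
        """Higuchi fractal dimension: slope of log L(k) versus log(1/k)."""
        n = len(x)
        log_inv_k, log_len = [], []
        for k in range(1, k_max + 1):
            lengths = []
            for m in range(k):
                pts = x[m::k]                      # subsampled curve with lag k
                if len(pts) < 2:
                    continue
                curve = sum(abs(pts[i] - pts[i - 1]) for i in range(1, len(pts)))
                norm = (n - 1) / ((len(pts) - 1) * k)  # Higuchi's normalization
                lengths.append(curve * norm / k)
            log_inv_k.append(math.log(1.0 / k))
            log_len.append(math.log(sum(lengths) / len(lengths)))
        # Least-squares slope of the log-log relation gives the dimension.
        mk = sum(log_inv_k) / len(log_inv_k)
        ml = sum(log_len) / len(log_len)
        num = sum((a - mk) * (b - ml) for a, b in zip(log_inv_k, log_len))
        return num / sum((a - mk) ** 2 for a in log_inv_k)

    # A straight line has dimension 1; a very irregular series approaches 2.
    print(round(higuchi_fd([0.01 * i for i in range(1000)]), 2))  # → 1.0
    ```

    Smooth (low-activity) series give dimensions near 1 and highly irregular ones approach 2, which is the quantity the study correlates with cycle amplitude.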

  10. How long do centenarians survive? Life expectancy and maximum lifespan.

    Science.gov (United States)

    Modig, K; Andersson, T; Vaupel, J; Rau, R; Ahlbom, A

    2017-08-01

    The purpose of this study was to explore the pattern of mortality above the age of 100 years. In particular, we aimed to examine whether Scandinavian data support the theory that mortality reaches a plateau at particularly old ages. Whether the maximum length of life increases with time was also investigated. The analyses were based on individual level data on all Swedish and Danish centenarians born from 1870 to 1901; in total 3006 men and 10 963 women were included. Birth cohort-specific probabilities of dying were calculated. Exact ages were used for calculations of maximum length of life. Whether maximum age changed over time was analysed taking into account increases in cohort size. The results confirm that there has not been any improvement in mortality amongst centenarians in the past 30 years and that the current rise in life expectancy is driven by reductions in mortality below the age of 100 years. The death risks seem to reach a plateau of around 50% at the age 103 years for men and 107 years for women. Despite the rising life expectancy, the maximum age does not appear to increase, in particular after accounting for the increasing number of individuals of advanced age. Mortality amongst centenarians is not changing despite improvements at younger ages. An extension of the maximum lifespan and a sizeable extension of life expectancy both require reductions in mortality above the age of 100 years. © 2017 The Association for the Publication of the Journal of Internal Medicine.
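    The reported plateau of roughly 50% annual death risk implies, under a simple constant-hazard (geometric) survival model, that expected remaining life at the plateau is about one year and that even a large cohort is exhausted within a couple of decades. A back-of-the-envelope sketch (the constant-hazard model is an assumption for illustration, not the paper's method):

    ```python
    def expected_remaining_years(q):
        """Expected further full years lived under a constant annual death
        probability q (geometric model): (1 - q) / q."""
        return (1.0 - q) / q

    def survivors(q, years, cohort=100_000):
        """Expected cohort size after `years` at constant annual death risk q."""
        return cohort * (1.0 - q) ** years

    print(expected_remaining_years(0.5))  # → 1.0
    print(survivors(0.5, 17))             # fewer than one expected survivor
    ```

    At a 50% plateau, a cohort of 100 000 people aged 103 is expected to fall below one survivor after about 17 years, i.e. around age 120, which is consistent with a stagnant maximum lifespan despite rising life expectancy at younger ages.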

  11. An overview of the SeaWiFS project and strategies for producing a climate research quality global ocean bio-optical time series

    Science.gov (United States)

    McClain, Charles R.; Feldman, Gene C.; Hooker, Stanford B.

    2004-01-01

    The Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Project Office was formally initiated at the NASA Goddard Space Flight Center in 1990. Seven years later, the sensor was launched by Orbital Sciences Corporation under a data-buy contract to provide 5 years of science quality data for global ocean biogeochemistry research. To date, the SeaWiFS program has greatly exceeded the mission goals established over a decade ago in terms of data quality, data accessibility and usability, ocean community infrastructure development, cost efficiency, and community service. The SeaWiFS Project Office and its collaborators in the scientific community have made substantial contributions in the areas of satellite calibration, product validation, near-real time data access, field data collection, protocol development, in situ instrumentation technology, operational data system development, and desktop level-0 to level-3 processing software. One important aspect of the SeaWiFS program is the high level of science community cooperation and participation. This article summarizes the key activities and approaches the SeaWiFS Project Office pursued to define, achieve, and maintain the mission objectives. These achievements have enabled the user community to publish a large and growing volume of research such as those contributed to this special volume of Deep-Sea Research. Finally, some examples of major geophysical events (oceanic, atmospheric, and terrestrial) captured by SeaWiFS are presented to demonstrate the versatility of the sensor.

  12. Radiation Resistance and Life Time Estimates at Cryogenic Temperatures of Series Produced By-Pass Diodes for the LHC Magnet Protection

    Science.gov (United States)

    Denz, R.; Gharib, A.; Hagedorn, D.

    2004-06-01

    For the protection of the LHC superconducting magnets, about 2100 specially developed by-pass diodes have been manufactured in industry, and more than one thousand of these diodes have been mounted into stacks and tested in liquid helium. By-pass diode samples taken from the series production have been submitted to irradiation tests at cryogenic temperatures, together with some prototype diodes, up to an accumulated dose of about 2 kGy and neutron fluences up to about 3.0 × 10^13 n cm^-2, with and without intermediate warm-up to 300 K. The device characteristics of the diodes under forward and reverse bias have been measured at 77 K and at ambient temperature versus dose, and the results are presented. Using a thermo-electrical model and new estimates for the expected dose in the LHC, the expected lifetime of the by-pass diodes has been estimated for various positions in the LHC arcs. It turns out that for all of the by-pass diodes across the arc elements the radiation resistance is largely sufficient. In the dispersion suppressor regions of the LHC, a few diodes must undergo annual annealing during the LHC shutdown, or those diodes may need to be replaced after some time.

  13. MAXIMUM-LIKELIHOOD-ESTIMATION OF THE ENTROPY OF AN ATTRACTOR

    NARCIS (Netherlands)

    SCHOUTEN, JC; TAKENS, F; VANDENBLEEK, CM

    In this paper, a maximum-likelihood estimate of the (Kolmogorov) entropy of an attractor is proposed that can be obtained directly from a time series. Also, the relative standard deviation of the entropy estimate is derived; it is dependent on the entropy and on the number of samples used in the
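    The idea of a maximum-likelihood entropy estimate obtained directly from a time series can be sketched as follows: if nearby trajectory segments separate with a fixed per-step probability p, the observed separation step counts b are geometrically distributed, the ML estimate of p is 1/mean(b), and the entropy rate is roughly −ln(1 − p̂) per time step. This is only an illustration of the general ML approach, not necessarily the paper's exact estimator:

    ```python
    import math
    import random

    def ml_entropy(separation_steps, dt=1.0):
        """ML entropy-rate sketch from geometric separation-step counts:
        p_hat = 1 / mean(b), entropy rate ~ -ln(1 - p_hat) per step."""
        b_mean = sum(separation_steps) / len(separation_steps)
        p_hat = 1.0 / b_mean
        return -math.log(1.0 - p_hat) / dt

    # Synthetic check: geometric step counts with p = 0.2 should give an
    # entropy estimate near -ln(0.8) ~ 0.22.
    random.seed(2)
    steps = []
    for _ in range(5000):
        b = 1
        while random.random() > 0.2:  # pair "stays close" with probability 0.8
            b += 1
        steps.append(b)
    print(round(ml_entropy(steps), 2))
    ```

    The relative standard deviation of such an estimate shrinks with the number of observed pairs, which matches the abstract's remark that the uncertainty depends on the entropy and on the sample count.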

  14. Spatio-temporal observations of the tertiary ozone maximum

    Directory of Open Access Journals (Sweden)

    V. F. Sofieva

    2009-07-01

    We present spatio-temporal distributions of the tertiary ozone maximum (TOM), based on GOMOS (Global Ozone Monitoring by Occultation of Stars) ozone measurements in 2002–2006. The tertiary ozone maximum is typically observed in the high-latitude winter mesosphere at an altitude of ~72 km. Although the explanation for this phenomenon has been found recently – low concentrations of odd hydrogen cause the subsequent decrease in odd-oxygen losses – models had significant deviations from existing observations until recently. Good coverage of polar night regions by GOMOS data has allowed us for the first time to obtain spatial and temporal observational distributions of night-time ozone mixing ratio in the mesosphere.

    The distributions obtained from GOMOS data have specific features, which vary from year to year. In particular, due to the long lifetime of ozone in polar night conditions, the downward transport of polar air by the meridional circulation is clearly observed in the tertiary ozone maximum time series. Although the maximum tertiary ozone mixing ratio is achieved close to the polar night terminator (as predicted by theory), TOM can also be observed at very high latitudes, not only at the beginning and end, but also in the middle of winter. We have compared the observational spatio-temporal distributions of the tertiary ozone maximum with those obtained using WACCM (Whole Atmosphere Community Climate Model) and found that the specific features are reproduced satisfactorily by the model.

    Since ozone in the mesosphere is very sensitive to HOx concentrations, energetic particle precipitation can significantly modify the shape of the ozone profiles. In particular, GOMOS observations have shown that the tertiary ozone maximum was temporarily destroyed during the January 2005 and December 2006 solar proton events as a result of the HOx enhancement from the increased ionization.

  15. Model for the analysis of maximum loads produced by extreme winds in wind generators

    International Nuclear Information System (INIS)

    Herrera – Sánchez, Omar; Schellong, Wolfgang; González – Fernández, Vladimir

    2010-01-01

    The use of wind energy by means of wind turbines in areas with a high risk of hurricanes has been an important challenge for wind farm designers worldwide for some years. Wind turbines are not usually designed to withstand this type of phenomenon, so areas with a high incidence of tropical hurricanes are excluded during planning. This can entirely rule out the use of this renewable energy source, either because the country is very small or because the area of greatest wind potential coincides fully with the high-risk area. To address this situation, a model for analyzing the maximum loads produced by extreme winds on large wind turbines has been developed. This model has the advantage of determining, for a site chosen for a wind farm, the micro-areas with a higher risk of wind loads above those acceptable for the standard wind turbine classes. (author)

  16. Determining and monitoring of maximum permissible power for HWRR-3

    International Nuclear Information System (INIS)

    Jia Zhanli; Xiao Shigang; Jin Huajin; Lu Changshen

    1987-01-01

    The operating power of a reactor is an important parameter to be monitored. This report briefly describes the determination and monitoring of the maximum permissible power for HWRR-3. The calculation method is described, and the results of the calculation and the error analysis are also given. On-line calculation and real-time monitoring have been implemented at the heavy water reactor, providing real-time and reliable supervision. This makes operation convenient and increases reliability.

  17. High performance monolithic power management system with dynamic maximum power point tracking for microbial fuel cells.

    Science.gov (United States)

    Erbay, Celal; Carreon-Bautista, Salvador; Sanchez-Sinencio, Edgar; Han, Arum

    2014-12-02

    Microbial fuel cells (MFCs), which can directly generate electricity from organic waste or biomass, are a promising renewable and clean technology. However, the low power and low voltage output of MFCs typically do not allow direct operation of most electrical applications, whether for supplementing electricity to wastewater treatment plants or for powering autonomous wireless sensor networks. Power management systems (PMSs) can overcome this limitation by boosting the MFC output voltage and managing the power for maximum efficiency. We present a monolithic low-power-consuming PMS integrated circuit (IC) chip capable of dynamic maximum power point tracking (MPPT) to maximize the power extracted from MFCs, regardless of their power and voltage fluctuations over time. The proposed PMS continuously detects the maximum power point (MPP) of the MFC and matches the load impedance of the PMS for maximum efficiency. The system also operates autonomously by directly drawing power from the MFC itself, without any external power. The overall system efficiency, defined as the ratio between the input energy from the MFC and the output energy stored in the supercapacitor of the PMS, was 30%. As a demonstration, the PMS connected to a 240 mL two-chamber MFC (generating 0.4 V and 512 μW at MPP) successfully powered a wireless temperature sensor that requires a voltage of 2.5 V and consumes 85 mW each time it transmits the sensor data, and successfully transmitted a sensor reading every 7.5 min. The PMS also efficiently managed the power output of a lower-power-producing MFC, demonstrating that the PMS works efficiently at various MFC power output levels.
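
    Dynamic MPPT as described above is most commonly realized with a perturb-and-observe controller. The sketch below illustrates that generic technique only; the abstract does not disclose the chip's actual algorithm, and the function names and duty-cycle interface here are hypothetical.

```python
def perturb_and_observe(read_voltage, read_current, set_duty,
                        duty=0.5, step=0.01, iterations=100):
    """Generic perturb-and-observe MPPT loop (illustrative sketch).

    read_voltage / read_current: callables sampling the source terminals.
    set_duty: callable applying a converter duty cycle in [0, 1].
    Perturb the operating point each cycle; if the extracted power
    dropped, reverse the direction of the perturbation.
    Returns the final duty cycle, which hovers near the MPP.
    """
    prev_power = 0.0
    direction = 1
    for _ in range(iterations):
        set_duty(duty)
        power = read_voltage() * read_current()
        if power < prev_power:          # last step moved away from the MPP
            direction = -direction      # so walk the other way
        duty = min(max(duty + direction * step, 0.0), 1.0)
        prev_power = power
    return duty
```

    In steady state the duty cycle oscillates within one step of the MPP, which is the usual trade-off of perturb-and-observe: a smaller step tracks more precisely but reacts more slowly to the output fluctuations the abstract mentions.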

  18. High-resolution melting-curve analysis of ligation-mediated real-time PCR for rapid evaluation of an epidemiological outbreak of extended-spectrum-beta-lactamase-producing Escherichia coli.

    Science.gov (United States)

    Woksepp, Hanna; Jernberg, Cecilia; Tärnberg, Maria; Ryberg, Anna; Brolund, Alma; Nordvall, Michaela; Olsson-Liljequist, Barbro; Wisell, Karin Tegmark; Monstein, Hans-Jürg; Nilsson, Lennart E; Schön, Thomas

    2011-12-01

    Methods for the confirmation of nosocomial outbreaks of bacterial pathogens are complex, expensive, and time-consuming. Recently, a method based on ligation-mediated PCR (LM/PCR) using a low denaturation temperature which produces specific melting-profile patterns of DNA products has been described. Our objective was to further develop this method for real-time PCR and high-resolution melting analysis (HRM) in a single-tube system optimized in order to achieve results within 1 day. Following the optimization of LM/PCR for real-time PCR and HRM (LM/HRM), the method was applied for a nosocomial outbreak of extended-spectrum-beta-lactamase (ESBL)-producing and ST131-associated Escherichia coli isolates (n = 15) and control isolates (n = 29), including four previous clusters. The results from LM/HRM were compared to results from pulsed-field gel electrophoresis (PFGE), which served as the gold standard. All isolates from the nosocomial outbreak clustered by LM/HRM, which was confirmed by gel electrophoresis of the LM/PCR products and PFGE. Control isolates that clustered by LM/PCR (n = 4) but not by PFGE were resolved by confirmatory gel electrophoresis. We conclude that LM/HRM is a rapid method for the detection of nosocomial outbreaks of bacterial infections caused by ESBL-producing E. coli strains. It allows the analysis of isolates in a single-tube system within a day, and the discriminatory power is comparable to that of PFGE.

  20. Maximum gravitational redshift of white dwarfs

    International Nuclear Information System (INIS)

    Shapiro, S.L.; Teukolsky, S.A.

    1976-01-01

    The stability of uniformly rotating, cold white dwarfs is examined in the framework of the parametrized post-Newtonian (PPN) formalism of Will and Nordtvedt. The maximum central density and gravitational redshift of a white dwarf are determined as functions of five of the nine PPN parameters (γ, β, ζ₂, ζ₃, and ζ₄), the total angular momentum J, and the composition of the star. General relativity predicts that the maximum redshift is 571 km s⁻¹ for nonrotating carbon and helium dwarfs, but lower for stars composed of heavier nuclei. Uniform rotation can increase the maximum redshift to 647 km s⁻¹ for carbon stars (the neutronization limit) and to 893 km s⁻¹ for helium stars (the uniform rotation limit). The redshift distribution of a larger sample of white dwarfs may help determine the composition of their cores.

  1. Maximum vehicle cabin temperatures under different meteorological conditions

    Science.gov (United States)

    Grundstein, Andrew; Meentemeyer, Vernon; Dowd, John

    2009-05-01

    A variety of studies have documented the dangerously high temperatures that may occur within the passenger compartment (cabin) of cars under clear sky conditions, even at relatively low ambient air temperatures. Our study, however, is the first to examine cabin temperatures under variable weather conditions. It uses a unique maximum vehicle cabin temperature dataset in conjunction with directly comparable ambient air temperature, solar radiation, and cloud cover data collected from April through August 2007 in Athens, GA. Maximum cabin temperatures, ranging from 41°C to 76°C, varied considerably depending on the weather conditions and the time of year. Clear days had the highest cabin temperatures, with average values of 68°C in the summer and 61°C in the spring. Cloudy days in both the spring and summer were on average approximately 10°C cooler. Our findings indicate that even on cloudy days with lower ambient air temperatures, vehicle cabin temperatures may reach deadly levels. Additionally, two predictive models of maximum daily vehicle cabin temperatures were developed using commonly available meteorological data. One model uses maximum ambient air temperature and average daily solar radiation, while the other uses cloud cover percentage as a surrogate for solar radiation. From these models, two maximum vehicle cabin temperature indices were developed to assess the level of danger. The models and indices may be useful for forecasting hazardous conditions, promoting public awareness, and estimating past cabin temperatures for use in forensic analyses.
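
    A predictive model of the first kind described above (maximum cabin temperature from maximum ambient air temperature and average daily solar radiation) amounts to a two-predictor linear regression. The sketch below fits such a model by ordinary least squares on synthetic data; the function, the data, and the coefficients are all illustrative, not the study's published regression.

```python
import random

def fit_linear2(xs1, xs2, ys):
    """OLS for y = b0 + b1*x1 + b2*x2 via the normal equations
    (X^T X) b = X^T y, solved with a tiny Gaussian elimination.
    Illustrative sketch only; returns [b0, b1, b2]."""
    rows = [[1.0, a, b] for a, b in zip(xs1, xs2)]   # design matrix rows
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    xty = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]
    m = [xtx[i] + [xty[i]] for i in range(3)]        # augmented system
    for col in range(3):                             # forward elimination
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]              # partial pivoting
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            for c in range(col, 4):
                m[r][c] -= f * m[col][c]
    beta = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                              # back substitution
        beta[i] = (m[i][3] - sum(m[i][j] * beta[j]
                                 for j in range(i + 1, 3))) / m[i][i]
    return beta
```

    With a fitted model in hand, a danger index of the kind mentioned in the abstract could then be defined by thresholding the predicted cabin temperature.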

  2. MPBoot: fast phylogenetic maximum parsimony tree inference and bootstrap approximation.

    Science.gov (United States)

    Hoang, Diep Thi; Vinh, Le Sy; Flouri, Tomáš; Stamatakis, Alexandros; von Haeseler, Arndt; Minh, Bui Quang

    2018-02-02

    The nonparametric bootstrap is widely used to measure the branch support of phylogenetic trees. However, bootstrapping is computationally expensive and remains a bottleneck in phylogenetic analyses. Recently, an ultrafast bootstrap approximation (UFBoot) approach was proposed for maximum likelihood analyses. However, such an approach is still missing for maximum parsimony. To close this gap we present MPBoot, an adaptation and extension of UFBoot to compute branch supports under the maximum parsimony principle. MPBoot works for both uniform and non-uniform cost matrices. Our analyses on biological DNA and protein data showed that under uniform cost matrices, MPBoot runs on average 4.7 (DNA) to 7 times (protein data) (range: 1.2-20.7) faster than the standard parsimony bootstrap implemented in PAUP*, but 1.6 (DNA) to 4.1 times (protein data) slower than the standard bootstrap with a fast search routine in TNT (fast-TNT). However, for non-uniform cost matrices MPBoot is 5 (DNA) to 13 times (protein data) (range: 0.3-63.9) faster than fast-TNT. We note that MPBoot achieves better scores more frequently than PAUP* and fast-TNT, although this effect is less pronounced if an intensive but slower search in TNT is invoked. Moreover, experiments on large-scale simulated data show that while both PAUP* and TNT bootstrap estimates are too conservative, MPBoot bootstrap estimates appear more unbiased. MPBoot provides an efficient alternative to the standard maximum parsimony bootstrap procedure. It shows favorable performance in terms of run time, the capability of finding a maximum parsimony tree, and high bootstrap accuracy on simulated as well as empirical data sets. MPBoot is easy to use, open-source and available at http://www.cibiv.at/software/mpboot .
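
    The nonparametric bootstrap at the heart of both UFBoot and MPBoot resamples alignment columns with replacement; each pseudo-alignment is re-analysed, and branch support is the fraction of replicates recovering a branch. A minimal sketch of the resampling step follows (the function name and the dict-based alignment format are our own conventions, not MPBoot's API):

```python
import random

def bootstrap_alignment(alignment, replicates, rng=None):
    """Nonparametric bootstrap of a multiple sequence alignment.

    alignment: dict mapping taxon name -> sequence string (equal lengths).
    Yields pseudo-alignments built by sampling columns with replacement;
    each one would then be fed to a tree search (parsimony, ML, ...).
    """
    rng = rng or random.Random(0)
    taxa = list(alignment)
    length = len(alignment[taxa[0]])
    for _ in range(replicates):
        cols = [rng.randrange(length) for _ in range(length)]  # sampled column indices
        yield {t: "".join(alignment[t][c] for c in cols) for t in taxa}
```

    Because every taxon is indexed by the same column list, each replicate preserves the site-wise pairing of states across taxa, which is what makes the resampled columns valid pseudo-characters.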

  3. Applying the technique of sustained maximum phonation time in a female patient with adductor spasmodic dysphonia: case report

    Directory of Open Access Journals (Sweden)

    Luiz Alberto Alves Mota

    2012-01-01

    and strained-strangled voice. The aim of this study was to describe the vocal, acoustic and laryngeal parameters measured for a female patient with ASD pre- and post-speech therapy using the Technique of Sustained Maximum Phonation Time (SMPT). This technique aims to promote an increase in glottal resistance, improve phonatory stability, and enhance glottal coaptation. A 66-year-old female patient with ASD took part in this study. She was submitted to otorhinolaryngologic and speech-language assessment before and after the application of the SMPT technique. The results showed modification of vocal, acoustic and laryngeal parameters, such as the re-classification of her dysphonia from G3R1B1A0S3I3 to G2R1B1A0S2I2, of her pitch from low to adequate, and of her spectrographic trace from unstable to more stable, with an expressive increase in mean fundamental frequency and mean vocal intensity, besides improvement of her glottal efficiency, with closure of the anteroposterior glottal opening. Speech therapy using the SMPT technique was considered a suitable treatment option for this case, given the good results obtained, especially the improvements in vocal quality and phonatory stability. The importance of further studies aiming to provide greater scientific evidence for the effectiveness of the technique when treating ASD is emphasized.

  4. Maximum heat flux in boiling in a large volume

    International Nuclear Information System (INIS)

    Bergmans, Dzh.

    1976-01-01

    Relationships are derived for the maximum heat flux q_max without relying on the assumptions of a critical vapor velocity corresponding to zero growth rate or of a planar interface. To this end, a Helmholtz instability analysis of the vapor column has been carried out. The results of this analysis have been used to find the maximum heat flux for spherical, cylindrical and flat-plate heaters. The conventional hydrodynamic theory was found to be incapable of producing a satisfactory explanation of q_max for small heaters. The occurrence of q_max in the present case can be explained by inadequate removal of the vapor outflow from the heater (by the force of gravity for cylindrical heaters and by surface tension for spherical ones). For a flat-plate heater the q_max value can be explained with the help of the hydrodynamic theory.
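
    For context, the "conventional hydrodynamic theory" referred to here is commonly summarized, for a large flat heater, by Zuber's expression for the maximum (critical) heat flux; we quote the standard textbook form, not this paper's own derivation:

```latex
q_{\max} \approx 0.131\, h_{fg}\, \rho_v^{1/2}
          \left[\sigma\, g\, (\rho_l - \rho_v)\right]^{1/4},
```

    where h_fg is the latent heat of vaporization, ρ_v and ρ_l the vapor and liquid densities, σ the surface tension, and g the gravitational acceleration. The abstract's point is that this flat-plate scaling fails for small spherical and cylindrical heaters, where vapor removal is limited by geometry rather than by the Taylor-wavelength mechanism behind Zuber's formula.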

  5. Maximum entropy analysis of EGRET data

    DEFF Research Database (Denmark)

    Pohl, M.; Strong, A.W.

    1997-01-01

    EGRET data are usually analysed on the basis of the Maximum-Likelihood method \cite{ma96} in a search for point sources in excess of a model for the background radiation (e.g. \cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background like the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.

  6. The Maximum Resource Bin Packing Problem

    DEFF Research Database (Denmark)

    Boyar, J.; Epstein, L.; Favrholdt, L.M.

    2006-01-01

    Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, to maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used. We analyze the algorithms First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find...

  7. Shower maximum detector for SDC calorimetry

    International Nuclear Information System (INIS)

    Ernwein, J.

    1994-01-01

    A prototype for the SDC end-cap (EM) calorimeter complete with a pre-shower and a shower maximum detector was tested in beams of electrons and π's at CERN by an SDC subsystem group. The prototype was manufactured from scintillator tiles and strips read out with 1 mm diameter wavelength-shifting fibers. The design and construction of the shower maximum detector are described, and results of laboratory tests on the light yield and performance of the scintillator-fiber system are given. Preliminary results on energy and position measurements with the shower max detector in the test beam are shown. (authors). 4 refs., 5 figs

  8. Topics in Bayesian statistics and maximum entropy

    International Nuclear Information System (INIS)

    Mutihac, R.; Cicuttin, A.; Cerdeira, A.; Stanciulescu, C.

    1998-12-01

    Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered as the best justification of Bayesian analysis and maximum entropy principle applied in natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)

  9. Density estimation by maximum quantum entropy

    International Nuclear Information System (INIS)

    Silver, R.N.; Wallstrom, T.; Martz, H.F.

    1993-01-01

    A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The information divergence enforces global smoothing toward default models, convexity, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing is enforced by constraints on differential operators. The linear response of the estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood (evidence). The method is demonstrated on textbook data sets.

  10. Nonsymmetric entropy and maximum nonsymmetric entropy principle

    International Nuclear Information System (INIS)

    Liu Chengshi

    2009-01-01

    Within the framework of a statistical model, the concept of nonsymmetric entropy, which generalizes Boltzmann's entropy and Shannon's entropy, is defined. A maximum nonsymmetric entropy principle is proved. Some important distribution laws, such as the power law, can be derived naturally from this principle. In particular, nonsymmetric entropy is more convenient than other entropies, such as Tsallis entropy, for deriving power laws.

  11. Maximum speed of dewetting on a fiber

    NARCIS (Netherlands)

    Chan, Tak Shing; Gueudre, Thomas; Snoeijer, Jacobus Hendrikus

    2011-01-01

    A solid object can be coated by a nonwetting liquid since a receding contact line cannot exceed a critical speed. We theoretically investigate this forced wetting transition for axisymmetric menisci on fibers of varying radii. First, we use a matched asymptotic expansion and derive the maximum speed

  12. Maximum potential preventive effect of hip protectors

    NARCIS (Netherlands)

    van Schoor, N.M.; Smit, J.H.; Bouter, L.M.; Veenings, B.; Asma, G.B.; Lips, P.T.A.M.

    2007-01-01

    OBJECTIVES: To estimate the maximum potential preventive effect of hip protectors in older persons living in the community or homes for the elderly. DESIGN: Observational cohort study. SETTING: Emergency departments in the Netherlands. PARTICIPANTS: Hip fracture patients aged 70 and older who

  13. Maximum gain of Yagi-Uda arrays

    DEFF Research Database (Denmark)

    Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.

    1971-01-01

    Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum. Yagi–Uda arrays with equal and unequal spacing have also been optimised with experimental verification.

  14. Correlation between maximum dry density and cohesion

    African Journals Online (AJOL)

    HOD

    represents maximum dry density, signifies plastic limit and is liquid limit. Researchers [6, 7] estimate compaction parameters. Aside from the correlation existing between compaction parameters and other physical quantities there are some other correlations that have been investigated by other researchers. The well-known.

  15. Weak scale from the maximum entropy principle

    Science.gov (United States)

    Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu

    2015-03-01

    The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S³ universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle in general, for a few parameters of the SM we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN² / (M_pl y_e⁵), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
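
    As a rough order-of-magnitude check of the quoted scaling (our own arithmetic, not the paper's, assuming a reduced Planck mass M_pl ≈ 2.4 × 10¹⁸ GeV, T_BBN ≈ 1 MeV, and y_e ≈ 2.9 × 10⁻⁶):

```latex
v_h \sim \frac{T_{\mathrm{BBN}}^{\,2}}{M_{pl}\, y_e^{5}}
    \sim \frac{(10^{-3}\,\mathrm{GeV})^{2}}
              {(2.4\times10^{18}\,\mathrm{GeV})\,(2.9\times10^{-6})^{5}}
    \sim 2\times10^{2}\,\mathrm{GeV},
```

    consistent with v_h = O(300 GeV) at the order-of-magnitude level at which the relation is stated.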

  16. The maximum-entropy method in superspace

    Czech Academy of Sciences Publication Activity Database

    van Smaalen, S.; Palatinus, Lukáš; Schneider, M.

    2003-01-01

    Roč. 59, - (2003), s. 459-469 ISSN 0108-7673 Grant - others:DFG(DE) XX Institutional research plan: CEZ:AV0Z1010914 Keywords : maximum-entropy method, * aperiodic crystals * electron density Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 1.558, year: 2003

  17. Achieving maximum sustainable yield in mixed fisheries

    NARCIS (Netherlands)

    Ulrich, Clara; Vermard, Youen; Dolder, Paul J.; Brunel, Thomas; Jardim, Ernesto; Holmes, Steven J.; Kempf, Alexander; Mortensen, Lars O.; Poos, Jan Jaap; Rindorf, Anna

    2017-01-01

    Achieving single species maximum sustainable yield (MSY) in complex and dynamic fisheries targeting multiple species (mixed fisheries) is challenging because achieving the objective for one species may mean missing the objective for another. The North Sea mixed fisheries are a representative example

  18. 5 CFR 534.203 - Maximum stipends.

    Science.gov (United States)

    2010-01-01

    ... maximum stipend established under this section. (e) A trainee at a non-Federal hospital, clinic, or medical or dental laboratory who is assigned to a Federal hospital, clinic, or medical or dental... Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY UNDER OTHER SYSTEMS Student...

  19. Direct observation of an isopolyhalomethane O-H insertion reaction with water: Picosecond time-resolved resonance Raman (ps-TR3) study of the isobromoform reaction with water to produce a CHBr2OH product

    International Nuclear Information System (INIS)

    Kwok, W.M.; Zhao Cunyuan; Li Yunliang; Guan Xiangguo; Phillips, David Lee

    2004-01-01

    Picosecond time-resolved resonance Raman (ps-TR3) spectroscopy was used to obtain the first definitive spectroscopic observation of an isopolyhalomethane O-H insertion reaction with water. The ps-TR3 spectra show that isobromoform is produced within several picoseconds after photolysis of CHBr3 and then reacts on the hundreds-of-picoseconds time scale with water to produce a CHBr2OH reaction product. Photolysis of low concentrations of bromoform in aqueous solution resulted in noticeable formation of the strong acid HBr. Ab initio calculations show that isobromoform can react with water to produce a CHBr2(OH) O-H insertion reaction product and an HBr leaving group. This is consistent with both the ps-TR3 experiments that observe the reaction of isobromoform with water to form a CHBr2(OH) product and photolysis experiments that show HBr acid formation. We briefly discuss the implications of these results for the phase-dependent behavior of polyhalomethane photochemistry in the gas phase versus water-solvated environments.

  20. Detection of the clostridial hydrogenase gene activity as a bio-index in a molasses wastewater bio-hydrogen producing system by real time PCR and FISH/ flow cytometry

    International Nuclear Information System (INIS)

    Jui-Jen Chang; Ping-Chi Hsu; Chi-Wa Choi; Sian-Jhong Yu; Cheng-Yu Ho; Wei-En Chen; Jiunn-Jyi Lay; Chieh-Chen Huang; Fu-Shyan Wen

    2006-01-01

    Hydrogenase is a key enzyme used by obligate anaerobic clostridia to produce hydrogen. In this study a fermentative system with molasses wastewater as nutrient was used to produce hydrogen. To establish the relationship between the variation of clostridial hydrogenase gene activity and the hydrogen production of this system during the culturing period, total cellular RNA isolated at different growth stages was subjected to real-time PCR using a primer pair designed according to the conserved sequence of clostridial hydrogenase genes. Cell samples at the corresponding growth stages were subjected to in situ reverse transcriptase polymerase chain reaction (in situ RT-PCR) using the same primers and then to fluorescence in situ hybridization (FISH) using a clostridial hydrogenase gene-specific DNA probe. Clostridial cells expressing hydrogenase gene activity could be detected by fluorescence microscopy. This is the first time that hydrogen-producing activity in a mixed culture has been successfully studied by means of FISH of hydrogenase mRNA. In addition, 16S rDNA amplified from total cellular DNA was analyzed by denaturing gradient gel electrophoresis (DGGE) to reveal the bacterial diversity in the fermentative system; FISH and flow cytometry targeting 16S rRNA were also carried out to quantify the populations of clostridia and total eubacteria in the system. (authors)

  1. Calculation on maximum accumulation of Pu-239 and Pu-241 from aqueous homogeneous reactor

    International Nuclear Information System (INIS)

    Ikhlas H Siregar; Frida Agung R; Suharyana; Azizul Khakim; Dahman Siregar

    2016-01-01

    Calculations of the maximum accumulation of Pu-239 and Pu-241 have been conducted using the MCNPX computer code for a UO_2(NO_3)_2 fuel solution enriched to 19.75% and operating at a temperature of 80°C. The AHR design was simulated with a cylindrical core 63.4 cm in diameter and 122 cm high. With this geometry the reactor was critical at a UO_2(NO_3)_2 solution density of 108 g U/L. The calculated multiplication factor (k_eff) of the AHR was 1.05284. Burn-up calculations were then performed for time intervals from 5 days to 285 years. It was found that the saturated concentration of Pu-239 is reached after 40-50 years of operation, producing 1.23 x 10² g with an activity of 7.645 Ci, while the operating time of the AHR to produce Pu-241 should be under 80 years, with a mass of 21.7 g and an activity of 2.247 x 10³ Ci. The accumulations of both isotopes are considered small, offering low potential for misuse in producing nuclear weapons. (author)

  2. Producers give prices a boost

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    Uranium producers came alive in August, helping spot prices crack the $8.00 barrier for the first time since March. The upper end of NUKEM's price range actually finished the month at $8.20. Scrambling to fulfill their long-term delivery contracts, producers dominate the market. In the span of three weeks, five producers came out for 2 million lbs U3O8, ultimately buying nearly 1.5 million lbs. One producer accounted for over half this volume. The major factor behind rising prices was that producers required specific origins to meet contract obligations. Buyers willing to accept open origins created the lower end of NUKEM's price range

  3. Maximum concentrations at work and maximum biologically tolerable concentration for working materials 1991

    International Nuclear Information System (INIS)

    1991-01-01

    The meaning of the term 'maximum concentration at work' with regard to various pollutants is discussed. Specifically, a number of dusts and smokes are dealt with. The evaluation criteria for maximum biologically tolerable concentrations of working materials are indicated. The working materials in question are carcinogenic substances or substances liable to cause allergies or to mutate the genome. (VT) [de]

  4. 75 FR 43840 - Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum Civil Monetary Penalties for...

    Science.gov (United States)

    2010-07-27

    ...-17530; Notice No. 2] RIN 2130-ZA03 Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum... remains at $250. These adjustments are required by the Federal Civil Penalties Inflation Adjustment Act [email protected] . SUPPLEMENTARY INFORMATION: The Federal Civil Penalties Inflation Adjustment Act of 1990...

  5. On the maximum of wave surface of sea waves

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, B

    1980-01-01

    This article treats the wave surface as a normal stationary random process in order to estimate the maximum of the wave surface in a given time interval by means of results from probability theory. The results are represented by formulas (13) to (19) in this article. It is proved that when the time interval approaches infinity, the formulas (3) and (6) for E(η_max) that were derived in the references (Cartwright, Longuet-Higgins) can also be derived from the asymptotic distribution of the maximum of the wave surface provided in this article. The advantage of the results obtained from this point of view as compared with the results of the references is discussed.

  6. Zipf's law, power laws and maximum entropy

    International Nuclear Information System (INIS)

    Visser, Matt

    2013-01-01

    Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified. (paper)
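
    The single-constraint route mentioned above follows the standard Lagrange-multiplier argument (a textbook sketch, not the paper's full treatment): maximize the Shannon entropy subject to normalization and a fixed mean logarithm of the observable,

```latex
\mathcal{L} = -\sum_i p_i \ln p_i
              + \lambda\Big(\sum_i p_i - 1\Big)
              - \alpha\Big(\sum_i p_i \ln x_i - \chi\Big),
\qquad
\frac{\partial \mathcal{L}}{\partial p_i}
   = -\ln p_i - 1 + \lambda - \alpha \ln x_i = 0
\;\Longrightarrow\;
p_i \propto x_i^{-\alpha},
```

    a pure power law, with the exponent α fixed by the value χ of the constraint.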

  7. Pareto versus lognormal: a maximum entropy test.

    Science.gov (United States)

    Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano

    2011-08-01

    It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.

  8. A Maximum Radius for Habitable Planets.

    Science.gov (United States)

    Alibert, Yann

    2015-09-01

    We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: 1- a surface temperature and pressure compatible with the existence of liquid water, and 2- no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the overall maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced for planets with higher Fe/Si ratios, and when taking into account irradiation effects on the structure of the gas envelope.

  9. Maximum parsimony on subsets of taxa.

    Science.gov (United States)

    Fischer, Mareike; Thatte, Bhalchandra D

    2009-09-21

    In this paper we investigate mathematical questions concerning the reliability (reconstruction accuracy) of Fitch's maximum parsimony algorithm for reconstructing the ancestral state given a phylogenetic tree and a character. In particular, we consider the question whether the maximum parsimony method applied to a subset of taxa can reconstruct the ancestral state of the root more accurately than when applied to all taxa, and we give an example showing that this indeed is possible. A surprising feature of our example is that ignoring a taxon closer to the root improves the reliability of the method. On the other hand, in the case of the two-state symmetric substitution model, we answer affirmatively a conjecture of Li, Steel and Zhang which states that under a molecular clock the probability that the state at a single taxon is a correct guess of the ancestral state is a lower bound on the reconstruction accuracy of Fitch's method applied to all taxa.
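
    Fitch's bottom-up pass, whose reconstruction accuracy the paper analyzes, can be sketched as follows for binary trees: take the intersection of the children's state sets when it is nonempty, otherwise their union, counting one state change per union; the root set contains the most parsimonious ancestral states. The tree encoding below is a hypothetical choice for illustration.

```python
def fitch(tree, leaf_states, root):
    """Bottom-up pass of Fitch's maximum parsimony algorithm on a binary tree.

    tree: dict mapping each internal node to its list of children
          (leaves are the nodes absent from the dict).
    leaf_states: dict mapping each leaf to its observed character state.
    Returns (set of candidate ancestral states at the root, parsimony score)."""
    score = 0

    def visit(node):
        nonlocal score
        if node not in tree:                 # leaf: singleton state set
            return {leaf_states[node]}
        sets = [visit(child) for child in tree[node]]
        inter = set.intersection(*sets)
        if inter:
            return inter
        score += 1                           # a state change is required here
        return set.union(*sets)

    return visit(root), score
```

    For example, with leaves a=A, b=C under node x, and leaf y=A under the root, the method reports root state {A} with one required change.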

  10. Maximum entropy analysis of liquid diffraction data

    International Nuclear Information System (INIS)

    Root, J.H.; Egelstaff, P.A.; Nickel, B.G.

    1986-01-01

    A maximum entropy method for reducing truncation effects in the inverse Fourier transform of the structure factor, S(q), to the pair correlation function, g(r), is described. The advantages and limitations of the method are explored with the PY hard sphere structure factor as model input data. An example using real data on liquid chlorine is then presented. It is seen that spurious structure is greatly reduced in comparison to traditional Fourier transform methods. (author)

  11. A Maximum Resonant Set of Polyomino Graphs

    Directory of Open Access Journals (Sweden)

    Zhang Heping

    2016-05-01

    A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square) with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.

  12. Maximum neutron flux at thermal nuclear reactors

    International Nuclear Information System (INIS)

    Strugar, P.

    1968-10-01

    Since actual research reactors are technically complicated and expensive facilities, it is important to achieve savings by appropriate reactor lattice configurations. There are a number of papers, and practical examples of reactors with a central reflector, dealing with spatial distributions of fuel elements that would result in higher neutron flux. A common disadvantage of all these solutions is that the choice of the best solution starts from anticipated spatial distributions of fuel elements; the weakness of these approaches is the lack of defined optimization criteria. The direct approach is defined as follows: determine the spatial distribution of fuel concentration starting from the condition of maximum neutron flux while fulfilling the thermal constraints. Determining the maximum neutron flux thus amounts to solving a variational problem beyond the reach of classical variational calculus. This variational problem has been successfully solved by applying the maximum principle of Pontryagin. The optimum distribution of fuel concentration was obtained in explicit analytical form. Thus, the spatial distribution of the neutron flux and the critical dimensions of a quite complex reactor system are calculated in a relatively simple way. In addition to the results being innovative, this approach is interesting because of the optimization procedure itself [sr]

  13. PTree: pattern-based, stochastic search for maximum parsimony phylogenies

    Directory of Open Access Journals (Sweden)

    Ivan Gregor

    2013-06-01

    Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we describe a stochastic search method for a maximum parsimony tree, implemented in a software package we named PTree. Our method is based on a new pattern-based technique that enables us to infer intermediate sequences efficiently where the incorporation of these sequences in the current tree topology yields a phylogenetic tree with a lower cost. Evaluation across multiple datasets showed that our method is comparable to the algorithms implemented in PAUP* or TNT, which are widely used by the bioinformatics community, in terms of topological accuracy and runtime. We show that our method can process large-scale datasets of 1,000–8,000 sequences. We believe that our novel pattern-based method enriches the current set of tools and methods for phylogenetic tree inference. The software is available under: http://algbio.cs.uni-duesseldorf.de/webapps/wa-download/.

  14. PTree: pattern-based, stochastic search for maximum parsimony phylogenies.

    Science.gov (United States)

    Gregor, Ivan; Steinbrück, Lars; McHardy, Alice C

    2013-01-01

    Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we describe a stochastic search method for a maximum parsimony tree, implemented in a software package we named PTree. Our method is based on a new pattern-based technique that enables us to infer intermediate sequences efficiently where the incorporation of these sequences in the current tree topology yields a phylogenetic tree with a lower cost. Evaluation across multiple datasets showed that our method is comparable to the algorithms implemented in PAUP* or TNT, which are widely used by the bioinformatics community, in terms of topological accuracy and runtime. We show that our method can process large-scale datasets of 1,000-8,000 sequences. We believe that our novel pattern-based method enriches the current set of tools and methods for phylogenetic tree inference. The software is available under: http://algbio.cs.uni-duesseldorf.de/webapps/wa-download/.

  15. MAXIMUM POWER POINT TRACKING SYSTEM FOR PHOTOVOLTAIC STATION: A REVIEW

    Directory of Open Access Journals (Sweden)

    I. Elzein

    2015-01-01

    In recent years there has been growing attention towards the use of renewable energy sources. Among them, solar energy is one of the most promising green energy resources due to its environmental sustainability and inexhaustibility. However, photovoltaic systems (PhV) suffer from high equipment cost and low efficiency. Moreover, the solar cell V-I characteristic is nonlinear and varies with irradiation and temperature. In general, there is a unique point of PhV operation, called the Maximum Power Point (MPP), at which the PV system operates with maximum efficiency and produces its maximum output power. The location of the MPP is not known in advance, but can be located either through calculation models or by search algorithms. MPPT techniques are therefore important for maintaining the PV array's high efficiency. Many different techniques for MPPT are discussed. This review paper will hopefully serve as a convenient tool for future work in PhV power conversion.
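
    One of the standard search algorithms such reviews cover is perturb & observe: repeatedly perturb the operating voltage, keep the direction if output power rose, and reverse it otherwise, so the operating point hill-climbs to the MPP. A minimal sketch under an idealized single-peak power curve (the curve, step size and step count are illustrative assumptions, not a model of a real panel):

```python
def perturb_and_observe(measure_power, v_start, dv=0.1, steps=200):
    """Hill-climbing (perturb & observe) MPPT sketch.

    measure_power: callable mapping operating voltage V to output power P.
    Returns the operating voltage after `steps` perturbations; in steady
    state it oscillates within about one step of the maximum power point."""
    v, direction = v_start, +1.0
    p_prev = measure_power(v)
    for _ in range(steps):
        v += direction * dv
        p = measure_power(v)
        if p < p_prev:            # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v
```

    The residual oscillation around the MPP is the classic drawback of perturb & observe that motivates the calculation-model alternatives mentioned in the abstract.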

  16. Maximum Safety Regenerative Power Tracking for DC Traction Power Systems

    Directory of Open Access Journals (Sweden)

    Guifu Du

    2017-02-01

    Direct current (DC) traction power systems are widely used in metro transport systems, with running rails usually being used as return conductors. When traction current flows through the running rails, a potential known as “rail potential” is generated between the rails and ground. Currently, abnormal rises of rail potential occur on many railway lines during operation. Excessively high rail potentials pose a threat to human life and to devices connected to the rails. In this paper, the effect of regenerative power distribution on rail potential is analyzed. Maximum safety regenerative power tracking is proposed for the control of maximum absolute rail potential and energy consumption during the operation of DC traction power systems. The dwell time of multiple trains at each station and the trigger voltage of the regenerative energy absorbing device (READ) are optimized based on an improved particle swarm optimization (PSO) algorithm to manage the distribution of regenerative power. In this way, the maximum absolute rail potential and energy consumption of DC traction power systems can be reduced. The operation data of Guangzhou Metro Line 2 are used in the simulations, and the results show that the scheme can reduce the maximum absolute rail potential and energy consumption effectively while guaranteeing safe, energy-saving operation of DC traction power systems.

  17. An Efficient Algorithm for the Maximum Distance Problem

    Directory of Open Access Journals (Sweden)

    Gabrielle Assunta Grün

    2001-12-01

    Efficient algorithms for temporal reasoning are essential in knowledge-based systems. This is central in many areas of Artificial Intelligence including scheduling, planning, plan recognition, and natural language understanding. As such, scalability is a crucial consideration in temporal reasoning. While reasoning in the interval algebra is NP-complete, reasoning in the less expressive point algebra is tractable. In this paper, we explore an extension to the work of Gerevini and Schubert which is based on the point algebra. In their seminal framework, temporal relations are expressed as a directed acyclic graph partitioned into chains and supported by a metagraph data structure, where time points or events are represented by vertices, and directed edges are labelled with < or ≤. They are interested in fast algorithms for determining the strongest relation between two events. They begin by developing fast algorithms for the case where all points lie on a chain. In this paper, we are interested in a generalization of this, namely the problem of finding the maximum "distance" between two vertices in a chain; this problem arises in real-world applications such as process control and crew scheduling. We describe an O(n) time preprocessing algorithm for the maximum distance problem on chains. It allows queries for the maximum number of < edges between two vertices to be answered in O(1) time. This matches the performance of the algorithm of Gerevini and Schubert for determining the strongest relation holding between two vertices in a chain.
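
    On a single chain, counting the < edges between two vertices after O(n) preprocessing with O(1) queries can be sketched with prefix sums. The label encoding below is an assumption for illustration, not Gerevini and Schubert's metagraph data structure:

```python
def preprocess(labels):
    """labels[k] is the label ('<' or '<=') of the chain edge from vertex k
    to vertex k+1. Returns prefix counts of strict (<) edges, built in O(n)."""
    prefix = [0]
    for lab in labels:
        prefix.append(prefix[-1] + (1 if lab == '<' else 0))
    return prefix

def max_distance(prefix, i, j):
    """O(1) query: number of < edges between vertices i and j (i <= j)."""
    return prefix[j] - prefix[i]
```

    Each query is a single subtraction, which is what makes the O(1) bound possible once the prefix array is in place.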

  18. A Hybrid Maximum Power Point Search Method Using Temperature Measurements in Partial Shading Conditions

    Directory of Open Access Journals (Sweden)

    Mroczka Janusz

    2014-12-01

    Photovoltaic panels have non-linear current-voltage characteristics and produce maximum power at only one point, called the maximum power point. In the case of uniform illumination a single solar panel shows only one power maximum, which is also the global maximum power point. For an irregularly illuminated photovoltaic panel, many local maxima can be observed on the power-voltage curve, and only one of them is the global maximum. The proposed algorithm detects whether a solar panel is in uniform insolation conditions. An appropriate strategy for tracking the maximum power point is then selected by a decision algorithm. The proposed method is simulated in an environment created by the authors, which allows simulating photovoltaic panels under real conditions of lighting, temperature and shading.

  19. Paralisia unilateral de prega vocal: associação e correlação entre tempos máximos de fonação, posição e ângulo de afastamento Unilateral vocal fold paralysis: association and correlation between maximum phonation time, position and displacement angle

    Directory of Open Access Journals (Sweden)

    Luciane M. Steffen

    2004-08-01

    Vocal fold paralysis (VFP) results from injury to the vagus nerve or its branches and may impair functions that require glottic closure. The maximum phonation time (MPT) is a test routinely applied to dysphonic patients to assess glottic efficiency; it is frequently used in cases of VFP, in which its values are decreased. The classical clinical classification of the paralyzed vocal fold position as median, paramedian, intermediate, and abducted (cadaveric) has been a subject of controversy. AIM: To verify the association and correlation between MPT and the position of the paralyzed vocal fold (PVF), and between MPT and the displacement angle of the PVF; to measure the displacement angle from the midline for the different PVF positions and correlate it with the clinical classification. STUDY DESIGN: Retrospective clinical study. MATERIAL AND METHOD: The records of 86 individuals with unilateral vocal fold paralysis were reviewed, their videoendoscopic examinations analyzed, and the displacement angle of the PVF measured with a computer program. RESULTS: The association and correlation between MPT and each position assumed by the PVF was statistically significant only for /z/ in the median position. The association and correlation between MPT and the displacement angle of the PVF held for /i/ and /u/. When associating and correlating angle measurements with position, statistical significance was observed for the abducted position. CONCLUSIONS: In this study it was not possible to determine the positions assumed by the PVF from the MPT, nor to correlate them with angle measurements.

  20. Sequential and Parallel Algorithms for Finding a Maximum Convex Polygon

    DEFF Research Database (Denmark)

    Fischer, Paul

    1997-01-01

    This paper investigates the problem where one is given a finite set of n points in the plane, each of which is labeled either "positive" or "negative". We consider bounded convex polygons, the vertices of which are positive points and which do not contain any negative point. It is shown how such a polygon which is maximal with respect to area can be found in time O(n³ log n). With the same running time one can also find such a polygon which contains a maximum number of positive points. If, in addition, the number of vertices of the polygon is restricted to be at most M, then the running time becomes O(M n³ log n). It is also shown how to find a maximum convex polygon which contains a given point in time O(n³ log n). Two parallel algorithms for the basic problem are also presented. The first one runs in time O(n log n) using O(n²) processors; the second one has polylogarithmic time but needs O…

  1. A peptidomic approach for monitoring and characterising peptide cyanotoxins produced in Italian lakes by matrix-assisted laser desorption/ionisation and quadrupole time-of-flight mass spectrometry.

    Science.gov (United States)

    Ferranti, Pasquale; Nasi, Antonella; Bruno, Milena; Basile, Adriana; Serpe, Luigi; Gallo, Pasquale

    2011-05-15

    In recent years, the occurrence of cyanobacterial blooms in eutrophic freshwaters has been described all over the world, including most European countries. Blooms of cyanobacteria may produce mixtures of toxic secondary metabolites, called cyanotoxins. Among these, the most studied are microcystins, a group of cyclic heptapeptides, because of their potent hepatotoxicity and activity as tumour promoters. Other peptide cyanotoxins have been described whose structure and toxicity have not been thoroughly studied. Herein we present a peptidomic approach aimed to characterise and quantify the peptide cyanotoxins produced in two Italian lakes, Averno and Albano. The procedure was based on matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI-TOF-MS) analysis for rapid detection and profiling of the peptide mixture complexity, combined with liquid chromatography/electrospray ionisation quadrupole time-of-flight tandem mass spectrometry (LC/ESI-Q-TOF-MS/MS), which provided unambiguous structural identification of the main compounds as well as accurate quantitative analysis of microcystins. In the case of Lake Averno, a novel variant of microcystin-RR and two novel anabaenopeptin variants (anabaenopeptin B(1) and anabaenopeptin F(1)), presenting homoarginine in place of the commonly found arginine, were detected and characterised. In Lake Albano, the peculiar peptide patterns in different years were compared, as an example of the potential of the peptidomic approach for fast screening analysis, prior to fine structural analysis and determination of cyanotoxins, which included six novel aeruginosin variants. This approach allows for wide-range monitoring of cyanobacteria blooms and for collecting data to evaluate possible health risks to consumers, through the panel of the compounds produced over different years. Copyright © 2011 John Wiley & Sons, Ltd.

  2. County-Level Climate Uncertainty for Risk Assessments: Volume 4 Appendix C - Historical Maximum Near-Surface Air Temperature.

    Energy Technology Data Exchange (ETDEWEB)

    Backus, George A.; Lowry, Thomas Stephen; Jones, Shannon M; Walker, La Tonya Nicole; Roberts, Barry L; Malczynski, Leonard A.

    2017-06-01

    This report uses the CMIP5 series of climate model simulations to produce country-level uncertainty distributions for use in socioeconomic risk assessments of climate change impacts. It provides appropriate probability distributions, by month, for 169 countries and autonomous-areas on temperature, precipitation, maximum temperature, maximum wind speed, humidity, runoff, soil moisture and evaporation for the historical period (1976-2005), and for decadal time periods to 2100. It also provides historical and future distributions for the Arctic region on ice concentration, ice thickness, age of ice, and ice ridging in 15-degree longitude arc segments from the Arctic Circle to 80 degrees latitude, plus two polar semicircular regions from 80 to 90 degrees latitude. The uncertainty is meant to describe the lack of knowledge rather than imprecision in the physical simulation because the emphasis is on unfalsified risk and its use to determine potential socioeconomic impacts. The full report is contained in 27 volumes.

  3. County-Level Climate Uncertainty for Risk Assessments: Volume 18 Appendix Q - Historical Maximum Near-Surface Wind Speed.

    Energy Technology Data Exchange (ETDEWEB)

    Backus, George A.; Lowry, Thomas Stephen; Jones, Shannon M; Walker, La Tonya Nicole; Roberts, Barry L; Malczynski, Leonard A.

    2017-06-01

    This report uses the CMIP5 series of climate model simulations to produce country-level uncertainty distributions for use in socioeconomic risk assessments of climate change impacts. It provides appropriate probability distributions, by month, for 169 countries and autonomous-areas on temperature, precipitation, maximum temperature, maximum wind speed, humidity, runoff, soil moisture and evaporation for the historical period (1976-2005), and for decadal time periods to 2100. It also provides historical and future distributions for the Arctic region on ice concentration, ice thickness, age of ice, and ice ridging in 15-degree longitude arc segments from the Arctic Circle to 80 degrees latitude, plus two polar semicircular regions from 80 to 90 degrees latitude. The uncertainty is meant to describe the lack of knowledge rather than imprecision in the physical simulation because the emphasis is on unfalsified risk and its use to determine potential socioeconomic impacts. The full report is contained in 27 volumes.

  4. The first occurence of a pleistocenic coral along the Brazilian coast - Age dating of the maximum of the penultimate transgression

    International Nuclear Information System (INIS)

    Martin, L.; Bittencourt, A.C.S.P.; Silva Vilas Boas, G. da

    1982-01-01

    Age dating work on a coral from Olivenca, Bahia, Brazil, has disclosed the first occurrence of a Pleistocene coral along the Brazilian coast. This coral has its top at the present high-tide level and is covered by a series of beach ridges formed after the maximum of the penultimate transgression, which rose above present sea level. Five determinations by the Ionium (230Th)/Uranium method produced ages ranging from 116,000 to 142,000 years B.P., indicating that the maximum in the area took place 120,000-125,000 years B.P., consistent with its documentation in other parts of the world. At that time, mean sea level was 8 ± 2 m above the present. (Author) [pt]

  5. Modelling information flow along the human connectome using maximum flow.

    Science.gov (United States)

    Lyoo, Youngwook; Kim, Jieun E; Yoon, Sujung

    2018-01-01

    The human connectome is a complex network that transmits information between interlinked brain regions. Using graph theory, previously well-known network measures of integration between brain regions have been constructed under the key assumption that information flows strictly along the shortest paths possible between two nodes. However, it is now apparent that information does flow through non-shortest paths in many real-world networks such as cellular networks, social networks, and the internet. In the current hypothesis, we present a novel framework using the maximum flow to quantify information flow along all possible paths within the brain, so as to implement an analogy to network traffic. We hypothesize that the connection strengths of brain networks represent a limit on the amount of information that can flow through the connections per unit of time. This allows us to compute the maximum amount of information flow between two brain regions along all possible paths. Using this novel framework of maximum flow, previous network topological measures are expanded to account for information flow through non-shortest paths. The most important advantage of the current approach using maximum flow is that it can integrate the weighted connectivity data in a way that better reflects the real information flow of the brain network. The current framework and its concept regarding maximum flow provides insight on how network structure shapes information flow in contrast to graph theory, and suggests future applications such as investigating structural and functional connectomes at a neuronal level. Copyright © 2017 Elsevier Ltd. All rights reserved.
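
    The maximum-flow computation underlying this framework is the classic graph-theoretic one; a minimal sketch with the Edmonds-Karp algorithm (BFS augmenting paths) follows. The node names and capacities are hypothetical stand-ins for brain regions and connection strengths, not data from the paper.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp maximum flow. cap: dict u -> {v: capacity}.
    Returns the maximum amount that can flow from s to t along all paths."""
    # Build the residual graph, adding zero-capacity reverse edges.
    res = {u: dict(nbrs) for u, nbrs in cap.items()}
    for u, nbrs in cap.items():
        for v in nbrs:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path s -> t in the residual graph.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow                       # no augmenting path left
        # Recover the path, find its bottleneck, and update residuals.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= bottleneck
            res[v][u] += bottleneck
        flow += bottleneck
```

    Unlike shortest-path measures, the returned value accounts for capacity carried along non-shortest routes, which is exactly the property the hypothesis exploits.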

  6. Maximum entropy decomposition of quadrupole mass spectra

    International Nuclear Information System (INIS)

    Toussaint, U. von; Dose, V.; Golan, A.

    2004-01-01

    We present an information-theoretic method called generalized maximum entropy (GME) for decomposing mass spectra of gas mixtures from noisy measurements. In this GME approach to the noisy, underdetermined inverse problem, the joint entropies of concentration, cracking, and noise probabilities are maximized subject to the measured data. This provides a robust estimation of the unknown cracking patterns and the concentrations of the contributing molecules. The method is applied to mass spectroscopic data of hydrocarbons, and the estimates are compared with those obtained with a Bayesian approach. We show that the GME method is efficient and computationally fast.

  7. Maximum power operation of interacting molecular motors

    DEFF Research Database (Denmark)

    Golubeva, Natalia; Imparato, Alberto

    2013-01-01

    We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.

  8. Maximum entropy method in momentum density reconstruction

    International Nuclear Information System (INIS)

    Dobrzynski, L.; Holas, A.

    1997-01-01

    The Maximum Entropy Method (MEM) is applied to the reconstruction of the 3-dimensional electron momentum density distributions observed through the set of Compton profiles measured along various crystallographic directions. It is shown that the reconstruction of electron momentum density may be reliably carried out with the aid of a simple iterative algorithm suggested originally by Collins. A number of distributions have been simulated in order to check the performance of MEM. It is shown that MEM can be recommended as a model-free approach. (author). 13 refs, 1 fig

  9. On the maximum drawdown during speculative bubbles

    Science.gov (United States)

    Rotundo, Giulia; Navarra, Mauro

    2007-08-01

    A taxonomy of large financial crashes proposed in the literature locates the burst of speculative bubbles due to endogenous causes in the framework of extreme stock market crashes, defined as falls of market prices that are outliers with respect to the bulk of the drawdown price movement distribution. This paper goes deeper into the analysis, providing a further characterization of the rising part of such selected bubbles through the examination of the drawdown and maximum drawdown movements of indices prices. The analysis of drawdown duration is also performed and is the core of the risk measure estimated here.
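
    The quantity examined here has a standard computation: a drawdown is the decline from the running peak, and the maximum drawdown is the largest such relative decline over the series. A minimal sketch (the price series in the usage example is illustrative, not market data):

```python
def max_drawdown(prices):
    """Largest peak-to-trough relative decline over a price series,
    returned as a fraction of the peak (0.0 if prices never fall)."""
    peak = prices[0]
    worst = 0.0
    for p in prices:
        peak = max(peak, p)                  # running peak so far
        worst = max(worst, (peak - p) / peak)
    return worst
```

    For example, a series that peaks at 120 and later falls to 80 has a maximum drawdown of 40/120 ≈ 33%, regardless of any recovery in between.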

  10. Multi-Channel Maximum Likelihood Pitch Estimation

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2012-01-01

    In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence…

  11. Conductivity maximum in a charged colloidal suspension

    Energy Technology Data Exchange (ETDEWEB)

    Bastea, S

    2009-01-27

    Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.

  12. Dynamical maximum entropy approach to flocking.

    Science.gov (United States)

    Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M

    2014-04-01

    We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.

  13. Maximum Temperature Detection System for Integrated Circuits

    Science.gov (United States)

    Frankiewicz, Maciej; Kos, Andrzej

    2015-03-01

    The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path and a digital part designed in VHDL. Analogue parts of the circuit were designed with a full-custom technique. The system is part of a temperature-controlled oscillator circuit - a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated for thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.

  14. Maximum entropy PDF projection: A review

    Science.gov (United States)

    Baggenstoss, Paul M.

    2017-06-01

    We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.

  15. Maximum a posteriori decoder for digital communications

    Science.gov (United States)

    Altes, Richard A. (Inventor)

    1997-01-01

    A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.

  16. Improved Maximum Parsimony Models for Phylogenetic Networks.

    Science.gov (United States)

    Van Iersel, Leo; Jones, Mark; Scornavacca, Celine

    2018-05-01

    Phylogenetic networks are well suited to represent evolutionary histories comprising reticulate evolution. Several methods aiming at reconstructing explicit phylogenetic networks have been developed in the last two decades. In this article, we propose a new definition of maximum parsimony for phylogenetic networks that makes it possible to model biological scenarios that cannot be modeled by the definitions currently present in the literature (namely, the "hardwired" and "softwired" parsimony). Building on this new definition, we provide several algorithmic results that lay the foundations for new parsimony-based methods for phylogenetic network reconstruction.

  17. Ancestral sequence reconstruction with Maximum Parsimony

    OpenAIRE

    Herbst, Lina; Fischer, Mareike

    2017-01-01

    One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference as well as for ancestral sequence inference is Maximum Parsimony (...

  18. Computing the stretch factor and maximum detour of paths, trees, and cycles in the normed space

    DEFF Research Database (Denmark)

    Wulff-Nilsen, Christian; Grüne, Ansgar; Klein, Rolf

    2012-01-01

    We give an Ω(n log n) lower bound in the algebraic computation tree model and describe a worst-case O(σn log² n) time algorithm for computing the stretch factor or maximum detour of a path embedded in the plane with a weighted fixed orientation metric defined by σ ...... time. We also obtain an optimal O(n) time algorithm for computing the maximum detour of a monotone rectilinear path in the L₁ plane.

  19. Objective Bayesianism and the Maximum Entropy Principle

    Directory of Open Access Journals (Sweden)

    Jon Williamson

    2013-09-01

    Full Text Available Objective Bayesian epistemology invokes three norms: the strengths of our beliefs should be probabilities; they should be calibrated to our evidence of physical probabilities; and they should otherwise equivocate sufficiently between the basic propositions that we can express. The three norms are sometimes explicated by appealing to the maximum entropy principle, which says that a belief function should be a probability function, from all those that are calibrated to evidence, that has maximum entropy. However, the three norms of objective Bayesianism are usually justified in different ways. In this paper, we show that the three norms can all be subsumed under a single justification in terms of minimising worst-case expected loss. This, in turn, is equivalent to maximising a generalised notion of entropy. We suggest that requiring language invariance, in addition to minimising worst-case expected loss, motivates maximisation of standard entropy as opposed to maximisation of other instances of generalised entropy. Our argument also provides a qualified justification for updating degrees of belief by Bayesian conditionalisation. However, conditional probabilities play a less central part in the objective Bayesian account than they do under the subjective view of Bayesianism, leading to a reduced role for Bayes’ Theorem.
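    The maximum entropy principle invoked here can be made concrete with the classic dice example: among all distributions on {1,...,6} calibrated to a prescribed mean, the entropy maximiser is an exponential family, recoverable by bisection on the Lagrange multiplier. The sketch below is a standard illustration of the principle, not taken from the paper.

```python
import math

faces = range(1, 7)

def maxent_mean(lam):
    """Mean of the exponential-family pmf p_i proportional to exp(lam * i)."""
    w = [math.exp(lam * i) for i in faces]
    return sum(i * wi for i, wi in zip(faces, w)) / sum(w)

def maxent_dist(target_mean, lo=-10.0, hi=10.0, iters=100):
    """Max-entropy pmf on {1..6} with E[X] = target_mean, found by bisection
    on the Lagrange multiplier (the mean is increasing in lam)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if maxent_mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * i) for i in faces]
    z = sum(w)
    return [wi / z for wi in w]

# a mean constraint of 4.5 pulls the belief function away from uniform,
# but only as far as the evidence requires
p = maxent_dist(4.5)
```

The result equivocates as much as the calibration constraint allows: the pmf is tilted exponentially toward high faces, with no further structure imposed.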

  20. Einstein-Dirac theory in spin maximum I

    International Nuclear Information System (INIS)

    Crumeyrolle, A.

    1975-01-01

    A unitary Einstein-Dirac theory, first in spin maximum 1, is constructed. An original feature of this article is that it is written without any tetrad techniques; only basic notions and existence conditions for spinor structures on pseudo-Riemannian fibre bundles are used. A coupling between gravitation and the electromagnetic field is pointed out, in the geometric setting of the tangent bundle over space-time. Generalized Maxwell equations for inductive media in the presence of a gravitational field are obtained. The enlarged Einstein-Schroedinger theory gives a particular case of this E.D. theory; E.S. theory is a truncated E.D. theory in spin maximum 1. A close relation between the torsion-vector and Schroedinger's potential exists, and the nullity of the torsion-vector has a spinor meaning. Finally the Petiau-Duffin-Kemmer theory is incorporated in this geometric setting [fr

  1. A Maximum Principle for SDEs of Mean-Field Type

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Daniel, E-mail: danieand@math.kth.se; Djehiche, Boualem, E-mail: boualem@math.kth.se [Royal Institute of Technology, Department of Mathematics (Sweden)

    2011-06-15

    We study the optimal control of a stochastic differential equation (SDE) of mean-field type, where the coefficients are allowed to depend on some functional of the law as well as the state of the process. Moreover the cost functional is also of mean-field type, which makes the control problem time inconsistent in the sense that the Bellman optimality principle does not hold. Under the assumption of a convex action space a maximum principle of local form is derived, specifying the necessary conditions for optimality. These are also shown to be sufficient under additional assumptions. This maximum principle differs from the classical one, where the adjoint equation is a linear backward SDE, since here the adjoint equation turns out to be a linear mean-field backward SDE. As an illustration, we apply the result to the mean-variance portfolio selection problem.

  2. A Maximum Principle for SDEs of Mean-Field Type

    International Nuclear Information System (INIS)

    Andersson, Daniel; Djehiche, Boualem

    2011-01-01

    We study the optimal control of a stochastic differential equation (SDE) of mean-field type, where the coefficients are allowed to depend on some functional of the law as well as the state of the process. Moreover the cost functional is also of mean-field type, which makes the control problem time inconsistent in the sense that the Bellman optimality principle does not hold. Under the assumption of a convex action space a maximum principle of local form is derived, specifying the necessary conditions for optimality. These are also shown to be sufficient under additional assumptions. This maximum principle differs from the classical one, where the adjoint equation is a linear backward SDE, since here the adjoint equation turns out to be a linear mean-field backward SDE. As an illustration, we apply the result to the mean-variance portfolio selection problem.

  3. Maximum total organic carbon limit for DWPF melter feed

    International Nuclear Information System (INIS)

    Choi, A.S.

    1995-01-01

    DWPF recently decided to control the potential flammability of melter off-gas by limiting the total carbon content in the melter feed and maintaining adequate conditions for combustion in the melter plenum. With this new strategy, all the LFL analyzers and associated interlocks and alarms were removed from both the primary and backup melter off-gas systems. Subsequently, D. Iverson of DWPF-T&E requested that SRTC determine the maximum allowable total organic carbon (TOC) content in the melter feed, which can be implemented as part of the Process Requirements for melter feed preparation (PR-S04). The maximum TOC limit thus determined in this study was about 24,000 ppm on an aqueous slurry basis. At TOC levels below this, the peak concentration of combustible components in the quenched off-gas will not exceed 60 percent of the LFL during off-gas surges of magnitudes up to three times nominal, provided that the melter plenum temperature and the air purge rate to the BUFC are monitored and controlled above 650 degrees C and 220 lb/hr, respectively. Appropriate interlocks should discontinue feeding when one or both of these conditions are not met. Both the magnitude and duration of an off-gas surge have a major impact on the maximum TOC limit, since they directly affect the melter plenum temperature and combustion. Although the data obtained during recent DWPF melter startup tests showed that the peak magnitude of a surge can be greater than three times nominal, the observed duration was considerably shorter, on the order of several seconds. The long surge duration assumed in this study has a greater impact on the plenum temperature than the peak magnitude, thus making the maximum TOC estimate conservative. Two models were used to make the necessary calculations to determine the TOC limit.
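    The 60-percent-of-LFL criterion above rests on the flammability limit of a gas mixture. One standard estimate for a fuel blend, not necessarily one of the two models used in the study, is Le Chatelier's rule; the component limits below are textbook values and the blend fractions are purely illustrative.

```python
# Le Chatelier's rule: LFL_mix = 1 / sum(y_i / LFL_i), where y_i is the
# fraction of fuel i within the combustible portion of the gas and LFL_i
# its pure-component lower flammability limit in air.
LFL = {"H2": 4.0, "CO": 12.5, "CH4": 5.0}            # vol-% in air (textbook values)
fuel_fractions = {"H2": 0.5, "CO": 0.3, "CH4": 0.2}  # illustrative blend

lfl_mix = 1.0 / sum(y / LFL[g] for g, y in fuel_fractions.items())
# a 60%-of-LFL control target for this blend would then be 0.6 * lfl_mix
```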

  4. Analogue of Pontryagin's maximum principle for multiple integrals minimization problems

    OpenAIRE

    Mikhail, Zelikin

    2016-01-01

    A theorem analogous to Pontryagin's maximum principle is proved for multiple integrals. Unlike the usual maximum principle, the maximum should be taken not over all matrices, but only over matrices of rank one. Examples are given.

  5. Lake Basin Fetch and Maximum Length/Width

    Data.gov (United States)

    Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...

  6. Fuzzy Controller Design Using FPGA for Photovoltaic Maximum Power Point Tracking

    OpenAIRE

    Basil M Hamed; Mohammed S. El-Moghany

    2012-01-01

    A photovoltaic cell has an optimum operating point at which it delivers maximum power. To obtain maximum power from a photovoltaic array, a photovoltaic power system usually requires a Maximum Power Point Tracking (MPPT) controller. This paper presents a small-power photovoltaic control system based on fuzzy control, with FPGA technology used for the design and implementation of the MPPT. The system is composed of a photovoltaic module, a buck converter and the fuzzy logic controller implemented on FPGA for controlling the on/off time of the MOSF...

  7. Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting.

    Science.gov (United States)

    Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L

    2016-08-01

    This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.
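    The conventional MR fingerprinting reconstruction that the ML framework is compared against is dictionary matching: pick the dictionary entry with the largest normalised inner product against the measured signal evolution. The toy sketch below uses an exponential-decay dictionary in place of a Bloch-simulated one; all names and values are illustrative.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 64)           # acquisition time points
t2_values = np.linspace(0.02, 0.4, 50)  # candidate T2 values (s)

# dictionary of signal evolutions, one row per candidate parameter,
# row-normalised so matching reduces to a largest-inner-product search
dictionary = np.exp(-t[None, :] / t2_values[:, None])
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)

true_t2 = t2_values[30]
signal = np.exp(-t / true_t2)           # noiseless measured evolution

scores = dictionary @ signal
est_t2 = t2_values[int(np.argmax(scores))]
```

By Cauchy-Schwarz the matched atom wins for noiseless data; the paper's point is that this matching step is exactly the first iteration of the ML reconstruction.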

  8. Maximum Profit Configurations of Commercial Engines

    Directory of Open Access Journals (Sweden)

    Yiran Chen

    2011-06-01

    Full Text Available An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by the initial conditions and the inherent characteristics of the two subsystems, while the different ways of transfer affect the model in respect of the specific forms of the price paths and the instantaneous commodity flow, i.e., the optimal configuration.

  9. The worst case complexity of maximum parsimony.

    Science.gov (United States)

    Carmel, Amir; Musa-Lempel, Noa; Tsur, Dekel; Ziv-Ukelson, Michal

    2014-11-01

    One of the core classical problems in computational biology is that of constructing the most parsimonious phylogenetic tree interpreting an input set of sequences from the genomes of evolutionarily related organisms. We reexamine the classical maximum parsimony (MP) optimization problem for the general (asymmetric) scoring matrix case, where rooted phylogenies are implied, and analyze the worst case bounds of three approaches to MP: The approach of Cavalli-Sforza and Edwards, the approach of Hendy and Penny, and a new agglomerative, "bottom-up" approach we present in this article. We show that the second and third approaches are faster than the first one by a factor of Θ(√n) and Θ(n), respectively, where n is the number of species.
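    All three approaches analysed in this record build on the small-parsimony dynamic program over a fixed rooted tree, which for a general (asymmetric) scoring matrix is Sankoff's algorithm. A minimal sketch, with a toy tree and an invented cost matrix (nothing here is taken from the article):

```python
import math

COST = {  # COST[parent_state][child_state]; asymmetric on purpose
    'A': {'A': 0, 'C': 2, 'G': 1, 'T': 2},
    'C': {'A': 2, 'C': 0, 'G': 2, 'T': 1},
    'G': {'A': 1, 'C': 2, 'G': 0, 'T': 2},
    'T': {'A': 2, 'C': 1, 'G': 2, 'T': 0},
}
STATES = 'ACGT'

def sankoff(node, leaf_state):
    """Return {state: minimum cost of the subtree under `node` if the node
    is assigned that state}; leaves are strings, internal nodes are pairs."""
    if isinstance(node, str):
        return {s: (0 if s == leaf_state[node] else math.inf) for s in STATES}
    left, right = node
    cl, cr = sankoff(left, leaf_state), sankoff(right, leaf_state)
    return {s: min(COST[s][t] + cl[t] for t in STATES)
             + min(COST[s][t] + cr[t] for t in STATES)
            for s in STATES}

leaves = {'w': 'A', 'x': 'A', 'y': 'G', 'z': 'T'}
tree = (('w', 'x'), ('y', 'z'))        # rooted binary tree ((w,x),(y,z))
root_costs = sankoff(tree, leaves)
best_score = min(root_costs.values())  # most parsimonious score for one site
```

The MP optimization problem then searches over tree topologies, with this per-tree scoring as the inner step whose repeated cost the worst-case bounds account for.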

  10. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics

    Directory of Open Access Journals (Sweden)

    Dongming Li

    2017-04-01

    Full Text Available An adaptive optics (AO system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
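    The Poisson maximum-likelihood deconvolution underlying such restoration schemes is, in its unregularised form, the classic Richardson-Lucy fixed-point iteration. The 1-D multi-frame sketch below omits the paper's regularisation, frame selection and PSF estimation; the data and PSF are toy values, not AO measurements.

```python
import numpy as np

def richardson_lucy_multiframe(frames, psfs, n_iter=200):
    """Multi-frame Richardson-Lucy: maximum-likelihood deconvolution under a
    Poisson noise model, averaging the multiplicative corrections over frames."""
    est = np.full_like(frames[0], frames[0].mean(), dtype=float)
    eps = 1e-12
    for _ in range(n_iter):
        correction = np.zeros_like(est)
        for y, h in zip(frames, psfs):
            blurred = np.convolve(est, h, mode='same')
            correction += np.convolve(y / (blurred + eps), h[::-1], mode='same')
        est *= correction / len(frames)
    return est

# toy scene: two point sources, observed twice through a known blur
truth = np.zeros(32)
truth[10], truth[20] = 5.0, 3.0
psf = np.array([0.25, 0.5, 0.25])
frames = [np.convolve(truth, psf, mode='same')] * 2
rec = richardson_lucy_multiframe(frames, [psf, psf])
```

The multiplicative update conserves total flux and keeps the estimate non-negative, which is why this family of iterations suits photon-limited imaging.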

  11. Review of probable maximum flood definition at B.C. Hydro

    International Nuclear Information System (INIS)

    Keenhan, P.T.; Kroeker, M.G.; Neudorf, P.A.

    1991-01-01

    Probable maximum floods (PMF) have been derived for British Columbia Hydro structures since the design of the W.A.C. Bennett Dam in 1965. A dam safety program for estimating PMF for structures designed before that time has been ongoing since 1979. The program, which has resulted in rehabilitative measures at dams not meeting current established standards, is now being directed at the more recently constructed larger structures on the Peace and Columbia rivers. Since 1965 detailed studies have produced 23 probable maximum precipitation (PMP) and 24 PMF estimates. What defines a PMF in British Columbia, in terms of an appropriate combination of meteorological conditions, varies with basin size and the climatic effect of mountain barriers. PMP is estimated using three methods: storm maximization and transposition, the orographic separation method, and modification of non-orographic PMP for orography. Details of, and problems encountered with, these methods are discussed. Tools or methods to assess meteorological limits for antecedent conditions and for limits to runoff during extreme events have not been developed and require research effort. 11 refs., 2 figs., 3 tabs

  12. Space and time resolved spectroscopy of laser-produced plasmas: A study of density-sensitive x-ray transitions in helium-like and neon-like ions

    International Nuclear Information System (INIS)

    Young, Bruce Kai Fong.

    1988-09-01

    The determination of level populations and detailed population mechanisms in dense plasmas has become an increasingly important problem in atomic physics. In this work, the density variation of line intensities and level populations in aluminum K-shell and molybdenum and silver L-shell emission spectra have been measured from high-powered, laser-produced plasmas. For each case, the density dependence of the observed line emission is due to the effect of high frequency electron-ion collisions on metastable levels. The density dependent line intensities vary greatly in laser-produced plasmas and can be used to extract detailed information concerning the population kinetics and level populations of the ions. The laser-plasmas had to be fully characterized in order to clearly compare the observed density dependence with atomic theory predictions. This has been achieved through the combined use of new diagnostic instruments and microdot targets which provided simultaneously space, time, and spectrally resolved data. The plasma temperatures were determined from the slope of the hydrogen-like recombination continuum. The time resolved electron density profiles were measured using multiple frame holographic interferometry. Thus, the density dependence of K-shell spectral lines could be clearly examined, independent of assumptions concerning the dynamics of the plasma. In aluminum, the electron density dependence of various helium-like line intensity ratios were measured. Standard collisional radiative equilibrium models fail to account for the observed density dependence measured for the He_α/IC ratio. Instead, a quasi-steady state atomic model based on a purely recombining plasma is shown to accurately predict the measured density dependence. This same recombining plasma calculation successfully models the density dependence of the high-n He_γ/He_β and He_δ/He_β helium-like resonance line intensity ratios

  13. Space and time resolved spectroscopy of laser-produced plasmas: A study of density-sensitive x-ray transitions in helium-like and neon-like ions

    Energy Technology Data Exchange (ETDEWEB)

    Young, Bruce Kai Fong

    1988-09-01

    The determination of level populations and detailed population mechanisms in dense plasmas has become an increasingly important problem in atomic physics. In this work, the density variation of line intensities and level populations in aluminum K-shell and molybdenum and silver L-shell emission spectra have been measured from high-powered, laser-produced plasmas. For each case, the density dependence of the observed line emission is due to the effect of high frequency electron-ion collisions on metastable levels. The density dependent line intensities vary greatly in laser-produced plasmas and can be used to extract detailed information concerning the population kinetics and level populations of the ions. The laser-plasmas had to be fully characterized in order to clearly compare the observed density dependence with atomic theory predictions. This has been achieved through the combined use of new diagnostic instruments and microdot targets which provided simultaneously space, time, and spectrally resolved data. The plasma temperatures were determined from the slope of the hydrogen-like recombination continuum. The time resolved electron density profiles were measured using multiple frame holographic interferometry. Thus, the density dependence of K-shell spectral lines could be clearly examined, independent of assumptions concerning the dynamics of the plasma. In aluminum, the electron density dependence of various helium-like line intensity ratios were measured. Standard collisional radiative equilibrium models fail to account for the observed density dependence measured for the He_α/IC ratio. Instead, a quasi-steady state atomic model based on a purely recombining plasma is shown to accurately predict the measured density dependence. This same recombining plasma calculation successfully models the density dependence of the high-n He_γ/He_β and He_δ/He_β helium-like resonance line intensity ratios.

  14. Força muscular respiratória, postura corporal, intensidade vocal e tempos máximos de fonação na Doença de Parkinson Respiratory muscle strength, body posture, vocal intensity and maximum phonation times in Parkinson Disease

    Directory of Open Access Journals (Sweden)

    Fernanda Vargas Ferreira

    2012-04-01

    Full Text Available PURPOSE: To examine respiratory muscle strength (RMS), body posture (BP), vocal intensity (VI) and maximum phonation times (MPT) in individuals with Parkinson Disease (PD) and in controls, according to gender, PD stage and level of physical activity (PA). METHODS: three men and two women with PD, aged 36 to 63 years (study cases - SC), and five subjects without neurological diseases, matched for age, gender and PA level (control cases - CC). RMS, BP, VI and MPT were evaluated. RESULTS: men: a more pronounced decrease of MPT, VI and RMS in the Parkinson patients, with more postural alterations in the elderly; women with and without PD: similar postural alterations, and a positive relation between disease stage, PA level and the other measures. CONCLUSIONS: Women with PD showed impaired VI; men with PD showed deficits in MPT, VI and RMS. Further studies with an interdisciplinary approach are suggested.

  15. Study of forecasting maximum demand of electric power

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, B.C.; Hwang, Y.J. [Korea Energy Economics Institute, Euiwang (Korea, Republic of)

    1997-08-01

    Looking at the past performance of power supply and demand in Korea, one striking phenomenon is the repeated periodic surplus and shortage of power generation facilities. Precise estimation and prediction of power demand is the basic work in establishing a supply plan and carrying out the right policy, since facilities investment in the power generation industry requires a tremendous amount of capital and a long construction period. The purpose of this study is to develop a model for the inference and prediction of a more precise maximum demand against this background. The non-parametric model considered in this study, which pays attention to meteorological factors such as temperature and humidity, does not assume a simple proportional relationship between these factors and maximum power demand; rather, they affect demand through complicated nonlinear interaction. The non-parametric inference technique introduces meteorological effects without imposing any prior assumption on the interaction of temperature and humidity. According to the analysis results, the non-parametric model that introduces the number of tropical nights, which captures the persistence of meteorological effects, has better predictive power than the linear model. The non-parametric model that considers both the number of tropical nights and the number of cooling days at the same time is the model recommended for predicting maximum demand. 7 refs., 6 figs., 9 tabs.
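    The non-parametric idea can be illustrated with a Nadaraya-Watson kernel smoother, a standard estimator of this family (not necessarily the study's exact model): peak load is predicted from a meteorological covariate without assuming a linear form. The data below are synthetic, purely for illustration.

```python
import math

temps = [20, 22, 24, 26, 28, 30, 32, 34]  # daily max temperature (C), synthetic
loads = [50, 51, 54, 59, 66, 75, 86, 99]  # observed peak demand (GW), nonlinear in temp

def nw_predict(t, bandwidth=2.0):
    """Nadaraya-Watson estimate: a Gaussian-kernel weighted average of the
    observed loads, so observations at nearby temperatures dominate."""
    w = [math.exp(-0.5 * ((t - ti) / bandwidth) ** 2) for ti in temps]
    return sum(wi * li for wi, li in zip(w, loads)) / sum(w)

pred = nw_predict(29.0)  # predicted peak demand for a 29 C day
```

No functional form is imposed; the bandwidth plays the role of the smoothing assumption, mirroring how the study lets temperature and humidity interact nonlinearly.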

  16. Cases in which ancestral maximum likelihood will be confusingly misleading.

    Science.gov (United States)

    Handelman, Tomer; Chor, Benny

    2017-05-07

    Ancestral maximum likelihood (AML) is a phylogenetic tree reconstruction criterion that "lies between" maximum parsimony (MP) and maximum likelihood (ML). ML has long been known to be statistically consistent. On the other hand, Felsenstein (1978) showed that MP is statistically inconsistent, and even positively misleading: there are cases where the parsimony criterion, applied to data generated according to one tree topology, will be optimized on a different tree topology. The question of whether AML is statistically consistent or not has been open for a long time. Mossel et al. (2009) have shown that AML can "shrink" short tree edges, resulting in a star tree with no internal resolution, which yields a better AML score than the original (resolved) model. This result implies that AML is statistically inconsistent, but not that it is positively misleading, because the star tree is compatible with any other topology. We show that AML is confusingly misleading: for some simple, four-taxa (resolved) tree, the ancestral likelihood optimization criterion is maximized on an incorrect (resolved) tree topology, as well as on a star tree (both with specific edge lengths), while the tree with the original, correct topology has strictly lower ancestral likelihood. Interestingly, the two short edges in the incorrect, resolved tree topology are of length zero, and are not adjacent, so this resolved tree is in fact a simple path. While for MP the underlying phenomenon can be described as long edge attraction, it turns out that here we have long edge repulsion. Copyright © 2017. Published by Elsevier Ltd.

  17. Maximum Range of a Projectile Thrown from Constant-Speed Circular Motion

    Science.gov (United States)

    Poljak, Nikola

    2016-01-01

    The problem of determining the angle θ at which a point mass launched from ground level with a given speed v₀ will reach a maximum distance is a standard exercise in mechanics. There are many possible ways of solving this problem, leading to the well-known answer of θ = π/4, producing a maximum range of D_max = v₀²/g.
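    The quoted result is easy to confirm numerically: with range R(θ) = v₀² sin(2θ)/g on level ground and no drag, a grid search over launch angles recovers θ = π/4 and D_max = v₀²/g. The values of g and v₀ below are arbitrary.

```python
import math

g, v0 = 9.81, 20.0

def launch_range(theta):
    """Range of a drag-free projectile launched at angle theta from level ground."""
    return v0 ** 2 * math.sin(2 * theta) / g

# grid search over launch angles in [0, pi/2]
best_theta = max((k * math.pi / 2000 for k in range(1001)), key=launch_range)
d_max = launch_range(best_theta)
```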

  18. Maximum support resistance with steel arch backfilling

    Energy Technology Data Exchange (ETDEWEB)

    1983-01-01

    A system of backfilling for roadway arch supports to replace timber and debris lagging is described. Produced in West Germany, it is known as the Bullflex system and consists of 23 cm diameter woven textile tubing which is inflated with a pumpable hydraulically-setting filler of the type normally used in mines. The tube is placed between the back of the support units and the rock face and creates an early-stage interlocking effect.

  19. Decree of 6 May 2017 defining the conditions of additional remuneration for electricity produced by electricity production installations using wind mechanical energy with a maximum of 6 wind turbines. Decree of 9 May 2014 defining purchase and additional remuneration conditions for electricity produced by installations mainly using biogas produced by methanization of matter resulting from urban or industrial waste water treatment. Decree of 9 May 2017 defining purchase conditions for electricity produced by building-mounted installations using photovoltaic solar energy, with an installed power less than or equal to 100 kilowatts, as referred to in point 3 of Article D.314-15 of the Energy Code, and located in continental metropolitan territory

    International Nuclear Information System (INIS)

    Royal, Segolene; Sapin, Michel

    2017-01-01

    This document gathers three legal texts which define, and where applicable give the elements and calculation methods for, the conditions of additional remuneration or purchase of electricity produced by small wind-energy installations, by biogas-based electricity production installations, and by building-mounted photovoltaic installations

  20. A simple maximum power point tracker for thermoelectric generators

    International Nuclear Information System (INIS)

    Paraskevas, Alexandros; Koutroulis, Eftichios

    2016-01-01

    Highlights: • A Maximum Power Point Tracking (MPPT) method for thermoelectric generators is proposed. • A power converter is controlled to operate on a pre-programmed locus. • The proposed MPPT technique has the advantage of operational and design simplicity. • The experimental average deviation from the MPP power of the TEG source is 1.87%. - Abstract: ThermoElectric Generators (TEGs) are capable of harvesting ambient thermal energy to power sensors, actuators, biomedical devices etc. in the μW up to several hundreds of watts range. In this paper, a Maximum Power Point Tracking (MPPT) method for TEG elements is proposed, based on controlling a power converter such that it operates on a pre-programmed locus of operating points close to the MPPs of the power–voltage curves of the TEG power source. Compared with previously proposed MPPT methods for TEGs, the technique presented in this paper has the advantage of operational and design simplicity. Thus, it can be implemented using off-the-shelf microelectronic components with low power consumption, without requiring specialized integrated circuits or signal processing units of high development cost. Experimental results are presented which demonstrate that, for MPP power levels of the TEG source in the range of 1–17 mW, the average deviation of the power produced by the proposed system from the MPP power of the TEG source is 1.87%.
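    The pre-programmed-locus idea can be sketched under the usual assumption that a TEG behaves as a Thevenin source (open-circuit voltage Voc, internal resistance R), so its MPP lies at V = Voc/2 and the MPP locus in the V-I plane is the line I = V/R. The controller and all numbers below are illustrative, not the paper's implementation.

```python
R_INT = 2.0  # assumed TEG internal resistance (ohms)

def teg_current(v, voc):
    """Current drawn from a Thevenin-model TEG held at terminal voltage v."""
    return max((voc - v) / R_INT, 0.0)

def locus_current(v):
    """Pre-programmed MPP locus: for the linear TEG model, maximum power
    transfer occurs where I = V / R_INT, so exactly that line is stored."""
    return v / R_INT

def track(voc, v=0.1, step=0.02, iters=500):
    """Steer the converter's input voltage until the measured operating
    point (v, I) falls on the stored locus."""
    for _ in range(iters):
        err = teg_current(v, voc) - locus_current(v)
        v += step if err > 0 else -step  # above the locus -> raise v
    return v

# for any source condition the tracker settles near Voc/2, the true MPP
v_op = track(voc=4.0)
```

No perturb-and-observe search is needed: the converter simply regulates onto the stored curve, which is the operational simplicity the record highlights.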

  1. Maximum Recoverable Gas from Hydrate Bearing Sediments by Depressurization

    KAUST Repository

    Terzariol, Marco

    2017-11-13

    The estimation of gas production rates from hydrate bearing sediments requires complex numerical simulations. This manuscript presents a set of simple and robust analytical solutions to estimate the maximum depressurization-driven recoverable gas. These limiting-equilibrium solutions are established when the dissociation front reaches steady-state conditions and ceases to expand further. The analytical solutions show the relevance of (1) the relative permeabilities between the hydrate-free sediment, the hydrate-bearing sediment, and the aquitard layers, and (2) the extent of depressurization in terms of the fluid pressures at the well, at the phase boundary, and in the far field. Closed-form solutions for the size of the produced zone allow for expeditious financial analyses; results highlight the need for innovative production strategies in order to make hydrate accumulations an economically viable energy resource. Horizontal directional drilling and multi-wellpoint seafloor dewatering installations may lead to advantageous production strategies in shallow seafloor reservoirs.

  2. Maximum Likelihood Blood Velocity Estimator Incorporating Properties of Flow Physics

    DEFF Research Database (Denmark)

    Schlaikjer, Malene; Jensen, Jørgen Arendt

    2004-01-01

    )-data under investigation. The flow physics properties are exploited in the second term, as the range of velocity values investigated in the cross-correlation analysis is compared to the velocity estimates in the temporal and spatial neighborhood of the signal segment under investigation. The new estimator...... has been compared to the cross-correlation (CC) estimator and the previously developed maximum likelihood estimator (MLE). The results show that the CMLE can handle a larger velocity search range and is capable of estimating even low velocity levels from tissue motion. The CC and the MLE produce...... for the CC and the MLE. When the velocity search range is set to twice the limit of the CC and the MLE, the number of incorrect velocity estimates is 0, 19.1, and 7.2% for the CMLE, CC, and MLE, respectively. The ability to handle a larger search range and to estimate low velocity levels was confirmed...
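
    A minimal sketch of the cross-correlation step that underlies such velocity estimators, with an invented signal and lag; the ultrasound-specific conversion from sample lag to blood velocity is omitted:

```python
import numpy as np

def xcorr_at(s1, s2, lag):
    """Correlation sum_i s1[i] * s2[i - lag] over the valid overlap."""
    n = len(s1)
    if lag >= 0:
        return float(np.dot(s1[lag:], s2[:n - lag]))
    return float(np.dot(s1[:n + lag], s2[-lag:]))

rng = np.random.default_rng(0)
n, true_lag = 256, 7
base = rng.standard_normal(n + true_lag)
sig1 = base[:n]
sig2 = base[true_lag:true_lag + n]       # same signal shifted by true_lag samples

lags = np.arange(-20, 21)                # velocity search range in samples
cc = [xcorr_at(sig1, sig2, int(l)) for l in lags]
est_lag = int(lags[int(np.argmax(cc))])  # lag of the correlation peak
```

    Constraining `lags` using neighboring estimates, as the CMLE does, is what lets the search range grow without admitting spurious correlation peaks.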

  3. Maximum permissible body burdens and maximum permissible concentrations of radionuclides in air and in water for occupational exposure. Recommendations of the National Committee on Radiation Protection. Handbook 69

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1959-06-05

    The present Handbook and its predecessors stem from the Second International Congress of Radiology, held in Stockholm in 1928. At that time, under the auspices of the Congress, the International Commission on Radiological Protection (ICRP) was organized to deal initially with problems of X-ray protection and later with radioactivity protection. At that time 'permissible' doses of X-rays were estimated primarily in terms of exposures which produced erythema, the amount of exposure which would produce a defined reddening of the skin. Obviously a critical problem in establishing criteria for radiation protection was one of developing useful standards and techniques of physical measurement. For this reason two of the organizations in this country with a major concern for X-ray protection, the American Roentgen Ray Society and the Radiological Society of North America, suggested that the National Bureau of Standards assume responsibility for organizing representative experts to deal with the problem. Accordingly, early in 1929, an Advisory Committee on X-ray and Radium Protection was organized to develop recommendations on the protection problem within the United States and to formulate United States points of view for presentation to the International Commission on Radiological Protection. The organization of the U.S. Advisory Committee included experts from both the medical and physical science fields. The recommendations of this Handbook take into consideration the NCRP statement entitled 'Maximum Permissible Radiation Exposures to Man', published as an addendum to Handbook 59 on April 15, 1958. As noted above this study was carried out jointly by the ICRP and the NCRP, and the complete report is more extensive than the material contained in this Handbook.

  4. Maximum permissible body burdens and maximum permissible concentrations of radionuclides in air and in water for occupational exposure. Recommendations of the National Committee on Radiation Protection. Handbook 69

    International Nuclear Information System (INIS)

    1959-01-01

    The present Handbook and its predecessors stem from the Second International Congress of Radiology, held in Stockholm in 1928. At that time, under the auspices of the Congress, the International Commission on Radiological Protection (ICRP) was organized to deal initially with problems of X-ray protection and later with radioactivity protection. At that time 'permissible' doses of X-rays were estimated primarily in terms of exposures which produced erythema, the amount of exposure which would produce a defined reddening of the skin. Obviously a critical problem in establishing criteria for radiation protection was one of developing useful standards and techniques of physical measurement. For this reason two of the organizations in this country with a major concern for X-ray protection, the American Roentgen Ray Society and the Radiological Society of North America, suggested that the National Bureau of Standards assume responsibility for organizing representative experts to deal with the problem. Accordingly, early in 1929, an Advisory Committee on X-ray and Radium Protection was organized to develop recommendations on the protection problem within the United States and to formulate United States points of view for presentation to the International Commission on Radiological Protection. The organization of the U.S. Advisory Committee included experts from both the medical and physical science fields. The recommendations of this Handbook take into consideration the NCRP statement entitled 'Maximum Permissible Radiation Exposures to Man', published as an addendum to Handbook 59 on April 15, 1958. As noted above this study was carried out jointly by the ICRP and the NCRP, and the complete report is more extensive than the material contained in this Handbook.

  5. Maximum Likelihood and Bayes Estimation in Randomly Censored Geometric Distribution

    Directory of Open Access Journals (Sweden)

    Hare Krishna

    2017-01-01

    In this article, we study the geometric distribution under randomly censored data. Maximum likelihood estimators and confidence intervals based on the Fisher information matrix are derived for the unknown parameters with randomly censored data. Bayes estimators are also developed using beta priors under generalized entropy and LINEX loss functions. Also, Bayesian credible and highest posterior density (HPD) credible intervals are obtained for the parameters. Expected time on test and reliability characteristics are also analyzed in this article. To compare the various estimates developed in the article, a Monte Carlo simulation study is carried out. Finally, for illustration purposes, a randomly censored real data set is discussed.
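
    A hedged illustration of the maximum-likelihood part only (not the article's Bayes analysis): for the geometric pmf P(X = x) = p(1-p)^(x-1), x = 1, 2, ..., an observed failure at x contributes p(1-p)^(x-1) to the likelihood and an observation censored at c contributes (1-p)^c. With d failures and T the sum of all recorded values, the log-likelihood is d·log(p) + (T - d)·log(1-p), giving the closed-form MLE p̂ = d/T. The data below are invented:

```python
import numpy as np

times = np.array([3, 1, 5, 2, 4, 6, 2, 3])      # recorded values (assumed data)
censored = np.array([0, 0, 1, 0, 1, 0, 0, 0])   # 1 = censored at that value

d = int((censored == 0).sum())   # number of observed failures
T = int(times.sum())             # total of all recorded values
p_hat = d / T                    # closed-form MLE

# sanity check: a grid search over the log-likelihood agrees with the formula
grid = np.linspace(0.01, 0.99, 9801)
loglik = d * np.log(grid) + (T - d) * np.log(1 - grid)
p_grid = float(grid[int(np.argmax(loglik))])
```
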

  6. A maximum entropy reconstruction technique for tomographic particle image velocimetry

    International Nuclear Information System (INIS)

    Bilsky, A V; Lozhkin, V A; Markovich, D M; Tokarev, M P

    2013-01-01

    This paper studies a novel approach for reducing tomographic PIV computational complexity. The proposed approach is an algebraic reconstruction technique, termed MENT (maximum entropy). This technique computes the three-dimensional light intensity distribution several times faster than SMART, using at least ten times less memory. Additionally, the reconstruction quality remains nearly the same as with SMART. This paper presents the theoretical computation performance comparison for MENT, SMART and MART, followed by validation using synthetic particle images. Both the theoretical assessment and the validation on synthetic images demonstrate a significant reduction in computation time. The data processing accuracy of MENT was compared to that of SMART in a slot jet experiment. A comparison of the average velocity profiles shows a high level of agreement between the results obtained with MENT and those obtained with SMART. (paper)
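
    A toy multiplicative algebraic reconstruction loop in the spirit of MART-style updates, whose fixed points relate to maximum-entropy solutions; the 2×2 "tomographic" system below is invented, and this is not the authors' MENT implementation:

```python
import numpy as np

# Projection system A @ x = b with a known positive solution.
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])
x_true = np.array([2.0, 3.0])
b = A @ x_true

x = np.ones(2)                       # positive initial guess
for _ in range(200):                 # sweep the projection rows repeatedly
    for i in range(len(b)):
        proj = A[i] @ x
        # multiplicative row-by-row correction toward the measured projection
        x *= (b[i] / proj) ** A[i]
```

    For a consistent, fully determined system like this one the iteration converges to the unique positive solution; in real tomographic PIV the system is underdetermined, which is where the entropy criterion matters.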

  7. 20 CFR 10.806 - How are the maximum fees defined?

    Science.gov (United States)

    2010-04-01

    § 10.806 How are the maximum fees defined? For professional medical services, the Director shall maintain a schedule of maximum allowable fees.../Current Procedural Terminology (HCPCS/CPT) code which represents the relative skill, effort, risk and time...

  8. Evaluation of a multiplex real-time PCR method for detecting shiga toxin-producing Escherichia coli in beef and comparison to the U.S. Department of Agriculture Food Safety and Inspection Service Microbiology laboratory guidebook method.

    Science.gov (United States)

    Fratamico, Pina M; Wasilenko, Jamie L; Garman, Bradley; Demarco, Daniel R; Varkey, Stephen; Jensen, Mark; Rhoden, Kyle; Tice, George

    2014-02-01

    The "top-six" non-O157 Shiga toxin-producing Escherichia coli (STEC) serogroups (O26, O45, O103, O111, O121, and O145) most frequently associated with outbreaks and cases of foodborne illnesses have been declared as adulterants in beef by the U.S. Department of Agriculture Food Safety and Inspection Service (FSIS). Regulatory testing in beef began in June 2012. The purpose of this study was to evaluate the DuPont BAX System method for detecting these top six STEC strains and strains of E. coli O157:H7. For STEC, the BAX System real-time STEC suite was evaluated, including a screening assay for the stx and eae virulence genes and two panel assays to identify the target serogroups: panel 1 detects O26, O111, and O121, and panel 2 detects O45, O103, and O145. For E. coli O157:H7, the BAX System real-time PCR assay for this specific serotype was used. Sensitivity of each assay for the PCR targets was ≥1.23 × 10³ CFU/ml in pure culture. Each assay was 100% inclusive for the strains tested (20 to 50 per assay), and no cross-reactivity with closely related strains was observed in any of the assays. The performance of the BAX System methods was compared with that of the FSIS Microbiology Laboratory Guidebook (MLG) methods for detection of the top six STEC and E. coli O157:H7 strains in ground beef and beef trim. Generally, results of the BAX System method were similar to those of the MLG methods for detecting non-O157 STEC and E. coli O157:H7. Reducing or eliminating novobiocin in modified tryptic soy broth (mTSB) may improve the detection of STEC O111 strains; one beef trim sample inoculated with STEC O111 produced a negative result when enriched in mTSB with 8 mg/liter novobiocin but was positive when enriched in mTSB without novobiocin. The results of this study indicate the feasibility of deploying a panel of real-time PCR assay configurations for the detection and monitoring of the top six STEC and E. coli O157:H7 strains in beef. The approach could easily be adapted

  9. Maximum mass of magnetic white dwarfs

    International Nuclear Information System (INIS)

    Paret, Daryel Manreza; Horvath, Jorge Ernesto; Martínez, Aurora Perez

    2015-01-01

    We revisit the problem of the maximum masses of magnetized white dwarfs (WDs). The impact of a strong magnetic field on the structure equations is addressed. The pressures become anisotropic due to the presence of the magnetic field and split into parallel and perpendicular components. We first construct stable solutions of the Tolman-Oppenheimer-Volkoff equations for parallel pressures and find that physical solutions vanish for the perpendicular pressure when B ≳ 10¹³ G. This fact establishes an upper bound for a magnetic field and the stability of the configurations in the (quasi) spherical approximation. Our findings also indicate that it is not possible to obtain stable magnetized WDs with super-Chandrasekhar masses because the values of the magnetic field needed for them are higher than this bound. To proceed into the anisotropic regime, we can apply results for structure equations appropriate for a cylindrical metric with anisotropic pressures that were derived in our previous work. From the solutions of the structure equations in cylindrical symmetry we have confirmed the same bound for B ∼ 10¹³ G, since beyond this value no physical solutions are possible. Our tentative conclusion is that massive WDs with masses well beyond the Chandrasekhar limit do not constitute stable solutions and should not exist. (paper)
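
    For numerical context, the scale of the non-magnetic Chandrasekhar limit follows from a standard textbook approximation (not a result of this paper):

```python
# M_Ch ≈ 5.83 / mu_e^2 solar masses, with mu_e the mean molecular
# weight per electron (mu_e = 2 for carbon/oxygen white-dwarf matter).
mu_e = 2.0
m_ch = 5.83 / mu_e**2   # in solar masses
```

    This gives roughly 1.46 solar masses, the benchmark against which "super-Chandrasekhar" magnetized configurations are judged.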

  10. Mammographic image restoration using maximum entropy deconvolution

    International Nuclear Information System (INIS)

    Jannetta, A; Jackson, J C; Kotre, C J; Birch, I P; Robson, K J; Padgett, R

    2004-01-01

    An image restoration approach based on a Bayesian maximum entropy method (MEM) has been applied to a radiological image deconvolution problem, that of reduction of geometric blurring in magnification mammography. The aim of the work is to demonstrate an improvement in image spatial resolution in realistic noisy radiological images with no associated penalty in terms of reduction in the signal-to-noise ratio perceived by the observer. Images of the TORMAM mammographic image quality phantom were recorded using the standard magnification settings of 1.8 magnification/fine focus and also at 1.8 magnification/broad focus and 3.0 magnification/fine focus; the latter two arrangements would normally give rise to unacceptable geometric blurring. Measured point-spread functions were used in conjunction with the MEM image processing to de-blur these images. The results are presented as comparative images of phantom test features and as observer scores for the raw and processed images. Visualization of high resolution features and the total image scores for the test phantom were improved by the application of the MEM processing. It is argued that this successful demonstration of image de-blurring in noisy radiological images offers the possibility of weakening the link between focal spot size and geometric blurring in radiology, thus opening up new approaches to system optimization
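
    MEM deconvolution itself requires an entropy prior and a noise model; as a simpler stand-in that illustrates the same idea of iterative deblurring with a measured point-spread function, here is a 1-D Richardson-Lucy loop (a different, classical algorithm, named plainly as such). The signal and Gaussian PSF are invented:

```python
import numpy as np

def conv_same(a, k):
    return np.convolve(a, k, mode="same")

n = 64
truth = np.zeros(n)
truth[20], truth[40] = 1.0, 0.5           # two point-like features

x = np.arange(-6, 7)
psf = np.exp(-x**2 / 4.0)
psf /= psf.sum()                          # normalized blur kernel

observed = conv_same(truth, psf)          # geometric blurring (noise-free here)

estimate = np.full(n, observed.mean())    # flat positive starting estimate
for _ in range(100):
    ratio = observed / np.maximum(conv_same(estimate, psf), 1e-12)
    estimate *= conv_same(ratio, psf[::-1])   # multiplicative RL update

err_before = np.abs(observed - truth).sum()
err_after = np.abs(estimate - truth).sum()
```

    The multiplicative update keeps the estimate non-negative, one of the properties that also makes MEM attractive for radiological images.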

  11. Maximum Margin Clustering of Hyperspectral Data

    Science.gov (United States)

    Niazmardi, S.; Safari, A.; Homayouni, S.

    2013-09-01

    In recent decades, large-margin methods such as Support Vector Machines (SVMs) have been considered the state of the art among supervised learning methods for classification of hyperspectral data. However, the results of these algorithms depend heavily on the quality and quantity of available training data. To overcome the problems associated with the training data, researchers have put effort into extending the capability of large-margin algorithms to unsupervised learning. One recently proposed algorithm is Maximum Margin Clustering (MMC). MMC is an unsupervised SVM algorithm that simultaneously estimates both the labels and the hyperplane parameters. Nevertheless, the optimization of the MMC objective is a non-convex problem. Most existing MMC methods rely on reformulating and relaxing the non-convex optimization problem as semi-definite programs (SDPs), which are computationally very expensive and can handle only small data sets. Moreover, most of these algorithms address only two-class classification and therefore cannot be used directly for classification of remotely sensed data. In this paper, a new MMC algorithm is used that solves the original non-convex problem using an alternating optimization method. The algorithm is also extended to multi-class classification and its performance is evaluated. The results show that the algorithm produces acceptable clusterings of hyperspectral data.
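
    A toy sketch of the alternating-optimization idea behind MMC: alternate between (1) fitting a linear decision function to the current labels and (2) relabeling points by its sign, with a balance check that rules out the trivial single-cluster solution. A least-squares fit stands in for a full SVM solver, and the 2-D blobs are synthetic, so this is only the shape of the algorithm, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(1)
blob1 = rng.normal([-3.0, 0.0], 0.5, size=(30, 2))
blob2 = rng.normal([3.0, 0.0], 0.5, size=(30, 2))
X = np.vstack([blob1, blob2])
Xa = np.hstack([X, np.ones((len(X), 1))])        # augment with a bias column

# initialise labels from the first principal direction of the data
Xc = X - X.mean(axis=0)
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
y = np.sign(Xc @ vt[0])

for _ in range(10):
    w, *_ = np.linalg.lstsq(Xa, y, rcond=None)   # fit decision function to labels
    y_new = np.sign(Xa @ w)                      # relabel by the fitted hyperplane
    if abs(y_new.sum()) == len(y_new):           # degenerate all-one-cluster labeling
        break
    y = y_new
```
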

  12. Paving the road to maximum productivity.

    Science.gov (United States)

    Holland, C

    1998-01-01

    "Job security" is an oxymoron in today's environment of downsizing, mergers, and acquisitions. Workers find themselves living by new rules in the workplace that they may not understand. How do we cope? It is the leader's charge to take advantage of this chaos and create conditions under which his or her people can understand the need for change and come together with a shared purpose to effect that change. The clinical laboratory at Arkansas Children's Hospital has taken advantage of this chaos to downsize and to redesign how the work gets done to pave the road to maximum productivity. After initial hourly cutbacks, the workers accepted the cold, hard fact that they would never get their old world back. They set goals to proactively shape their new world through reorganizing, flexing staff with workload, creating a rapid response laboratory, exploiting information technology, and outsourcing. Today the laboratory is a lean, productive machine that accepts change as a way of life. We have learned to adapt, trust, and support each other as we have journeyed together over the rough roads. We are looking forward to paving a new fork in the road to the future.

  13. Maximum power flux of auroral kilometric radiation

    International Nuclear Information System (INIS)

    Benson, R.F.; Fainberg, J.

    1991-01-01

    The maximum auroral kilometric radiation (AKR) power flux observed by distant satellites has been increased by more than a factor of 10 from previously reported values. This increase has been achieved by a new data selection criterion and a new analysis of antenna spin modulated signals received by the radio astronomy instrument on ISEE 3. The method relies on selecting AKR events containing signals in the highest-frequency channel (1980 kHz), followed by a careful analysis that effectively increased the instrumental dynamic range by more than 20 dB by making use of the spacecraft antenna gain diagram during a spacecraft rotation. This analysis has allowed the separation of real signals from those created in the receiver by overloading. Many signals having the appearance of AKR harmonic signals were shown to be of spurious origin. During one event, however, real second harmonic AKR signals were detected even though the spacecraft was at a great distance (17 R_E) from Earth. During another event, when the spacecraft was at the orbital distance of the Moon and on the morning side of Earth, the power flux of fundamental AKR was greater than 3 × 10⁻¹³ W m⁻² Hz⁻¹ at 360 kHz normalized to a radial distance r of 25 R_E assuming the power falls off as r⁻². A comparison of these intense signal levels with the most intense source region values (obtained by ISIS 1 and Viking) suggests that multiple sources were observed by ISEE 3
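
    The inverse-square normalization used above is a one-line computation: a flux F_d measured at distance d (in Earth radii) corresponds to F_25 = F_d · (d/25)² at r = 25 R_E. The measured values below are invented for illustration (lunar orbit is roughly 60 R_E):

```python
d_re = 60.0                   # measurement distance in Earth radii (assumed)
f_measured = 5.2e-14          # W m^-2 Hz^-1 at d_re (invented value)
f_at_25 = f_measured * (d_re / 25.0) ** 2   # flux scaled back to r = 25 R_E
```
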

  14. Ancestral Sequence Reconstruction with Maximum Parsimony.

    Science.gov (United States)

    Herbst, Lina; Fischer, Mareike

    2017-12-01

    One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference and for ancestral sequence inference is Maximum Parsimony (MP). In this manuscript, we focus on this method and on ancestral state inference for fully bifurcating trees. In particular, we investigate a conjecture published by Charleston and Steel in 1995 concerning the number of species which need to have a particular state, say a, at a particular site in order for MP to unambiguously return a as an estimate for the state of the last common ancestor. We prove the conjecture for all even numbers of character states, which is the most relevant case in biology. We also show that the conjecture does not hold in general for odd numbers of character states, but also present some positive results for this case.
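
    The bottom-up pass of Fitch's small-parsimony algorithm, which underlies this style of MP ancestral state inference on a fixed bifurcating tree, can be sketched as follows; the tree and leaf states are invented. When the root's candidate set is a singleton, MP returns that state unambiguously, which is the situation the Charleston-Steel conjecture concerns:

```python
def fitch(node, states):
    """node: leaf name or (left, right) tuple.
    Returns (candidate state set, minimum number of changes in the subtree)."""
    if isinstance(node, str):
        return {states[node]}, 0
    (s1, c1), (s2, c2) = fitch(node[0], states), fitch(node[1], states)
    inter = s1 & s2
    if inter:                       # children agree: intersect, no extra change
        return inter, c1 + c2
    return s1 | s2, c1 + c2 + 1     # children disagree: union, one change

tree = (("A", "B"), ("C", "D"))
leaf_states = {"A": "a", "B": "a", "C": "b", "D": "a"}
root_set, score = fitch(tree, leaf_states)   # three of four leaves carry 'a'
```
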

  15. 49 CFR 230.24 - Maximum allowable stress.

    Science.gov (United States)

    2010-10-01

    § 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...

  16. 20 CFR 226.52 - Total annuity subject to maximum.

    Science.gov (United States)

    2010-04-01

    § 226.52 Total annuity subject to maximum. The total annuity amount which is compared to the maximum monthly amount to...

  17. Half-width at half-maximum, full-width at half-maximum analysis

    Indian Academy of Sciences (India)

    addition to the well-defined parameter full-width at half-maximum (FWHM). The distribution of ... optical side-lobes in the diffraction pattern resulting in steep central maxima [6], reduction of effects of ... and broad central peak. The idea of.

  18. Individual Module Maximum Power Point Tracking for a Thermoelectric Generator Systems

    DEFF Research Database (Denmark)

    Vadstrup, Casper; Chen, Min; Schaltz, Erik

    Thermo Electric Generator (TEG) modules are often connected in series and/or parallel in order to match the TEG system voltage with the load voltage. However, in order to be able to control the power production of the TEG system a DC/DC converter is inserted between the TEG system...... and the load. The DC/DC converter is under the control of a Maximum Power Point Tracker (MPPT) which ensures that the TEG system produces the maximum possible power to the load. However, if the conditions, e.g. temperature, health, etc., of the TEG modules are different each TEG module will not produce its......

  19. Maximum/minimum asymmetric rod detection

    International Nuclear Information System (INIS)

    Huston, J.T.

    1990-01-01

    This patent describes a system for determining the relative position of each control rod within a control rod group in a nuclear reactor, the control rod group having at least three control rods therein. It comprises: means for producing a signal representative of a position of each control rod within the control rod group in the nuclear reactor; means for establishing a signal representative of the highest position of a control rod in the control rod group; means for establishing a signal representative of the lowest position of a control rod in the control rod group; means for determining a difference between the signal representative of the position of the highest control rod and the signal representative of the position of the lowest control rod; means for establishing a predetermined limit for that difference; and means for comparing the difference with the predetermined limit, the comparing means producing an output signal when the difference exceeds the predetermined limit.
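
    Stripped of the patent phrasing, the claimed logic reduces to a max/min comparison; the rod positions and limit below are arbitrary illustrative values:

```python
def asymmetry_alarm(positions, limit):
    """Return (spread, alarm): spread between the highest and lowest rod
    positions in the group, and whether it exceeds the predetermined limit."""
    spread = max(positions) - min(positions)
    return spread, spread > limit

# three rods in one group, positions in arbitrary step units (invented)
spread, alarm = asymmetry_alarm([120, 118, 97], limit=12)
```
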

  20. Reconstructing phylogenetic networks using maximum parsimony.

    Science.gov (United States)

    Nakhleh, Luay; Jin, Guohua; Zhao, Fengmei; Mellor-Crummey, John

    2005-01-01

    Phylogenies - the evolutionary histories of groups of organisms - are one of the most widely used tools throughout the life sciences, as well as objects of research within systematics, evolutionary biology, epidemiology, etc. Almost every tool devised to date to reconstruct phylogenies produces trees; yet it is widely understood and accepted that trees oversimplify the evolutionary histories of many groups of organisms, most prominently bacteria (because of horizontal gene transfer) and plants (because of hybrid speciation). Various methods and criteria have been introduced for phylogenetic tree reconstruction. Parsimony is one of the most widely used and studied criteria, and various accurate and efficient heuristics for reconstructing trees based on parsimony have been devised. Jotun Hein suggested a straightforward extension of the parsimony criterion to phylogenetic networks. In this paper we formalize this concept, and provide the first experimental study of the quality of parsimony as a criterion for constructing and evaluating phylogenetic networks. Our results show that, when extended to phylogenetic networks, the parsimony criterion produces promising results. In a great majority of the cases in our experiments, the parsimony criterion accurately predicts the numbers and placements of non-tree events.

  1. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    Science.gov (United States)

    Hall, Alex; Taylor, Andy

    2017-06-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.

  2. Catastrophic Disruption Threshold and Maximum Deflection from Kinetic Impact

    Science.gov (United States)

    Cheng, A. F.

    2017-12-01

    The use of a kinetic impactor to deflect an asteroid on a collision course with Earth was described in the NASA Near-Earth Object Survey and Deflection Analysis of Alternatives (2007) as the most mature approach for asteroid deflection and mitigation. The NASA DART mission will demonstrate asteroid deflection by kinetic impact at the Potentially Hazardous Asteroid 65803 Didymos in October 2022. The kinetic impactor approach is considered applicable with warning times of 10 years or more and with hazardous asteroid diameters of 400 m or less. In principle, a larger kinetic impactor bringing greater kinetic energy could cause a larger deflection, but input of excessive kinetic energy will cause catastrophic disruption of the target, possibly leaving large fragments still on a collision course with Earth. Thus the catastrophic disruption threshold limits the maximum deflection achievable by a kinetic impactor. An often-cited rule of thumb states that the maximum deflection is 0.1 times the escape velocity before the target will be disrupted. It turns out this rule of thumb does not work well. A comparison to numerical simulation results shows that a similar rule applies in the gravity limit, for large targets of more than 300 m, where the maximum deflection is roughly the escape velocity at momentum enhancement factor β = 2. In the gravity limit, the rule of thumb corresponds to pure momentum coupling (μ = 1/3), but simulations find a slightly different scaling, μ = 0.43. In the smaller target size range to which kinetic impactors would apply, the catastrophic disruption limit is strength-controlled. Unless the target is unusually weak, a DART-like impactor will not disrupt any target asteroid down to sizes significantly smaller than the 50 m below which a hazardous object would not penetrate the atmosphere in any case.
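
    The basic momentum-transfer estimate is dv = β·m·U/M, which can be checked against the rule-of-thumb ceiling of ~0.1·v_esc discussed above. All impactor and target numbers below are assumed, DART-like in scale but not official mission values:

```python
import math

G = 6.674e-11                  # gravitational constant, SI
beta = 2.0                     # momentum enhancement factor (assumed)
m_imp, u_imp = 500.0, 6000.0   # impactor mass (kg) and speed (m/s), assumed
diameter, rho = 160.0, 2000.0  # target diameter (m) and density (kg/m^3), assumed

r = diameter / 2.0
M = rho * (4.0 / 3.0) * math.pi * r**3       # target mass for a uniform sphere
dv = beta * m_imp * u_imp / M                # deflection delta-v, m/s
v_esc = math.sqrt(2.0 * G * M / r)           # target escape velocity, m/s
```

    With these numbers dv is about 1.4 mm/s, well below 0.1·v_esc, so an impact of this scale sits comfortably under the rule-of-thumb disruption ceiling.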

  3. A maximum likelihood framework for protein design

    Directory of Open Access Journals (Sweden)

    Philippe Hervé

    2006-06-01

    Background: The aim of protein design is to predict amino-acid sequences compatible with a given target structure. Traditionally envisioned as a purely thermodynamic question, this problem can also be understood in a wider context, where additional constraints are captured by learning the sequence patterns displayed by natural proteins of known conformation. In this latter perspective, however, we still need a theoretical formalization of the question, leading to general and efficient learning methods, and allowing for the selection of fast and accurate objective functions quantifying sequence/structure compatibility. Results: We propose a formulation of the protein design problem in terms of model-based statistical inference. Our framework uses the maximum likelihood principle to optimize the unknown parameters of a statistical potential, which we call an inverse potential to contrast with classical potentials used for structure prediction. We propose an implementation based on Markov chain Monte Carlo, in which the likelihood is maximized by gradient descent and is numerically estimated by thermodynamic integration. The fit of the models is evaluated by cross-validation. We apply this to a simple pairwise contact potential, supplemented with a solvent-accessibility term, and show that the resulting models have a better predictive power than currently available pairwise potentials. Furthermore, the model comparison method presented here allows one to measure the relative contribution of each component of the potential, and to choose the optimal number of accessibility classes, which turns out to be much higher than classically considered. Conclusion: Altogether, this reformulation makes it possible to test a wide diversity of models, using different forms of potentials, or accounting for other factors than just the constraint of thermodynamic stability. Ultimately, such model-based statistical analyses may help to understand the forces

  4. Adjusting for under-identification of Aboriginal and/or Torres Strait Islander births in time series produced from birth records: Using record linkage of survey data and administrative data sources

    Directory of Open Access Journals (Sweden)

    Lawrence David

    2012-07-01

    Background: Statistical time series derived from administrative data sets form key indicators in measuring progress in addressing disadvantage in Aboriginal and Torres Strait Islander populations in Australia. However, inconsistencies in the reporting of Indigenous status can cause difficulties in producing reliable indicators. External data sources, such as survey data, provide a means of assessing the consistency of administrative data and may be used to adjust statistics based on administrative data sources. Methods: We used record linkage between a large-scale survey (the Western Australian Aboriginal Child Health Survey) and two administrative data sources (the Western Australia (WA) Register of Births and the WA Midwives' Notification System) to compare the degree of consistency in determining the Indigenous status of children between the two sources. We then used a logistic regression model predicting the probability of consistency between the two sources to estimate the probability of each record on the two administrative data sources being identified as being of Aboriginal and/or Torres Strait Islander origin in a survey. By summing these probabilities we produced model-adjusted time series of neonatal outcomes for Aboriginal and/or Torres Strait Islander births. Results: Compared to survey data, information based only on the two administrative data sources identified substantially fewer Aboriginal and/or Torres Strait Islander births. However, these births were not randomly distributed. Births of children identified as being of Aboriginal and/or Torres Strait Islander origin in the survey only were more likely to be living in urban areas, in less disadvantaged areas, and to have only one parent who identifies as being of Aboriginal and/or Torres Strait Islander origin, particularly the father. They were also more likely to have better health and wellbeing outcomes. Applying an adjustment model based on the linked survey data increased
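
    The adjustment step itself is simple once a model is fitted: each administrative record contributes its predicted probability of being identified in the survey, and the adjusted count is the sum of those probabilities. The coefficients and covariates below are invented, not the paper's fitted model:

```python
import math

def prob_identified(intercept, coefs, covariates):
    """Logistic model: probability a record would be identified in the survey."""
    z = intercept + sum(b * x for b, x in zip(coefs, covariates))
    return 1.0 / (1.0 + math.exp(-z))

# hypothetical records as (urban flag, disadvantage index) covariate pairs
records = [(1, 0.2), (0, 1.5), (1, 1.0), (0, 0.1)]
intercept, coefs = 0.5, (-0.8, 0.6)   # invented coefficients

adjusted_count = sum(prob_identified(intercept, coefs, r) for r in records)
```
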

  5. [Evolutionary process unveiled by the maximum genetic diversity hypothesis].

    Science.gov (United States)

    Huang, Yi-Min; Xia, Meng-Ying; Huang, Shi

    2013-05-01

As the two most popular theories advanced to explain evolutionary facts, the neutral theory and Neo-Darwinism, despite their proven virtues in certain areas, still fail to offer comprehensive explanations of such fundamental evolutionary phenomena as the genetic equidistance result, abundant overlap sites, the increase in complexity over time, the incomplete understanding of genetic diversity, and inconsistencies with fossil and archaeological records. The maximum genetic diversity hypothesis (MGD), however, constructs a more complete evolutionary genetics theory that incorporates the proven virtues of existing theories and adds to them the novel concept of a maximum or optimum limit on genetic distance or diversity. It has yet to meet a contradiction, and it explains for the first time the half-century-old genetic equidistance phenomenon as well as most other major evolutionary facts. It provides practical and quantitative ways of studying complexity. Molecular interpretation using MGD-based methods reveals novel insights into the origins of humans and other primates that are consistent with fossil evidence and common sense, and re-establishes the important role of China in the evolution of humans. MGD theory has also uncovered an important genetic mechanism in the construction of complex traits and the pathogenesis of complex diseases. Here we make a series of sequence comparisons among yeasts, fishes and primates to illustrate the concept of a limit on genetic distance. The idea of a limit or optimum is in line with the yin-yang paradigm in the traditional Chinese view of the universal creative law in nature.

  6. Brookhaven Linac Isotope Producer

    Data.gov (United States)

Federal Laboratory Consortium — The Brookhaven Linac Isotope Producer (BLIP)—positioned at the forefront of research into radioisotopes used in cancer treatment and diagnosis—produces commercially...

  7. Considerations on the establishment of maximum permissible exposure of man

    International Nuclear Information System (INIS)

    Jacobi, W.

    1974-01-01

An attempt is made in this lecture to give a quantitative analysis of the somatic radiation risk and to illustrate a concept for fixing dose limiting values. Of primary importance is the limiting value of the radiation exposure of the whole population. By consistent application of the risk concept, the following points are considered: 1) definition of the risk of late radiation damage (cancer, leukemia); 2) the relationship between radiation dose and the radiation risk thus caused; 3) radiation risk and the present dose limiting values; 4) criteria for the maximum acceptable radiation risk; 5) the limiting value which can be expected at this time. (HP/LH) [de

  8. Effects of lag and maximum growth in contaminant transport and biodegradation modeling

    International Nuclear Information System (INIS)

    Wood, B.D.; Dawson, C.N.

    1992-06-01

    The effects of time lag and maximum microbial growth on biodegradation in contaminant transport are discussed. A mathematical model is formulated that accounts for these effects, and a numerical case study is presented that demonstrates how lag influences biodegradation

  9. The Effects of Data Gaps on the Calculated Monthly Mean Maximum and Minimum Temperatures in the Continental United States: A Spatial and Temporal Study.

    Science.gov (United States)

    Stooksbury, David E.; Idso, Craig D.; Hubbard, Kenneth G.

    1999-05-01

Gaps in otherwise regularly scheduled observations are often referred to as missing data. This paper explores the spatial and temporal impacts that data gaps in the recorded daily maximum and minimum temperatures have on the calculated monthly mean maximum and minimum temperatures. For this analysis 138 climate stations from the United States Historical Climatology Network Daily Temperature and Precipitation Data set were selected. The selected stations had no missing maximum or minimum temperature values during the period 1951-80. The monthly mean maximum and minimum temperatures were calculated for each station for each month. For each month, 1-10 consecutive days of data from each station were randomly removed. This was performed 30 times for each simulated gap period. The spatial and temporal impacts of the 1-10-day data gaps were compared. The influence of data gaps is most pronounced in the continental regions during the winter and least pronounced in the southeast during the summer. In the north central plains, 10-day data gaps during January produce a standard deviation value greater than 2°C about the 'true' mean. In the southeast, 10-day data gaps in July produce a standard deviation value less than 0.5°C about the mean. The results of this study will be of value in climate variability and climate trend research as well as climate assessment and impact studies.
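The gap-simulation procedure — delete a random run of consecutive days, recompute the monthly mean, repeat many times — can be sketched roughly as below, with a synthetic daily series standing in for the HCN station data:

```python
import random
import statistics

def gap_means(daily, gap_len, trials, rng):
    """Monthly means recomputed after deleting a random run of
    `gap_len` consecutive days from the daily series."""
    means = []
    for _ in range(trials):
        start = rng.randrange(len(daily) - gap_len + 1)
        kept = daily[:start] + daily[start + gap_len:]
        means.append(sum(kept) / len(kept))
    return means

rng = random.Random(42)
# Synthetic January daily maximum temperatures (deg C), illustrative only.
january = [rng.gauss(-5.0, 8.0) for _ in range(31)]
true_mean = sum(january) / len(january)

for k in (1, 5, 10):
    ms = gap_means(january, k, trials=30, rng=rng)
    spread = statistics.stdev(ms)
    print(f"{k:2d}-day gaps: stdev about true mean = {spread:.2f} C")
```

Longer gaps in a high-variance (winter-like) series produce a larger spread of recomputed means, which is the effect the paper maps spatially and seasonally.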

  10. Probable Maximum Earthquake Magnitudes for the Cascadia Subduction

    Science.gov (United States)

    Rong, Y.; Jackson, D. D.; Magistrale, H.; Goldfinger, C.

    2013-12-01

The concept of maximum earthquake magnitude (mx) is widely used in seismic hazard and risk analysis. However, absolute mx lacks a precise definition and cannot be determined from a finite earthquake history. The surprising magnitudes of the 2004 Sumatra and the 2011 Tohoku earthquakes showed that most methods for estimating mx underestimate the true maximum if it exists. Thus, we introduced the alternative concept of mp(T), the probable maximum magnitude within a time interval T. mp(T) can be estimated using theoretical magnitude-frequency distributions such as the tapered Gutenberg-Richter (TGR) distribution. The two TGR parameters, the β-value (which equals 2/3 of the b-value in the GR distribution) and the corner magnitude (mc), can be obtained by applying the maximum likelihood method to earthquake catalogs with an additional constraint from the tectonic moment rate. Here, we integrate the paleoseismic data in the Cascadia subduction zone to estimate mp. The Cascadia subduction zone has been seismically quiescent since at least 1900. Fortunately, turbidite studies have unearthed a 10,000-year record of great earthquakes along the subduction zone. We thoroughly investigate the earthquake magnitude-frequency distribution of the region by combining instrumental and paleoseismic data, and using the tectonic moment rate information. To use the paleoseismic data, we first estimate event magnitudes, which we achieve by using the time interval between events, the rupture extent of the events, and turbidite thickness. We estimate three sets of TGR parameters: for the first two sets, we consider a geographically large Cascadia region that includes the subduction zone and the Explorer, Juan de Fuca, and Gorda plates; for the third set, we consider a narrow geographic region straddling the subduction zone. In the first set, the β-value is derived using the GCMT catalog. In the second and third sets, the β-value is derived using both the GCMT and paleoseismic data. Next, we calculate the corresponding mc
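Under a tapered Gutenberg-Richter distribution for seismic moment, the probable maximum magnitude over T years can be found as the magnitude whose expected exceedance count in T equals one. The sketch below shows that calculation; the rate, β, and corner magnitude are placeholder values for illustration, not the paper's Cascadia estimates:

```python
import math

def moment(m):
    """Seismic moment (N*m) from moment magnitude (Hanks-Kanamori)."""
    return 10.0 ** (1.5 * m + 9.05)

def tgr_survival(m, mt, beta, mc):
    """Tapered Gutenberg-Richter complementary CDF for moment,
    expressed in magnitude: P(M >= M(m)) given M >= M(mt)."""
    M, Mt, Mc = moment(m), moment(mt), moment(mc)
    return (Mt / M) ** beta * math.exp((Mt - M) / Mc)

def probable_max_magnitude(T, rate, mt, beta, mc):
    """Magnitude m_p at which the expected exceedance count
    rate * T * survival equals 1, found by bisection."""
    lo, hi = mt, 11.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if rate * T * tgr_survival(mid, mt, beta, mc) > 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Placeholder parameters: beta = 0.65, corner magnitude 8.8, and
# 0.02 events/yr above magnitude 7 (illustrative values only).
mp = probable_max_magnitude(T=500, rate=0.02, mt=7.0, beta=0.65, mc=8.8)
print(f"m_p(500 yr) ~ {mp:.2f}")
```

Because the exponential taper suppresses the rate above the corner magnitude, mp(T) grows only slowly with T, unlike a pure power-law GR extrapolation.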

  11. Forming Student Online Teams for Maximum Performance

    Science.gov (United States)

    Olson, Joel D.; Ringhand, Darlene G.; Kalinski, Ray C.; Ziegler, James G.

    2015-01-01

    What is the best way to assign graduate business students to online team-based projects? Team assignments are frequently made on the basis of alphabet, time zones or previous performance. This study reviews personality as an indicator of student online team performance. The personality assessment IDE (Insights Discovery Evaluator) was administered…

  12. Determination of the maximum-depth to potential field sources by a maximum structural index method

    Science.gov (United States)

    Fedi, M.; Florio, G.

    2013-01-01

A simple and fast determination of the limiting depth to the sources can be a significant help in data interpretation. To this end we explore the possibility of determining those source parameters shared by all the classes of models fitting the data. One approach is to determine the maximum depth-to-source compatible with the measured data, by using for example the well-known Bott-Smith rules. These rules involve only the knowledge of the field and its horizontal gradient maxima, and are independent of the density contrast. Thanks to the direct relationship between the structural index and the depth to sources, we work out a simple and fast strategy to obtain the maximum depth by using semi-automated methods, such as Euler deconvolution or the depth-from-extreme-points (DEXP) method. The proposed method consists of estimating the maximum depth as the one obtained for the highest allowable value of the structural index (Nmax). Nmax may be easily determined, since it depends only on the dimensionality of the problem (2D/3D) and on the nature of the analyzed field (e.g., gravity field or magnetic field). We tested our approach on synthetic models against the results obtained by the classical Bott-Smith formulas, and the results are in fact very similar, confirming the validity of this method. However, while the Bott-Smith formulas are restricted to the gravity field only, our method is applicable also to the magnetic field and to any derivative of the gravity and magnetic fields. Our method yields a useful criterion to assess the source model based on the (∂f/∂x)max/fmax ratio. The usefulness of the method in real cases is demonstrated for a salt wall in the Mississippi basin, where the estimation of the maximum depth agrees with the seismic information.

  13. MADmap: A Massively Parallel Maximum-Likelihood Cosmic Microwave Background Map-Maker

    Energy Technology Data Exchange (ETDEWEB)

    Cantalupo, Christopher; Borrill, Julian; Jaffe, Andrew; Kisner, Theodore; Stompor, Radoslaw

    2009-06-09

MADmap is a software application used to produce maximum-likelihood images of the sky from time-ordered data which include correlated noise, such as those gathered by Cosmic Microwave Background (CMB) experiments. It works efficiently on platforms ranging from small workstations to the most massively parallel supercomputers. Map-making is a critical step in the analysis of all CMB data sets, and the maximum-likelihood approach is the most accurate and widely applicable algorithm; however, it is a computationally challenging task. This challenge will only increase with the next generation of ground-based, balloon-borne and satellite CMB polarization experiments. The faintness of the B-mode signal that these experiments seek to measure requires them to gather enormous data sets. MADmap is already being run on up to O(10^11) time samples, O(10^8) pixels and O(10^4) cores, with ongoing work to scale to the next generation of data sets and supercomputers. We describe MADmap's algorithm, built around a preconditioned conjugate gradient solver, fast Fourier transforms and sparse matrix operations. We highlight MADmap's ability to address problems typically encountered in the analysis of realistic CMB data sets and describe its application to simulations of the Planck and EBEX experiments. The massively parallel and distributed implementation is detailed and scaling complexities are given for the resources required. MADmap is capable of analysing the largest data sets now being collected on computing resources currently available, and we argue that, given Moore's Law, MADmap will be capable of reducing the most massive projected data sets.
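The heart of a maximum-likelihood map-maker is solving the normal equations (Aᵀ N⁻¹ A) x = Aᵀ N⁻¹ d with a preconditioned conjugate gradient. The toy sketch below shows the PCG iteration structure on a tiny dense SPD system with a Jacobi (diagonal) preconditioner; MADmap's actual implementation is FFT-based and massively parallel, so this is only an illustration of the algorithm's skeleton:

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def pcg(A, b, tol=1e-10, max_iter=200):
    """Jacobi-preconditioned conjugate gradient for SPD A."""
    n = len(b)
    M_inv = [1.0 / A[i][i] for i in range(n)]      # diagonal preconditioner
    x = [0.0] * n
    r = b[:]                                        # residual b - A x (x = 0)
    z = [mi * ri for mi, ri in zip(M_inv, r)]       # preconditioned residual
    p = z[:]
    rz = dot(r, z)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if dot(r, r) < tol:
            break
        z = [mi * ri for mi, ri in zip(M_inv, r)]
        rz_new = dot(r, z)
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

# Small SPD system standing in for the map-making normal equations.
A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = pcg(A, b)
print([round(v, 6) for v in x])
```

In the real problem A is the pointing matrix and N the time-domain noise covariance; each PCG iteration then costs one sparse projection, one FFT-based noise filter, and one sparse transpose projection instead of the dense products above.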

  14. Multi-level restricted maximum likelihood covariance estimation and kriging for large non-gridded spatial datasets

    KAUST Repository

    Castrillon, Julio; Genton, Marc G.; Yokota, Rio

    2015-01-01

    We develop a multi-level restricted Gaussian maximum likelihood method for estimating the covariance function parameters and computing the best unbiased predictor. Our approach produces a new set of multi-level contrasts where the deterministic

  15. Producing charcoal from wastes

    Energy Technology Data Exchange (ETDEWEB)

    Pogorelov, V.A.

    1983-01-01

Experimental works to use wood wastes for producing charcoal are examined, which are being conducted in the Sverdlovsk assembly and adjustment administration of Soyuzorglestekhmontazh. A wasteless prototype installation for producing fine charcoal and its subsequent briquetting is described, built from units which are series-produced by the factories of the country. The installation includes subassemblies for preparing and drying the raw material and for producing the charcoal briquettes. In the opinion of specialists, the charcoal produced from the wastes may be effectively used in ferrous and nonferrous metallurgy and in the production of pipes.

  16. Weighted Maximum-Clique Transversal Sets of Graphs

    OpenAIRE

    Chuan-Min Lee

    2011-01-01

    A maximum-clique transversal set of a graph G is a subset of vertices intersecting all maximum cliques of G. The maximum-clique transversal set problem is to find a maximum-clique transversal set of G of minimum cardinality. Motivated by the placement of transmitters for cellular telephones, Chang, Kloks, and Lee introduced the concept of maximum-clique transversal sets on graphs in 2001. In this paper, we study the weighted version of the maximum-clique transversal set problem for split grap...

  17. Pattern formation, logistics, and maximum path probability

    Science.gov (United States)

    Kirkaldy, J. S.

    1985-05-01

    The concept of pattern formation, which to current researchers is a synonym for self-organization, carries the connotation of deductive logic together with the process of spontaneous inference. Defining a pattern as an equivalence relation on a set of thermodynamic objects, we establish that a large class of irreversible pattern-forming systems, evolving along idealized quasisteady paths, approaches the stable steady state as a mapping upon the formal deductive imperatives of a propositional function calculus. In the preamble the classical reversible thermodynamics of composite systems is analyzed as an externally manipulated system of space partitioning and classification based on ideal enclosures and diaphragms. The diaphragms have discrete classification capabilities which are designated in relation to conserved quantities by descriptors such as impervious, diathermal, and adiabatic. Differentiability in the continuum thermodynamic calculus is invoked as equivalent to analyticity and consistency in the underlying class or sentential calculus. The seat of inference, however, rests with the thermodynamicist. In the transition to an irreversible pattern-forming system the defined nature of the composite reservoirs remains, but a given diaphragm is replaced by a pattern-forming system which by its nature is a spontaneously evolving volume partitioner and classifier of invariants. The seat of volition or inference for the classification system is thus transferred from the experimenter or theoretician to the diaphragm, and with it the full deductive facility. The equivalence relations or partitions associated with the emerging patterns may thus be associated with theorems of the natural pattern-forming calculus. The entropy function, together with its derivatives, is the vehicle which relates the logistics of reservoirs and diaphragms to the analog logistics of the continuum. Maximum path probability or second-order differentiability of the entropy in isolation are

  18. The Maximum Flux of Star-Forming Galaxies

    Science.gov (United States)

    Crocker, Roland M.; Krumholz, Mark R.; Thompson, Todd A.; Clutterbuck, Julie

    2018-04-01

The importance of radiation pressure feedback in galaxy formation has been extensively debated over the last decade. The regime of greatest uncertainty is in the most actively star-forming galaxies, where large dust columns can potentially produce a dust-reprocessed infrared radiation field with enough pressure to drive turbulence or eject material. Here we derive the conditions under which a self-gravitating, mixed gas-star disc can remain hydrostatic despite trapped radiation pressure. Consistently taking into account the self-gravity of the medium, the star- and dust-to-gas ratios, and the effects of turbulent motions not driven by radiation, we show that galaxies can achieve a maximum Eddington-limited star formation rate per unit area Σ̇_{*,crit} ~ 10^3 M_⊙ pc^{-2} Myr^{-1}, corresponding to a critical flux of F_{*,crit} ~ 10^{13} L_⊙ kpc^{-2}, similar to previous estimates; higher fluxes eject mass in bulk, halting further star formation. Conversely, we show that in galaxies below this limit, our one-dimensional models imply simple vertical hydrostatic equilibrium and that radiation pressure is ineffective at driving turbulence or ejecting matter. Because the vast majority of star-forming galaxies lie below the maximum limit for typical dust-to-gas ratios, we conclude that infrared radiation pressure is likely unimportant for all but the most extreme systems on galaxy-wide scales. Thus, while radiation pressure does not explain the Kennicutt-Schmidt relation, it does impose an upper truncation on it. Our predicted truncation is in good agreement with the highest observed gas and star formation rate surface densities found both locally and at high redshift.

  19. Maximum Path Information and Fokker Planck Equation

    Science.gov (United States)

    Li, Wei; Wang A., Q.; LeMehaute, A.

    2008-04-01

We present a rigorous method to derive the nonlinear Fokker-Planck (FP) equation of anomalous diffusion directly from a generalization of the principle of least action of Maupertuis proposed by Wang [Chaos, Solitons & Fractals 23 (2005) 1253] for smooth or quasi-smooth irregular dynamics evolving in a Markovian process. The FP equation obtained may take two different but equivalent forms. It was also found that the diffusion constant may depend on both q (the index of Tsallis entropy [J. Stat. Phys. 52 (1988) 479]) and the time t.

  20. Coronary ligation reduces maximum sustained swimming speed in Chinook salmon, Oncorhynchus tshawytscha

    DEFF Research Database (Denmark)

    Farrell, A P; Steffensen, J F

    1987-01-01

    The maximum aerobic swimming speed of Chinook salmon (Oncorhynchus tshawytscha) was measured before and after ligation of the coronary artery. Coronary artery ligation prevented blood flow to the compact layer of the ventricular myocardium, which represents 30% of the ventricular mass, and produced...

  1. Recurrence quantification analysis of extremes of maximum and minimum temperature patterns for different climate scenarios in the Mesochora catchment in Central-Western Greece

    Science.gov (United States)

    Panagoulia, Dionysia; Vlahogianni, Eleni I.

    2018-06-01

A methodological framework based on nonlinear recurrence analysis is proposed to examine the historical evolution of extremes of maximum and minimum daily mean areal temperature patterns over time under different climate scenarios. The methodology is based on both historical data and atmospheric General Circulation Model (GCM) produced climate scenarios for the periods 1961-2000 and 2061-2100, which correspond to 1 × CO2 and 2 × CO2 scenarios. Historical data were derived from the actual daily observations coupled with atmospheric circulation patterns (CPs). The dynamics of the temperature was reconstructed in phase space from the time series of temperatures. Statistical comparisons between the different temperature patterns were based on discriminating statistics obtained by Recurrence Quantification Analysis (RQA). Moreover, the bootstrap method of Schinkel et al. (2009) was adopted to calculate the confidence bounds of the RQA parameters based on a structure-preserving resampling. The overall methodology was applied to the mountainous Mesochora catchment in Central-Western Greece. The results reveal substantial similarities between the historical maximum and minimum daily mean areal temperature statistical patterns and their confidence bounds, as well as the maximum and minimum temperature patterns in evolution under the 2 × CO2 scenario. A significant variability and non-stationary behaviour characterizes all the climate series analyzed. Fundamental differences are produced between the historical and maximum 1 × CO2 scenarios, the maximum 1 × CO2 and minimum 1 × CO2 scenarios, as well as the confidence bounds for the two CO2 scenarios. The 2 × CO2 scenario reflects the strongest shifts in intensity, duration and frequency of temperature patterns. Such transitions can help scientists and policy makers to understand the effects of extreme temperature changes on water resources, economic development, and the health of ecosystems and hence to proceed to
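The recurrence-analysis step — embed the series in phase space, mark pairs of states closer than a threshold, and compute statistics such as the recurrence rate — can be sketched minimally as below; the series and parameters are illustrative, not the Mesochora data or the paper's RQA settings:

```python
import math

def embed(series, dim, delay):
    """Time-delay embedding of a scalar series into phase-space vectors."""
    n = len(series) - (dim - 1) * delay
    return [tuple(series[i + j * delay] for j in range(dim)) for i in range(n)]

def recurrence_rate(series, dim=2, delay=1, eps=0.5):
    """Fraction of phase-space point pairs within distance eps
    (the simplest RQA measure; determinism, laminarity, etc. are
    built from diagonal and vertical line structures of the same matrix)."""
    pts = embed(series, dim, delay)
    n = len(pts)
    hits = 0
    for i in range(n):
        for j in range(n):
            if math.dist(pts[i], pts[j]) <= eps:
                hits += 1
    return hits / (n * n)

# Illustrative daily temperature anomaly series (sine plus small trend).
series = [math.sin(0.3 * t) + 0.01 * t for t in range(200)]
print(f"recurrence rate = {recurrence_rate(series):.3f}")
```

Comparing such statistics between observed and scenario-driven series, with bootstrap confidence bounds, is the core of the paper's pattern comparison.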

  2. Maximum power analysis of photovoltaic module in Ramadi city

    Energy Technology Data Exchange (ETDEWEB)

    Shahatha Salim, Majid; Mohammed Najim, Jassim [College of Science, University of Anbar (Iraq); Mohammed Salih, Salih [Renewable Energy Research Center, University of Anbar (Iraq)

    2013-07-01

The performance of a photovoltaic (PV) module is greatly dependent on the solar irradiance, operating temperature, and shading. Solar irradiance can have a significant impact on the power output of a PV module and its energy yield. In this paper, the maximum PV power which can be obtained in Ramadi city (100 km west of Baghdad) is analyzed in practice. The analysis is based on real irradiance values obtained for the first time by using a Soly2 sun tracker device. Proper and adequate information on solar radiation and its components at a given location is very essential in the design of solar energy systems. The solar irradiance data in Ramadi city were analyzed for the first three months of 2013. The solar irradiance data are measured on the earth's surface in the campus area of Anbar University. Actual average data readings were taken from the data logger of the sun tracker system, which is set to save the average readings for each two minutes, based on readings taken every second. The data are analyzed from January to the end of March 2013. Maximum daily readings and monthly average readings of solar irradiance have been analyzed to optimize the output of photovoltaic solar modules. The results show that the system sizing of the PV can be reduced by 12.5% if a tracking system is used instead of a fixed orientation of the PV modules.

  3. Superfast maximum-likelihood reconstruction for quantum tomography

    Science.gov (United States)

    Shang, Jiangwei; Zhang, Zhengyun; Ng, Hui Khoon

    2017-06-01

    Conventional methods for computing maximum-likelihood estimators (MLE) often converge slowly in practical situations, leading to a search for simplifying methods that rely on additional assumptions for their validity. In this work, we provide a fast and reliable algorithm for maximum-likelihood reconstruction that avoids this slow convergence. Our method utilizes the state-of-the-art convex optimization scheme, an accelerated projected-gradient method, that allows one to accommodate the quantum nature of the problem in a different way than in the standard methods. We demonstrate the power of our approach by comparing its performance with other algorithms for n -qubit state tomography. In particular, an eight-qubit situation that purportedly took weeks of computation time in 2005 can now be completed in under a minute for a single set of data, with far higher accuracy than previously possible. This refutes the common claim that MLE reconstruction is slow and reduces the need for alternative methods that often come with difficult-to-verify assumptions. In fact, recent methods assuming Gaussian statistics or relying on compressed sensing ideas are demonstrably inapplicable for the situation under consideration here. Our algorithm can be applied to general optimization problems over the quantum state space; the philosophy of projected gradients can further be utilized for optimization contexts with general constraints.
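The accelerated projected-gradient idea can be illustrated on a classical analogue: maximum-likelihood estimation of a probability vector by gradient ascent on the log-likelihood followed by Euclidean projection back onto the simplex. This stands in for the quantum case (where the projection is onto density matrices via an eigendecomposition) and omits the acceleration step; all numbers are illustrative:

```python
def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = sorted(v, reverse=True)
    css = 0.0
    theta = 0.0
    for i, ui in enumerate(u, start=1):
        css += ui
        t = (css - 1.0) / i
        if ui - t > 0:
            theta = t
    return [max(x - theta, 0.0) for x in v]

def mle_projected_gradient(counts, steps=500, lr=0.001):
    """Maximize sum_k counts[k] * log(p[k]) over the simplex
    by projected gradient ascent."""
    n = len(counts)
    p = [1.0 / n] * n
    for _ in range(steps):
        grad = [c / max(pi, 1e-12) for c, pi in zip(counts, p)]
        p = project_simplex([pi + lr * g for pi, g in zip(p, grad)])
    return p

counts = [60, 30, 10]            # simulated measurement outcomes
p = mle_projected_gradient(counts)
print([round(x, 3) for x in p])  # MLE equals the empirical frequencies
```

The fixed point is the empirical frequency vector, the multinomial MLE; the quantum algorithm replaces the simplex projection with a projection onto positive unit-trace matrices but keeps the same ascend-then-project structure.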

  4. Maximum parsimony, substitution model, and probability phylogenetic trees.

    Science.gov (United States)

    Weng, J F; Thomas, D A; Mareels, I

    2011-01-01

The problem of inferring phylogenies (phylogenetic trees) is one of the main problems in computational biology. There are three main methods for inferring phylogenies: Maximum Parsimony (MP), Distance Matrix (DM) and Maximum Likelihood (ML), of which the MP method is the most well-studied and popular. In the MP method the optimization criterion is the number of substitutions of the nucleotides, computed from the differences in the investigated nucleotide sequences. However, the MP method is often criticized because it only counts the substitutions observable at the current time, and all the unobservable substitutions that really occurred in the evolutionary history are omitted. In order to take into account the unobservable substitutions, some substitution models have been established and they are now widely used in the DM and ML methods, but these substitution models cannot be used within the classical MP method. Recently the authors proposed a probability representation model for phylogenetic trees, and the trees reconstructed in this model are called probability phylogenetic trees. One of the advantages of the probability representation model is that it can include a substitution model to infer phylogenetic trees based on the MP principle. In this paper we explain how to use a substitution model in the reconstruction of probability phylogenetic trees and show the advantage of this approach with examples.
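The parsimony count that the MP method optimizes — the minimum number of substitutions needed to explain a site on a fixed tree — is computed by Fitch's small-parsimony algorithm. A minimal sketch for rooted binary trees, with an illustrative four-taxon site:

```python
def fitch(tree, leaf_states):
    """Minimum substitutions for one site on a rooted binary tree.
    `tree` is a nested tuple of leaf names; leaf_states maps name -> base."""
    def walk(node):
        if isinstance(node, str):                 # leaf: its observed base
            return {leaf_states[node]}, 0
        left, right = node
        sl, cl = walk(left)
        sr, cr = walk(right)
        inter = sl & sr
        if inter:                                 # agreement: no new change
            return inter, cl + cr
        return sl | sr, cl + cr + 1               # union: one substitution
    _, cost = walk(tree)
    return cost

# One aligned site across four taxa, on the tree ((A,B),(C,D)).
tree = (("A", "B"), ("C", "D"))
site = {"A": "G", "B": "G", "C": "T", "D": "A"}
print(fitch(tree, site))  # prints 2
```

Summing this cost over all sites gives the parsimony score of the tree; the abstract's point is precisely that this count ignores multiple hidden substitutions along a branch, which substitution models are designed to correct.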

  5. Constraints on pulsar masses from the maximum observed glitch

    Science.gov (United States)

    Pizzochero, P. M.; Antonelli, M.; Haskell, B.; Seveso, S.

    2017-07-01

    Neutron stars are unique cosmic laboratories in which fundamental physics can be probed in extreme conditions not accessible to terrestrial experiments. In particular, the precise timing of rotating magnetized neutron stars (pulsars) reveals sudden jumps in rotational frequency in these otherwise steadily spinning-down objects. These 'glitches' are thought to be due to the presence of a superfluid component in the star, and offer a unique glimpse into the interior physics of neutron stars. In this paper we propose an innovative method to constrain the mass of glitching pulsars, using observations of the maximum glitch observed in a star, together with state-of-the-art microphysical models of the pinning interaction between superfluid vortices and ions in the crust. We study the properties of a physically consistent angular momentum reservoir of pinned vorticity, and we find a general inverse relation between the size of the maximum glitch and the pulsar mass. We are then able to estimate the mass of all the observed glitchers that have displayed at least two large events. Our procedure will allow current and future observations of glitching pulsars to constrain not only the physics of glitch models but also the superfluid properties of dense hadronic matter in neutron star interiors.

  6. Maximum wind energy extraction strategies using power electronic converters

    Science.gov (United States)

    Wang, Quincy Qing

    2003-10-01

    This thesis focuses on maximum wind energy extraction strategies for achieving the highest energy output of variable speed wind turbine power generation systems. Power electronic converters and controls provide the basic platform to accomplish the research of this thesis in both hardware and software aspects. In order to send wind energy to a utility grid, a variable speed wind turbine requires a power electronic converter to convert a variable voltage variable frequency source into a fixed voltage fixed frequency supply. Generic single-phase and three-phase converter topologies, converter control methods for wind power generation, as well as the developed direct drive generator, are introduced in the thesis for establishing variable-speed wind energy conversion systems. Variable speed wind power generation system modeling and simulation are essential methods both for understanding the system behavior and for developing advanced system control strategies. Wind generation system components, including wind turbine, 1-phase IGBT inverter, 3-phase IGBT inverter, synchronous generator, and rectifier, are modeled in this thesis using MATLAB/SIMULINK. The simulation results have been verified by a commercial simulation software package, PSIM, and confirmed by field test results. Since the dynamic time constants for these individual models are much different, a creative approach has also been developed in this thesis to combine these models for entire wind power generation system simulation. An advanced maximum wind energy extraction strategy relies not only on proper system hardware design, but also on sophisticated software control algorithms. Based on literature review and computer simulation on wind turbine control algorithms, an intelligent maximum wind energy extraction control algorithm is proposed in this thesis. This algorithm has a unique on-line adaptation and optimization capability, which is able to achieve maximum wind energy conversion efficiency through
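On-line adaptation toward the maximum power point is commonly realized as a perturb-and-observe hill climb: perturb the operating point, keep the direction if power increased, reverse it otherwise. The sketch below demonstrates the idea on a toy single-peak power curve; the curve, step size, and operating variable are illustrative stand-ins, not the thesis's actual control algorithm:

```python
def turbine_power(tip_speed_ratio):
    """Toy power curve with a single maximum (illustrative only)."""
    lam = tip_speed_ratio
    return max(0.0, 0.5 * lam * (8.0 - lam))   # peak at lam = 4

def perturb_and_observe(start, step=0.1, iters=200):
    """Hill-climbing MPPT: reverse the perturbation whenever power drops."""
    lam = start
    p_prev = turbine_power(lam)
    direction = 1.0
    for _ in range(iters):
        lam += direction * step
        p = turbine_power(lam)
        if p < p_prev:          # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return lam

lam = perturb_and_observe(start=1.0)
print(f"operating point ~ {lam:.1f} (true optimum 4.0)")
```

The steady state oscillates within one step of the optimum, which is the classic trade-off of fixed-step perturb-and-observe; adaptive step sizes (as in more sophisticated schemes) shrink that oscillation.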

  7. Optimal operating conditions for maximum biogas production in anaerobic bioreactors

    International Nuclear Information System (INIS)

    Balmant, W.; Oliveira, B.H.; Mitchell, D.A.; Vargas, J.V.C.; Ordonez, J.C.

    2014-01-01

The objective of this paper is to demonstrate the existence of an optimal residence time and substrate inlet mass flow rate for maximum methane production, through numerical simulations performed with a general transient mathematical model of an anaerobic biodigester introduced in this study. A simplified model is suggested here, with only the most important reaction steps, which are carried out by a single type of microorganism following Monod kinetics. The mathematical model was developed for a well-mixed reactor (CSTR, Continuous Stirred-Tank Reactor), considering three main reaction steps: acidogenesis, with a μ_max of 8.64 day^-1 and a K_S of 250 mg/L; acetogenesis, with a μ_max of 2.64 day^-1 and a K_S of 32 mg/L; and methanogenesis, with a μ_max of 1.392 day^-1 and a K_S of 100 mg/L. The yield coefficients were 0.1 g-dry-cells/g-polymeric-compound for acidogenesis, 0.1 g-dry-cells/g-propionic-acid and 0.1 g-dry-cells/g-butyric-acid for acetogenesis, and 0.1 g-dry-cells/g-acetic-acid for methanogenesis. The model describes both the transient and the steady-state regime for several different biodigester designs and operating conditions. After experimental validation of the model, a parametric analysis was performed. It was found that biogas production is strongly dependent on the input polymeric substrate and fermentable monomer concentrations, but fairly independent of the input propionic, acetic and butyric acid concentrations. An optimisation study was then conducted, and an optimal residence time and substrate inlet mass flow rate were found for maximum methane production. The optima found were very sharp, showing a sudden drop of the methane mass flow rate from the observed maximum to zero within a 20% range around the optimal operating parameters, which stresses the importance of their identification, no matter how complex the actual bioreactor design may be. The model is therefore expected to be a useful tool for simulation, design, control and
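The existence of an optimal residence time can be seen even in a single-step chemostat with Monod kinetics: at steady state the dilution rate D = 1/τ fixes the residual substrate S = K_S·D/(μ_max − D), and the volumetric production rate D·(S_in − S) peaks at an intermediate τ. The sketch below scans τ using the abstract's methanogenesis kinetics; the inlet concentration and gas yield are assumed placeholders, and the full multi-step transient model is far richer:

```python
def steady_state_production(tau, mu_max, Ks, S_in, yield_gas):
    """Volumetric product formation rate for a single-step chemostat."""
    D = 1.0 / tau                       # dilution rate, 1/day
    if D >= mu_max:
        return 0.0                      # washout: no steady biomass
    S = Ks * D / (mu_max - D)           # Monod steady-state substrate
    if S >= S_in:
        return 0.0
    return yield_gas * D * (S_in - S)   # product per litre per day

# Methanogenesis kinetics from the abstract; S_in and yield are assumed.
mu_max, Ks = 1.392, 100.0               # 1/day, mg/L
S_in, yield_gas = 5000.0, 0.35          # mg/L acetate, g-CH4/g (assumed)

best_tau, best_rate = max(
    ((t / 100.0,
      steady_state_production(t / 100.0, mu_max, Ks, S_in, yield_gas))
     for t in range(80, 2000)),
    key=lambda pair: pair[1])
print(f"optimal residence time ~ {best_tau:.2f} d, rate {best_rate:.1f} mg/L/d")
```

The sharp drop the authors report has the same origin as the washout boundary here: just below the optimal τ, the dilution rate overtakes the microbial growth rate and production collapses to zero.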

  8. Power Producer Production Valuation

    Directory of Open Access Journals (Sweden)

    M. Kněžek

    2008-01-01

The ongoing developments in the electricity market, in particular the establishment of the Prague Energy Exchange (PXE) and the associated transfer from campaign-driven sale to continuous trading, represent a significant change for power companies. Power producing companies can now optimize the sale of their production capacities with the objective of maximizing profit from wholesale electricity and supporting services. The Trading Departments measure the success rate of trading activities by the gross margin (GM), calculated by subtracting the realized purchase prices and the production cost from the realized sales prices, and indicate the profit & loss (P&L) to be subsequently calculated by the Control Department. The risk management process is set up on the basis of a business strategy defining the volumes of electricity that have to be sold one year and one month before the commencement of delivery. At the same time, this process defines the volume of electricity to remain available for spot trading (trading limits).

  9. Accurate modeling and maximum power point detection of ...

    African Journals Online (AJOL)

    Accurate modeling and maximum power point detection of photovoltaic ... Determination of MPP enables the PV system to deliver maximum available power. ..... adaptive artificial neural network: Proposition for a new sizing procedure.

  10. Maximum power per VA control of vector controlled interior ...

    Indian Academy of Sciences (India)

    Thakur Sumeet Singh

    2018-04-11

    Apr 11, 2018 ... Department of Electrical Engineering, Indian Institute of Technology Delhi, New ... The MPVA operation allows maximum-utilization of the drive-system. ... Permanent magnet motor; unity power factor; maximum VA utilization; ...

  11. Electron density distribution in Si and Ge using multipole, maximum ...

    Indian Academy of Sciences (India)

    Si and Ge has been studied using multipole, maximum entropy method (MEM) and ... and electron density distribution using the currently available versatile ..... data should be subjected to maximum possible utility for the characterization of.

  12. Maximum permissible concentrations of uranium in air

    CERN Document Server

    Adams, N

    1973-01-01

    The retention of uranium by bone and kidney has been re-evaluated taking account of recently published data for a man who had been occupationally exposed to natural uranium aerosols and for adults who had ingested uranium at the normal dietary levels. For life-time occupational exposure to uranium aerosols the new retention functions yield a greater retention in bone and a smaller retention in kidney than the earlier ones, which were based on acute intakes of uranium by terminal patients. Hence bone replaces kidney as the critical organ. The (MPC) sub a for uranium 238 on radiological considerations using the current (1959) ICRP lung model for the new retention functions is slightly smaller than for earlier functions but the (MPC) sub a determined by chemical toxicity remains the most restrictive.

  13. Maximum neutron yields in experimental fusion devices

    International Nuclear Information System (INIS)

    Jassby, D.L.

    1979-02-01

    The optimal performances of 12 types of fusion devices are compared with regard to neutron production rate, neutrons per pulse, and fusion energy multiplication Q_p (converted to the equivalent value in D-T operation). The record values in all categories are held by the beam-injected tokamak plasma, followed by other beam-target systems. The achieved values of Q_p for nearly all laboratory plasma fusion devices (magnetically or inertially confined) are found to roughly satisfy a common empirical scaling, Q_p ~ 10^-6 E_in^(3/2), where E_in is the energy (in kilojoules) injected into the plasma during one or two energy confinement times, or the total energy delivered to the target for inertially confined systems. Fusion energy break-even (Q_p = 1) in any system apparently requires E_in ~ 10,000 kJ

  14. 40 CFR 141.13 - Maximum contaminant levels for turbidity.

    Science.gov (United States)

    2010-07-01

    ... turbidity. 141.13 Section 141.13 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER... Maximum contaminant levels for turbidity. The maximum contaminant levels for turbidity are applicable to... part. The maximum contaminant levels for turbidity in drinking water, measured at a representative...

  15. Maximum Power Training and Plyometrics for Cross-Country Running.

    Science.gov (United States)

    Ebben, William P.

    2001-01-01

    Provides a rationale for maximum power training and plyometrics as conditioning strategies for cross-country runners, examining: an evaluation of training methods (strength training and maximum power training and plyometrics); biomechanic and velocity specificity (role in preventing injury); and practical application of maximum power training and…

  16. 13 CFR 107.840 - Maximum term of Financing.

    Science.gov (United States)

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Maximum term of Financing. 107.840... COMPANIES Financing of Small Businesses by Licensees Structuring Licensee's Financing of An Eligible Small Business: Terms and Conditions of Financing § 107.840 Maximum term of Financing. The maximum term of any...

  17. 7 CFR 3565.210 - Maximum interest rate.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Maximum interest rate. 3565.210 Section 3565.210... AGRICULTURE GUARANTEED RURAL RENTAL HOUSING PROGRAM Loan Requirements § 3565.210 Maximum interest rate. The interest rate for a guaranteed loan must not exceed the maximum allowable rate specified by the Agency in...

  18. Characterizing graphs of maximum matching width at most 2

    DEFF Research Database (Denmark)

    Jeong, Jisu; Ok, Seongmin; Suh, Geewon

    2017-01-01

    The maximum matching width is a width-parameter that is defined on a branch-decomposition over the vertex set of a graph. The size of a maximum matching in the bipartite graph is used as a cut-function. In this paper, we characterize the graphs of maximum matching width at most 2 using the minor o...

  19. Maximum likelihood versus likelihood-free quantum system identification in the atom maser

    International Nuclear Information System (INIS)

    Catana, Catalin; Kypraios, Theodore; Guţă, Mădălin

    2014-01-01

    We consider the problem of estimating a dynamical parameter of a Markovian quantum open system (the atom maser), by performing continuous time measurements in the system's output (outgoing atoms). Two estimation methods are investigated and compared. Firstly, the maximum likelihood estimator (MLE) takes into account the full measurement data and is asymptotically optimal in terms of its mean square error. Secondly, the ‘likelihood-free’ method of approximate Bayesian computation (ABC) produces an approximation of the posterior distribution for a given set of summary statistics, by sampling trajectories at different parameter values and comparing them with the measurement data via chosen statistics. Building on previous results which showed that atom counts are poor statistics for certain values of the Rabi angle, we apply MLE to the full measurement data and estimate its Fisher information. We then select several correlation statistics such as waiting times, distribution of successive identical detections, and use them as input of the ABC algorithm. The resulting posterior distribution follows closely the data likelihood, showing that the selected statistics capture ‘most’ statistical information about the Rabi angle. (paper)
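The 'likelihood-free' ABC method described above can be sketched generically: draw parameters from a prior, simulate data, and keep draws whose summary statistic is close to the observed one. The toy Gaussian simulator, the uniform prior, and all names below are illustrative stand-ins, not the atom-maser model:

```python
# Minimal rejection-ABC sketch (illustrative, not the paper's system).
import random

random.seed(0)

def simulate(theta, n=200):
    """Toy simulator: n Gaussian draws with mean theta (stand-in for
    sampling measurement trajectories at a given parameter value)."""
    return [random.gauss(theta, 1.0) for _ in range(n)]

def summary(data):
    """Chosen summary statistic (here simply the sample mean)."""
    return sum(data) / len(data)

def abc_rejection(obs_summary, prior=(-2.0, 2.0), n_draws=5000, eps=0.1):
    accepted = []
    lo, hi = prior
    for _ in range(n_draws):
        theta = random.uniform(lo, hi)        # draw from the prior
        if abs(summary(simulate(theta)) - obs_summary) < eps:
            accepted.append(theta)            # keep close matches
    return accepted

observed = summary(simulate(0.7))             # pretend theta = 0.7 is unknown
posterior_sample = abc_rejection(observed)
estimate = sum(posterior_sample) / len(posterior_sample)
```

The quality of the resulting posterior approximation depends entirely on how informative the chosen summary statistics are, which is exactly the point the abstract makes about atom counts versus correlation statistics.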

  20. Small scale wind energy harvesting with maximum power tracking

    Directory of Open Access Journals (Sweden)

    Joaquim Azevedo

    2015-07-01

    Full Text Available It is well-known that energy harvesting from wind can be used to power remote monitoring systems. There are several studies that use wind energy in small-scale systems, mainly with vertical-axis wind turbines. However, there are very few studies with actual implementations of small wind turbines. This paper compares the performance of horizontal- and vertical-axis wind turbines for energy harvesting in wireless sensor network applications. The problem with the use of wind energy is that most of the time the wind speed is very low, especially in urban areas. Therefore, this work includes a study of the wind speed distribution in an urban environment and proposes a controller to maximize the energy transfer to the storage systems. The generated power is evaluated by simulation and experimentally for different load and wind conditions. The results demonstrate the increase in efficiency of wind generators that use maximum power transfer tracking, even at low wind speeds.
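A common realisation of such maximum power transfer tracking is the perturb-and-observe algorithm: nudge the operating point, and reverse direction whenever output power drops. The quadratic power curve and all names below are made-up stand-ins for a real turbine/converter characteristic, not the paper's controller:

```python
# Illustrative perturb-and-observe maximum power point tracker.

def power(duty):
    """Hypothetical power (W) vs. converter duty cycle, peaking at duty = 0.6."""
    return max(0.0, 10.0 - 50.0 * (duty - 0.6) ** 2)

def perturb_and_observe(duty=0.2, step=0.01, iters=200):
    p_prev = power(duty)
    direction = 1
    for _ in range(iters):
        duty += direction * step        # perturb the operating point
        p = power(duty)
        if p < p_prev:                  # power dropped: reverse direction
            direction = -direction
        p_prev = p
    return duty

mpp_duty = perturb_and_observe()
```

The tracker settles into a small oscillation around the maximum power point; the step size trades tracking speed against steady-state ripple, which matters at the low wind speeds the abstract emphasises.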

  1. Narrow band interference cancellation in OFDM: A structured maximum likelihood approach

    KAUST Repository

    Sohail, Muhammad Sadiq

    2012-06-01

    This paper presents a maximum likelihood (ML) approach to mitigate the effect of narrow band interference (NBI) in a zero padded orthogonal frequency division multiplexing (ZP-OFDM) system. The NBI is assumed to be time variant and asynchronous with the frequency grid of the ZP-OFDM system. The proposed structure based technique uses the fact that the NBI signal is sparse as compared to the ZP-OFDM signal in the frequency domain. The structure is also useful in reducing the computational complexity of the proposed method. The paper also presents a data aided approach for improved NBI estimation. The suitability of the proposed method is demonstrated through simulations. © 2012 IEEE.

  2. Reduced oxygen at high altitude limits maximum size.

    Science.gov (United States)

    Peck, L S; Chapelle, G

    2003-11-07

    The trend towards large size in marine animals with latitude, and the existence of giant marine species in polar regions, have long been recognized, but remained enigmatic until a recent study showed it to be an effect of increased oxygen availability in sea water at low temperature. The effect was apparent in data from 12 sites worldwide because of variations in water oxygen content controlled by differences in temperature and salinity. Another major physical factor affecting oxygen content in aquatic environments is reduced pressure at high altitude. Suitable data from high-altitude sites are very scarce. However, an exceptionally rich crustacean collection, which remains largely undescribed, was obtained by the British 1937 expedition from Lake Titicaca on the border between Peru and Bolivia in the Andes at an altitude of 3809 m. We show that in Lake Titicaca the maximum length of amphipods is 2-4 times smaller than at other low-salinity sites (the Caspian Sea and Lake Baikal).

  3. A mini-exhibition with maximum content

    CERN Multimedia

    Laëtitia Pedroso

    2011-01-01

    The University of Budapest has been hosting a CERN mini-exhibition since 8 May. While smaller than the main travelling exhibition it has a number of major advantages: its compact design alleviates transport difficulties and makes it easier to find suitable venues in the Member States. Its content can be updated almost instantaneously and it will become even more interactive and high-tech as time goes by.   The exhibition on display in Budapest. The purpose of CERN's new mini-exhibition is to be more interactive and easier to install. Due to its size, the main travelling exhibition cannot be moved around quickly, which is why it stays in the same country for 4 to 6 months. But this means a long waiting list for the other Member States. To solve this problem, the Education Group has designed a new exhibition, which is smaller and thus easier to install. Smaller maybe, but no less rich in content, as the new exhibition conveys exactly the same messages as its larger counterpart. However, in the slimm...

  4. Tail Risk Constraints and Maximum Entropy

    Directory of Open Access Journals (Sweden)

    Donald Geman

    2015-06-01

    Full Text Available Portfolio selection in the financial literature has essentially been analyzed under two central assumptions: full knowledge of the joint probability distribution of the returns of the securities that will comprise the target portfolio; and investors’ preferences are expressed through a utility function. In the real world, operators build portfolios under risk constraints which are expressed both by their clients and regulators and which bear on the maximal loss that may be generated over a given time period at a given confidence level (the so-called Value at Risk of the position). Interestingly, in the finance literature, a serious discussion of how much or little is known from a probabilistic standpoint about the multi-dimensional density of the assets’ returns seems to be of limited relevance. Our approach in contrast is to highlight these issues and then adopt throughout a framework of entropy maximization to represent the real world ignorance of the “true” probability distributions, both univariate and multivariate, of traded securities’ returns. In this setting, we identify the optimal portfolio under a number of downside risk constraints. Two interesting results are exhibited: (i) the left-tail constraints are sufficiently powerful to override all other considerations in the conventional theory; (ii) the “barbell portfolio” (maximal certainty/low risk in one set of holdings, maximal uncertainty in another), which is quite familiar to traders, naturally emerges in our construction.

  5. Continuity and boundary conditions in thermodynamics: From Carnot's efficiency to efficiencies at maximum power

    Science.gov (United States)

    Ouerdane, H.; Apertet, Y.; Goupil, C.; Lecoeur, Ph.

    2015-07-01

    Classical equilibrium thermodynamics is a theory of principles, which was built from empirical knowledge and debates on the nature and the use of heat as a means to produce motive power. By the beginning of the 20th century, the principles of thermodynamics were summarized into the so-called four laws, which were, as it turns out, definitive negative answers to the doomed quests for perpetual motion machines. As a matter of fact, one result of Sadi Carnot's work was precisely that the heat-to-work conversion process is fundamentally limited; as such, it is considered as a first version of the second law of thermodynamics. Although it was derived from Carnot's unrealistic model, the upper bound on the thermodynamic conversion efficiency, known as the Carnot efficiency, became a paradigm as the next target after the failure of the perpetual motion ideal. In the 1950's, Jacques Yvon published a conference paper containing the necessary ingredients for a new class of models, and even a formula, not so different from that of Carnot's efficiency, which later would become the new efficiency reference. Yvon's first analysis of a model of engine producing power, connected to heat source and sink through heat exchangers, went fairly unnoticed for twenty years, until Frank Curzon and Boye Ahlborn published their pedagogical paper about the effect of finite heat transfer on output power limitation and their derivation of the efficiency at maximum power, now mostly known as the Curzon-Ahlborn (CA) efficiency. The notion of finite rate explicitly introduced time in thermodynamics, and its significance cannot be overlooked as shown by the wealth of works devoted to what is now known as finite-time thermodynamics since the end of the 1970's. The favorable comparison of the CA efficiency to actual values led many to consider it as a universal upper bound for real heat engines, but things are not so straightforward that a simple formula may account for a variety of situations. 

  6. Probable Maximum Precipitation in the U.S. Pacific Northwest in a Changing Climate

    Science.gov (United States)

    Chen, Xiaodong; Hossain, Faisal; Leung, L. Ruby

    2017-11-01

    The safety of large and aging water infrastructures is gaining attention in water management given the accelerated rate of change in landscape, climate, and society. In current engineering practice, such safety is ensured by the design of infrastructure for the Probable Maximum Precipitation (PMP). Recently, several numerical modeling approaches have been proposed to modernize the conventional and ad hoc PMP estimation approach. However, the underlying physics have not been fully investigated and thus differing PMP estimates are sometimes obtained without physics-based interpretations. In this study, we present a hybrid approach that takes advantage of both traditional engineering practice and modern climate science to estimate PMP for current and future climate conditions. The traditional PMP approach is modified and applied to five statistically downscaled CMIP5 model outputs, producing an ensemble of PMP estimates in the Pacific Northwest (PNW) during the historical (1970-2016) and future (2050-2099) time periods. The hybrid approach produced historical PMP estimates consistent with the traditional estimates. PMP in the PNW will increase by 50% ± 30% of the current design PMP by 2099 under the RCP8.5 scenario. Most of the increase is caused by warming, which mainly affects moisture availability through increased sea surface temperature, with minor contributions from changes in storm efficiency in the future. Moisture track changes tend to reduce the future PMP. Compared with extreme precipitation, PMP exhibits higher internal variability. Thus, long-term records of high-quality data in both precipitation and related meteorological fields (temperature, wind fields) are required to reduce uncertainties in the ensemble PMP estimates.

  7. Biologically produced sulfur

    NARCIS (Netherlands)

    Kleinjan, W.E.; Keizer, de A.; Janssen, A.J.H.

    2003-01-01

    Sulfur compound oxidizing bacteria produce sulfur as an intermediate in the oxidation of hydrogen sulfide to sulfate. Sulfur produced by these microorganisms can be stored in sulfur globules, located either inside or outside the cell. Excreted sulfur globules are colloidal particles which are

  8. Consumers and Producers

    NARCIS (Netherlands)

    E. Maira (Elisa)

    2018-01-01

    markdownabstractIn the last few decades, advances in information and communication technology have dramatically changed the way consumers and producers interact in the marketplace. The Internet and social media have torn down the information barrier between producers and consumers, leading to

  9. Producers and oil markets

    International Nuclear Information System (INIS)

    Greaves, W.

    1993-01-01

    This article attempts an assessment of the potential use of futures by the Middle East oil producers. It focuses on Saudi Arabia since the sheer size of Saudi Arabian sales poses problems, but the basic issues discussed are similar for the other Middle East producers. (Author)

  10. Modelling non-stationary annual maximum flood heights in the lower Limpopo River basin of Mozambique

    Directory of Open Access Journals (Sweden)

    Daniel Maposa

    2016-05-01

    Full Text Available In this article we fit a time-dependent generalised extreme value (GEV) distribution to annual maximum flood heights at three sites: Chokwe, Sicacate and Combomune in the lower Limpopo River basin of Mozambique. A GEV distribution is fitted to six annual maximum time series models at each site, namely: annual daily maximum (AM1), annual 2-day maximum (AM2), annual 5-day maximum (AM5), annual 7-day maximum (AM7), annual 10-day maximum (AM10) and annual 30-day maximum (AM30). Non-stationary time-dependent GEV models with a linear trend in the location and scale parameters are considered in this study. The results show a lack of sufficient evidence to indicate a linear trend in the location parameter at all three sites. On the other hand, the findings in this study reveal strong evidence of the existence of a linear trend in the scale parameter at Combomune and Sicacate, whilst the scale parameter had no significant linear trend at Chokwe. Further investigation in this study also reveals that the location parameter at Sicacate can be modelled by a nonlinear quadratic trend; however, the added complexity of the overall model is not justified by the improvement in fit over a time-homogeneous model. This study shows the importance of extending the time-homogeneous GEV model to incorporate climate change factors such as trend in the lower Limpopo River basin, particularly in this era of global warming and a changing climate. Keywords: nonstationary extremes; annual maxima; lower Limpopo River; generalised extreme value
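The likelihood behind such a non-stationary fit can be sketched directly: let the location vary linearly in time, mu(t) = mu0 + mu1·t, keep the scale positive via sigma(t) = exp(s0 + s1·t), and hold the shape xi constant. The parameterisation and names below are a hedged illustration of the general approach, not the authors' exact model:

```python
# Negative log-likelihood of a time-dependent GEV for annual maxima.
import math, random

def gev_nll(params, series):
    """params = (mu0, mu1, s0, s1, xi); series = list of (t, x) pairs."""
    mu0, mu1, s0, s1, xi = params
    nll = 0.0
    for t, x in series:
        mu = mu0 + mu1 * t
        sigma = math.exp(s0 + s1 * t)
        z = (x - mu) / sigma
        if abs(xi) < 1e-8:                  # Gumbel limit as xi -> 0
            nll += math.log(sigma) + z + math.exp(-z)
        else:
            w = 1.0 + xi * z
            if w <= 0:                      # observation outside the support
                return float("inf")
            nll += math.log(sigma) + (1.0 + 1.0 / xi) * math.log(w) + w ** (-1.0 / xi)
    return nll

# Synthetic annual maxima with an upward trend in location (illustrative):
random.seed(1)
series = [(t, 10.0 + 0.3 * t + random.gauss(0.0, 1.0)) for t in range(50)]
with_trend = gev_nll((10.0, 0.3, 0.0, 0.0, 0.0), series)
no_trend = gev_nll((10.0, 0.0, 0.0, 0.0, 0.0), series)
```

Minimising this NLL over the five parameters (e.g. with a numerical optimiser) and comparing nested models via likelihood ratios is the standard way to test whether the linear trend terms are warranted, as done site-by-site in the article.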

  11. 40 CFR 1042.140 - Maximum engine power, displacement, power density, and maximum in-use engine speed.

    Science.gov (United States)

    2010-07-01

    ... cylinders having an internal diameter of 13.0 cm and a 15.5 cm stroke length, the rounded displacement would... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Maximum engine power, displacement... Maximum engine power, displacement, power density, and maximum in-use engine speed. This section describes...

  12. Approximate maximum likelihood estimation for population genetic inference.

    Science.gov (United States)

    Bertl, Johanna; Ewing, Gregory; Kosiol, Carolin; Futschik, Andreas

    2017-11-27

    In many population genetic problems, parameter estimation is obstructed by an intractable likelihood function. Therefore, approximate estimation methods have been developed, and with growing computational power, sampling-based methods became popular. However, these methods such as Approximate Bayesian Computation (ABC) can be inefficient in high-dimensional problems. This led to the development of more sophisticated iterative estimation methods like particle filters. Here, we propose an alternative approach that is based on stochastic approximation. By moving along a simulated gradient or ascent direction, the algorithm produces a sequence of estimates that eventually converges to the maximum likelihood estimate, given a set of observed summary statistics. This strategy does not sample much from low-likelihood regions of the parameter space, and is fast, even when many summary statistics are involved. We put considerable efforts into providing tuning guidelines that improve the robustness and lead to good performance on problems with high-dimensional summary statistics and a low signal-to-noise ratio. We then investigate the performance of our resulting approach and study its properties in simulations. Finally, we re-estimate parameters describing the demographic history of Bornean and Sumatran orang-utans.
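The idea of "moving along a simulated gradient" can be illustrated with a Kiefer-Wolfowitz-style stochastic approximation: estimate the gradient from noisy finite differences of a simulated objective and ascend it with decaying step sizes. The quadratic toy objective and the gain sequences below are illustrative assumptions, not the paper's algorithm:

```python
# Stochastic-approximation ascent on a noisy objective (illustrative sketch).
import random

random.seed(2)

def noisy_objective(theta):
    """Simulated log-likelihood surrogate, maximised at theta = 3.0."""
    return -(theta - 3.0) ** 2 + random.gauss(0.0, 0.1)

def stochastic_ascent(theta=0.0, iters=2000):
    for k in range(1, iters + 1):
        a_k = 0.5 / k              # decaying step size
        c_k = 0.5 / k ** 0.25      # decaying finite-difference width
        grad = (noisy_objective(theta + c_k)
                - noisy_objective(theta - c_k)) / (2 * c_k)
        theta += a_k * grad        # ascend the simulated gradient
    return theta

theta_hat = stochastic_ascent()
```

With step sizes satisfying the usual Robbins-Monro conditions the iterates converge to the maximiser despite the simulation noise, which is the property the abstract exploits to reach the maximum likelihood estimate without sampling low-likelihood regions.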

  13. Applications of the maximum entropy principle in nuclear physics

    International Nuclear Information System (INIS)

    Froehner, F.H.

    1990-01-01

    Soon after the advent of information theory the principle of maximum entropy was recognized as furnishing the missing rationale for the familiar rules of classical thermodynamics. More recently it has also been applied successfully in nuclear physics. As an elementary example we derive a physically meaningful macroscopic description of the spectrum of neutrons emitted in nuclear fission, and compare the well-known result with accurate data on 252Cf. A second example, derivation of an expression for resonance-averaged cross sections for nuclear reactions like scattering or fission, is less trivial. Entropy maximization, constrained by given transmission coefficients, yields probability distributions for the R- and S-matrix elements, from which average cross sections can be calculated. If constrained only by the range of the spectrum of compound-nuclear levels it produces the Gaussian Orthogonal Ensemble (GOE) of Hamiltonian matrices that again yields expressions for average cross sections. Both avenues give practically the same numbers in spite of the quite different cross section formulae. These results were employed in a new model-aided evaluation of the 238U neutron cross sections in the unresolved resonance region. (orig.)

  14. Penalised Maximum Likelihood Simultaneous Longitudinal PET Image Reconstruction with Difference-Image Priors.

    Science.gov (United States)

    Ellis, Sam; Reader, Andrew J

    2018-04-26

    Many clinical contexts require the acquisition of multiple positron emission tomography (PET) scans of a single subject, for example to observe and quantify changes in functional behaviour in tumours after treatment in oncology. Typically, the datasets from each of these scans are reconstructed individually, without exploiting the similarities between them. We have recently shown that sharing information between longitudinal PET datasets by penalising voxel-wise differences during image reconstruction can improve reconstructed images by reducing background noise and increasing the contrast-to-noise ratio of high activity lesions. Here we present two additional novel longitudinal difference-image priors and evaluate their performance using 2D simulation studies and a 3D real dataset case study. We have previously proposed a simultaneous difference-image-based penalised maximum likelihood (PML) longitudinal image reconstruction method that encourages sparse difference images (DS-PML), and in this work we propose two further novel prior terms. The priors are designed to encourage longitudinal images with corresponding differences which have i) low entropy (DE-PML), and ii) high sparsity in their spatial gradients (DTV-PML). These two new priors and the originally proposed longitudinal prior were applied to 2D simulated treatment response [ 18 F]fluorodeoxyglucose (FDG) brain tumour datasets and compared to standard maximum likelihood expectation-maximisation (MLEM) reconstructions. These 2D simulation studies explored the effects of penalty strengths, tumour behaviour, and inter-scan coupling on reconstructed images. Finally, a real two-scan longitudinal data series acquired from a head and neck cancer patient was reconstructed with the proposed methods and the results compared to standard reconstruction methods. Using any of the three priors with an appropriate penalty strength produced images with noise levels equivalent to those seen when using standard

  15. Estimation of maximum credible atmospheric radioactivity concentrations and dose rates from nuclear tests

    International Nuclear Information System (INIS)

    Telegadas, K.

    1979-01-01

    A simple technique is presented for estimating maximum credible gross beta air concentrations from nuclear detonations in the atmosphere, based on aircraft sampling of radioactivity following each Chinese nuclear test from 1964 to 1976. The calculated concentration is a function of the total yield and fission yield, initial vertical radioactivity distribution, time after detonation, and rate of horizontal spread of the debris with time. Calculated maximum credible concentrations are compared with the highest concentrations measured during aircraft sampling. The technique provides a reasonable estimate of maximum air concentrations from 1 to 10 days after a detonation. An estimate of the whole-body external gamma dose rate corresponding to the maximum credible gross beta concentration is also given. (author)

  16. The maximum entropy production and maximum Shannon information entropy in enzyme kinetics

    Science.gov (United States)

    Dobovišek, Andrej; Markovič, Rene; Brumen, Milan; Fajmut, Aleš

    2018-04-01

    We demonstrate that the maximum entropy production principle (MEPP) serves as a physical selection principle for the description of the most probable non-equilibrium steady states in simple enzymatic reactions. A theoretical approach is developed, which enables maximization of the density of entropy production with respect to the enzyme rate constants for the enzyme reaction in a steady state. Mass and Gibbs free energy conservations are considered as optimization constraints. In such a way computed optimal enzyme rate constants in a steady state yield also the most uniform probability distribution of the enzyme states. This accounts for the maximal Shannon information entropy. By means of the stability analysis it is also demonstrated that maximal density of entropy production in that enzyme reaction requires flexible enzyme structure, which enables rapid transitions between different enzyme states. These results are supported by an example, in which density of entropy production and Shannon information entropy are numerically maximized for the enzyme Glucose Isomerase.

  17. Solar Maximum Mission Experiment - Ultraviolet Spectroscopy and Polarimetry on the Solar Maximum Mission

    Science.gov (United States)

    Tandberg-Hanssen, E.; Cheng, C. C.; Woodgate, B. E.; Brandt, J. C.; Chapman, R. D.; Athay, R. G.; Beckers, J. M.; Bruner, E. C.; Gurman, J. B.; Hyder, C. L.

    1981-01-01

    The Ultraviolet Spectrometer and Polarimeter on the Solar Maximum Mission spacecraft is described. It is pointed out that the instrument, which operates in the wavelength range 1150-3600 A, has a spatial resolution of 2-3 arcsec and a spectral resolution of 0.02 A FWHM in second order. A Gregorian telescope, with a focal length of 1.8 m, feeds a 1 m Ebert-Fastie spectrometer. A polarimeter comprising rotating MgF2 waveplates can be inserted behind the spectrometer entrance slit; it permits all four Stokes parameters to be determined. Among the observing modes are rasters, spectral scans, velocity measurements, and polarimetry. Examples of initial observations made since launch are presented.

  18. Latitudinal Change of Tropical Cyclone Maximum Intensity in the Western North Pacific

    OpenAIRE

    Choi, Jae-Won; Cha, Yumi; Kim, Hae-Dong; Kang, Sung-Dae

    2016-01-01

    This study obtained the latitude where tropical cyclones (TCs) show maximum intensity and applied statistical change-point analysis to the time series of the average annual values. The analysis found that the latitude of TC maximum intensity increased from 1999. To investigate the reason behind this phenomenon, the difference between the average latitude over 1999-2013 and the average over 1977-1998 was analyzed. In a difference of 500 hPa streamline between the two ...

  19. Method of producing grouting mortar

    Energy Technology Data Exchange (ETDEWEB)

    Shelomov, I K; Alchina, S I; Dizer, E I; Gruzdeva, G A; Nikitinskii, V I; Sabirzyanov, A K

    1980-10-07

    A method of producing grouting mortar by mixing cement with an aqueous salt solution is proposed. To increase the quality of the mortar by accelerating the hardening time, the mixture is prepared in two stages: in the first, 20-30% of the entire cement batch hardens, and in the second, the remainder of the cement hardens; a 1-3% aqueous salt solution is used in a quantity of 0.5-1 wt.% of the cement. The use of this method of producing grouting mortar helps to increase the flexural strength of the cement brick by up to 50% after two days of ageing, by comparison with the strength of cement brick produced from grouting mortar by ordinary methods utilizing identical quantities of the initial components (cement, water, chloride).

  20. Future changes over the Himalayas: Maximum and minimum temperature

    Science.gov (United States)

    Dimri, A. P.; Kumar, D.; Choudhary, A.; Maharana, P.

    2018-03-01

    An assessment of the projection of minimum and maximum air temperature over the Indian Himalayan region (IHR) from the COordinated Regional Climate Downscaling EXperiment - South Asia (hereafter, CORDEX-SA) regional climate model (RCM) experiments has been carried out under two different Representative Concentration Pathway (RCP) scenarios. The major aim of this study is to assess the probable future changes in the minimum and maximum climatology and its long-term trend under different RCPs, along with the elevation-dependent warming over the IHR. A number of statistical analyses, such as changes in mean climatology, long-term spatial trend and probability distribution function, are carried out to detect the signals of changes in climate. The study also tries to quantify the uncertainties associated with different model experiments and their ensemble in space, time and for different seasons. The model experiments and their ensemble show a prominent cold bias over the Himalayas for the present climate. However, a statistically significant warming rate (0.23-0.52 °C/decade) for both minimum and maximum air temperature (Tmin and Tmax) is observed for all the seasons under both RCPs. The rate of warming intensifies with the increase in radiative forcing under a range of greenhouse gas scenarios from RCP4.5 to RCP8.5. In addition, a wide range of spatial variability and disagreement in the magnitude of trend between different models describe the uncertainty associated with the model projections and scenarios. The projected rate of increase of Tmin may destabilize snow formation at the higher altitudes in the northern and western parts of the Himalayan region, while the rising trend of Tmax over the southern flank may effectively melt more snow cover. Such a combined effect of the rising trends of Tmin and Tmax may pose a potential threat to the glacial deposits. The overall trend of the diurnal temperature range (DTR) portrays an increasing trend across the entire area with

  1. Agricultural Producer Certificates

    Data.gov (United States)

    Montgomery County of Maryland — A Certified Agricultural Producer, or representative thereof, is an individual who wishes to sell regionally-grown products in the public right-of-way. A Certified...

  2. Global correlations between maximum magnitudes of subduction zone interface thrust earthquakes and physical parameters of subduction zones

    NARCIS (Netherlands)

    Schellart, W. P.; Rawlinson, N.

    2013-01-01

    The maximum earthquake magnitude recorded for subduction zone plate boundaries varies considerably on Earth, with some subduction zone segments producing giant subduction zone thrust earthquakes (e.g. Chile, Alaska, Sumatra-Andaman, Japan) and others producing relatively small earthquakes (e.g.

  3. A Research on Maximum Symbolic Entropy from Intrinsic Mode Function and Its Application in Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Zhuofei Xu

    2017-01-01

    Full Text Available Empirical mode decomposition (EMD) is a self-adaptive analysis method for nonlinear and nonstationary signals. It has been widely applied to machinery fault diagnosis and structural damage detection. A novel feature, the maximum symbolic entropy of the intrinsic mode functions obtained by EMD, is proposed in this paper to enhance the recognition ability of EMD. First, a signal is decomposed into a collection of intrinsic mode functions (IMFs) based on the local characteristic time scale of the signal, and the IMFs are transformed into a series of symbolic sequences with different parameters. Second, it can be found that the entropies of the symbolic IMFs are quite different; however, there is always a maximum value for a certain symbolic IMF. Third, the maximum symbolic entropy is taken as the feature describing the IMFs of a signal. Finally, the proposed feature is applied to fault diagnosis of rolling bearings, and the maximum symbolic entropy is compared with other standard time-domain features in a contrast experiment. Although the maximum symbolic entropy is only a time-domain feature, it reveals the characteristic information of the signal accurately. It can also be used in other fields related to the EMD method.
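The three steps above can be sketched in Python. The symbolisation scheme (equal-width amplitude bins), the parameters and the toy IMFs below are illustrative assumptions, and the IMFs are taken as given rather than computed by EMD:

```python
import math
from collections import Counter

def symbolic_entropy(signal, n_symbols=4):
    # Quantise the signal into n_symbols equal-width amplitude bins
    # and return the Shannon entropy (bits) of the symbol sequence.
    lo, hi = min(signal), max(signal)
    width = (hi - lo) / n_symbols or 1.0   # guard against a flat signal
    symbols = [min(int((x - lo) / width), n_symbols - 1) for x in signal]
    n = len(symbols)
    return -sum(c / n * math.log2(c / n) for c in Counter(symbols).values())

def maximum_symbolic_entropy(imfs, n_symbols=4):
    # The proposed feature: the largest symbolic entropy over all IMFs.
    return max(symbolic_entropy(imf, n_symbols) for imf in imfs)

# Toy "IMFs": a regular oscillation and a more irregular component.
imfs = [[0.0, 0.1, 0.0, 0.1] * 8,
        [0.9, -0.3, 0.5, -0.8, 0.2, -0.6, 0.7, -0.1] * 4]
print(round(maximum_symbolic_entropy(imfs), 3))
```

A near-constant IMF yields an entropy close to zero, so taking the maximum naturally selects the most information-rich mode as the feature.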

  4. Methods for producing diterpenes

    DEFF Research Database (Denmark)

    2015-01-01

    The present invention discloses that by combining different diTPS enzymes of class I and class II, different diterpenes may be produced, including diterpenes not identified in nature. Surprisingly, it is revealed that a diTPS enzyme of class I from one species may be combined with a diTPS enzyme of class II from a different species, resulting in a high diversity of diterpenes which can be produced.

  5. Polysaccharide-producing microalgae

    Energy Technology Data Exchange (ETDEWEB)

    Braud, J.P.; Chaumont, D.; Gudin, C.; Thepenier, C.; Chassin, P.; Lemaire, C.

    1982-11-01

    The production of extracellular polysaccharides is studied with Nostoc sp. (a cyanophyte), Porphyridium cruentum, Rhodosorus marinus, Rhodella maculata (rhodophytes) and Chlamydomonas mexicana (a chlorophyte). The polysaccharides produced are separated by centrifugation of the culture followed by precipitation with alcohol. Their chemical structure was studied by infrared spectrometry and acid hydrolysis. By their rheological properties, and especially their insensitivity to temperature and pH variations, the polysaccharides produced by Porphyridium cruentum and Rhodella maculata appear to be suitable candidates for industrial applications.

  6. A maximum power point tracking algorithm for photovoltaic applications

    Science.gov (United States)

    Nelatury, Sudarshan R.; Gray, Robert

    2013-05-01

    The voltage-current characteristic of a photovoltaic (PV) cell is highly nonlinear, and operating a PV cell for maximum power transfer has long been a challenge. Several techniques have been proposed to estimate and track the maximum power point (MPP) in order to improve the overall efficiency of a PV panel. A strategic use of the mean value theorem permits obtaining an analytical expression for a point that lies in a close neighborhood of the true MPP, but hitherto an exact closed-form solution for the MPP has not been published. The problem can be formulated analytically as a constrained optimization, which can be solved using the Lagrange method. This method results in a system of simultaneous nonlinear equations. Solving them directly is quite difficult; however, a recursive algorithm can yield a reasonably good solution. In graphical terms, if the voltage-current characteristic and the constant-power contours are plotted on the same voltage-current plane, the point of tangency between the device characteristic and a constant-power contour is the sought-after MPP. The MPP changes with the incident irradiation and temperature, and hence an algorithm that attempts to maintain it should be adaptive, with fast convergence and minimal misadjustment. There are two parts to the implementation: first, estimating the MPP; second, using a DC-DC converter to match the given load to the MPP thus obtained. The availability of power electronics circuits has made it possible to design efficient converters. In this paper, although we do not show results from a real circuit, we use MATLAB to obtain the MPP and a buck-boost converter to match the load. Under varying conditions of load resistance and irradiance we demonstrate MPP tracking for a commercially available solar panel, the MSX-60. The power electronics circuit is simulated with the PSIM software.
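The tangency condition described above can be illustrated numerically. The sketch below is not the paper's Lagrange/recursive method: it brute-force scans P(V) = V·I(V) for an idealised single-diode model whose parameters are invented for illustration (they are not the MSX-60 datasheet values):

```python
import math

# Idealised single-diode PV model; the parameters below are invented
# for illustration and are NOT the MSX-60 datasheet values.
I_PH = 3.8                  # photocurrent [A]
I_0 = 1e-9                  # diode saturation current [A]
N_VT = 1.3 * 0.0257 * 36    # ideality factor * thermal voltage * cells

def pv_current(v):
    # I(V) = I_ph - I_0 * (exp(V / (n * Vt * Ns)) - 1)
    return I_PH - I_0 * (math.exp(v / N_VT) - 1.0)

def find_mpp(v_oc=26.5, steps=2000):
    # Brute-force scan of P(V) = V * I(V); the maximum is where a
    # constant-power contour is tangent to the I-V characteristic.
    best_v, best_p = 0.0, 0.0
    for k in range(steps + 1):
        v = v_oc * k / steps
        p = v * pv_current(v)
        if p > best_p:
            best_v, best_p = v, p
    return best_v, best_p

v_mpp, p_mpp = find_mpp()
print(f"V_mpp ~ {v_mpp:.2f} V, P_mpp ~ {p_mpp:.1f} W")
```

A real tracker cannot afford this scan at every irradiance change, which is why adaptive methods such as perturb-and-observe are used instead.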

  7. Simultaneous maximum a posteriori longitudinal PET image reconstruction

    Science.gov (United States)

    Ellis, Sam; Reader, Andrew J.

    2017-09-01

    Positron emission tomography (PET) is frequently used to monitor functional changes that occur over extended time scales, for example in longitudinal oncology PET protocols that include routine clinical follow-up scans to assess the efficacy of a course of treatment. In these contexts PET datasets are currently reconstructed into images using single-dataset reconstruction methods. Inspired by recently proposed joint PET-MR reconstruction methods, we propose to reconstruct longitudinal datasets simultaneously by using a joint penalty term in order to exploit the high degree of similarity between longitudinal images. We achieved this by penalising voxel-wise differences between pairs of longitudinal PET images in a one-step-late maximum a posteriori (MAP) fashion, resulting in the MAP simultaneous longitudinal reconstruction (SLR) method. The proposed method reduced reconstruction errors and visually improved images relative to standard maximum likelihood expectation-maximisation (ML-EM) in simulated 2D longitudinal brain tumour scans. In reconstructions of split real 3D data with inserted simulated tumours, noise across images reconstructed with MAP-SLR was reduced to levels equivalent to doubling the number of detected counts when using ML-EM. Furthermore, quantification of tumour activities was largely preserved over a variety of longitudinal tumour changes, including changes in size and activity, with larger changes inducing larger biases relative to standard ML-EM reconstructions. Similar improvements were observed for a range of counts levels, demonstrating the robustness of the method when used with a single penalty strength. The results suggest that longitudinal regularisation is a simple but effective method of improving reconstructed PET images without using resolution degrading priors.

  8. Maximum-power-point tracking control of solar heating system

    KAUST Repository

    Huang, Bin-Juine

    2012-11-01

    The present study developed a maximum-power-point tracking (MPPT) control technology for a solar heating system to minimize the pumping power consumption at optimal heat collection. The net solar energy gain Q_net (= Q_s - W_p/η_e) was experimentally found to be the cost function for MPPT, with a maximum point. A feedback tracking control system was developed to track the optimal Q_net (denoted Q_max). A tracking filter, derived from the thermal analytical model of the solar heating system, was used to determine the instantaneous tracking target Q_max(t). The system transfer-function model of the solar heating system was also derived experimentally using a step response test and used in the design of the tracking feedback control system. The PI controller was designed for a tracking target Q_max(t) with a quadratic time function. The MPPT control system was implemented using a microprocessor-based controller, and the test results show good tracking performance with small tracking errors. The average mass flow rate for the specific test periods on five different days is between 18.1 and 22.9 kg/min, with average pumping power between 77 and 140 W, which is greatly reduced compared to the standard flow rate of 31 kg/min and pumping power of 450 W based on the flow rate of 0.02 kg/s·m² defined in the ANSI/ASHRAE 93-1986 Standard and the total collector area of 25.9 m². The average net solar heat collected Q_net is between 8.62 and 14.1 kW depending on weather conditions. The MPPT control of the solar heating system has been verified to minimize the pumping energy consumption at optimal solar heat collection. © 2012 Elsevier Ltd.
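As a rough sketch of the cost function, Q_net = Q_s - W_p/η_e can be maximised over flow rate. The collector and pump models and every constant below are invented for illustration; they are not the identified plant model from the study:

```python
ETA_E = 0.35   # assumed electricity-to-primary-energy conversion factor

def q_net(q_solar_w, pump_w, eta_e=ETA_E):
    # MPPT cost function from the study: Q_net = Q_s - W_p / eta_e.
    return q_solar_w - pump_w / eta_e

# Toy plant model (invented): collected heat saturates with flow,
# while pump power grows roughly with the cube of the flow rate.
def collected_heat(flow_kg_min):
    return 14000.0 * flow_kg_min / (flow_kg_min + 6.0)   # [W]

def pump_power(flow_kg_min):
    return 0.015 * flow_kg_min ** 3                      # [W]

best_flow = max(range(1, 41),
                key=lambda f: q_net(collected_heat(f), pump_power(f)))
print(best_flow)
```

With these toy curves the optimum sits well below the large fixed flow rate, mirroring the study's finding that tracking Q_net cuts pumping power sharply.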

  9. Producing liquid fuels from biomass

    Science.gov (United States)

    Solantausta, Yrjo; Gust, Steven

    The aim of this survey was to compare, on techno-economic criteria, alternatives for producing liquid fuels from indigenous raw materials in Finland. Another aim was to compare methods under development and to prepare a proposal for steering research in this field. Process concepts were prepared for a number of alternatives, together with the corresponding balances and production and investment cost assessments for these balances. The carbon dioxide emissions of the alternatives and the price of CO2 reduction were also studied. All the alternatives for producing liquid fuels from indigenous raw materials are extremely unprofitable, though there are great differences between them. While the production cost of ethanol is 6 to 9 times higher than the market value of the product, the equivalent ratio for substitute fuel oil produced from peat by pyrolysis is 3 to 4. However, it should be borne in mind that the technical uncertainties of the alternatives are of different magnitudes. Production of ethanol from barley is a commercial technology, while biomass pyrolysis is still under development. If the aim is to reduce carbon dioxide emissions by using liquid biofuels, the most favorable alternative is pyrolysis oil produced from wood. Fuels produced from cultivated biomass are a more expensive way of reducing CO2 emissions, and their potential for reducing CO2 emissions in Finland is insignificant. Integrating liquid fuel production into some other production line is more profitable.

  10. Benefits of the maximum tolerated dose (MTD) and maximum tolerated concentration (MTC) concept in aquatic toxicology

    International Nuclear Information System (INIS)

    Hutchinson, Thomas H.; Boegi, Christian; Winter, Matthew J.; Owens, J. Willie

    2009-01-01

    There is increasing recognition of the need to identify specific sublethal effects of chemicals, such as reproductive toxicity, and specific modes of action of the chemicals, such as interference with the endocrine system. Achieving these aims requires criteria that provide a basis for interpreting study findings so as to separate these specific toxicities and modes of action not only from acute lethality per se but also from the severe inanition and malaise that non-specifically compromise reproductive capacity and the response of endocrine endpoints. Mammalian toxicologists have recognized that very high dose levels are sometimes required to elicit specific adverse effects, but such doses also present the potential for non-specific 'systemic toxicity'. They have therefore developed the concept of a maximum tolerated dose (MTD), beyond which a specific toxicity or action cannot be attributed to a test substance due to the compromised state of the organism. Ecotoxicologists are now confronted by a similar challenge and must develop an analogous MTD concept and the respective criteria. As examples of this conundrum, we note recent developments in efforts to validate protocols for fish reproductive toxicity and endocrine screens (e.g. some chemicals originally selected as 'negatives' elicited decreases in fecundity or changes in endpoints intended to be biomarkers for endocrine modes of action). Unless analogous criteria can be developed, the potentially confounding effects of systemic toxicity may undermine the reliable assessment of specific reproductive effects or of biomarkers such as vitellogenin or spiggin. The same issue confronts other areas of aquatic toxicology (e.g., genotoxicity) and the use of aquatic animals for preclinical assessments of drugs (e.g., use of zebrafish for drug safety assessment). We propose that there are benefits to adopting the concept of an MTD for toxicology and pharmacology studies using fish and other aquatic organisms and the

  11. Microprocessor Controlled Maximum Power Point Tracker for Photovoltaic Application

    International Nuclear Information System (INIS)

    Jiya, J. D.; Tahirou, G.

    2002-01-01

    This paper presents a microprocessor-controlled maximum power point tracker for a photovoltaic module. Input current and voltage are measured and multiplied within the microprocessor, which contains an algorithm to seek the maximum power point. The duty cycle of the DC-DC converter at which the maximum power occurs is obtained, noted and adjusted. The microprocessor constantly seeks to improve the obtained power by varying the duty cycle.

  12. Radiation produced biomaterials

    International Nuclear Information System (INIS)

    Rosiak, J.M.

    1998-01-01

    Medical advances that have prolonged the average life span have generated increased need for new materials that can be used as tissue and organ replacements, drug delivery systems and/or components of devices related to therapy and diagnosis. The first man-made plastic used as surgical implant was celluloid, applied for cranial defect repair. However, the first users applied commercial materials with no regard for their purity, biostability and post-operative interaction with the organism. Thus, these materials evoked a strong tissue reaction and were unacceptable. The first polymer which gained acceptance for man-made plastic was poly(methyl methacrylate). But the first polymer of choice, precursor of the broad class of materials known today as hydrogels, was poly(hydroxyethyl methacrylate) synthesized in the fifties by Wichterle and Lim. HEMA and its various combinations with other, both hydrophilic and hydrophobic, polymers are till now the most often used hydrogels for medical purposes. In the early fifties, the pioneers of the radiation chemistry of polymers began some experiments with radiation crosslinking, also with hydrophilic polymers. However, hydrogels were analyzed mainly from the point of view of phenomena associated with mechanism of reactions, topology of network, and relations between radiation parameters of the processes. Fundamental monographs on radiation polymer physics and chemistry written by Charlesby (1960) and Chapiro (1962) proceed from this time. The noticeable interest in application of radiation to obtain hydrogels for biomedical purposes began in the late sixties as a result of the papers and patents published by Japanese and American scientists. Among others, the team of the Takasaki Radiation Chemistry Research Establishment headed by Kaetsu as well as Hoffman and his colleagues from the Center of Bioengineering, University of Washington have created the base for spreading interest in the field of biomaterials formed by means of

  13. Efficient algorithms for maximum likelihood decoding in the surface code

    Science.gov (United States)

    Bravyi, Sergey; Suchara, Martin; Vargo, Alexander

    2014-09-01

    We describe two implementations of the optimal error correction algorithm known as the maximum likelihood decoder (MLD) for the two-dimensional surface code with noiseless syndrome extraction. First, we show how to implement MLD exactly in time O(n²), where n is the number of code qubits. Our implementation uses a reduction from MLD to simulation of matchgate quantum circuits. This reduction, however, requires a special noise model with independent bit-flip and phase-flip errors. Secondly, we show how to implement MLD approximately for more general noise models using matrix product states (MPS). Our implementation has running time O(nχ³), where χ is a parameter that controls the approximation precision. The key step of our algorithm, borrowed from the density matrix renormalization-group method, is a subroutine for contracting a tensor network on the two-dimensional grid. The subroutine uses MPS with a bond dimension χ to approximate the sequence of tensors arising in the course of the contraction. We benchmark the MPS-based decoder against the standard minimum weight matching decoder, observing a significant reduction of the logical error probability for χ ≥ 4.

  14. 24 CFR 982.634 - Homeownership option: Maximum term of homeownership assistance.

    Science.gov (United States)

    2010-04-01

    ... VOUCHER PROGRAM Special Housing Types Homeownership Option § 982.634 Homeownership option: Maximum term of... unit during the time that homeownership payments are made; or (2) Is the spouse of any member of the household who has an ownership interest in the unit during the time homeownership payments are made. (c...

  15. Merging daily sea surface temperature data from multiple satellites using a Bayesian maximum entropy method

    Science.gov (United States)

    Tang, Shaolei; Yang, Xiaofeng; Dong, Di; Li, Ziwei

    2015-12-01

    Sea surface temperature (SST) is an important variable for understanding interactions between the ocean and the atmosphere. SST fusion is crucial for acquiring SST products of high spatial resolution and coverage. This study introduces a Bayesian maximum entropy (BME) method for blending daily SSTs from multiple satellite sensors. A new spatiotemporal covariance model of an SST field is built to integrate not only single-day SSTs but also time-adjacent SSTs. In addition, AVHRR 30-year SST climatology data are introduced as soft data at the estimation points to improve the accuracy of blended results within the BME framework. The merged SSTs, with a spatial resolution of 4 km and a temporal resolution of 24 hours, are produced in the Western Pacific Ocean region to demonstrate and evaluate the proposed methodology. Comparisons with in situ drifting buoy observations show that the merged SSTs are accurate and the bias and root-mean-square errors for the comparison are 0.15°C and 0.72°C, respectively.
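The flavour of weighting sensor readings and climatological soft data by their uncertainty can be conveyed with plain inverse-variance fusion. This sketch is a strong simplification of the BME estimator, which additionally models the spatiotemporal covariance between neighbouring points; all numbers below are made up:

```python
def fuse_sst(obs, clim_mean=None, clim_var=None):
    # Inverse-variance weighted fusion of SST readings at one grid
    # cell.  `obs` is a list of (value_degC, error_variance) pairs;
    # climatology can be folded in as an extra soft-data pair.
    pairs = list(obs)
    if clim_mean is not None:
        pairs.append((clim_mean, clim_var))
    weights = [1.0 / v for _, v in pairs]
    estimate = sum(x * w for (x, _), w in zip(pairs, weights)) / sum(weights)
    return estimate, 1.0 / sum(weights)

# Two satellite SSTs plus a climatological prior for one location.
est, var = fuse_sst([(28.4, 0.36), (28.9, 0.64)], clim_mean=28.0, clim_var=1.0)
print(f"{est:.2f} degC (variance {var:.2f})")
```

Note how the fused variance is smaller than that of the best single input, which is the basic reason merging multiple sensors with climatology improves accuracy.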

  16. MEGA5: Molecular Evolutionary Genetics Analysis Using Maximum Likelihood, Evolutionary Distance, and Maximum Parsimony Methods

    Science.gov (United States)

    Tamura, Koichiro; Peterson, Daniel; Peterson, Nicholas; Stecher, Glen; Nei, Masatoshi; Kumar, Sudhir

    2011-01-01

    Comparative analysis of molecular sequence data is essential for reconstructing the evolutionary histories of species and inferring the nature and extent of selective forces shaping the evolution of genes and species. Here, we announce the release of Molecular Evolutionary Genetics Analysis version 5 (MEGA5), user-friendly software for mining online databases, building sequence alignments and phylogenetic trees, and applying methods of evolutionary bioinformatics in basic biology, biomedicine, and evolution. The newest addition in MEGA5 is a collection of maximum likelihood (ML) analyses for inferring evolutionary trees, selecting best-fit substitution models (nucleotide or amino acid), inferring ancestral states and sequences (along with probabilities), and estimating evolutionary rates site-by-site. In computer simulation analyses, the ML tree inference algorithms in MEGA5 compared favorably with other software packages in terms of computational efficiency and the accuracy of the estimates of phylogenetic trees, substitution parameters, and rate variation among sites. The MEGA user interface has been enhanced to be activity driven, making it easier to use for both beginners and experienced scientists. This version of MEGA is intended for the Windows platform, and it has been configured for effective use on Mac OS X and Linux desktops. It is available free of charge from http://www.megasoftware.net. PMID:21546353

  17. Optimization of Culture Parameters for Maximum Polyhydroxybutyrate Production by Selected Bacterial Strains Isolated from Rhizospheric Soils.

    Science.gov (United States)

    Lathwal, Priyanka; Nehra, Kiran; Singh, Manpreet; Jamdagni, Pragati; Rana, Jogender S

    2015-01-01

    The enormous applications of conventional non-biodegradable plastics have led to their increased usage and accumulation in the environment, which has become one of the major global environmental concerns of the present century. Polyhydroxybutyrate (PHB), a biodegradable plastic, is known to have properties similar to conventional plastics, and thus exhibits a potential for replacing conventional non-degradable plastics. In the present study, a total of 303 different bacterial isolates were obtained from soil samples collected from the rhizospheric area of three crops, viz., wheat, mustard and sugarcane. All the isolates were screened for PHB (poly-3-hydroxybutyric acid) production using the Sudan Black staining method, and 194 isolates were found to be PHB positive. Based upon the amount of PHB produced, the isolates were divided into three categories: high, medium and low producers. Representative isolates from each category were selected for biochemical characterization and for optimization of various culture parameters (carbon source, nitrogen source, C/N ratio, pH, temperature and incubation time) to maximize PHB accumulation. The highest PHB yield was obtained when the culture medium was supplemented with glucose as the carbon source and ammonium sulphate at a concentration of 1.0 g/l as the nitrogen source, and when the C/N ratio of the medium was maintained at 20:1. The physical growth parameters supporting maximum PHB accumulation were a pH of 7.0 and an incubation temperature of 30 °C for a period of 48 h. A few isolates exhibited high PHB accumulation under the optimized conditions, showing a potential for industrial exploitation.

  18. Maximum Power Point Tracking (MPPT) in a Wind Power Generation System Using a Buck-Boost Converter

    Directory of Open Access Journals (Sweden)

    Muhamad Otong

    2017-05-01

    Full Text Available In this paper, an implementation of the maximum power point tracking (MPPT) technique is developed using a buck-boost converter. The perturb and observe (P&O) MPPT algorithm is used to search for the maximum power from the wind power plant for charging a battery. The model used in this study is a variable speed wind turbine (VSWT) with a permanent magnet synchronous generator (PMSG). Analysis, design and modeling of the wind energy conversion system were done using MATLAB/Simulink. The simulation results show that the proposed MPPT produces a higher output power than the system without MPPT. The average efficiency achieved by the proposed system in transferring the maximum power into the battery is 90.56%.
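The P&O loop itself is simple enough to sketch generically. The power curve, step size and loop length below are assumptions for illustration, not the paper's VSWT/PMSG model:

```python
def perturb_and_observe(read_power, duty=0.5, step=0.01, iters=200):
    # Classic P&O loop: perturb the converter duty cycle, keep the
    # direction while the measured power rises, reverse it when the
    # power falls.  `read_power(duty)` stands in for a real sensor.
    direction = 1
    last_p = read_power(duty)
    for _ in range(iters):
        duty = min(max(duty + direction * step, 0.0), 1.0)
        p = read_power(duty)
        if p < last_p:        # power dropped: reverse the perturbation
            direction = -direction
        last_p = p
    return duty

# Toy turbine power curve with a single maximum at duty = 0.62.
power_curve = lambda d: 100.0 - 400.0 * (d - 0.62) ** 2
print(round(perturb_and_observe(power_curve), 2))
```

Once the loop reaches the peak it oscillates within one step of it, which is the well-known steady-state ripple of P&O trackers.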

  19. Conditions for maximum isolation of stable condensate during separation in gas-condensate systems

    Energy Technology Data Exchange (ETDEWEB)

    Trivus, N.A.; Belkina, N.A.

    1969-02-01

    A thermodynamic analysis of the gas-liquid separation process is made in order to determine the relationship between the conditions of maximum stable condensate separation and the physico-chemical nature and composition of the condensate. The analysis considered the multicomponent gas-condensate fluid produced from the Zyrya field as a ternary system composed of methane, an intermediate component (propane and butane) and a heavy residue, C6+. The compositions of 5 ternary systems were calculated for a wide variation in separator conditions. At each separator pressure there is a maximum of condensate production at a certain temperature; this occurs because the solubility of the condensate components changes with temperature. The results of all calculations are shown graphically. The graphs show the conditions of maximum stable condensate separation.

  20. Producing metallurgic coke

    Energy Technology Data Exchange (ETDEWEB)

    Abe, T.; Isida, K.; Vada, Y.

    1982-11-18

    A mixture of power-producing coals with coal briquets of varying composition is proposed for coking in horizontal chamber furnaces. The briquets are produced from petroleum coke, coal fines or semicoke, which make up less than 27 percent of the mixture to be briquetted, together with coals with a standard coking yield of volatile substances and coals with high maximal Gieseler fluidity. The ratio of these coals in the mixture is 0.6 to 2.1, or 18 to 32 percent, respectively. Non-caking or poorly caking coals are used as the power-producing coals. The hardness of the obtained coke is DJ15-30 = 90.5 to 92.7 percent.