Efficiency at Maximum Power of Interacting Molecular Machines
Golubeva, Natalia; Imparato, Alberto
2012-01-01
We investigate the efficiency of systems of molecular motors operating at maximum power. We consider two models of kinesin motors on a microtubule: for both the simplified and the detailed model, we find that the many-body exclusion effect enhances the efficiency at maximum power of the many-motor system with respect to the single-motor case. Remarkably, this effect occurs in a limited region of the system parameters, compatible with the biologically relevant range.
Efficiency of autonomous soft nanomachines at maximum power.
Seifert, Udo
2011-01-14
We consider nanosized artificial or biological machines working in steady state enforced by imposing nonequilibrium concentrations of solutes or by applying external forces, torques, or electric fields. For unicyclic and strongly coupled multicyclic machines, efficiency at maximum power is not bounded by the linear response value 1/2. For strong driving, it can even approach the thermodynamic limit 1. Quite generally, such machines fall into three different classes characterized, respectively, as "strong and efficient," "strong and inefficient," and "balanced." For weakly coupled multicyclic machines, efficiency at maximum power has lost any universality even in the linear response regime.
Size dependence of efficiency at maximum power of heat engine
Izumida, Y.
2013-10-01
We perform a molecular dynamics computer simulation of a heat engine model to study how the engine size affects its performance. Upon tactically increasing the size of the model anisotropically, we determine that there exists an optimum size at which the model attains the maximum power for the shortest working period. This optimum size lies between the ballistic heat transport region and the diffusive one. We also study the size dependence of the efficiency at maximum power. Interestingly, we find that the efficiency at maximum power around the optimum size attains a value that has been proposed as a universal upper bound, and it even begins to exceed the bound as the size further increases. We explain this behavior of the efficiency at maximum power by using a linear response theory for a heat engine operating under a finite working period, which naturally extends the low-dissipation Carnot cycle model [M. Esposito, R. Kawai, K. Lindenberg, C. Van den Broeck, Phys. Rev. Lett. 105, 150603 (2010)]. The theory also shows that the efficiency at maximum power under an extreme condition may in principle reach the Carnot efficiency.
Efficiency at maximum power of thermally coupled heat engines.
Apertet, Y; Ouerdane, H; Goupil, C; Lecoeur, Ph
2012-04-01
We study the efficiency at maximum power of two coupled heat engines, using thermoelectric generators (TEGs) as engines. Assuming that the heat and electric charge fluxes in the TEGs are strongly coupled, we simulate numerically the dependence of the behavior of the global system on the electrical load resistance of each generator in order to obtain the working condition that permits maximization of the output power. It turns out that this condition is not unique. We derive a simple analytic expression giving the relation between the electrical load resistance of each generator permitting output power maximization. We then focus on the efficiency at maximum power (EMP) of the whole system to demonstrate that the Curzon-Ahlborn efficiency may not always be recovered: The EMP varies with the specific working conditions of each generator but remains in the range predicted by irreversible thermodynamics theory. We discuss our results in light of nonideal Carnot engine behavior.
Efficiency at Maximum Power of Low-Dissipation Carnot Engines
Esposito, Massimiliano; Kawai, Ryoichi; Lindenberg, Katja; van den Broeck, Christian
2010-10-01
We study the efficiency at maximum power, η*, of engines performing finite-time Carnot cycles between a hot and a cold reservoir at temperatures Th and Tc, respectively. For engines reaching Carnot efficiency ηC = 1 - Tc/Th in the reversible limit (long cycle time, zero dissipation), we find in the limit of low dissipation that η* is bounded from above by ηC/(2-ηC) and from below by ηC/2. These bounds are reached when the ratio of the dissipation during the cold and hot isothermal phases tends, respectively, to zero or infinity. For symmetric dissipation (ratio one) the Curzon-Ahlborn efficiency ηCA = 1 - √(Tc/Th) is recovered.
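The bounds quoted in this abstract can be checked numerically. The sketch below is a minimal illustration, assuming one common convention for the low-dissipation model (Qh = Th(ΔS − Σh/th), Qc = Tc(ΔS + Σc/tc), with ΔS = 1); the temperatures, dissipation coefficients, and search ranges are illustrative choices, not taken from the paper.

```python
import math

def emp_low_dissipation(Th, Tc, sig_h, sig_c, t_range=(2.0, 40.0)):
    """Efficiency at maximum power of a low-dissipation Carnot cycle.

    Assumed convention (unit entropy change per cycle, DeltaS = 1):
        Qh = Th * (1 - sig_h / th),   Qc = Tc * (1 + sig_c / tc),
    with work W = Qh - Qc and power P = W / (th + tc).
    """
    def power_qh(th, tc):
        qh = Th * (1.0 - sig_h / th)
        qc = Tc * (1.0 + sig_c / tc)
        return (qh - qc) / (th + tc), qh

    # coarse grid search over the two branch times, then local refinement
    best_p, best_th, best_tc = -math.inf, None, None
    lo, hi = t_range
    n = int((hi - lo) / 0.1)
    for i in range(n + 1):
        for j in range(n + 1):
            th, tc = lo + 0.1 * i, lo + 0.1 * j
            p, _ = power_qh(th, tc)
            if p > best_p:
                best_p, best_th, best_tc = p, th, tc
    th0, tc0 = best_th, best_tc
    for i in range(-100, 101):
        for j in range(-100, 101):
            th, tc = th0 + 0.002 * i, tc0 + 0.002 * j
            p, _ = power_qh(th, tc)
            if p > best_p:
                best_p, best_th, best_tc = p, th, tc
    p, qh = power_qh(best_th, best_tc)
    return p * (best_th + best_tc) / qh   # eta* = W / Qh

Th, Tc = 500.0, 300.0
eta_c = 1.0 - Tc / Th
eta_ca = 1.0 - math.sqrt(Tc / Th)
eta_star = emp_low_dissipation(Th, Tc, sig_h=1.0, sig_c=1.0)
print(eta_star, eta_ca)   # symmetric dissipation: eta* close to eta_CA
assert eta_c / 2 <= eta_star <= eta_c / (2 - eta_c)
```

Pushing sig_c/sig_h toward 0 or infinity moves the numerical η* toward the upper and lower bounds, respectively.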
Efficient maximum likelihood parameterization of continuous-time Markov processes
McGibbon, Robert T
2015-01-01
Continuous-time Markov processes over finite state spaces are widely used to model dynamical processes in many fields of natural and social science. Here, we introduce a maximum likelihood estimator for constructing such models from data observed at a finite time interval. This estimator is drastically more efficient than prior approaches, enables the calculation of deterministic confidence intervals in all model parameters, and can easily enforce important physical constraints on the models such as detailed balance. We demonstrate and discuss the advantages of these models over existing discrete-time Markov models for the analysis of molecular dynamics simulations.
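The estimator itself is not reproduced in the abstract. As an illustration of the underlying problem (recovering continuous-time rates from data observed at a finite interval), the sketch below inverts the closed-form transition probabilities of a hypothetical two-state process; the rates, interval, and sample size are invented for the example and this is not the paper's estimator.

```python
import math
import random

def simulate_observations(q12, q21, tau, n, seed=0):
    """Observe a two-state CTMC at interval tau; return the state sequence."""
    rng = random.Random(seed)
    s = q12 + q21
    decay = math.exp(-s * tau)
    # exact transition probabilities of the chain sampled at interval tau
    p12 = (q12 / s) * (1.0 - decay)
    p21 = (q21 / s) * (1.0 - decay)
    state, seq = 0, [0]
    for _ in range(n):
        if state == 0:
            state = 1 if rng.random() < p12 else 0
        else:
            state = 0 if rng.random() < p21 else 1
        seq.append(state)
    return seq

def estimate_rates(seq, tau):
    """Invert the two-state transition probabilities to recover the rates."""
    n0 = n01 = n1 = n10 = 0
    for a, b in zip(seq, seq[1:]):
        if a == 0:
            n0 += 1
            if b == 1:
                n01 += 1
        else:
            n1 += 1
            if b == 0:
                n10 += 1
    p12, p21 = n01 / n0, n10 / n1
    s = -math.log(1.0 - p12 - p21) / tau      # total rate q12 + q21
    scale = s / (1.0 - math.exp(-s * tau))
    return p12 * scale, p21 * scale

seq = simulate_observations(q12=1.0, q21=2.0, tau=0.1, n=200000)
q12_hat, q21_hat = estimate_rates(seq, tau=0.1)
print(q12_hat, q21_hat)   # close to the true rates (1.0, 2.0)
```

For more than two states this inversion becomes a matrix-logarithm problem, which is where constrained maximum likelihood approaches such as the one in the paper come in.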
Efficiency at maximum power of a discrete feedback ratchet
Jarillo, Javier; Tangarife, Tomás; Cao, Francisco J.
2016-01-01
Efficiency at maximum power is found to be of the same order for a feedback ratchet and for its open-loop counterpart. However, feedback increases the output power by up to a factor of five. This increase in output power is due to the increase in energy input and the effective entropy reduction obtained as a consequence of feedback. Optimal efficiency at maximum power is reached for time intervals between feedback actions two orders of magnitude smaller than the characteristic time of diffusion over a ratchet period length. The efficiency is computed consistently, taking into account the correlation between the control actions. We consider a feedback control protocol for a discrete feedback flashing ratchet, which works against an external load. We maximize the power output by optimizing the parameters of the ratchet, the controller, and the external load. The maximum power output is found to be upper bounded, so the attainable extracted power is limited. Afterwards, we compute an upper bound for the efficiency of this isothermal feedback ratchet at maximum power output. We make this computation by applying recent developments in the thermodynamics of feedback-controlled systems, which give an equation to compute the entropy reduction due to information. However, this equation requires the computation of the probability of each of the possible sequences of the controller's actions. This computation becomes involved when the sequence of the controller's actions is non-Markovian, as is the case in most feedback ratchets. We here introduce an alternative procedure to set strong bounds on the entropy reduction in order to compute its value. In this procedure the bounds are evaluated in a quasi-Markovian limit, which emerges when there are large differences between the stationary probabilities of the system states. These large differences are an effect of the potential strength, which minimizes the departures from the Markovianity of the sequence of control actions, allowing also to
AN EFFICIENT APPROXIMATE MAXIMUM LIKELIHOOD SIGNAL DETECTION FOR MIMO SYSTEMS
Cao Xuehong
2007-01-01
This paper proposes an efficient approximate Maximum Likelihood (ML) detection method for Multiple-Input Multiple-Output (MIMO) systems, which searches a local area instead of performing an exhaustive search and selects valid search points in each transmit antenna's signal constellation instead of the whole hyperplane. Both the selection and the search complexity can be reduced significantly. The method trades off computational complexity against system performance by adjusting the neighborhood size used to select the valid search points. Simulation results show that the performance is comparable to that of ML detection while the complexity is only a small fraction of it.
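As a hedged illustration of the neighborhood-limited search idea described above (not the paper's exact algorithm), the sketch below detects a 2x2 real-valued 4-PAM system by keeping only the k constellation points nearest the zero-forcing estimate on each antenna before the metric search; the channel matrix and noise level are invented.

```python
import itertools
import random

# hypothetical 4-PAM constellation per (real-valued) transmit antenna
CONST = [-3.0, -1.0, 1.0, 3.0]

def detect(H, y, k):
    """Approximate ML: per antenna, keep the k constellation points
    nearest to the zero-forcing estimate, then exhaust the combinations."""
    n = len(H)
    # zero-forcing estimate for a 2x2 system via direct matrix inversion
    det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
    x_zf = [( H[1][1] * y[0] - H[0][1] * y[1]) / det,
            (-H[1][0] * y[0] + H[0][0] * y[1]) / det]
    cands = [sorted(CONST, key=lambda s, z=z: abs(s - z))[:k] for z in x_zf]
    best, best_x = float("inf"), None
    for x in itertools.product(*cands):
        r = [y[i] - sum(H[i][j] * x[j] for j in range(n)) for i in range(n)]
        m = sum(ri * ri for ri in r)
        if m < best:
            best, best_x = m, x
    return best_x

rng = random.Random(1)
H = [[1.0, 0.3], [0.2, 0.9]]
x_true = (1.0, -3.0)
y = [sum(H[i][j] * x_true[j] for j in range(2)) + 0.05 * rng.gauss(0, 1)
     for i in range(2)]
print(detect(H, y, k=2))   # searches 4 candidate vectors instead of 16
print(detect(H, y, k=4))   # k = 4 degenerates to the full ML search
```

Shrinking k cuts the candidate set from |CONST|^n to k^n, which is the complexity-performance trade-off the paper tunes.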
On a robust and efficient maximum depth estimator
ZUO YiJun; LAI ShaoYong
2009-01-01
The best breakdown-point robustness is one of the most outstanding features of the univariate median. For this robustness, however, the median has to pay the price of low efficiency at normal and other light-tailed models. Affine equivariant multivariate analogues of the univariate median with high breakdown points were constructed in the past two decades. For their high breakdown robustness, most of them nevertheless also sacrifice efficiency at normal and other models. The affine equivariant maximum depth estimator proposed and studied in this paper turns out to be an exception. Like the univariate median, it possesses the highest breakdown point among all its multivariate competitors. Unlike the univariate median, it is also highly efficient relative to the sample mean at normal and various other distributions, overcoming the vital low-efficiency shortcoming of the univariate and other multivariate generalized medians. The paper also studies the asymptotics of the estimator and establishes its limit distribution without symmetry and other strong assumptions that are typically imposed on the underlying distribution.
Efficiency at maximum power of a chemical engine.
Hooyberghs, Hans; Cleuren, Bart; Salazar, Alberto; Indekeu, Joseph O; Van den Broeck, Christian
2013-10-01
A cyclically operating chemical engine is considered that converts chemical energy into mechanical work. The working fluid is a gas of finite-sized spherical particles interacting through elastic hard collisions. For a generic transport law for particle uptake and release, the efficiency at maximum power η_mp takes the form 1/2 + cΔμ + O(Δμ²), with 1/2 a universal constant and Δμ the chemical potential difference between the particle reservoirs. The linear coefficient c is zero for engines featuring a so-called left/right symmetry or particle fluxes that are antisymmetric in the applied chemical potential difference. Remarkably, the leading constant in η_mp is non-universal with respect to an exceptional modification of the transport law. For a nonlinear transport model, we obtain η_mp = 1/(θ + 1), with θ > 0 the power of Δμ in the transport equation.
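The closing formula is simple enough to tabulate directly; θ = 1 (linear transport) recovers the universal leading value 1/2:

```python
def emp_chemical(theta):
    """eta_mp = 1 / (theta + 1), with theta > 0 the power of the chemical
    potential difference Delta-mu in the nonlinear transport law."""
    return 1.0 / (theta + 1.0)

print(emp_chemical(1))   # linear transport: the universal value 0.5
print(emp_chemical(3))   # stronger nonlinearity lowers eta_mp: 0.25
```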
Maximum efficiency of low-dissipation heat engines at arbitrary power
Holubec, Viktor; Ryabov, Artem
2016-07-01
We investigate maximum efficiency at a given power for low-dissipation heat engines. Close to maximum power, the maximum gain in efficiency scales as the square root of the relative loss in power, and this scaling is universal for a broad class of systems. For low-dissipation engines, we calculate the maximum gain in efficiency for an arbitrary fixed power. We show that engines working close to maximum power can operate at considerably larger efficiency compared to the efficiency at maximum power. Furthermore, we introduce universal bounds on maximum efficiency at a given power for low-dissipation heat engines. These bounds represent a direct generalization of the bounds on efficiency at maximum power obtained by Esposito et al (2010 Phys. Rev. Lett. 105 150603). We derive the bounds analytically in the regime close to maximum power and for small power values. For the intermediate regime we present strong numerical evidence for the validity of the bounds.
Maximum herd efficiency in meat production I. Optima for slaughter ...
changes in product value are important, it is easy to join them to herd cost efficiency for ... should be evaluated in terms of total herd or life cycle efficiency, and not only for a ... The decline of herd efficiency with increases in b in Table 2 is in ...
Chen, Jincan; Yan, Zijun; Wu, Liqing
1996-06-01
Considering a thermoelectric generator as a heat engine cycle, the general differential equations of the temperature field inside thermoelectric elements are established by means of nonequilibrium thermodynamics. These equations are used to study the influence of heat leak, Joule's heat, and Thomson heat on the performance of the thermoelectric generator. New expressions are derived for the power output and the efficiency of the thermoelectric generator. The maximum power output is calculated and the optimal matching condition of load is determined. The maximum efficiency is discussed by a representative numerical example. The aim of this research is to provide some novel conclusions and redress some errors existing in a related investigation.
Maximum herd efficiency in meat production II. The influence of ...
efficiency involves reproduction and replacement rates, early fertility, and degree of fertility at first mating. ... For cattle and sheep, an estimate of the effect of early breeding ... Genetic correlations among sex-limited traits in beef cattle. Anim. ...
Ouerdane, H.; Apertet, Y.; Goupil, C.; Lecoeur, Ph.
2015-07-01
Classical equilibrium thermodynamics is a theory of principles, which was built from empirical knowledge and debates on the nature and the use of heat as a means to produce motive power. By the beginning of the 20th century, the principles of thermodynamics were summarized into the so-called four laws, which were, as it turns out, definitive negative answers to the doomed quests for perpetual motion machines. As a matter of fact, one result of Sadi Carnot's work was precisely that the heat-to-work conversion process is fundamentally limited; as such, it is considered a first version of the second law of thermodynamics. Although it was derived from Carnot's unrealistic model, the upper bound on the thermodynamic conversion efficiency, known as the Carnot efficiency, became a paradigm as the next target after the failure of the perpetual motion ideal. In the 1950s, Jacques Yvon published a conference paper containing the necessary ingredients for a new class of models, and even a formula, not so different from that of Carnot's efficiency, which later would become the new efficiency reference. Yvon's first analysis of a model of an engine producing power, connected to a heat source and a sink through heat exchangers, went fairly unnoticed for twenty years, until Frank Curzon and Boye Ahlborn published their pedagogical paper about the effect of finite heat transfer on output power limitation and their derivation of the efficiency at maximum power, now mostly known as the Curzon-Ahlborn (CA) efficiency. The notion of finite rate explicitly introduced time into thermodynamics, and its significance cannot be overlooked, as shown by the wealth of work devoted to what is now known as finite-time thermodynamics since the end of the 1970s. The favorable comparison of the CA efficiency to actual values led many to consider it a universal upper bound for real heat engines, but things are not so straightforward that a simple formula may account for a variety of situations. The
On the maximum efficiency of realistic heat engines
Miranda, E N
2012-01-01
In 1975, Curzon and Ahlborn studied a Carnot engine with thermal losses and obtained an expression for its efficiency that describes the performance of actual heat machines better than the traditional result due to Carnot. In their original derivation, time appears explicitly, and this is disappointing in the framework of classical thermodynamics. In this note a derivation is given without any explicit reference to time.
Study on maximum efficiency control strategy for induction motor
Anonymous
2007-01-01
Two new techniques for efficiency-optimization control (EOC) of induction motor drives were proposed. The first method combined a loss model with the golden-section technique and was faster than the available methods. Second, the low-frequency torque ripple due to the decrease of rotor flux was compensated in a feedforward manner. If the load torque or speed command changed, the efficiency-search algorithm would be abandoned and the rated flux would be established to obtain the best transient response. The close agreement between the simulation and the experimental results confirmed the validity and usefulness of the proposed techniques.
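The abstract names the golden-section technique as the search used in the efficiency optimization. A generic sketch of that search follows; the loss-versus-flux curve is hypothetical (the paper's loss model is not given in the abstract).

```python
import math

def golden_section_min(f, a, b, tol=1e-8):
    """Golden-section search for the minimizer of a unimodal f on [a, b]."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0   # 1/phi, about 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            # minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            # minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return 0.5 * (a + b)

# hypothetical total-loss curve vs. normalized flux x:
# copper-like loss ~ 1/x^2 plus iron-like loss ~ x^2, minimized at x = 1
loss = lambda x: 1.0 / x**2 + x**2
x_opt = golden_section_min(loss, 0.1, 3.0)
print(x_opt)   # minimizer near x = 1
```

Each iteration shrinks the bracket by a constant factor while reusing one interior point, which is why the abstract can claim a faster search than naive scanning.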
Haseli, Y
2016-05-01
The objective of this study is to investigate the thermal efficiency and power production of typical models of endoreversible heat engines in the regime of minimum entropy generation rate. The study considers the Curzon-Ahlborn engine, Novikov's engine, and the Carnot vapor cycle. The operational regimes at maximum thermal efficiency, maximum power output, and minimum entropy production rate are compared for each of these engines. The results reveal that in an endoreversible heat engine, a reduction in entropy production corresponds to an increase in thermal efficiency. The three criteria of minimum entropy production, maximum thermal efficiency, and maximum power may become equivalent under the condition of fixed heat input.
Efficiency at maximum power and efficiency fluctuations in a linear Brownian heat-engine model
Park, Jong-Min; Chun, Hyun-Myung; Noh, Jae Dong
2016-07-01
We investigate the stochastic thermodynamics of a two-particle Langevin system. Each particle is in contact with a heat bath, at different temperatures T1 and T2, respectively, and the coupled system operates as an autonomous heat engine performing work against the external driving force. Linearity of the system enables us to examine the thermodynamic properties of the engine analytically. We find that the efficiency of the engine at maximum power η_MP is given by η_MP = 1 − √(T2/T1). This universal form has been known as a characteristic of endoreversible heat engines. Our result extends the universal behavior of η_MP to nonendoreversible engines. We also obtain the large deviation function of the probability distribution for the stochastic efficiency in the overdamped limit. The large deviation function takes its minimum value at the macroscopic efficiency η = η̄ and increases monotonically until it reaches plateaus when η ≤ η_L and η ≥ η_R, with model-dependent parameters η_L and η_R.
Larsen, Ulrik; Pierobon, Leonardo; Wronski, Jorrit;
2014-01-01
to power. In this study we propose four linear regression models to predict the maximum obtainable thermal efficiency for simple and recuperated ORCs. A previously derived methodology is able to determine the maximum thermal efficiency among many combinations of fluids and processes, given the boundary...
Quan, H T
2014-06-01
We study the maximum efficiency of a heat engine based on a small system. It is revealed that, due to the finiteness of the system, irreversibility may arise when the working substance comes into contact with a heat reservoir. As a result, there is a working-substance-dependent correction to the Carnot efficiency. We derive a general and simple expression for the maximum efficiency of a Carnot-cycle heat engine in terms of the relative entropy. This maximum efficiency approaches the Carnot efficiency asymptotically as the size of the working substance increases to the thermodynamic limit. Our study extends Carnot's result on maximum efficiency to an arbitrary working substance and elucidates the subtlety of thermodynamic laws in small systems.
Recent advance on the efficiency at maximum power of heat engines
Tu Zhan-Chun
2012-01-01
This review reports several key advances in the theoretical investigation of efficiency at maximum power of heat engines over the past five years. The analytical results for the efficiency at maximum power of the Curzon-Ahlborn heat engine, the stochastic heat engine constructed from a Brownian particle, and Feynman's ratchet as a heat engine are presented. It is found that: the efficiency at maximum power exhibits universal behavior at small relative temperature differences; lower and upper bounds might exist under quite general conditions; and the problem of efficiency at maximum power comes down to seeking the minimum irreversible entropy production in each finite-time isothermal process for a given time.
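The universal small-ηC behavior the review refers to is the shared expansion ηC/2 + ηC²/8. A quick check against two of the listed results, the Curzon-Ahlborn efficiency and the stochastic Brownian-particle engine value ηC/(2 − ηC/2) (the Schmiedl-Seifert form, assumed here):

```python
import math

def eta_ca(eta_c):       # Curzon-Ahlborn result, 1 - sqrt(1 - eta_C)
    return 1.0 - math.sqrt(1.0 - eta_c)

def eta_ss(eta_c):       # stochastic (Brownian-particle) engine result
    return eta_c / (2.0 - eta_c / 2.0)

def universal(eta_c):    # shared expansion eta_C/2 + eta_C^2/8
    return eta_c / 2.0 + eta_c**2 / 8.0

# both results collapse onto the universal expansion as eta_C -> 0
for ec in (0.01, 0.05, 0.1):
    print(ec, eta_ca(ec), eta_ss(ec), universal(ec))
```

The two exact formulas differ only at third order in ηC, which is precisely the universality at small relative temperature differences mentioned above.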
Abhijit Sinha
2014-01-01
A comparative analysis of thermodynamic efficiency under maximum power and maximum power density conditions has been performed for a solar-driven Carnot heat engine with internal irreversibility. In this analysis, the heat transfer from the hot reservoir is taken to be in the radiation mode and the heat transfer to the cold reservoir in the convection mode. The thermodynamic efficiency, power, and power density functions have been derived, and the power functions have been maximized for various design parameters. From the optimum conditions, the thermal efficiencies at maximum power and maximum power density have been obtained. The effects of internal irreversibility, extreme temperature ratios, and specific engine size in the area ratio between the hot and cold reservoirs on the thermodynamic efficiencies have been investigated for both conditions. The efficiencies have been compared with the Curzon-Ahlborn and Carnot efficiencies, respectively. The analysis showed that the efficiency at maximum power output is greater than the efficiency at maximum power density, and the efficiencies can be greater than the Curzon-Ahlborn efficiency only for low values of the design parameters.
Efficiency at maximum power for an Otto engine with ideal feedback
Wang, Honghui; He, Jizhou; Wang, Jianhui; Wu, Zhaoqi
2016-10-01
We propose an Otto heat engine that undergoes processes involving a special class of feedback and analyze theoretically its response. We use stochastic thermodynamics to determine the performance characteristics of the heat engine and indicate the possibility that its maximum efficiency can surpass the Carnot value. The analytical expression for efficiency at maximum power, including the effects resulting from feedback, reduces to that previously derived based on an engine without feedback.
The maximum efficiency of nano heat engines depends on more than temperature
Woods, Mischa; Ng, Nelly; Wehner, Stephanie
Sadi Carnot's theorem regarding the maximum efficiency of heat engines is considered to be of fundamental importance in the theory of heat engines and thermodynamics. Here, we show that at the nano and quantum scale, this law needs to be revised in the sense that more information about the bath than its temperature is required to decide whether maximum efficiency can be achieved. In particular, we derive new fundamental limitations on the efficiency of heat engines at the nano and quantum scale, showing that the Carnot efficiency can only be achieved under special circumstances, and we derive a new maximum efficiency for the others. A preprint can be found at arXiv:1506.02322 [quant-ph].
Systematic measurement of maximum efficiencies and detuning lengths at the JAERI free-electron laser
Nishimori, N; Nagai, R; Minehara, E J
2002-01-01
We made a systematic measurement of efficiency detuning curves at several gain and loss parameters. The absolute detuning length δL of the optical cavity was measured to within an accuracy of 0.1 μm around the maximum efficiency by a pulse-stacking method using an external laser. The FEL gain was controlled by the undulator gap instead of the bunch charge, because we can change the gain rapidly while maintaining constant electron-bunch conditions. For the high-gain and low-loss regions, the maximum efficiency is obtained at δL = 0 μm and is larger than the value derived from the theoretical scaling law in the superradiant regime, while for the low-gain region the maximum efficiency is obtained for δL shorter than 0 μm and is similar to the scaling law.
Aab, A.; Abreu, P.; Aglietta, M.; Ahn, E. J.; Al Samarai, I.; Albuquerque, I. F. M.; Allekotte, I.; Allen, J.; Allison, P.; Almela, A.; Alvarez Castillo, J.; Alvarez-Muniz, J.; Batista, R. Alves; Ambrosio, M.; Aminaei, A.; Anchordoqui, L.; Andringa, S.; Aramo, C.; Aranda, V. M.; Arqueros, F.; Asorey, H.; Assis, P.; Aublin, J.; Ave, M.; Avenier, M.; Avila, G.; Awal, N.; Badescu, A. M.; Barber, K. B.; Baeuml, J.; Baus, C.; Beatty, J. J.; Becker, K. H.; Bellido, J. A.; Berat, C.; Bertania, M. E.; Bertou, X.; Biermann, P. L.; Billoir, P.; Blaess, S.; Blanco, M.; Bleve, C.; Bluemer, H.; Bohacova, M.; Boncioli, D.; Bonifazi, C.; Bonino, R.; Borodai, N.; Brack, J.; Brancus, I.; Bridgeman, A.; Brogueira, P.; Brown, W. C.; Buchholz, P.; Bueno, A.; Buitink, S.; Buscemi, M.; Caballero-Mora, K. S.; Caccianiga, B.; Caccianiga, L.; Candusso, M.; Caramete, L.; Caruso, R.; Castellina, A.; Cataldi, G.; Cazon, L.; Cester, R.; Chavez, A. G.; Chiavassa, A.; Chinellato, J. A.; Chudoba, J.; Cilmo, M.; Clay, R. W.; Cocciolo, G.; Colalillo, R.; Coleman, A.; Collica, L.; Coluccia, M. R.; Conceicao, R.; Contreras, F.; Cooper, M. J.; Cordier, A.; Coutu, S.; Covault, C. E.; Cronin, J.; Curutiu, A.; Dallier, R.; Daniel, B.; Dasso, S.; Daumiller, K.; Dawson, B. R.; de Almeida, R. M.; De Domenico, M.; de Jong, S. J.; de Mello Neto, J. R. T.; De Mitri, I.; de Oliveira, J.; de Souza, V.; del Peral, L.; Deligny, O.; Dembinski, H.; Dhital, N.; Di Giulio, C.; Di Matteo, A.; Diaz, J. C.; Diaz Castro, M. L.; Diogo, F.; Dobrigkeit, C.; Docters, W.; D'Olivo, J. C.; Dorofeev, A.; Hasankiadeh, Q. Dorosti; Dova, M. T.; Ebr, J.; Engel, R.; Erdmann, M.; Erfani, M.; Escobar, C. O.; Espadanal, J.; Etchegoyen, A.; Luis, P. Facal San; Falcke, H.; Fang, K.; Farrar, G.; Fauth, A. C.; Fazzini, N.; Ferguson, A. P.; Fernandes, M.; Fick, B.; Figueira, J. M.; Filevich, A.; Filipcic, A.; Fox, B. D.; Fratu, O.; Froehlich, U.; Fuchs, B.; Fuji, T.; Gaior, R.; Garcia, B.; Garcia Roca, S. 
Aab, A.; Abreu, P.; Aglietta, M.; Ahn, E. J.; Al Samarai, I.; Albuquerque, I. F. M.; Allekotte, I.; Allen, J.; Allison, P.; Almela, A.; Alvarez Castillo, J.; Alvarez-Muniz, J.; Batista, R. Alves; Ambrosio, M.; Aminaei, A.; Anchordoqui, L.; Andringa, S.; Aramo, C.; Aranda, V. M.; Arqueros, F.; Asorey, H.; Assis, P.; Aublin, J.; Ave, M.; Avenier, M.; Avila, G.; Awal, N.; Badescu, A. M.; Barber, K. B.; Baeuml, J.; Baus, C.; Beatty, J. J.; Becker, K. H.; Bellido, J. A.; Berat, C.; Bertania, M. E.; Bertou, X.; Biermann, P. L.; Billoir, P.; Blaess, S.; Blanco, M.; Bleve, C.; Bluemer, H.; Bohacova, M.; Boncioli, D.; Bonifazi, C.; Bonino, R.; Borodai, N.; Brack, J.; Brancus, I.; Bridgeman, A.; Brogueira, P.; Brown, W. C.; Buchholz, P.; Bueno, A.; Buitink, S.; Buscemi, M.; Caballero-Mora, K. S.; Caccianiga, B.; Caccianiga, L.; Candusso, M.; Caramete, L.; Caruso, R.; Castellina, A.; Cataldi, G.; Cazon, L.; Cester, R.; Chavez, A. G.; Chiavassa, A.; Chinellato, J. A.; Chudoba, J.; Cilmo, M.; Clay, R. W.; Cocciolo, G.; Colalillo, R.; Coleman, A.; Collica, L.; Coluccia, M. R.; Conceicao, R.; Contreras, F.; Cooper, M. J.; Cordier, A.; Coutu, S.; Covault, C. E.; Cronin, J.; Curutiu, A.; Dallier, R.; Daniel, B.; Dasso, S.; Daumiller, K.; Dawson, B. R.; de Almeida, R. M.; De Domenico, M.; de Jong, S. J.; de Mello Neto, J. R. T.; De Mitri, I.; de Oliveira, J.; de Souza, V.; del Peral, L.; Deligny, O.; Dembinski, H.; Dhital, N.; Di Giulio, C.; Di Matteo, A.; Diaz, J. C.; Diaz Castro, M. L.; Diogo, F.; Dobrigkeit, C.; Docters, W.; D'Olivo, J. C.; Dorofeev, A.; Hasankiadeh, Q. Dorosti; Dova, M. T.; Ebr, J.; Engel, R.; Erdmann, M.; Erfani, M.; Escobar, C. O.; Espadanal, J.; Etchegoyen, A.; Luis, P. Facal San; Falcke, H.; Fang, K.; Farrar, G.; Fauth, A. C.; Fazzini, N.; Ferguson, A. P.; Fernandes, M.; Fick, B.; Figueira, J. M.; Filevich, A.; Filipcic, A.; Fox, B. D.; Fratu, O.; Froehlich, U.; Fuchs, B.; Fuji, T.; Gaior, R.; Garcia, B.; Garcia Roca, S. 
T.; Garcia-Gamez, D.; Garcia-Pinto, D.; Garilli, G.; Gascon Bravo, A.; Gate, F.; Gemmeke, H.; Ghia, P. L.; Giaccari, U.; Giammarchi, M.; Giller, M.; Glaser, C.; Glass, H.; Gomez Berisso, M.; Gomez Vitale, P. F.; Goncalves, P.; Gonzalez, J. G.; Gonzalez, N.; Gookin, B.; Gordon, J.; Gorgi, A.; Gorham, P.; Gouffon, P.; Grebe, S.; Griffith, N.; Grillo, A. F.; Grubb, T. D.; Guarino, F.; Guedes, G. P.; Hampel, M. R.; Hansen, P.; Harari, D.; Harrison, T. A.; Hartmann, S.; Harton, J. L.; Haungs, A.; Hebbeker, T.; Heck, D.; Heimann, P.; Herve, A. E.; Hill, G. C.; Hojvat, C.; Hollon, N.; Holt, E.; Homola, P.; Hoerandel, J. R.; Horvath, P.; Hrabovsky, M.; Huber, D.; Huege, T.; Insolia, A.; Isar, P. G.; Jandt, I.; Jansen, S.; Jarne, C.; Josebachuili, M.; Kaeaepae, A.; Kambeitz, O.; Kampert, K. H.; Kasper, P.; Katkov, I.; Kegl, B.; Keilhauer, B.; Keivani, A.; Kemp, E.; Kieckhafer, R. M.; Klages, H. O.; Kleifges, M.; Kleinfeller, J.; Krause, R.; Krohm, N.; Kroemer, O.; Kruppke-Hansen, D.; Kuempel, D.; Kunka, N.; LaHurd, D.; Latronico, L.; Lauer, R.; Lauscher, M.; Lautridou, P.; Le Coz, S.; Leao, M. S. A. B.; Lebrun, D.; Lebrun, P.; Leigui de Oliveira, M. A.; Letessier-Selvon, A.; Lhenry-Yvon, I.; Link, K.; Lopez, R.; Lopez Agueera, A.; Louedec, K.; Lozano Bahilo, J.; Lu, L.; Lucero, A.; Ludwig, M.; Malacari, M.; Maldera, S.; Mallamaci, M.; Maller, J.; Mandat, D.; Mantsch, P.; Mariazzi, A. G.; Marin, V.; Maris, I. C.; Marsella, G.; Martello, D.; Martin, L.; Martinez, H.; Martinez Bravo, O.; Martraire, D.; Masias Meza, J. J.; Mathes, H. J.; Mathys, S.; Matthews, J.; Matthews, J. A. J.; Matthiae, G.; Maurel, D.; Maurizio, D.; Mayotte, E.; Mazur, P. O.; Medina, C.; Medina-Tanco, G.; Meissner, R.; Melissas, M.; Melo, D.; Menshikov, A.; Messina, S.; Meyhandan, R.; Micanovic, S.; Micheletti, M. I.; Middendorf, L.; Minaya, I. A.; Miramonti, L.; Mitrica, B.; Molina-Bueno, L.; Mollerach, S.; Monasor, M.; Ragaigne, D. Monnier; Montanet, F.; Morello, C.; Mostafa, M.; Moura, C. 
A.; Muller, M. A.; Mueller, G.; Mueller, S.; Muenchmeyer, M.; Mussa, R.; Navarra, G.; Navas, S.; Necesal, P.; Nellen, L.; Nelles, A.; Neuser, J.; Nguyen, P.; Niechciol, M.; Niemietz, L.; Niggemann, T.; Nitz, D.; Nosek, D.; Novotny, V.; Nozka, L.; Ochilo, L.; Olinto, A.; Oliveira, M.; Pacheco, N.; Pakk Selmi-Dei, D.; Palatka, M.; Pallotta, J.; Palmieri, N.; Papenbreer, P.; Parente, G.; Parra, A.; Paul, T.; Pech, M.; Pekala, J.; Pelayo, R.; Pepe, I. M.; Perrone, L.; Petermann, E.; Peters, C.; Petrera, S.; Petrov, Y.; Phuntsok, J.; Piegaia, R.; Pierog, T.; Pieroni, P.; Pimenta, M.; Pirronello, V.; Platino, M.; Plum, M.; Porcelli, A.; Porowski, C.; Prado, R. R.; Privitera, P.; Prouza, M.; Purrello, V.; Quel, E. J.; Querchfeld, S.; Quinn, S.; Rautenberg, J.; Ravel, O.; Ravignani, D.; Revenu, B.; Ridky, J.; Riggi, S.; Risse, M.; Ristori, P.; Rizi, V.; Rodrigues de Carvalho, W.; Rodriguez Cabo, I.; Rodriguez Fernandez, G.; Rodriguez Rojo, J.; Rodriguez-Frias, M. D.; Rogozin, D.; Ros, G.; Rosado, J.; Rossler, T.; Roth, M.; Roulet, E.; Rovero, A. C.; Saffi, S. J.; Saftoiu, A.; Salamida, F.; Salazar, H.; Saleh, A.; Greus, F. Salesa; Salina, G.; Sanchez, F.; Sanchez-Lucas, P.; Santo, C. E.; Santos, E.; Santos, E. M.; Sarazin, F.; Sarkar, B.; Sarmento, R.; Sato, R.; Scharf, N.; Scherini, V.; Schieler, H.; Schiffer, P.; Schmidt, D.; Scholten, O.; Schoorlemmer, H.; Schovanek, P.; Schulz, A.; Schulz, J.; Schumacher, J.; Sciutto, S. J.; Segreto, A.; Settimo, M.; Shadkam, A.; Shellard, R. C.; Sidelnik, I.; Sigl, G.; Sima, O.; Smialkowski, A.; Smida, R.; Snow, G. R.; Sommers, P.; Sorokin, J.; Squartini, R.; Srivastava, Y. N.; Stanic, S.; Stapleton, J.; Stasielak, J.; Stephan, M.; Stutz, A.; Suarez, F.; Suomijaervi, T.; Supanitsky, A. D.; Sutherland, M. S.; Swain, J.; Szadkowski, Z.; Szuba, M.; Taborda, O. A.; Tapia, A.; Tartare, M.; Tepe, A.; Theodoro, V. M.; Timmermans, C.; Todero Peixoto, C. 
J.; Toma, G.; Tomankova, L.; Tome, B.; Tonachini, A.; Torralba Elipe, G.; Torres Machado, D.; Travnicek, P.; Trovato, E.; Tueros, M.; Ulrich, R.; Unger, M.; Urban, M.; Valdes Galicia, J. F.; Valino, I.; Valore, L.; van Aar, G.; van Bodegom, P.; van den Berg, A. M.; van Velzen, S.; van Vliet, A.; Varela, E.; Vargas Cardenas, B.; Varner, G.; Vazquez, J. R.; Vazquez, R. A.; Veberic, D.; Verzi, V.; Vicha, J.; Videla, M.; Villasenor, L.; Vlcek, B.; Vorobiov, S.; Wahlberg, H.; Wainberg, O.; Walz, D.; Watson, A. A.; Weber, M.; Weidenhaupt, K.; Weindl, A.; Werner, F.; Widom, A.; Wiencke, L.; Wilczynska, B.; Wilczynski, H.; Will, M.; Williams, C.; Winchen, T.; Wittkowski, D.; Wundheiler, B.; Wykes, S.; Yamamoto, T.; Yapici, T.; Yuan, G.; Yushkov, A.; Zamorano, B.; Zas, E.; Zavrtanik, D.; Zavrtanik, M.; Zaw, I.; Zepeda, A.; Zhou, J.; Zhu, Y.; Zimbres Silva, M.; Ziolkowski, M.; Zuccarello, F.
2014-01-01
Using the data taken at the Pierre Auger Observatory between December 2004 and December 2012, we have examined the implications of the distributions of depths of atmospheric shower maximum (X-max), using a hybrid technique, for composition and hadronic interaction models. We do this by fitting the d
An Efficient Algorithm for Maximum-Entropy Extension of Block-Circulant Covariance Matrices
Carli, Francesca P; Pavon, Michele; Picci, Giorgio
2011-01-01
This paper deals with maximum entropy completion of partially specified block-circulant matrices. Since positive definite symmetric circulants happen to be covariance matrices of stationary periodic processes, in particular of stationary reciprocal processes, this problem has applications in signal processing, in particular to image modeling. Maximum entropy completion is strictly related to maximum likelihood estimation subject to certain conditional independence constraints. The maximum entropy completion problem for block-circulant matrices is a nonlinear problem which has recently been solved by the authors, although leaving open the problem of an efficient computation of the solution. The main contribution of this paper is to provide an efficient algorithm for computing the solution. Simulation shows that our iterative scheme outperforms various existing approaches, especially for large dimensional problems. A necessary and sufficient condition for the existence of a positive definite circulant completio...
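Because circulant matrices are diagonalized by the discrete Fourier transform, the positive-definiteness condition underlying such completions can be checked directly from the DFT of the first row. A minimal stdlib-only Python sketch (the matrix entries are illustrative, not taken from the paper):

```python
import cmath

def circulant_eigenvalues(first_row):
    """Eigenvalues of a circulant matrix: the DFT of its first row."""
    n = len(first_row)
    return [sum(first_row[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                for k in range(n)).real
            for j in range(n)]

# A first row with c[k] == c[n-k] gives a symmetric circulant; it is a
# valid stationary covariance iff all DFT coefficients are positive.
row = [2.0, -0.5, 0.1, -0.5]
eigs = circulant_eigenvalues(row)
is_valid_covariance = all(e > 0 for e in eigs)
```

The same spectral test extends blockwise to block-circulant matrices, which is what makes FFT-based iterations attractive for large instances of this completion problem.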
Efficiency at maximum power output of quantum heat engines under finite-time operation
Wang, Jianhui; He, Jizhou; Wu, Zhaoqi
2012-03-01
We study the efficiency at maximum power, ηm, of irreversible quantum Carnot engines (QCEs) that perform finite-time cycles between a hot and a cold reservoir at temperatures Th and Tc, respectively. For QCEs in the reversible limit (long cycle period, zero dissipation), ηm becomes identical to the Carnot efficiency ηC=1-Tc/Th. For QCE cycles in which nonadiabatic dissipation and the time spent on two adiabats are included, the efficiency ηm at maximum power output is bounded from above by ηC/(2-ηC) and from below by ηC/2. In the case of symmetric dissipation, the Curzon-Ahlborn efficiency ηCA=1-√(Tc/Th) is recovered under the condition that the time allocation between the adiabats and the contact time with the reservoir satisfy a certain relation.
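The bounds quoted above are easy to verify numerically: for any Tc < Th, the Curzon-Ahlborn value lies between ηC/2 and ηC/(2-ηC). A short sketch (the reservoir temperatures are arbitrary illustrative values):

```python
import math

def carnot(tc, th):
    """Carnot efficiency eta_C = 1 - Tc/Th."""
    return 1.0 - tc / th

def curzon_ahlborn(tc, th):
    """Curzon-Ahlborn efficiency eta_CA = 1 - sqrt(Tc/Th)."""
    return 1.0 - math.sqrt(tc / th)

checks = []
for tc, th in [(300.0, 400.0), (300.0, 600.0), (300.0, 1200.0)]:
    eta_c = carnot(tc, th)
    lower, upper = eta_c / 2.0, eta_c / (2.0 - eta_c)
    checks.append(lower <= curzon_ahlborn(tc, th) <= upper)
```

The symmetric-dissipation case singles out ηCA exactly; asymmetric dissipation moves ηm between the two bounds.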
Maximum efficiency of state-space models of nanoscale energy conversion devices.
Einax, Mario; Nitzan, Abraham
2016-07-07
The performance of nanoscale energy conversion devices is studied in the framework of state-space models, where a device is described by a graph comprising states and transitions between them, represented by nodes and links, respectively. Particular segments of this network represent input (driving) and output processes whose properly chosen flux ratio provides the energy conversion efficiency. Simple cyclical graphs yield Carnot efficiency for the maximum conversion yield. We give a general proof that opening a link that separates the two driving segments always leads to reduced efficiency. We illustrate this general result with simple models of a thermoelectric nanodevice and an organic photovoltaic cell. In the latter, an intersecting link of the above type corresponds to non-radiative carrier recombination, and the reduced maximum efficiency is manifested as a smaller open-circuit voltage.
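The graph picture described here can be illustrated with the smallest nontrivial cycle: a three-state master equation relaxed to steady state, where a nonzero cycle flux, the prerequisite for energy conversion, appears only when detailed balance is broken. A minimal sketch with arbitrary illustrative rates (not a model of the paper's devices):

```python
def steady_state_flux(kf, kb, dt=1e-3, steps=100000):
    """Relax a 3-state unicyclic master equation to steady state and
    return the net clockwise flux on the first link.
    kf[i]: rate i -> i+1 (mod 3); kb[i]: rate i+1 -> i."""
    p = [1.0 / 3.0] * 3
    for _ in range(steps):
        dp = [0.0] * 3
        for i in range(3):
            j = (i + 1) % 3
            flow = p[i] * kf[i] - p[j] * kb[i]
            dp[i] -= flow
            dp[j] += flow
        p = [pi + dt * dpi for pi, dpi in zip(p, dp)]
    return p[0] * kf[0] - p[1] * kb[0]

# Detailed balance (equal products of forward and backward rates around
# the cycle): zero steady flux, so no conversion is possible.
J_eq = steady_state_flux([1.0, 2.0, 3.0], [3.0, 2.0, 1.0])
# Broken detailed balance: a nonzero steady current that can drive output.
J_neq = steady_state_flux([2.0, 2.0, 2.0], [1.0, 1.0, 1.0])
```

Opening an extra link between driving segments, as the abstract discusses, adds a parallel pathway that leaks part of this current without contributing to the output.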
Osterloh, Frank E
2014-10-02
The Shockley-Queisser analysis provides a theoretical limit for the maximum energy conversion efficiency of single-junction photovoltaic cells, but besides the semiconductor bandgap no other semiconductor properties are considered in the analysis. Here, we show that the maximum conversion efficiency is limited further by the excited-state entropy of the semiconductors. The entropy loss can be estimated with the modified Sackur-Tetrode equation as a function of the curvature of the bands, the degeneracy of states near the band edges, the illumination intensity, the temperature, and the band gap. The application of the second law of thermodynamics to semiconductors provides a simple explanation for the observed high performance of group IV, III-V, and II-VI materials with strong covalent bonding, and for the lower efficiency of transition metal oxides containing weakly interacting metal d orbitals. The model also predicts efficient energy conversion with quantum-confined and molecular structures in the presence of a light-harvesting mechanism.
Latella, Ivan
2014-01-01
We analyse the process of converting near-field thermal radiation into usable work by considering the radiation emitted between two planar sources supporting surface phonon-polaritons. The maximum work flux that can be extracted from the radiation is obtained by taking into account that the spectral flux of modes is dominated mainly by these surface modes. The thermodynamic efficiencies are discussed, and an upper bound for the first-law efficiency is obtained for this process.
Ouerdane, Henni; Goupil, Christophe; Lecoeur, Philippe
2014-01-01
[...] By the beginning of the 20th century, the principles of thermodynamics were summarized into the so-called four laws, which were, as it turns out, definitive negative answers to the doomed quests for perpetual motion machines. As a matter of fact, one result of Sadi Carnot's work was precisely that the heat-to-work conversion process is fundamentally limited; as such, it is considered a first version of the second law of thermodynamics. Although it was derived from Carnot's unrealistic model, the upper bound on the thermodynamic conversion efficiency, known as the Carnot efficiency, became a paradigm as the next target after the failure of the perpetual motion ideal. In the 1950s, Jacques Yvon published a conference paper containing the necessary ingredients for a new class of models, and even a formula, not so different from Carnot's efficiency, which would later become the new efficiency reference. Yvon's first analysis [...] went fairly unnoticed for twenty years, until Frank Curzon and Bo...
Design of Asymmetrical Relay Resonators for Maximum Efficiency of Wireless Power Transfer
Bo-Hee Choi
2016-01-01
This paper presents a new design method for asymmetrical relay resonators for maximum wireless power transfer. A new design method for relay resonators is needed because the maximum power transfer efficiency (PTE) is not obtained at the resonant frequency of a unit resonator; for relay resonators, the maximum PTE is obtained at resonances different from that of the unit resonator. The optimum design of an asymmetrical relay involves both the optimum placement and the optimum capacitance of the resonators. The optimum placement is found by scanning the positions of the relays, and the optimum capacitance is found by using a genetic algorithm (GA). The PTE is enhanced when the capacitance is optimally designed by the GA for each relay position, and the maximum efficiency is then obtained at the optimum placement of the relays. The capacitances of the second through nth resonators and the load resistance should be determined for maximum efficiency, while the capacitance of the first resonator and the source resistance are obtained for impedance matching. The simulated and measured results are in good agreement.
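The GA optimization step can be sketched generically. The fitness function below is a stand-in with hypothetical optimum capacitances, since the abstract does not reproduce the circuit model; the population size, mutation rate, and search bounds are likewise illustrative, not the paper's settings:

```python
import random

def pte_model(caps):
    """Stand-in for the power-transfer-efficiency simulation (the actual
    circuit model is not given in the abstract); peaks when each
    capacitance matches an assumed optimum."""
    target = [100e-12, 120e-12, 90e-12]  # hypothetical optima, in farads
    return 1.0 / (1.0 + sum(((c - t) / t) ** 2 for c, t in zip(caps, target)))

def genetic_search(fitness, n_genes, lo, hi, pop_size=40, generations=150):
    random.seed(1)
    pop = [[random.uniform(lo, hi) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2.0 for x, y in zip(a, b)]   # crossover
            if random.random() < 0.3:                        # mutation
                i = random.randrange(n_genes)
                child[i] += random.uniform(-0.05, 0.05) * (hi - lo)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = genetic_search(pte_model, n_genes=3, lo=50e-12, hi=200e-12)
```

In the paper's workflow this inner search would be repeated for each candidate relay placement, with the placement then chosen where the optimized PTE is largest.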
3D Navier-Stokes Simulations of a rotor designed for Maximum Aerodynamic Efficiency
Johansen, Jeppe; Madsen, Helge Aa.; Gaunaa, Mac
2007-01-01
The present paper describes the design of a three-bladed wind turbine rotor taking into account maximum aerodynamic efficiency only, without considering structural or off-design issues. The rotor was designed assuming constant induction for most of the blade span, but near the tip region a ...
Wang, Jianhui; He, Jizhou
2012-11-01
We investigate the efficiency at the maximum power output (EMP) of an irreversible Carnot engine performing finite-time cycles between two reservoirs at constant temperatures T(h) and T(c) [...] Carnot efficiency, whether the internally dissipative friction is considered or not. When dissipations of the two "isothermal" and two "adiabatic" processes are symmetric, respectively, and the time allocation between the adiabats and the contact time with the reservoir satisfies a certain relation, the Curzon-Ahlborn (CA) efficiency η(CA) = 1-sqrt[T(c)/T(h)] is derived.
Efficiency at and near maximum power of low-dissipation heat engines.
Holubec, Viktor; Ryabov, Artem
2015-11-01
A universality in optimization of trade-off between power and efficiency for low-dissipation Carnot cycles is presented. It is shown that any trade-off measure expressible in terms of efficiency and the ratio of power to its maximum value can be optimized independently of most details of the dynamics and of the coupling to thermal reservoirs. The result is demonstrated on two specific trade-off measures. The first one is designed for finding optimal efficiency for a given output power and clearly reveals diseconomy of engines working at maximum power. As the second example we derive universal lower and upper bounds on the efficiency at maximum trade-off given by the product of power and efficiency. The results are illustrated on a model of a diffusion-based heat engine. Such engines operate in the low-dissipation regime given that the used driving minimizes the work dissipated during the isothermal branches. The peculiarities of the corresponding optimization procedure are reviewed and thoroughly discussed.
Optimum air-demand ratio for maximum aeration efficiency in high-head gated circular conduits.
Ozkan, Fahri; Tuna, M Cihat; Baylar, Ahmet; Ozturk, Mualla
2014-01-01
Oxygen is an important component of water quality and its ability to sustain life. Water aeration is the process of introducing air into a body of water to increase its oxygen saturation. Water aeration can be accomplished in a variety of ways, for instance, closed-conduit aeration. High-speed flow in a closed conduit involves air-water mixture flow. The air flow results from the subatmospheric pressure downstream of the gate. The air entrained by the high-speed flow is supplied by the air vent. The air entrained into the flow in the form of a large number of bubbles accelerates oxygen transfer and hence also increases aeration efficiency. In the present work, the optimum air-demand ratio for maximum aeration efficiency in high-head gated circular conduits was studied experimentally. Results showed that aeration efficiency increased with the air-demand ratio to a certain point and then aeration efficiency did not change with a further increase of the air-demand ratio. Thus, there was an optimum value for the air-demand ratio, depending on the Froude number, which provides maximum aeration efficiency. Furthermore, a design formula for aeration efficiency was presented relating aeration efficiency to the air-demand ratio and Froude number.
Efficiency at maximum power of thermochemical engines with near-independent particles.
Luo, Xiaoguang; Liu, Nian; Qiu, Teng
2016-03-01
Two-reservoir thermochemical engines are established by using near-independent particles (including Maxwell-Boltzmann, Fermi-Dirac, and Bose-Einstein particles) as the working substance. Particle and heat fluxes can be formed based on the temperature and chemical potential gradients between two different reservoirs. A rectangular-type energy filter with width Γ is introduced for each engine to weaken the coupling between the particle and heat fluxes. The efficiency at maximum power of each particle system decreases monotonically from an upper bound η(+) to a lower bound η(-) when Γ increases from 0 to ∞. It is found that the η(+) values for all three systems are bounded by η(C)/2 ≤ η(+) ≤ η(C)/(2-η(C)) due to strong coupling, where η(C) is the Carnot efficiency. For the Bose-Einstein system, it is found that the upper bound is approximated by the Curzon-Ahlborn efficiency: η(CA)=1-sqrt[1-η(C)]. When Γ → ∞, the intrinsic maximum powers are proportional to the square of the temperature difference of the two reservoirs for all three systems, and the corresponding lower bounds of efficiency at maximum power can be simplified in the same form of η(-)=η(C)/[1+a(0)(2-η(C))].
The ACT² project: Demonstration of maximum energy efficiency in real buildings
Crawley, D.B. [Pacific Northwest Lab., Richland, WA (United States); Krieg, B.L. [Pacific Gas and Electric Co., San Ramon, CA (United States)
1991-11-01
A large US utility recently began a project to determine whether the use of new energy-efficient end-use technologies and systems would economically achieve substantial energy savings (perhaps as high as 75% over current practice). Using a field-based demonstration approach, the Advanced Customer Technology Test (ACT²) for Maximum Energy Efficiency is providing information on the maximum energy savings possible when integrated packages of new high-efficiency end-use technologies are incorporated into commercial and residential buildings and industrial and agricultural processes. This paper details the underlying rationale, approach, results to date, and future plans for ACT². The ultimate goal is energy efficiency (doing more with less energy) rather than energy conservation (freezing in the dark). In this paper, we first explain why a major United States utility is committed to pursuing demand-side management so aggressively. Next, we discuss the approach the utility chose for conducting the ACT² project. We then review results obtained to date from the project's pilot demonstration site. Last, we describe other related demonstration projects being proposed by the utility.
Efficiency at maximum power output of linear irreversible Carnot-like heat engines.
Wang, Yang; Tu, Z C
2012-01-01
The efficiency at maximum power output of linear irreversible Carnot-like heat engines is investigated based on the assumption that the rate of irreversible entropy production of the working substance in each "isothermal" process is a quadratic form of the heat exchange rate between the working substance and the reservoir. It is found that the maximum power output corresponds to minimizing the irreversible entropy production in the two isothermal processes of the Carnot-like cycle, and that the efficiency at maximum power output has the form η(mP)=η(C)/(2-γη(C)), where η(C) is the Carnot efficiency, while γ depends on the heat transfer coefficients between the working substance and the two reservoirs. The value of η(mP) is bounded between η(-)≡η(C)/2 and η(+)≡η(C)/(2-η(C)). These results are consistent with those obtained by Chen and Yan [J. Chem. Phys. 90, 3740 (1989)] based on the endoreversible assumption, those obtained by Esposito et al. [Phys. Rev. Lett. 105, 150603 (2010)] based on the low-dissipation assumption, and those obtained by Schmiedl and Seifert [Europhys. Lett. 81, 20003 (2008)] for stochastic heat engines, which in fact also satisfy the low-dissipation assumption. Additionally, we find that the endoreversible assumption happens to hold for Carnot-like heat engines operating at the maximum power output based on our fundamental assumption, and that the Carnot-like heat engines that we focused on do not strictly satisfy the low-dissipation assumption, which implies that the low-dissipation assumption or our fundamental assumption is a sufficient but not necessary condition for the validity of η(mP)=η(C)/(2-γη(C)), as well as for the existence of the two bounds η(-)≡η(C)/2 and η(+)≡η(C)/(2-η(C)).
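The role of γ is easy to see numerically: as γ ranges over [0, 1], η(mP)=η(C)/(2-γη(C)) interpolates monotonically between the two bounds η(C)/2 and η(C)/(2-η(C)). A quick sketch (η(C)=0.6 is an arbitrary example):

```python
def eta_mp(eta_c, gamma):
    """Efficiency at maximum power: eta_C / (2 - gamma * eta_C)."""
    return eta_c / (2.0 - gamma * eta_c)

eta_c = 0.6
# gamma = 0 gives eta_C/2; gamma = 1 gives eta_C/(2 - eta_C);
# the value increases monotonically in between.
values = [eta_mp(eta_c, g / 10.0) for g in range(11)]
```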
The maximum power efficiency 1-√τ: Research, education, and bibliometric relevance
Calvo Hernández, A.; Roco, J. M. M.; Medina, A.; Velasco, S.; Guzmán-Vargas, L.
2015-07-01
The well-known efficiency at maximum power for a cyclic system working between a hot temperature T(h) and a cold temperature T(c), given by 1-√τ with τ = T(c)/T(h), has become a landmark result with regard to the thermodynamic optimization of a great variety of energy converters. Its wide applicability and sole dependence on the external heat bath temperatures (as with the Carnot efficiency) allow for an easy comparison with experimental efficiencies, leading to strikingly good agreement. Reversible, finite-time, and linear-irreversible derivations are analyzed in order to show a broader perspective on its meaning from both research and pedagogical points of view. Its scientific relevance and historical development are also analyzed in this work by means of some bibliometric data. This article is supplemented with comments by Hong Qian and a final reply by the authors.
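The agreement with experiment can be illustrated with steam-plant temperatures of the kind quoted in Curzon and Ahlborn's classic comparison (the figures here are illustrative, not taken from this article): with T(c) ≈ 298 K and T(h) ≈ 838 K, observed plant efficiencies of roughly 36% sit much closer to 1-√τ than to the Carnot limit.

```python
import math

def carnot(tc, th):
    return 1.0 - tc / th

def maximum_power_eff(tc, th):
    """The 1 - sqrt(tau) efficiency at maximum power, tau = Tc/Th."""
    return 1.0 - math.sqrt(tc / th)

tc, th = 298.0, 838.0          # illustrative steam-plant temperatures (K)
eta_carnot = carnot(tc, th)    # ~0.64
eta_mp = maximum_power_eff(tc, th)  # ~0.40
```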
Efficiency and its bounds for thermal engines at maximum power using Newton's law of cooling.
Yan, H; Guo, Hao
2012-01-01
We study a thermal engine model for which Newton's cooling law is obeyed during heat transfer processes. The thermal efficiency and its bounds at maximum output power are derived and discussed. This model, though quite simple, can be applied not only to Carnot engines but also to four other types of engines. In the long thermal contact time limit, new bounds, tighter than those known before, are obtained. In this case, the model can simulate Otto, Joule-Brayton, Diesel, and Atkinson engines. In the short contact time limit, which corresponds to the Carnot cycle, the same efficiency bounds as those from Esposito et al. [Phys. Rev. Lett. 105, 150603 (2010)] are derived. In both cases, the thermal efficiency decreases as the ratio between the heat capacities of the working medium during the heating and cooling stages increases. This might provide guidance for designing real engines.
Wu, Feilong; He, Jizhou; Ma, Yongli; Wang, Jianhui
2014-12-01
We consider the efficiency at maximum power (EMP) of a quantum Otto engine, which uses a spin or a harmonic system as its working substance and works between two heat reservoirs at constant temperatures T_h and T_c. The EMP based on these two different kinds of quantum systems is bounded from above by the same expression η_mp ≤ η_+ ≡ η_C^2/[η_C - (1 - η_C)ln(1 - η_C)], with η_C = 1 - T_c/T_h the Carnot efficiency. This expression η_mp possesses the same universality as the CA efficiency η_CA = 1 - √(1 - η_C) at small relative temperature difference. Within the context of irreversible thermodynamics, we calculate the Onsager coefficients and show that the value of η_CA is indeed the upper bound of EMP for an Otto engine working in the linear-response regime.
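The bound η_+ and the Curzon-Ahlborn (CA) efficiency quoted in this abstract are easy to compare numerically. The sketch below (the reservoir temperatures are arbitrary illustrative values, not from the paper) checks that η_+ lies above η_CA and that the two agree at small relative temperature difference:

```python
import math

def eta_carnot(t_c, t_h):
    # Carnot efficiency eta_C = 1 - T_c / T_h
    return 1.0 - t_c / t_h

def eta_ca(eta_c):
    # Curzon-Ahlborn efficiency eta_CA = 1 - sqrt(1 - eta_C)
    return 1.0 - math.sqrt(1.0 - eta_c)

def eta_plus(eta_c):
    # upper bound quoted in the abstract:
    # eta_+ = eta_C^2 / [eta_C - (1 - eta_C) ln(1 - eta_C)]
    return eta_c ** 2 / (eta_c - (1.0 - eta_c) * math.log(1.0 - eta_c))

eta_c = eta_carnot(300.0, 600.0)  # illustrative reservoir temperatures (K)
print(eta_ca(eta_c), eta_plus(eta_c))  # eta_+ slightly exceeds eta_CA
```

Expanding both expressions in powers of η_C gives η_C/2 + η_C²/8 + ... to second order, which is the small-temperature-difference universality the abstract refers to.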
Efficiency at maximum power output for an engine with a passive piston
Sano, Tomohiko G.; Hayakawa, Hisao
2016-08-01
Efficiency at maximum power (MP) output for an engine with a passive piston without mechanical controls between two reservoirs is studied theoretically. We enclose a hard core gas partitioned by a massive piston in a temperature-controlled container and analyze the efficiency at MP under a heating and cooling protocol without controlling the pressure acting on the piston from outside. We find the following three results: (i) The efficiency at MP for a dilute gas is close to the Chambadal-Novikov-Curzon-Ahlborn (CNCA) efficiency if we can ignore the sidewall friction and the loss of energy between a gas particle and the piston, while (ii) the efficiency for a moderately dense gas becomes smaller than the CNCA efficiency even when the temperature difference of the reservoirs is small. (iii) Introducing the Onsager matrix for an engine with a passive piston, we verify that the tight coupling condition for the matrix of the dilute gas is satisfied, while that of the moderately dense gas is not satisfied because of the inevitable heat leak. We confirm the validity of these results using the molecular dynamics simulation and introducing an effective mean-field-like model which we call the stochastic mean field model.
Selva, J
2011-01-01
This paper presents an efficient method to compute the maximum likelihood (ML) estimation of the parameters of a complex 2-D sinusoidal, with the complexity order of the FFT. The method is based on an accurate barycentric formula for interpolating band-limited signals, and on the fact that the ML cost function can be viewed as a signal of this type, if the time and frequency variables are switched. The method consists in first computing the DFT of the data samples, and then locating the maximum of the cost function by means of Newton's algorithm. The fact is that the complexity of the latter step is small and independent of the data size, since it makes use of the barycentric formula for obtaining the values of the cost function and its derivatives. Thus, the total complexity order is that of the FFT. The method is validated in a numerical example.
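The two-stage structure described here, a coarse grid maximum from the FFT followed by a cheap Newton refinement, can be illustrated in one dimension. The sketch below is a simplified analogue that evaluates the periodogram derivatives directly rather than via the paper's barycentric interpolation formula; zero-padding the FFT keeps the coarse estimate inside Newton's basin of convergence.

```python
import numpy as np

def estimate_freq(x, pad=4, iters=6):
    """Coarse-to-fine ML frequency estimate for a single complex sinusoid.

    Coarse stage: maximum of a zero-padded FFT periodogram.
    Fine stage: Newton iterations on P(f) = |A(f)|^2, where
    A(f) = sum_n x[n] exp(-2j*pi*f*n).
    """
    n = len(x)
    t = np.arange(n)
    m = pad * n
    f = np.argmax(np.abs(np.fft.fft(x, m))) / m   # coarse grid maximum
    for _ in range(iters):
        e = np.exp(-2j * np.pi * f * t)
        a = np.sum(x * e)                            # A(f)
        a1 = np.sum(x * (-2j * np.pi * t) * e)       # A'(f)
        a2 = np.sum(x * (-2j * np.pi * t) ** 2 * e)  # A''(f)
        p1 = 2.0 * np.real(np.conj(a) * a1)                      # P'(f)
        p2 = 2.0 * (np.abs(a1) ** 2 + np.real(np.conj(a) * a2))  # P''(f)
        f -= p1 / p2                                 # Newton step
    return f % 1.0

f_true = 0.2137
x = np.exp(2j * np.pi * f_true * np.arange(64))
print(estimate_freq(x))
```

On this noiseless example the estimate matches f_true to near machine precision after a handful of Newton steps, while the per-iteration cost is independent of the data size, mirroring the complexity argument in the abstract.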
Apertet, Y; Ouerdane, H; Goupil, C; Lecoeur, Ph
2012-03-01
Energy conversion efficiency at maximum output power, which embodies the essential characteristics of heat engines, is the main focus of the present work. The so-called Curzon and Ahlborn efficiency η_CA is commonly believed to be an absolute reference for real heat engines; however, a different but general expression for the case of stochastic heat engines, η_SS, was recently found and then extended to low-dissipation engines. The discrepancy between η_CA and η_SS is here analyzed considering different irreversibility sources of heat engines, of both internal and external types. To this end, we choose a thermoelectric generator operating in the strong-coupling regime as a physical system to qualitatively and quantitatively study the impact of the nature of irreversibility on the efficiency at maximum output power. In the limit of pure external dissipation, we obtain η_CA, while η_SS corresponds to the case of pure internal dissipation. A continuous transition from one extreme to the other, which may be realized by tuning the different sources of irreversibility, is also evidenced.
Stysley, Paul; Coyle, Barry; Clarke, Greg; Poulios, Demetrios; Kay, Richard
2015-01-01
The Global Ecosystems Dynamics Investigation (GEDI) is a planned mission sending a LIDAR instrument to the International Space Station that will employ three NASA laser transmitters. This instrument will produce parallel tracks on the Earth's surface that will provide global 3D vegetation canopy measurements. To meet the mission goals, a total of 5 High Output Maximum Efficiency Resonator (HOMER) lasers will be built (1 ETU + 3 Flight + 1 spare) in-house at NASA-GSFC. This presentation will summarize the HOMER design, the testing the design has already completed, and the plans to successfully build the units needed for the GEDI mission.
Efficiency at maximum power of a quantum heat engine based on two coupled oscillators.
Wang, Jianhui; Ye, Zhuolin; Lai, Yiming; Li, Weisheng; He, Jizhou
2015-06-01
We propose and theoretically investigate a system of two coupled harmonic oscillators as a heat engine. We show how these two coupled oscillators within the undamped regime can be controlled to realize an Otto cycle that consists of two adiabatic and two isochoric processes. During the two isochores the harmonic system is embedded in two heat reservoirs at constant temperatures T_h and T_c. We use a semigroup approach to model the thermal relaxation dynamics along the two isochoric processes, and we find the upper bound of efficiency at maximum power (EMP) η* to be a function of the Carnot efficiency η_C = 1 - T_c/T_h: η* ≤ η_+ ≡ η_C^2/[η_C - (1 - η_C)ln(1 - η_C)], identical to bounds previously derived from ideal (noninteracting) microscopic, mesoscopic, and macroscopic systems.
Evaluating the time limit at maximum aerobic speed in elite swimmers. Training implications.
Renoux, J C
2001-12-01
The aim of the present study was to use the concepts of maximum aerobic speed (MAS) and time limit (tlim) to determine the relationship between these two elements, in an attempt to significantly improve both speed and swimming performance during a training season. To this end, an intermittent training model was used, adapted to the value obtained for the time limit at maximum aerobic speed. During a 12 week training period, the maximum aerobic speed for a group of 9 top-ranking varsity swimmers was measured on two occasions, as was the tlim. The values generated indicated that: 1) there was an inverse relationship between MAS and the time this speed could be maintained, thus confirming the studies by Billat et al. (1994b); 2) a significant increase in MAS occurred over the 12 week period, although no such evolution was seen for the tlim; 3) there was an improvement in results; 4) the time limit could be used in designing a training program based on intermittent exercises. In addition, the results of the present study should allow swimming coaches to draw up individualized training programs for a given swimmer by taking into consideration maximum aerobic speed, time limit and propelling efficiency.
Efficient Photovoltaic System Maximum Power Point Tracking Using a New Technique
Mehdi Seyedmahmoudian
2016-03-01
Partial shading is an unavoidable condition which significantly reduces the efficiency and stability of a photovoltaic (PV) system. When partial shading occurs, the system has multiple-peak output power characteristics. In order to track the global maximum power point (GMPP) within an appropriate period, a reliable technique is required. Conventional techniques such as hill climbing and perturbation and observation (P&O) are inadequate in tracking the GMPP subject to this condition, resulting in a dramatic reduction in the efficiency of the PV system. Recent artificial intelligence methods have been proposed; however, they have a higher computational cost, slower processing time and increased oscillations, which results in further instability at the output of the PV system. This paper proposes a fast and efficient technique based on Radial Movement Optimization (RMO) for detecting the GMPP under partial shading conditions. The paper begins with a brief description of the behavior of PV systems under partial shading conditions, followed by the introduction of the new RMO-based technique for GMPP tracking. Finally, results are presented to demonstrate the performance of the proposed technique under different partial shading conditions. The results are compared with those of the PSO method, one of the most widely used methods in the literature. Four factors, namely convergence speed, efficiency (power loss reduction), stability (oscillation reduction) and computational cost, are considered in the comparison with the PSO technique.
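For reference, the conventional perturb-and-observe baseline that the paper criticizes can be sketched in a few lines. The P-V curve used here is a hypothetical single-peak quadratic, not a real panel model; the algorithm's hill-climbing nature is exactly why it gets trapped on a local peak when partial shading produces multiple peaks.

```python
def perturb_and_observe(measure_pv, v0, dv=0.5, steps=200):
    """Textbook perturb-and-observe MPPT loop.

    `measure_pv` maps operating voltage to output power. The controller
    perturbs the voltage by `dv`, keeps going if power rose, and reverses
    direction if power fell; it therefore climbs only the local gradient.
    """
    v, direction = v0, 1.0
    p = measure_pv(v)
    for _ in range(steps):
        v_new = v + direction * dv
        p_new = measure_pv(v_new)
        if p_new < p:            # power dropped: reverse the perturbation
            direction = -direction
        v, p = v_new, p_new
    return v, p

# hypothetical single-peak P-V curve with its maximum power point at 17 V
mpp = perturb_and_observe(lambda u: 100.0 - (u - 17.0) ** 2, v0=10.0)
print(mpp)
```

On this single-peak curve the operating point settles into a small oscillation around 17 V; with a multi-peak curve the same loop would stop at whichever peak it reaches first, which is the failure mode GMPP trackers such as RMO are designed to avoid.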
Bounds and phase diagram of efficiency at maximum power for tight-coupling molecular motors.
Tu, Z C
2013-02-01
The efficiency at maximum power (EMP) for tight-coupling molecular motors is investigated within the framework of irreversible thermodynamics. It is found that the EMP depends merely on the constitutive relation between the thermodynamic current and force. The motors are classified into four generic types (linear, superlinear, sublinear, and mixed types) according to the characteristics of the constitutive relation, and then the corresponding ranges of the EMP for these four types of molecular motors are obtained. The exact bounds of the EMP are derived and expressed as the explicit functions of the free energy released by the fuel in each motor step. A phase diagram is constructed which clearly shows how the region where the parameters (the load distribution factor and the free energy released by the fuel in each motor step) are located can determine whether the value of the EMP is larger or smaller than 1/2. This phase diagram reveals that motors using ATP as fuel under physiological conditions can work at maximum power with higher efficiency (> 1/2) for a small load distribution factor (< 0.1).
Simulation of maximum light use efficiency for some typical vegetation types in China
(no author listed)
2006-01-01
Maximum light use efficiency (εmax) is a key parameter for the estimation of net primary productivity (NPP) derived from remote sensing data, but there is still considerable disagreement about its value for each vegetation type. The εmax for some typical vegetation types in China is simulated using a modified least squares function based on NOAA/AVHRR remote sensing data and field-observed NPP data. The vegetation classification accuracy is introduced into the process, and a sensitivity analysis of εmax to classification accuracy is also conducted. The results show that the simulated values of εmax are greater than the value used in the CASA model and less than the values simulated with the BIOME-BGC model, which is consistent with some other studies. The relative error of εmax resulting from classification accuracy ranges from -5.5% to 8.0%. This indicates that the simulated values of εmax are reliable and stable.
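The least-squares idea behind such an εmax retrieval can be sketched with the simplest light-use-efficiency relation, NPP ≈ εmax × APAR. All numbers and the one-parameter form below are illustrative assumptions, not the paper's modified least squares function, which also involves stress scalars and classification accuracy.

```python
import numpy as np

# hypothetical station data: absorbed PAR (APAR) and field-observed NPP
apar = np.array([210.0, 340.0, 455.0, 520.0, 610.0])  # illustrative units
npp = np.array([115.0, 190.0, 250.0, 280.0, 335.0])

# assume NPP = eps_max * APAR (no stress terms), so the least-squares
# estimate is a one-parameter regression through the origin
eps_max = np.sum(apar * npp) / np.sum(apar * apar)
print(eps_max)
```

The closed form is the usual normal-equation solution for a single coefficient; with stress scalars included, APAR would simply be replaced by the full product of driving terms.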
Smolin, John A; Gambetta, Jay M; Smith, Graeme
2012-02-17
We provide an efficient method for computing the maximum-likelihood mixed quantum state (with density matrix ρ) given a set of measurement outcomes in a complete orthonormal operator basis subject to Gaussian noise. Our method works by first changing basis, yielding a candidate density matrix μ which may have nonphysical (negative) eigenvalues, and then finding the nearest physical state under the 2-norm. Our algorithm takes at worst O(d^4) for the basis change plus O(d^3) for finding ρ, where d is the dimension of the quantum state. In the special case where the measurement basis is strings of Pauli operators, the basis change takes only O(d^3) as well. The workhorse of the algorithm is a new linear-time method for finding the closest probability distribution (in Euclidean distance) to a set of real numbers summing to one.
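The "workhorse" subroutine, projecting a real vector that sums to one onto the probability simplex, can be sketched as follows (in the paper it is applied to the eigenvalues of the candidate density matrix; here it is shown on a plain vector):

```python
import numpy as np

def closest_probability(mu):
    """Closest probability distribution (in Euclidean distance) to a real
    vector mu that sums to one. The sort makes this O(d log d); on
    already-sorted eigenvalues the scan itself runs in linear time.
    """
    mu = np.asarray(mu, dtype=float)
    lam = np.sort(mu)[::-1].copy()   # entries in descending order
    acc = 0.0
    i = len(lam)
    # zero out the smallest entries while they would remain negative even
    # after the accumulated deficit is spread over the remaining entries
    while i > 0 and lam[i - 1] + acc / i < 0.0:
        acc += lam[i - 1]
        lam[i - 1] = 0.0
        i -= 1
    lam[:i] += acc / i               # spread the deficit evenly
    # undo the sort so the output matches the input ordering
    out = np.empty_like(mu)
    out[np.argsort(mu)[::-1]] = lam
    return out

print(closest_probability([0.6, 0.55, -0.15]))
```

For the example above the negative entry is clipped to zero and its deficit of 0.15 is split evenly over the two remaining entries, giving (0.525, 0.475, 0), which still sums to one.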
Cushing, Scott K; Bristow, Alan D; Wu, Nianqiang
2015-11-28
Plasmonics can enhance solar energy conversion in semiconductors by light trapping, hot electron transfer, and plasmon-induced resonance energy transfer (PIRET). The multifaceted response of the plasmon and multiple interaction pathways with the semiconductor makes optimization challenging, hindering design of efficient plasmonic architectures. Therefore, in this paper we use a density matrix model to capture the interplay between scattering, hot electrons, and dipole-dipole coupling through the plasmon's dephasing, including both the coherent and incoherent dynamics necessary for interactions on the plasmon's timescale. The model is extended to Shockley-Queisser limit calculations for both photovoltaics and solar-to-chemical conversion, revealing the optimal application of each enhancement mechanism based on plasmon energy, semiconductor energy, and plasmon dephasing. The results guide application of plasmonic solar-energy harvesting, showing which enhancement mechanism is most appropriate for a given semiconductor's weakness, and what nanostructures can achieve the maximum enhancement.
Kleidon, Axel
2009-06-01
The Earth system is maintained in a unique state far from thermodynamic equilibrium, as, for instance, reflected in the high concentration of reactive oxygen in the atmosphere. The myriad of processes that transform energy, that result in the motion of mass in the atmosphere, in oceans, and on land, processes that drive the global water, carbon, and other biogeochemical cycles, all have in common that they are irreversible in their nature. Entropy production is a general consequence of these processes and measures their degree of irreversibility. The proposed principle of maximum entropy production (MEP) states that systems are driven to steady states in which they produce entropy at the maximum possible rate given the prevailing constraints. In this review, the basics of nonequilibrium thermodynamics are described, as well as how these apply to Earth system processes. Applications of the MEP principle are discussed, ranging from the strength of the atmospheric circulation, the hydrological cycle, and biogeochemical cycles to the role that life plays in these processes. Nonequilibrium thermodynamics and the MEP principle have potentially wide-ranging implications for our understanding of Earth system functioning, how it has evolved in the past, and why it is habitable. Entropy production allows us to quantify an objective direction of Earth system change (closer to vs further away from thermodynamic equilibrium, or, equivalently, towards a state of MEP). When a maximum in entropy production is reached, MEP implies that the Earth system reacts to perturbations primarily with negative feedbacks. In conclusion, this nonequilibrium thermodynamic view of the Earth system shows great promise to establish a holistic description of the Earth as one system. This perspective is likely to allow us to better understand and predict its function as one entity, how it has evolved in the past, and how it is modified by human activities in the future.
SONG HanJiang; CHEN LinGen; SUN FengRui
2008-01-01
Optimal configuration of a class of endoreversible heat engines with fixed duration, input energy and radiative heat transfer law (q ∝ Δ(T^4)) is determined. The optimal cycle that maximizes the efficiency of the heat engine is obtained by using optimal-control theory, and the differential equations are solved by the Taylor series expansion. It is shown that the optimal cycle has eight branches including two isothermal branches, four maximum-efficiency branches, and two adiabatic branches. The interval of each branch is obtained, as well as the solutions of the temperatures of the heat reservoirs and the working fluid. A numerical example is given. The obtained results are compared with those obtained with Newton's heat transfer law for the maximum efficiency objective, those with the linear phenomenological heat transfer law for the maximum efficiency objective, and those with the radiative heat transfer law for the maximum power output objective.
Verdon-Kidd, D. C.; Kiem, A. S.
2015-12-01
Rainfall intensity-frequency-duration (IFD) relationships are commonly required for the design and planning of water supply and management systems around the world. Currently, IFD information is based on the "stationary climate assumption" that weather at any point in time will vary randomly and that the underlying climate statistics (including both averages and extremes) will remain constant irrespective of the period of record. However, the validity of this assumption has been questioned over the last 15 years, particularly in Australia, following an improved understanding of the significant impact of climate variability and change occurring on interannual to multidecadal timescales. This paper provides evidence of regime shifts in annual maximum rainfall time series (between 1913-2010) using 96 daily rainfall stations and 66 sub-daily rainfall stations across Australia. Furthermore, the effect of these regime shifts on the resulting IFD estimates are explored for three long-term (1913-2010) sub-daily rainfall records (Brisbane, Sydney, and Melbourne) utilizing insights into multidecadal climate variability. It is demonstrated that IFD relationships may under- or over-estimate the design rainfall depending on the length and time period spanned by the rainfall data used to develop the IFD information. It is recommended that regime shifts in annual maximum rainfall be explicitly considered and appropriately treated in the ongoing revisions of the Engineers Australia guide to estimating and utilizing IFD information, Australian Rainfall and Runoff (ARR), and that clear guidance needs to be provided on how to deal with the issue of regime shifts in extreme events (irrespective of whether this is due to natural or anthropogenic climate change). The findings of our study also have important implications for other regions of the world that exhibit considerable hydroclimatic variability and where IFD information is based on relatively short data sets.
Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors
Langbein, John O.
2017-01-01
Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α, with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013, doi:10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
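The quantity such MLE noise-model fitting repeatedly evaluates is the Gaussian log-likelihood of the residuals under a candidate covariance matrix. A generic Cholesky-based sketch is shown below (this is the plain O(n^3) kernel, not Langbein's optimized filter-based scheme, and the white-noise covariance used in the example is purely illustrative):

```python
import numpy as np

def gauss_loglik(r, cov):
    """Gaussian log-likelihood of residual vector r under covariance cov,
    evaluated via Cholesky factorization to avoid forming cov^{-1}.
    """
    n = len(r)
    chol = np.linalg.cholesky(cov)                # cov = L L^T
    w = np.linalg.solve(chol, r)                  # L w = r, so r^T cov^{-1} r = w^T w
    logdet = 2.0 * np.sum(np.log(np.diag(chol)))  # log det cov from diag(L)
    return -0.5 * (n * np.log(2.0 * np.pi) + logdet + w @ w)

# white-noise sanity check: cov = sigma^2 * I with sigma^2 = 4
r = np.array([1.0, -1.0, 0.5])
print(gauss_loglik(r, 4.0 * np.eye(3)))
```

For power-law plus white noise the `cov` argument would be built from the chosen noise model's covariance function, and the outer MLE loop would maximize this value over the noise parameters.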
Aab, A.; et al.
2014-12-31
Using the data taken at the Pierre Auger Observatory between December 2004 and December 2012, we have examined the implications of the distributions of depths of atmospheric shower maximum (Xmax), using a hybrid technique, for composition and hadronic interaction models. We do this by fitting the distributions with predictions from a variety of hadronic interaction models for variations in the composition of the primary cosmic rays and examining the quality of the fit. Regardless of what interaction model is assumed, we find that our data are not well described by a mix of protons and iron nuclei over most of the energy range. Acceptable fits can be obtained when intermediate masses are included, and when this is done consistent results for the proton and iron-nuclei contributions can be found using the available models. We observe a strong energy dependence of the resulting proton fractions, and find no support from any of the models for a significant contribution from iron nuclei. However, we also observe a significant disagreement between the models with respect to the relative contributions of the intermediate components.
The Maximum Effective Moment Criterion (MEMC) and Its Implications in Structural Geology
(author not listed)
2006-01-01
The Mohr-Coulomb criterion has been widely used to explain the formation of fractures. However, it fails to explain the large-strain deformation that widely occurs in nature. There is presently a new criterion, the maximum effective moment criterion, M_eff = 0.5(σ1 − σ3)L sin2α sinα, where σ1 − σ3 represents the yield strength of the related rock, L is a unit length and α is the angle between σ1 and the deformation bands. This criterion demonstrates that the maximum value appears at angles of ±54.7° to σ1 and that there is only a slight difference in the moment in the range of 55° ± 10°. This range covers the whole set of observations available from nature and experiments. Its major implications include: (1) it can be used to determine the stress state when the related deformation features formed; (2) it provides a new approach to determining the Wk of the related ductile shear zone, provided the ratio of the vorticity and strain rate remains fixed; (3) it can be used to explain (a) the obtuse angle in the contraction direction of conjugate kink-bands and extensional crenulation cleavages, (b) the formation of low-angle normal faults and high-angle reverse faults, (c) lozenge ductile shear zones in basement terranes, (d) some crocodile structures in seismic profiles and (e) detachment folds in foreland basins.
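Assuming the commonly cited form of the MEMC, the ±54.7° orientation quoted above follows from a one-line maximization:

```latex
M_{\mathrm{eff}}(\alpha) = \tfrac{1}{2}(\sigma_1 - \sigma_3)\,L \sin 2\alpha \sin\alpha
  = (\sigma_1 - \sigma_3)\,L \sin^2\alpha \cos\alpha,
\qquad
\frac{dM_{\mathrm{eff}}}{d\alpha} \propto \sin\alpha\,(2\cos^2\alpha - \sin^2\alpha) = 0
\;\Longrightarrow\; \tan\alpha = \sqrt{2},\quad \alpha \approx 54.7^\circ .
```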
Murphy, Patrick Charles
1985-01-01
An algorithm for maximum likelihood (ML) estimation is developed with an efficient method for approximating the sensitivities. The algorithm was developed for airplane parameter estimation problems but is well suited for most nonlinear, multivariable, dynamic systems. The ML algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort. MNRES determines the sensitivities with less computational effort than using either a finite-difference method or integrating the analytically determined sensitivity equations. MNRES eliminates the need to derive sensitivity equations for each new model, thus eliminating algorithm reformulation with each new model and providing flexibility to use model equations in any format that is convenient. A random search technique for determining the confidence limits of ML parameter estimates is applied to nonlinear estimation problems for airplanes. The confidence intervals obtained by the search are compared with Cramer-Rao (CR) bounds at the same confidence level. It is observed that the degree of nonlinearity in the estimation problem is an important factor in the relationship between CR bounds and the error bounds determined by the search technique. The CR bounds were found to be close to the bounds determined by the search when the degree of nonlinearity was small. Beale's measure of nonlinearity is developed in this study for airplane identification problems; it is used to empirically correct confidence levels for the parameter confidence limits. The primary utility of the measure, however, was found to be in predicting the degree of agreement between Cramer-Rao bounds and search estimates.
An efficient approximation algorithm for finding a maximum clique using Hopfield network learning.
Wang, Rong Long; Tang, Zheng; Cao, Qi Ping
2003-07-01
In this article, we present a solution to the maximum clique problem using a gradient-ascent learning algorithm of the Hopfield neural network. This method provides a near-optimum parallel algorithm for finding a maximum clique. To do this, we use the Hopfield neural network to generate a near-maximum clique and then modify weights in a gradient-ascent direction to allow the network to escape from the state of near-maximum clique to maximum clique or better. The proposed parallel algorithm is tested on two types of random graphs and some benchmark graphs from the Center for Discrete Mathematics and Theoretical Computer Science (DIMACS). The simulation results show that the proposed learning algorithm can find good solutions in reasonable computation time.
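The gradient-ascent Hopfield network itself is not reproduced here; as a minimal, hedged point of reference for what such a method computes, the sketch below finds the maximum clique of a toy graph by scanning greedy cliques over all vertex orders (the graph and all names are invented for the example, and this brute-force approach is only feasible for tiny graphs):

```python
import itertools

def greedy_clique(adj, order):
    """Grow a clique by adding each vertex (in the given order) that is
    adjacent to every vertex already in the clique."""
    clique = []
    for v in order:
        if all(adj[v][u] for u in clique):
            clique.append(v)
    return clique

# Toy 5-vertex graph whose unique maximum clique is {0, 1, 2}.
adj = [[0, 1, 1, 0, 0],
       [1, 0, 1, 1, 0],
       [1, 1, 0, 0, 1],
       [0, 1, 0, 0, 0],
       [0, 0, 1, 0, 0]]

# Exhaustive search over vertex orders; the Hopfield approach instead
# explores this space in parallel via the network dynamics.
best = max((greedy_clique(adj, p) for p in itertools.permutations(range(5))),
           key=len)
```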
Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates
Laurence, T; Chromy, B
2009-11-10
Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the use of the Levenberg-Marquardt algorithm commonly used for nonlinear least squares minimization for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the non-linear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, is simple to implement, quick and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Non-linear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, it is not easy to satisfy this criterion in practice - which requires a large number of events. It has been well-known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper providing extensive characterization of these biases in exponential fitting is given. The more appropriate measure based on the maximum likelihood estimator (MLE
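The paper's specific Levenberg-Marquardt modification is not reproduced here; as a hedged illustration of the underlying objective, the sketch below fits a decay histogram by minimizing the Poisson negative log-likelihood directly with a general-purpose SciPy optimizer (the model, parameter values, and starting point are all invented for the example):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical fluorescence-decay histogram: m(t) = A*exp(-t/tau) + b
def model(p, t):
    A, tau, b = p
    return A * np.exp(-t / tau) + b

t = np.linspace(0.0, 10.0, 64)
counts = rng.poisson(model((1000.0, 2.0, 5.0), t))  # simulated event counts

# Poisson negative log-likelihood, dropping the parameter-free log(k!) term
def nll(p):
    m = model(p, t)
    if np.any(m <= 0):
        return np.inf
    return float(np.sum(m - counts * np.log(m)))

fit = minimize(nll, x0=(500.0, 1.0, 10.0), method="Nelder-Mead")
A_hat, tau_hat, b_hat = fit.x
```

The advantage of the L-M extension described in the abstract is that it reuses the quadratic-approximation machinery of least-squares fitting instead of relying on a generic simplex search as above.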
Izumida, Yuki; Okuda, Koji
2014-05-01
We formulate the work output and efficiency for linear irreversible heat engines working between a finite-sized hot heat source and an infinite-sized cold heat reservoir until the total system reaches the final thermal equilibrium state with a uniform temperature. We prove that when the heat engines operate at the maximum power under the tight-coupling condition without heat leakage the work output is just half of the exergy, which is known as the maximum available work extracted from a heat source. As a consequence, the corresponding efficiency is also half of its quasistatic counterpart.
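Written out for a finite hot source of constant heat capacity C (the standard setting for the exergy formula; the abstract does not restate it), the result reads:

```latex
E = C\left[(T_h - T_c) - T_c \ln\frac{T_h}{T_c}\right],
\qquad
W_{\mathrm{mp}} = \frac{E}{2},
\qquad
\eta_{\mathrm{mp}} = \frac{\eta_{\mathrm{qs}}}{2},
```

where E is the exergy of the source initially at T_h, W_mp the work output at maximum power, and η_qs the quasistatic efficiency E / C(T_h − T_c).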
Maheshwari, Govind; Chaudhary, S; Somani, S.K
2010-01-01
The efficient power, defined as the product of power output and efficiency of the engine, is taken as the objective for performance analysis and optimization of an endoreversible combined Carnot heat...
Sheng, Shiqi; Tu, Z C
2015-02-01
We present a unified perspective on nonequilibrium heat engines by generalizing nonlinear irreversible thermodynamics. For tight-coupling heat engines, a generic constitutive relation for nonlinear response accurate up to the quadratic order is derived from the stalling condition and the symmetry argument. By applying this generic nonlinear constitutive relation to finite-time thermodynamics, we obtain the necessary and sufficient condition for the universality of efficiency at maximum power, which states that a tight-coupling heat engine takes the universal efficiency at maximum power up to the quadratic order if and only if either the engine symmetrically interacts with two heat reservoirs or the elementary thermal energy flowing through the engine matches the characteristic energy of the engine. Hence we solve the following paradox: On the one hand, the quadratic term in the universal efficiency at maximum power for tight-coupling heat engines turned out to be a consequence of symmetry [Esposito, Lindenberg, and Van den Broeck, Phys. Rev. Lett. 102, 130602 (2009); Sheng and Tu, Phys. Rev. E 89, 012129 (2014)]; On the other hand, typical heat engines such as the Curzon-Ahlborn endoreversible heat engine [Curzon and Ahlborn, Am. J. Phys. 43, 22 (1975)] and the Feynman ratchet [Tu, J. Phys. A 41, 312003 (2008)] recover the universal efficiency at maximum power regardless of any symmetry.
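For reference, the Curzon-Ahlborn value mentioned in the abstract reproduces the universal efficiency at maximum power through quadratic order in the Carnot efficiency η_C = 1 − T_c/T_h:

```latex
\eta_{\mathrm{CA}} = 1 - \sqrt{\frac{T_c}{T_h}} = 1 - \sqrt{1 - \eta_C}
= \frac{\eta_C}{2} + \frac{\eta_C^2}{8} + \frac{\eta_C^3}{16} + \cdots
```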
Li, Yonghui; Wu, Qiuwei; Zhu, Haiyu
2015-01-01
Based on the benchmark solid oxide fuel cell (SOFC) dynamic model for power system studies and the analysis of the SOFC operating conditions, the nonlinear programming (NLP) optimization method was used to determine the maximum electrical efficiency of the grid-connected SOFC subject...
Boiroux, Dimitri; Juhl, Rune; Madsen, Henrik;
2016-01-01
This paper addresses maximum likelihood parameter estimation of continuous-time nonlinear systems with discrete-time measurements. We derive an efficient algorithm for the computation of the log-likelihood function and its gradient, which can be used in gradient-based optimization algorithms...
Bergboer, N.H; Verdult, V.; Verhaegen, M.H.G.
2002-01-01
We present a numerically efficient implementation of the nonlinear least squares and maximum likelihood identification of multivariable linear time-invariant (LTI) state-space models. This implementation is based on a local parameterization of the system and a gradient search in the resulting parameter space.
Maximum efficiency of steady-state heat engines at arbitrary power.
Ryabov, Artem; Holubec, Viktor
2016-05-01
We discuss the efficiency of a heat engine operating in a nonequilibrium steady state maintained by two heat reservoirs. Within the general framework of linear irreversible thermodynamics we derive a universal upper bound on the efficiency of the engine operating at arbitrary fixed power. Furthermore, we show that a slight decrease of the power below its maximal value can lead to a significant gain in efficiency. The presented analysis yields the exact expression for this gain and the corresponding upper bound.
Ortega-Casanova, Joaquin; Fernandez-Feria, Ramon
2015-11-01
The thrust generated by two heaving plates in tandem is analysed for two particular sets of configurations of interest in forward flight: a plunging leading plate with the trailing plate at rest, and the two plates heaving with the same frequency and amplitude but varying phase difference. The thrust efficiency of the leading plate is augmented relative to a single plate heaving with the same frequency and amplitude in most cases. In the first configuration, we characterize the range of nondimensional heaving frequencies and amplitudes of the leading plate for which the stationary trailing plate contributes positively to the global thrust. The maximum global thrust efficiency, reached for an advance ratio slightly less than unity and a reduced frequency close to 5, is about the same as the maximum efficiency for an isolated plate. But for low frequencies the tandem configuration with the trailing plate at rest is more thrust efficient than the isolated plate. In the second configuration, we find that the maximum thrust efficiency is reached for a phase lag of 180° (counterstroking), particularly for an advance ratio of unity and a reduced frequency of 4.4, and it is practically the same as in the other configuration and as that for a single plate. Supported by the Ministerio de Economía y Competitividad of Spain, Grant no. DPI2013-40479-P.
Ruslana Sushko
2015-08-01
Purpose: to identify the factors of efficiency of competitive activity of highly skilled basketball players at the stage of maximum realization of individual potential. Material and Methods: in order to identify the factors that have supported the performance of Ukraine's male national team in the European Championship, analysis and generalization of data from the scientific and technical literature and online sources, analysis of official protocols of competitive activities, analysis and generalization of best pedagogical practices, pedagogical supervision, and methods of mathematical statistics were used. Results: the efficiency of competitive activity of the basketball players was analyzed using indicators such as team roles, won and lost matches, scored and missed points, and technical, tactical and age indicators. Conclusions: the factors of efficiency of competitive activity of highly skilled basketball players at the stage of maximum realization of individual potential were identified with regard to age indicators.
Design, Development and Testing of a PC Based One Axis Sun Tracking System for Maximum Efficiency
Sonu AGARWAL
2011-08-01
Solar energy is a clean source of energy, and a photo-voltaic (PV) solar panel converts solar radiation into voltage. The PV solar panel produces the maximum power when the incident angle of sunlight is 90°. In the present paper a PC-based one-axis sun tracking system is described that keeps the PV solar panel perpendicular to the incident sunlight and thus achieves maximum solar power utilization. A computer-controlled stepper motor has been used in the tracking system to provide motion to the photovoltaic panel, and an LDR has been used as a photo sensor to sense the incident solar radiation. The implementation of the system has been realized by designing an optical-to-electrical signal conversion circuit, an analog-to-digital conversion circuit, a motor driving circuit and a parallel-port interface with the PC. Experimental results are also included in order to validate the system performance.
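A minimal sketch of the control logic such a tracker can use is given below. The sensor arrangement (east/west LDR pair), the dead band, and the step angle are illustrative assumptions, not the authors' values; real I/O would go through the A/D converter and the parallel-port stepper driver described in the abstract.

```python
DEADBAND = 0.05   # ignore LDR differences below this (assumed tuning value)
STEP_DEG = 1.8    # typical full-step angle of a stepper motor

def track_step(ldr_east, ldr_west, angle_deg):
    """One control iteration: step the panel toward the brighter LDR.

    ldr_east/ldr_west are normalized light readings; the returned angle
    would be issued to the stepper driver in a real system.
    """
    diff = ldr_east - ldr_west
    if abs(diff) <= DEADBAND:
        return angle_deg                     # panel already faces the sun
    return angle_deg + (STEP_DEG if diff > 0 else -STEP_DEG)
```

Running this in a slow loop (the sun moves about 15° per hour) keeps the panel near perpendicular incidence.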
Ore concentrate line efficient operation: some energy saving implications
Ihle, Christian F. [BRASS Engineering Chile S.A., Santiago (Chile)
2009-07-01
Among the outstanding attributes slurry pipelines must have is the ability to optimize production efficiency and, in particular, to minimize energy consumption. In the present paper, the energy-saving implications of three different factors, namely process variable uncertainties, transport control variables and pipeline availability, are discussed and exemplified using an idealized Bingham-type slurry pipeline. The examples suggest that important energy savings can be achieved with proper designs, equipment and operations scheduling.
Ruikun Mai
2017-02-01
One of the most promising inductive power transfer applications is the wireless power supply for locomotives, which may eliminate the need for pantographs. In order to meet the dynamic and high-power demands of wireless power supplies for locomotives, a relatively long transmitter track and multiple receivers are usually adopted. However, during dynamic charging the mutual inductances between the transmitter and the receivers vary, and the load of the locomotive also changes randomly, which dramatically affects the system efficiency. A maximum efficiency point tracking control scheme is proposed to improve the system efficiency against the variation of the load and of the mutual inductances between the transmitter and receivers, while considering the cross coupling between receivers. Firstly, a detailed theoretical analysis of dual receivers is carried out. Then a control scheme with three control loops is proposed to regulate the receiver currents to be the same, to regulate the output voltage, and to search for the maximum efficiency point. Finally, a 2 kW prototype is established to validate the performance of the proposed method. The overall system efficiency (DC-DC efficiency) reaches 90.6% at rated power and is improved by 5.8% with the proposed method under light load compared with the traditional constant-output-voltage control method.
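The three-loop controller itself is not reproduced here; the sketch below shows only the generic perturb-and-observe logic that a maximum efficiency point tracker of this kind can use in its search loop. The variable names, the tracked quantity (a voltage reference), and the perturbation size are assumptions for illustration.

```python
def mept_step(eff_now, eff_prev, v_ref, dv=0.1):
    """One perturb-and-observe iteration on the measured DC-DC efficiency.

    Keep perturbing the operating-point reference in the direction that
    improved efficiency; if efficiency dropped, undo the last step and
    reverse the search direction.
    """
    if eff_now >= eff_prev:
        return v_ref + dv, dv           # improvement: keep going
    return v_ref - 2 * dv, -dv          # worse: step back and reverse
```

Iterating this as the load and mutual inductances drift keeps the converter hovering around the maximum efficiency point.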
Lemofouet, Sylvain; Rufer, Alfred
This paper presents a hybrid energy storage system mainly based on compressed air, where the storage and withdrawal of energy are done within maximum efficiency conditions. As these maximum efficiency conditions impose the level of converted power, an intermittent time-modulated operation mode is applied to the thermodynamic converter to obtain a variable converted power. A smoothly variable output power is achieved with the help of a supercapacitive auxiliary storage device used as a filter. The paper describes the concept of the system, the power-electronic interfaces and especially the Maximum Efficiency Point Tracking (MEPT) algorithm and the strategy used to vary the output power. In addition, the paper introduces more efficient hybrid storage systems where the volumetric air machine is replaced by an oil-hydraulics and pneumatics converter, used under isothermal conditions. Practical results are also presented, recorded from a low-power air motor coupled to a small DC generator, as well as from a first prototype of the hydro-pneumatic system. Some economic considerations are also made, through a comparative cost evaluation of the presented hydro-pneumatic systems and a lead-acid battery system, in the context of a stand-alone photovoltaic home application. This evaluation confirms the cost effectiveness of the presented hybrid storage systems.
INVESTIGATION OF VEHICLE WHEEL ROLLING WITH MAXIMUM EFFICIENCY IN THE BRAKE MODE
D. Leontev
2011-01-01
Modern vehicles are equipped with various automatic braking-force control systems, the calculation of whose parameters does not, as a rule, have a rational solution. In order to increase the working efficiency of such systems it is necessary to have data concerning the impact of various operational factors on the processes occurring during braking of the object of adjustment (the vehicle wheel). The availability of data on the impact of operational factors makes it possible to decrease the geometrical parameters of the adjustment devices (modulators) and to maintain their efficient operation under the various exploitation conditions of vehicle motion.
Maximum-Likelihood Detection for Energy-Efficient Timing Acquisition in NB-IoT
2016-01-01
Initial timing acquisition in narrow-band IoT (NB-IoT) devices is done by detecting a periodically transmitted known sequence. The detection has to be done at the lowest possible latency, because the RF transceiver, which dominates the downlink power consumption of an NB-IoT modem, has to be turned on throughout this time. Auto-correlation detectors show low computational complexity from a signal processing point of view at the price of a higher detection latency. In contrast, a maximum likelihood cro...
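The sequence below is a random stand-in, not the actual NB-IoT synchronization waveform; the sketch only illustrates why maximum-likelihood detection of a known sequence in white Gaussian noise reduces to peak-picking a cross-correlation.

```python
import numpy as np

rng = np.random.default_rng(1)

# QPSK-like stand-in for the periodically transmitted known sequence
seq = np.exp(1j * np.pi * rng.integers(0, 4, 64) / 2.0)

# Received window: complex white noise with the sequence at an unknown offset
offset = 300
rx = 0.5 * (rng.normal(size=1024) + 1j * rng.normal(size=1024))
rx[offset:offset + 64] += seq

# For a known sequence in white noise, the maximum-likelihood timing
# estimate is the index of the cross-correlation magnitude peak.
# (np.correlate conjugates its second argument, as required here.)
corr = np.abs(np.correlate(rx, seq, mode="valid"))
est = int(np.argmax(corr))
```

An auto-correlation detector would instead correlate the received signal with a delayed copy of itself, which is cheaper per sample but needs to observe more repetitions before it can decide.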
Toward Improved Rotor-Only Axial Fans—Part II: Design Optimization for Maximum Efficiency
Sørensen, Dan Nørtoft; Thompson, M. C.; Sørensen, Jens Nørkær
2000-01-01
Numerical design optimization of the aerodynamic performance of axial fans is carried out, maximizing the efficiency in a design interval of flow rates. The tip radius, number of blades, and angular velocity of the rotor are fixed, whereas the hub radius and spanwise distributions of chord length...
Efficient strategies for genome scanning using maximum-likelihood affected-sib-pair analysis
Holmans, P.; Craddock, N. [Univ. of Wales College of Medicine, Cardiff (United Kingdom)
1997-03-01
Detection of linkage with a systematic genome scan in nuclear families including an affected sibling pair is an important initial step on the path to cloning susceptibility genes for complex genetic disorders, and it is desirable to optimize the efficiency of such studies. The aim is to maximize power while simultaneously minimizing the total number of genotypings and probability of type I error. One approach to increase efficiency, which has been investigated by other workers, is grid tightening: a sample is initially typed using a coarse grid of markers, and promising results are followed up by use of a finer grid. Another approach, not previously considered in detail in the context of an affected-sib-pair genome scan for linkage, is sample splitting: a portion of the sample is typed in the screening stage, and promising results are followed up in the whole sample. In the current study, we have used computer simulation to investigate the relative efficiency of two-stage strategies involving combinations of both grid tightening and sample splitting and found that the optimal strategy incorporates both approaches. In general, typing half the sample of affected pairs with a coarse grid of markers in the screening stage is an efficient strategy under a variety of conditions. If Hardy-Weinberg equilibrium holds, it is most efficient not to type parents in the screening stage. If Hardy-Weinberg equilibrium does not hold (e.g., because of stratification) failure to type parents in the first stage increases the amount of genotyping required, although the overall probability of type I error is not greatly increased, provided the parents are used in the final analysis. 23 refs., 4 figs., 5 tabs.
Richards, V. M.; Dai, W.
2014-01-01
A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. The toolbox uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by toolbox use examples. Finally, guidelines and recommendations of parameter configurations are given. PMID:24671826
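The maximum-likelihood machinery behind such adaptive procedures can be illustrated outside MATLAB. The sketch below (Python, with invented "true" observer parameters; not the UML Toolbox itself) fits the threshold and slope of a logistic psychometric function to simulated two-alternative forced-choice trials by brute-force grid search over the likelihood surface:

```python
import math
import random

def psychometric(x, alpha, beta, gamma=0.5, lam=0.02):
    # Logistic psychometric function: guess rate gamma (2AFC), lapse rate lam
    return gamma + (1.0 - gamma - lam) / (1.0 + math.exp(-beta * (x - alpha)))

def neg_log_likelihood(trials, alpha, beta):
    nll = 0.0
    for x, correct in trials:
        p = min(max(psychometric(x, alpha, beta), 1e-9), 1.0 - 1e-9)
        nll -= math.log(p if correct else 1.0 - p)
    return nll

# Simulated 2AFC trials from a hypothetical observer (threshold 0.0, slope 2.0)
random.seed(1)
trials = []
for _ in range(800):
    x = random.uniform(-3.0, 3.0)
    trials.append((x, random.random() < psychometric(x, 0.0, 2.0)))

# Maximum-likelihood estimate of (threshold, slope) by grid search
grid = [(a / 10.0, b / 10.0) for a in range(-20, 21) for b in range(5, 45)]
alpha_hat, beta_hat = min(grid, key=lambda ab: neg_log_likelihood(trials, *ab))
print(alpha_hat, beta_hat)
```

The UML procedure replaces this exhaustive search with sequential Bayesian updating and adaptive stimulus placement, but the likelihood being maximized has the same shape.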
Maximum Efficiency of Thermoelectric Heat Conversion in High-Temperature Power Devices
V. I. Khvesyuk
2016-01-01
Modern trends in aircraft engineering center on the development of fifth-generation vehicles, whose features motivate the use of new high-performance onboard power supply systems. The operating temperature of the outer walls of engines is 800–1000 K, corresponding to a radiation heat flux of 10 kW/m2. The thermal energy, including radiation from the engine wall, may potentially be converted into electricity. The main objective of this paper is to analyze whether highly efficient thermoelectric conversion of heat into electricity is possible. The paper considers issues such as working processes, choice of materials, and optimization of thermoelectric conversion. It presents the analysis results for the operating conditions of a thermoelectric generator (TEG) used in advanced high-temperature power devices. A high-temperature heat source is a favorable factor for the thermoelectric conversion of heat. It is shown that for existing thermoelectric materials a theoretical conversion efficiency of 15–20% can be reached at temperatures up to 1500 K and available values of the Ioffe parameter, ZT = 2–3 (Z is the figure of merit, T the temperature). To ensure the temperature regime and highly efficient thermoelectric conversion simultaneously, a certain match is required between the TEG power, the temperatures of the hot and cold surfaces, and the heat transfer coefficient of the cooling system. The paper discusses a concept of a radiation absorber on the TEG hot surface. The analysis has demonstrated a number of possibilities for highly efficient conversion through use of TEGs in high-temperature power devices. This work was implemented with support of the Ministry of Education and Science of the Russian Federation, project No. 1145 (the programme "Organization of Research Engineering Activities").
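The quoted 15–20% figure can be reproduced with the standard constant-property formula for the maximum conversion efficiency of a TEG. The operating point below (hot side 1500 K, cold side 800 K, ZT = 3) is an assumption chosen from the ranges mentioned in the abstract, not a value stated by the authors:

```python
import math

def teg_max_efficiency(t_hot, t_cold, zt):
    # Standard constant-property formula: Carnot factor times a ZT-dependent
    # material factor that tends to 1 as ZT grows without bound
    carnot = 1.0 - t_cold / t_hot
    m = math.sqrt(1.0 + zt)
    return carnot * (m - 1.0) / (m + t_cold / t_hot)

# Assumed operating point: hot side 1500 K, cold side 800 K, ZT = 3
eta = teg_max_efficiency(1500.0, 800.0, 3.0)
print(f"{eta:.3f}")  # → 0.184, inside the 15-20% range quoted above
```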
Thien-Tong Nguyen; Doyoung Byun
2008-01-01
In the "modified quasi-steady" approach, two-dimensional (2D) aerodynamic models of flapping wing motions are analyzed with a focus on different types of wing rotation and different positions of the rotation axis, to explain the force peak at the end of each half stroke. In this model, an additional velocity of the mid-chord position due to rotation is superimposed on the translational relative velocity of air with respect to the wing. This modification produces augmented forces around the end of each stroke. For each case of the flapping wing motions, with various combinations of controlled translational and rotational velocities of the wing along inclined stroke planes with a thin figure-of-eight trajectory, the discussion focuses on the lift-drag evolution during one stroke cycle and on the efficiency of the types of wing rotation. This "modified quasi-steady" approach provides a systematic analysis of various parameters and their effects on the efficiency of the flapping wing mechanism. A flapping mechanism with delayed rotation around the quarter-chord axis is an efficient one and can be made simple by a passive rotation mechanism, making it useful for robotic applications.
Paleodust variability since the Last Glacial Maximum and implications for iron inputs to the ocean
Albani, S.; Mahowald, N. M.; Murphy, L. N.; Raiswell, R.; Moore, J. K.; Anderson, R. F.; McGee, D.; Bradtmiller, L. I.; Delmonte, B.; Hesse, P. P.; Mayewski, P. A.
2016-04-01
Changing climate conditions affect dust emissions and the global dust cycle, which in turn affects climate and biogeochemistry. In this study we use observationally constrained model reconstructions of the global dust cycle since the Last Glacial Maximum, combined with different simplified assumptions of atmospheric and sea ice processing of dust-borne iron, to provide estimates of soluble iron deposition to the oceans. For different climate conditions, we discuss uncertainties in model-based estimates of atmospheric processing and dust deposition to key oceanic regions, highlighting the large degree of uncertainty of this important variable for ocean biogeochemistry and the global carbon cycle. We also show the role of sea ice acting as a time buffer and processing agent, which results in a delayed and pulse-like soluble iron release into the ocean during the melting season, with monthly peaks up to ~17 Gg/month released into the Southern Oceans during the Last Glacial Maximum (LGM).
Environmental implications of water efficient microcomponents in residential buildings.
Fidar, A; Memon, F A; Butler, D
2010-11-01
The Code for Sustainable Homes (CSH) in England sets out various water efficiency targets/levels, which form part of the environmental performance criteria against which the sustainability of a building is measured. The code is performance based and requires a reduction in per capita water consumption in households. The water efficiency targets can be met using a range of water-efficient microcomponents (WC, showers, kitchen taps, basin taps, dishwashers, washing machines, and baths). However, while the CSH aims at reducing the adverse environmental implications associated with dwellings by promoting reduced water consumption, little is known about the energy consumption and the environmental impacts (e.g., carbon emissions) resulting from water-efficient end uses. This paper describes a methodology to evaluate the energy consumption and carbon emissions associated with the CSH's water efficiency levels. Key findings are that some 96% and 87% of energy use and carbon emissions, respectively, associated with urban water provision are attributable to in-house consumption (principally related to hot water), and that achieving a defined water efficiency target does not automatically save energy or reduce carbon emissions.
Efficient and exact maximum likelihood quantisation of genomic features using dynamic programming.
Song, Mingzhou; Haralick, Robert M; Boissinot, Stéphane
2010-01-01
An efficient and exact dynamic programming algorithm is introduced to quantise a continuous random variable into a discrete random variable that maximises the likelihood of the quantised probability distribution for the original continuous random variable. Quantisation is often useful before statistical analysis and modelling of large discrete network models from observations of multiple continuous random variables. The quantisation algorithm is applied to genomic features including the recombination rate distribution across the chromosomes and the non-coding transposable element LINE-1 in the human genome. The association pattern is studied between the recombination rate, obtained by quantisation at genomic locations around LINE-1 elements, and the length groups of LINE-1 elements, also obtained by quantisation on LINE-1 length. The exact and density-preserving quantisation approach provides an alternative superior to the inexact and distance-based univariate iterative k-means clustering algorithm for discretisation.
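The dynamic program admits a compact formulation: with candidate bin edges at midpoints between sorted observations, the log-likelihood of a piecewise-uniform density decomposes over bins, so the exact optimum over k bins follows from a prefix recursion. A minimal Python sketch of this idea (not the authors' implementation):

```python
import math

def ml_quantize(data, k):
    """Exact DP for the k-bin quantization maximizing the likelihood of a
    piecewise-uniform density over the data range."""
    xs = sorted(data)
    n = len(xs)
    # Candidate edges: the extremes plus midpoints between adjacent points,
    # so elementary cell i (between edges[i] and edges[i+1]) holds one point.
    edges = [xs[0]] + [(xs[i] + xs[i + 1]) / 2 for i in range(n - 1)] + [xs[-1]]
    NEG = float("-inf")

    def bin_ll(a, b):
        # Log-likelihood contribution of merging cells a..b-1 into one bin
        cnt, width = b - a, edges[b] - edges[a]
        return cnt * math.log(cnt / (n * width)) if width > 0 else NEG

    # best[m][j]: max log-likelihood of covering the first j cells with m bins
    best = [[NEG] * (n + 1) for _ in range(k + 1)]
    cut = [[0] * (n + 1) for _ in range(k + 1)]
    best[0][0] = 0.0
    for m in range(1, k + 1):
        for j in range(m, n + 1):
            for i in range(m - 1, j):
                cand = best[m - 1][i] + bin_ll(i, j)
                if cand > best[m][j]:
                    best[m][j], cut[m][j] = cand, i

    bounds, j = [edges[n]], n  # trace the optimal cuts back
    for m in range(k, 0, -1):
        j = cut[m][j]
        bounds.append(edges[j])
    return bounds[::-1]

print(ml_quantize([0.0, 1.0, 2.0, 10.0], 2))  # → [0.0, 1.5, 10.0]
```

Unlike iterative k-means discretization, this search is exact: every placement of bin boundaries at candidate edges is implicitly evaluated.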
Aragon-Gonzalez, G; Leon-Galicia, A; Morales-Gomez, J R
2007-01-01
In this work we include, for the Carnot cycle, irreversibilities due to a linear finite rate of heat transfer between the heat engine and its reservoirs, heat leak between the reservoirs, and internal dissipation of the working fluid. A first optimization of the power output, efficiency, and ecological function of an irreversible Carnot cycle is performed with respect to the internal temperature ratio, the time ratio for the heat exchange, and the allocation ratio of the heat exchangers. For the second and third optimizations, the optimum values for the time ratio and internal temperature ratio are substituted into the equation for the power, and the optimizations with respect to the cost and effectiveness ratio of the heat exchangers are then performed. Finally, a criterion of partial optimization for the class of irreversible Carnot engines is presented.
Quantum Coherent Three-Terminal Thermoelectrics: Maximum Efficiency at Given Power Output
Robert S. Whitney
2016-05-01
This work considers the nonlinear scattering theory for three-terminal thermoelectric devices used for power generation or refrigeration. Such systems are quantum phase-coherent versions of a thermocouple, and the theory applies to systems in which interactions can be treated at a mean-field level. It considers an arbitrary three-terminal system in any external magnetic field, including systems with broken time-reversal symmetry, such as chiral thermoelectrics, as well as systems in which the magnetic field plays no role. It is shown that the upper bound on efficiency at given power output is of quantum origin and is stricter than Carnot's bound. The bound is exactly the same as previously found for two-terminal devices and can be achieved by three-terminal systems with or without broken time-reversal symmetry, i.e., chiral and non-chiral thermoelectrics.
Rizzo, R. E.; Healy, D.; De Siena, L.
2017-02-01
The success of any predictive model is largely dependent on the accuracy with which its parameters are known. When characterising fracture networks in rocks, one of the main issues is accurately scaling the parameters governing the distribution of fracture attributes. Optimal characterisation and analysis of fracture lengths and apertures are fundamental to estimate bulk permeability and therefore fluid flow, especially for rocks with low primary porosity where most of the flow takes place within fractures. We collected outcrop data from a fractured upper Miocene biosiliceous mudstone formation (California, USA), which exhibits seepage of bitumen-rich fluids through the fractures. The dataset was analysed using Maximum Likelihood Estimators to extract the underlying scaling parameters, and we found a log-normal distribution to be the best representative statistic for both fracture lengths and apertures in the study area. By applying Maximum Likelihood Estimators on outcrop fracture data, we generate fracture network models with the same statistical attributes to the ones observed on outcrop, from which we can achieve more robust predictions of bulk permeability.
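For a log-normal distribution, the maximum-likelihood parameters used in this kind of fracture-attribute fitting have a closed form: the mean and standard deviation of the log-transformed data. A minimal sketch with synthetic "fracture lengths" (the true parameters and sample are invented for illustration):

```python
import math
import random

def lognormal_mle(samples):
    # Closed-form ML estimates: mu and sigma of the underlying normal
    # distribution of log-values
    logs = [math.log(s) for s in samples]
    n = len(logs)
    mu = sum(logs) / n
    sigma = math.sqrt(sum((v - mu) ** 2 for v in logs) / n)
    return mu, sigma

# Synthetic lengths drawn from a known log-normal (mu = 1.0, sigma = 0.5)
random.seed(42)
lengths = [math.exp(random.gauss(1.0, 0.5)) for _ in range(5000)]
mu_hat, sigma_hat = lognormal_mle(lengths)
print(round(mu_hat, 3), round(sigma_hat, 3))
```

In practice one would compare candidate distributions (power law, exponential, log-normal) by their maximized likelihoods before generating network models from the winning fit.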
Gieles, M; Bastian, N; Stein, I; Gieles, Mark; Larsen, Soeren; Bastian, Nate; Stein, Ilaan
2005-01-01
We introduce a method to relate a possible truncation of the star cluster mass function at the high-mass end to the shape of the cluster luminosity function (LF). We compare the observed LFs of five galaxies containing young star clusters with synthetic cluster population models with varying initial conditions. The LFs of the SMC, the LMC, and NGC 5236 are characterized by a power-law behavior N dL ~ L^-a dL, with a mean exponent of <a> = 2.0 +/- 0.2. This can be explained by a cluster population formed with a constant cluster formation rate, in which the maximum cluster mass per logarithmic age bin is determined by the size-of-sample effect and therefore increases with log(age/yr). The LFs of NGC 6946 and M51 are better described by a double power-law distribution or a Schechter function. When a cluster population has a mass function that is truncated below the limit given by the size-of-sample effect, the total LF shows a bend at the magnitude of the maximum mass, with the age of the oldest cluster in the population...
Hapenciuc, C. L.; Borca-Tasciuc, T.; Mihailescu, I. N.
2017-04-01
Thermoelectric materials are used today in thermoelectric devices for heat-to-electricity (thermoelectric generators, TEGs) or electricity-to-heat (heat pumps) conversion in a large range of applications. In the case of TEGs, the final measure of performance is the maximum efficiency, which shows how much of the heat input is converted into electrical power; it is therefore of great interest to know the efficiency of a device correctly in order to make commercial assessments. The concepts of engineering figure of merit, Zeng, and engineering power factor, Peng, were already introduced in the field to quantify the efficiency of a single material with temperature-dependent thermoelectric properties, with the caveat that the formula derivation was limited to one leg of the thermoelectric generator. In this paper we propose to extend the concept of the engineering figure of merit to a thermoelectric generator by introducing a more general device engineering thermoelectric figure of merit, Zd,eng, which depends on the properties of both TEG materials and which is the right quantity to use when evaluating the efficiency. This work also takes into account the electrical contact resistance between the electrodes and the thermoelement legs in an attempt to quantify its influence on the performance of a TEG. Finally, a new formula is proposed for the maximum efficiency of a TEG.
Optimizing WiMAX: Mitigating Co-Channel Interference for Maximum Spectral Efficiency
ABDUL QADIR ANSARI
2016-10-01
The efficient use of radio spectrum is one of the most important issues in wireless networks, because spectrum is generally limited and the wireless environment is constrained by channel interference. To cope with this and make better use of the radio spectrum, wireless networks use the frequency reuse technique, which allows the same frequency band to be used in different cells of the same network, subject to the inter-cell distance and the resulting interference level. The WiMAX (Worldwide Interoperability for Microwave Access) PHY profile is designed to use an FRF (Frequency Reuse Factor) of one. An FRF of one improves spectral efficiency but also causes CCI (Co-Channel Interference) at cell boundaries. The effect of interference must always be measured so that averaging/minimization techniques can be incorporated to keep the interference level below an acceptable threshold in the wireless environment. In this paper, we analyze how effectively the impact of CCI can be mitigated by using the different subcarrier permutation types presented in the IEEE 802.16 standard. A simulation-based analysis is presented of the impact on CCI, under varying load conditions, of using the same and different permutation bases in adjacent cells of a WiMAX network. We further study the effect of the permutation base in an environment where frequency reuse is used in conjunction with cell sectoring for better utilization of the radio spectrum.
Higuita Cano, Mauricio; Mousli, Mohamed Islam Aniss; Kelouwani, Sousso; Agbossou, Kodjo; Hammoudi, Mhamed; Dubé, Yves
2017-03-01
This work investigates the design and validation of a fuel cell management system (FCMS) which can perform when the fuel cell is at water-freezing temperature. The FCMS is based on a new tracking technique with intelligent prediction, which combines Maximum Efficiency Point Tracking with a variable perturbation-current step and the fuzzy logic technique (MEPT-FL). Unlike conventional fuel cell control systems, the proposed FCMS accounts for cold-weather conditions and reduces fuel cell set-point oscillations. In addition, the FCMS is built to respond quickly and effectively to variations of the electric load. A temperature controller stage is designed in conjunction with the MEPT-FL in order to operate the FC at low temperatures whilst tracking the maximum efficiency point at the same time. The simulation results, together with the experimental validation, suggest that the proposed approach is effective and can achieve an average efficiency improvement of up to 8%. The MEPT-FL is validated using a Proton Exchange Membrane Fuel Cell (PEMFC) of 500 W.
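The core of any such tracker is a perturb-and-observe loop: nudge the current set-point and keep the direction that raises the measured efficiency. The sketch below is a generic fixed-step P&O loop on an invented concave efficiency curve; the paper's MEPT-FL additionally uses fuzzy logic to vary the perturbation step:

```python
def track_maximum_efficiency(efficiency, i0=10.0, step=0.5, iters=60):
    """Perturb-and-observe tracking: perturb the current set-point i and
    reverse direction whenever the measured efficiency drops."""
    i, direction = i0, 1.0
    prev = efficiency(i)
    for _ in range(iters):
        i += direction * step
        now = efficiency(i)
        if now < prev:            # efficiency dropped: reverse the perturbation
            direction = -direction
        prev = now
    return i

# Hypothetical stack-efficiency curve with a single maximum at 25 A
eff = lambda i: 0.55 - 0.0004 * (i - 25.0) ** 2
i_star = track_maximum_efficiency(eff)
print(round(i_star, 1))  # settles near the 25 A optimum
```

The fixed step is what causes the residual set-point oscillation around the optimum; shrinking the step adaptively (as the fuzzy-logic variant does) reduces it.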
Carbonic Anhydrase: An Efficient Enzyme with Possible Global Implications
Christopher D. Boone
2013-01-01
As the global atmospheric emissions of carbon dioxide (CO2) and other greenhouse gases continue to grow to record-setting levels, so do the demands for an efficient and inexpensive carbon sequestration system. Concurrently, first-world dependence on crude oil and natural gas provokes concerns about long-term availability and emphasizes the need for alternative fuel sources. At the forefront of both of these research areas is a family of enzymes known as the carbonic anhydrases (CAs), which reversibly catalyze the hydration of CO2 into bicarbonate. CAs are among the fastest enzymes known, with a maximum catalytic efficiency approaching the diffusion limit of 10^8 M^-1 s^-1. As such, CAs are being utilized in various industrial and research settings to help lower atmospheric CO2 emissions and promote biofuel production. This review highlights some of the recent accomplishments in these areas, along with a discussion of their current limitations.
Amauris Gilbert-Hernández
2016-05-01
A procedure for the selection of the maximum pipe thickness needed to achieve efficient thermal insulation in piping with steam tracing was developed. The bibliographical review identified the limitations of previous investigations with regard to the selection of pipe thickness in transfer systems with steam tracing. A model for calculating the overall heat loss was prepared. The procedure considers economic criteria for the selection of pipe thickness and establishes an optimal thickness value which guarantees a minimum total cost by balancing the expenditures resulting from heat loss against the project costs.
Barth, Aaron M.; Clark, Peter U.; Clark, Jorie; McCabe, A. Marshall; Caffee, Marc
2016-06-01
Reconstructions of the extent and height of the Irish Ice Sheet (IIS) during the Last Glacial Maximum (LGM, ∼19-26 ka) are widely debated, in large part due to limited age constraints on former ice margins and due to uncertainties in the origin of the trimlines. A key area is southwestern Ireland, where various LGM reconstructions range from complete coverage by a contiguous IIS that extends to the continental shelf edge to a separate, more restricted southern-sourced Kerry-Cork Ice Cap (KCIC). We present new 10Be surface exposure ages from two moraines in a cirque basin in the Macgillycuddy's Reeks that provide a unique and unequivocal constraint on ice thickness for this region. Nine 10Be ages from an outer moraine yield a mean age of 24.5 ± 1.4 ka while six ages from an inner moraine yield a mean age of 20.4 ± 1.2 ka. These ages show that the northern flanks of the Macgillycuddy's Reeks were not covered by the IIS or a KCIC since at least 24.5 ± 1.4 ka. If there was more extensive ice coverage over the Macgillycuddy's Reeks during the LGM, it occurred prior to our oldest ages.
van Simaeys, S.; Brinkhuis, H.; Pross, J.; Williams, G. L.; Zachos, J. C.
2004-12-01
Various geochemical and biotic climate proxies, and notably deep-sea benthic foraminiferal δ18O records, indicate that the Eocene 'greenhouse' state of the Earth gradually evolved towards an earliest Oligocene 'icehouse' state, eventually triggering the abrupt appearance of large continental ice-sheets on Antarctica at ~33.3 Ma (Oi-1 event). This, however, was only the first of two major glacial events in the Oligocene. Benthic foraminiferal δ18O records show a second positive excursion in the mid Oligocene, consistent with a significant ice-sheet expansion and/or cooling at 27.1 Ma (Oi-2b), coincident with magnetosubchron C9n. Here, we report on a mid Oligocene, globally synchronous, Arctic dinoflagellate migration event, calibrated against the upper half of C9n. A sudden appearance, and abundance increases, of the Arctic taxon Svalbardella at lower-middle latitudes coincides with the so-called Oi-2b benthic δ18O event, dated at ~27.1 Ma. This phenomenon is taken to indicate significant high-latitude surface water cooling, concomitant Antarctic ice-sheet growth, and sea level lowering. The duration of the Svalbardella migrations, and the episode of profound cooling, is estimated as ~500 ka, and is here termed the Oligocene Glacial Maximum (OGM). Our records suggest a close link between the OGM, sea-level fall, and the classic Rupelian-Chattian boundary, magnetostratigraphically dating this boundary as ~27.1 Ma.
Cui, Y.; Kump, L.; Diefendorf, A. F.; Freeman, K. H.
2011-12-01
The Paleocene-Eocene Thermal Maximum (PETM; ca. 55.9 Ma) was an interval of geologically abrupt global warming lasting ~200 ka. It has been proposed as an ancient analogue for future climate response to CO2 emission from fossil fuel burning. The onset of this event was fueled by a large release of 13C-depleted carbon into the ocean-atmosphere system. However, there is a large discrepancy in the magnitude of the carbon isotope excursion (CIE) between marine and terrestrial records. Here we present new organic geochemical data and stable carbon isotope records from n-alkanes and pristane extracted from core materials representing the most expanded PETM section yet recovered from a nearshore marine early Cenozoic succession from Spitsbergen. The low hydrogen index and oxygen index indicate that the organic matter has been thermally altered, consistent with n-alkanes that do not show a clear odd-over-even predominance, as reflected by the low and constant carbon preference index. The δ13C records of long-chain n-alkanes from core BH9-05 track the δ13C recorded in total organic carbon, but are ~3‰ more negative prior to the CIE, ~4.5‰ more negative during the CIE, and ~4‰ more negative after the CIE. An orbital age model derived from the same core suggests a more abrupt onset of the CIE in the n-alkanes than in the bulk organic carbon, indicating possibly climate-induced modification of the observed feature in n-alkanes. In addition, the carbon isotope values of individual long-chain (n-C27 to n-C31) n-alkanes tend to become less negative with increasing chain length, resulting in the smallest-magnitude CIEs in longer chain lengths (i.e., n-C31) and the largest-magnitude CIEs in shorter chain lengths (i.e., n-C27). We are currently considering the effect of plant community and paleoclimate on the observed pattern of CIE in n-alkanes to evaluate carbon cycle perturbations and Arctic hydrology changes during the PETM. One interpretation of these patterns is that there was an
Nimo, Antwi; Grgic, Dario; Reindl, Leonhard M.
2012-04-01
This work presents the optimization of radio frequency (RF) to direct current (DC) circuits using Schottky diodes for remote wireless energy harvesting applications. Since different applications require different wireless RF-to-DC circuits, RF harvesters are presented for several applications, and the analytical parameters influencing the sensitivity and efficiency of the circuits are given. The results in this report are analytical, simulated, and measured. The presented circuits operate around 434 MHz. An L-matched RF-to-DC circuit achieves a maximum efficiency of 27% at -35 dBm input. A voltage multiplier achieves an open-circuit voltage of 6 V at 0 dBm input. A broadband circuit with a frequency band of 300 MHz achieves an average efficiency of 5% at -30 dBm and an open-circuit voltage of 47 mV. A high quality factor (Q) circuit is also realized with PI-network matching for narrowband applications.
Mehrotra, Shakti; Prakash, O; Khan, Feroz; Kukreja, A K
2013-02-01
KEY MESSAGE: An ANN-based combinatorial model is proposed and its efficiency assessed for the prediction of optimal culture conditions to achieve maximum productivity in a bioprocess in terms of high biomass. A neural network approach is utilized in combination with the Hidden Markov concept to assess the optimal values of the environmental factors that result in maximum biomass productivity of cultured tissues after a definite culture duration. Five hidden Markov models (HMMs) were derived for five test culture conditions, i.e. pH of the liquid growth medium, volume of medium per culture vessel, sucrose concentration (% w/v) in the growth medium, nitrate concentration (g/l) in the medium and, finally, the density of the initial inoculum (g fresh weight) per culture vessel, together with the corresponding fresh weight biomass. The artificial neural network (ANN) model was represented as a function of these five Markov models, and the overall simulation of fresh weight biomass was done with this combinatorial ANN-HMM. The empirical results of Rauwolfia serpentina hairy roots were taken as the model system and compared with simulated results obtained from the pure ANN and the ANN-HMM. The stochastic testing and Cronbach's α-value of the pure and combinatorial models revealed more internal consistency and a more skewed character (0.4635) in the histogram of the ANN-HMM compared to the pure ANN (0.3804). The simulated optimal conditions for maximum fresh weight production obtained from the ANN-HMM and ANN models closely resemble the experimentally optimized culture conditions under which the highest fresh weight was obtained. However, only a 2.99% deviation from the experimental values was observed for the combinatorial model, compared to 5.44% for the pure ANN model. This comparison showed a 45% better potential of the combinatorial model for the prediction of optimal culture conditions for the best growth of hairy root cultures.
WANG Yang; TU Zhan-Chun
2013-01-01
The Carnot-like heat engines are classified into three types (normal-, sub- and super-dissipative) according to the relations between the minimum irreversible entropy production in the "isothermal" processes and the time for completing those processes. The efficiencies at maximum power of normal-, sub- and super-dissipative Carnot-like heat engines are proved to be bounded between ηc/2 and ηc/(2-ηc), between ηc/2 and ηc, and between 0 and ηc/(2-ηc), respectively. These bounds are also shared by linear, sub- and super-linear irreversible Carnot-like engines [Tu and Wang, Europhys. Lett. 98 (2012) 40001], although the dissipative engines and the irreversible ones are inequivalent to each other.
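The bounds for the normal-dissipative class are easy to check numerically. As a sanity check, the Curzon-Ahlborn efficiency 1 - sqrt(1 - ηc), the efficiency at maximum power in the symmetric-dissipation limit, always falls between them:

```python
import math

def emp_bounds(eta_c):
    # Lower/upper bounds on efficiency at maximum power for the
    # normal-dissipative class of Carnot-like engines
    return eta_c / 2.0, eta_c / (2.0 - eta_c)

# The Curzon-Ahlborn value lies inside the bounds for any Carnot efficiency
for eta_c in (0.1, 0.3, 0.5, 0.7, 0.9):
    lo, hi = emp_bounds(eta_c)
    eta_ca = 1.0 - math.sqrt(1.0 - eta_c)
    assert lo <= eta_ca <= hi
    print(f"eta_C={eta_c}: {lo:.3f} <= {eta_ca:.3f} <= {hi:.3f}")
```

Note how the window narrows as ηc → 0 (both bounds approach ηc/2, the universal linear-response value) and widens for strong driving.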
Potvin, Jean; Goldbogen, Jeremy A; Shadwick, Robert E
2012-01-01
Bulk-filter feeding is an energetically efficient strategy for resource acquisition and assimilation, and facilitates the maintenance of extreme body size as exemplified by baleen whales (Mysticeti) and multiple lineages of bony and cartilaginous fishes. Among mysticetes, rorqual whales (Balaenopteridae) exhibit an intermittent ram filter feeding mode, lunge feeding, which requires the abandonment of body-streamlining in favor of a high-drag, mouth-open configuration aimed at engulfing a very large amount of prey-laden water. Particularly while lunge feeding on krill (the most widespread prey preference among rorquals), the effort required during engulfment involves short bouts of high-intensity muscle activity that demand high metabolic output. We used computational modeling together with morphological and kinematic data on humpback (Megaptera novaeangliae), fin (Balaenoptera physalus), blue (Balaenoptera musculus) and minke (Balaenoptera acutorostrata) whales to estimate engulfment power output in comparison with standard metrics of metabolic rate. The simulations reveal that engulfment metabolism increases across the full body size of the larger rorqual species to nearly 50 times the basal metabolic rate of terrestrial mammals of the same body mass. Moreover, they suggest that the metabolism of the largest body sizes runs with significant oxygen deficits during mouth opening, namely 20% over maximum VO2 at the size of the largest blue whales, thus requiring significant contributions from anaerobic catabolism during a lunge and significant recovery after a lunge. Our analyses show that engulfment metabolism is also significantly lower for smaller adults, typically one-tenth to one-half of VO2max. These results not only point to a physiological limit on maximum body size in this lineage, but also have major implications for the ontogeny of extant rorquals as well as the evolutionary pathways used by ancestral toothed whales to transition from hunting individual prey...
Várnai, Csilla; Burkoff, Nikolas S; Wild, David L
2013-12-10
Maximum Likelihood (ML) optimization schemes are widely used for parameter inference. They iteratively maximize the likelihood of some experimentally observed data with respect to the model parameters, following the gradient of the logarithm of the likelihood. Here, we employ an ML inference scheme to infer a generalizable, physics-based coarse-grained protein model (which includes Gō-like biasing terms to stabilize secondary structure elements in room-temperature simulations), using native conformations of a training set of proteins as the observed data. Contrastive divergence, a novel statistical machine learning technique, is used to efficiently approximate the direction of the gradient ascent, which enables the use of a large training set of proteins. Unlike previous work, the generalizability of the protein model allows the folding of peptides and a protein (protein G) which are not part of the training set. We compare the same force field with different van der Waals (vdW) potential forms: a hard cutoff model, and a Lennard-Jones (LJ) potential with vdW parameters inferred or adopted from the CHARMM or AMBER force fields. Simulations of peptides and protein G show that the LJ model with inferred parameters outperforms the hard cutoff potential, which is consistent with previous observations. Simulations using the LJ potential with inferred vdW parameters also outperform the protein models with adopted vdW parameter values, demonstrating that model parameters generally cannot be used with force fields with different energy functions. The software is available at https://sites.google.com/site/crankite/.
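"Following the gradient of the logarithm of the likelihood" is the same recipe at any scale; a toy illustration (unrelated to the protein model itself): for a Gaussian with known unit variance, the log-likelihood gradient with respect to the mean is sum(x_i - mu), so gradient ascent recovers the ML estimate, the sample mean:

```python
import random

# Invented data from a Gaussian with true mean 3.0, unit variance
random.seed(0)
data = [random.gauss(3.0, 1.0) for _ in range(1000)]

mu, rate = 0.0, 0.001
for _ in range(200):
    grad = sum(x - mu for x in data)  # d/dmu of the log-likelihood
    mu += rate * grad
print(round(mu, 3))  # converges to the sample mean, close to 3.0
```

Contrastive divergence plays its role when, unlike here, the gradient involves an intractable expectation over model configurations and must be approximated by short sampling runs.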
DeVore, Matthew S; Gull, Stephen F; Johnson, Carey K
2012-04-05
We describe a method for analysis of single-molecule Förster resonance energy transfer (FRET) burst measurements using classic maximum entropy. Classic maximum entropy determines the Bayesian inference for the joint probability describing the total fluorescence photons and the apparent FRET efficiency. The method was tested with simulated data and then with DNA labeled with fluorescent dyes. The most probable joint distribution can be marginalized to obtain both the overall distribution of fluorescence photons and the apparent FRET efficiency distribution. This method proves to be ideal for determining the distance distribution of FRET-labeled biomolecules, and it successfully predicts the shape of the recovered distributions.
Dukka, Bahadur K C; Akutsu, Tatsuya; Tomita, Etsuji; Seki, Tomokazu; Fujiyama, Asao
2002-01-01
We developed maximum clique-based algorithms for spot matching for two-dimensional gel electrophoresis images, protein structure alignment and protein side-chain packing, where these problems are known to be NP-hard. Algorithms based on direct reductions to the maximum clique can find optimal solutions for instances of size (the number of points or residues) up to 50-150 using a standard PC. We also developed pre-processing techniques to reduce the sizes of graphs. Combined with some heuristics, many realistic instances can be solved approximately.
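A direct reduction to maximum clique, as used above, can be prototyped with the classic Bron-Kerbosch algorithm with pivoting; the small graph below is an illustrative stand-in for a spot-matching compatibility graph, not data from the paper:

```python
def max_clique(adj):
    """Return one maximum clique of an undirected graph.

    adj maps each vertex to the set of its neighbours.
    Bron-Kerbosch with pivoting: R is the clique under construction,
    P the candidate vertices, X the already-processed vertices.
    """
    best = []

    def expand(R, P, X):
        nonlocal best
        if not P and not X:
            if len(R) > len(best):
                best = list(R)
            return
        # Pivot on the vertex covering the most candidates.
        pivot = max(P | X, key=lambda v: len(adj[v] & P))
        for v in list(P - adj[pivot]):
            expand(R + [v], P & adj[v], X & adj[v])
            P = P - {v}
            X = X | {v}

    expand([], set(adj), set())
    return best

# Two triangles sharing an edge plus a pendant vertex: maximum clique size 3.
edges = [(0, 1), (1, 2), (0, 2), (1, 3), (2, 3), (3, 4)]
adj = {v: set() for v in range(5)}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)
print(sorted(max_clique(adj)))  # -> [0, 1, 2] or [1, 2, 3]
```

Exact search like this matches the paper's observation that optimal solutions are reachable for instances of modest size; the pre-processing the authors describe would shrink `adj` before the search.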
Rijmen, Frank
2009-01-01
Maximum marginal likelihood estimation of multidimensional item response theory (IRT) models has been hampered by the calculation of the multidimensional integral over the ability distribution. However, the researcher often has a specific hypothesis about the conditional (in)dependence relations among the latent variables. Exploiting these…
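In the unidimensional case, the integral that hampers estimation is the marginal probability of a response pattern, ∫ Π_j P(u_j | θ) φ(θ) dθ. A minimal sketch with assumed values (a 2PL item response function and a plain midpoint quadrature rather than the adaptive or Gauss-Hermite rules used in practice):

```python
import math

def p_correct(theta, a, b):
    """2PL item response function (discrimination a, difficulty b)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def marginal_likelihood(pattern, items, n_nodes=201, lim=6.0):
    """P(pattern) = integral of prod_j P(u_j | theta) * phi(theta) d(theta),
    approximated by a midpoint rule on [-lim, lim] with a N(0,1) ability."""
    h = 2.0 * lim / n_nodes
    total = 0.0
    for i in range(n_nodes):
        theta = -lim + (i + 0.5) * h
        phi = math.exp(-0.5 * theta * theta) / math.sqrt(2.0 * math.pi)
        lik = 1.0
        for u, (a, b) in zip(pattern, items):
            p = p_correct(theta, a, b)
            lik *= p if u == 1 else 1.0 - p
        total += lik * phi * h
    return total

items = [(1.2, -0.5), (0.8, 0.0), (1.5, 1.0)]  # illustrative (a, b) pairs
# All 2^3 response patterns are mutually exclusive and exhaustive,
# so their marginal probabilities must sum to 1.
probs = [marginal_likelihood([(k >> j) & 1 for j in range(3)], items)
         for k in range(8)]
print(round(sum(probs), 4))  # -> 1.0
```

The point of the abstract is that in the multidimensional case this one-dimensional sum becomes a grid over several latent dimensions, which is exactly what conditional-independence structure can be exploited to avoid.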
Boiroux, Dimitri; Juhl, Rune; Madsen, Henrik
2016-01-01
. This algorithm uses UD decomposition of symmetric matrices and the array algorithm for covariance update and gradient computation. We test our algorithm on the Lotka-Volterra equations. Compared to the maximum likelihood estimation based on finite difference gradient computation, we get a significant speedup...
Potentials and policy implications of energy and material efficiency improvement
Worrell, Ernst; Levine, Mark; Price, Lynn; Martin, Nathan; van den Broek, Richard; Block, Kornelis
1997-01-01
There is a growing awareness of the serious problems associated with the provision of sufficient energy to meet human needs and to fuel economic growth world-wide. This has pointed to the need for energy and material efficiency, which would reduce air, water and thermal pollution, as well as waste production. Increasing energy and material efficiency also have the benefits of increased employment, improved balance of imports and exports, increased security of energy supply, and adopting environmentally advantageous energy supply. A large potential exists for energy savings through energy and material efficiency improvements. Technologies are not now, nor will they be, in the foreseeable future, the limiting factors with regard to continuing energy efficiency improvements. There are serious barriers to energy efficiency improvement, including unwillingness to invest, lack of available and accessible information, economic disincentives and organizational barriers. A wide range of policy instruments, as well as innovative approaches have been tried in some countries in order to achieve the desired energy efficiency approaches. These include: regulation and guidelines; economic instruments and incentives; voluntary agreements and actions, information, education and training; and research, development and demonstration. An area that requires particular attention is that of improved international co-operation to develop policy instruments and technologies to meet the needs of developing countries. Material efficiency has not received the attention that it deserves. Consequently, there is a dearth of data on the qualities and quantities for final consumption, thus, making it difficult to formulate policies. Available data, however, suggest that there is a large potential for improved use of many materials in industrialized countries.
Implications of energy efficiency measures in wheat production
Meyer-Aurich, Andreas; Ziegler, T.; Scholz, L.;
The economic and environmental effect of energy saving measures were analyzed for a typical wheat production system in Germany. The introduction of precision farming, reduced nitrogen fertilization and improved crop drying technologies proved to be efficient measures for enhancing energy efficiency...... in wheat production. While the measures precision farming and improved crop drying require investments, reduced fertilizer input can be realized without investments. The environmental effects of all measures are comparable and do not show a clear advantage of one measure against the others. However......, reduced fertilizer input implies an economic loss which is unlikely to be realized by farmers unless they are forced to do so....
Effort and the Cycle: Cyclical Implications of Efficiency Wages
Uhlig, H.F.H.V.S.; Xu, Y.
1996-01-01
A number of authors have proposed theories of efficiency wages to explain the behaviour of aggregate labor markets. According to these theories, firms do not adjust wages downwards despite available unemployed job seekers, because lower wages would induce hired workers to shirk more often, which in
Implications of Energy Efficiency and Economic Growth in Developing Countries
2012-01-01
It is essential that society shift toward more efficient energy consumption patterns. A sector basis analysis of energy consumption provides some suggestions regarding this view. In the residential sector, energy resources change with the advancement of development stages. The industrial sector is characterized by a diverse range of energy intensity in each subsector. Relevant policies and measures are considered based on the relevant sector information.
Kukush, Alexander; Schneeweiss, Hans
2004-01-01
We compare the asymptotic covariance matrix of the ML estimator in a nonlinear measurement error model to the asymptotic covariance matrices of the CS and SQS estimators studied in Kukush et al. (2002). For small measurement error variances they are equal up to the order of the measurement error variance and thus nearly equally efficient.
Ye, Zhuo-Lin; Li, Wei-Sheng; Lai, Yi-Ming; He, Ji-Zhou; Wang, Jian-Hui
2015-12-01
We propose a quantum-mechanical Brayton engine model that works between two superposed states, employing a single particle confined in an arbitrary power-law trap as the working substance. Applying the superposition principle, we obtain the explicit expressions of the power and efficiency, and find that the efficiency at maximum power is bounded from above by the function: η+ = θ/(θ + 1), with θ being a potential-dependent exponent. Supported by the National Natural Science Foundation of China under Grant Nos. 11505091, 11265010, and 11365015, and the Jiangxi Provincial Natural Science Foundation under Grant No. 20132BAB212009
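The quoted bound η+ = θ/(θ + 1) is easy to probe numerically; the exponent values below are illustrative, not derived from any particular power-law trap:

```python
def eta_plus(theta):
    """Upper bound on efficiency at maximum power, eta+ = theta / (theta + 1)."""
    return theta / (theta + 1.0)

# The bound is monotonically increasing in the exponent:
# at theta = 1 it equals 1/2, and it approaches 1 as theta grows.
print(eta_plus(1.0))             # -> 0.5
print(round(eta_plus(99.0), 2))  # -> 0.99
```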
Radovcich, N. A.; Dreim, D.; Okeefe, D. A.; Linner, L.; Pathak, S. K.; Reaser, J. S.; Richardson, D.; Sweers, J.; Conner, F.
1985-01-01
Work performed in the design of a transport aircraft wing for maximum fuel efficiency is documented with emphasis on design criteria, design methodology, and three design configurations. The design database includes complete finite element model description, sizing data, geometry data, loads data, and inertial data. A design process which satisfies the economics and practical aspects of a real design is illustrated. The cooperative study relationship between the contractor and NASA during the course of the contract is also discussed.
Kaiadi, Mehrzad; Tunestål, Per; Johansson, Bengt
2010-01-01
High EGR rates combined with turbocharging have been identified as a promising way to increase the maximum load and efficiency of heavy duty spark ignition natural gas engines. With stoichiometric conditions a three-way catalyst can be used, which means that regulated emissions can be kept at very low levels. Most of the heavy duty NG engines are diesel engines converted for SI operation. These engines share components with the diesel engine, which puts limits on higher exh...
Markle, B. R.; Kirby, M.; Carrasco, J.
2008-12-01
Southern California is a densely populated region, highly sensitive to climate change and prone to potentially devastating hydrologic variability (e.g., droughts and floods). In the interest of characterizing past climatic and hydrologic variability, this study analyzes a sediment core from Lake Elsinore, California with a particular focus on a possible rapid regression event at the height of the Last Glacial Maximum (LGM) (between 19,330 and 21,070 calendar yr BP). Sediment analyses (grain size, magnetic susceptibility, and total organic matter) and geochemical analyses (δ13C and molar C/N) are used to characterize and identify this event (hereafter referred to as the Last Glacial Maximum Regression Event or LGMRE). The combination of sediment characteristics of the LGMRE is not observed elsewhere in sediment core LESS02-09, suggesting that the event is unique over the period of observation. This rapid drying event is superimposed on a longer, orbital-scale transgressive/regressive cycle. Given the generally wet climate of the LGM, the presence of the LGMRE is unexpected and indicates that Southern California is susceptible to rapid climate change. Evidence suggests synchrony at both orbital and centennial time scales between the Lake Elsinore climate record of the LGM and other terrestrial and marine climate records from southern California as well as the Great Basin region. Furthermore, evidence is presented for synchrony between the Lake Elsinore sediment core and the GISP 2 ice core record from Greenland, at both orbital and centennial time scales, suggesting climatic teleconnections between Southern California and the North Atlantic. It is possible that these two geographically distant areas are linked via dynamics of the altered Last Glacial Maximum jet stream.
Sniegowski, Kristel; Bers, Karolien; Ryckeboer, Jaak; Jaeken, Peter; Spanoghe, Pieter; Springael, Dirk
2012-08-01
Addition of pesticide-primed soil containing adapted pesticide degrading bacteria to the biofilter matrix of on farm biopurification systems (BPS) which treat pesticide contaminated wastewater, has been recommended, in order to ensure rapid establishment of a pesticide degrading microbial community in BPS. However, uncertainties exist about the minimal soil inoculum density needed for successful bioaugmentation of BPS. Therefore, in this study, BPS microcosm experiments were initiated with different linuron primed soil inoculum densities ranging from 0.5 to 50 vol.% and the evolution of the linuron mineralization capacity in the microcosms was monitored during feeding with linuron. Successful establishment of a linuron mineralization community in the BPS microcosms was achieved with all inoculum densities including the 0.5 vol.% density with only minor differences in the time needed to acquire maximum degradation capacity. Moreover, once established, the robustness of the linuron degrading microbial community towards expected stress situations proved to be independent of the initial inoculum density. This study shows that pesticide-primed soil inoculum densities as low as 0.5 vol.% can be used for bioaugmentation of a BPS matrix and further supports the use of BPS for treatment of pesticide-contaminated wastewater at farmyards.
Kimmel, David G.; McGlaughon, Benjamin D.; Leonard, Jeremy; Paerl, Hans W.; Taylor, J. Christopher; Cira, Emily K.; Wetz, Michael S.
2015-05-01
Estuaries often have distinct zones of high chlorophyll a concentrations, known as chlorophyll maximum (CMAX). The persistence of these features is often attributed to physical (mixing and light availability) and chemical (nutrient availability) features, but the role of mesozooplankton grazing is rarely explored. We measured the spatial and temporal variability of the CMAX and mesozooplankton community in the eutrophic Neuse River Estuary, North Carolina. We also conducted grazing experiments to determine the relative impact of mesozooplankton grazing on the CMAX during the phytoplankton growing season (spring through late summer). The CMAX was consistently located upriver of the zone of maximum zooplankton abundance, with an average spatial separation of 18 km. Grazing experiments in the CMAX region revealed negligible effect of mesozooplankton on chlorophyll a during March, and no effect during June or August. These results suggest that the spatial separation of the peak in chlorophyll a concentration and mesozooplankton abundance results in minimal impact of mesozooplankton grazing, contributing to persistence of the CMAX for prolonged time periods. In the Neuse River Estuary, the low mesozooplankton abundance in the CMAX region is attributed to lack of a low salinity tolerant species, predation by the ctenophore Mnemiopsis leidyi, and/or physiologic impacts on mesozooplankton growth rates due to temperature (in the case of low wintertime abundances). The consequences of this lack of overlap result in exacerbation of the effects of eutrophication; namely a lack of trophic transfer to mesozooplankton in this region and the sinking of phytodetritus to the benthos that fuels hypoxia.
de Souza, V.
We describe how the analysis of air showers detected by the Pierre Auger Observatory leads to an accurate determination of the depth of maximum (Xmax). First, the air-shower analysis that leads to the reconstruction of Xmax is discussed. The properties of the detector and its measurement biases are treated and carefully taken into consideration. The Xmax results are interpreted in terms of composition, where the interpretation depends mainly on the hadronic interaction models. A global fit of the Xmax distribution yields an estimate of the abundance of four primary species. The analysis represents the most statistically significant composition information ever obtained for energies above 10^17.8 eV. The scenario that emerges shows no support for a strong flux of iron nuclei, and a strong energy dependence of the proton fraction.
Ureña-López, L. Arturo; Robles, Victor H.; Matos, T.
2017-08-01
Recent analysis of the rotation curves of a large sample of galaxies with very diverse stellar properties reveals a relation between the radial acceleration purely due to the baryonic matter and the one inferred directly from the observed rotation curves. Assuming the dark matter (DM) exists, this acceleration relation is tantamount to an acceleration relation between DM and baryons. This leads us to a universal maximum acceleration for all halos. Using the latter in DM profiles that predict inner cores implies that the central surface density μ_DM = ρ_s r_s must be a universal constant, as suggested by previous studies of selected galaxies, revealing a strong correlation between the density ρ_s and scale r_s parameters in each profile. We then explore the consequences of the constancy of μ_DM in the context of the ultralight scalar field dark matter model (SFDM). We find that for this model μ_DM = 648 M_⊙ pc^-2 and that the so-called WaveDM soliton profile should be a universal feature of the DM halos. Comparing with the data from the Milky Way and Andromeda satellites, we find that they are all consistent with a boson mass of the scalar field particle of the order of 10^-21 eV/c^2, which puts the SFDM model in agreement with recent cosmological constraints.
Ramachandran, Hema; Pillai, K. P. P.; Bindu, G. R.
2016-08-01
A two-port network model for a wireless power transfer system taking into account the distributed capacitances using PP network topology with top coupling is developed in this work. The operating and maximum power transfer efficiencies are determined analytically in terms of S-parameters. The system performance predicted by the model is verified with an experiment consisting of a high power home light load of 230 V, 100 W and is tested for two forced resonant frequencies namely, 600 kHz and 1.2 MHz. The experimental results are in close agreement with the proposed model.
Asp, Nils Edvin; Gomes, Vando José Costa; Ogston, Andrea; Borges, José Carlos Corrêa; Nittrouer, Charles Albert
2016-02-01
The tide-dominated eastern sector of the Brazilian Amazonian coast includes large mangrove areas and several estuaries, including the estuary associated with the Urumajó River. There, the dynamics of suspended sediments and delivery mechanisms for mud to the tidal flats and mangroves are complex and were investigated in this study. Four longitudinal measuring campaigns were carried out, encompassing spring/neap tides and dry/rainy seasons. During spring tides, water levels were measured simultaneously at 5 points along the estuary. Currents, salinity, and suspended sediment concentrations (SSCs) were measured over the tidal cycle in a cross section at the middle sector of the estuary. Results show a marked turbidity maximum zone (TMZ) during the rainy season, with a 4-km upstream displacement from neap to spring tide. During dry season, the TMZ was conspicuous only during neap tide and dislocated about 5 km upstream and was substantially less apparent in comparison to that observed during rainy season. The results show that mud is being concentrated in the channel associated with the TMZ especially during the rainy season. At this time, a substantial amount of the mud is washed out from mangroves to the estuarine channel and hydrodynamic/salinity conditions for TMZ formation are optimal. As expected, transport to the mangrove flats is most effective during spring tide and substantially reduced at neap tide, when mangroves are not being flooded. During the dry season, mud is resuspended from the bed in the TMZ sector and is a source of sediment delivered to the tidal flats and mangroves. The seasonal variation of the sediments on the seabed is in agreement with the variation of suspended sediments as well.
M. Girotto
2012-06-01
This work aimed to evaluate the speed and intensity of action of hexazinone applied alone and in mixture with other photosystem II inhibitors, through the photosynthetic efficiency of Panicum maximum in post-emergence. The assay consisted of the following treatments: hexazinone (250 g ha-1), tebuthiuron (1.0 kg ha-1), hexazinone + tebuthiuron (125 g ha-1 + 0.5 kg ha-1), diuron (2,400 g ha-1), hexazinone + diuron (125 + 1,200 g ha-1), metribuzin (1,440 g ha-1), hexazinone + metribuzin (125 + 720 g ha-1), and an untreated control. The experiment was installed in a completely randomized design with four replications. After application of the treatments, the plants were moved to a greenhouse under controlled temperature and humidity conditions, where they remained during the experimental period; the evaluations consisted of the electron transport rate and visual assessment of intoxication. The fluorometer evaluations were performed 1, 2, 6, 24, 48, 72, 120 and 168 hours after application, and the visual evaluations at three and seven days after application. The results showed differences among the treatments, notably for diuron, which reduced electron transport slowly compared with the other herbicides and, in mixture with hexazinone, showed a synergistic effect. Using the fluorometer, early intoxication was detected in P. maximum plants after application of photosystem II inhibitor herbicides alone and in mixture.
Basko, M. M.
2016-08-01
Theoretical investigation has been performed on the conversion efficiency (CE) into the 13.5-nm extreme ultraviolet (EUV) radiation in a scheme where spherical microspheres of tin (Sn) are simultaneously irradiated by two laser pulses with substantially different wavelengths. The low-intensity short-wavelength pulse is used to control the rate of mass ablation and the size of the EUV source, while the high-intensity long-wavelength pulse provides efficient generation of the EUV light at λ=13.5 nm. The problem of full optimization for maximizing the CE is formulated and solved numerically by performing two-dimensional radiation-hydrodynamics simulations with the RALEF-2D code under the conditions of steady-state laser illumination. It is shown that, within the implemented theoretical model, steady-state CE values approaching 9% are feasible; in a transient peak, the maximum instantaneous CE of 11.5% was calculated for the optimized laser-target configuration. The physical factors, bringing down the fully optimized steady-state CE to about one half of the absolute theoretical maximum of CE≈20 % for the uniform static Sn plasma, are analyzed in detail.
Implications of building energy standard for sustainable energy efficient design in buildings
Iwaro, Joseph; Mwasha, Abraham [University of West Indies, W. Department of Civil and Environmental Engineering, St. Augustine Campus (Trinidad and Tobago)
2010-07-01
The rapid growth of energy use worldwide has raised concerns over problems of energy supply, energy sustainability and exhaustion of energy resources. While most developed countries are rapidly implementing building energy standards to reduce building energy consumption and are moving aggressively toward sustainable energy efficient buildings, the position of developing countries with respect to energy standard implementation for this purpose is either poorly documented or not documented at all. Presently, there exists a gap between existing building designs and the increasing demand for sustainable energy efficient building design in developing countries. In that respect, this paper investigates the implementation status of building energy standards in developing countries and its implications for sustainable energy efficient designs in building. The present implementation status of building energy standards in 60 developing countries around the world was analyzed using an online survey. Hence, this study reveals the present implementation status of building energy standards in developing countries, the implications for sustainable energy efficient designs in building, and how building energy standards can be used to fill the gap between existing building designs and the increasing demand for sustainable energy efficient building.
Koyama, Shinsuke; Paninski, Liam
2010-08-01
A number of important data analysis problems in neuroscience can be solved using state-space models. In this article, we describe fast methods for computing the exact maximum a posteriori (MAP) path of the hidden state variable in these models, given spike train observations. If the state transition density is log-concave and the observation model satisfies certain standard assumptions, then the optimization problem is strictly concave and can be solved rapidly with Newton-Raphson methods, because the Hessian of the loglikelihood is block tridiagonal. We can further exploit this block-tridiagonal structure to develop efficient parameter estimation methods for these models. We describe applications of this approach to neural decoding problems, with a focus on the classic integrate-and-fire model as a key example.
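For a scalar state the block-tridiagonal Hessian reduces to a tridiagonal one, and each Newton step costs O(T) via a Thomas solve. The sketch below uses assumed model choices (Gaussian random-walk state dynamics and Poisson spike counts with rate exp(x_t)) rather than the integrate-and-fire observation model discussed in the article:

```python
import math

def solve_tridiag(lower, diag, upper, rhs):
    """Thomas algorithm: solve a tridiagonal linear system in O(T)."""
    n = len(diag)
    c = [0.0] * n
    d = [0.0] * n
    c[0] = upper[0] / diag[0]
    d[0] = rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - lower[i] * c[i - 1]
        c[i] = upper[i] / m if i < n - 1 else 0.0
        d[i] = (rhs[i] - lower[i] * d[i - 1]) / m
    x = [0.0] * n
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

def map_path(y, sigma2=0.5, iters=25):
    """Newton ascent on the log posterior of a random-walk state x with
    Poisson counts y_t ~ Poisson(exp(x_t)).

    log p = sum_t [y_t x_t - exp(x_t)]
            - (1/(2*sigma2)) * (x_0^2 + sum_t (x_t - x_{t-1})^2)
    The log posterior is strictly concave and its Hessian tridiagonal,
    so each Newton step is a single Thomas solve.
    """
    T = len(y)
    q = 1.0 / sigma2
    x = [0.0] * T
    for _ in range(iters):
        rate = [math.exp(v) for v in x]
        grad = [0.0] * T
        for t in range(T):
            g = y[t] - rate[t]
            g -= q * (x[t] - (x[t - 1] if t > 0 else 0.0))
            if t < T - 1:
                g += q * (x[t + 1] - x[t])
            grad[t] = g
        # Negative Hessian (positive definite, tridiagonal).
        diag = [rate[t] + (2 * q if t < T - 1 else q) for t in range(T)]
        off = [-q] * T
        step = solve_tridiag(off, diag, off, grad)
        x = [xi + si for xi, si in zip(x, step)]
    return x

y = [0, 1, 0, 2, 3, 5, 4, 2, 1, 0]   # illustrative spike counts
x_map = map_path(y)
```

Because the problem is concave, the Newton iteration converges to the exact MAP path; the same block structure carries over to vector-valued states with banded solvers in place of the scalar Thomas algorithm.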
Ogawa, Akira; Anzou, Hideki; Yamamoto, So; Shimagaki, Mituru
2015-11-01
In order to control the maximum tangential velocity Vθm (m/s) of the turbulent rotational air flow and the collection efficiency ηc (%), using fly ash of mean diameter XR50 = 5.57 µm, two secondary jet nozzles were installed in the body of an axial-flow cyclone dust collector with body diameter D1 = 99 mm. To estimate Vθm, the conservation of angular momentum flux together with the Ogawa combined vortex model was applied. The estimated values of Vθm were in good agreement with measurements made with a cylindrical Pitot tube. The estimated collection efficiencies ηcth (%), based on the cut size Xc (µm) calculated from the estimated Vθm and on the particle size distribution R(Xp), were somewhat higher than the experimental results due to re-entrainment of the collected dust. The best method for adjusting ηc (%) through the contribution of the secondary jet flow is principally to apply the centrifugal effect Φc. The above results are described in detail.
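The step from a cut size Xc to an overall collection efficiency can be sketched as a grade-efficiency curve integrated over the feed size distribution; the grade-efficiency form and the log-normal spread below are common textbook assumptions, not the authors' model:

```python
import math

def grade_efficiency(x, x_c, m=2.0):
    """Assumed grade-efficiency curve: fraction of particles of size x
    collected, with 50% collection at the cut size x_c."""
    return 1.0 / (1.0 + (x_c / x) ** m)

def overall_efficiency(x_c, x50, sigma_g, n=2000):
    """Integrate the grade efficiency over a log-normal feed distribution
    with median diameter x50 and geometric standard deviation sigma_g
    (midpoint rule in ln x)."""
    mu, s = math.log(x50), math.log(sigma_g)
    lo, hi = mu - 5 * s, mu + 5 * s
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        t = lo + (i + 0.5) * h
        pdf = math.exp(-0.5 * ((t - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))
        total += grade_efficiency(math.exp(t), x_c) * pdf * h
    return total

# Illustrative values: the 5.57 um mean diameter of the fly ash feed,
# an assumed geometric spread, and two candidate cut sizes.
for x_c in (2.0, 4.0):
    print(round(100 * overall_efficiency(x_c, 5.57, 2.0), 1))
```

Lowering the cut size (e.g. by raising Vθm with the secondary jets) raises the overall efficiency, which is the lever the abstract describes; re-entrainment would shift the real curve below this idealized one.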
Daisuke Ichinose
2013-03-01
This paper measures the productive efficiency of municipal solid waste (MSW) logistics by applying data envelopment analysis (DEA) to cross-sectional data of prefectures in Japan. Either through public operations or by outsourcing to private waste collection operators, prefectural governments possess the fundamental authority over waste processing operations in Japan. Therefore, we estimate a multi-input multi-output production efficiency at the prefectural level via DEA, employing several different model settings. Our data classify the MSW into household solid waste (HSW) and business solid waste (BSW) collected by both private and public operators as separate outputs, while the numbers of trucks and workers used by private and public operators are used as inputs. The results consistently show that geographical characteristics, such as the number of inhabited remote islands, are relatively more dominant factors for determining inefficiency. While the implication that a minimum efficient scale is not achieved in these small islands is in line with the literature suggesting that waste logistics has increasing returns at the municipal level, our results indicate that waste collection efficiency in Japan is well described by CRS technology at the prefectural level. The results also show that prefectures with higher private-sector participation, measured in terms of HSW collection, are more efficient, whereas a higher private-labor ratio negatively affects efficiency. We also provide evidence that prefectures with inefficient MSW logistics have a higher tendency of suffering from the illegal dumping of industrial waste.
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data?, (2) Goodness-of-fit: How concordant is this distribution with the observed data?, and (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...
1993-07-01
This document provides an analysis of the potential impacts associated with the proposed action, which is continued operation of Naval Petroleum Reserve No. 1 (NPR-1) at the Maximum Efficient Rate (MER) as authorized by Public Law 94-258, the Naval Petroleum Reserves Production Act of 1976 (Act). The document also provides a similar analysis of alternatives to the proposed action, which also involve continued operations, but under lower development scenarios and lower rates of production. NPR-1 is a large oil and gas field jointly owned and operated by the federal government and Chevron U.S.A. Inc. (CUSA) pursuant to a Unit Plan Contract that became effective in 1944; the government's interest is approximately 78% and CUSA's interest is approximately 22%. The government's interest is under the jurisdiction of the United States Department of Energy (DOE). The facility is approximately 17,409 acres (74 square miles), and it is located in Kern County, California, about 25 miles southwest of Bakersfield and 100 miles north of Los Angeles in the south central portion of the state. The environmental analysis presented herein is a supplement to the NPR-1 Final Environmental Impact Statement that was issued by DOE in 1979 (1979 EIS). As such, this document is a Supplemental Environmental Impact Statement (SEIS).
唐治德; 徐阳阳; 赵茂; 彭一灵
2015-01-01
By applying lumped-parameter circuit theory and coupled-mode theory, the efficiency of a wireless power transfer system via magnetic resonant coupling was studied, and the concept of a transfer-efficiency-maximum frequency, at which the transfer efficiency is maximum, was proposed. The influence of the system parameters and the load on the transfer-efficiency-maximum frequency and on the transfer efficiency was analyzed. A two-coil transfer system was set up, and the relationships between frequency and transfer efficiency, between load and the transfer-efficiency-maximum frequency and transfer efficiency, and between distance and the transfer-efficiency-maximum frequency and transfer efficiency were studied experimentally and by simulation. Experiments and simulation show that there is a transfer-efficiency-maximum frequency in a wireless power transfer system; this frequency is approximately proportional to the load and inversely proportional to the mutual inductance; it increases with increasing distance; and when the system operates at the transfer-efficiency-maximum frequency and the load resistance is much greater than the coil resistance, the transfer efficiency of the wireless power transfer system is maximum.
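The existence of a transfer-efficiency-maximum frequency can be reproduced with a generic lumped two-coil series-resonant model; all component values below are illustrative, not taken from the experiment:

```python
import math

def efficiency(f, M, R1=1.0, R2=1.0, RL=50.0,
               L1=24e-6, L2=24e-6, C1=1e-9, C2=1e-9):
    """Transfer efficiency of a series-resonant two-coil link.

    The secondary loop impedance Z2 couples back into the primary
    through the mutual inductance M; efficiency is the load power
    divided by the total dissipated (input) power.
    """
    w = 2.0 * math.pi * f
    z2 = complex(R2 + RL, w * L2 - 1.0 / (w * C2))
    k = (w * M) ** 2 / abs(z2) ** 2          # |I2 / I1|^2
    return k * RL / (R1 + k * (R2 + RL))

M = 2e-6                                      # illustrative mutual inductance
freqs = [f * 1e3 for f in range(200, 2001)]   # sweep 200 kHz .. 2 MHz
effs = [efficiency(f, M) for f in freqs]
f_best = freqs[max(range(len(effs)), key=effs.__getitem__)]
print(round(f_best / 1e3), round(max(effs), 3))
```

With these values the efficiency peaks close to the secondary resonance 1/(2π√(L2·C2)); how the peak frequency shifts with load and mutual inductance, as the abstract reports, would require sweeping RL and M in this model.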
de Roest, K; Montanari, C; Fowler, T; Baltussen, W
2009-11-01
This paper presents an analysis of the economic implications of alternative methods to surgical castration without anaesthesia. Detailed research results on the economic implications of four different alternatives are reported: castration with local anaesthesia, castration with general anaesthesia, immunocastration and raising entire males. The first three alternatives have been assessed for their impact on pig production costs in the most important pig-producing Member States of the EU. The findings on castration with anaesthesia show that cost differences among farms increase if the anaesthesia cannot be administered by farmers and when the veterinarian has to be called to perform it. The cost of veterinarian service largely affects the total average costs, making this solution economically less feasible in small-scale pig farms. In all other farms, the impact on production costs of local anaesthesia is however limited and does not exceed 1 €ct per kg. General anaesthesia administered by inhalation or injection of Ketamin in combination with a sedative (Azaperone, Midazolan) is more expensive. These costs depend heavily on farm size, as the inhalation equipment has to be depreciated on the largest number of pigs possible. The overall costs of immunocastration - including the cost of the work load for the farmer - has to be evaluated against the potential benefits derived from higher daily weight gain and feed efficiency in comparison with surgical castrates. The economic feasibility of this practice will finally depend on the price of the vaccine and on consumer acceptance of immunocastration. The improvement in feed efficiency may compensate almost entirely for the cost of vaccination. The main advantages linked to raising entire males are due to the higher efficiency of feed conversion, to the better growth rate and to the higher leanness of carcass. A higher risk of boar taint on the slaughter line has to be accounted for. Raising entire males should not
Sluijs, A.; van Roij, L.; Harrington, G. J.; Schouten, S.; Sessa, J. A.; LeVay, L. J.; Reichart, G.-J.; Slomp, C. P.
2014-07-01
The Paleocene-Eocene Thermal Maximum (PETM, ~ 56 Ma) was a ~ 200 kyr episode of global warming, associated with massive injections of 13C-depleted carbon into the ocean-atmosphere system. Although climate change during the PETM is relatively well constrained, effects on marine oxygen concentrations and nutrient cycling remain largely unclear. We identify the PETM in a sediment core from the US margin of the Gulf of Mexico. Biomarker-based paleotemperature proxies (methylation of branched tetraether-cyclization of branched tetraether (MBT-CBT) and TEX86) indicate that continental air and sea surface temperatures warmed from 27-29 to ~ 35 °C, although variations in the relative abundances of terrestrial and marine biomarkers may have influenced these estimates. Vegetation changes, as recorded from pollen assemblages, support this warming. The PETM is bracketed by two unconformities. It overlies Paleocene silt- and mudstones and is rich in angular (thus in situ produced; autochthonous) glauconite grains, which indicate sedimentary condensation. A drop in the relative abundance of terrestrial organic matter and changes in the dinoflagellate cyst assemblages suggest that rising sea level shifted the deposition of terrigenous material landward. This is consistent with previous findings of eustatic sea level rise during the PETM. Regionally, the attribution of the glauconite-rich unit to the PETM implicates the dating of a primate fossil, argued to represent the oldest North American specimen on record. The biomarker isorenieratene within the PETM indicates that euxinic photic zone conditions developed, likely seasonally, along the Gulf Coastal Plain. A global data compilation indicates that O2 concentrations dropped in all ocean basins in response to warming, hydrological change, and carbon cycle feedbacks. This culminated in (seasonal) anoxia along many continental margins, analogous to modern trends. Seafloor deoxygenation and widespread (seasonal) anoxia likely
A general theory of evolution based on energy efficiency: its implications for diseases.
Yun, Anthony J; Lee, Patrick Y; Doux, John D; Conley, Buford R
2006-01-01
We propose a general theory of evolution based on energy efficiency. Life represents an emergent property of energy. The earth receives energy from cosmic sources such as the sun. Biologic life can be characterized by the conversion of available energy into complex systems. Direct energy converters such as photosynthetic microorganisms and plants transform light energy into high-energy phosphate bonds that fuel biochemical work. Indirect converters such as herbivores and carnivores predominantly feed off the food chain supplied by these direct converters. Improving energy efficiency confers competitive advantage in the contest among organisms for energy. We introduce a term, return on energy (ROE), as a measure of energy efficiency. We define ROE as a ratio of the amount of energy acquired by a system to the amount of energy consumed to generate that gain. Life-death cycling represents a tactic to sample the environment for innovations that allow increases in ROE to develop over generations rather than an individual lifespan. However, the variation-selection stratagem of Darwinian evolution may define a particular tactic rather than an overarching biological paradigm. A theory of evolution based on competition for energy and driven by improvements in ROE both encompasses prior notions of evolution and portends post-Darwinian mechanisms. Such processes may involve the exchange of non-genetic traits that improve ROE, as exemplified by cognitive adaptations or memes. Under these circumstances, indefinite persistence may become favored over life-death cycling, as increases in ROE may then occur more efficiently within a single lifespan rather than over multiple generations. The key to this transition may involve novel methods to address the promotion of health and cognitive plasticity. We describe the implications of this theory for human diseases.
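The ROE ratio defined above reduces to a one-line calculation; the function and the example figures below are illustrative, not taken from the paper.

```python
def return_on_energy(energy_acquired, energy_consumed):
    """Return on energy (ROE) as defined in the abstract: the energy a
    system acquires divided by the energy it spends to acquire it."""
    return energy_acquired / energy_consumed

# illustrative: an organism spending 100 kJ of foraging effort to obtain
# 250 kJ of food has ROE = 2.5
roe = return_on_energy(250.0, 100.0)
```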
Lunau, Mirko; Voss, Maren; Erickson, Matthew; Dziallas, Claudia; Casciotti, Karen; Ducklow, Hugh
2013-05-01
Terrestrial ecosystems are becoming increasingly nitrogen-saturated due to anthropogenic activities, such as agricultural loading with artificial fertilizer. Thus, more and more reactive nitrogen is entering streams and rivers, primarily as nitrate, where it is eventually transported towards the coastal zone. The assimilation of nitrate by coastal phytoplankton and its conversion into organic matter is an important feature of the aquatic nitrogen cycle. Dissolved reactive nitrogen is converted into a particulate form, which eventually undergoes nitrogen removal via microbial denitrification. High and unbalanced nitrate loads to the coastal zone may alter planktonic nitrate assimilation efficiency, due to the narrow stoichiometric requirements for nutrients typically shown by these organisms. This implies a cascade of changes for the cycling of other elements, such as carbon, with unknown consequences at the ecosystem level. Here, we report that the nitrate removal efficiency (NRE) of a natural phytoplankton community decreased under high, unbalanced nitrate loads, due to the enhanced recycling of organic nitrogen and subsequent production and microbial transformation of excess ammonium. NRE was inversely correlated with the amount of nitrate present, and mechanistically controlled by dissolved organic nitrogen (DON) and organic carbon (Corg) availability. These findings have important implications for the management of nutrient runoff to coastal zones.
Smit CE; Wezel AP van; Jager T; Traas TP; CSR
2000-01-01
The impact of secondary poisoning on the Maximum Permissible Concentrations (MPCs) and Negligible Concentrations (NCs) of cadmium, copper and mercury in water, sediment and soil have been evaluated. Field data on accumulation of these elements by fish, mussels and earthworms were used to derive MPC
Arbones, B.; Figueiras, F. G.; Varela, R.
2000-09-01
Spectral and non-spectral measurements of the maximum quantum yield of carbon fixation for natural phytoplankton assemblages were compared in order to evaluate their effect on bio-optical models of primary production. Field samples were collected from two different coastal regions of NW Spain in spring, summer and autumn, and in a polar environment (Gerlache Strait, Antarctica) during the austral summer. Concurrent determinations were made of the spectral phytoplankton absorption coefficient [a_ph(λ)], the white-light-limited slope of the photosynthesis-irradiance relationship (α_B), carbon uptake action spectra [α_B(λ)], broad-band maximum quantum yields (φ_m), and spectral maximum quantum yields [φ_m(λ)]. Carbon uptake action spectra roughly followed the shape of the corresponding phytoplankton absorption spectra, but with a slight displacement in the blue-green region that could be attributed to imbalance between the two photosystems PS I and PS II. Results also confirmed previous observations of the wavelength dependency of maximum quantum yield. The broad-band maximum quantum yield (φ_m), calculated from the measured spectral phytoplankton absorption coefficient and the spectrum of the light source of the incubators, was not significantly different from the averaged spectral maximum quantum yield [φ̄_m(λ)] (t-test for paired samples, P=0.34). These results suggest that maximum quantum yield can be estimated with sufficient accuracy from white-light P-E curves and measured phytoplankton absorption spectra. Primary production at light-limiting regimes was compared using four different models with a varying degree of spectral complexity. No significant differences (t-test for paired samples, P=0.91) were found between a spectral model based on the carbon uptake action spectra [α_B(λ); model a] and a model which uses the broad-band φ_m and measured a_ph(λ) (model b). In addition, primary production derived from constructed action spectra [ac
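The estimation route the abstract endorses, a broad-band φ_m obtained from a white-light P-E slope and the measured absorption spectrum weighted by the lamp spectrum, can be sketched as follows. The weighting scheme and variable names are assumptions, and a simple discrete weighted mean stands in for proper spectral integration.

```python
import numpy as np

def broadband_phi_max(alpha_B, a_ph, E_lamp):
    """Broad-band maximum quantum yield phi_m ~ alpha_B / <a_ph>, where
    <a_ph> is the phytoplankton absorption coefficient averaged over the
    incubator lamp spectrum E(lambda). All arrays share one wavelength grid."""
    a_mean = np.sum(a_ph * E_lamp) / np.sum(E_lamp)
    return alpha_B / a_mean

# toy check: with a flat lamp spectrum and constant absorption of 0.02,
# a white-light slope of 0.05 gives phi_m = 2.5 (units left abstract here)
wl = np.linspace(400.0, 700.0, 31)
phi = broadband_phi_max(0.05, np.full_like(wl, 0.02), np.ones_like(wl))
```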
Regional differences in Chinese SO2 emission control efficiency and policy implications
Zhang, Q. Q.; Wang, Y.; Ma, Q.; Yao, Y.; Xie, Y.; He, K.
2015-06-01
SO2 emission control has been one of the most important air pollution policies in China since 2000. In this study, we assess regional differences in SO2 emission control efficiencies in China through the modeling analysis of four scenarios of SO2 emissions, all of which aim to reduce the national total SO2 emissions by 8% or 2.3 Tg below the 2010 emissions level, the target set by the current twelfth Five-Year Plan (FYP; 2011-2015), but differ in spatial implementation. The GEOS-Chem chemical transport model is used to evaluate the efficiency of each scenario on the basis of four impact metrics: surface SO2 and sulfate concentrations, population-weighted sulfate concentration (PWC), and sulfur export flux from China to the western Pacific. The efficiency of SO2 control (β) is defined as the relative change of each impact metric to a 1% reduction in SO2 emissions from the 2010 baseline. The S1 scenario, which adopts a spatially uniform reduction in SO2 emissions in China, gives a β of 0.99, 0.71, 0.83, and 0.67 for SO2 and sulfate concentrations, PWC, and export flux, respectively. By comparison, the S2 scenario, which implements all the SO2 emissions reduction over North China (NC), is found most effective in reducing national mean surface SO2 and sulfate concentrations and sulfur export fluxes, with β being 1.0, 0.76, and 0.95 respectively. The S3 scenario of implementing all the SO2 emission reduction over South China (SC) has the highest β in reducing PWC (β = 0.98) because SC has the highest correlation between population density and sulfate concentration. Reducing SO2 emissions over Southwest China (SWC) is found to be least efficient on the national scale, albeit with large benefits within the region. The difference in β by scenario is attributable to the regional difference in SO2 oxidation pathways and the source-receptor relationship. Among the three regions examined here, NC shows the largest proportion of sulfate formation through gas
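The efficiency metric β used in the study, the relative change of an impact metric per 1% reduction in SO2 emissions, reduces to a ratio of percentage changes. The numbers below are illustrative round-trip values, not the paper's model output.

```python
def control_efficiency(metric_base, metric_new, emis_base, emis_new):
    """beta: percent change in an impact metric (e.g. mean sulfate, PWC,
    or export flux) divided by the percent reduction in SO2 emissions
    from the baseline."""
    d_metric = (metric_base - metric_new) / metric_base * 100.0
    d_emis = (emis_base - emis_new) / emis_base * 100.0
    return d_metric / d_emis

# an 8% national emission cut that lowers mean sulfate by 5.68% -> beta = 0.71
beta = control_efficiency(1.0, 1.0 - 0.0568, 100.0, 92.0)
```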
Denisov, S. L.; Korolkov, A. I.
2017-07-01
The diffraction of acoustic waves, applied to the problem of noise shielding, has been studied by the method of maximum length sequences. Rectangular plates and an aircraft model of integrated layout are used as the screens. In the study of noise shielding by the aircraft model, the reciprocity theorem is used. The experimental results are compared with calculations performed in the framework of the geometrical theory of diffraction (GTD). On the basis of these calculations, the contributions of different areas of the shielding surface to the full acoustic field are identified. For the aircraft model, the shielding factor is calculated as a function of frequency.
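The maximum length sequence (MLS) technique mentioned above exploits the flat spectrum of a pseudo-random ±1 sequence: circularly cross-correlating the microphone signal with the excitation recovers the impulse response. A minimal sketch follows; the 10-bit tap choice is a standard primitive-polynomial assumption, not taken from the paper.

```python
import numpy as np

def mls(n_bits=10, taps=(10, 7)):
    """Generate a +/-1 maximum length sequence of length 2**n_bits - 1
    with a Fibonacci LFSR; taps (10, 7) correspond to the primitive
    polynomial x^10 + x^7 + 1."""
    state = [1] * n_bits
    seq = []
    for _ in range(2 ** n_bits - 1):
        seq.append(1.0 if state[-1] else -1.0)
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return np.array(seq)

def impulse_response(excitation, recording):
    """Periodic impulse response via circular cross-correlation (computed
    in the frequency domain), normalized by the sequence length."""
    n = len(excitation)
    spec = np.fft.fft(recording) * np.conj(np.fft.fft(excitation))
    return np.real(np.fft.ifft(spec)) / n
```

Feeding the excitation back in unchanged should return a near-perfect unit impulse at lag zero, which is a quick self-test of the tap choice.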
Sauer, T. [ebm-papst Mulfingen GmbH und Co. KG, Mulfingen (Germany)
2006-03-15
Blowers are often powered by rotary-current asynchronous motors with short-circuit rotors, which are robust, simple and reliable. Today, specifications have become more demanding. For example, economic efficiency and low noise - combined with speed control which again should be as simple as possible - are now required. Asynchronous motors are hardly capable of meeting these requirements, so they are being replaced in many applications by electronically commuted permanent magnet motors, so-called EC drives. (orig.)
Croft, Gregory Donald
There are two commonly-used approaches to modeling the future supply of mineral resources. One is to estimate reserves and compare the result to extraction rates, and the other is to project from historical time series of extraction rates. Perceptions of abundant oil supplies in the Middle East and abundant coal supplies in the United States are based on the former approach. In both of these cases, an approach based on historical production series results in a much smaller resource estimate than aggregate reserve numbers. This difference is not systematic; natural gas production in the United States shows a strong increasing trend even though modest reserve estimates have resulted in three decades of worry about the gas supply. The implication of a future decline in Middle East oil production is that the market for transportation fuels is facing major changes, and that alternative fuels should be analyzed in this light. Because the U.S. holds very large coal reserves, synthesizing liquid hydrocarbons from coal has been suggested as an alternative fuel supply. To assess the potential of this process, one has to look at both the resource base and the net efficiency. The three states with the largest coal production declines in the 1996 to 2006 period are among the top 5 coal reserve holders, suggesting that gross coal reserves are a poor indicator of future production. Of the three categories of coal reserves reported by the U.S. Energy Information Administration, reserves at existing mines is the narrowest category and is approximately the equivalent of proved developed oil reserves. By this measure, Wyoming has the largest coal reserves in the U.S., and it accounted for all of U.S. coal production growth over the 1996 to 2006 time period. In Chapter 2, multi-cycle Hubbert curve analysis of historical data of coal production from 1850 to 2007 demonstrates that U.S. anthracite and bituminous coal are past their production peak. This result contradicts estimates based
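The multi-cycle Hubbert analysis of Chapter 2 fits production history with a sum of logistic-derivative curves. A sketch of the functional form is below; the parameters are illustrative, not the fitted U.S. coal values.

```python
import math

def hubbert_rate(t, urr, b, t_peak):
    """Annual production for one Hubbert cycle, the derivative of a
    logistic cumulative-production curve with ultimate recovery `urr`,
    steepness `b`, and peak year `t_peak`:
    P(t) = (urr*b/4) * sech^2(b*(t - t_peak)/2)."""
    s = 1.0 / math.cosh(b * (t - t_peak) / 2.0)
    return urr * b / 4.0 * s * s

def multi_cycle_rate(t, cycles):
    """Multi-cycle model: total production is the sum of independent
    Hubbert cycles, each given as a (urr, b, t_peak) tuple."""
    return sum(hubbert_rate(t, *c) for c in cycles)

# peak output of a single cycle is urr*b/4, reached at t_peak
peak = hubbert_rate(1920.0, 100.0, 0.1, 1920.0)
```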
Morillon Galvez, David [Comision Nacional para el Ahorro de Energia, Mexico, D. F. (Mexico)
1999-07-01
An analysis is presented of the elements and factors that building architecture must have to be sustainable: a design suited to the environment, energy saving and efficient energy use, the use of alternative energies, and self-sufficiency. In addition, a methodology for the natural air conditioning of buildings (bioclimatic architecture) is proposed, together with ideas for saving energy and using it efficiently, with the objective of contributing to the adequate use of building components (walls, ceilings, floors, etc.) so that, when interacting with the environment, the building takes advantage of it without deteriorating it, yielding energy-efficient designs.
Bjertnæs, Geir Haakon
2005-01-01
The desirability for production efficiency is re-examined in this study, where agents choose occupation based on lifetime income net of tuition costs. Efficient revenue raising implies that the government should trade off efficiency in production for efficiency in intertemporal consumption, as capital income is taxed in optimum. The subsequent wage difference between high- and low-skilled occupations is increased compared to a production efficient outcome, which is in contrast to previous res...
Weinigel, Martin; Kellner, Albert L; Price, Jeffrey H
2009-12-01
Image-based autofocus determines focus directly from the specimen (as opposed to reflective surface positioning with an offset), but sequential acquisition of a stack of images to measure resolution/sharpness and find best focus is slower than reflective positioning. Simultaneous imaging of multiple focal planes, which is also useful for 3D imaging of live cells, is faster but requires complicated optics. With color CCD cameras and white light sources commonly available, we asked if axial chromatic aberration can be utilized to acquire multiple focal planes simultaneously, and if it can be controlled through a range sufficient for practical use. For proof of concept, we theoretically and experimentally explored the focal differences between three narrow wavelength bands on a 3-chip color CCD camera with and without glass inserts of various thicknesses and dispersions. Ray tracing yielded changes in foci of 0.65-0.9 µm upon insertion of 12.5-mm thick glass samples for green (G, 522 nm) vs. blue (B, 462 nm) and green vs. red (G-R, 604 nm). On a microscope: (1) With no glass inserts, the differences in foci were 2.15 µm (G-B) and 0.43 µm (G-R); (2) With glass inserts, the maximum change in foci for G vs. B was 0.44 µm and for G vs. R was 0.26 µm; and (3) An 11.3 mm thick N-BK7 glass insert shifted the foci 0.9 µm (R), 0.6 µm (G), and 0.35 µm (B), such that the B and R foci were farther apart (2.1 µm vs. 1.7 µm) and the R and G foci were closer together (0.25 µm vs. 0.45 µm). The slopes of the differences in foci were dependent on thickness, index of refraction, and dispersion. The measured differences in foci are comparable to the axial steps of 0.1-0.24 µm commonly used for autofocus, and focal plane separation can be altered by inserting optical elements of various dispersions and thicknesses. By enabling acquisition of multiple, axially offset images simultaneously, chromatic aberration, normally an imaging pariah
I.P. van Staveren (Irene)
2009-01-01
The dominant economic theory, neoclassical economics, employs a single economic evaluative criterion: efficiency. Moreover, it assigns this criterion a very specific meaning. Other – heterodox – schools of thought in economics tend to use more open concepts of efficiency, related to comm
Costa, Rui J.; Wilkinson-Herbots, Hilde
2017-01-01
The isolation-with-migration (IM) model is commonly used to make inferences about gene flow during speciation, using polymorphism data. However, it has been reported that the parameter estimates obtained by fitting the IM model are very sensitive to the model’s assumptions—including the assumption of constant gene flow until the present. This article is concerned with the isolation-with-initial-migration (IIM) model, which drops precisely this assumption. In the IIM model, one ancestral population divides into two descendant subpopulations, between which there is an initial period of gene flow and a subsequent period of isolation. We derive a very fast method of fitting an extended version of the IIM model, which also allows for asymmetric gene flow and unequal population sizes. This is a maximum-likelihood method, applicable to data on the number of segregating sites between pairs of DNA sequences from a large number of independent loci. In addition to obtaining parameter estimates, our method can also be used, by means of likelihood-ratio tests, to distinguish between alternative models representing the following divergence scenarios: (a) divergence with potentially asymmetric gene flow until the present, (b) divergence with potentially asymmetric gene flow until some point in the past and in isolation since then, and (c) divergence in complete isolation. We illustrate the procedure on pairs of Drosophila sequences from ∼30,000 loci. The computing time needed to fit the most complex version of the model to this data set is only a couple of minutes. The R code to fit the IIM model can be found in the supplementary files of this article. PMID:28193727
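The likelihood-ratio comparison between nested divergence scenarios (e.g. complete isolation vs. IIM with an initial period of gene flow) can be sketched generically. The log-likelihood values below are made up for illustration; note also that when a parameter such as a migration rate is tested at its boundary (zero), the plain chi-square reference distribution is conservative.

```python
def likelihood_ratio_stat(loglik_null, loglik_alt):
    """LRT statistic 2*(l_alt - l_null) for nested models; compare to a
    chi-square quantile with df = number of extra free parameters."""
    return 2.0 * (loglik_alt - loglik_null)

CHI2_95_DF1 = 3.841  # 95% quantile of the chi-square distribution, 1 df

# illustrative: one extra parameter, statistic 8.4 -> reject the null model
stat = likelihood_ratio_stat(-1204.3, -1200.1)
reject_null = stat > CHI2_95_DF1
```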
Thanassoulas, C
2008-01-01
Any seismogenic area in the lithosphere is considered as an open physical system. Following the energy balance analysis presented earlier (Part I; Thanassoulas, 2008), the specific case in which the seismogenic area is under normal seismogenic conditions (input energy equals released energy) is studied. In this case the cumulative seismic energy release is a linear function of time. Starting from this linear function, a method is postulated for determining the maximum expected magnitude of a future earthquake. The proposed method has been tested "a posteriori" on real earthquakes from Greece and the USA and on data obtained from the seismological literature. The obtained results validate the methodology, and an analysis is presented that justifies the high degree of accuracy obtained in comparison with the corresponding earthquake magnitudes calculated by seismological methods.
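The abstract does not state the energy-to-magnitude conversion used; one common choice is the Gutenberg-Richter energy relation log10 E = 1.5 M + 4.8 (E in joules), which turns an accumulated energy deficit under a linear cumulative-input function into a maximum expected magnitude. A sketch under that assumption:

```python
import math

def magnitude_from_energy(energy_joules):
    """Invert the Gutenberg-Richter energy relation log10 E = 1.5*M + 4.8
    (an assumed, standard choice; the paper may use a different relation)."""
    return (math.log10(energy_joules) - 4.8) / 1.5

def max_expected_magnitude(input_rate_j_per_yr, years_accumulating,
                           energy_released_j=0.0):
    """Magnitude bound if the energy deficit under the linear cumulative
    input (rate * time, minus what was already released seismically)
    were released in a single event."""
    deficit = input_rate_j_per_yr * years_accumulating - energy_released_j
    return magnitude_from_energy(deficit)
```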
Bromley, Gordon R. M.; Schaefer, Joerg M.; Hall, Brenda L.; Rademaker, Kurt M.; Putnam, Aaron E.; Todd, Claire E.; Hegland, Matthew; Winckler, Gisela; Jackson, Margaret S.; Strand, Peter D.
2016-09-01
Resolving patterns of tropical climate variability during and since the last glacial maximum (LGM) is fundamental to assessing the role of the tropics in global change, both on ice-age and sub-millennial timescales. Here, we present a 10Be moraine chronology from the Cordillera Carabaya (14.3°S), a sub-range of the Cordillera Oriental in southern Peru, covering the LGM and the first half of the last glacial termination. Additionally, we recalculate existing 10Be ages using a new tropical high-altitude production rate in order to put our record into broader spatial context. Our results indicate that glaciers deposited a series of moraines during marine isotope stage 2, broadly synchronous with global glacier maxima, but that maximum glacier extent may have occurred prior to stage 2. Thereafter, atmospheric warming drove widespread deglaciation of the Cordillera Carabaya. A subsequent glacier resurgence culminated at ∼16,100 yrs, followed by a second period of glacier recession. Together, the observed deglaciation corresponds to Heinrich Stadial 1 (HS1: ∼18,000-14,600 yrs), during which pluvial lakes on the adjacent Peruvian-Bolivian altiplano rose to their highest levels of the late Pleistocene as a consequence of southward displacement of the inter-tropical convergence zone and intensification of the South American summer monsoon. Deglaciation in the Cordillera Carabaya also coincided with the retreat of higher-latitude mountain glaciers in the Southern Hemisphere. Our findings suggest that HS1 was characterised by atmospheric warming and indicate that deglaciation of the southern Peruvian Andes was driven by rising temperatures, despite increased precipitation. Recalculated 10Be data from other tropical Andean sites support this model. Finally, we suggest that the broadly uniform response during the LGM and termination of the glaciers examined here involved equatorial Pacific sea-surface temperature anomalies and propose a framework for testing the viability
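Behind a 10Be moraine chronology sits a simple exposure-age relation; a minimal no-erosion, no-inheritance version is sketched below. The half-life is the accepted ~1.387 Myr value, and the production rate passed in would come from a calibration such as the tropical high-altitude rate mentioned above.

```python
import math

LAMBDA_BE10 = math.log(2.0) / 1.387e6  # 10Be decay constant, 1/yr

def exposure_age(n_atoms_per_g, production_rate):
    """Surface exposure age (years) for a sample with 10Be concentration
    N (atoms/g) and local production rate P (atoms/g/yr), assuming zero
    erosion and zero inheritance: t = -ln(1 - lambda*N/P) / lambda."""
    return -math.log(1.0 - LAMBDA_BE10 * n_atoms_per_g / production_rate) / LAMBDA_BE10
```

For young surfaces the decay correction is small, so t ≈ N/P; a concentration of 16,100·P atoms/g returns an age only slightly above 16,100 yr.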
Feranec, Robert S.; Kozlowski, Andrew L.
2016-03-01
To understand what factors control species colonization and extirpation within specific paleoecosystems, we analyzed radiocarbon dates of megafaunal mammal species from New York State after the Last Glacial Maximum. We hypothesized that the timing of colonization and extirpation were both driven by access to preferred habitat types. Bayesian calibration of a database of 39 radiocarbon dates shows that caribou (Rangifer tarandus) were the first colonizers, then mammoth (Mammuthus sp.), and finally American mastodon (Mammut americanum). The timing of colonization cannot reject the hypothesis that colonizing megafauna tracked preferred habitats, as caribou and mammoth arrived when tundra was present, while mastodon arrived after boreal forest was prominent in the state. The timing of caribou colonization implies that ecosystems were developed in the state prior to 16,000 cal yr BP. The contemporaneous arrival of American mastodon with Sporormiella spore decline suggests the dung fungus spore is not an adequate indicator of American mastodon population size. The pattern in the timing of extirpation is opposite to that of colonization. The lack of environmental changes suspected to be ecologically detrimental to American mastodon and mammoth coupled with the arrival of humans shortly before extirpation suggests an anthropogenic cause in the loss of the analyzed species.
楚双霞; 刘林华
2011-01-01
There are five typical approximate formulae for the maximum conversion efficiency that are often used in second-law analyses of the utilization of terrestrial solar radiation. Based on Candau's definition of radiative exergy and the solar spectral radiation databank developed by Gueymard, the maximum conversion efficiencies (exergy-to-energy ratio) of terrestrial solar radiation under different air masses and tilt angles were obtained and taken as the benchmark solution. The accuracies of the five typical approximate formulae were compared and analyzed under different atmospheric conditions and tilt angles. The results show that the approximate formulae proposed by Petela, Spanner, Parrot and Jeter overestimate the maximum conversion efficiency of terrestrial solar radiation, while that proposed by Badescu greatly underestimates it. Atmospheric conditions heavily affect the maximum conversion efficiency of terrestrial solar radiation and should be taken into account in its exact computation for second-law analyses of solar energy conversion systems.
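Three of the five approximate formulae compared in the paper have well-known closed forms (Petela, Jeter/Carnot, Spanner); the Parrot and Badescu variants involve additional geometric or spectral terms and are omitted here. A sketch, with an ambient temperature of 300 K and an effective solar temperature of 5777 K as illustrative inputs:

```python
def petela(t_ambient, t_sun):
    """Petela's maximum conversion efficiency of blackbody radiation:
    1 - (4/3)x + (1/3)x^4, with x = T0/Ts."""
    x = t_ambient / t_sun
    return 1.0 - 4.0 * x / 3.0 + x ** 4 / 3.0

def jeter(t_ambient, t_sun):
    """Jeter's formula: the Carnot efficiency 1 - T0/Ts."""
    return 1.0 - t_ambient / t_sun

def spanner(t_ambient, t_sun):
    """Spanner's formula: 1 - (4/3)(T0/Ts)."""
    return 1.0 - 4.0 * t_ambient / (3.0 * t_sun)

efficiencies = {f.__name__: f(300.0, 5777.0) for f in (petela, jeter, spanner)}
```

The ordering spanner < petela < jeter always holds for 0 < T0 < Ts, which is a quick consistency check on the formulae.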
Makarova, Maria; Wright, James D.; Miller, Kenneth G.; Babila, Tali L.; Rosenthal, Yair; Park, Jill I.
2017-01-01
We present new δ13C and δ18O records of surface (Morozovella and Acarinina) and thermocline dwelling (Subbotina) planktonic foraminifera and benthic foraminifera (Gavelinella, Cibicidoides, and Anomalinoides) during the Paleocene-Eocene Thermal Maximum (PETM) from Millville, New Jersey, and compare them with three other sites located along a paleoshelf transect from the U.S. mid-Atlantic coastal plain. Our analyses show different isotopic responses during the PETM in surface versus thermocline and benthic species. Whereas all taxa record a 3.6-4.0‰ δ13C decrease associated with the carbon isotope excursion, thermocline dwellers and benthic foraminifera show larger δ18O decreases compared to surface dwellers. We consider two scenarios that can explain the observed isotopic records: (1) a change in the water column structure and (2) a change in habitat or calcification season of the surface dwellers due to environmental stress (e.g., warming, ocean acidification, surface freshening, and/or eutrophication). In the first scenario, persistent warming during the PETM would have propagated heat into deeper layers and created a more homogenous water column with a thicker warm mixed layer and deeper, more gradual thermocline. We attribute the hydrographic change to decreased meridional thermal gradients, consistent with models that predict polar amplification. The second scenario assumes that environmental change was greater in the mixed layer forcing surface dwellers to descend into thermocline waters as a refuge or restrict their calcification to the colder seasons. Although both scenarios are plausible, similar δ13C responses recorded in surface, thermocline, and benthic foraminifera challenge mixed layer taxa migration.
Collatz, G.J. [NASA/GSFC, Greenbelt, MD (United States); Clark, J.S. [Duke Univ., Durham, NC (United States); Berry, J.A. [Carnegie Institution of Washington, Stanford, CA (United States)
1995-06-01
Differential distributions of C3 and C4 grass taxa correlate with geographic and climatic factors. A simple model based on the temperature dependence of the photosynthetic quantum yield of C3 plants and the lack of response of the C4 quantum yield to temperature is used to predict the global distribution of C4 grasses at current atmospheric CO2 concentrations and climate. The model predicts a crossover temperature at which the quantum yield responses intersect; at temperatures above the crossover point C4 grasses are favored over C3. The crossover temperature is about 22 °C at current atmospheric CO2 concentrations. Using this criterion an accurate 1x1 degree map of C4 grass dominance over C3 grasses is produced from climatological mean monthly temperatures. Accuracy is improved by considering the co-occurrence of sufficient rainfall for growth during the months warm enough for C4 dominance. Rising temperatures and CO2 concentrations since the last glacial maximum (LGM) are expected to have had an impact on past C4 grass distributions. We have used climate generated by the NCAR CCM to predict the extent of climatic regions favoring C4 over C3 since the LGM. Though low temperatures favor C3 photosynthesis, the low CO2 concentrations in the past more than offset this effect. The extent of C4-favorable climates is predicted to have been greater during the LGM and to have shrunk since then. The model does not take into account important biotic factors such as competition for light and herbivory or abiotic factors such as fire frequency that can affect the dominance of grasslands over other vegetation types.
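The mapping criterion described above, a month warm enough to cross the quantum-yield crossover and wet enough for growth, can be sketched as a per-cell classifier. The 22 °C crossover is from the abstract; the 25 mm monthly rainfall threshold is an illustrative assumption, not the paper's calibrated value.

```python
def c4_dominated(monthly_temp_c, monthly_precip_mm,
                 crossover_c=22.0, min_precip_mm=25.0):
    """True if any month in a grid cell's climatology is both above the
    C3/C4 quantum-yield crossover temperature and wet enough for growth."""
    return any(t > crossover_c and p >= min_precip_mm
               for t, p in zip(monthly_temp_c, monthly_precip_mm))

# a cell with one warm, wet month is classed C4; a warm but dry cell is not
warm_wet = c4_dominated([15.0, 25.0], [50.0, 60.0])
warm_dry = c4_dominated([15.0, 25.0], [50.0, 5.0])
```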
Knight, Jasper
2016-10-01
Southwest Ireland is a critical location to examine the sensitivity of late Pleistocene glaciers to climate variability in the northeast Atlantic, because of its proximal location to Atlantic moisture sources and the presence of high mountains in the Macgillycuddy's Reeks range which acted as a focus for glacierization (Harrison et al., 2010). The extent of Last Glacial Maximum (LGM) glaciers in southwest Ireland and their link to the wider British-Irish Ice Sheet (BIIS), however, is under debate. Some models suggest that during the LGM the region was wholly inundated by ice from the larger BIIS (Warren, 1992; Sejrup et al., 2005), whereas others suggest north-flowing ice from the semi-independent Cork-Kerry Ice Cap (CKIC) was diverted around mountain peaks, resulting in exposed nunataks in the Macgillycuddy's Reeks (Anderson et al., 2001; Ballantyne et al., 2011). Cirque glaciers may also have been present on mountain slopes above this regional ice surface (Warren, 1979; Rea et al., 2004). More recently, investigations have focused on the extent and age of cirque glaciers in the Reeks, based on the mapped distribution of end moraines (Warren, 1979; Harrison et al., 2010), and on cosmogenic dates on boulders on these moraines (Harrison et al., 2010) and on associated scoured bedrock surfaces across the region (Ballantyne et al., 2011). The recent paper by Barth et al. (2016) contributes to this debate by providing nine cosmogenic 10Be ages on boulders from two moraines from one small (∼1.7 km2) and low (373 m elevation of the cirque floor) cirque basin at Alohart (52°00′50″N, 9°40′30″W) within the Reeks range. These dates are welcomed because they add to the lengthening list of age constraints on geomorphic activity in the region that spans the time period from the LGM to early Holocene.
Novignon, Jacob; Nonvignon, Justice
2017-06-12
Health centers in Ghana play an important role in health care delivery, especially in deprived communities. They usually serve as the first line of service and meet basic health care needs. Unfortunately, these facilities are faced with inadequate resources. While health policy makers seek to increase resources committed to primary healthcare, it is important to understand the nature of inefficiencies that exist in these facilities. Therefore, the objectives of this study are threefold: (i) estimate efficiency among primary health facilities (health centers), (ii) examine the potential fiscal space from improved efficiency and (iii) investigate the efficiency disparities in public and private facilities. Data were from the 2015 Access Bottlenecks, Cost and Equity (ABCE) project conducted by the Institute for Health Metrics and Evaluation. Stochastic frontier analysis (SFA) was used to estimate the efficiency of health facilities. Efficiency scores were then used to compute potential savings from improved efficiency. Outpatient visits were used as the output, while the numbers of personnel and hospital beds and expenditure on other capital items and administration were used as inputs. Disparities in efficiency between public and private facilities were estimated using the Nopo matching decomposition procedure. The average efficiency score across all health centers included in the sample was estimated to be 0.51. Average efficiency was estimated to be about 0.65 and 0.50 for private and public facilities, respectively. Significant disparities in efficiency were identified across the various administrative regions. With regard to potential fiscal space, we found that, on average, facilities could save about GH₵11,450.70 (US$7633.80) if efficiency were improved. We also found that fiscal space from efficiency gains varies across rural/urban as well as private/public facilities, if best practices are followed. The matching decomposition showed an efficiency gap of 0.29 between private
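The fiscal-space calculation implied above treats the efficiency shortfall as avoidable spending. A minimal sketch, with hypothetical facility budgets (not the study's data) and assuming savings = spending × (1 − efficiency score):

```python
# Illustrative fiscal space from efficiency gains.
# Assumption: the share (1 - efficiency) of current spending is avoidable.
def potential_savings(spending, efficiency):
    """Per-facility avoidable spending given SFA efficiency scores in [0, 1]."""
    return [s * (1.0 - e) for s, e in zip(spending, efficiency)]

spend = [20000.0, 15000.0, 30000.0]   # hypothetical facility budgets (GHS)
eff   = [0.51, 0.65, 0.50]            # efficiency scores, e.g. from SFA
savings = potential_savings(spend, eff)
avg_saving = sum(savings) / len(savings)
```

The study's GH₵11,450.70 average-saving figure is the same kind of quantity, computed over the full ABCE sample.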
Deurs, Mikael van; Christensen, Asbjørn; Rindorf, Anna
2013-01-01
Sandeel display strong site-fidelity, and spend most of their life buried in the seabed. This strategy carries important ecological implications. Sandeels save energy when they are not foraging but in return are unable to move substantially and therefore possibly are sensitive to local depletion...... sandeel densities and growth rates per area than larger habitats...
Bhattacharya, A.; Lora, J. M.; Pollen, A.; Vollmer, T.; Thomas, M.; Leithold, E. L.; Mitchell, J.; Tripati, A.
2016-12-01
contribution. Most importantly, we find that during the Last Glacial Maximum (LGM) the Great Plains may not have witnessed an increase in the incidence of tornado frequency. Acknowledgements: James Sigman, Jacob Ashford, Jason Neff and Amato Evan
K. Arpe
2011-02-01
Model simulations of the last glacial maximum (21 ± 2 ka) with the ECHAM3 T42 atmosphere-only, ECHAM5-MPIOM T31 atmosphere-ocean coupled and ECHAM5 T106 atmosphere-only models are compared. The topography, land-sea mask and glacier distribution for the ECHAM5 simulations were taken from the Paleoclimate Modelling Intercomparison Project Phase II (PMIP2) data set, while for ECHAM3 they were taken from PMIP1. The ECHAM5-MPIOM T31 model produced its own sea surface temperatures (SSTs), while the ECHAM5 T106 simulations were forced at the boundaries by these coupled-model SSTs corrected for their present-day biases, and the ECHAM3 T42 model was forced with prescribed SSTs provided by the Climate/Long-Range Investigation, Mapping, and Prediction project (CLIMAP).
The SSTs in the ECHAM5-MPIOM simulation for the last glacial maximum (LGM) were much warmer in the northern Atlantic than those suggested by CLIMAP or the Overview of Glacial Atlantic Ocean Mapping (GLAMAP), while the SSTs were cooler everywhere else. This had a clear effect on the temperatures over Europe: winters were warmer in western Europe and cooler in eastern Europe than in the simulation with CLIMAP SSTs.
Considerable differences in the general circulation patterns were found in the different simulations. A ridge over western Europe for the present climate during winter in the 500 hPa height field remains in both ECHAM5 simulations for the LGM, more so in the T106 version, while the ECHAM3 CLIMAP-SST simulation provided a trough which is consistent with cooler temperatures over western Europe. The zonal wind between 30° W and 10° E shows a southward shift of the polar and subtropical jets in the simulations for the LGM, least obvious in the ECHAM5 T31 one, and an extremely strong polar jet for the ECHAM3 CLIMAP-SST run. The latter can probably be assigned to the much stronger north-south gradient in the CLIMAP SSTs. The southward shift of the polar jet during the LGM is supported by
U.S. refinery efficiency: impacts analysis and implications for fuel carbon policy implementation.
Forman, Grant S; Divita, Vincent B; Han, Jeongwoo; Cai, Hao; Elgowainy, Amgad; Wang, Michael
2014-07-01
In the next two decades, the U.S. refining industry will face significant changes resulting from a rapidly evolving domestic petroleum energy landscape. The rapid influx of domestically sourced tight light oil and relative demand shifts for gasoline and diesel will impose challenges on the ability of the U.S. refining industry to satisfy both demand and quality requirements. This study uses results from Linear Programming (LP) modeling data to examine the potential impacts of these changes on refinery, process unit, and product-specific efficiencies, focusing on current baseline efficiency values across 43 existing large U.S. refineries that are operating today. These results suggest that refinery and product-specific efficiency values are sensitive to crude quality, seasonal and regional factors, and refinery configuration and complexity, which are determined by final fuel specification requirements. Additional processing of domestically sourced tight light oil could marginally increase refinery efficiency, but these benefits could be offset by crude rebalancing. The dynamic relationship between efficiency and key parameters such as crude API gravity, sulfur content, heavy products, residual upgrading, and complexity are key to understanding possible future changes in refinery efficiency. Relative to gasoline, the efficiency of diesel production is highly variable, and is influenced by the number and severity of units required to produce diesel. To respond to future demand requirements, refiners will need to reduce the gasoline/diesel (G/D) production ratio, which will likely result in greater volumes of diesel being produced through less efficient pathways resulting in reduced efficiency, particularly on the marginal barrel of diesel. This decline in diesel efficiency could be offset by blending of Gas to Liquids (GTL) diesel, which could allow refiners to uplift intermediate fuel streams into more efficient diesel production pathways, thereby allowing for the
Faramarz eFaghihi
2015-03-01
Information processing in the hippocampus begins by transferring spiking activity of the Entorhinal Cortex (EC) into the Dentate Gyrus (DG). The activity pattern in the EC is separated by the DG such that it plays an important role in hippocampal functions including memory. The structural and physiological parameters of these neural networks enable the hippocampus to be efficient in encoding the large number of inputs that animals receive and process in their lifetime. The neural encoding capacity of the DG depends on its single-neuron encoding and pattern separation efficiency. In this study, encoding by the DG is modelled such that single-neuron and pattern separation efficiency are measured using simulations over different parameter values. For this purpose, a probabilistic model of single-neuron efficiency is presented to study the role of structural and physiological parameters. The known numbers of neurons in the EC and the DG are used to construct a neural network based on the electrophysiological features of DG neurons. Separated inputs, represented as activated neurons in the EC with different firing probabilities, are presented to the DG. For different connectivity rates between the EC and DG, the pattern separation efficiency of the DG is measured. The results show that in the absence of feedback inhibition on the DG neurons, the DG demonstrates low separation efficiency and high firing frequency. Feedback inhibition can increase separation efficiency while resulting in very low single-neuron encoding efficiency in the DG and very low firing frequency of DG neurons (sparse spiking). This work presents a mechanistic explanation for experimental observations in the hippocampus, in combination with theoretical measures. Moreover, the model predicts a critical role for impaired inhibitory neurons in schizophrenia, where deficiency in the pattern separation of the DG has been observed.
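The qualitative effect of feedback inhibition described above (sparse spiking at the cost of firing frequency) can be sketched as follows. The network sizes, the 20% connectivity rate, and the winner-take-all inhibition rule are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np

# Minimal EC -> DG expansion sketch with optional feedback inhibition.
rng = np.random.default_rng(0)
N_EC, N_DG = 100, 500   # DG much larger than EC, as in the hippocampus
W = (rng.random((N_DG, N_EC)) < 0.2).astype(float)  # assumed 20% connectivity

def dg_response(ec_pattern, inhibition=True, k=25):
    """Drive DG units; with feedback inhibition only the k most strongly
    driven units fire (winner-take-all style sparsening)."""
    drive = W @ ec_pattern
    out = np.zeros(N_DG)
    if inhibition:
        out[np.argsort(drive)[-k:]] = 1.0   # sparse code
    else:
        out[drive > drive.mean()] = 1.0     # dense, high-frequency code
    return out

def overlap(a, b):
    """Fraction of co-active units between two DG codes (lower = better separated)."""
    return (a * b).sum() / max(a.sum(), b.sum())

# Two similar EC input patterns differing in 10 of 100 units
p1 = (rng.random(N_EC) < 0.3).astype(float)
p2 = p1.copy()
flip = rng.choice(N_EC, size=10, replace=False)
p2[flip] = 1 - p2[flip]

sep_inh = overlap(dg_response(p1), dg_response(p2))
sep_no = overlap(dg_response(p1, inhibition=False),
                 dg_response(p2, inhibition=False))
```

With inhibition, exactly k of 500 DG units fire (sparse spiking); without it, roughly half the population fires, reproducing the high-frequency regime the abstract describes.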
Park, Won Young; Phadke, Amol; Shah, Nihar [Environmental Energy Technologies Division, Lawrence Berkeley National Laboratory, Berkeley, CA (United States)
2013-08-15
Displays account for a significant portion of electricity consumed in personal computer (PC) use, and global PC monitor shipments are expected to continue to increase. We assess the market trends in the energy efficiency of PC monitors that are likely to occur without any additional policy intervention and estimate that PC monitor efficiency will likely improve by over 40% by 2015, with a saving potential of 4.5 TWh per year in 2015 compared to today's technology. We discuss various energy-efficiency improvement options and evaluate the cost-effectiveness of three of them, at least one of which improves efficiency by at least 20% cost-effectively beyond the ongoing market trends. We assess the potential for further improving efficiency taking into account the recent development of universal serial bus-powered liquid crystal display monitors and find that the current technology available and deployed in them has the potential to deeply and cost-effectively reduce energy consumption by as much as 50%. We provide insights for policies and programs that can be used to accelerate the adoption of efficient technologies to further capture the global energy saving potential from PC monitors, which we estimate to be 9.2 TWh per year in 2015.
Edwards, Elizabeth J; Edwards, Mark S; Lyvers, Michael
2015-06-01
Attentional control theory (ACT) predicts that trait anxiety and situational stress interact to impair performance on tasks that involve attentional shifting. The theory suggests that anxious individuals recruit additional effort to prevent shortfalls in performance effectiveness (accuracy), with deficits becoming evident in processing efficiency (the relationship between accuracy and time taken to perform the task). These assumptions, however, have not been systematically tested. The relationship between cognitive trait anxiety, situational stress, and mental effort in a shifting task (Wisconsin Card Sorting Task) was investigated in 90 participants. Cognitive trait anxiety was operationalized using questionnaire scores, situational stress was manipulated through ego threat instructions, and mental effort was measured using a visual analogue scale. Dependent variables were performance effectiveness (an inverse proportion of perseverative errors) and processing efficiency (an inverse proportion of perseverative errors divided by response time on perseverative error trials). The predictors were not associated with performance effectiveness; however, we observed a significant 3-way interaction on processing efficiency. At higher mental effort (+1 SD), higher cognitive trait anxiety was associated with poorer efficiency independently of situational stress, whereas at lower effort (-1 SD), this relationship was highly significant and most pronounced for those in the high-stress condition. These results are important because they provide the first systematic test of the relationship between trait anxiety, situational stress, and mental effort on shifting performance. The data are also consistent with the notion that effort moderates the relationship between anxiety and shifting efficiency, but not effectiveness.
Main determinants of efficiency and implications on banking concentration in the European Union
Rafael Bautista Mesa
2014-01-01
This study aims to measure the main determinants influencing bank efficiency. We suggest that the bank efficiency ratio, obtained from the income statement, is positively related to the size of a bank in terms of total assets. However, we believe that such a relationship cannot be maintained for banks over a certain size. Using regression analysis, we examine the link between bank efficiency and bank size, using a sample of 3952 banks in the European Union. Our results show that the efficiency ratio stops improving for banks with total assets over $25 billion. Previous literature, using different analysis techniques, has not reached agreement on this point. Furthermore, our study identifies further variables which negatively affect the efficiency of banks, such as competition and lending diversification, or affect it positively, such as the wholesale funding ratio and income diversification. Our findings imply the need for different bank policies depending on total assets, in order to limit the size and activities of banks.
Park, Won Young [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Phadke, Amol [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Shah, Nihar [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2012-06-29
Displays account for a significant portion of electricity consumed in personal computer (PC) use, and global PC monitor shipments are expected to continue to increase. We assess the market trends in the energy efficiency of PC monitors that are likely to occur without any additional policy intervention and estimate that display efficiency will likely improve by over 40% by 2015 compared to today’s technology. We evaluate the cost effectiveness of a key technology which further improves efficiency beyond this level by at least 20% and find that its adoption is cost effective. We assess the potential for further improving efficiency taking into account the recent development of universal serial bus (USB) powered liquid crystal display (LCD) monitors and find that the current technology available and deployed in USB powered monitors has the potential to deeply reduce energy consumption by as much as 50%. We provide insights for policies and programs that can be used to accelerate the adoption of efficient technologies to capture global energy saving potential from PC monitors which we estimate to be 9.2 terawatt-hours [TWh] per year in 2015.
Policies to enhance prescribing efficiency in Europe: findings and future implications
Brian eGodman
2011-01-01
Introduction: European countries need to learn from each other to address unsustainable increases in pharmaceutical expenditures. Objective: To assess the influence of the many supply- and demand-side initiatives introduced across Europe to enhance prescribing efficiency in ambulatory care and, as a result, to provide future guidance to countries. Methods: Cross-national retrospective observational study of utilisation (DDDs, defined daily doses) and expenditure (euros and local currency) of proton pump inhibitors (PPIs) and statins among 19 European countries and regions, principally from 2001 to 2007. Demand-side measures were categorised under the '4Es': education, engineering, economics and enforcement. Results: Instigating supply-side initiatives to lower the price of generics combined with demand-side measures to enhance their prescribing is important to maximise prescribing efficiency. Just addressing one component will limit potential efficiency gains. The influence of demand-side reforms appears additive, with multiple initiatives typically having a greater influence on increasing prescribing efficiency than single measures, apart from potentially 'enforcement'. There are also appreciable differences in expenditure (€/1000 inhabitants/year) between countries. Countries that have not introduced multiple measures to counteract commercial pressures to enhance the prescribing of generics have seen expenditures up to tenfold or more greater than countries that have instigated multiple demand-side measures, albeit in selected populations. Conclusions: There are considerable opportunities for European countries to enhance their prescribing efficiency, with countries already learning from each other. The 4E methodology allows European countries to concisely capture the range of current demand-side measures and plan for the future, knowing that initiatives can be additive to further enhance their prescribing efficiency.
Implications of quenching in efficiency, spectrum shape and alpha/beta separation.
Fons-Castells, J; Díaz, V; Badia, A; Tent-Petrus, J; Llauradó, M
2017-10-01
Liquid scintillation spectrometry (LSS) is a meaningful technique for the determination of alpha and beta emitters. However, this technique is highly affected by quenching phenomena, which reduce the counting efficiency, shift the spectra to low energies and cause misclassification problems. In this paper, a selection of chemical and colour quench agents was evaluated to study the influence of alpha and beta energy and the quenching effect on the detection efficiency, the shape of the spectra and the α/β misclassification.
Mitea, C.; Havenaar, R.; Wouter Drijfhout, J.; Edens, L.; Dekking, L.; Koning, F.; Dekking, E.H.A.
2008-01-01
Background: Coeliac disease is caused by an immune response to gluten. As gluten proteins are proline rich they are resistant to enzymatic digestion in the gastrointestinal tract, a property that probably contributes to the immunogenic nature of gluten. Aims: This study determined the efficiency of
Deurs, Mikael van; Christensen, Asbjørn; Rindorf, Anna
2013-01-01
of prey. Here we studied zooplankton consumption and energy conversion efficiency of lesser sandeel (Ammodytes marinus) in the central North Sea, using stomach data, length and weight-at-age data, bioenergetics, and hydrodynamic modeling. The results suggested: (i) Lesser sandeel in the Dogger area depend...... sandeel densities and growth rates per area than larger habitats...
Carter, Ellison M; Shan, Ming; Yang, Xudong; Li, Jiarong; Baumgartner, Jill
2014-06-03
Household air pollution from solid fuel combustion is the leading environmental health risk factor globally. In China, almost half of all homes use solid fuel to meet their household energy demands. Gasifier cookstoves offer a potentially affordable, efficient, and low-polluting alternative to current solid fuel combustion technology, but pollutant emissions and energy efficiency performance of this class of stoves are poorly characterized. In this study, four Chinese gasifier cookstoves were evaluated for their pollutant emissions and efficiency using the internationally recognized water boiling test (WBT), version 4.1.2. WBT performance indicators included PM2.5, CO, and CO2 emissions and overall thermal efficiency. Laboratory investigation also included evaluation of pollutant emissions (PM2.5 and CO) under stove operating conditions designed to simulate common Chinese cooking practices. High power average overall thermal efficiencies ranged from 22 to 33%. High power average PM2.5 emissions ranged from 120 to 430 mg/MJ of useful energy, and CO emissions ranged from 1 to 30 g/MJ of useful energy. Compared with several widely disseminated "improved" cookstoves selected from the literature, on average, the four Chinese gasifier cookstoves had lower PM2.5 emissions and higher CO emissions. The recent International Organization for Standardization (ISO) International Workshop Agreement on tiered cookstove ranking was developed to help classify stove performance and identify the best-performing stoves. The results from this study highlight potential ways to further improve this approach. Medium power stove operation emitted nearly twice as much PM2.5 as was emitted during high power stove operation, and the lighting phase of a cooking event contributed 45% and 34% of total PM2.5 emissions (combined lighting and cooking). Future approaches to laboratory-based testing of advanced cookstoves could improve to include greater differentiation between different modes of
Laws, E.A. (Hawaii Univ., Honolulu, HI (United States). Dept. of Oceanography Hawaii Inst. of Marine Biology, Honolulu, HI (United States)); Berning, J.L. (Electric Power Research Inst., Palo Alto, CA (United States))
1991-01-01
The photosynthetic efficiency (PE) with which the macroalga Gracilaria tikvihae converts visible light energy into chemical energy was studied as a function of irradiance, temperature and salinity in tumble culture systems at the Natural Energy Laboratory of Hawaii. The photosynthesis/irradiance curve exhibited a typical hyperbolic shape, the associated PEs being at a maximum at a visible irradiance of about 500 kcal m⁻² day⁻¹. The highest PEs were obtained in seawater diluted by 10% with freshwater; the maximum PEs under these conditions exceeded 7% in full sunlight and 12% in the region of optimal irradiance. PEs were almost identical at 21°C and 25°C, but declined sharply as the temperature was reduced below 21°C. Conversion of the algal biomass to methane by anaerobic fermentation resulted in conversion efficiencies as high as 22% at a detention time of 15 days. This efficiency is substantially higher than results reported in an earlier study, the difference apparently reflecting the use of freshwater rather than seawater as the fermentation medium. To the extent that CO2 emissions from electric power plants are reduced by scrubbing the stack gases, growth of algae such as G. tikvihae may be the most logical way to utilize the CO2. If 20% of the CO2 presently emitted by coal-fueled power plants in the US were used to grow algae, an area of land equal to roughly 1% of the area of the United States would be required for the growth of the algae.
Highly efficient forward osmosis based on porous membranes--applications and implications.
Qi, Saren; Li, Ye; Zhao, Yang; Li, Weiyi; Tang, Chuyang Y
2015-04-07
For the first time, forward osmosis (FO) was performed using a porous membrane with an ultrafiltration (UF)-like rejection layer, and its feasibility for high-performance FO filtration was demonstrated. Compared to traditional FO membranes with dense rejection layers, the UF-like FO membrane was 2 orders of magnitude more permeable. This gave rise to respectable FO water flux even at ultralow osmotic driving force, for example, 7.6 L/m²·h at an osmotic pressure of merely 0.11 bar (achieved by using a 0.1% poly(sodium 4-styrene-sulfonate) draw solution). The membrane was applied to oil/water separation, and a highly stable FO water flux was achieved. The adoption of porous FO membranes opens the door to many new opportunities, with potential applications ranging from wastewater treatment to valuable product recovery and biomedical applications. The potential applications and implications of porous FO membranes are addressed in this paper.
Yan, Xiaoyu; Inderwildi, Oliver R; King, David A; Boies, Adam M
2013-06-01
Bioethanol is the world's largest-produced alternative to petroleum-derived transportation fuels due to its compatibility within existing spark-ignition engines and its relatively mature production technology. Despite its success, questions remain over the greenhouse gas (GHG) implications of fuel ethanol use with many studies showing significant impacts of differences in land use, feedstock, and refinery operation. While most efforts to quantify life-cycle GHG impacts have focused on the production stage, a few recent studies have acknowledged the effect of ethanol on engine performance and incorporated these effects into the fuel life cycle. These studies have broadly asserted that vehicle efficiency increases with ethanol use to justify reducing the GHG impact of ethanol. These results seem to conflict with the general notion that ethanol decreases the fuel efficiency (or increases the fuel consumption) of vehicles due to the lower volumetric energy content of ethanol when compared to gasoline. Here we argue that due to the increased emphasis on alternative fuels with drastically differing energy densities, vehicle efficiency should be evaluated based on energy rather than volume. When done so, we show that efficiency of existing vehicles can be affected by ethanol content, but these impacts can serve to have both positive and negative effects and are highly uncertain (ranging from -15% to +24%). As a result, uncertainties in the net GHG effect of ethanol, particularly when used in a low-level blend with gasoline, are considerably larger than previously estimated (standard deviations increase by >10% and >200% when used in high and low blends, respectively). Technical options exist to improve vehicle efficiency through smarter use of ethanol though changes to the vehicle fleets and fuel infrastructure would be required. Future biofuel policies should promote synergies between the vehicle and fuel industries in order to maximize the society-wise benefits or
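The argument above, that efficiency should be compared on an energy basis rather than a volume basis, can be sketched numerically. The lower heating values below are approximate literature figures, not values from the paper, and the fuel-economy numbers are hypothetical:

```python
# Energy- vs volume-based fuel economy for gasoline-ethanol blends.
# Assumed approximate lower heating values (MJ per litre):
LHV_GASOLINE = 32.0
LHV_ETHANOL = 21.1

def blend_energy_density(ethanol_vol_frac):
    """Volumetric energy content of a gasoline-ethanol blend (MJ/L)."""
    return (1 - ethanol_vol_frac) * LHV_GASOLINE + ethanol_vol_frac * LHV_ETHANOL

def km_per_mj(km_per_litre, ethanol_vol_frac):
    """Energy-based efficiency: distance per MJ, independent of blend volume."""
    return km_per_litre / blend_energy_density(ethanol_vol_frac)

# A hypothetical car doing 14.0 km/L on E0 and 13.6 km/L on E10 looks
# less "efficient" by volume, yet is slightly MORE efficient per MJ,
# because E10 carries about 3.4% less energy per litre.
e0 = km_per_mj(14.0, 0.0)
e10 = km_per_mj(13.6, 0.10)
```

This is exactly the sign ambiguity the authors point to: whether the per-MJ figure rises or falls with ethanol content depends on how far the volumetric economy drops relative to the blend's energy deficit.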
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from...
Toly Chen
2014-08-01
Cycle time management plays an important role in improving the performance of a wafer fabrication factory. It starts from the estimation of the cycle time of each job in the wafer fabrication factory. Although this topic has been widely investigated, several issues still need to be addressed, such as how to classify jobs suitable for the same estimation mechanism into the same group. In most existing methods, jobs are classified according to their attributes; however, the differences between the attributes of two jobs may not be reflected in their cycle times. The bi-objective nature of the classification and regression tree (CART) makes it especially suitable for tackling this problem. However, in CART, the cycle times of the jobs of a branch are estimated with the same value, which is far from accurate. For these reasons, this study proposes a joint use of principal component analysis (PCA), CART, and a back propagation network (BPN), in which PCA is applied to construct a series of linear combinations of the original variables to form new variables that are as unrelated to each other as possible. According to the new variables, jobs are classified using CART before estimating their cycle times with BPNs. A real case was used to evaluate the effectiveness of the proposed methodology. The experimental results supported the superiority of the proposed methodology over some existing methods. In addition, the managerial implications of the proposed methodology are discussed with an example.
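A minimal sketch of such a PCA-CART-BPN pipeline, using scikit-learn and synthetic job data; all sizes, hyperparameters, and the residual-refinement detail are illustrative assumptions, not the paper's implementation:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor

# Synthetic "jobs": 6 attributes, cycle time linear in two of them plus noise.
rng = np.random.default_rng(42)
X = rng.random((300, 6))
y = 50 + 30 * X[:, 0] + 10 * X[:, 1] + rng.normal(0.0, 2.0, 300)

# 1) PCA: rotate the original attributes into mutually uncorrelated variables.
Z = PCA(n_components=6).fit_transform(X)

# 2) CART: partition jobs into groups with similar cycle times.
tree = DecisionTreeRegressor(max_leaf_nodes=4, random_state=0).fit(Z, y)
leaf = tree.apply(Z)  # leaf (group) index of each job

# 3) BPN: a small back-propagation network per leaf refines the constant
#    leaf-mean estimate into a job-specific one.
preds = np.empty_like(y)
for leaf_id in np.unique(leaf):
    idx = leaf == leaf_id
    base = y[idx].mean()
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                       random_state=0).fit(Z[idx], y[idx] - base)
    preds[idx] = base + net.predict(Z[idx])

rmse = float(np.sqrt(np.mean((preds - y) ** 2)))
```

The per-leaf networks address exactly the weakness the abstract names: CART alone gives every job in a branch the same estimate, while the BPN restores within-branch variation.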
THE IMPLICATIONS OF SINGLE EURO PAYMENTS AREA (SEPA ON BANKING EFFICIENCY
Mihaita-Cosmin POPOVICI
2014-09-01
With the creation of the euro by the Maastricht Treaty in 1992, European integration deepened. Even so, the financial market remained fragmented. In order to eliminate this disadvantage, the European Union has taken a number of measures. The first step was the Financial Services Action Plan in 2000, through the Lisbon Strategy. The second was European Commission Regulation 2560/2001, which harmonised fees for cross-border and domestic euro transactions. The third was the first pan-European automated clearing house in 2003. The last great step was the Single Euro Payments Area (SEPA) in 2008. In this paper, we examine the degree of implementation of SEPA using quantitative indicators (credit transfers, direct debits and payment cards) and the effects of this system on bank efficiency.
Medical Savings Account: Implications for consumer choice, individual responsibility and efficiency
Mukherjee Kanchan
2012-04-01
Context: The idea of the Medical Savings Account (MSA) was conceived with the objectives of reducing moral hazard, decreasing the cost of health care, enhancing individual responsibility and improving efficiency. However, it is important to note that no implementation of an MSA healthcare policy framework has been perfect. Aims: This paper looks at the broader context of current health policies in different countries and analyzes the reasons why MSAs were put into action and the effects of these implementations. Methods and Material: A secondary literature review was done to analyse the theoretical and empirical evidence with respect to MSAs. Results: Conceptually, MSAs can help eliminate the unnecessary overuse of healthcare by placing more of the financial burden onto the consumer, thereby encouraging individual responsibility. However, for true choice to be provided there needs to be excess capacity in the system and, in addition, a workforce that is responsive to the diversity of patients' wishes. From an economic perspective, the notion that MSAs have instrumental value in achieving an optimal allocation of resources is based on the standard economic theory of markets, with assumptions that do not always hold true in the real world. Hence, efficiency may be compromised by giving 'voice' to choice. Conclusions: There are drawbacks with all financing systems of healthcare, and MSAs are no exception. Future researchers should consider conducting further studies to see whether quality and access to necessary healthcare have improved within an MSA system and whether adding supply-side regulations in conjunction with an MSA system produces better results than each would individually.
Stepniak, Dariusz; Spaenij-Dekking, Liesbeth; Mitea, Cristina; Moester, Martine; de Ru, Arnoud; Baak-Pablo, Renee; van Veelen, Peter; Edens, Luppo; Koning, Frits
2006-10-01
Celiac disease is a T cell-driven intolerance to wheat gluten. The gluten-derived T cell epitopes are proline-rich and thereby highly resistant to proteolytic degradation within the gastrointestinal tract. Oral supplementation with prolyl oligopeptidases has therefore been proposed as a potential therapeutic approach. The enzymes studied, however, have limitations, as they are irreversibly inactivated by pepsin and acidic pH, both present in the stomach. As a consequence, these enzymes will fail to degrade gluten before it reaches the small intestine, the site where gluten induces the inflammatory T cell responses that lead to celiac disease. We have now determined the usefulness of a newly identified prolyl endoprotease from Aspergillus niger for this purpose. Gluten and its peptic/tryptic digest were treated with the prolyl endoprotease, and the destruction of the T cell epitopes was tested using mass spectrometry, T cell proliferation assays, ELISA, reverse-phase HPLC, SDS-PAGE, and Western blotting. We observed that the A. niger prolyl endoprotease works optimally at pH 4-5, remains stable at pH 2, and is completely resistant to digestion with pepsin. Moreover, the A. niger-derived enzyme efficiently degraded all tested T cell stimulatory peptides as well as intact gluten molecules. On average, the endoprotease from A. niger degraded gluten peptides 60 times faster than a prolyl oligopeptidase. Together these results indicate that the enzyme from A. niger efficiently degrades gluten proteins. Future studies are required to determine if the prolyl endoprotease can be used as an oral supplement to reduce gluten intake in patients.
Colas des Francs, G.; Barthes, J.; Bouhelier, A.; Weeber, J. C.; Dereux, A.; Cuche, A.; Girard, C.
2016-09-01
The Purcell factor F_p is a key quantity in cavity quantum electrodynamics (cQED) that quantifies the coupling rate between a dipolar emitter and a cavity mode. Its simple form F_p ∝ Q/V unravels the possible strategies to enhance and control light-matter interaction. Practically, efficient light-matter interaction is achieved thanks to either (i) a high quality factor Q, at the basis of cQED, or (ii) a low modal volume V, at the basis of nanophotonics and plasmonics. In the last decade, considerable effort has been made to derive a plasmonic Purcell factor in order to transpose cQED concepts to the nanoscale, in a scaling-law approach. In this work, we discuss the plasmonic Purcell factor for both delocalized (SPP) and localized (LSP) surface plasmon polaritons and briefly summarize the expected applications for nanophotonics. On the basis of the SPP resonance shape (Lorentzian or Fano profile), we derive closed-form expressions for the coupling rate to delocalized plasmons. The quality factor and modal confinement of both SPPs and LSPs are quantified, demonstrating their strongly subwavelength behavior.
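The scaling quoted in this abstract can be written out in its standard cQED form. The numerical prefactor below is the textbook free-space result (it is not stated in the abstract itself) for an emitter on resonance with a cavity of quality factor Q and mode volume V:

```latex
F_p \;=\; \frac{3}{4\pi^2}\,\left(\frac{\lambda}{n}\right)^{3}\,\frac{Q}{V}
```

Here λ is the emission wavelength and n the refractive index of the medium, so F_p ∝ Q/V as the abstract states.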
Meneses, M J; Bernardino, R L; Sá, R; Silva, J; Barros, A; Sousa, M; Silva, B M; Oliveira, P F; Alves, M G
2016-10-01
Pioglitazone is a synthetic agonist for the nuclear receptor peroxisome proliferator-activated receptor γ used to treat type 2 diabetes mellitus. Recently we reported that antidiabetic drugs regulate the nutritional support of spermatogenesis by Sertoli cells. Herein, we investigate the effects of pioglitazone on human Sertoli cell metabolism. Human Sertoli cells were cultured in the presence of pioglitazone (1, 10, 100 μM). Protein levels of phosphofructokinase 1, lactate dehydrogenase, hexokinase, glucose transporters (GLUT1, GLUT2, GLUT3), monocarboxylate transporter 4 and oxidative phosphorylation complexes were determined by Western blot. Lactate dehydrogenase and alanine aminotransferase activity were assessed, and metabolite production and consumption were determined by proton nuclear magnetic resonance. Mitochondrial membrane potential was also determined. Glucose consumption more than doubled in human Sertoli cells stimulated with pioglitazone 100 μM. Mitochondrial complex II protein levels increased 50% with exposure to pioglitazone (100 μM) in human Sertoli cells, though mitochondrial membrane potential decreased by 32%. The pharmacological concentration of pioglitazone (10 μM) almost doubled lactate production and established crucial correlations among key intervenients of glycolysis. Moreover, at the same concentration, alanine aminotransferase activity decreased more than 80%. Our results suggest that pioglitazone (10 μM) increases the efficiency of the glycolytic flux and lactate production by human Sertoli cells, which is essential to sustain and preserve the spermatogenic event. Thus, pioglitazone may improve male fertility and be considered a suitable antidiabetic drug for men of reproductive age.
The inverse maximum dynamic flow problem
BAGHERIAN; Mehri
2010-01-01
We consider the inverse maximum dynamic flow (IMDF) problem. The IMDF problem can be described as follows: how should the capacity vector of a dynamic network be changed as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow? After discussing some characteristics of this problem, it is converted to a constrained minimum dynamic cut problem. An efficient algorithm which uses two maximum dynamic flow algorithms is then proposed to solve the problem.
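As background for the algorithmic claim above, a minimal static maximum-flow routine (Edmonds-Karp) is sketched below; the dynamic-flow algorithms the paper builds on generalize this kind of routine over time-expanded networks. The graph encoding and function name are illustrative, not taken from the paper.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp maximum flow: repeatedly augment along the
    shortest (BFS) path in the residual graph.
    `capacity` is a dict-of-dicts: capacity[u][v] = arc capacity."""
    # Collect all nodes and build the residual graph
    nodes = set(capacity)
    for u in list(capacity):
        nodes.update(capacity[u])
    residual = {u: {} for u in nodes}
    for u in capacity:
        for v, c in capacity[u].items():
            residual[u][v] = residual[u].get(v, 0) + c
            residual[v].setdefault(u, 0)  # reverse arc for residual pushes

    flow = 0
    while True:
        # BFS for an augmenting path from source to sink
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, c in residual[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow  # no augmenting path left: flow is maximum
        # Find the bottleneck capacity along the path
        bottleneck, v = float("inf"), sink
        while parent[v] is not None:
            bottleneck = min(bottleneck, residual[parent[v]][v])
            v = parent[v]
        # Push the bottleneck flow and update residual capacities
        v = sink
        while parent[v] is not None:
            u = parent[v]
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
            v = u
        flow += bottleneck
```

A minimum cut certifying optimality can then be read off from the nodes reachable from the source in the final residual graph, which is the object the inverse problem perturbs.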
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis contrary to ordinary non-spatial factor analysis gives an objective discrimina...
张玖霞; 方杰
2011-01-01
Taking the positive results achieved by Meihekou City in the intensive, large-scale management of arable land as its starting point, this paper analyzes those results from several angles: government guidance promoting land transfer, preferential support policies creating conditions for large-scale operation, and accelerated transfer of rural labor expanding the space for large-scale operation. At the same time, in view of the problems existing in Meihekou's land transfer process, and from the perspective of maximizing the efficiency of land use, it proposes countermeasures for how Meihekou can carry out large-scale land operation well.
Mundaca, Luis [International Institute for Industrial Environmental Economics at Lund University, P.O. Box 196, SE-221 00 Lund (Sweden)], E-mail: Luis.Mundaca@iiiee.lu.se
2008-11-15
Recent developments in European energy policy reveal an increasing interest in implementing the so-called 'Tradable White Certificate' (TWC) schemes to improve energy efficiency. Based on three evaluation criteria (cost-effectiveness, environmental effectiveness and distributional equity) this paper analyses the implications of implementing a European-wide TWC scheme targeting the household and commercial sectors. Using a bottom-up model, quantitative results show significant cost-effective potentials for improvements (ca. 1400 TWh in cumulative energy savings by 2020), with the household sector, gas and space heating representing most of the TWC supply in terms of eligible sector, fuel and energy service demand, respectively. If a single market price of negative externalities is considered, a societal cost-effective potential of energy savings above 30% (compared to the baseline) is observed. In environmental terms, the resulting greenhouse gas emission reductions are around 200 Mt CO{sub 2-eq} by 2010, representing nearly 60% of the EU-Kyoto-target. From the qualitative perspective, several embedded ancillary benefits are identified (e.g. employment generation, improved comfort level, reduced 'fuel poverty', security of energy supply). Whereas an EU-wide TWC increases liquidity and reduces the risks of market power, autarky compliance strategies may be expected in order to capture co-benefits nationally. Cross subsidies could occur due to investment recovery mechanisms and there is a risk that effects may be regressive for low-income households. Assumptions undertaken by the modelling approach strongly indicate that high effectiveness of other policy instruments is needed for an EU-wide TWC scheme to be cost-effective.
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...
F. Topsøe
2001-09-01
Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information-theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game-theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also here serve as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over...
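The Mean Energy Model mentioned above admits a compact computational sketch: the entropy-maximizing distribution under a mean-energy constraint is the Gibbs form p_i ∝ exp(-β E_i), and the multiplier β can be found by bisection. This is a generic illustration of that classical result, not the authors' Code Length Game machinery; the function name and bracketing interval are illustrative choices.

```python
import math

def maxent_gibbs(energies, target_mean, lo=-50.0, hi=50.0, tol=1e-10):
    """Maximum-entropy distribution over finitely many states subject
    to a mean-energy constraint. The solution has the Gibbs form
    p_i proportional to exp(-beta * E_i); beta is found by bisection.
    Bracketing interval assumes moderate energy values (|beta*E| < ~700)."""
    def mean_energy(beta):
        weights = [math.exp(-beta * e) for e in energies]
        z = sum(weights)
        return sum(w * e for w, e in zip(weights, energies)) / z

    # mean_energy is strictly decreasing in beta, so bisection applies
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_energy(mid) > target_mean:
            lo = mid  # need larger beta to lower the mean energy
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    weights = [math.exp(-beta * e) for e in energies]
    z = sum(weights)
    return [w / z for w in weights]
```

For a symmetric case, e.g. energies 0, 1, 2 with target mean 1, the constraint is met at β = 0 and the result is the uniform distribution, as entropy maximization requires.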
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the usage of the regularized correntropy framework for learning classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with traditional loss functions.
Equalized near maximum likelihood detector
2012-01-01
This paper presents a new detector that is used to mitigate intersymbol interference introduced by bandlimited channels. This detector, named the equalized near maximum likelihood detector, combines a nonlinear equalizer and a near maximum likelihood detector. Simulation results show that the performance of the equalized near maximum likelihood detector is better than that of the nonlinear equalizer but worse than that of the near maximum likelihood detector.
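For intuition, exact maximum likelihood sequence detection over a known ISI channel can be sketched as a brute-force search over candidate symbol sequences; practical near-ML detectors prune this search. The channel taps, block length, and function name below are illustrative assumptions, not the paper's detector.

```python
from itertools import product

def ml_detect(received, h, symbols=(-1, 1)):
    """Brute-force maximum likelihood sequence detection for an ISI
    channel y[k] = sum_i h[i] * s[k-i] + noise (Gaussian noise assumed):
    choose the symbol sequence minimizing squared Euclidean distance.
    Exponential in block length -- only viable for short blocks."""
    n = len(received)
    best, best_cost = None, float("inf")
    for cand in product(symbols, repeat=n):
        cost = 0.0
        for k in range(n):
            # Channel output for this candidate at time k
            y_hat = sum(h[i] * cand[k - i]
                        for i in range(len(h)) if k - i >= 0)
            cost += (received[k] - y_hat) ** 2
        if cost < best_cost:
            best, best_cost = cand, cost
    return list(best)
```

With noiseless observations the true sequence is recovered exactly; the Viterbi algorithm computes the same minimizer in time linear in the block length, which is the usual starting point for near-ML approximations.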
Cheeseman, Peter; Stutz, John
2005-01-01
A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty, and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
Gramlich, Jacob Pleune
I develop, estimate, and utilize an economic model of the U.S. automobile industry. I do so to address policy questions concerning automotive fuel efficiency (the relationship between gasoline used and distance traveled). Fuel efficiency has played a prominent role in our domestic energy policy for over 30 years. Recently it has received even more attention due to rising gas prices and concern over the environment and energy dependence. The model gives quantitative predictions for market fuel efficiency at various gas prices and taxes. The model makes contributions that are both methodological and policy based, and the two chapters of the dissertation focus on each in turn. The first chapter discusses the economic model of the U.S. automobile industry. The model allows firms to choose the fuel efficiency of their new vehicles, which allows me to predict fuel efficiency responses to policy and market conditions. These predictions were not possible with previous economic models, which held fuel efficiency fixed. In the model, consumers care more about fuel efficiency when gas prices are high, and firms face a technological tradeoff between providing fuel efficiency and other quality. The level of the gas price, therefore, working through consumer demand, shifts firms' optimal locations along this technology frontier. Demand is nested logit, supply is differentiated products oligopoly, and data are from the U.S. automobile market from 1971-2007. In addition to endogenizing product choice, I also contribute to the modeling literature by relaxing restrictive identifying assumptions and obtaining more realistic estimates of fuel efficiency preference. The model predicts sales declines and compositions from the summer of 2008 with reasonable success. The second chapter discusses two counterfactual policy scenarios: maintained summer 2008 gas prices, and achieving 35 mpg (miles per gallon). At $3.43 per gallon (the summer 2008 price, 23% above 2007), the model predicts
Andersen, Rikke Sand; Vedsted, Peter
2015-01-01
...efficiency in order to deal with uncertainties and meet more complex or unpredictable needs. Lastly, building on the empirical case of cancer diagnostics, we discuss the implications of the pervasiveness of the logic of efficiency in the clinical setting and argue that provision of medical care in today's primary care settings requires careful balancing of increasing demands of efficiency, greater complexity of biomedical knowledge and consideration for individual patient needs.
Fernández-Fernández, José M.; Andrés, Nuria; Brynjólfsson, Skafti; Sæmundsson, Þorsteinn; Palacios, David
2017-04-01
The Tröllaskagi peninsula is located in northern Iceland, between meridians 19°30'W and 18°10'W, jutting out into the North Atlantic to latitude 66°12'N and joining the central highlands to the south. About 150 glaciers located on the Tröllaskagi peninsula reached their Holocene maximum extent during the Little Ice Age (LIA) maximum at the end of the 19th century. The sudden warming at the turn of the 20th century triggered a continuous retreat from the LIA maximum positions, interrupted by a reversal trend during the mid-seventies and eighties in response to a brief period of climate cooling. The aim of this paper is to analyze the relationships between glacial and climatic evolution since the LIA maximum. For this reason, we selected three small debris-free glaciers: Gljúfurárjökull, and western and eastern Tungnahryggsjökull, at the headwalls of Skíðadalur and Kolbeinsdalur, as their absence of debris cover makes them sensitive to climatic fluctuations. To achieve this purpose, we used ArcGIS to map the glacier extent during the LIA maximum and several dates over four georeferenced aerial photos (1946, 1985, 1994 and 2000), as well as a 2005 SPOT satellite image. Then, the Equilibrium-Line Altitude (ELA) was calculated by applying the Accumulation Area Ratio (AAR) and Area Altitude Balance Ratio (AABR) approaches. Climatological data series from the nearby weather stations were used in order to analyze climate development and to estimate precipitation at the ELA with different numerical models. Our results show considerable changes in the three debris-free glaciers and demonstrate their sensitivity to climatic fluctuations. As a result of the abrupt climatic transition of the 20th century, the following warm 25-year period and the warming started in the late eighties, the three glaciers retreated by ca. 990-1330 m from the LIA maximum to 2005, supported by a 40-metre ELA rise and a reduction of their area and volume of 25% and 33% on average.
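The Accumulation Area Ratio (AAR) approach used above can be sketched numerically: given a glacier hypsometry (area per elevation band, ordered from the glacier toe upward), the ELA is the elevation at which the cumulative area from the toe reaches the ablation fraction 1 - AAR of the total area. The band-based routine below is a simplified illustration under that assumption, not the authors' GIS workflow.

```python
def ela_from_hypsometry(band_elevations, band_areas, aar=0.67):
    """Equilibrium-Line Altitude by the AAR method.

    band_elevations: band elevations ordered low to high (glacier toe first)
    band_areas:      area of each band (same units throughout)
    aar:             assumed accumulation-area ratio (0.67 is a common
                     default for mid-latitude glaciers)
    Returns the elevation of the band where the cumulative area from
    the toe first reaches the ablation fraction (1 - AAR) of the total."""
    total = sum(band_areas)
    ablation_target = (1.0 - aar) * total
    cumulative = 0.0
    for z, a in zip(band_elevations, band_areas):
        cumulative += a
        if cumulative >= ablation_target:
            return z
    return band_elevations[-1]
```

Real applications interpolate within the band containing the threshold; the AABR method additionally weights bands by their distance from the ELA.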
Gillig, Dhazn; McCarl, Bruce A.; Jones, Lonnie L.; Boadu, Frederick
2004-04-01
Groundwater management in the Edwards Aquifer in Texas is in the process of moving away from a traditional right of capture economic regime toward a more environmentally sensitive scheme designed to preserve endangered species habitats. This study explores economic and environmental implications of proposed groundwater management and water development strategies under a proposed regional Habitat Conservation Plan. Results show that enhancing the habitat by augmenting water flow costs $109-1427 per acre-foot and that regional water development would be accelerated by the more extreme possibilities under the Habitat Conservation Plan. The findings also indicate that a water market would improve regional welfare and lower water development but worsen environmental attributes.
Marco Zitti
2015-03-01
Full Text Available The present study illustrates a multidimensional analysis of an indicator of urban land use efficiency (per-capita built-up area, LUE in mainland Attica, a Mediterranean urban region, along different expansion waves (1960–2010: compaction and densification in the 1960s, dispersed growth along the coasts and on Athens’ fringe in the 1970s, fringe consolidation in the 1980s, moderate re-polarization and discontinuous expansion in the 1990s and sprawl in remote areas in the 2000s. The non-linear trend in LUE (a continuous increase up to the 1980s and a moderate decrease in 1990 and 2000 preceding the rise observed over the last decade reflects Athens’ expansion waves. A total of 23 indicators were collected by decade for each municipality of the study area with the aim of identifying the drivers of land use efficiency. In 1960, municipalities with low efficiency in the use of land were concentrated on both coastal areas and Athens’ fringe, while in 2010, the lowest efficiency rate was observed in the most remote, rural areas. Typical urban functions (e.g., mixed land uses, multiple-use buildings, vertical profile are the variables most associated with high efficiency in the use of land. Policies for sustainable land management should consider local and regional factors shaping land use efficiency promoting self-contained expansion and more tightly protecting rural and remote land from dispersed urbanization. LUE is a promising indicator reflecting the increased complexity of growth patterns and may anticipate future urban trends.
Levy, Jonathan I; Wilson, Andrew M; Zwack, Leonard M
2007-05-01
In deciding among competing approaches for emissions control, debates often hinge on the potential tradeoffs between efficiency and equity. However, previous health benefits analyses have not formally addressed both dimensions. We modeled the public health benefits and the change in the spatial inequality of health risk for a number of hypothetical control scenarios for power plants in the United States to determine optimal control strategies. We simulated various ways by which emission reductions of sulfur dioxide (SO2), nitrogen oxides, and fine particulate matter could be achieved under different pollution control strategies, allowing for joint consideration of efficiency and equity.
张艳超; 何济洲
2014-01-01
Based on the low-dissipation Carnot heat engine model, the influence of heat leak on the efficiency at maximum power and its bounds is further discussed. Under the conditions of a Carnot-like heat engine cycle, expressions for the efficiency at maximum power of the low-dissipation Carnot heat engine are derived in the presence of a heat leak between the hot and cold reservoirs during the isothermal expansion and isothermal compression processes, and compared with the classical CA (Curzon-Ahlborn) efficiency in the symmetric case. It is found that, when there is no heat leak, the efficiency at maximum power of the low-dissipation Carnot heat engine equals the CA efficiency. In the presence of a heat leak, the efficiency at maximum power of the low-dissipation Carnot heat engine is lower than the CA efficiency and decreases as the heat leak increases. In the asymmetric case, the upper and lower bounds and the observable range of the efficiency at maximum power with heat leak are obtained and compared with the efficiencies of various kinds of actual heat engines. The results show that, when heat leak is considered, the efficiency at maximum power of the low-dissipation Carnot heat engine and its bounds agree better with the observed values for actual heat engines.
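For reference, the benchmark quantities this abstract compares against are the Curzon-Ahlborn efficiency and the established bounds on the efficiency at maximum power η* of a low-dissipation Carnot engine without heat leak (results from the low-dissipation literature cited in the header, not derived here):

```latex
\eta_{\mathrm{CA}} \;=\; 1 - \sqrt{\frac{T_c}{T_h}},
\qquad
\frac{\eta_C}{2} \;\le\; \eta^{*} \;\le\; \frac{\eta_C}{2-\eta_C},
\qquad
\eta_C \;=\; 1 - \frac{T_c}{T_h}
```

The symmetric-dissipation case recovers η* = η_CA, which is the no-heat-leak limit the abstract refers to.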
Bøjer, Martin; Jensen, Peter Arendt; Dam-Johansen, Kim;
2010-01-01
A relatively low electrical efficiency of 20−25% is obtained in typical west European waste boilers. Ash species released from the grate combustion zone form boiler deposits with high concentrations of Cl, Na, K, Zn, Pb, and S that cause corrosion of superheater tubes at high temperature. The superheater steam temperature has to be limited to around 425 °C, and thereby, the electrical efficiency remains low compared to wood or coal-fired boilers. If a separate part of the flue gas from the grate has a low content of corrosive species, it may be used to superheat steam to a higher temperature, and thereby, the electrical efficiency of the plant can be increased. In this study, the local temperature, the gas concentrations of CO, CO2, and O2, and the release of the volatile elements Cl, S, Na, K, Pb, Zn, Cu, and Sn were measured above the grate in a waste boiler to investigate if a selected fraction...
Stange, P.; Bach, L. T.; Le Moigne, F. A. C.; Taucher, J.; Boxhammer, T.; Riebesell, U.
2017-01-01
The ocean's potential to export carbon to depth partly depends on the fraction of primary production (PP) sinking out of the euphotic zone (i.e., the e-ratio). Measurements of PP and export flux are often performed simultaneously in the field, although there is a temporal delay between those parameters. Thus, resulting e-ratio estimates often incorrectly assume an instantaneous downward export of PP to export flux. Evaluating results from four mesocosm studies, we find that peaks in organic matter sedimentation lag chlorophyll a peaks by 2 to 15 days. We discuss the implications of these time lags (TLs) for current e-ratio estimates and evaluate potential controls of TL. Our analysis reveals a strong correlation between TL and the duration of chlorophyll a buildup, indicating a dependency of TL on plankton food web dynamics. This study is one step further toward time-corrected e-ratio estimates.
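The time lag TL discussed above can be estimated, in the simplest case, as the offset between the chlorophyll-a maximum and the sedimentation maximum in two equally sampled time series. The sketch below, on hypothetical data and with an illustrative function name, captures just that peak-to-peak definition (cross-correlation would be the more robust alternative for noisy series).

```python
def peak_lag(chlorophyll, export_flux):
    """Time lag (in sampling steps) between the chlorophyll-a peak and
    the sedimentation peak. Assumes both series share the same, evenly
    spaced time axis and each has a single dominant peak."""
    chl_peak = chlorophyll.index(max(chlorophyll))
    flux_peak = export_flux.index(max(export_flux))
    return flux_peak - chl_peak
```

A positive lag means export peaks after the bloom; e-ratio estimates that divide same-day export by same-day production implicitly assume this lag is zero.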
Barth, Aaron M.; Clark, Peter U.; Clark, Jorie; McCabe, A. Marshall; Caffee, Marc
2016-10-01
We concluded that our new 10Be chronology records onset of retreat of a cirque glacier within the Alohart basin of southwestern Ireland 24.5 ± 1.4 ka, placing limiting constraints on reconstructions of the Irish Ice Sheet (IIS) and Kerry-Cork Ice Cap (KCIC) during the Last Glacial Maximum (LGM) (Barth et al., 2016). Knight (2016) raises two main arguments against our interpretation: (1) the glacier in the Alohart basin was not a cirque glacier, but instead a southern-sourced ice tongue from the KCIC overtopping the MacGillycuddy's Reeks, and (2) that the boulders we sampled for 10Be exposure dating were derived from supraglacial rockfall rather than transported subglacially, experienced nuclide inheritance, and are thus too old. In the following, we address both of these arguments.
Pan, S.; Yang, J.; Zhang, J.; Xu, R.; Dangal, S. R. S.; Zhang, B.; Tian, H.
2016-12-01
Africa is one of the most vulnerable regions in the world to climate change and climate variability. Much concern has been raised about the impacts of climate and other environmental factors on water resources and food security through the climate-water-food nexus. Understanding the responses of crop yield and water use efficiency to environmental changes is particularly important because Africa is well known for widespread poverty, slow economic growth and agricultural systems particularly sensitive to frequent and persistent droughts. However, the lack of integrated understanding has limited our ability to quantify and predict the potential of Africa's agricultural sustainability and freshwater supply, and to better manage the system for meeting an increasing food demand in a way that is socially and environmentally or ecologically sustainable. By using the Dynamic Land Ecosystem Model (DLEM-AG2) driven by spatially-explicit information on land use, climate and other environmental changes, we have assessed the spatial and temporal patterns of crop yield, evapotranspiration (ET) and water use efficiency across entire Africa in the past 35 years (1980-2015) and the rest of the 21st century (2016-2099). Our preliminary results indicate that African crop yield in the past three decades shows an increasing trend primarily due to cropland expansion (about 50%), elevated atmospheric CO2 concentration, and nitrogen deposition. However, crop yield shows substantial spatial and temporal variation due to inter-annual and inter-decadal climate variability and spatial heterogeneity of environmental drivers. Climate extremes, especially droughts and heat waves, have largely reduced crop yield in the most vulnerable regions. Our results indicate that N fertilizer could be a major driver to improve food security in Africa. Future climate warming could reduce crop yield and shift cropland distribution. Our study further suggests that improving water use efficiency through land
Wang, Wei
2016-01-01
The mineral sphalerite (ZnS) is a typical constituent at the periphery of submarine hydrothermal deposits on Earth. It has frequently been suggested to have played an important role in prebiotic chemistry due to its prominent photocatalytic activity. Nevertheless, its need for short-wavelength (λ < 450 nm) light irradiation has been regarded as a limitation. We show that under λ > 450 nm light irradiation, the photocatalyst Zn1-xCuxS can drive the reduction of fumaric acid to produce succinic acid. Given the existence of this doped semiconductor in the hydrothermal vents on early Earth and its capability to utilize both UV and visible light, ZnS might have participated more efficiently than ever estimated in the prebiotic chemical evolution.
Verhoef, E.T. [Free University Amsterdam, Department of Spatial Economics, Amsterdam (Netherlands); Van Wee, B. [National Institute of Public Health and the Environment (RIVM), Bilthoven (Netherlands)
2000-07-01
Research on 'happiness' suggests that once an average per capita income of around US$10,000 is achieved in a country, further increases in income will not lead to a significant increase in happiness. Additional income will probably often be spent on the satisfaction of mainly 'relative' needs, of which 'status goods' are one example. From that perspective, an overall shift to more fuel-efficient cars (i.e. smaller cars with less power) would not necessarily, or only to a limited extent, result in less happiness. From a welfare-economic perspective, the satisfaction of relative needs pertaining to consumption can be considered a form of consumption externality. This creates a welfare-economic basis for government intervention. A model in which these consumption externalities are studied is presented here. Government intervention would include stimulating consumption of lower-status goods and discouraging consumption of higher-status ones. We speculate, however, that to achieve a significant increase in the fuel efficiency of a country's car fleet through pricing policies, huge price increases may often be needed. As acceptance of price increases as a policy instrument is often low, 'fee-bates' and tradeable permits may be preferable instruments. 36 refs.
Mark Meekan
2015-09-01
The largest animals in the oceans eat prey that are orders of magnitude smaller than themselves, implying strong selection for cost-effective foraging to meet their energy demands. Whale sharks (Rhincodon typus) may be especially challenged by warm seas that elevate their metabolism and contain sparse prey resources. Using a combination of biologging and satellite tagging, we show that whale sharks use four strategies to save energy and improve foraging efficiency: (1) fixed, low-power swimming, (2) constant low-speed swimming, (3) gliding, and (4) asymmetrical diving. These strategies increase foraging efficiency by 22-32% relative to swimming horizontally and resolve the energy-budget paradox of whale sharks. However, sharks in the open ocean must access food resources that reside in relatively cold waters (up to 20 °C cooler than the surface) at depths of 250-500 m during the daytime, where long, slow gliding descents, continuous ram ventilation of the gills and filter-feeding could rapidly cool the circulating blood and body tissues. We suggest that whale sharks may overcome this problem through their large size and a specialized body plan that isolates highly vascularized red muscle on the dorsal surface, allowing heat to be retained near the centre of the body within a massive core of white muscle. This could allow a warm-adapted species to maintain enhanced function of organs and sensory systems while exploiting food resources in deep, cool water.
Narwade, Shankar S.; Mulik, Balaji B.; Mali, Shivsharan M.; Sathe, Bhaskar R.
2017-02-01
Herein, we report the synthesis of silver nanoparticle (Ag NP; 10 ± 0.5 nm)-sensitized fullerene (C60; 15 ± 2 nm) nanocatalysts (Ag@C60), showing for the first time efficient electrocatalytic activity for the oxidation of hydrazine, with activity comparable to that of Pt in acidic, neutral and basic media. The performance is comparable with the best available electrocatalytic systems, and hydrazine oxidation plays a vital role in overall hydrogen generation as one of the fuel cell reactions. The materials are synthesized by a simple and scalable route involving acid functionalization of C60 followed by chemical reduction of Ag+ ions in ethylene glycol at high temperature. The distribution of Ag NPs on C60 (morphology), bonding and crystal structure, along with electrocatalytic activity towards hydrazine oxidation, are studied using TEM, XRD, UV-vis, XPS, FTIR and cyclic voltammetry, respectively. The observed efficient electrocatalytic activity of the as-synthesized electrode is attributed to structural defects introduced by the oxidative functionalization together with the cooperative functioning of the two components at the nanoscale.
Sefkow, Adam B.; Bennett, Guy R.
2010-09-01
Under the auspices of the Science of Extreme Environments LDRD program, a <2 year theoretical- and computational-physics study was performed (LDRD Project 130805) by Guy R. Bennett (formerly in Center-01600) and Adam B. Sefkow (Center-01600) to investigate novel target designs by which a short-pulse, PW-class beam could create a brighter Kα x-ray source than by simple, direct laser irradiation of a flat foil, i.e. direct-foil irradiation (DFI). The computational studies - which are still ongoing at this writing - were performed primarily on the RedStorm supercomputer at Sandia National Laboratories' Albuquerque site. The motivation for a higher-efficiency Kα emitter was very clear: as the backlighter flux for any x-ray imaging technique on the Z accelerator increases, the signal-to-noise and signal-to-background ratios improve. This ultimately allows the imaging system to reach its full quantitative potential as a diagnostic. Depending on the particular application/experiment this would imply, for example, that the system would have reached its full design spatial resolution and thus the capability to see features that might otherwise be indiscernible with a traditional DFI-like x-ray source. This LDRD began in FY09 and ended in FY10.
Gitelson, Anatoly A; Peng, Yi; Arkebauer, Timothy J; Suyker, Andrew E
2015-04-01
Vegetation productivity metrics such as gross primary production (GPP) at the canopy scale are greatly affected by the efficiency of using absorbed radiation for photosynthesis, or light use efficiency (LUE). Thus, close investigation of the relationships between canopy GPP and photosynthetically active radiation absorbed by vegetation is the basis for quantification of LUE. We used multiyear observations over irrigated and rainfed, contrasting C3 (soybean) and C4 (maize) crops having different physiology, leaf structure, and canopy architecture to establish the relationships between canopy GPP and radiation absorbed by vegetation and to quantify LUE. Although multiple LUE definitions are reported in the literature, we used the efficiency of light use by photosynthetically active "green" vegetation (LUE(green)), based on radiation absorbed by "green" photosynthetically active vegetation on a daily basis. We quantified irreversible, slowly changing seasonal (constitutive) and rapidly changing day-to-day (facultative) LUE(green), as well as the sensitivity of LUE(green) to the magnitude of incident radiation and to drought events. Large (2-3-fold) variation of daily LUE(green) over the course of a growing season, governed by crop physiological and phenological status, was observed. The day-to-day variations of LUE(green) oscillated with a magnitude of 10-15% around the seasonal LUE(green) trend and appeared to be closely related to day-to-day variations in the magnitude and composition of incident radiation. Our results show the high variability of LUE(green) between C3 and C4 crop species (1.43 g C/MJ vs. 2.24 g C/MJ, respectively), as well as within single crop species (i.e., maize or soybean). This implies that assuming LUE(green) to be a constant value in GPP models is not warranted for the crops studied, and introduces unpredictable uncertainties into remote GPP estimation, which should be accounted for in LUE models. The uncertainty of GPP estimation due to facultative and
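The LUE definition used here can be written as daily LUE(green) = GPP / APAR(green); a minimal sketch of that ratio (our illustration with made-up daily numbers, not the authors' processing code):

```python
def lue_green(gpp, apar_green):
    """Daily green light-use efficiency (g C per MJ): gross primary
    production divided by radiation absorbed by green vegetation."""
    if apar_green <= 0:
        raise ValueError("absorbed radiation must be positive")
    return gpp / apar_green

# Hypothetical daily values chosen to reproduce the reported crop means:
maize_lue = lue_green(gpp=22.4, apar_green=10.0)    # 2.24 g C/MJ (C4)
soybean_lue = lue_green(gpp=14.3, apar_green=10.0)  # 1.43 g C/MJ (C3)
```

The 2-3-fold seasonal variation reported above is exactly why a single such ratio cannot stand in for the whole growing season.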
The Sherpa Maximum Likelihood Estimator
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
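The background-only versus background-plus-source comparison amounts to a Poisson likelihood-ratio test; a simplified one-dimensional sketch (our illustration only - the actual MLE tool fits 2-D spatial models with per-observation PSFs through Sherpa, which is not reproduced here):

```python
import math

def poisson_loglike(counts, model):
    """Poisson log-likelihood of observed counts given model rates.
    The log(k!) term is dropped: it cancels in likelihood ratios."""
    return sum(k * math.log(m) - m for k, m in zip(counts, model))

# Observed counts in pixels of a candidate source region.
counts = [3, 4, 12, 5, 2]

# Hypothesis 0: flat background of 4 counts per pixel.
bkg = [4.0] * 5
# Hypothesis 1: same background plus a source in the central pixel.
bkg_src = [4.0, 4.0, 12.0, 4.0, 4.0]

# Likelihood-ratio statistic; large values favour a real source.
lam = 2 * (poisson_loglike(counts, bkg_src) - poisson_loglike(counts, bkg))
```

Only the central pixel differs between the two models, so the statistic here reduces to the single-pixel term 2*(12*ln 3 - 8).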
Levy, J.I.; Wilson, A.M.; Zwack, L.M. [Harvard University, Boston, MA (United States). School for Public Health
2007-05-15
We modeled the public health benefits and the change in the spatial inequality of health risk for a number of hypothetical control scenarios for power plants in the United States to determine optimal control strategies. We simulated various ways by which emission reductions of sulfur dioxide (SO{sub 2}), nitrogen oxides, and fine particulate matter (PM2.5) could be distributed to reach national emissions caps. We applied a source-receptor matrix to determine the PM2.5 concentration changes associated with each control scenario and estimated the mortality reductions. We estimated changes in the spatial inequality of health risk using the Atkinson index and other indicators, following previously derived axioms for measuring health risk inequality. In our baseline model, benefits ranged from 17,000-21,000 fewer premature deaths per year across control scenarios. Scenarios with greater health benefits also tended to have greater reductions in the spatial inequality of health risk, as many sources with high health benefits per unit emissions of SO{sub 2} were in areas with high background PM2.5 concentrations. Sensitivity analyses indicated that conclusions were generally robust to the choice of indicator and other model specifications. Our analysis demonstrates an approach for formally quantifying both the magnitude and spatial distribution of health benefits of pollution control strategies, allowing for joint consideration of efficiency and equity.
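One of the inequality indicators used, the Atkinson index, has a standard closed form; a minimal sketch for a list of regional risk values (our illustration - the paper's axiomatic choices and population weighting are not reproduced):

```python
import math

def atkinson(x, eps=0.5):
    """Atkinson inequality index of a list of positive risk values.
    0 means perfect equality; values approach 1 as inequality grows.
    eps is the inequality-aversion parameter."""
    n = len(x)
    mean = sum(x) / n
    if eps == 1:
        # Limiting case: equally distributed equivalent is the geometric mean.
        geo = math.exp(sum(math.log(v) for v in x) / n)
        return 1 - geo / mean
    ede = (sum(v ** (1 - eps) for v in x) / n) ** (1 / (1 - eps))
    return 1 - ede / mean

equal = atkinson([1.0, 1.0, 1.0])      # 0: identical risk everywhere
skewed = atkinson([0.1, 0.1, 2.8])     # positive: risk concentrated
```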
D. Evans
2015-07-01
Much of our knowledge of past ocean temperatures comes from the foraminifera Mg / Ca palaeothermometer. Several non-thermal controls on foraminifera Mg incorporation have been identified, of which vital effects, salinity and secular variation in seawater Mg / Ca are the most commonly considered. Ocean carbonate chemistry is also known to influence Mg / Ca, yet this is rarely considered as a source of uncertainty, either because (1) precise pH and [CO32−] reconstructions are sparse, or (2) it is not clear from existing culture studies how a correction should be applied. We present new culture data of the relationship between carbonate chemistry and Mg / Ca for the surface-dwelling planktic species Globigerinoides ruber, and compare our results to data compiled from existing studies. We find a coherent relationship between Mg / Ca and the carbonate system and argue that pH rather than [CO32−] is likely to be the dominant control. Applying these new calibrations to datasets for the Paleocene-Eocene Thermal Maximum (PETM) and Eocene-Oligocene Transition (EOT) enables us to produce a more accurate picture of surface hydrology change for the former, and a reassessment of the amount of subtropical precursor cooling for the latter. We show that properly corrected Mg / Ca and δ18O datasets for the PETM imply no salinity change, and that the amount of precursor cooling over the EOT has previously been underestimated by ∼ 2 °C based on Mg / Ca. Finally, we present new laser-ablation data of EOT-age Turborotalia ampliapertura from St Stephens Quarry (Alabama), for which a solution ICPMS Mg / Ca record is available (Wade et al., 2012). We show that the two datasets are in excellent agreement, demonstrating that fossil solution and laser-ablation data may be directly comparable. Together with an advancing understanding of the effect of Mg / Casw, the coherent picture of the relationship between Mg / Ca and pH that we outline here represents a step towards producing
Evans, David; Wade, Bridget S.; Henehan, Michael; Erez, Jonathan; Müller, Wolfgang
2016-04-01
Much of our knowledge of past ocean temperatures comes from the foraminifera Mg / Ca palaeothermometer. Several nonthermal controls on foraminifera Mg incorporation have been identified, of which vital effects, salinity, and secular variation in seawater Mg / Ca are the most commonly considered. Ocean carbonate chemistry is also known to influence Mg / Ca, yet this is rarely examined as a source of uncertainty, either because (1) precise pH and [CO32-] reconstructions are sparse or (2) it is not clear from existing culture studies how a correction should be applied. We present new culture data of the relationship between carbonate chemistry and Mg / Ca for the surface-dwelling planktic species Globigerinoides ruber and compare our results to data compiled from existing studies. We find a coherent relationship between Mg / Ca and the carbonate system and argue that pH rather than [CO32-] is likely to be the dominant control. Applying these new calibrations to data sets for the Paleocene-Eocene Thermal Maximum (PETM) and Eocene-Oligocene transition (EOT) enables us to produce a more accurate picture of surface hydrology change for the former and a reassessment of the amount of subtropical precursor cooling for the latter. We show that pH-adjusted Mg / Ca and δ18O data sets for the PETM are within error of no salinity change and that the amount of precursor cooling over the EOT has been previously underestimated by ˜ 2 °C based on Mg / Ca. Finally, we present new laser-ablation data of EOT-age Turborotalia ampliapertura from St. Stephens Quarry (Alabama), for which a solution inductively coupled plasma mass spectrometry (ICPMS) Mg / Ca record is available (Wade et al., 2012). We show that the two data sets are in excellent agreement, demonstrating that fossil solution and laser-ablation data may be directly comparable. Together with an advancing understanding of the effect of Mg / Casw, the coherent picture of the relationship between Mg / Ca and pH that we outline
Ding, Bao-Jian; Lager, Ida; Bansal, Sunil; Durrett, Timothy P; Stymne, Sten; Löfstedt, Christer
2016-04-01
Many moth pheromones are composed of mixtures of acetates of long-chain (≥10 carbon) fatty alcohols. Moth pheromone precursors such as fatty acids and fatty alcohols can be produced in yeast by the heterologous expression of genes involved in insect pheromone production. Acetyltransferases that subsequently catalyze the formation of acetates by transferring the acetate unit from acetyl-CoA to a fatty alcohol have been postulated in pheromone biosynthesis. However, so far no fatty alcohol acetyltransferases responsible for the production of straight-chain alkyl acetate pheromone components in insects have been identified. In search of a non-insect acetyltransferase alternative, we expressed a plant-derived diacylglycerol acetyltransferase (EaDAcT) (EC 2.3.1.20), cloned from the seed of the burning bush (Euonymus alatus), in a yeast system. EaDAcT transformed various fatty alcohol insect pheromone precursors into acetates, but we also found high background acetylation activity. A single yeast enzyme, the acetyltransferase ATF1 (EC 2.3.1.84), was shown to be responsible for the majority of that background activity. We further investigated the usefulness of ATF1 for the conversion of moth pheromone alcohols into acetates in comparison with EaDAcT. Overexpression of ATF1 revealed that it was capable of acetylating fatty alcohols with chain lengths from 10 to 18 carbons, with up to 27-fold and 10-fold higher in vivo and in vitro efficiency, respectively, compared to EaDAcT. The ATF1 enzyme thus has the potential to serve as the missing enzyme in the reconstruction of the biosynthetic pathway of insect acetate pheromones from precursor fatty acids in yeast.
Efstratios I Charitos
OBJECTIVE: Although atrial fibrillation (AF) recurrence is unpredictable in terms of onset and duration, current intermittent rhythm monitoring (IRM) diagnostic modalities are short-term and discontinuous. The aim of the present study was to investigate the IRM frequency required to reliably detect various AF recurrence patterns. METHODS: The rhythm histories of 647 patients (mean AF burden: 12 ± 22% of monitored time; 687 patient-years) with implantable continuous monitoring devices were reconstructed and analyzed. Using computationally intensive simulation, we evaluated the IRM frequency necessary to reliably detect AF recurrence of various AF phenotypes using IRMs of various durations. RESULTS: The IRM frequency required for reliable AF detection depends on the amount and temporal aggregation of the AF recurrence (p<0.0001). Achieving >95% sensitivity for AF recurrence required higher IRM frequencies (>12 24-hour, >6 7-day, >4 14-day, or >3 30-day IRMs per year; p<0.0001) than currently recommended. Lower IRM frequencies will under-detect AF recurrence and introduce significant bias in the evaluation of therapeutic interventions. More frequent but shorter IRMs (24-hour) are significantly more time-effective (sensitivity per monitored time) than fewer, longer IRMs (p<0.0001). CONCLUSIONS: Reliable AF recurrence detection requires higher IRM frequencies than currently recommended. Current IRM frequency recommendations will fail to diagnose a significant proportion of patients. Shorter but more frequent IRM strategies are significantly more efficient than longer IRM durations. CLINICAL TRIAL REGISTRATION URL: Unique identifier: NCT00806689.
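The core question, how often intermittent monitoring must be scheduled to catch a recurrence, can be illustrated with a toy Monte Carlo (our sketch of the general approach only; the study's simulation used 687 patient-years of real reconstructed rhythm histories):

```python
import random

def detect_prob(n_irm_per_year, episode_days, n_trials=2000, seed=1):
    """Probability that at least one randomly scheduled 24-hour IRM
    overlaps a single AF episode of the given length (in days)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        start = rng.uniform(0, 365 - episode_days)   # episode onset (day)
        irms = [rng.uniform(0, 364) for _ in range(n_irm_per_year)]
        # A 24-hour recording starting at t overlaps [start, start+episode]
        # whenever t falls in [start-1, start+episode].
        if any(start - 1 <= t <= start + episode_days for t in irms):
            hits += 1
    return hits / n_trials

low = detect_prob(4, episode_days=7)    # sparse monitoring
high = detect_prob(12, episode_days=7)  # monthly 24-hour IRMs
```

Even in this toy setting, sensitivity grows with monitoring frequency, which is the qualitative behaviour the study quantifies.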
H. Ma
2010-09-01
Although China has surpassed the United States as the world's largest carbon dioxide emitter, in situ measurements of atmospheric CO2 have been sparse in China. This paper analyzes hourly CO2 and its correlation with CO at Miyun, a rural site near Beijing, over a period of 51 months (Dec 2004 through Feb 2009). The CO2-CO correlation analysis, evaluated separately for each hour of the day, provides useful information with statistical significance even in the growing season. We found that the intercept, representing the initial condition imposed by the global distribution of CO2 with the influence of photosynthesis and respiration, exhibits diurnal cycles differing by season. The background CO2 (CO2,b) derived from Miyun observations is comparable to CO2 observed at a Mongolian background station to the northwest. Annual growth of overall mean CO2 at Miyun is estimated at 2.7 ppm yr−1, while that of CO2,b is only 1.7 ppm yr−1, similar to the mean growth rate at northern mid-latitude background stations. This suggests a relatively faster increase in the regional CO2 sources in China than the global average, consistent with bottom-up studies of CO2 emissions. For air masses with trajectories through the northern China boundary layer, mean winter CO2/CO correlation slopes (dCO2/dCO) increased by 2.8 ± 0.9 ppmv/ppmv, or 11%, from 2005-2006 to 2007-2008, with CO2 increasing by 1.8 ppmv. The increase in dCO2/dCO indicates improvement in overall combustion efficiency over northern China after winter 2007, attributed to pollution-reduction measures associated with the 2008 Beijing Olympics. The observed CO2/CO ratio at Miyun is 25% higher than the bottom-up CO2/CO emission ratio, suggesting a contribution of respired CO2 from urban residents as well as agricultural soils and livestock in the observations, and uncertainty in the emission estimates.
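The hourly slope dCO2/dCO and intercept (background CO2) come from ordinary least squares on paired CO2 and CO measurements; a minimal sketch with toy numbers (illustrative only, not the station's analysis code):

```python
def ols(x, y):
    """Return (slope, intercept) of the least-squares line y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Toy hourly data: CO (ppbv) and CO2 (ppmv). The intercept estimates the
# background CO2 and the slope the combustion-related dCO2/dCO.
co = [100, 200, 300, 400]
co2 = [385.0, 387.5, 390.0, 392.5]
slope, background = ols(co, co2)   # slope = 0.025 ppmv/ppbv, background = 382.5
```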
Schubert, J. E.; Sanders, B. F.
2011-12-01
Urban landscapes are at the forefront of current research efforts in the field of flood inundation modeling for two major reasons. First, urban areas hold relatively large economic and social importance, and as such it is imperative to avoid or minimize future damages. Secondly, urban flooding is becoming more frequent as a consequence of continued development of impervious surfaces, population growth in cities, climate change magnifying rainfall intensity, sea level rise threatening coastal communities, and decaying flood defense infrastructure. In reality urban landscapes are particularly challenging to model because they include a multitude of geometrically complex features. Advances in remote sensing technologies and geographical information systems (GIS) have generated fine-resolution data layers that offer a site characterization suitable for urban inundation modeling, including a description of preferential flow paths, drainage networks and surface-dependent resistances to overland flow. Recent research has focused on two-dimensional modeling of overland flow, including within-curb flows and over-curb flows across developed parcels. Studies have focused on mesh design and parameterization, and on sub-grid models that promise improved performance relative to accuracy and/or computational efficiency. This presentation addresses how fine-resolution data, available in Los Angeles County, are used to parameterize, initialize and execute flood inundation models for the 1963 Baldwin Hills dam break. Several commonly used model parameterization strategies, including building-resistance, building-block and building-hole, are compared with a novel sub-grid strategy based on building-porosity. Performance of the models is assessed based on the accuracy of depth and velocity predictions, execution time, and the time and expertise required for model set-up. The objective of this study is to assess field-scale applicability, and to obtain a better understanding of advantages
An Interval Maximum Entropy Method for Quadratic Programming Problem
RUI Wen-juan; CAO De-xin; SONG Xie-wu
2005-01-01
With the idea of maximum entropy function and penalty function methods, we transform the quadratic programming problem into an unconstrained differentiable optimization problem, discuss the interval extension of the maximum entropy function, provide the region deletion test rules and design an interval maximum entropy algorithm for quadratic programming problem. The convergence of the method is proved and numerical results are presented. Both theoretical and numerical results show that the method is reliable and efficient.
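The maximum entropy function referred to here is the smooth aggregate max_i g_i(x) ≈ (1/p) ln Σ_i exp(p·g_i(x)), whose error shrinks as the penalty-like parameter p grows; a quick numerical sketch of that smoothing (illustrative only - the paper's interval-arithmetic and region-deletion machinery is not shown):

```python
import math

def maxent(values, p):
    """Smooth approximation of max(values).
    Overestimates the true max by at most ln(n)/p for n values."""
    m = max(values)  # subtract the max first for numerical stability
    return m + math.log(sum(math.exp(p * (v - m)) for v in values)) / p

g = [0.3, -1.0, 0.9]
approx = maxent(g, p=100.0)   # close to max(g) = 0.9
```

Because the aggregate is differentiable everywhere, the non-smooth constrained problem can be handed to ordinary unconstrained optimizers, which is the transformation the abstract describes.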
The maximum rate of mammal evolution
Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.
2012-03-01
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, 1,000-, and 5,000-fold, respectively. Values for whales were down to half the length (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
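The reported minimum generation counts correspond to per-generation rates of change in log body mass; a back-of-the-envelope check (our illustration, not the authors' metric code):

```python
import math

def generations_needed(fold_change, rate_per_generation):
    """Generations to achieve a given fold change in body mass at a
    constant exponential rate (natural-log units per generation)."""
    return math.log(fold_change) / rate_per_generation

# If a 100-fold increase takes 1.6 million generations, the implied rate is
rate = math.log(100) / 1.6e6          # ~2.9e-6 ln-units per generation

# At that constant rate a 5,000-fold change would need only ~3.0 million
# generations, versus the 10 million reported: maximum sustained rates
# evidently fall as the size of the transition grows.
gens_5000 = generations_needed(5000, rate)
```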
OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Cooperation and Development, the OECD has developed an MRL Calculator.
Design of a wind turbine rotor for maximum aerodynamic efficiency
Johansen, Jeppe; Aagaard Madsen, Helge; Gaunaa, Mac;
2009-01-01
The design of a three-bladed wind turbine rotor is described, where the main focus has been the highest possible mechanical power coefficient, CP, at a single operational condition. Structural, as well as off-design, issues are not considered, leading to a purely theoretical design for investigating...... and a full three-dimensional Navier-Stokes solver. Excellent agreement is obtained using the three models. Global CP reaches a value of slightly above 0.51, while the global thrust coefficient CT is 0.87. The local power coefficient Cp increases to slightly above the Betz limit on the inner part of the rotor......; the local thrust coefficient Ct increases to a value above 1.1. This agrees well with the theory of de Vries, which states that, including the effect of the low pressure behind the centre of the rotor stemming from the increased rotation, both Cp and Ct will increase towards the root. Towards the tip, both...
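The Betz limit referenced for the local power coefficient follows from one-dimensional momentum (actuator disc) theory, Cp = 4a(1-a)^2, which peaks at 16/27 ≈ 0.593 for axial induction a = 1/3; a quick check of that standard result (textbook theory, not the paper's aerodynamic models):

```python
def cp_momentum(a):
    """Local power coefficient from 1-D momentum theory
    for axial induction factor a."""
    return 4 * a * (1 - a) ** 2

def ct_momentum(a):
    """Local thrust coefficient from the same theory."""
    return 4 * a * (1 - a)

betz = cp_momentum(1 / 3)   # = 16/27 ~ 0.593, the Betz limit
```

Local Cp above this value and Ct above 1, as reported near the root, therefore signal physics (root vortex rotation, per de Vries) beyond plain momentum theory.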
Efficient estimation of the maximum metabolic productivity of batch systems
St. John, Peter C.; Crowley, Michael F.; Bomble, Yannick J.
2017-01-31
Production of chemicals from engineered organisms in a batch culture involves an inherent trade-off between productivity, yield, and titer. Existing strategies for strain design typically focus on designing mutations that achieve the highest yield possible while maintaining growth viability. While these methods are computationally tractable, an optimum productivity could be achieved by a dynamic strategy in which the intracellular division of resources is permitted to change with time. New methods for the design and implementation of dynamic microbial processes, both computational and experimental, have therefore been explored to maximize productivity. However, solving for the optimal metabolic behavior under the assumption that all fluxes in the cell are free to vary is a challenging numerical task. Previous studies have therefore typically focused on simpler strategies that are more feasible to implement in practice, such as the time-dependent control of a single flux or control variable.
Jiang, Yanan; Liu, Nannan; Guo, Wei; Xia, Fan; Jiang, Lei
2012-09-19
Integrating biological components into artificial devices establishes an interface to understand and imitate the superior functionalities of the living systems. One challenge in developing biohybrid nanosystems mimicking the gating function of the biological ion channels is to enhance the gating efficiency of the man-made systems. Herein, we demonstrate a DNA supersandwich and ATP gated nanofluidic device that exhibits high ON-OFF ratios (up to 10^6) and a perfect electric seal at its closed state (~GΩ). The ON-OFF ratio is distinctly higher than existing chemically modified nanofluidic gating systems. The gigaohm seal is comparable with that required in ion channel electrophysiological recording and some lipid bilayer-coated nanopore sensors. The gating function is implemented by self-assembling DNA supersandwich structures into solid-state nanochannels (open-to-closed) and their disassembly through ATP-DNA binding interactions (closed-to-open). On the basis of the reversible and all-or-none electrochemical switching properties, we further achieve the IMPLICATION logic operations within the nanofluidic structures. The present biohybrid nanofluidic device translates molecular events into electrical signals and indicates a built-in signal amplification mechanism for future nanofluidic biosensing and modular DNA computing on solid-state substrates.
Maximum margin Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian
2012-03-01
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Drug discovery applies multidisciplinary approaches, either experimentally, computationally or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
Roberts, Emily D.
The Marcellus Shale has become an important unconventional gas reservoir in the oil and gas industry. Fractures within this organic-rich black shale serve as an important component of porosity and permeability useful in enhancing production. Horizontal drilling is the primary approach for extracting hydrocarbons in the Marcellus Shale. Typically, wells are drilled perpendicular to natural fractures in an attempt to intersect fractures for effective hydraulic stimulation. If the fractures are contained within the shale, then hydraulic fracturing can enhance permeability by further breaking the already weakened rock. However, natural fractures can affect hydraulic stimulations by absorbing and/or redirecting the energy away from the wellbore, causing a decreased efficiency in gas recovery, as has been the case for the Clearfield County, Pennsylvania study area. Estimating appropriate distances away from faults and fractures, which may limit hydrocarbon recovery, is essential to reducing the risk of injection fluid migration along these faults. In an attempt to mitigate the negative influences of natural fractures on hydrocarbon extraction within the Marcellus Shale, fractures were analyzed through the aid of both traditional and advanced seismic attributes including variance, curvature, ant tracking, and waveform model regression. Through the integration of well log interpretations and seismic data, a detailed assessment of structural discontinuities that may decrease the recovery efficiency of hydrocarbons was conducted. High-quality 3D seismic data in Central Pennsylvania show regional folds and thrusts above the major detachment interval of the Salina Salt. In addition to the regional detachment folds and thrusts, cross-regional, northwest-trending lineaments were mapped. These lineaments may pose a threat to hydrocarbon productivity and recovery efficiency due to faults and fractures acting as paths of least resistance for induced hydraulic stimulation fluids
李勇汇; 冉兵; 朱海昱
2012-01-01
A maximum efficiency control scheme for a solid oxide fuel cell (SOFC) distributed generator (DG) in the grid-connected condition is proposed. By introducing the steady-state equations which govern the complex electrochemical, thermodynamic and electrical processes of the SOFC DG, the relationship between the AC and DC sides of the SOFC DG is established. Analyses indicate that the control variables of the power conditioning unit are dependent on the control variables of the cell stack if the constant unity power factor operating scheme for the SOFC DG is chosen. However, the operating states of the SOFC DG under this control scheme must be subject to the operating constraints denoted as the feasible operating space (FOS). The non-linear programming method is then used to determine the maximum efficiency and the optimal control variables. Simulation results show that the SOFC DG at maximum efficiency should maintain three DC-side operating variables constant simultaneously, namely, the fuel utilization factor, the excess oxygen ratio and the stack operating temperature.
Maximum power operation of interacting molecular motors
Golubeva, Natalia; Imparato, Alberto
2013-01-01
We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.
CORA: Emission Line Fitting with Maximum Likelihood
Ness, Jan-Uwe; Wichmann, Rainer
2011-12-01
CORA analyzes emission line spectra with low count numbers and fits them to a line using the maximum likelihood technique. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise, the software derives the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. CORA has been applied to an X-ray spectrum with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory.
Quantum-dot Carnot engine at maximum power.
Esposito, Massimiliano; Kawai, Ryoichi; Lindenberg, Katja; Van den Broeck, Christian
2010-04-01
We evaluate the efficiency at maximum power of a quantum-dot Carnot heat engine. The universal values of the coefficients at the linear and quadratic order in the temperature gradient are reproduced. Curzon-Ahlborn efficiency is recovered in the limit of weak dissipation.
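The limits quoted in this abstract can be checked numerically. A minimal sketch (only the Curzon-Ahlborn value and its universal expansion; the quantum-dot engine model itself is not reproduced here):

```python
import math

def eta_curzon_ahlborn(t_cold, t_hot):
    """Efficiency at maximum power recovered in the weak-dissipation limit."""
    return 1.0 - math.sqrt(t_cold / t_hot)

eta_c = 0.2                         # Carnot efficiency for Th = 1.0, Tc = 0.8
eta_ca = eta_curzon_ahlborn(0.8, 1.0)
# Universal series: eta_c/2 at linear order plus eta_c^2/8 at quadratic order
approx = eta_c / 2 + eta_c**2 / 8
print(round(eta_ca, 6), round(approx, 6))  # the two agree to O(eta_c^3)
```

The residual between the exact Curzon-Ahlborn value and the two-term expansion is of cubic order in the Carnot efficiency, which is the universality the abstract refers to.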
Maximum Spectral Luminous Efficacy of White Light
Murphy, T W
2013-01-01
As lighting efficiency improves, it is useful to understand the theoretical limits to luminous efficacy for light that we perceive as white. Independent of the efficiency with which photons are generated, there exists a spectrally-imposed limit to the luminous efficacy of any source of photons. We find that, depending on the acceptable bandpass and, to a lesser extent, the color temperature of the light, the ideal white light source achieves a spectral luminous efficacy of 250-370 lm/W. This is consistent with previous calculations, but here we explore the maximum luminous efficacy as a function of photopic sensitivity threshold, color temperature, and color rendering index, deriving peak performance as a function of all three parameters. We also present example experimental spectra from a variety of light sources, quantifying the intrinsic efficacy of their spectral distributions.
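As a hedged illustration of the spectrally-imposed limit, the sketch below weighs a bandpass-truncated blackbody spectrum against a Gaussian stand-in for the photopic curve. The Gaussian width and the 400-700 nm bandpass are assumptions for this sketch, not the tabulated CIE data used in the paper:

```python
import numpy as np

wavelengths = np.linspace(400e-9, 700e-9, 1000)   # assumed visible bandpass, m

def planck(lam, temperature):
    """Blackbody spectral radiance, up to a constant factor that cancels."""
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    return 1.0 / (lam**5 * (np.exp(h * c / (lam * k * temperature)) - 1.0))

def luminous_efficacy(temperature):
    # Gaussian approximation of the photopic curve V(lambda), peak at 555 nm
    V = np.exp(-0.5 * ((wavelengths - 555e-9) / 45e-9) ** 2)
    B = planck(wavelengths, temperature)
    # 683 lm/W is the defined maximum efficacy at 555 nm; uniform grid, so
    # the wavelength step cancels in the ratio of sums
    return 683.0 * np.sum(V * B) / np.sum(B)

print(round(luminous_efficacy(5800.0)))  # typically a few hundred lm/W
```

Even this crude model lands in the few-hundred lm/W regime the abstract reports; the exact 250-370 lm/W range depends on the real photopic curve and the chosen bandpass.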
Greenslade, Thomas B., Jr.
1985-01-01
Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form as well as guidelines and recent data collected from the author provide background for duplicating Hope's experiments in the classroom. (JN)
Abolishing the maximum tension principle
Dabrowski, Mariusz P
2015-01-01
We present a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$, represented by the entropic force, can be abolished. Among them are the varying constants theories, some generalized entropy models applied both to cosmological and black hole horizons, as well as some generalized uncertainty principle models.
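For scale, the maximum-tension value itself is a one-line computation:

```python
# Order-of-magnitude check of the conjectured maximum force F_max = c^4 / (4G)
c = 2.998e8        # speed of light, m/s
G = 6.674e-11      # Newton's gravitational constant, m^3 kg^-1 s^-2
F_max = c**4 / (4 * G)
print(f"{F_max:.2e} N")   # roughly 3e43 N
```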
Abolishing the maximum tension principle
Mariusz P. Da̧browski
2015-09-01
We present a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$, represented by the entropic force, can be abolished. Among them are the varying constants theories, some generalized entropy models applied both to cosmological and black hole horizons, as well as some generalized uncertainty principle models.
A Note on k-Limited Maximum Base
Yang Ruishun; Yang Xiaowei
2006-01-01
The problem of the k-limited maximum base is specialized into two particular problems, obtained by letting the subset D of the k-limited maximum base problem be an independent set and a circuit of the matroid, respectively. It is proved that in these circumstances the collections of k-limited bases satisfy the base axioms. A new matroid is thereby determined, and the k-limited maximum base problem is transformed into the problem of finding a maximum base of this new matroid. For the two particular problems, two algorithms, in essence greedy algorithms based on the original matroid, are presented. They are proved to be correct and, in terms of algorithmic complexity, more efficient than the algorithm presented by Ma Zhongfan.
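The greedy-on-a-matroid strategy the abstract relies on can be sketched for the most familiar case, the graphic matroid, where independence means acyclicity. This illustrates the general principle, not the authors' k-limited algorithms:

```python
def find(parent, x):
    """Union-find root lookup with path halving."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def greedy_max_base(n, edges):
    """Greedy maximum-weight base of the graphic matroid: scan edges in
    decreasing weight, keeping an edge iff it stays independent, i.e.
    creates no cycle (a maximum-weight spanning tree)."""
    parent = list(range(n))
    base = []
    for w, u, v in sorted(edges, reverse=True):
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:          # independent: endpoints in different components
            parent[ru] = rv
            base.append((w, u, v))
    return base

edges = [(4, 0, 1), (3, 1, 2), (2, 0, 2), (5, 2, 3)]
tree = greedy_max_base(4, edges)
print(sum(w for w, _, _ in tree))  # → 12 (the edges of weight 5, 4 and 3)
```

The matroid exchange property is exactly what guarantees that this one-pass greedy choice is globally optimal, which is the fact the paper's algorithms exploit.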
Maximum Genus of Strong Embeddings
Er-ling Wei; Yan-pei Liu; Han Ren
2003-01-01
The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse is not true, but for 3-regular graphs the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given, and some graphs with this property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is presented. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.
Remizov, Ivan D
2009-01-01
In this note, we represent the subdifferential of the maximum functional defined on the space of all real-valued continuous functions on a given metric compact set. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity and Shephard's lemma, as well as duality theory in production and linear programming.
The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future, we suggest that PSHA modelers either be brutally honest about the uncertainty of M estimates or find a way to decrease their influence on the estimated hazard.
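A minimal sketch of the extreme-value reasoning, assuming, as an illustration rather than the authors' exact model, that exceedances of magnitude m follow a Poisson process with a Gutenberg-Richter rate:

```python
import math

def prob_max_below(m, a=4.0, b=1.0, years=50.0):
    """P(maximum magnitude over `years` <= m), assuming events exceeding m
    arrive as a Poisson process with annual rate 10**(a - b*m).
    The a, b values here are illustrative, not taken from the paper."""
    rate = 10.0 ** (a - b * m)          # Gutenberg-Richter exceedance rate
    return math.exp(-rate * years)      # no exceedance in the window

for m in (6.0, 7.0, 8.0):
    print(m, round(prob_max_below(m), 4))
```

The distribution of the future maximum thus depends very steeply on the assumed rate parameters, which is one way to see why testing a bare M estimate, reported without its own probability distribution, is so difficult.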
Alternative Multiview Maximum Entropy Discrimination.
Chao, Guoqing; Sun, Shiliang
2016-07-01
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, the multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we try to explore a more natural MVMED framework by assuming two separate distributions p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new MVMED framework alternative MVMED (AMVMED); it enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED because, compared with MVMED, which optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between the two views. We give the detailed solving procedure, which can be divided into two steps. The first step is solving our optimization problem without considering the equal margin posteriors from the two views; in the second step, we consider the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of AMVMED, and comparisons with MVMED are also reported.
Cacti with maximum Kirchhoff index
Wang, Wen-Rui; Pan, Xiang-Feng
2015-01-01
The concept of resistance distance was first proposed by Klein and Randić. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of the resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0 \leq t \leq \lfloor (n-1)/2 \rfloor$. In this paper, the maximum Kirchhoff index of cacti is characterized, as well...
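For a connected graph, the Kirchhoff index can be computed as n times the sum of reciprocals of the nonzero Laplacian eigenvalues, a standard identity equivalent to summing resistance distances. A short sketch:

```python
import numpy as np

def kirchhoff_index(adj):
    """Kf(G) = n * sum(1/lambda_i) over the nonzero Laplacian eigenvalues,
    valid for a connected graph."""
    adj = np.asarray(adj, dtype=float)
    n = adj.shape[0]
    laplacian = np.diag(adj.sum(axis=1)) - adj
    eig = np.linalg.eigvalsh(laplacian)   # sorted ascending; eig[0] ~ 0
    return n * float(np.sum(1.0 / eig[1:]))

# Triangle C3: each pair of vertices has effective resistance 2/3
# (edges of 1 and 2 ohms in parallel), so Kf = 3 pairs * 2/3 = 2
triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(round(kirchhoff_index(triangle), 6))  # → 2.0
```

The eigenvalue form makes extremal questions like the one in this paper tractable, since graph operations on cacti change the Laplacian spectrum in controlled ways.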
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus is on second-order moments of multiple measurement outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders.
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link between 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices are not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages via prices the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
Objects of maximum electromagnetic chirality
Fernandez-Corbaton, Ivan
2015-01-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
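The mutual-information term being maximized can be estimated from the joint histogram of responses and labels. A minimal plug-in sketch (binary labels assumed for brevity; the paper's entropy estimation is more elaborate):

```python
import numpy as np

def mutual_information(labels_true, labels_pred):
    """Plug-in estimate of I(Y; Y_hat) in nats from the joint histogram."""
    joint = np.zeros((2, 2))
    for y, p in zip(labels_true, labels_pred):
        joint[y, p] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)      # marginal of true labels
    py = joint.sum(axis=0, keepdims=True)      # marginal of responses
    nz = joint > 0                             # skip empty cells (0 log 0 = 0)
    return float(np.sum(joint[nz] * np.log(joint[nz] / (px @ py)[nz])))

# A perfect classifier attains I(Y; Y_hat) = H(Y) = ln 2 for balanced labels
y = [0, 0, 1, 1]
print(round(mutual_information(y, y), 4))  # → 0.6931
```

Maximizing this quantity during training pushes the response to retain as much information about the true label as possible, which is exactly the regularization intuition in the abstract.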
The strong maximum principle revisited
Pucci, Patrizia; Serrin, James
In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.
Maximum entropy production in daisyworld
Maunu, Haley A.; Knuth, Kevin H.
2012-05-01
Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.
Maximum stellar iron core mass
F W Giacobbe
2003-03-01
An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10^30 kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10^30 kg.
Maximum Matchings via Glauber Dynamics
Jindal, Anant; Pal, Manjish
2011-01-01
In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem till date runs in $O(m\sqrt{n})$ time due to Micali and Vazirani \cite{MV80}. Even for general bipartite graphs this is the best known running time (the algorithm of Karp and Hopcroft \cite{HK73} also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has been recently improved to $O(n \log n)$ by Goel, Kapralov and Khanna (STOC 2010) \cite{GKK10}. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \log^2 n)$ time, thereby obtaining a significant improvement over \cite{MV80}. We use a Markov chain similar to the \emph{hard-core model} for Glauber Dynamics with \emph{fugacity} parameter $\lambda$, which is used to sample independent sets in a graph from the Gibbs Distribution \cite{V99}, to design a faster algori...
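The sampling primitive behind such an approach, Glauber dynamics with fugacity λ on the matchings of a graph (the hard-core model on the line graph), can be sketched as follows. This illustrates the chain itself, not the paper's full O(m log^2 n) algorithm:

```python
import random

def glauber_matching(edges, fugacity=2.0, steps=20000, seed=1):
    """Run single-site Glauber dynamics over matchings: pick a random edge,
    then remove it with prob 1/(1+fugacity) if present, or add it with prob
    fugacity/(1+fugacity) if both endpoints are free. Returns the largest
    matching size visited by the chain."""
    random.seed(seed)
    matching, matched, best = set(), set(), 0
    for _ in range(steps):
        e = random.choice(edges)
        if e in matching:
            if random.random() < 1.0 / (1.0 + fugacity):
                matching.remove(e)
                matched.difference_update(e)
        elif e[0] not in matched and e[1] not in matched:
            if random.random() < fugacity / (1.0 + fugacity):
                matching.add(e)
                matched.update(e)
        best = max(best, len(matching))
    return best

# Path 0-1-2-3: the maximum matching has size 2 ({0-1, 2-3})
print(glauber_matching([(0, 1), (1, 2), (2, 3)]))
```

Larger fugacity biases the stationary distribution toward larger matchings; the algorithmic content of the paper is in showing how quickly such a chain mixes.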
2011-01-10
...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...
Vestige: Maximum likelihood phylogenetic footprinting
Maxwell Peter
2005-05-01
Background: Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary process. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results: Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process, Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified, illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion: Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational
Maximum super angle optimization method for array antenna pattern synthesis
Wu, Ji; Roederer, A. G
1991-01-01
Different optimization criteria related to antenna pattern synthesis are discussed. Based on the maximum criteria and vector space representation, a simple and efficient optimization method is presented for array and array fed reflector power pattern synthesis. A sector pattern synthesized by a 20...
Valéria Pacheco Batista Euclides
2007-09-01
The objective of this work was to evaluate live weight gain, pasture carrying capacity, and bioeconomic efficiency of Panicum maximum cultivar Tanzânia pastures receiving a second application of nitrogen fertilizer at the end of summer. Each year, 50, 17.48 and 33.2 kg ha-1 of N, P and K, respectively, were applied as topdressing in November; half of the area received an additional 50 kg ha-1 of N in March. The treatments were therefore tanzânia-grass pastures fertilized with 50 and 100 kg ha-1 of N. The paddocks were managed under rotational grazing, with four animals per paddock, and additional animals were added and removed to maintain similar post-grazing residues. Nitrogen fertilization had no effect on average daily gain. However, the pasture fertilized with 100 kg ha-1 of N (1.8 AU ha-1) supported a higher carrying capacity and higher productivity (780 kg ha-1 per year of live weight) than the one fertilized with 50 kg ha-1 of N (1.5 AU ha-1 and 690 kg ha-1 per year of live weight, on average). The efficiency of conversion of N into animal product was 1.8 kg of live weight per hectare for each additional kilogram of N applied. Nitrogen fertilization at the end of summer is a bioeconomically viable alternative for sustainable beef production.
CORA - emission line fitting with Maximum Likelihood
Ness, J.-U.; Wichmann, R.
2002-07-01
The advent of pipeline-processed data both from space- and ground-based observatories often dispenses with the need for full-fledged data reduction software with its associated steep learning curve. In many cases, a simple tool doing just one task, and doing it right, is all one wishes. In this spirit we introduce CORA, a line fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has successfully been used in several publications. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise we derive the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. As an example we demonstrate the functionality of the program with an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory and choose the analysis of the Ne IX triplet around 13.5 Å.
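A fixed-point update for a line flux can be sketched as follows, assuming for illustration a known Gaussian line profile on a known flat background (the CORA paper derives its own fixed-point equation; this is a generic Poisson maximum-likelihood analogue). The iteration leaves the flux unchanged exactly when the maximum-likelihood condition dL/dA = 0 holds:

```python
import numpy as np

# Hypothetical setup: a Gaussian line of unknown flux A on a known flat
# background b, observed as Poisson counts c_i in spectral channels.
channels = np.arange(40.0)
shape = np.exp(-0.5 * ((channels - 20.0) / 2.0) ** 2)  # known line profile
background = np.full_like(channels, 1.5)

true_flux = 12.0
counts = background + true_flux * shape  # noiseless Poisson means as "data"

# Setting dL/dA = 0 for the Poisson log-likelihood L = sum(c*log(m) - m)
# with m = b + A*s gives sum(c*s/m) = sum(s); iterate it as a fixed point.
flux = 1.0
for _ in range(500):
    model = background + flux * shape
    flux *= np.sum(counts * shape / model) / np.sum(shape)

print(round(flux, 3))  # converges to the true flux of 12.0
```

With noiseless data the iteration recovers the input flux exactly; with real Poisson counts it converges to the maximum-likelihood flux estimate instead.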
Finding maximum JPEG image block code size
Lakhani, Gopal
2012-07-01
We present a study of JPEG baseline coding. It aims to determine the minimum storage needed to buffer the JPEG Huffman code bits of 8-bit image blocks. Since DC is coded separately, and the encoder represents each AC coefficient by a run-length/level pair, the net problem is an efficient search for the optimal run-level pair sequence. We formulate it as a two-dimensional, nonlinear, integer programming problem and solve it using a branch-and-bound search method. We derive two types of constraints to prune the search space. The first is an upper bound on the sum of squares of the AC coefficients of a block, used to discard sequences that cannot represent valid DCT blocks. The second type of constraint is based on some interesting properties of the Huffman code table, and is used to prune sequences that cannot be part of optimal solutions. Our main result is that, if the default JPEG compression setting is used, a minimum of 346 bits and a maximum of 433 bits of buffer space is sufficient for the AC code bits of 8-bit image blocks. Our implementation also pruned the search space extremely well; the first constraint reduced the initial search space of 4 nodes down to less than 2 nodes, and the second set of constraints reduced it further by 97.8%.
Modified maximum likelihood registration based on information fusion
Yongqing Qi; Zhongliang Jing; Shiqiang Hu
2007-01-01
The bias estimation of passive sensors in a multi-platform multisensor tracking system is considered on the basis of information fusion. The unobservability problem of bearing-only tracking in the blind spot is analyzed. A modified maximum likelihood method, which uses the redundant information of the multi-sensor system to calculate the target position, is investigated to estimate the biases. Monte Carlo simulation results show that the modified method eliminates the effect of the unobservability problem in the blind spot and estimates the biases more rapidly and accurately than the standard maximum likelihood method. It is statistically efficient, since the standard deviation of the bias estimation errors meets the theoretical lower bound.
Parameter estimation in X-ray astronomy using maximum likelihood
Wachter, K.; Leach, R.; Kellogg, E.
1979-01-01
Methods of estimating parameter values and confidence regions by maximum likelihood and Fisher efficient scores, starting from Poisson probabilities, are developed for the nonlinear spectral functions commonly encountered in X-ray astronomy. It is argued that these methods offer significant advantages over the commonly used minimum chi-squared alternatives, because they rely on less pervasive statistical approximations and so may be expected to remain valid for data of poorer quality. Extensive numerical simulations of the maximum likelihood method are reported which verify that the best-fit parameter value and confidence region calculations are correct over a wide range of input spectra.
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented as a means of estimating the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to compute the iterative formula of the error-predicting filter, from which the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps the maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for estimating the receiver function in the time domain.
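The Toeplitz/Levinson step mentioned above can be illustrated with a minimal Levinson-Durbin recursion. This is a generic textbook implementation, not the authors' code, and the autocorrelation sequence in the example (an assumed AR(1) process) is purely illustrative.

```python
def levinson_durbin(r, order):
    """Levinson recursion for the Toeplitz normal equations of linear
    prediction: given autocorrelations r[0..order], return the
    prediction-error filter a, the reflection coefficients, and the
    final prediction-error power."""
    a = [1.0]
    err = r[0]
    refl = []
    for m in range(1, order + 1):
        acc = sum(a[i] * r[m - i] for i in range(m))
        k = -acc / err                      # reflection coefficient, |k| < 1
        refl.append(k)
        a = [1.0] + [a[i] + k * a[m - i] for i in range(1, m)] + [k]
        err *= (1.0 - k * k)                # error power shrinks each order
    return a, refl, err

# Autocorrelation of an assumed AR(1) process with coefficient 0.5
a, ks, err = levinson_durbin([1.0, 0.5, 0.25], order=2)
```

The reflection coefficients staying below 1 in magnitude is exactly the stability property the abstract appeals to.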
Liu, M.; Malek, K.; Adam, J. C.; Stockle, C. O.; Rajagopalan, K.; Nelson, R.
2014-12-01
As water is the primary resource limitation for cropping systems in the inland Pacific Northwest (PNW), water use efficiency impacts regional water availability, crop yields, and net carbon sequestration. Furthermore, nitrogen (N) use efficiency affects the cost of farming and the total N flux to the environment (including leaching to aquatic ecosystems and greenhouse gas emissions to the atmosphere). Climate change affects water and nitrogen use efficiencies through the combined effects of warming (reducing snowpack water storage, increasing ET, advancing leaf-on, and shortening or lengthening the plant growth season), CO2 fertilization (increasing net primary productivity and leaf-level water and energy use efficiencies for C3 crops), and extreme climate events (drought and flood). Cropland conservation management (rotation, tillage, irrigation, and fertilization) is widely practiced in this region to maintain the high productivity of agricultural lands. To reduce vulnerability to weather extremes and long-term climate change, management regimes will likely need to be adapted to a changing environment. Here, we applied the coupled macro-scale hydrologic and crop growth model (VIC-CropSyst) to study how climate change in the 21st century will change water and nitrogen use efficiencies over the PNW. Simulation experiments with different combinations of management options and climate scenarios are used to attribute the effects of climate factors and management options on long-term trends and fluctuations in water and nitrogen use efficiency. Preliminary simulation results indicate a trend of decreasing water and nitrogen use efficiency over the inner PNW domain during the 21st century because of increasing ET, a seasonal shift in water availability, and the intensification of extreme climate events. Effective management practices, including no-till and conservation tillage and optimized irrigation, can eliminate the decrease or even increase water
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One technique used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current of maximum power. These quantities are found by maximizing the expression for power using differentiation. After the maximum values are found for each time of day, each individual quantity (voltage of maximum power, current of maximum power, and maximum power) is plotted as a function of the time of day.
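The calculation described, maximising P = V·I, can be sketched for a single operating condition. The ideal single-diode model and all parameter values below are assumptions chosen only to make the example concrete; a real panel would be characterised by its measured V-I curve.

```python
import math

def panel_current(v, i_ph=5.0, i_0=1e-9, n_vt=0.7):
    """Ideal single-diode model (assumed parameters, no series/shunt R):
    I(V) = I_ph - I_0 * (exp(V / (n*Vt)) - 1)."""
    return i_ph - i_0 * (math.exp(v / n_vt) - 1.0)

def maximum_power_point(v_oc, steps=100000):
    """Locate the voltage maximising P(V) = V * I(V) on a fine grid,
    i.e. the point where dP/dV = 0."""
    best_v, best_p = 0.0, 0.0
    for i in range(steps + 1):
        v = v_oc * i / steps
        p = v * panel_current(v)
        if p > best_p:
            best_v, best_p = v, p
    return best_v, best_p

# Open-circuit voltage solves I(V_oc) = 0 for the assumed parameters
v_oc = 0.7 * math.log(5.0 / 1e-9 + 1.0)
v_mp, p_mp = maximum_power_point(v_oc)
```

Repeating this for the measured curve at each time of day yields the plotted voltage of maximum power, current of maximum power, and maximum power.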
Parasitoids often are selected for use as biological control agents because of their high host specificity, yet such host specificity can result in strong interspecific competition. However, few studies have examined if and how various extrinsic factors (such as parasitism efficiency) influence the ...
Ozone production efficiency (OPE) can be defined as the number of ozone (O3) molecules photochemically produced by a molecule of NOx (NO + NO2) before it is lost from the NOx - O3 cycle. Here, we consider observational and modeling techniques to evaluate various operational defi...
Penalized maximum likelihood estimation and variable selection in geostatistics
Chu, Tingjin; Wang, Haonan; 10.1214/11-AOS919
2012-01-01
We consider the problem of selecting covariates in spatial linear models with Gaussian process errors. Penalized maximum likelihood estimation (PMLE), which enables simultaneous variable selection and parameter estimation, is developed and, for ease of computation, PMLE is approximated by one-step sparse estimation (OSE). To further improve computational efficiency, particularly with large sample sizes, we propose penalized maximum covariance-tapered likelihood estimation (PMLE$_{\mathrm{T}}$) and its one-step sparse estimation (OSE$_{\mathrm{T}}$). General forms of penalty functions, with an emphasis on smoothly clipped absolute deviation, are used for penalized maximum likelihood. Theoretical properties of PMLE and OSE, as well as their approximations PMLE$_{\mathrm{T}}$ and OSE$_{\mathrm{T}}$ using covariance tapering, are derived, including consistency, sparsity, asymptotic normality and the oracle properties. For covariance tapering, a by-product of our theoretical results is consistency and asymptotic normality...
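The smoothly clipped absolute deviation (SCAD) penalty emphasised above has a standard closed form in the variable-selection literature (due to Fan and Li); a sketch with the conventional shape constant a = 3.7:

```python
def scad_penalty(theta, lam, a=3.7):
    """SCAD penalty: L1-like near zero, quadratic transition, then flat
    (so large coefficients are not shrunk, giving near-unbiasedness)."""
    t = abs(theta)
    if t <= lam:
        return lam * t
    if t <= a * lam:
        return (2.0 * a * lam * t - t * t - lam * lam) / (2.0 * (a - 1.0))
    return lam * lam * (a + 1.0) / 2.0

def scad_penalty_deriv(theta, lam, a=3.7):
    """Piecewise derivative: constant lam near zero, linearly decaying,
    then exactly zero for large |theta|."""
    t = abs(theta)
    if t <= lam:
        return lam
    if t < a * lam:
        return (a * lam - t) / (a - 1.0)
    return 0.0
```

The flat tail is what distinguishes SCAD from the lasso and underlies the oracle properties cited in the abstract.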
The subsequence weight distribution of summed maximum length digital sequences
Weathers, G. D.; Graf, E. R.; Wallace, G. R.
1974-01-01
An attempt is made to develop mathematical formulas to provide the basis for the design of pseudorandom signals intended for applications requiring accurate knowledge of the statistics of the signals. The analysis approach involves calculating the first five central moments of the weight distribution of subsequences of hybrid-sum sequences. The hybrid-sum sequence is formed from the modulo-two sum of k maximum length sequences and is an extension of the sum sequences formed from two maximum length sequences that Gilson (1966) evaluated. The weight distribution of the subsequences serves as an approximation to the filtering process. The basic reason for the analysis of hybrid-sum sequences is to establish a large group of sequences with good statistical properties. It is shown that this can be accomplished much more efficiently using the hybrid-sum approach rather than forming the group strictly from maximum length sequences.
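A hybrid-sum sequence of the kind analysed above can be built from maximum length sequences generated by linear feedback shift registers. This is a generic sketch, not the paper's formulas; the tap sets below are assumed to realise the two primitive polynomials of degree 3 (checked via the period-7, balanced output), and the weight distribution of subsequences is what the paper characterises through its central moments.

```python
def lfsr_sequence(taps, state, length):
    """Fibonacci LFSR: output the last register bit, feed back the XOR of
    the tapped bits.  With taps matching a primitive polynomial, the
    output is a maximum length (m-) sequence of period 2**len(state)-1."""
    reg, out = list(state), []
    for _ in range(length):
        out.append(reg[-1])
        fb = 0
        for t in taps:
            fb ^= reg[t]
        reg = [fb] + reg[:-1]
    return out

# Two degree-3 m-sequences and their modulo-two (hybrid) sum
m1 = lfsr_sequence([0, 2], [1, 0, 0], 14)
m2 = lfsr_sequence([1, 2], [1, 0, 0], 14)
hybrid = [a ^ b for a, b in zip(m1, m2)]

def subsequence_weights(seq, window):
    """Weights (counts of ones) of all cyclic subsequences of a given
    length; their distribution approximates the filtering process."""
    n = len(seq)
    return [sum(seq[(i + j) % n] for j in range(window)) for i in range(n)]
```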
Yun, Anthony J; Lee, Patrick Y; Doux, John D
2006-01-01
A network constitutes an abstract description of the relationships among entities, respectively termed links and nodes. If a power law describes the probability distribution of the number of links per node, the network is said to be scale-free. Scale-free networks feature link clustering around certain hubs based on preferential attachments that emerge through either merit or legacy. Biologic systems ranging from the subatomic scale to ecosystems represent scale-free networks in which energy efficiency forms the basis of preferential attachments. This paradigm engenders a novel scale-free network theory of evolution based on energy efficiency. As environmental flux induces fitness dislocations and compels a new meritocracy, new merit-based hubs emerge, previously merit-based hubs become legacy hubs, and network recalibration occurs to achieve system optimization. To date, Darwinian evolution, characterized by innovation sampling, variation, and selection through filtered termination, has enabled biologic progress through the optimization of energy efficiency. However, as humans remodel their environment, increasing the level of unanticipated fitness dislocations and inducing evolutionary stress, the tendency of networks to exhibit inertia and retain legacy hubs engenders maladaptations. Many modern diseases may fundamentally derive from these evolutionary displacements. Death itself may constitute a programmed adaptation, terminating individuals who represent legacy hubs and recalibrating the network. As memes replace genes as the basis of innovation, death itself has become a legacy hub. Post-Darwinian evolution may favor indefinite persistence to optimize energy efficiency. We describe strategies to reprogram or decommission legacy hubs that participate in human disease and death.
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine the maximum permissible voltage of three kinds of tape. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I_c degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • We examine the relationship between maximum permissible voltage, resistance and temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important steps in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (I_c) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I_c degradation or burnout occurs. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on the results for these samples, the whole length of the CCs used in the design of an SFCL can be determined.
MAXIMUM POWER POINT TRACKING SYSTEM FOR PHOTOVOLTAIC STATION: A REVIEW
I. Elzein
2015-01-01
In recent years there has been growing attention to the use of renewable energy sources. Among them, solar energy is one of the most promising green energy resources due to its environmental sustainability and inexhaustibility. However, photovoltaic (PhV) systems suffer from high equipment cost and low efficiency. Moreover, the solar cell V-I characteristic is nonlinear and varies with irradiation and temperature. In general, there is a unique point of PhV operation, called the Maximum Power Point (MPP), at which the PV system operates with maximum efficiency and produces its maximum output power. The location of the MPP is not known in advance, but it can be found, either through calculation models or by search algorithms. MPPT techniques are therefore important for maintaining the PV array's high efficiency. Many different techniques for MPPT are discussed. This review paper will hopefully serve as a convenient tool for future work in PhV power conversion.
Bao-Rong Lu; Xingxing Cai; Xin Jin
2009-01-01
An efficient molecular method for the accurate identification of indica and japonica rice was created based on the polymorphisms of insertion/deletion (InDel) DNA fragments obtained by applying the basic local alignment search tool (BLAST) to the entire genomic sequences of indica (93-11) and japonica (Nipponbare) rice. The 45 InDel loci were validated experimentally by the polymerase chain reaction (PCR) and polyacrylamide gel electrophoresis (PAGE) in 44 typical indica and japonica rice varieties, including 93-11 and Nipponbare. A neutrality test of the data matrix generated from the electrophoretic banding patterns of the InDel loci indicated that 34 loci were strongly associated with the differentiation of indica and japonica rice. More extensive analyses involving cultivated rice varieties from 11 Asian countries, and 12 wild Oryza species of various origins, confirmed that indica and japonica characteristics could be determined accurately by calculating the average frequency of indica- or japonica-specific alleles at the InDel loci across the rice genome. This method, named the "InDel molecular index", combines molecular and statistical methods to determine the indica and japonica characteristics of rice varieties. Compared with traditional methods based essentially on morphology, the InDel molecular index provides a very accurate, rapid, simple, and efficient means of identifying indica and japonica rice. In addition, the InDel index can be used to determine the indica or japonica characteristics of wild Oryza species, which greatly extends the utility of the method. The InDel molecular index provides a new tool for the effective selection of appropriate indica or japonica rice germplasm in rice breeding. It also offers a novel model for the study of the origin, evolution, and genetic differentiation of indica and japonica rice adapted to various environmental changes.
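The index computation described, averaging the frequency of indica-specific alleles across InDel loci, reduces to a one-line calculation. The locus names and calls below are hypothetical; the paper's 34 diagnostic loci and its classification thresholds are not reproduced here.

```python
def indel_index(calls):
    """InDel molecular index: the average frequency of indica-specific
    alleles over the scored InDel loci.  `calls` maps a locus name to
    'I' (indica-specific allele) or 'J' (japonica-specific allele)."""
    values = list(calls.values())
    return sum(1 for c in values if c == 'I') / len(values)

# Hypothetical variety scored at four loci; 0.75 suggests indica-type
variety = {'InDel01': 'I', 'InDel07': 'I', 'InDel12': 'J', 'InDel23': 'I'}
index = indel_index(variety)
```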
Generalised maximum entropy and heterogeneous technologies
Oude Lansink, A.G.J.M.
1999-01-01
Generalised maximum entropy methods are used to estimate a dual model of production on panel data of Dutch cash crop farms over the period 1970-1992. The generalised maximum entropy approach allows a coherent system of input demand and output supply equations to be estimated for each farm in the sample.
20 CFR 229.48 - Family maximum.
2010-04-01
... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation contributed by a disc is on average 63% of the observed maximum rotation. This criterion cannot, however, be applied to small or low surface brightness (LSB) galaxies, because such systems show, in general, a continuously
N. Ranjbar
2016-09-01
Knowledge of a species' habitat needs is one of the requirements of wildlife management. We studied seasonal habitat suitability and habitat associations of the wild goat (Capra aegagrus) in Kolah-Qazi National Park, one of its typical habitats in central Asia, using the Maximum Entropy approach. The study area was confined to mountainous areas as the potential habitat of the wild goat. Elevation, distance to water sources, distance to human settlements, and distance to guard patrol roads were recognised as the most important variables determining the habitat suitability of the species. The extent of suitable habitat was greatest in spring (3882.25 ha) and least in summer (1362.5 ha). The AUC values of MaxEnt revealed acceptable to good efficiency (AUC ≥ 0.7). The results may have implications for the conservation of the wild goat in similar habitats across its distribution range.
Duality of Maximum Entropy and Minimum Divergence
Shinto Eguchi
2014-06-01
We discuss a special class of generalized divergence measures defined by means of generator functions. Any divergence measure in the class separates into the difference between a cross entropy and a diagonal entropy. The diagonal entropy measure in the class is associated with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization for an arbitrarily given statistical model. The dualistic relationship between the maximum entropy model and minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized as totally geodesic with respect to the linear connection associated with the divergence. A natural extension of the classical theory of the maximum likelihood method under the maximum entropy model, in terms of the Boltzmann-Gibbs-Shannon entropy, is given. We discuss the duality in detail for the Tsallis entropy as a typical example.
Turner, Eve; Hawkins, Peter
2016-01-01
.... This article presents the results and implications of an international study which explored its use in executive and business coaching, with the aim of sharing best practice and achieving maximum...
Hierarchical Maximum Margin Learning for Multi-Class Classification
Yang, Jian-Bo
2012-01-01
With myriads of classes, designing accurate and efficient classifiers becomes very challenging for multi-class classification. Recent research has shown that class structure learning can greatly facilitate multi-class learning. In this paper, we propose a novel method to learn the class structure for multi-class classification problems. The class structure is assumed to be a binary hierarchical tree. To learn such a tree, we propose a maximum separating margin method to determine the child nodes of any internal node. The proposed method ensures that the two class groups represented by any two sibling nodes are maximally separable. In the experiments, we evaluate the accuracy and efficiency of the proposed method against other multi-class classification methods on real-world large-scale problems. The results show that the proposed method outperforms benchmark methods in terms of accuracy for most datasets and performs comparably with other class structure learning methods in terms of efficiency for all datasets.
Proffitt, Charles R.; Lennon, Daniel J.; Langer, Norbert; Brott, Ines
2016-06-01
Spectra from the Hubble Space Telescope Cosmic Origins Spectrograph and the Space Telescope Imaging Spectrograph covering the B III resonance line have been obtained for 10 early-B stars near the turnoff of the young Galactic open cluster NGC 3293. This is the first sample of boron abundance determinations in a single, clearly defined population of early-B stars that also covers a substantial range of projected rotational velocities. In most of these stars we detect partial depletion of boron at a level consistent with that expected for rotational mixing in single stars, but inconsistent with expectations for depletion from close binary evolution. However, our results do suggest that the efficiency of rotational mixing is at or slightly below the low end of the range predicted by the available theoretical calculations. The two most luminous targets observed have a very large boron depletion and may be the products of either binary interactions or post-main-sequence evolution. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with proposal GO-12520.
Knauer, Jürgen; Zaehle, Sönke; Reichstein, Markus; Medlyn, Belinda E; Forkel, Matthias; Hagemann, Stefan; Werner, Christiane
2017-03-01
Ecosystem water-use efficiency (WUE) is an important metric linking the global land carbon and water cycles. Eddy covariance-based estimates of WUE in temperate/boreal forests have recently been found to show a strong and unexpected increase over the 1992-2010 period, which has been attributed to the effects of rising atmospheric CO2 concentrations on plant physiology. To test this hypothesis, we forced the observed trend in the process-based land surface model JSBACH by increasing the sensitivity of stomatal conductance (gs ) to atmospheric CO2 concentration. We compared the simulated continental discharge, evapotranspiration (ET), and the seasonal CO2 exchange with observations across the extratropical northern hemisphere. The increased simulated WUE led to substantial changes in surface hydrology at the continental scale, including a significant decrease in ET and a significant increase in continental runoff, both of which are inconsistent with large-scale observations. The simulated seasonal amplitude of atmospheric CO2 decreased over time, in contrast to the observed upward trend across ground-based measurement sites. Our results provide strong indications that the recent, large-scale WUE trend is considerably smaller than that estimated for these forest ecosystems. They emphasize the decreasing CO2 sensitivity of WUE with increasing scale, which affects the physiological interpretation of changes in ecosystem WUE.
Cano, F Javier; López, Rosana; Warren, Charles R
2014-11-01
Water stress (WS) slows growth and photosynthesis (A_n), but most knowledge comes from short-term studies that do not account for the longer-term acclimation processes that are especially relevant in tree species. Using two Eucalyptus species that contrast in drought tolerance, we induced moderate and severe water deficits by withholding water until stomatal conductance (g_sw) decreased to two pre-defined values over 24 d; WS was then maintained at the target g_sw for 29 d, after which plants were re-watered. Additionally, we developed new equations to simulate the effect on mesophyll conductance (g_m) of accounting for the resistance to refixation of CO2. The diffusive limitations to CO2, dominated by the stomata, were the most important constraints on A_n. Full recovery of A_n was reached after re-watering, characterized by quick recovery of g_m and an even higher biochemical capacity, in contrast to the slower recovery of g_sw. Acclimation to long-term WS led to decreased mesophyll and biochemical limitations, in contrast to studies in which stress was imposed more rapidly. Finally, we provide evidence that higher g_m under WS contributes to higher intrinsic water-use efficiency (iWUE) and reduces leaf oxidative stress, highlighting the importance of g_m as a target for breeding/genetic engineering.
The Betz-Joukowsky limit for the maximum power coefficient of wind turbines
Okulov, Valery; van Kuik, G.A.M.
2009-01-01
The article addresses the history of an important scientific result in wind energy. The maximum efficiency of an ideal wind turbine rotor is well known as the 'Betz limit', named after the German scientist who formulated this maximum in 1920. Also Lanchester, a British scientist, is associated...
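The Betz-Joukowsky limit itself follows from one-dimensional momentum theory: the power coefficient of an ideal actuator disc is Cp(a) = 4a(1-a)^2 in terms of the axial induction factor a, maximised at a = 1/3. A numerical check of this standard result:

```python
def power_coefficient(a):
    """Power coefficient of an ideal actuator disc versus the axial
    induction factor a (1-D momentum theory): Cp = 4a(1-a)^2."""
    return 4.0 * a * (1.0 - a) ** 2

# dCp/da = 4(1 - a)(1 - 3a) = 0  =>  a = 1/3,  Cp_max = 16/27 ~ 0.593
best_a = max((i / 10000 for i in range(10001)), key=power_coefficient)
cp_max = power_coefficient(best_a)
```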
A dual method for maximum entropy restoration
Smith, C. B.
1979-01-01
A simple iterative dual algorithm for maximum entropy image restoration is presented. The dual algorithm involves fewer parameters than conventional minimization in the image space. Minicomputer test results for Fourier synthesis with inadequate phantom data are given.
Maximum Throughput in Multiple-Antenna Systems
Zamani, Mahdi
2012-01-01
The point-to-point multiple-antenna channel is investigated in uncorrelated block fading environment with Rayleigh distribution. The maximum throughput and maximum expected-rate of this channel are derived under the assumption that the transmitter is oblivious to the channel state information (CSI), however, the receiver has perfect CSI. First, we prove that in multiple-input single-output (MISO) channels, the optimum transmission strategy maximizing the throughput is to use all available antennas and perform equal power allocation with uncorrelated signals. Furthermore, to increase the expected-rate, multi-layer coding is applied. Analogously, we establish that sending uncorrelated signals and performing equal power allocation across all available antennas at each layer is optimum. A closed form expression for the maximum continuous-layer expected-rate of MISO channels is also obtained. Moreover, we investigate multiple-input multiple-output (MIMO) channels, and formulate the maximum throughput in the asympt...
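The throughput of the equal-power MISO strategy shown optimal above can be estimated by Monte Carlo simulation. This is an illustrative sketch, not the paper's derivation: the SNR value, trial count, and seed are arbitrary choices, and the fading model is the i.i.d. Rayleigh block-fading channel stated in the abstract.

```python
import math
import random

def miso_throughput(m_antennas, snr, trials=20000, seed=1):
    """Monte Carlo estimate of E[log2(1 + (snr/M) * ||h||^2)] for a MISO
    Rayleigh block-fading channel with equal power split over M transmit
    antennas and uncorrelated signals."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # ||h||^2 for M unit-variance complex Gaussian channel entries
        h2 = sum(rng.gauss(0, math.sqrt(0.5)) ** 2 +
                 rng.gauss(0, math.sqrt(0.5)) ** 2 for _ in range(m_antennas))
        total += math.log2(1.0 + snr / m_antennas * h2)
    return total / trials

c1 = miso_throughput(1, snr=10.0)
c4 = miso_throughput(4, snr=10.0)   # channel hardening raises the average rate
```

Because the mean received SNR is the same for any M, Jensen's inequality implies the throughput grows toward log2(1 + snr) as more antennas reduce the fading variability.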
Photoemission spectromicroscopy with MAXIMUM at Wisconsin
Ng, W.; Ray-Chaudhuri, A.K.; Cole, R.K.; Wallace, J.; Crossley, S.; Crossley, D.; Chen, G.; Green, M.; Guo, J.; Hansen, R.W.C.; Cerrina, F.; Margaritondo, G. (Dept. of Electrical Engineering, Dept. of Physics and Synchrotron Radiation Center, Univ. of Wisconsin, Madison (USA)); Underwood, J.H.; Korthright, J.; Perera, R.C.C. (Center for X-ray Optics, Accelerator and Fusion Research Div., Lawrence Berkeley Lab., CA (USA))
1990-06-01
We describe the development of the scanning photoemission spectromicroscope MAXIMUM at the Wisconsin Synchrotron Radiation Center, which uses radiation from a 30-period undulator. The article includes a discussion of the first tests after the initial commissioning.
Maximum-likelihood method in quantum estimation
Paris, M G A; Sacchi, M F
2001-01-01
The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of density matrix of spin and radiation as well as to the determination of several parameters of interest in quantum optics.
An improved maximum power point tracking method for photovoltaic systems
Tafticht, T.; Agbossou, K.; Doumbia, M.L.; Cheriti, A. [Institut de recherche sur l' hydrogene, Departement de genie electrique et genie informatique, Universite du Quebec a Trois-Rivieres, C.P. 500, Trois-Rivieres (QC) (Canada)
2008-07-15
In most of the maximum power point tracking (MPPT) methods currently described in the literature, the optimal operating point of the photovoltaic (PV) system is estimated by linear approximations. However, these approximations can lead to less than optimal operating conditions and hence considerably reduce the performance of the PV system. This paper proposes a new approach to determining the maximum power point (MPP) based on measurements of the open-circuit voltage of the PV modules; a nonlinear expression for the optimal operating voltage is developed from this open-circuit voltage. The approach is thus a combination of the nonlinear and the perturbation and observation (P and O) methods. The experimental results show that the approach clearly improves the tracking efficiency of the maximum power available at the output of the PV modules. The new method reduces the oscillations around the MPP and increases the average efficiency of the MPPT. The new MPPT method will deliver more power to any generic load or energy storage medium.
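The perturbation and observation component of the combined method can be sketched as a minimal hill-climbing loop. The quadratic power curve, step size, and starting voltage below are assumptions purely for illustration and do not model a real PV module or the paper's nonlinear expression.

```python
def perturb_and_observe(measure_power, v_start, v_step=0.05, iters=200):
    """Minimal perturb-and-observe MPPT loop: keep stepping the operating
    voltage in the direction that last increased the measured power,
    reversing direction whenever the power decreases."""
    v = v_start
    p_prev = measure_power(v)
    direction = 1.0
    for _ in range(iters):
        v += direction * v_step
        p = measure_power(v)
        if p < p_prev:
            direction = -direction
        p_prev = p
    return v

def pv_power(v):
    """Hypothetical PV power curve with its maximum (60 W) at 17 V."""
    return max(0.0, 60.0 - 0.5 * (v - 17.0) ** 2)

v_mp = perturb_and_observe(pv_power, v_start=12.0)
```

The residual oscillation around the MPP, here one step either side of 17 V, is exactly what the paper's hybrid approach seeks to reduce.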
Maximum Work of Free-Piston Stirling Engine Generators
Kojima, Shinji
2017-04-01
Using the method of adjoint equations described in Ref. [1], we have calculated the maximum thermal efficiencies that are theoretically attainable by free-piston Stirling and Carnot engine generators by considering the work loss due to friction and Joule heat. The net work done by the Carnot cycle is negative even when the duration of heat addition is optimized to give the maximum amount of heat addition, which is the same situation for the Brayton cycle described in our previous paper. For the Stirling cycle, the net work done is positive, and the thermal efficiency is greater than that of the Otto cycle described in our previous paper by a factor of about 2.7-1.4 for compression ratios of 5-30. The Stirling cycle is much better than the Otto, Brayton, and Carnot cycles. We have found that the optimized piston trajectories of the isothermal, isobaric, and adiabatic processes are the same when the compression ratio and the maximum volume of the same working fluid of the three processes are the same, which has facilitated the present analysis because the optimized piston trajectories of the Carnot and Stirling cycles are the same as those of the Brayton and Otto cycles, respectively.
A viable method for goodness-of-fit test in maximum likelihood fit
ZHANG Feng; GAO Yuan-Ning; HUO Lei
2011-01-01
A test statistic is proposed to perform the goodness-of-fit test in the unbinned maximum likelihood fit. Without using a detailed expression of the efficiency function, the test statistic is found to be strongly correlated with the maximum likelihood function if the efficiency function varies smoothly. We point out that the correlation coefficient can be estimated by the Monte Carlo technique. With the established method, two examples are given to illustrate the performance of the test statistic.
The maximum entropy technique. System's statistical description
Belashev, B Z
2002-01-01
The maximum entropy technique (MENT) is applied to find the distribution functions of physical quantities. MENT naturally takes into account the maximum entropy requirement, the characteristics of the system, and the constraint conditions. This makes it possible to apply MENT to the statistical description of both closed and open systems. Examples are considered in which MENT has been used to describe equilibrium and nonequilibrium states, as well as states far from thermodynamic equilibrium.
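A minimal MENT-style example: the maximum entropy distribution on a finite set of values subject to a prescribed mean has the Gibbs form p_i proportional to exp(-beta*x_i), with beta fixed by the constraint. The loaded-die numbers below are the standard textbook illustration, not taken from the paper.

```python
import math

def maxent_distribution(values, target_mean, tol=1e-10):
    """Maximum entropy distribution on a finite set subject to a fixed
    mean: solve for beta in p_i = exp(-beta*x_i)/Z by bisection."""
    x0 = min(values)                      # shift exponents for stability
    def mean_for(beta):
        w = [math.exp(-beta * (x - x0)) for x in values]
        z = sum(w)
        return sum(wi * x for wi, x in zip(w, values)) / z
    lo, hi = -20.0, 20.0                  # mean_for is decreasing in beta
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) > target_mean:
            lo = mid
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    w = [math.exp(-beta * (x - x0)) for x in values]
    z = sum(w)
    return [wi / z for wi in w]

# Die faces with the mean constrained to 4.5 instead of the fair 3.5
p = maxent_distribution([1, 2, 3, 4, 5, 6], 4.5)
```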
19 CFR 114.23 - Maximum period.
2010-04-01
... 19 Customs Duties 1 2010-04-01 2010-04-01 false Maximum period. 114.23 Section 114.23 Customs... CARNETS Processing of Carnets § 114.23 Maximum period. (a) A.T.A. carnet. No A.T.A. carnet with a period of validity exceeding 1 year from date of issue shall be accepted. This period of validity cannot be...
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors are proposed for use in maximum-likelihood sequence detection of symbols, from an alphabet of size M, transmitted by uncoded, full-response continuous phase modulation over a radio channel with additive white Gaussian noise. The structures of the receivers are derived from a particular interpretation of the maximum-likelihood metrics. The receivers include front ends, whose structure depends only on M, analogous to those in receivers of coherent CPM. The parts of the receivers following the front ends have structures whose complexity depends on N.
Carter, Holly; Drury, John; Amlôt, Richard; Rubin, G James; Williams, Richard
2014-01-01
The risk of incidents involving mass decontamination in response to a chemical, biological, radiological, or nuclear release has increased in recent years, due to technological advances, and the willingness of terrorists to use unconventional weapons. Planning for such incidents has focused on the technical issues involved, rather than on psychosocial concerns. This paper presents a novel experimental study, examining the effect of three different responder communication strategies on public experiences and behaviour during a mass decontamination field experiment. Specifically, the research examined the impact of social identity processes on the relationship between effective responder communication, and relevant outcome variables (e.g. public compliance, public anxiety, and co-operative public behaviour). All participants (n = 111) were asked to visualise that they had been involved in an incident involving mass decontamination, before undergoing the decontamination process, and receiving one of three different communication strategies: 1) 'Theory-based communication': Health-focused explanations about decontamination, and sufficient practical information; 2) 'Standard practice communication': No health-focused explanations about decontamination, sufficient practical information; 3) 'Brief communication': No health-focused explanations about decontamination, insufficient practical information. Four types of data were collected: timings of the decontamination process; observational data; and quantitative and qualitative self-report data. The communication strategy which resulted in the most efficient progression of participants through the decontamination process, as well as the fewest observations of non-compliance and confusion, was that which included both health-focused explanations about decontamination and sufficient practical information. Further, this strategy resulted in increased perceptions of responder legitimacy and increased identification with
Fernández-Balbuena, Sonia; Hoyos, Juan; Rosales-Statkus, María Elena; Nardone, Anthony; Vallejo, Fernando; Ruiz, Mónica; Sánchez, Romina; Belza, María José; Indave, Blanca Iciar; Gutiérrez, Jorge; Álvarez, Jorge; Sordo, Luis
2016-01-01
Sexually transmitted infections (STIs) are recognized as one of the conditions in which HIV testing is most clearly indicated. We analyse whether people diagnosed with an STI are being tested for HIV according to the experience of participants in an outreach rapid testing programme in Spain. Between 2008 and 2010, 6293 individuals underwent rapid testing and completed a self-administered questionnaire. We calculated the percentage of individuals that were diagnosed with an STI in the last 5 years and identified the setting where the last episode occurred. We then determined the percentage not receiving an HIV test after the last STI diagnosis and estimated the associated factors. Overall, 17.3% (N = 959) of participants reported an STI diagnosis in the last 5 years, of which 81.5% occurred in general medical settings. Sixty-one percent reported not undergoing HIV testing after their last STI diagnosis, 2.2% of whom reported they had refused the test. Not receiving an HIV test after the last STI diagnosis was independently associated with not being a man who has sex with men (MSM), having had fewer sexual partners, being diagnosed in general medical settings and having received a diagnosis other than syphilis. An unacceptably large percentage of people diagnosed with STI are not being tested for HIV because healthcare providers frequently fail to offer the test. Offering routine HIV testing at general medical settings, regardless of the type of STI diagnosed and population group, should be a high priority and is probably a more efficient strategy than universal screening in general healthcare settings.
Godman, Brian
2014-06-01
Introduction: The appreciable growth in pharmaceutical expenditure has resulted in multiple initiatives across Europe to lower generic prices and enhance their utilisation. However, there is considerable variation in their use and prices. Objective: Assess the influence of multiple supply and demand-side initiatives across Europe for established medicines to enhance prescribing efficiency before a decision to prescribe a particular medicine, and subsequently utilise the findings to suggest potential future initiatives that countries could consider. Method: Analysis of different methodologies involving cross-national and single-country retrospective observational studies on reimbursed use and expenditure of PPIs, statins and renin-angiotensin inhibitor drugs among European countries. Results: The nature and intensity of the various initiatives appreciably influenced prescribing behaviour and expenditure, e.g. multiple measures resulted in reimbursed expenditure for PPIs in Scotland in 2010 being 56% below 2001 levels despite a 3-fold increase in utilisation, and in the Netherlands, PPI expenditure fell by 58% in 2010 vs. 2000 despite a 3-fold increase in utilisation. A similar picture was seen with prescribing restrictions, i.e. (i) more aggressive follow-up of prescribing restrictions for patented statins and ARBs resulted in a greater reduction in the utilisation of patented statins in Austria vs. Norway and lower utilisation of patented ARBs vs. generic ACEIs in Croatia than in Austria. However, restrictions on esomeprazole had limited impact in Norway when the first prescription or recommendation occurred in hospital, where restrictions do not apply. Similar findings were seen when generic losartan became available in Western Europe. Conclusions: Multiple demand-side measures are needed to influence prescribing patterns. When combined with supply-side measures, activities can realise appreciable savings. Health authorities cannot rely on a 'spill over' effect between classes to affect
SEXUAL DIMORPHISM OF MAXIMUM FEMORAL LENGTH
Pandya A M
2011-04-01
Sexual identification from skeletal parts has medico-legal and anthropological importance. The present study aims to obtain values of maximum femoral length and to evaluate its possible usefulness in determining correct sexual identification. The study sample consisted of 184 dry, normal, adult, human femora (136 male and 48 female) from skeletal collections of the Anatomy department, M. P. Shah Medical College, Jamnagar, Gujarat. Maximum length of the femur was taken as the maximum vertical distance between the upper end of the head of the femur and the lowest point on the femoral condyle, measured with an osteometric board. Mean values obtained were 451.81 mm and 417.48 mm for right male and female femora, and 453.35 mm and 420.44 mm for left male and female femora, respectively. The higher value in males was statistically highly significant (P < 0.001) on both sides. Demarking point (D.P.) analysis of the data showed that right femora with maximum length more than 476.70 mm were definitely male and less than 379.99 mm were definitely female; for left bones, femora with maximum length more than 484.49 mm were definitely male and less than 385.73 mm were definitely female. Maximum length identified 13.43% of right male femora, 4.35% of right female femora, 7.25% of left male femora and 8% of left female femora. [National J of Med Res 2011; 1(2): 67-70]
Triadic conceptual structure of the maximum entropy approach to evolution.
Herrmann-Pillath, Carsten; Salthe, Stanley N
2011-03-01
Many problems in evolutionary theory are cast in dyadic terms, such as the polar oppositions of organism and environment. We argue that a triadic conceptual structure offers an alternative perspective under which the information generating role of evolution as a physical process can be analyzed, and propose a new diagrammatic approach. Peirce's natural philosophy was deeply influenced by his reception of both Darwin's theory and thermodynamics. Thus, we elaborate on a new synthesis which puts together his theory of signs and modern Maximum Entropy approaches to evolution in a process discourse. Following recent contributions to the naturalization of Peircean semiosis, pointing towards 'physiosemiosis' or 'pansemiosis', we show that triadic structures involve the conjunction of three different kinds of causality, efficient, formal and final. In this, we accommodate the state-centered thermodynamic framework to a process approach. We apply this to Ulanowicz's analysis of autocatalytic cycles as primordial patterns of life. This paves the way for a semiotic view of thermodynamics which is built on the idea that Peircean interpretants are systems of physical inference devices evolving under natural selection. In this view, the principles of Maximum Entropy, Maximum Power, and Maximum Entropy Production work together to drive the emergence of information carrying structures, which at the same time maximize information capacity as well as the gradients of energy flows, such that ultimately, contrary to Schrödinger's seminal contribution, the evolutionary process is seen to be a physical expression of the Second Law.
Maximum-power quantum-mechanical Carnot engine.
Abe, Sumiyoshi
2011-04-01
In their work [J. Phys. A 33, 4427 (2000)], Bender, Brody, and Meister have shown, by employing a two-state model of a particle confined in the one-dimensional infinite potential well, that it is possible to construct a quantum-mechanical analog of the Carnot engine through changes of both the width of the well and the quantum state in a specific manner. Here, a discussion is developed about realizing the maximum power of such an engine, where the width of the well moves at low but finite speed. The efficiency of the engine at the maximum power output is found to be universal, independent of any of the parameters contained in the model.
Prediction of Double Layer Grids' Maximum Deflection Using Neural Networks
Reza K. Moghadas
2008-01-01
Efficient neural network models are trained to predict the maximum deflection of two-way on two-way grids with variable geometrical parameters (span and height) as well as cross-sectional areas of the element groups. Backpropagation (BP) and Radial Basis Function (RBF) neural networks are employed for this purpose. The inputs of the neural networks are the span length L, the height h, and the cross-sectional areas A of all the groups; the outputs are the maximum deflections of the corresponding double layer grids. The numerical results indicate that the RBF neural network is better than BP in terms of training time and generalization performance.
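A generic RBF regression of the kind described can be sketched as follows. The training data, the stand-in deflection formula, and all hyperparameters below are invented for illustration and have no structural meaning:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training data: inputs are span L, height h and group area A;
# the target is a stand-in for maximum deflection, not a structural model.
X = rng.uniform([20.0, 1.0, 5.0], [60.0, 3.0, 25.0], size=(80, 3))
y = X[:, 0] ** 2 / (X[:, 1] * X[:, 2])

Xs = (X - X.mean(0)) / X.std(0)                 # standardize the inputs

def rbf_design(Xs, centers, width=1.0):
    """Gaussian RBF features: phi_ij = exp(-||x_i - c_j||^2 / (2 width^2))."""
    d2 = ((Xs[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

centers = Xs[::4]                               # every 4th sample as a center
Phi = rbf_design(Xs, centers)
# Ridge-regularized least squares for the output-layer weights: this is the
# one-shot training step that makes RBF networks fast compared with BP.
w = np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(len(centers)), Phi.T @ y)

train_rmse = np.sqrt(np.mean((Phi @ w - y) ** 2))
```

The closed-form solve for the output weights is what gives RBF networks their training-time advantage over iterative backpropagation, consistent with the abstract's conclusion.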
Maximum Correntropy Unscented Kalman Filter for Spacecraft Relative State Estimation.
Liu, Xi; Qu, Hua; Zhao, Jihong; Yue, Pengcheng; Wang, Meng
2016-09-20
A new algorithm called the maximum correntropy unscented Kalman filter (MCUKF) is proposed and applied to relative state estimation in space communication networks. As is well known, the unscented Kalman filter (UKF) provides an efficient tool for the non-linear state estimation problem. However, the UKF usually performs well only under Gaussian noise; its performance may deteriorate substantially in the presence of non-Gaussian noise, especially when the measurements are disturbed by heavy-tailed impulsive noise. By making use of the maximum correntropy criterion (MCC), the proposed algorithm enhances the robustness of the UKF against impulsive noise. In the MCUKF, the unscented transformation (UT) is applied to obtain a predicted state estimate and covariance matrix, and a nonlinear regression method with the MCC cost is then used to reformulate the measurement information. Finally, the UT is applied to the measurement equation to obtain the filter state and covariance matrix. Illustrative examples demonstrate the superior performance of the new algorithm.
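The robustness mechanism behind the MCC can be illustrated on a scalar toy problem (the noise levels, kernel bandwidth, and fixed-point solver below are arbitrary assumptions, not the MCUKF itself): maximizing a sum of Gaussian kernels automatically down-weights impulsive outliers that would dominate a least-squares estimate.

```python
import numpy as np

rng = np.random.default_rng(2)

# Measurements of a constant state corrupted by Gaussian noise plus a few
# heavy-tailed impulsive outliers (the regime where MCC helps the UKF).
true_state = 5.0
z = true_state + 0.1 * rng.standard_normal(200)
z[:5] += 50.0                                   # impulsive outliers

def mcc_estimate(z, sigma=1.0, n_iter=50):
    """Fixed-point iteration maximizing the correntropy sum of Gaussian
    kernels exp(-(z_i - m)^2 / (2 sigma^2)) over the location m."""
    m = np.median(z)                            # robust initial guess
    for _ in range(n_iter):
        w = np.exp(-((z - m) ** 2) / (2.0 * sigma ** 2))
        m = (w * z).sum() / w.sum()             # kernel-weighted mean
    return m

m_mcc = mcc_estimate(z)
m_mean = z.mean()                               # least-squares estimate
```

The outliers receive weights of essentially zero under the Gaussian kernel, so the MCC estimate stays near the true state while the plain mean is pulled away, which is the same effect the MCC cost has inside the filter's measurement update.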
Novel TPPO Based Maximum Power Point Method for Photovoltaic System
ABBASI, M. A.
2017-08-01
The photovoltaic (PV) system has great potential and is now installed more widely than other renewable energy sources. However, a PV system cannot perform optimally because of its strong dependence on weather conditions: the system does not always operate at its maximum power point (MPP). Many MPP tracking methods have been proposed for this purpose. One of these, the Perturb and Observe (P&O) method, is the most popular due to its simplicity, low cost and fast tracking, but it deviates from the MPP under continuously changing weather conditions, especially rapidly changing irradiance. A new Maximum Power Point Tracking (MPPT) method, Tetra Point Perturb and Observe (TPPO), has been proposed to improve PV system performance under changing irradiance conditions, and the effects of varying irradiance on the characteristic curves of a PV array module are delineated. The proposed MPPT method has shown better results in increasing the efficiency of a PV system.
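For reference, the baseline P&O loop that TPPO refines can be sketched as follows. The PV power curve is a toy stand-in, and the step size and starting voltage are arbitrary assumptions:

```python
def pv_power(v, v_oc=40.0, i_sc=8.0):
    """Toy PV curve: current falls off sharply near the open-circuit
    voltage. A stand-in for a real module model, not a datasheet fit."""
    i = i_sc * (1.0 - (v / v_oc) ** 8)
    return v * max(i, 0.0)

def perturb_and_observe(v0=20.0, dv=0.5, steps=200):
    """Classic P&O: keep stepping the operating voltage in the direction
    that increased power; reverse direction when power drops."""
    v, p = v0, pv_power(v0)
    direction = +1.0
    for _ in range(steps):
        v_new = v + direction * dv
        p_new = pv_power(v_new)
        if p_new < p:
            direction = -direction      # overshot the MPP: reverse
        v, p = v_new, p_new
    return v, p

v_mpp, p_mpp = perturb_and_observe()
```

Under steady irradiance the loop settles into a small oscillation around the MPP; its weakness under rapidly changing irradiance, which motivates TPPO, is that a power change caused by the weather is misread as the effect of the last voltage perturbation.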
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion can, however, not be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously rising rotation curve until the outermost measured radial position. That is why a general relation has been derived, giving the maximum rotation for a disc depending on the luminosity, surface brightness, and colour of the disc. As a physical basis of this relation serves an adopted fixed mass-to-light ratio as a function of colour. That functionality is consistent with results from population synthesis models and its absolute value is determined from the observed stellar velocity dispersions. The derived maximum disc rotation is compared with a number of observed maximum rotations, clearly demonstrating the need for appreciable amounts of dark matter in the disc region and even more so for LSB galaxies. Matters h...
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z.; Hong, Z.; Wang, D.; Zhou, H.; Shen, X.; Shen, C.
2014-06-01
A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important things in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (Ic) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until Ic degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results for these samples, the total length of CC used in the design of an SFCL can be determined.
Computing Rooted and Unrooted Maximum Consistent Supertrees
van Iersel, Leo
2009-01-01
A chief problem in phylogenetics and database theory is the computation of a maximum consistent tree from a set of rooted or unrooted trees. Standard inputs are triplets, rooted binary trees on three leaves, or quartets, unrooted binary trees on four leaves. We give exact algorithms constructing rooted and unrooted maximum consistent supertrees in time O(2^n n^5 m^2 log(m)) for a set of m triplets (quartets), each one distinctly leaf-labeled by some subset of n labels. The algorithms extend to weighted triplets (quartets). We further present fast exact algorithms for constructing rooted and unrooted maximum consistent trees in polynomial space. Finally, for a set T of m rooted or unrooted trees with maximum degree D and distinctly leaf-labeled by some subset of a set L of n labels, we compute, in O(2^{mD} n^m m^5 n^6 log(m)) time, a tree distinctly leaf-labeled by a maximum-size subset X of L such that all trees in T, when restricted to X, are consistent with it.
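The consistency notion underlying these algorithms can be made concrete for triplets: a rooted tree displays the triplet ab|c when a and b branch off together below the lowest common ancestor of all three leaves. The nested-tuple tree encoding below is an illustrative assumption:

```python
# A rooted binary tree as nested tuples; leaves are strings.
tree = ((("a", "b"), "c"), ("d", "e"))

def leaves(t):
    """Set of leaf labels below node t."""
    return {t} if isinstance(t, str) else leaves(t[0]) | leaves(t[1])

def displays_triplet(t, a, b, c):
    """True if rooted tree t displays the triplet ab|c."""
    if isinstance(t, str):
        return False
    left, right = leaves(t[0]), leaves(t[1])
    trio = {a, b, c}
    if trio <= left:
        return displays_triplet(t[0], a, b, c)   # all three below one child
    if trio <= right:
        return displays_triplet(t[1], a, b, c)
    # t is the LCA of {a, b, c}: ab|c holds iff a and b share a child subtree
    return {a, b} <= left or {a, b} <= right

consistent = displays_triplet(tree, "a", "b", "c")     # ab|c is displayed
inconsistent = displays_triplet(tree, "a", "c", "b")   # ac|b is not
```

A supertree is consistent with an input triplet set exactly when it displays every triplet, which is the predicate the exact algorithms optimize over.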
Maximum magnitude earthquakes induced by fluid injection
McGarr, Arthur F.
2014-01-01
Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
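The stated bound, maximum seismic moment equal to the modulus of rigidity times the injected volume, converts directly into a magnitude estimate via the standard moment-magnitude relation. The shear modulus value below is a typical crustal figure assumed for illustration:

```python
import math

def max_seismic_moment(injected_volume_m3, shear_modulus_pa=3.0e10):
    """Upper bound on seismic moment from the relation in the abstract:
    M0_max = G * dV (modulus of rigidity times total injected volume)."""
    return shear_modulus_pa * injected_volume_m3

def moment_magnitude(m0_newton_meters):
    """Standard moment-magnitude conversion: Mw = (log10 M0 - 9.1) / 1.5."""
    return (math.log10(m0_newton_meters) - 9.1) / 1.5

# Example: 10,000 cubic meters of injected wastewater.
m0 = max_seismic_moment(1.0e4)
mw = moment_magnitude(m0)
```

As the abstract cautions, this is an empirical upper bound consistent with the case histories, not an absolute physical limit.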
Maximum Multiflow in Wireless Network Coding
Zhou, Jin-Yi; Jiang, Yong; Zheng, Hai-Tao
2012-01-01
In a multihop wireless network, wireless interference is crucial to the maximum multiflow (MMF) problem, which studies the maximum throughput between multiple pairs of sources and sinks. In this paper, we observe that network coding can help to decrease the impact of wireless interference, and we propose a framework to study the MMF problem for multihop wireless networks with network coding. First, a network model is set up to describe the new conflict relations modified by network coding. Then, we formulate a linear programming problem to compute the maximum throughput and show its superiority over that achievable in networks without coding. Finally, the MMF problem in wireless network coding is shown to be NP-hard and a polynomial approximation algorithm is proposed.
Application of Maximum Entropy Distribution to the Statistical Properties of Wave Groups
[Anonymous]
2007-01-01
New distributions for the statistics of wave groups based on the maximum entropy principle are presented. Maximum entropy distributions appear to be superior to conventional distributions when applied to a limited amount of information, and their application to wave group properties shows their effectiveness. An FFT filtering method is employed to obtain the wave envelope quickly and efficiently. Comparisons of both the maximum entropy distribution and the distribution of Longuet-Higgins (1984) with laboratory wind-wave data show that the former gives a better fit.
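The FFT-based envelope extraction can be sketched on a synthetic wave record: zeroing the negative-frequency half of the spectrum yields the analytic signal, whose magnitude is the envelope. The carrier and group frequencies below are arbitrary assumptions:

```python
import numpy as np

fs = 10.0                                           # sampling frequency, Hz
t = np.arange(0, 60.0, 1.0 / fs)
groups = 1.0 + 0.5 * np.cos(2 * np.pi * 0.05 * t)   # slow group envelope
eta = groups * np.cos(2 * np.pi * 0.5 * t)          # surface elevation record

# FFT filtering: zero the negative-frequency half of the spectrum, double
# the positive half, and transform back to get the analytic signal.
n = len(eta)
spec = np.fft.fft(eta)
h = np.zeros(n)
h[0] = 1.0
h[1:n // 2] = 2.0
h[n // 2] = 1.0            # n is even here: keep the Nyquist bin unscaled
analytic = np.fft.ifft(spec * h)
envelope = np.abs(analytic)
```

For this narrow-band record the recovered envelope coincides with the imposed group modulation, and the whole computation is two FFTs, which is what makes the method fast.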
Maximum Likelihood Joint Tracking and Association in Strong Clutter
Leonid I. Perlovsky
2013-01-01
We have developed a maximum likelihood formulation for a joint detection, tracking and association problem. An efficient non-combinatorial algorithm for this problem is developed in the case of strong clutter for radar data. By using an iterative procedure of the dynamic logic process “from vague-to-crisp” explained in the paper, the new tracker overcomes the combinatorial complexity of tracking in highly-cluttered scenarios and results in an orders-of-magnitude improvement in signal-to-clutter ratio.
Improving predictability of time series using maximum entropy methods
Chliamovitch, G.; Dupuis, A.; Golub, A.; Chopard, B.
2015-04-01
We discuss how maximum entropy methods may be applied to the reconstruction of Markov processes underlying empirical time series and compare this approach to usual frequency sampling. It is shown that, in low dimension, there exists a subset of the space of stochastic matrices for which the MaxEnt method is more efficient than sampling, in the sense that shorter historical samples have to be considered to reach the same accuracy. Considering short samples is of particular interest when modelling smoothly non-stationary processes, which provides, under some conditions, a powerful forecasting tool. The method is illustrated for a discretized empirical series of exchange rates.
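The frequency-sampling baseline against which the MaxEnt reconstruction is compared can be sketched directly. The two-state chain below is an arbitrary assumption, and the paper's MaxEnt alternative, which matches observed constraints rather than raw transition counts, is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(3)

# A two-state Markov chain plays the role of the "empirical" time series.
P_true = np.array([[0.9, 0.1],
                   [0.3, 0.7]])

def simulate(P, n, s0=0):
    """Sample a state path of length n from transition matrix P."""
    states = [s0]
    for _ in range(n - 1):
        states.append(rng.choice(2, p=P[states[-1]]))
    return np.array(states)

def frequency_estimate(series, k=2):
    """Usual frequency-sampling estimate of the transition matrix:
    count observed transitions and normalize each row."""
    counts = np.zeros((k, k))
    for a, b in zip(series[:-1], series[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

series = simulate(P_true, 5000)
P_hat = frequency_estimate(series)
```

The abstract's point is that for short samples this row-normalized count estimate can be noisy, and for a subset of stochastic matrices the MaxEnt reconstruction reaches the same accuracy from less history.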
The Wiener maximum quadratic assignment problem
Cela, Eranda; Woeginger, Gerhard J
2011-01-01
We investigate a special case of the maximum quadratic assignment problem where one matrix is a product matrix and the other matrix is the distance matrix of a one-dimensional point set. We show that this special case, which we call the Wiener maximum quadratic assignment problem, is NP-hard in the ordinary sense and solvable in pseudo-polynomial time. Our approach also yields a polynomial time solution for the following problem from chemical graph theory: Find a tree that maximizes the Wiener index among all trees with a prescribed degree sequence. This settles an open problem from the literature.
Maximum confidence measurements via probabilistic quantum cloning
Zhang Wen-Hai; Yu Long-Bao; Cao Zhuo-Liang; Ye Liu
2013-01-01
Probabilistic quantum cloning (PQC) cannot copy a set of linearly dependent quantum states. In this paper, we show that if incorrect copies are allowed to be produced, linearly dependent quantum states may also be cloned by the PQC. By exploiting this kind of PQC to clone a special set of three linearly dependent quantum states, we derive the upper bound of the maximum confidence measure of a set. An explicit transformation of the maximum confidence measure is presented.
Maximum floodflows in the conterminous United States
Crippen, John R.; Bue, Conrad D.
1977-01-01
Peak floodflows from thousands of observation sites within the conterminous United States were studied to provide a guide for estimating potential maximum floodflows. Data were selected from 883 sites with drainage areas of less than 10,000 square miles (25,900 square kilometers) and were grouped into regional sets. Outstanding floods for each region were plotted on graphs, and envelope curves were computed that offer reasonable limits for estimates of maximum floods. The curves indicate that floods may occur that are two to three times greater than those known for most streams.
Revealing the Maximum Strength in Nanotwinned Copper
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
The strength of polycrystalline materials increases with decreasing grain size. Below a critical size, smaller grains might lead to softening, as suggested by atomistic simulations. The strongest size should arise at a transition in deformation mechanism from lattice dislocation activities to grain boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used… algorithms, First-Fit-Increasing and First-Fit-Decreasing, for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find…
Maximum entropy analysis of EGRET data
Pohl, M.; Strong, A.W.
1997-01-01
EGRET data are usually analysed on the basis of the Maximum-Likelihood method [ma96] in a search for point sources in excess of a model for the background radiation (e.g. [hu97]). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background like the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.
Maximum phytoplankton concentrations in the sea
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collected in the North Atlantic as part of the Bermuda Atlantic Time Series program, as well as data collected off Southern California as part of the Southern California Bight Study program. The observed maximum particulate organic carbon and volumetric particle concentrations are consistent with the predictions…
Park, Hyunbin; Sim, Minseob; Kim, Shiho
2015-06-01
We propose a way of achieving maximum power and power-transfer efficiency from thermoelectric generators by optimized selection of maximum-power-point-tracking (MPPT) circuits composed of a boost-cascaded-with-buck converter. We investigated the effect of switch resistance on the MPPT performance of thermoelectric generators. The on-resistances of the switches affect the decrease in the conversion gain and reduce the maximum output power obtainable. Although the incremental values of the switch resistances are small, the resulting difference in the maximum duty ratio between the input and output powers is significant. For an MPPT controller composed of a boost converter with a practical nonideal switch, we need to monitor the output power instead of the input power to track the maximum power point of the thermoelectric generator. We provide a design strategy for MPPT controllers by considering the compromise in which a decrease in switch resistance causes an increase in the parasitic capacitance of the switch.
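The output-power-based tracking argued for above can be illustrated with a minimal perturb-and-observe sketch. This is not the authors' controller: the generator model (a Thevenin source behind an internal resistance), the lossless boost-converter input-resistance relation, and all numeric values are assumptions for illustration.

```python
# Toy thermoelectric generator: Thevenin source V_OC behind internal resistance
# R_INT, feeding load R_L through an ideal boost converter. For duty ratio d the
# converter reflects an input resistance of roughly R_L * (1 - d)**2.
# All values are hypothetical.
V_OC, R_INT, R_L = 5.0, 2.0, 10.0

def p_out(d):
    """Power delivered at the converter input for duty ratio d (lossless model)."""
    r_in = R_L * (1.0 - d) ** 2
    i = V_OC / (R_INT + r_in)
    return i * i * r_in

def track_mpp(d=0.1, step=0.005, iters=500):
    """Perturb-and-observe: keep stepping d in the direction that raised power."""
    p = p_out(d)
    for _ in range(iters):
        d = min(max(d + step, 0.0), 0.95)
        p_new = p_out(d)
        if p_new < p:          # power fell: reverse the perturbation direction
            step = -step
        p = p_new
    return d, p

d_star, p_star = track_mpp()
# Analytically, maximum transfer needs R_L*(1-d)^2 = R_INT, i.e.
# d = 1 - sqrt(R_INT/R_L) ~ 0.553 here, with p = V_OC**2/(4*R_INT) = 3.125 W.
```

In the lossless model the converged duty ratio matches the impedance-matching condition; the paper's point is that nonideal switch resistances shift the input- and output-power optima apart, which is why the tracked quantity matters.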
Malone, Stephen M.; McGue, Matt; Iacono, William G.
2010-01-01
Background: The maximum number of alcoholic drinks consumed in a single 24-hr period is an alcoholism-related phenotype with both face and empirical validity. It has been associated with severity of withdrawal symptoms and sensitivity to alcohol, genes implicated in alcohol metabolism, and amplitude of a measure of brain activity associated with…
Attainability of Carnot efficiency with autonomous engines.
Shiraishi, Naoto
2015-11-01
The maximum efficiency of autonomous engines with a finite chemical potential difference is investigated. We show that, without a particular type of singularity, autonomous engines cannot attain the Carnot efficiency. This singularity is realized in two ways: single particle transports and the thermodynamic limit. We demonstrate that both of these ways actually lead to the Carnot efficiency in concrete setups. Our results clearly illustrate that the singularity plays a crucial role in the maximum efficiency of autonomous engines.
Training Concept, Evolution Time, and the Maximum Entropy Production Principle
Alexey Bezryadin
2016-04-01
The maximum entropy production principle (MEPP) is a type of entropy optimization which demands that complex non-equilibrium systems should organize such that the rate of the entropy production is maximized. Our take on this principle is that to prove or disprove the validity of the MEPP and to test the scope of its applicability, it is necessary to conduct experiments in which the entropy produced per unit time is measured with a high precision. Thus we study electric-field-induced self-assembly in suspensions of carbon nanotubes and realize precise measurements of the entropy production rate (EPR). As a strong voltage is applied, the suspended nanotubes merge together into a conducting cloud which produces Joule heat and, correspondingly, produces entropy. We introduce two types of EPR, which have qualitatively different significance: global EPR (g-EPR) and the entropy production rate of the dissipative cloud itself (DC-EPR). The following results are obtained: (1) As the system reaches the maximum of the DC-EPR, it becomes stable because the applied voltage acts as a stabilizing thermodynamic potential; (2) We discover metastable states characterized by high, near-maximum values of the DC-EPR. Under certain conditions, such efficient entropy-producing regimes can only be achieved if the system is allowed to initially evolve under mildly non-equilibrium conditions, namely at a reduced voltage; (3) Without such a “training” period the system typically is not able to reach the allowed maximum of the DC-EPR if the bias is high; (4) We observe that the DC-EPR maximum is achieved within a time, Te, the evolution time, which scales as a power-law function of the applied voltage; (5) Finally, we present a clear example in which the g-EPR theoretical maximum can never be achieved. Yet, under a wide range of conditions, the system can self-organize and achieve a dissipative regime in which the DC-EPR equals its theoretical maximum.
Adaptive edge image enhancement based on maximum fuzzy entropy
ZHANG Xiu-hua; YANG Kun-tao
2006-01-01
Based on the maximum fuzzy entropy principle, the edge image with low contrast is optimally classified into two classes adaptively, under the conditions of probability partition and fuzzy partition. The optimal threshold is used as the classification threshold, and a local parametric gray-level transformation is applied to the obtained classes. By means of two representative parameters, the homogeneity of the regions in the edge image is improved. Simulations on a set of test images show that the proposed technique possesses excellent performance in homogeneity, and that the extracted and enhanced edges provide an efficient edge representation of images.
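The fuzzy-entropy criterion above refines classical maximum-entropy thresholding. As a simpler illustration of the underlying idea (not the paper's fuzzy method), the following sketch implements the non-fuzzy Kapur criterion: pick the threshold that maximizes the sum of the Shannon entropies of the two classes, here on a toy 8-level histogram.

```python
import math

def kapur_threshold(hist):
    """Classical maximum-entropy threshold (Kapur-style): choose t that
    maximizes the sum of the Shannon entropies of the two classes."""
    total = sum(hist)
    p = [h / total for h in hist]
    best_t, best_h = 0, -1.0
    for t in range(1, len(p)):
        w0 = sum(p[:t])
        w1 = 1.0 - w0
        if w0 <= 0 or w1 <= 0:
            continue
        h0 = -sum(q / w0 * math.log(q / w0) for q in p[:t] if q > 0)
        h1 = -sum(q / w1 * math.log(q / w1) for q in p[t:] if q > 0)
        if h0 + h1 > best_h:
            best_t, best_h = t, h0 + h1
    return best_t

# Bimodal toy histogram over 8 gray levels: dark mass at 0-1, bright mass at 6-7.
hist = [40, 30, 5, 1, 1, 5, 30, 40]
t = kapur_threshold(hist)   # splits the two modes down the middle
```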
Analysis of Photovoltaic Maximum Power Point Trackers
Veerachary, Mummadi
The photovoltaic generator exhibits a non-linear i-v characteristic and its maximum power point (MPP) varies with solar insolation. An intermediate switch-mode dc-dc converter is required to extract maximum power from the photovoltaic array. In this paper buck, boost and buck-boost topologies are considered, and a detailed mathematical analysis, both for continuous and discontinuous inductor current operation, is given for MPP operation. The conditions on the connected load values and duty ratio are derived for achieving satisfactory maximum power point operation. Further, it is shown that certain load values, falling out of the optimal range, will drive the operating point away from the true maximum power point. A detailed comparison of the various topologies for MPPT is given. Selection of the converter topology for a given loading is discussed. A detailed discussion of circuit-oriented model development is given, and the MPPT effectiveness of the various converter systems is verified through simulations. The proposed theory and analysis are validated through experimental investigations.
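The load-range condition the abstract mentions can be sketched for one topology. Assuming an ideal buck converter in continuous conduction, the input resistance seen by the panel is R_load / D^2, so placing the panel at its MPP fixes the duty ratio; loads that would require D > 1 cannot reach the true MPP. The panel numbers below are hypothetical, and this is a textbook-style simplification, not the paper's full analysis.

```python
import math

def buck_duty_for_mpp(v_mpp, i_mpp, r_load):
    """Duty ratio placing an ideal buck converter's input at the PV maximum
    power point (continuous conduction: R_in = R_load / D**2).
    Returns None when the load falls outside the feasible range (D > 1)."""
    r_mpp = v_mpp / i_mpp              # optimal source resistance at the MPP
    d = math.sqrt(r_load / r_mpp)
    return d if d <= 1.0 else None

# Hypothetical panel MPP: 17.2 V at 3.5 A  ->  R_mpp ~ 4.9 ohm
d_light = buck_duty_for_mpp(17.2, 3.5, 2.0)    # light load: MPP reachable
d_heavy = buck_duty_for_mpp(17.2, 3.5, 10.0)   # load too large: MPP unreachable
```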
On maximum cycle packings in polyhedral graphs
Peter Recht
2014-04-01
This paper addresses upper and lower bounds for the cardinality of a maximum vertex-/edge-disjoint cycle packing in a polyhedral graph G. Bounds on the cardinality of such packings are provided that depend on the size, the order, or the number of faces of G, respectively. Polyhedral graphs are constructed that attain these bounds.
Hard graphs for the maximum clique problem
Hoede, Cornelis
1988-01-01
The maximum clique problem is one of the NP-complete problems. There are graphs for which a reduction technique exists that transforms the problem for these graphs into one for graphs with specific properties in polynomial time. The resulting graphs do not grow exponentially in order and number. Gra
Maximum Likelihood Estimation of Search Costs
J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)
2006-01-01
In a recent paper Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online p…
Weak Scale From the Maximum Entropy Principle
Hamada, Yuta; Kawana, Kiyoharu
2015-01-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model are fixed in such a way that the radiation of the $S^{3}$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the Standard Model, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_{h}$, and show that it becomes maximum around $v_{h}={\cal O}(300\,\text{GeV})$ when the dimensionless couplings in the Standard Model, that is, the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by $v_{h}\sim T_{\rm BBN}^{2}/(M_{\rm pl}\,y_{e}^{5})$.
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the $S^3$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_h$, and show that it becomes maximum around $v_h = \mathcal{O}(300\,\text{GeV})$ when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by $v_h \sim T_{\rm BBN}^2/(M_{\rm pl}\,y_e^5)$, where $y_e$ is the Yukawa coupling of the electron, $T_{\rm BBN}$ is the temperature at which Big Bang nucleosynthesis starts, and $M_{\rm pl}$ is the Planck mass.
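The quoted scaling can be sanity-checked numerically. The input values below (natural units in GeV, $T_{\rm BBN}\sim 1$ MeV, and the electron Yukawa defined as $y_e=\sqrt{2}\,m_e/v$) are my assumptions, not values taken from the paper; with them the formula indeed lands at a few hundred GeV.

```python
import math

# Assumed inputs, natural units in GeV:
T_BBN = 1e-3                     # ~1 MeV, onset of Big Bang nucleosynthesis
M_pl = 1.22e19                   # Planck mass
m_e, v = 0.511e-3, 246.0         # electron mass, Higgs expectation value
y_e = math.sqrt(2) * m_e / v     # electron Yukawa coupling, ~2.9e-6

v_h = T_BBN**2 / (M_pl * y_e**5)
# v_h comes out at a few hundred GeV, consistent with the quoted O(300 GeV).
```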
Global characterization of the Holocene Thermal Maximum
Renssen, H.; Seppä, H.; Crosta, X.; Goosse, H.; Roche, D.M.V.A.P.
2012-01-01
We analyze the global variations in the timing and magnitude of the Holocene Thermal Maximum (HTM) and their dependence on various forcings in transient simulations covering the last 9000 years (9 ka), performed with a global atmosphere-ocean-vegetation model. In these experiments, we consider the i
Instance Optimality of the Adaptive Maximum Strategy
L. Diening; C. Kreuzer; R. Stevenson
2016-01-01
In this paper, we prove that the standard adaptive finite element method with a (modified) maximum marking strategy is instance optimal for the total error, being the square root of the squared energy error plus the squared oscillation. This result will be derived in the model setting of Poisson’s e
Maximum phonation time: variability and reliability.
Speyer, Renée; Bogaardt, Hans C A; Passos, Valéria Lima; Roodenburg, Nel P H D; Zumach, Anne; Heijnen, Mariëlle A M; Baijens, Laura W J; Fleskens, Stijn J H M; Brunings, Jan W
2010-05-01
The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia versus a group of healthy control subjects matched by age and gender. Over a period of maximally 6 weeks, three video recordings were made of five subjects' maximum phonation time trials. A panel of five experts were responsible for all measurements, including a repeated measurement of the subjects' first recordings. Patients showed significantly shorter maximum phonation times compared with healthy controls (on average, 6.6 seconds shorter). The averaged interclass correlation coefficient (ICC) over all raters per trial for the first day was 0.998. The averaged reliability coefficient per rater and per trial for repeated measurements of the first day's data was 0.997, indicating high intrarater reliability. The mean reliability coefficient per day for one trial was 0.939. When using five trials, the reliability increased to 0.987. The reliability over five trials for a single day was 0.836; for 2 days, 0.911; and for 3 days, 0.935. To conclude, the maximum phonation time has proven to be a highly reliable measure in voice assessment. A single rater is sufficient to provide highly reliable measurements.
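The abstract does not name the formula behind the reported reliability gains, but the classical Spearman-Brown prophecy formula for averaging k parallel measurements reproduces them closely, which is a quick way to check the numbers.

```python
def spearman_brown(r1, k):
    """Reliability of the mean of k parallel measurements,
    given single-measurement reliability r1."""
    return k * r1 / (1 + (k - 1) * r1)

# One trial on one day: 0.939 -> five trials: ~0.987 (abstract reports 0.987).
r5 = spearman_brown(0.939, 5)
# One day: 0.836 -> two days: ~0.911 (abstract: 0.911);
# three days: ~0.94, close to the reported 0.935.
r2d = spearman_brown(0.836, 2)
r3d = spearman_brown(0.836, 3)
```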
Maximum likelihood estimation of fractionally cointegrated systems
Lasak, Katarzyna
In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointegration relations, the degree of fractional cointegration, the matrix of the speed of adjustment…
Maximum likelihood estimation for integrated diffusion processes
Baltazar-Larios, Fernando; Sørensen, Michael
EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...
Maximum gain of Yagi-Uda arrays
Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.
1971-01-01
Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum… Yagi–Uda arrays with equal and unequal spacing have also been optimised, with experimental verification.
Meng-Hui Wang
2015-08-01
Sliding mode strategy (SMS) for maximum power point tracking (MPPT) is used in this study of a human power generation system. This approach ensures maximum power at different rotation speeds to increase efficiency and corrects for the lack of robustness in traditional methods. The intelligent extension theory is used to reduce input saturation and high-frequency switching in the sliding mode strategy, as well as to increase the efficiency and response speed. The experimental results show that the efficiency of the extension SMS (ESMS) is 5% higher than in traditional SMS, and the response is 0.5 s faster.
PTree: pattern-based, stochastic search for maximum parsimony phylogenies
Ivan Gregor
2013-06-01
Phylogenetic reconstruction is vital to analyzing the evolutionary relationships of genes within and across populations of different species. Nowadays, with next-generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology that is optimal according to standard criteria such as maximum parsimony, maximum likelihood, or posterior probability is a computationally very demanding task for phylogenetic inference methods. Here, we describe a stochastic search method for a maximum parsimony tree, implemented in a software package we named PTree. Our method is based on a new pattern-based technique that enables us to efficiently infer intermediate sequences whose incorporation in the current tree topology yields a phylogenetic tree with a lower cost. Evaluation across multiple datasets showed that our method is comparable, in terms of topological accuracy and runtime, to the algorithms implemented in PAUP* or TNT, which are widely used by the bioinformatics community. We show that our method can process large-scale datasets of 1,000-8,000 sequences. We believe that our novel pattern-based method enriches the current set of tools and methods for phylogenetic tree inference. The software is available under: http://algbio.cs.uni-duesseldorf.de/webapps/wa-download/.
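The parsimony cost being minimized here is, for a fixed tree, computable with the classic Fitch small-parsimony algorithm. The sketch below is illustrative background, not PTree's pattern-based search; tree shape and character states are made up.

```python
def fitch_cost(tree, leaf_states):
    """Fitch small-parsimony: minimum number of state changes on a fixed
    rooted binary tree for one character. `tree` is a nested tuple of leaf names."""
    def walk(node):
        if isinstance(node, str):                 # leaf: singleton state set
            return {leaf_states[node]}, 0
        (s_l, c_l), (s_r, c_r) = walk(node[0]), walk(node[1])
        inter = s_l & s_r
        if inter:
            return inter, c_l + c_r               # children agree: no extra change
        return s_l | s_r, c_l + c_r + 1           # disagreement: one mutation

    return walk(tree)[1]

# Four taxa, one site: ((A,B),(C,D)) with states G,G | T,T needs a single change.
tree = (("A", "B"), ("C", "D"))
cost = fitch_cost(tree, {"A": "G", "B": "G", "C": "T", "D": "T"})
```

Summing this cost over all sites gives the parsimony score that tree-search methods like PTree, PAUP*, and TNT try to minimize over topologies.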
Exact parallel maximum clique algorithm for general and protein graphs.
Depolli, Matjaž; Konc, Janez; Rozman, Kati; Trobec, Roman; Janežič, Dušanka
2013-09-23
A new exact parallel maximum clique algorithm, MaxCliquePara, which finds the maximum clique (the fully connected subgraph) in undirected general and protein graphs, is presented. First, a new branch-and-bound algorithm for finding a maximum clique on a single computer core, which builds on ideas presented in two published state-of-the-art sequential algorithms, is implemented. The new sequential MaxCliqueSeq algorithm is faster than the reference algorithms both on DIMACS benchmark graphs and on protein-derived product graphs used for protein structural comparisons. Next, the MaxCliqueSeq algorithm is parallelized by splitting the branch-and-bound search tree across multiple cores, resulting in the MaxCliquePara algorithm. The ability to exploit all cores efficiently makes the new parallel MaxCliquePara algorithm markedly superior to other tested algorithms. On a 12-core computer, the parallelization provides up to 2 orders of magnitude faster execution on the large DIMACS benchmark graphs and up to an order of magnitude faster execution on protein product graphs. The algorithms are freely accessible at http://commsys.ijs.si/~matjaz/maxclique.
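The branch-and-bound skeleton underlying such algorithms fits in a few lines: grow a clique vertex by vertex, keep only candidates adjacent to everything chosen so far, and prune any branch that cannot beat the incumbent. This toy sketch omits the coloring bounds and parallel tree-splitting that make MaxCliqueSeq/MaxCliquePara fast.

```python
def max_clique(adj):
    """Tiny branch-and-bound maximum clique (illustrative only).
    adj: dict mapping vertex -> set of neighbours."""
    best = []

    def expand(clique, candidates):
        nonlocal best
        if len(clique) + len(candidates) <= len(best):
            return                                  # bound: cannot beat incumbent
        for v in list(candidates):
            candidates.remove(v)
            new_cand = candidates & adj[v]          # stay adjacent to clique + v
            if len(clique) + 1 + len(new_cand) > len(best):
                if new_cand:
                    expand(clique + [v], set(new_cand))
                else:                               # leaf: maximal clique found
                    best = clique + [v]

    expand([], set(adj))
    return best

# 5-vertex graph whose maximum clique is the triangle {0, 1, 2}.
adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1}, 3: {1, 4}, 4: {3}}
clique = max_clique(adj)
```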
Noise and physical limits to maximum resolution of PET images
Herraiz, J.L.; Espana, S.; Vicente, E.; Vaquero, J.J.; Desco, M.; Udias, J.M.
2007-10-01
In this work we show that there is a limit for the maximum resolution achievable with a high-resolution PET scanner, as well as for the best signal-to-noise ratio. These limits are ultimately related to the physical effects involved in the emission and detection of the radiation, and thus cannot be overcome by any particular reconstruction method. These effects prevent the spatial high-frequency components of the imaged structures from being recorded by the scanner. Therefore, the information encoded in these high frequencies cannot be recovered by any reconstruction technique. Within this framework, we have determined the maximum resolution achievable for a given acquisition as a function of data statistics and scanner parameters, like the size of the crystals or the inter-crystal scatter. In particular, the noise level in the data is outlined as a factor limiting high-resolution imaging in tomographs with small crystal sizes. These results have implications for how to decide the optimal number of voxels of the reconstructed image and how to design better PET scanners.
Model Selection Through Sparse Maximum Likelihood Estimation
Banerjee, Onureena; D'Aspremont, Alexandre
2007-01-01
We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...
Maximum-entropy description of animal movement.
Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M
2015-03-01
We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.
Pareto versus lognormal: a maximum entropy test.
Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano
2011-08-01
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
Maximum Variance Hashing via Column Generation
Lei Luo
2013-01-01
item search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by the classic nonlinear dimensionality reduction algorithm—maximum variance unfolding, we propose a novel unsupervised hashing method, named maximum variance hashing, in this work. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used… algorithms, First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find the competitive ratio of various natural algorithms. We study the general versions of the problems as well as the parameterized versions where there is an upper bound of 1/k on the item sizes, for some integer k.
Nonparametric Maximum Entropy Estimation on Information Diagrams
Martin, Elliot A; Meinke, Alexander; Děchtěrenko, Filip; Davidsen, Jörn
2016-01-01
Maximum entropy estimation is of broad interest for inferring properties of systems across many different disciplines. In this work, we significantly extend a technique we previously introduced for estimating the maximum entropy of a set of random discrete variables when conditioning on bivariate mutual informations and univariate entropies. Specifically, we show how to apply the concept to continuous random variables and vastly expand the types of information-theoretic quantities one can condition on. This allows us to establish a number of significant advantages of our approach over existing ones. Not only does our method perform favorably in the undersampled regime, where existing methods fail, but it also can be dramatically less computationally expensive as the cardinality of the variables increases. In addition, we propose a nonparametric formulation of connected informations and give an illustrative example showing how this agrees with the existing parametric formulation in cases of interest. We furthe...
Zipf's law, power laws and maximum entropy
Visser, Matt
2013-04-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
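The single-constraint maximum-entropy argument can be written out in a few lines (a standard calculation restating the claim, not text taken from the paper): maximizing the Shannon entropy subject only to a specified average of $\ln k$ forces a pure power law.

```latex
% Maximize H = -\sum_k p_k \ln p_k  subject to  \sum_k p_k = 1
% and a fixed mean logarithm  \sum_k p_k \ln k = \chi :
\mathcal{L} = -\sum_{k\ge 1} p_k \ln p_k
  - \mu\Big(\sum_k p_k - 1\Big)
  - \alpha\Big(\sum_k p_k \ln k - \chi\Big),
\qquad
\frac{\partial \mathcal{L}}{\partial p_k}
  = -\ln p_k - 1 - \mu - \alpha \ln k = 0
\;\Longrightarrow\;
p_k = e^{-(1+\mu)}\, k^{-\alpha} = \frac{k^{-\alpha}}{\zeta(\alpha)} .
```

The exponent $\alpha$ is fixed implicitly by the constraint value $\chi$, and the normalization produces the zeta function, recovering Zipf-type laws without any elaborate cost function.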
Zipf's law, power laws, and maximum entropy
Visser, Matt
2012-01-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines - from astronomy to demographics to economics to linguistics to zoology, and even warfare. A recent model of random group formation [RGF] attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present article I argue that the cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
Regions of constrained maximum likelihood parameter identifiability
Lee, C.-H.; Herget, C. J.
1975-01-01
This paper considers the parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian-white distributed measurement errors. Knowledge of the system parameterization is assumed to be known. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems. It is shown that if the vector of true parameters is locally CML identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the CML estimation sequence will converge to the true parameters.
A Maximum Radius for Habitable Planets.
Alibert, Yann
2015-09-01
We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: (1) a surface temperature and pressure compatible with the existence of liquid water, and (2) no ice layer at the bottom of a putative global ocean that would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the overall maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced when considering planets with higher Fe/Si ratios and taking into account irradiation effects on the structure of the gas envelope.
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-01-01
An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ (P m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by t...
A stochastic maximum principle via Malliavin calculus
Øksendal, Bernt; Zhou, Xun Yu; Meyer-Brandis, Thilo
2008-01-01
This paper considers a controlled Itô-Lévy process where the information available to the controller is possibly less than the overall information. All the system coefficients and the objective performance functional are allowed to be random, possibly non-Markovian. Malliavin calculus is employed to derive a maximum principle for the optimal control of such a system where the adjoint process is explicitly expressed.
Tissue radiation response with maximum Tsallis entropy.
Sotolongo-Grau, O; Rodríguez-Pérez, D; Antoranz, J C; Sotolongo-Costa, Oscar
2010-10-08
The expression of survival factors for radiation damaged cells is currently based on probabilistic assumptions and experimentally fitted for each tumor, radiation, and conditions. Here, we show how the simplest of these radiobiological models can be derived from the maximum entropy principle of the classical Boltzmann-Gibbs expression. We extend this derivation using the Tsallis entropy and a cutoff hypothesis, motivated by clinical observations. The obtained expression shows a remarkable agreement with the experimental data found in the literature.
Maximum Estrada Index of Bicyclic Graphs
Wang, Long; Wang, Yi
2012-01-01
Let $G$ be a simple graph of order $n$, and let $\lambda_1(G),\lambda_2(G),\ldots,\lambda_n(G)$ be the eigenvalues of its adjacency matrix. The Estrada index of $G$ is defined as $EE(G)=\sum_{i=1}^{n}e^{\lambda_i(G)}$. In this paper we determine the unique graph with maximum Estrada index among bicyclic graphs with fixed order.
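Since $EE(G)=\sum_i e^{\lambda_i}=\operatorname{tr}(e^A)$, the index can be computed without an eigensolver by summing $\operatorname{tr}(A^k)/k!$, the power series of the matrix exponential. A small pure-Python sketch (my illustration, not from the paper), checked on the triangle $C_3$, whose adjacency eigenvalues are $2,-1,-1$:

```python
import math

def estrada_index(A, terms=60):
    """EE(G) = sum_i exp(lambda_i) = trace(exp(A)) = sum_k trace(A^k)/k!,
    evaluated by truncating the matrix-exponential power series."""
    n = len(A)
    P = [[float(i == j) for j in range(n)] for i in range(n)]   # A^0 = I
    ee = 0.0
    for k in range(terms):
        ee += sum(P[i][i] for i in range(n)) / math.factorial(k)
        P = [[sum(P[i][m] * A[m][j] for m in range(n)) for j in range(n)]
             for i in range(n)]                                  # P <- P @ A
    return ee

# Triangle C3: eigenvalues 2, -1, -1, so EE = e^2 + 2/e.
C3 = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
ee = estrada_index(C3)
```

The truncation at 60 terms is far more than enough here, since the spectral radius of a small graph keeps the series tail negligible.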
Maximum privacy without coherence, zero-error
Leung, Debbie; Yu, Nengkun
2016-09-01
We study the possible difference between the quantum and the private capacities of a quantum channel in the zero-error setting. For a family of channels introduced by Leung et al. [Phys. Rev. Lett. 113, 030512 (2014)], we demonstrate an extreme difference: the zero-error quantum capacity is zero, whereas the zero-error private capacity is maximum given the quantum output dimension.
Direct maximum parsimony phylogeny reconstruction from genotype data
Ravi R
2007-12-01
Background: Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data are more commonly available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data, so phylogenetic applications for autosomal data must rely on other methods for first computationally inferring haplotypes from genotypes. Results: In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Conclusion: Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower bound on the number of mutations that the genetic region has undergone.
Automatic maximum entropy spectral reconstruction in NMR.
Mobli, Mehdi; Maciejewski, Mark W; Gryk, Michael R; Hoch, Jeffrey C
2007-10-01
Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system.
Maximum entropy analysis of cosmic ray composition
Nosek, Dalibor; Vícha, Jakub; Trávníček, Petr; Nosková, Jana
2016-01-01
We focus on the primary composition of cosmic rays with the highest energies that cause extensive air showers in the Earth's atmosphere. A way of examining the two lowest order moments of the sample distribution of the depth of shower maximum is presented. The aim is to show that useful information about the composition of the primary beam can be inferred with the limited knowledge we have about the processes underlying these observations. In order to describe how the moments of the depth of shower maximum depend on the type of primary particles and their energies, we utilize a superposition model. Using the principle of maximum entropy, we are able to determine what trends in the primary composition are consistent with the input data, while relying on a limited amount of information from shower physics. Some capabilities and limitations of the proposed method are discussed. In order to achieve a realistic description of the primary mass composition, we pay special attention to the choice of the parameters of the sup...
A Maximum Resonant Set of Polyomino Graphs
Zhang Heping
2016-05-01
A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square) with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.
Minimal Length, Friedmann Equations and Maximum Density
Awad, Adel
2014-01-01
Inspired by Jacobson's thermodynamic approach [gr-qc/9504004], Cai et al. [hep-th/0501055, hep-th/0609128] have shown the emergence of Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation [hep-th/0609128] of Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure $p(\rho,a)$ leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature $k$. As an example w...
Maximum saliency bias in binocular fusion
Lu, Yuhao; Stafford, Tom; Fox, Charles
2016-07-01
Subjective experience at any instant consists of a single ("unitary"), coherent interpretation of sense data rather than a "Bayesian blur" of alternatives. However, computation of Bayes-optimal actions has no role for unitary perception, instead being required to integrate over every possible action-percept pair to maximise expected utility. So what is the role of unitary coherent percepts, and how are they computed? Recent work provided objective evidence for non-Bayes-optimal, unitary coherent, perception and action in humans; and further suggested that the percept selected is not the maximum a posteriori percept but is instead affected by utility. The present study uses a binocular fusion task first to reproduce the same effect in a new domain, and second, to test multiple hypotheses about exactly how utility may affect the percept. After accounting for high experimental noise, it finds that both Bayes optimality (maximise expected utility) and the previously proposed maximum-utility hypothesis are outperformed in fitting the data by a modified maximum-salience hypothesis, using unsigned utility magnitudes in place of signed utilities in the bias function.
Maximum-biomass prediction of homofermentative Lactobacillus.
Cui, Shumao; Zhao, Jianxin; Liu, Xiaoming; Chen, Yong Q; Zhang, Hao; Chen, Wei
2016-07-01
Fed-batch and pH-controlled cultures have been widely used for industrial production of probiotics. The aim of this study was to systematically investigate the relationship between the maximum biomass of different homofermentative Lactobacillus and lactate accumulation, and to develop a prediction equation for the maximum biomass concentration in such cultures. The accumulation of the end products and the depletion of nutrients by various strains were evaluated. In addition, the minimum inhibitory concentrations (MICs) of acid anions for various strains at pH 7.0 were examined. The lactate concentration at the point of complete inhibition was not significantly different from the MIC of lactate for all of the strains, although the inhibition mechanism of lactate and acetate on Lactobacillus rhamnosus was different from that of the other strains, which were inhibited by the osmotic pressure caused by acid anions at pH 7.0. When the lactate concentration accumulated to the MIC, the strains stopped growing. The maximum biomass was closely related to the biomass yield per unit of lactate produced (Y_X/P) and the MIC (C) of lactate for different homofermentative Lactobacillus. Based on the experimental data obtained using different homofermentative Lactobacillus, a prediction equation was established as follows: X_max − X_0 = (0.59 ± 0.02)·Y_X/P·C.
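Taken at face value, the reported relation can be applied directly once the yield and MIC are measured. The sketch below uses the central fitted coefficient 0.59; the yield and MIC values are hypothetical placeholders, not data from the study:

```python
def predict_max_biomass(x0, yield_per_lactate, mic_lactate, k=0.59):
    """X_max = X_0 + k * Y_X/P * C, with k = 0.59 (the reported fit)."""
    return x0 + k * yield_per_lactate * mic_lactate

# Hypothetical inputs: initial biomass 0.1 g/L, Y_X/P = 0.2 g biomass
# per g lactate, MIC of lactate C = 50 g/L.
print(predict_max_biomass(0.1, 0.2, 50.0))  # 0.1 + 0.59*0.2*50 = 6.0
```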
Artificial Neural Network In Maximum Power Point Tracking Algorithm Of Photovoltaic Systems
Modestas Pikutis
2014-05-01
Scientists are continually looking for ways to improve the efficiency of solar cells. The efficiency of solar cells available to the general public is up to 20%. Part of the solar energy is wasted, and the capacity of a solar power plant is significantly reduced, if the controller is slow or cannot stay at the maximum power point of the solar modules. Various maximum power point tracking algorithms have been created, but most are slow or make mistakes. Artificial neural networks (ANN) are mentioned more and more often in the literature on maximum power point tracking, in order to improve the performance of the controller. A self-learning artificial neural network and the IncCond algorithm were used for maximum power point tracking in the created solar power plant model, and a control algorithm was developed. The solar power plant model is implemented in the Matlab/Simulink environment.
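The IncCond (incremental conductance) rule mentioned in the abstract exploits the fact that at the maximum power point dP/dV = 0, i.e. dI/dV = −I/V. A minimal sketch of that rule on a toy PV curve follows; the I-V model and all parameter values are illustrative assumptions, not the paper's ANN controller or a calibrated module model:

```python
def pv_current(v, i_sc=5.0, v_oc=40.0, k=8):
    """Toy PV I-V curve (illustrative only, not a real module model)."""
    return i_sc * (1.0 - (v / v_oc) ** k)

def incond_mppt(v=20.0, step=0.05, iters=3000):
    """Incremental conductance: at the MPP, dI/dV = -I/V."""
    i = pv_current(v)
    for _ in range(iters):
        v_new = v + step
        i_new = pv_current(v_new)
        di, dv = i_new - i, v_new - v
        if di / dv > -i_new / v_new:
            # Left of the MPP (dP/dV > 0): keep increasing voltage.
            v, i = v_new, i_new
        else:
            # Right of the MPP: back off by one step.
            v, i = v - step, pv_current(v - step)
    return v

# For this toy curve, dP/dV = 0 gives V* = V_oc * (1/(k+1))**(1/k) ≈ 30.4 V;
# the tracker settles into a small oscillation around that point.
print(round(incond_mppt(), 2))
```

Real trackers add step-size adaptation and irradiance-change detection on top of this core comparison.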
Maximum a posteriori decoder for digital communications
Altes, Richard A. (Inventor)
1997-01-01
A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
Kernel-based Maximum Entropy Clustering
JIANG Wei; QU Jiao; LI Benxi
2007-01-01
With the development of the Support Vector Machine (SVM), the "kernel method" has been studied extensively. In this paper, we present a novel Kernel-based Maximum Entropy Clustering algorithm (KMEC). Using Mercer kernel functions, the proposed algorithm first maps the data from their original space to a high-dimensional space where the data are expected to be more separable, then performs MEC clustering in the feature space. The experimental results show that the proposed method has better performance on non-hyperspherical and complex data structures.
The sun and heliosphere at solar maximum.
Smith, E J; Marsden, R G; Balogh, A; Gloeckler, G; Geiss, J; McComas, D J; McKibben, R B; MacDowall, R J; Lanzerotti, L J; Krupp, N; Krueger, H; Landgraf, M
2003-11-14
Recent Ulysses observations from the Sun's equator to the poles reveal fundamental properties of the three-dimensional heliosphere at the maximum in solar activity. The heliospheric magnetic field originates from a magnetic dipole oriented nearly perpendicular to, instead of nearly parallel to, the Sun's rotation axis. Magnetic fields, solar wind, and energetic charged particles from low-latitude sources reach all latitudes, including the polar caps. The very fast high-latitude wind and polar coronal holes disappear and reappear together. Solar wind speed continues to be inversely correlated with coronal temperature. The cosmic ray flux is reduced symmetrically at all latitudes.
Conductivity maximum in a charged colloidal suspension
Bastea, S
2009-01-27
Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.
Maximum entropy signal restoration with linear programming
Mastin, G.A.; Hanson, R.J.
1988-05-01
Dantzig's bounded-variable method is used to express the maximum entropy restoration problem as a linear programming problem. This is done by approximating the nonlinear objective function with piecewise linear segments, then bounding the variables as a function of the number of segments used. The use of a linear programming approach allows equality constraints found in the traditional Lagrange multiplier method to be relaxed. A robust revised simplex algorithm is used to implement the restoration. Experimental results from 128- and 512-point signal restorations are presented.
COMPARISON BETWEEN FORMULAS OF MAXIMUM SHIP SQUAT
PETRU SERGIU SERBAN
2016-06-01
Ship squat is a combined effect of a ship's draft and trim increase due to ship motion in limited navigation conditions. Over time, researchers have conducted tests on models and ships to find a mathematical formula that can define squat. Various formulas for calculating squat can be found in the literature; among those most commonly used are those of Barrass, Millward, Eryuzlu and ICORELS. This paper presents a comparison between the squat formulas to show the differences between them and which one provides the most satisfactory results. In this respect, a cargo ship at different speeds was considered as a model for maximum squat calculations in canal navigation conditions.
Multi-Channel Maximum Likelihood Pitch Estimation
Christensen, Mads Græsbøll
2012-01-01
In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics....... This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...
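A common way to realize such a maximum likelihood estimator is a grid search over candidate fundamental frequencies, fitting per-channel harmonic amplitudes and phases by least squares and summing the projected signal energy across channels. The sketch below is a generic illustration of that idea under white-noise assumptions, not the paper's exact estimator; all signal parameters are made up:

```python
import numpy as np

def multichannel_pitch(channels, fs, f0_grid, n_harm=4):
    """Pick the f0 maximizing the summed energy of the least-squares
    harmonic fit across channels (shared f0, per-channel amplitudes,
    phases and noise levels)."""
    n = len(channels[0])
    t = np.arange(n) / fs
    best_f0, best_score = None, -np.inf
    for f0 in f0_grid:
        # Harmonic basis: cosines and sines at multiples of f0.
        Z = np.column_stack(
            [np.cos(2 * np.pi * f0 * h * t) for h in range(1, n_harm + 1)] +
            [np.sin(2 * np.pi * f0 * h * t) for h in range(1, n_harm + 1)])
        score = 0.0
        for x in channels:
            coef, *_ = np.linalg.lstsq(Z, x, rcond=None)
            score += np.sum((Z @ coef) ** 2)  # energy of the harmonic fit
        if score > best_score:
            best_f0, best_score = f0, score
    return best_f0

# Two channels sharing f0 = 100 Hz but with different amplitudes,
# phases and noise levels.
rng = np.random.default_rng(0)
fs, n = 8000, 800
t = np.arange(n) / fs
ch1 = (np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 200 * t)
       + 0.05 * rng.standard_normal(n))
ch2 = (0.7 * np.cos(2 * np.pi * 100 * t)
       + 0.3 * np.sin(2 * np.pi * 300 * t + 0.4)
       + 0.1 * rng.standard_normal(n))
f0_hat = multichannel_pitch([ch1, ch2], fs, np.arange(80.0, 120.5, 0.5))
print(f0_hat)  # close to 100 Hz
```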
Maximum entropy PDF projection: A review
Baggenstoss, Paul M.
2017-06-01
We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
Dynamical maximum entropy approach to flocking
Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M.
2014-04-01
We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.
Maximum Temperature Detection System for Integrated Circuits
Frankiewicz, Maciej; Kos, Andrzej
2015-03-01
The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path and a digital part designed in VHDL. Analogue parts of the circuit were designed with a full-custom technique. The system is part of a temperature-controlled oscillator circuit - a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated to thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.
Zipf's law and maximum sustainable growth
Malevergne, Y; Sornette, D
2010-01-01
Zipf's law states that the number of firms with size greater than S is inversely proportional to S. Most explanations start with Gibrat's rule of proportional growth but require additional constraints. We show that Gibrat's rule, at all firm levels, yields Zipf's law under a balance condition between the effective growth rate of incumbent firms (which includes their possible demise) and the growth rate of investments in entrant firms. Remarkably, Zipf's law is the signature of the long-term optimal allocation of resources that ensures the maximum sustainable growth rate of an economy.
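The statement "the number of firms with size greater than S is inversely proportional to S" is easy to check numerically on synthetic data. The sketch below is not the paper's growth model; it simply draws Pareto(α = 1) samples, for which P(X > S) = 1/S exactly, and shows the tail counts halving as S doubles:

```python
import random

random.seed(42)
# Pareto(alpha=1) via inverse transform: X = 1/U satisfies P(X > S) = 1/S.
sizes = [1.0 / random.random() for _ in range(200_000)]

def tail_count(s):
    """Number of 'firms' with size greater than s."""
    return sum(x > s for x in sizes)

# Under Zipf's law the counts should roughly halve each time S doubles.
for s in (2, 4, 8):
    print(s, tail_count(s))
```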
Acoustic space dimensionality selection and combination using the maximum entropy principle
Abdel-Haleem, Yasser H.; Renals, Steve; Lawrence, Neil D.
2004-01-01
In this paper we propose a discriminative approach to acoustic space dimensionality selection based on maximum entropy modelling. We form a set of constraints by composing the acoustic space with the space of phone classes, and use a continuous feature formulation of maximum entropy modelling to select an optimal feature set. The suggested approach has two steps: (1) the selection of the best acoustic space that efficiently and economically represents the acoustic data and its variability;...
2011-01-01
Single-stage grid-connected photovoltaic (PV) systems have advantages such as simple topology and high efficiency. However, since all the control objectives, such as maximum power point tracking, synchronization with the utility voltage, and harmonics reduction for the output current, need to be considered simultaneously, the complexity of the control scheme is much increased. In this paper a new type of grid-connected PV system with Maximum Power Point Tracking (MPPT) and reactive power simul...
Maximum energy output of a DFIG wind turbine using an improved MPPT-curve method
Dinh-Chung Phan; Shigeru Yamamoto
2015-01-01
A new method is proposed for obtaining the maximum power output of a doubly-fed induction generator (DFIG) wind turbine to control the rotor- and grid-side converters. The efficiency of maximum power point tracking that is obtained by the proposed method is theoretically guaranteed under assumptions that represent physical conditions. Several control parameters may be adjusted to ensure the quality of control performance. In particular, a DFIG state-space model and a control technique based o...
Organizing a Practice Session for Maximum Effectiveness
DeGroot, Joanna
2009-01-01
According to Jason Paulk, director of choral activities at Eastern New Mexico University, progress is made during those in-between times and that progress magnifies with efficient time spent alone. Paulk is a firm believer in the importance of singers organizing their practice sessions, and he details some effective organization methods, including…
Accurate structural correlations from maximum likelihood superpositions.
Douglas L Theobald
2008-02-01
The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.
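The final extraction step the abstract describes, PCA of a correlation matrix estimated from an ensemble, can be sketched generically. Note this is plain sample-correlation PCA on synthetic scalar coordinates, not the authors' maximum likelihood superposition (which also handles rotations and heterogeneous variances):

```python
import numpy as np

rng = np.random.default_rng(1)
n_models, n_coords = 500, 6  # e.g. 6 scalar coordinates per model

# Synthetic ensemble: coordinates 0 and 1 fluctuate together (one
# shared mode); the remaining coordinates are independent noise.
shared = rng.standard_normal(n_models)
X = 0.1 * rng.standard_normal((n_models, n_coords))
X[:, 0] += shared
X[:, 1] += shared

# Correlation matrix of the ensemble, then PCA = its eigendecomposition.
C = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(C)
top = eigvecs[:, -1]  # eigenvector of the largest eigenvalue

# The dominant mode loads heavily on the two correlated coordinates.
print(np.round(np.abs(top), 2))
```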
Maximum entropy production and the fluctuation theorem
Dewar, R C [Unite EPHYSE, INRA Centre de Bordeaux-Aquitaine, BP 81, 33883 Villenave d' Ornon Cedex (France)
2005-05-27
Recently the author used an information theoretical formulation of non-equilibrium statistical mechanics (MaxEnt) to derive the fluctuation theorem (FT) concerning the probability of second law violating phase-space paths. A less rigorous argument leading to the variational principle of maximum entropy production (MEP) was also given. Here a more rigorous and general mathematical derivation of MEP from MaxEnt is presented, and the relationship between MEP and the FT is thereby clarified. Specifically, it is shown that the FT allows a general orthogonality property of maximum information entropy to be extended to entropy production itself, from which MEP then follows. The new derivation highlights MEP and the FT as generic properties of MaxEnt probability distributions involving anti-symmetric constraints, independently of any physical interpretation. Physically, MEP applies to the entropy production of those macroscopic fluxes that are free to vary under the imposed constraints, and corresponds to selection of the most probable macroscopic flux configuration. In special cases MaxEnt also leads to various upper bound transport principles. The relationship between MaxEnt and previous theories of irreversible processes due to Onsager, Prigogine and Ziegler is also clarified in the light of these results. (letter to the editor)
Thermodynamic hardness and the maximum hardness principle
Franco-Pérez, Marco; Gázquez, José L.; Ayers, Paul W.; Vela, Alberto
2017-08-01
An alternative definition of hardness (called the thermodynamic hardness) within the grand canonical ensemble formalism is proposed in terms of the partial derivative of the electronic chemical potential with respect to the thermodynamic chemical potential of the reservoir, keeping the temperature and the external potential constant. This temperature dependent definition may be interpreted as a measure of the propensity of a system to go through a charge transfer process when it interacts with other species, and thus it keeps the philosophy of the original definition. When the derivative is expressed in terms of the three-state ensemble model, in the regime of low temperatures and up to temperatures of chemical interest, one finds that for zero fractional charge, the thermodynamic hardness is proportional to $T^{-1}(I-A)$, where $I$ is the first ionization potential, $A$ is the electron affinity, and $T$ is the temperature. However, the thermodynamic hardness is nearly zero when the fractional charge is different from zero. Thus, through the present definition, one avoids the presence of the Dirac delta function. We show that the chemical hardness defined in this way provides meaningful and discernible information about the hardness properties of a chemical species exhibiting an integer or a fractional average number of electrons, and this analysis allowed us to establish a link between the maximum possible value of the hardness here defined, with the minimum softness principle, showing that both principles are related to minimum fractional charge and maximum stability conditions.
Maximum Likelihood Analysis in the PEN Experiment
Lehman, Martin
2013-10-01
The experimental determination of the π⁺ → e⁺ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world-average experimental precision of 3.3×10⁻³ to 5×10⁻⁴ using a stopped-beam approach. During runs in 2008-10, PEN acquired over 2×10⁷ πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π⁺ → e⁺ν, π⁺ → μ⁺ν, decay in flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
Lake Basin Fetch and Maximum Length/Width
Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...
On the Performance of Maximum Likelihood Inverse Reinforcement Learning
Ratia, Héctor; Martinez-Cantin, Ruben
2012-01-01
Inverse reinforcement learning (IRL) addresses the problem of recovering a task description given a demonstration of the optimal policy used to solve such a task. The optimal policy is usually provided by an expert or teacher, making IRL specially suitable for the problem of apprenticeship learning. The task description is encoded in the form of a reward function of a Markov decision process (MDP). Several algorithms have been proposed to find the reward function corresponding to a set of demonstrations. One of the algorithms that has provided best results in different applications is a gradient method to optimize a policy squared error criterion. On a parallel line of research, other authors have presented recently a gradient approximation of the maximum likelihood estimate of the reward signal. In general, both approaches approximate the gradient estimate and the criteria at different stages to make the algorithm tractable and efficient. In this work, we provide a detailed description of the different metho...
A Maximum-Entropy Method for Estimating the Spectrum
Anonymous
2007-01-01
Based on the maximum-entropy (ME) principle, a new power spectral estimator for random waves is derived in the form $\tilde{S}(\omega)=\frac{a}{8}\bar{H}^2(2\pi)^{d+1}\omega^{-(d+2)}\exp[-b(2\pi/\omega)^n]$, by solving a variational problem subject to some quite general constraints. This robust method is comprehensive enough to describe wave spectra even in extreme wave conditions and is superior to the periodogram method, which is not suitable for processing comparatively short or intensely unsteady signals because of its tremendous boundary effect and some inherent defects of the FFT. The newly derived method for spectral estimation works fairly well even when the sample data sets are very short and unsteady, and the reliability and efficiency of this spectral estimator have been preliminarily proved.
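Reading the quoted spectral form as S(ω) = (a/8) H̄² (2π)^(d+1) ω^−(d+2) exp[−b(2π/ω)^n] (an assumption about the garbled notation), its peak frequency follows from dS/dω = 0 as ω_p = 2π(bn/(d+2))^(1/n). The sketch below evaluates the spectrum on a grid and checks this analytically; all parameter values are arbitrary:

```python
import numpy as np

def me_spectrum(omega, a=0.01, H=2.0, b=1.0, d=4, n=4):
    """S(w) = (a/8) H^2 (2*pi)^(d+1) w^-(d+2) exp[-b (2*pi/w)^n]."""
    return (a / 8.0) * H**2 * (2 * np.pi) ** (d + 1) \
        * omega ** (-(d + 2)) * np.exp(-b * (2 * np.pi / omega) ** n)

# Setting dS/domega = 0 gives omega_p = 2*pi * (b*n/(d+2)) ** (1/n).
omega = np.linspace(0.5, 20.0, 200_000)
S = me_spectrum(omega)
omega_p_numeric = omega[np.argmax(S)]
omega_p_analytic = 2 * np.pi * (1.0 * 4 / 6.0) ** (1 / 4)
print(omega_p_numeric, omega_p_analytic)  # both near 5.68 rad/s
```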
Small scale wind energy harvesting with maximum power tracking
Joaquim Azevedo
2015-07-01
It is well known that energy harvesting from wind can be used to power remote monitoring systems. There are several studies that use wind energy in small-scale systems, mainly with vertical-axis wind turbines. However, there are very few studies with actual implementations of small wind turbines. This paper compares the performance of horizontal- and vertical-axis wind turbines for energy harvesting in wireless sensor network applications. The problem with the use of wind energy is that most of the time the wind speed is very low, especially in urban areas. Therefore, this work includes a study of the wind speed distribution in an urban environment and proposes a controller to maximize the energy transfer to the storage systems. The generated power is evaluated by simulation and experimentally for different load and wind conditions. The results demonstrate the increase in efficiency of wind generators that use maximum power transfer tracking, even at low wind speeds.
A flexible annealing chaotic neural network to maximum clique problem.
Yang, Gang; Tang, Zheng; Zhang, Zhiqiang; Zhu, Yunyi
2007-06-01
Based on the analysis and comparison of several annealing strategies, we present a flexible annealing chaotic neural network which has flexible controlling ability and a quick convergence rate for optimization problems. The proposed network has rich and adjustable chaotic dynamics at the beginning, and then converges quickly to stable states. We test the network on the maximum clique problem using graphs from the DIMACS clique instances and p-random and k-random graphs. The simulations show that the flexible annealing chaotic neural network can obtain satisfactory solutions in very little time and few steps. The comparison between our proposed network and other chaotic neural networks shows that the proposed network has superior execution efficiency and a better ability to find optimal or near-optimal solutions.
Gaussian maximum likelihood and contextual classification algorithms for multicrop classification
Di Zenzo, Silvano; Bernstein, Ralph; Kolsky, Harwood G.; Degloria, Stephen D.
1987-01-01
The paper reviews some of the ways in which context has been handled in the remote-sensing literature, and additional possibilities are introduced. The problem of computing exhaustive and normalized class-membership probabilities from the likelihoods provided by the Gaussian maximum likelihood classifier (to be used as initial probability estimates to start relaxation) is discussed. An efficient implementation of probabilistic relaxation is proposed, suiting the needs of actual remote-sensing applications. A modified fuzzy-relaxation algorithm using generalized operations between fuzzy sets is presented. Combined use of the two relaxation algorithms is proposed to exploit context in multispectral classification of remotely sensed data. Results on both one artificially created image and one MSS data set are reported.
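The first step described above, turning Gaussian maximum likelihood outputs into exhaustive, normalized class-membership probabilities suitable for seeding relaxation, can be sketched in a few lines. This is a generic Gaussian classifier with toy parameters, not the paper's relaxation algorithms:

```python
import numpy as np

def gaussian_class_probs(x, means, covs, priors=None):
    """Normalized class-membership probabilities from Gaussian
    maximum likelihood class-conditional densities."""
    k = len(means)
    priors = priors if priors is not None else np.full(k, 1.0 / k)
    likes = np.empty(k)
    for j in range(k):
        d = x - means[j]
        norm = 1.0 / np.sqrt((2 * np.pi) ** len(x)
                             * np.linalg.det(covs[j]))
        likes[j] = priors[j] * norm \
            * np.exp(-0.5 * d @ np.linalg.solve(covs[j], d))
    return likes / likes.sum()  # exhaustive and normalized

# Toy two-class example in a 2-band feature space.
means = [np.array([0.0, 0.0]), np.array([4.0, 4.0])]
covs = [np.eye(2), np.eye(2)]
p = gaussian_class_probs(np.array([0.5, 0.2]), means, covs)
print(np.round(p, 3))  # pixel near class 0 mean -> membership near 1
```

In practice one would work with log-likelihoods and a log-sum-exp normalization to avoid underflow on real multispectral data.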
Triadic Conceptual Structure of the Maximum Entropy Approach to Evolution
Herrmann-Pillath, Carsten
2010-01-01
Many problems in evolutionary theory are cast in dyadic terms, such as the polar oppositions of organism and environment. We argue that a triadic conceptual structure offers an alternative perspective under which the information-generating role of evolution as a physical process can be analyzed, and propose a new diagrammatic approach. Peirce's natural philosophy was deeply influenced by his reception of both Darwin's theory and thermodynamics. Thus, we elaborate on a new synthesis which puts together his theory of signs and modern Maximum Entropy approaches to evolution. Following recent contributions to the naturalization of Peircean semiosis, we show that triadic structures involve the conjunction of three different kinds of causality: efficient, formal and final. We apply this to Ulanowicz's analysis of autocatalytic cycles as primordial patterns of life. This paves the way for a semiotic view of thermodynamics which is built on the idea that Peircean interpretants are systems of physical inference device...
Radiation Pressure Acceleration: the factors limiting maximum attainable ion energy
Bulanov, S S; Schroeder, C B; Bulanov, S V; Esirkepov, T Zh; Kando, M; Pegoraro, F; Leemans, W P
2016-01-01
Radiation pressure acceleration (RPA) is a highly efficient mechanism of laser-driven ion acceleration, with near-complete transfer of the laser energy to the ions in the relativistic regime. However, there is a fundamental limit on the maximum attainable ion energy, which is determined by the group velocity of the laser. Tightly focused laser pulses have group velocities smaller than the vacuum light speed, and, since they offer the high intensity needed for the RPA regime, it is plausible that group velocity effects would manifest themselves in experiments involving tightly focused pulses and thin foils. However, in this case finite spot size effects are important, and another limiting factor, the transverse expansion of the target, may dominate over the group velocity effect. As the laser pulse diffracts after passing the focus, the target expands accordingly due to the transverse intensity profile of the laser. Due to this expansion, the areal density of the target decreases, making it trans...
Generalized degeneracy, dynamic monopolies and maximum degenerate subgraphs
Zaker, Manouchehr
2012-01-01
A graph $G$ is said to be a $k$-degenerate graph if any subgraph of $G$ contains a vertex of degree at most $k$. Let $\kappa$ be any non-negative function on the vertex set of $G$. We first define a $\kappa$-degenerate graph. Next we give an efficient algorithm to determine whether a graph is $\kappa$-degenerate. We revisit the concept of dynamic monopolies in graphs. The latter notion is used in the formulation and analysis of the spread of influence, such as disease or opinion, in social networks. We consider dynamic monopolies with (not necessarily positive) but integral threshold assignments. We obtain a necessary and sufficient relationship between dynamic monopolies and generalized degeneracy. As applications of the previous results, we consider the problem of determining the maximum size of $\kappa$-degenerate (or $k$-degenerate) induced subgraphs in any graph. We obtain some upper and lower bounds for the maximum size of any $\kappa$-degenerate induced subgraph in general and regular graphs. All of our bounds ar...
Radiation engineering of optical antennas for maximum field enhancement.
Seok, Tae Joon; Jamshidi, Arash; Kim, Myungki; Dhuey, Scott; Lakhani, Amit; Choo, Hyuck; Schuck, Peter James; Cabrini, Stefano; Schwartzberg, Adam M; Bokor, Jeffrey; Yablonovitch, Eli; Wu, Ming C
2011-07-13
Optical antennas have generated much interest in recent years due to their ability to focus optical energy beyond the diffraction limit, benefiting a broad range of applications such as sensitive photodetection, magnetic storage, and surface-enhanced Raman spectroscopy. To achieve the maximum field enhancement for an optical antenna, parameters such as the antenna dimensions, loading conditions, and coupling efficiency have been previously studied. Here, we present a framework, based on coupled-mode theory, to achieve maximum field enhancement in optical antennas through optimization of optical antennas' radiation characteristics. We demonstrate that the optimum condition is achieved when the radiation quality factor (Q(rad)) of optical antennas is matched to their absorption quality factor (Q(abs)). We achieve this condition experimentally by fabricating the optical antennas on a dielectric (SiO(2)) coated ground plane (metal substrate) and controlling the antenna radiation through optimizing the dielectric thickness. The dielectric thickness at which the matching condition occurs is approximately half of the quarter-wavelength thickness, typically used to achieve constructive interference, and leads to ∼20% higher field enhancement relative to a quarter-wavelength thick dielectric layer.
Maximum Correntropy Unscented Kalman Filter for Spacecraft Relative State Estimation
Xi Liu
2016-09-01
A new algorithm called the maximum correntropy unscented Kalman filter (MCUKF) is proposed and applied to relative state estimation in space communication networks. As is well known, the unscented Kalman filter (UKF) provides an efficient tool for solving nonlinear state estimation problems. However, the UKF usually performs well only under Gaussian noise; its performance may deteriorate substantially in the presence of non-Gaussian noise, especially when the measurements are disturbed by heavy-tailed impulsive noise. By making use of the maximum correntropy criterion (MCC), the proposed algorithm enhances the robustness of the UKF against impulsive noise. In the MCUKF, the unscented transformation (UT) is applied to obtain a predicted state estimate and covariance matrix, and a nonlinear regression method with the MCC cost is then used to reformulate the measurement information. Finally, the UT is applied to the measurement equation to obtain the filter state and covariance matrix. Illustrative examples demonstrate the superior performance of the new algorithm.
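As a minimal illustration of the maximum correntropy criterion underlying the MCUKF, the sketch below estimates a location parameter under impulsive noise by maximizing a Gaussian-kernel correntropy objective with a fixed-point iteration. The kernel width, data, and function name are assumptions for demonstration; this is not the paper's filter, only the robustness idea it builds on.

```python
import numpy as np

def mcc_location(samples, sigma=1.0, iters=50):
    """Robust location estimate under the maximum correntropy criterion.

    Maximizes sum_i exp(-(x_i - m)^2 / (2 sigma^2)) over m via a fixed-point
    (half-quadratic) iteration: each step is a weighted mean in which
    heavy-tailed outliers receive exponentially small weights.
    """
    m = np.median(samples)  # robust starting point
    for _ in range(iters):
        w = np.exp(-((samples - m) ** 2) / (2.0 * sigma ** 2))
        m = np.sum(w * samples) / np.sum(w)
    return m

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.0, 0.1, 200),   # Gaussian bulk near 0
                       rng.normal(50.0, 1.0, 10)])  # impulsive outliers
print(abs(np.mean(data)))       # sample mean is badly biased by the outliers
print(abs(mcc_location(data)))  # MCC estimate stays near the true value 0
```

The same reweighting, applied to measurement residuals instead of raw samples, is what lets an MCC-based filter down-weight impulsive measurement noise.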
Approximating maximum clique with a Hopfield network.
Jagota, A
1995-01-01
In a graph, a clique is a set of vertices such that every pair is connected by an edge. MAX-CLIQUE is the optimization problem of finding the largest clique in a given graph and is NP-hard, even to approximate well. Several real-world and theory problems can be modeled as MAX-CLIQUE. In this paper, we efficiently approximate MAX-CLIQUE in a special case of the Hopfield network whose stable states are maximal cliques. We present several energy-descent optimizing dynamics; both discrete (deterministic and stochastic) and continuous. One of these emulates, as special cases, two well-known greedy algorithms for approximating MAX-CLIQUE. We report on detailed empirical comparisons on random graphs and on harder ones. Mean-field annealing, an efficient approximation to simulated annealing, and a stochastic dynamics are the narrow but clear winners. All dynamics approximate much better than one which emulates a "naive" greedy heuristic.
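Since the abstract notes that one of the energy-descent dynamics emulates well-known greedy algorithms for MAX-CLIQUE, here is a sketch of such a greedy baseline. It is illustrative only, not the paper's Hopfield network; the adjacency representation and example graph are assumptions.

```python
def greedy_clique(adj):
    """Greedy MAX-CLIQUE approximation: repeatedly add the vertex with the
    most neighbours among the remaining candidates, keeping only vertices
    adjacent to everything chosen so far. Returns a maximal clique.

    adj: dict mapping each vertex to the set of its neighbours.
    """
    clique = set()
    candidates = set(adj)
    while candidates:
        v = max(candidates, key=lambda u: len(adj[u] & candidates))
        clique.add(v)
        candidates &= adj[v]  # keep only vertices adjacent to all chosen
    return clique

# Triangle 0-1-2 plus a pendant vertex 3
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(greedy_clique(adj))  # a maximal clique, here {0, 1, 2}
```

The result is always a maximal clique (no vertex can be added), but not necessarily a maximum one, which is why the stochastic and mean-field-annealing dynamics in the paper outperform this "naive" heuristic.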
Maximum entropy principle and texture formation
Arminjon, M; Arminjon, Mayeul; Imbault, Didier
2006-01-01
The macro-to-micro transition in a heterogeneous material is envisaged as the selection of a probability distribution by the Principle of Maximum Entropy (MAXENT). The material is made of constituents, e.g. given crystal orientations. Each constituent is itself made of a large number of elementary constituents. The relevant probability is the volume fraction of the elementary constituents that belong to a given constituent and undergo a given stimulus. Assuming only obvious constraints in MAXENT means describing a maximally disordered material. This is proved to have the same average stimulus in each constituent. By adding a constraint in MAXENT, a new model, potentially interesting e.g. for texture prediction, is obtained.
MLDS: Maximum Likelihood Difference Scaling in R
Kenneth Knoblauch
2008-01-01
The MLDS package in the R programming language can be used to estimate perceptual scales based on the results of psychophysical experiments using the method of difference scaling. In a difference scaling experiment, observers compare two supra-threshold differences (a,b) and (c,d) on each trial. The approach is based on a stochastic model of how the observer decides which perceptual difference (or interval), (a,b) or (c,d), is greater, and the parameters of the model are estimated using a maximum likelihood criterion. We also propose a method to test the model by evaluating the self-consistency of the estimated scale. The package includes an example in which an observer judges the differences in correlation between scatterplots. The example may be readily adapted to estimate perceptual scales for arbitrary physical continua.
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-06-01
An investigation of commercial engines with finite-capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which the effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases the eventual state, market equilibrium, is solely determined by the initial conditions and the inherent characteristics of the two subsystems, while the different modes of transfer affect the model in the specific forms of the price paths and of the instantaneous commodity flow, i.e., the optimal configuration.
Maximum Segment Sum, Monadically (distilled tutorial)
Jeremy Gibbons
2011-09-01
The maximum segment sum problem is to compute, given a list of integers, the largest of the sums of the contiguous segments of that list. This problem specification maps directly onto a cubic-time algorithm; however, there is a very elegant linear-time solution too. The problem is a classic exercise in the mathematics of program construction, illustrating important principles such as calculational development, pointfree reasoning, algebraic structure, and datatype-genericity. Here, we take a sideways look at the datatype-generic version of the problem in terms of monadic functional programming, instead of the traditional relational approach; the presentation is tutorial in style, and leavened with exercises for the reader.
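The elegant linear-time solution mentioned above can be sketched as follows. This is the conventional imperative rendering (Kadane's algorithm), not the tutorial's monadic datatype-generic derivation; the test list is the classic example from the program-construction literature.

```python
def max_segment_sum(xs):
    """Linear-time maximum segment sum (Kadane's algorithm).

    'best' is the largest sum of any contiguous segment seen so far;
    'ending' is the largest sum of a segment ending at the current element.
    The empty segment (sum 0) is allowed, matching the usual specification.
    """
    best = ending = 0
    for x in xs:
        ending = max(0, ending + x)  # extend the current segment or restart
        best = max(best, ending)
    return best

print(max_segment_sum([31, -41, 59, 26, -53, 58, 97, -93, -23, 84]))  # 187
```

The cubic-time specification enumerates all O(n^2) segments and sums each in O(n); the derivation alluded to in the abstract calculates this one-pass program from that specification.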
Maximum Information and Quantum Prediction Algorithms
McElwaine, J N
1997-01-01
This paper describes an algorithm for selecting a consistent set within the consistent histories approach to quantum mechanics and investigates its properties. The algorithm uses a maximum information principle to select from among the consistent sets formed by projections defined by the Schmidt decomposition. The algorithm unconditionally predicts the possible events in closed quantum systems and ascribes probabilities to these events. A simple spin model is described and a complete classification of all exactly consistent sets of histories formed from Schmidt projections in the model is proved. This result is used to show that for this example the algorithm selects a physically realistic set. Other tentative suggestions in the literature for set selection algorithms using ideas from information theory are discussed.
Maximum process problems in optimal control theory
Goran Peskir
2005-01-01
Given a standard Brownian motion (B_t)_{t≥0} and the equation of motion dX_t = v_t dt + 2 dB_t, we set S_t = max_{0≤s≤t} X_s and consider the optimal control problem sup_v E(S_τ − cτ), where c > 0 and the supremum is taken over all admissible controls v satisfying v_t ∈ [μ_0, μ_1] for all t up to τ = inf{t > 0 | X_t ∉ (ℓ_0, ℓ_1)} with μ_0 < 0 < μ_1. The optimal control switches between μ_0 and μ_1 according to the position of X_t relative to g∗(S_t), where s ↦ g∗(s) is a switching curve that is determined explicitly (as the unique solution to a nonlinear differential equation). The solution found demonstrates that problem formulations based on a maximum functional can be successfully included in optimal control theory (calculus of variations) in addition to the classic problem formulations due to Lagrange, Mayer, and Bolza.
Maximum entropy model for business cycle synchronization
Xi, Ning; Muneepeerakul, Rachata; Azaele, Sandro; Wang, Yougui
2014-11-01
The global economy is a complex dynamical system whose cyclical fluctuations can mainly be characterized by simultaneous recessions or expansions of major economies. Thus, research on the synchronization phenomenon is key to understanding and controlling the dynamics of the global economy. Based on a pairwise maximum entropy model, we analyze the business cycle synchronization of the G7 economic system. We obtain a pairwise-interaction network which exhibits a certain clustering structure and accounts for 45% of the entire structure of the interactions within the G7 system. We also find that pairwise interactions become increasingly inadequate in capturing the synchronization as the size of the economic system grows. Thus, higher-order interactions must be taken into account when investigating the behavior of large economic systems.
Quantum gravity momentum representation and maximum energy
Moffat, J. W.
2016-11-01
We use the idea of the symmetry between the spacetime coordinates x^μ and the energy-momentum p_μ in quantum theory to construct a momentum-space quantum gravity geometry with a metric s_μν and a curvature tensor P^λ_μνρ. For a closed maximally symmetric momentum space with a constant 3-curvature, the volume of the p-space admits a cutoff with an invariant maximum momentum a. A Wheeler-DeWitt-type wave equation is obtained in the momentum space representation. The vacuum energy density and the self-energy of a charged particle are shown to be finite, and modifications of the electromagnetic radiation density and the entropy density of a system of particles occur at high frequencies.
Video segmentation using Maximum Entropy Model
QIN Li-juan; ZHUANG Yue-ting; PAN Yun-he; WU Fei
2005-01-01
Detecting objects of interest from a video sequence is a fundamental and critical task in automated visual surveillance. Most current approaches only focus on discriminating moving objects by background subtraction, whether the objects of interest are moving or stationary. In this paper, we propose layer segmentation to detect both moving and stationary target objects from surveillance video. We extend the Maximum Entropy (ME) statistical model to segment layers with features, which are collected by constructing a codebook with a set of codewords for each pixel. We also indicate how the trained models are used for the discrimination of target objects in surveillance video. Our experimental results are presented in terms of the success rate and the segmentation precision.
A maximum power point tracking algorithm for photovoltaic applications
Nelatury, Sudarshan R.; Gray, Robert
2013-05-01
The voltage-current characteristic of a photovoltaic (PV) cell is highly nonlinear, and operating a PV cell for maximum power transfer has long been a challenge. Several techniques have been proposed to estimate and track the maximum power point (MPP) in order to improve the overall efficiency of a PV panel. A strategic use of the mean value theorem permits obtaining an analytical expression for a point that lies in a close neighborhood of the true MPP, but hitherto an exact closed-form solution for the MPP has not been published. The problem can be formulated analytically as a constrained optimization, which can be solved using the Lagrange method. This method results in a system of simultaneous nonlinear equations. Solving them directly is quite difficult; however, a recursive algorithm can yield a reasonably good solution. In graphical terms, if the voltage-current characteristic and the constant-power contours are plotted on the same voltage-current plane, the point of tangency between the device characteristic and the constant-power contours is the sought-for MPP. It is subject to change with the incident irradiation and temperature, and hence an algorithm that attempts to maintain the MPP should be adaptive in nature, with fast convergence and the least misadjustment. There are two parts to its implementation. First, one needs to estimate the MPP. The second task is to have a DC-DC converter to match the given load to the MPP thus obtained. The availability of power electronics circuits has made it possible to design efficient converters. In this paper, although we do not show results from a real circuit, we use MATLAB to obtain the MPP and a buck-boost converter to match the load. Under varying conditions of load resistance and irradiance, we demonstrate MPP tracking for a commercially available solar panel, the MSX-60. The power electronics circuit is simulated with PSIM software.
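The tangency condition described above, dP/dV = I(V) + V dI/dV = 0, can be illustrated numerically on a single-diode PV cell model. All parameter values below are assumptions chosen for a plausible cell, not the paper's MSX-60 data, and the MPP is located by a simple grid search rather than the recursive Lagrange iteration.

```python
import numpy as np

# Illustrative single-diode PV cell model: I(V) = Iph - I0*(exp(V/(n*Vt)) - 1)
# Parameter values are assumed for demonstration only.
Iph = 3.0       # photocurrent [A]
I0 = 1e-9      # diode saturation current [A]
n = 1.3       # ideality factor
Vt = 0.02585  # thermal voltage at ~300 K [V]

def current(v):
    return Iph - I0 * np.expm1(v / (n * Vt))

def power(v):
    return v * current(v)

# The MPP satisfies dP/dV = I(V) + V*I'(V) = 0; here we simply locate the
# maximum of P(V) on a fine voltage grid up to roughly the open-circuit voltage.
v = np.linspace(0.0, 0.75, 20001)
p = power(v)
k = np.argmax(p)
print(f"Vmpp = {v[k]:.3f} V, Pmpp = {p[k]:.3f} W")
```

For these parameters the open-circuit voltage is about 0.73 V and the MPP falls near 0.63 V, consistent with the usual rule of thumb that Vmpp is a large fraction of Voc.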
Evaluation of pliers' grip spans in the maximum gripping task and sub-maximum cutting task.
Kim, Dae-Min; Kong, Yong-Ku
2016-12-01
A total of 25 males participated to investigate the effects of the grip spans of pliers on the total grip force, individual finger forces and muscle activities in the maximum gripping task and wire-cutting tasks. In the maximum gripping task, results showed that the 50-mm grip span had significantly higher total grip strength than the other grip spans. In the cutting task, the 50-mm grip span also showed significantly higher grip strength than the 65-mm and 80-mm grip spans, whereas the muscle activities showed a higher value at 80-mm grip span. The ratios of cutting force to maximum grip strength were also investigated. Ratios of 30.3%, 31.3% and 41.3% were obtained by grip spans of 50-mm, 65-mm, and 80-mm, respectively. Thus, the 50-mm grip span for pliers might be recommended to provide maximum exertion in gripping tasks, as well as lower maximum-cutting force ratios in the cutting tasks.
Cosmic shear measurement with maximum likelihood and maximum a posteriori inference
Hall, Alex
2016-01-01
We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with very promising results. We find that the introduction of an intrinsic shape prior mitigates noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely sub-dominant. We show how biases propagate to shear estima...
Efficiency Evaluation of Energy Systems
Kanoğlu, Mehmet; Dinçer, İbrahim
2012-01-01
Efficiency is one of the most frequently used terms in thermodynamics, and it indicates how well an energy conversion or process is accomplished. Efficiency is also one of the most frequently misused terms in thermodynamics and is often a source of misunderstanding, because it is often used without first being properly defined. This book intends to provide a comprehensive evaluation of the various efficiencies used for energy transfer and conversion systems, including steady-flow energy devices (turbines, compressors, pumps, nozzles, heat exchangers, etc.), various power plants, cogeneration plants, and refrigeration systems. The book covers first-law (energy-based) and second-law (exergy-based) efficiencies and provides a comprehensive understanding of their implications. It will help minimize the widespread misuse of efficiencies among students and researchers in the energy field by using an intuitive and unified approach for defining efficiencies. The book will be particularly useful for a clear ...
A hybrid solar panel maximum power point search method that uses light and temperature sensors
Ostrowski, Mariusz
2016-04-01
Solar cells have low efficiency and nonlinear characteristics. To increase the output power, solar cells are connected in more complex structures. Solar panels consist of series-connected solar cells with a few bypass diodes to avoid the negative effects of partial shading. Solar panels are connected to a special device called the maximum power point tracker, which adapts the output power from the solar panels to the load requirements and has a built-in algorithm to track the maximum power point of the panels. Bypass diodes may cause local maxima to appear on the power-voltage curve when the panel surface is illuminated irregularly; in this case, traditional maximum power point tracking algorithms can find only a local maximum power point. In this article, a hybrid maximum power point search algorithm is presented. The proposed method combines two algorithms: one that uses temperature sensors to track the maximum power point under partial shading, and one that uses an illumination sensor to track the maximum power point under uniform illumination. In comparison to other methods, the proposed algorithm uses correlation functions to determine the relationship between the values of the illumination and temperature sensors and the corresponding values of current and voltage at the maximum power point. Under partial shading, the algorithm calculates the local maximum power points from the temperature values and the correlation function, measures the power at each calculated point, chooses the one with the largest value, and from that point runs the perturb-and-observe search algorithm. Under uniform illumination, the algorithm calculates the maximum power point from the illumination value and the correlation function and runs the perturb-and-observe algorithm from there. In addition, the proposed method uses a special coefficient modification of the correlation functions algorithm. This sub
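The perturb-and-observe refinement stage that the hybrid method runs from its sensor-derived starting point can be sketched generically as below. This is only the standard P&O loop under an assumed toy power curve, not the paper's sensor-assisted algorithm or its correlation functions.

```python
def perturb_and_observe(measure_power, v_start, step=0.05, iters=100):
    """Generic perturb-and-observe MPP search.

    Perturbs the operating voltage by a fixed step; if the measured power
    drops, the perturbation direction is reversed. measure_power(v) returns
    the panel power at operating voltage v.
    """
    v = v_start
    p = measure_power(v)
    direction = 1.0
    for _ in range(iters):
        v_new = v + direction * step
        p_new = measure_power(v_new)
        if p_new < p:             # power dropped: reverse the perturbation
            direction = -direction
        v, p = v_new, p_new
    return v, p

# Toy unimodal power curve with its maximum at v = 17 (assumed shape)
f = lambda v: max(0.0, 60.0 - (v - 17.0) ** 2)
v_mpp, p_mpp = perturb_and_observe(f, v_start=12.0)
print(round(v_mpp, 2), round(p_mpp, 1))
```

Started from a point computed via the sensor correlation functions, such a loop converges to the nearest local maximum, which is exactly why the hybrid method first selects the best candidate peak under partial shading before running P&O.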
Maximum likelihood estimation for semiparametric density ratio model.
Diao, Guoqing; Ning, Jing; Qin, Jing
2012-06-27
In the statistical literature, the conditional density model specification is commonly used to study regression effects. One attractive model is the semiparametric density ratio model, under which the conditional density function is the product of an unknown baseline density function and a known parametric function containing the covariate information. This model has a natural connection with generalized linear models and is closely related to biased sampling problems. Despite the attractive features and importance of this model, most existing methods are too restrictive since they are based on multi-sample data or conditional likelihood functions. The conditional likelihood approach can eliminate the unknown baseline density but cannot estimate it. We propose efficient estimation procedures based on the nonparametric likelihood. The nonparametric likelihood approach allows for general forms of covariates and estimates the regression parameters and the baseline density simultaneously. Therefore, the nonparametric likelihood approach is more versatile than the conditional likelihood approach especially when estimation of the conditional mean or other quantities of the outcome is of interest. We show that the nonparametric maximum likelihood estimators are consistent, asymptotically normal, and asymptotically efficient. Simulation studies demonstrate that the proposed methods perform well in practical settings. A real example is used for illustration.
Maximum SINR Synchronization Strategies in Multiuser Filter Bank Schemes
Pecile Francesco
2010-01-01
We consider synchronization in a multiuser filter bank uplink system with single-user detection. Perfect user synchronization is not the optimal choice, contrary to what intuition would suggest. To maximize performance, the synchronization parameters have to be chosen to maximize the signal-to-interference-plus-noise ratio (SINR) at each equalizer subchannel output. However, the resulting filter bank receiver structure becomes complex. Therefore, we consider two simplified synchronization metrics that are based on the maximization of the average SINR of a given user or of the aggregate SINR of all users. Furthermore, a relaxation of the aggregate SINR metric allows implementing an efficient multiuser analysis filter bank. This receiver deploys two fractionally spaced analysis stages. Each analysis stage is efficiently implemented via a polyphase filter bank, followed by an extended discrete Fourier transform that allows the user frequency offsets to be partly compensated. Then, subchannel maximum SINR equalization is used. We discuss the application of the proposed solution to Orthogonal Frequency Division Multiple Access (OFDMA) and multiuser Filtered Multitone (FMT) systems.
Mixed integer linear programming for maximum-parsimony phylogeny inference.
Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell
2008-01-01
Reconstruction of phylogenetic trees is a fundamental problem in computational biology. While excellent heuristic methods are available for many variants of this problem, new advances in phylogeny inference will be required if we are to be able to continue to make effective use of the rapidly growing stores of variation data now being gathered. In this paper, we present two integer linear programming (ILP) formulations to find the most parsimonious phylogenetic tree from a set of binary variation data. One method uses a flow-based formulation that can produce exponential numbers of variables and constraints in the worst case. The method has, however, proven extremely efficient in practice on datasets that are well beyond the reach of the available provably efficient methods, solving several large mtDNA and Y-chromosome instances within a few seconds and giving provably optimal results in times competitive with fast heuristics that cannot guarantee optimality. An alternative formulation establishes that the problem can be solved with a polynomial-sized ILP. We further present a web server developed based on the exponential-sized ILP that performs fast maximum parsimony inferences and serves as a front end to a database of precomputed phylogenies spanning the human genome.
20 CFR 211.14 - Maximum creditable compensation.
2010-04-01
20 Employees' Benefits 1 (2010-04-01). CREDITABLE RAILROAD COMPENSATION, § 211.14 Maximum creditable compensation... Employment Accounts shall notify each employer of the amount of maximum creditable compensation applicable...
49 CFR 230.24 - Maximum allowable stress.
2010-10-01
49 Transportation 4 (2010-10-01). Allowable Stress, § 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...
Theoretical Estimate of Maximum Possible Nuclear Explosion
Bethe, H. A.
1950-01-31
The maximum nuclear accident which could occur in a Na-cooled, Be-moderated, Pu- and power-producing reactor is estimated theoretically. (T.R.H.) Results of nuclear calculations for a variety of compositions of fast, heterogeneous, sodium-cooled, U-235-fueled, plutonium- and power-producing reactors are reported. Core compositions typical of plate-, pin-, or wire-type fuel elements and with uranium as metal, alloy, and oxide were considered. These compositions included atom ratios in the following ranges: U-238 to U-235 from 2 to 8; sodium to U-235 from 1.5 to 12; iron to U-235 from 5 to 18; and vanadium to U-235 from 11 to 33. Calculations were performed to determine the effect of lead and iron reflectors between the core and blanket. Both natural and depleted uranium were evaluated as the blanket fertile material. Reactors were compared on the basis of conversion ratio, specific power, and the product of both. The calculated results are in general agreement with the experimental results from fast reactor assemblies. An analysis of the effect of new cross-section values as they became available is included. (auth)
Proposed principles of maximum local entropy production.
Ross, John; Corlan, Alexandru D; Müller, Stefan C
2012-07-12
Articles have appeared that rely on the application of some form of "maximum local entropy production principle" (MEPP). This is usually an optimization principle that is supposed to compensate for the lack of structural information and measurements about complex systems, even systems as complex and as little characterized as the whole biosphere or the atmosphere of the Earth or even of less known bodies in the solar system. We select a number of claims from a few well-known papers that advocate this principle and we show that they are in error with the help of simple examples of well-known chemical and physical systems. These erroneous interpretations can be attributed to ignoring well-established and verified theoretical results such as: (1) entropy does not necessarily increase in nonisolated systems, such as "local" subsystems; (2) macroscopic systems, as described by classical physics, are in general intrinsically deterministic: there are no "choices" in their evolution to be selected by using supplementary principles; (3) macroscopic deterministic systems are predictable to the extent to which their state and structure are sufficiently well known; usually they are not sufficiently known, and probabilistic methods need to be employed for their prediction; and (4) there is no causal relationship between the thermodynamic constraints and the kinetics of reaction systems. In conclusion, any predictions based on MEPP-like principles should not be considered scientifically founded.
Maximum entropy production and plant optimization theories.
Dewar, Roderick C
2010-05-12
Plant ecologists have proposed a variety of optimization theories to explain the adaptive behaviour and evolution of plants from the perspective of natural selection ('survival of the fittest'). Optimization theories identify some objective function--such as shoot or canopy photosynthesis, or growth rate--which is maximized with respect to one or more plant functional traits. However, the link between these objective functions and individual plant fitness is seldom quantified and there remains some uncertainty about the most appropriate choice of objective function to use. Here, plants are viewed from an alternative thermodynamic perspective, as members of a wider class of non-equilibrium systems for which maximum entropy production (MEP) has been proposed as a common theoretical principle. I show how MEP unifies different plant optimization theories that have been proposed previously on the basis of ad hoc measures of individual fitness--the different objective functions of these theories emerge as examples of entropy production on different spatio-temporal scales. The proposed statistical explanation of MEP, that states of MEP are by far the most probable ones, suggests a new and extended paradigm for biological evolution--'survival of the likeliest'--which applies from biomacromolecules to ecosystems, not just to individuals.
Maximum likelihood continuity mapping for fraud detection
Hogden, J.
1997-05-01
The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM), and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation, called a continuity map (CM). A CM maximizes the probability of sequences in the training set given the model constraints. CMs can be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction--important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real-valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where "medical history" is used to mean the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) that has been evaluated, 92% have been determined fraudulent or abusive. Despite the small sample, these results are encouraging.
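MALCOM's continuity maps are not reproduced here, but the likelihood-based anomaly scoring the abstract describes can be sketched with the simpler N-gram approach it mentions as a comparison point. The bigram model, Laplace smoothing, and toy sequences below are illustrative assumptions, not the paper's method:

```python
import math
from collections import defaultdict

def train_bigram(sequences, alpha=1.0):
    """Count bigram transitions over the training sequences."""
    counts = defaultdict(lambda: defaultdict(float))
    vocab = set()
    for seq in sequences:
        vocab.update(seq)
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts, vocab, alpha

def avg_log_likelihood(model, seq):
    """Average log-probability per transition, Laplace-smoothed."""
    counts, vocab, alpha = model
    total = 0.0
    for a, b in zip(seq, seq[1:]):
        num = counts[a][b] + alpha
        den = sum(counts[a].values()) + alpha * len(vocab)
        total += math.log(num / den)
    return total / max(1, len(seq) - 1)

model = train_bigram([list("abcabcabc"), list("abcabc")])
typical = avg_log_likelihood(model, list("abcabc"))
anomalous = avg_log_likelihood(model, list("accca"))
```

Sequences whose average log-likelihood falls well below that of the training data would be flagged for review.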
Maximum life spiral bevel reduction design
Savage, M.; Prasanna, M. G.; Coe, H. H.
1992-07-01
Optimization is applied to the design of a spiral bevel gear reduction for maximum life at a given size. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial values. Gear tooth bending strength and minimum contact ratio under load are included in the active constraints. The optimal design of the spiral bevel gear reduction includes the selection of bearing and shaft proportions in addition to gear mesh parameters. System life is maximized subject to a fixed back-cone distance of the spiral bevel gear set for a specified speed ratio, shaft angle, input torque, and power. Significant parameters in the design are: the spiral angle, the pressure angle, the numbers of teeth on the pinion and gear, and the location and size of the four support bearings. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for gradient optimization. After finding the continuous optimum, a designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearing lives on the gear parameters in the optimal configurations. For a fixed back-cone distance, optimal designs with larger shaft angles have larger service lives.
Maximum likelihood estimates of pairwise rearrangement distances.
Serdoz, Stuart; Egri-Nagy, Attila; Sumner, Jeremy; Holland, Barbara R; Jarvis, Peter D; Tanaka, Mark M; Francis, Andrew R
2017-06-21
Accurate estimation of evolutionary distances between taxa is important for many phylogenetic reconstruction methods. Distances can be estimated using a range of different evolutionary models, from single nucleotide polymorphisms to large-scale genome rearrangements. Corresponding corrections for genome rearrangement distances fall into three categories: empirical computational studies, Bayesian/MCMC approaches, and combinatorial approaches. Here, we introduce a maximum likelihood estimator for the inversion distance between a pair of genomes, using a group-theoretic approach to modelling inversions introduced recently. This MLE functions as a corrected distance: in particular, we show that because of the way sequences of inversions interact with each other, it is quite possible for the minimal distance and the MLE distance to order the distances of two genomes from a third differently. The second aspect tackles the problem of accounting for the symmetries of circular arrangements. While, generally, a frame of reference is locked and all computation made accordingly, this work incorporates the action of the dihedral group so that distance estimates are free from any a priori frame of reference. The philosophy of accounting for symmetries can be applied to any existing correction method, for which examples are offered.
Mohsen Taherbaneh; A. H. Rezaie; H. Ghafoorifard; Rahimi, K; M. B. Menhaj
2010-01-01
In applications with low-energy conversion efficiency, maximizing the output power improves the efficiency. The maximum output power of a solar panel depends on the environmental conditions and load profile. In this paper, a method based on simultaneous use of two fuzzy controllers is developed in order to maximize the generated output power of a solar panel in a photovoltaic system: fuzzy-based sun tracking and maximum power point tracking. The sun tracking is performed by changing the solar...
Boedeker, Peter
2017-01-01
Hierarchical linear modeling (HLM) is a useful tool when analyzing data collected from groups. There are many decisions to be made when constructing and estimating a model in HLM including which estimation technique to use. Three of the estimation techniques available when analyzing data with HLM are maximum likelihood, restricted maximum…
Matsumoto, Atsushi; Hasegawa, Masaru; Matsui, Keiju
In this paper, a novel position sensorless control method for interior permanent magnet synchronous motors (IPMSMs) is proposed, based on a novel flux model suitable for maximum torque control. Maximum torque per ampere (MTPA) control is often used to drive IPMSMs at maximum efficiency. Implementing this control generally requires accurate parameters. However, the inductance varies dramatically because of magnetic saturation, which has been one of the most important problems in recent years; as a result, the conventional MTPA control method fails to achieve maximum efficiency for IPMSMs because of parameter mismatches. First, a novel flux model is proposed for realizing the position sensorless control of IPMSMs that is insensitive to Lq. It is then shown that the proposed flux model can approximately estimate the maximum torque control (MTC) frame, a new coordinate frame aligned with the current vector for MTPA control. Next, a precise estimation method for the MTC frame is proposed, by which highly accurate maximum torque control can be achieved. A decoupling control algorithm based on the proposed model is also addressed. Finally, experimental results demonstrate the feasibility and effectiveness of the proposed method.
Accelerated maximum likelihood parameter estimation for stochastic biochemical systems
Daigle Bernie J
2012-05-01
Background: A prerequisite for the mechanistic simulation of a biochemical system is detailed knowledge of its kinetic parameters. Despite recent experimental advances, the estimation of unknown parameter values from observed data is still a bottleneck for obtaining accurate simulation results. Many methods exist for parameter estimation in deterministic biochemical systems; methods for discrete stochastic systems are less well developed. Given the probabilistic nature of stochastic biochemical models, a natural approach is to choose parameter values that maximize the probability of the observed data with respect to the unknown parameters, a.k.a. the maximum likelihood parameter estimates (MLEs). MLE computation for all but the simplest models requires the simulation of many system trajectories that are consistent with experimental data. For models with unknown parameters, this presents a computational challenge, as the generation of consistent trajectories can be an extremely rare occurrence. Results: We have developed Monte Carlo Expectation-Maximization with Modified Cross-Entropy Method (MCEM2): an accelerated method for calculating MLEs that combines advances in rare event simulation with a computationally efficient version of the Monte Carlo expectation-maximization (MCEM) algorithm. Our method requires no prior knowledge regarding parameter values, and it automatically provides a multivariate parameter uncertainty estimate. We applied the method to five stochastic systems of increasing complexity, progressing from an analytically tractable pure-birth model to a computationally demanding model of yeast polarization. Our results demonstrate that MCEM2 substantially accelerates MLE computation on all tested models when compared to a stand-alone version of MCEM. Additionally, we show how our method identifies parameter values for certain classes of models more accurately than two recently proposed computationally efficient methods.
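For intuition, the analytically tractable end of the spectrum can be shown directly: for a constant-rate birth process observed as exponential waiting times, the MLE has the closed form n divided by total observed time. This is a minimal sketch (the simulated data and rate value are illustrative assumptions), not the MCEM2 algorithm itself:

```python
import random

random.seed(0)

def mle_rate(waits):
    """Closed-form MLE for an exponential rate: lambda_hat = n / sum(t_i)."""
    return len(waits) / sum(waits)

# Simulated waiting times of a constant-rate birth process (true rate = 2.0)
waits = [random.expovariate(2.0) for _ in range(5000)]
lam_hat = mle_rate(waits)
```

With 5000 observations the estimate lands close to the true rate; for models without such closed forms, simulation-based methods like MCEM are needed instead.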
Maximum likelihood molecular clock comb: analytic solutions.
Chor, Benny; Khetan, Amit; Snir, Sagi
2006-04-01
Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM), are used in order to find optimal parameters for a given tree. So far, analytic solutions were derived only for the simplest model--three taxa, two state characters, under a molecular clock. Four taxa rooted trees have two topologies--the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). In a previous work, we devised a closed form analytic solution for the ML molecular clock fork. In this work, we extend the state of the art in the area of analytic solutions ML trees to the family of all four taxa trees under the molecular clock assumption. The change from the fork topology to the comb incurs a major increase in the complexity of the underlying algebraic system and requires novel techniques and approaches. We combine the ultrametric properties of molecular clock trees with the Hadamard conjugation to derive a number of topology dependent identities. Employing these identities, we substantially simplify the system of polynomial equations. We finally use tools from algebraic geometry (e.g., Gröbner bases, ideal saturation, resultants) and employ symbolic algebra software to obtain analytic solutions for the comb. We show that in contrast to the fork, the comb has no closed form solutions (expressed by radicals in the input data). In general, four taxa trees can have multiple ML points. In contrast, we can now prove that under the molecular clock assumption, the comb has a unique (local and global) ML point. (Such uniqueness was previously shown for the fork.).
A Simulated Annealing Algorithm for Maximum Common Edge Subgraph Detection in Biological Networks
Larsen, Simon; Alkærsig, Frederik G.; Ditzel, Henrik
2016-01-01
introduce a heuristic algorithm for the multiple maximum common edge subgraph problem that is able to detect large common substructures shared across multiple, real-world size networks efficiently. Our algorithm uses a combination of iterated local search, simulated annealing and a pheromone...
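A minimal sketch of the simulated-annealing ingredient: anneal over vertex mappings between two graphs, scoring a mapping by the number of common edges it induces. The toy 5-cycle instance, move set (vertex swaps), and cooling schedule are illustrative assumptions; the paper's algorithm additionally combines iterated local search and pheromone-based guidance:

```python
import math
import random

random.seed(1)

def common_edges(perm, edges_a, edges_b):
    """Edges of graph A that land on edges of B under the vertex map `perm`."""
    bset = {frozenset(e) for e in edges_b}
    return sum(frozenset((perm[u], perm[v])) in bset for u, v in edges_a)

def anneal(edges_a, edges_b, n, steps=20000, t0=2.0):
    """Maximize common edges by annealing over vertex permutations."""
    perm = list(range(n))
    cur = best = common_edges(perm, edges_a, edges_b)
    best_perm = perm[:]
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9          # linear cooling
        i, j = random.sample(range(n), 2)
        perm[i], perm[j] = perm[j], perm[i]      # propose a swap
        new = common_edges(perm, edges_a, edges_b)
        if new >= cur or random.random() < math.exp((new - cur) / t):
            cur = new
            if cur > best:
                best, best_perm = cur, perm[:]
        else:
            perm[i], perm[j] = perm[j], perm[i]  # undo rejected move
    return best, best_perm

# A 5-cycle and the same cycle with relabelled vertices (so 5 edges can match)
edges_a = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
relabel = [2, 4, 1, 3, 0]
edges_b = [(relabel[u], relabel[v]) for u, v in edges_a]
best, _ = anneal(edges_a, edges_b, 5)
```

On this toy instance the annealer recovers a mapping matching all five edges; the occasional acceptance of worsening moves is what lets it escape local optima.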
Klein, Andreas G.; Muthen, Bengt O.
2007-01-01
In this article, a nonlinear structural equation model is introduced and a quasi-maximum likelihood method for simultaneous estimation and testing of multiple nonlinear effects is developed. The focus of the new methodology lies on efficiency, robustness, and computational practicability. Monte-Carlo studies indicate that the method is highly…
Y. Labbi
2015-08-01
Photovoltaic electricity is seen as an important source of renewable energy. The photovoltaic array is an unstable source of power, since the peak power point depends on the temperature and the irradiation level. Maximum power point tracking is then necessary for maximum efficiency. In this work, Particle Swarm Optimization (PSO) is proposed as a maximum power point tracker for a photovoltaic panel, used to locate the optimal MPP so that the solar panel's maximum power is generated under different operating conditions. A photovoltaic system including a solar panel and a PSO MPP tracker is modelled and simulated; the simulations show the effectiveness of PSO in extracting more energy and in responding quickly to changes in working conditions.
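The PSO-based tracking idea can be sketched on a toy single-peak power curve (the curve, swarm size, and coefficients below are illustrative assumptions, not the paper's tuned implementation):

```python
import random

random.seed(42)

def pv_power(v):
    """Toy single-peak power curve standing in for a real P-V characteristic."""
    return 100.0 - (v - 17.0) ** 2

def pso_mpp(f, lo, hi, n=20, iters=100, w=0.6, c1=1.5, c2=1.5):
    """Standard global-best PSO maximizing f on [lo, hi]."""
    pos = [random.uniform(lo, hi) for _ in range(n)]
    vel = [0.0] * n
    pbest, pval = pos[:], [f(x) for x in pos]
    g = max(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g], pval[g]
    for _ in range(iters):
        for i in range(n):
            vel[i] = (w * vel[i]
                      + c1 * random.random() * (pbest[i] - pos[i])
                      + c2 * random.random() * (gbest - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))  # clamp to valid voltages
            val = f(pos[i])
            if val > pval[i]:
                pbest[i], pval[i] = pos[i], val
                if val > gval:
                    gbest, gval = pos[i], val
    return gbest, gval

v_mpp, p_mpp = pso_mpp(pv_power, 0.0, 30.0)
```

The swarm converges on the voltage of maximum power; in a real tracker, f would be replaced by a measurement of the panel's output at the candidate operating point.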
Danieli, Matteo; Forchhammer, Søren; Andersen, Jakob Dahl
2010-01-01
log-likelihood ratios (LLR) in order to combine information sent across different transmissions due to requests. To mitigate the effects of ever-increasing data rates that call for larger HARQ memory, vector quantization (VQ) is investigated as a technique for temporary compression of LLRs on the terminal. A capacity...
The History and Perspectives of Efficiency at Maximum Power of the Carnot Engine
Michel Feidt
2017-07-01
Finite Time Thermodynamics is generally associated with the Curzon–Ahlborn approach to the Carnot cycle. Recently, previous publications on the subject were discovered, which prove that the history of Finite Time Thermodynamics started more than sixty years before even the work of Chambadal and Novikov (1957). The paper proposes a careful examination of the similarities and differences between these pioneering works and the consequences they had on the works that followed. The modelling of the Carnot engine was carried out in three steps, namely (1) modelling with time durations of the isothermal processes, as done by Curzon and Ahlborn; (2) modelling at a steady-state operation regime for which the time does not appear explicitly; and (3) modelling of transient conditions, which requires the time to appear explicitly. Whatever the method of modelling used, the subsequent optimization appears to be related to specific physical dimensions. The main goal of the methodology is to choose the objective function, which here is the power, and to define the associated constraints. We propose a specific approach, focusing on the main functions that respond to engineering requirements. The study of the Carnot engine illustrates the synthesis carried out and proves that the primary interest for an engineer is mainly connected to what we called Finite (physical) Dimensions Optimal Thermodynamics, including time in the case of transient modelling.
Combined design of recurve actuators and drive electronics for maximum energy efficiency
Seresta, Omprakash; Ragon, Scott A.; Zhu, Huiyu; Gurdal, Zafer; Lindner, Douglas K.
2004-07-01
Smart structures typically consist of many interacting components, which result in a closed loop formed by an actuator, structure, sensors, controller, and drive circuit components. Despite the recognition of component interactions, much of the traditional design approach for such systems is highly compartmentalized and sequential. The primary objective of the present work is to develop a basic understanding of the energy flow and dynamic interaction between the electrical and mechanical subsystems of smart actuators. When operating from portable power sources, a crucial factor in determining the performance of such a smart system is the battery capacity required for the actuator to operate through a given time span, along with its lifetime. The real and reactive power in such a system determine the battery life and size separately. While the real power is dissipated only in the drive circuit, the reactive power of the circuit and the actuator cannot be calculated individually, which is where the interaction arises. A multi-objective optimization problem, which combines the real and reactive power with different weights, will result in a better balanced solution than optimizing either one of them separately. A genetic algorithm is applied for discrete component selection to generate more realistic designs. The optimization results are illustrated in the paper, as well as their relationship with the multi-objective weightings.
Ahmad Rafidi M.A.
2014-01-01
The synchronization of traffic light systems is one of the best solutions in order to avoid problematic traffic jams. Traffic timing is a major concern when it comes to traffic management. One of the common causes of traffic jams is because of nonsynchronized traffic light systems. Once a light turns green, traffic begins to move, but by the time the moving traffic reaches the next light, the signal is still red. This will disrupt the continuity of the traffic flow, especially for large main roads. The smooth flow of traffic on main routes is important to clear dense traffic in a given time. This study examined the density of vehicles on Jalan Bukit Gambier and also the traffic timing was documented in order to plan out proper re-timing for traffic lights along the studied road. The outcomes of this study support the hypothesis that retiming traffic lights to create a synchronized traffic light system for main roads will greatly improve traffic flow.
What is the maximum efficiency with which photosynthesis can convert solar energy into biomass?
Current photosynthesis is directly or indirectly the source of all of our food and fiber and is increasingly looked on as a potential source of renewable fuels. Increasing world population, improving economic status of portions of the developing world, and limited scope for recruitment of additional...
Determination of the Maximum Aerodynamic Efficiency of Wind Turbine Rotors with Winglets
Gaunaa, Mac; Johansen, Jeppe [Senior Scientists, Risoe National Laboratory, Roskilde, DK-4000 (Denmark)
2007-07-15
The present work contains theoretical considerations and computational results on the nature of using winglets on wind turbines. The theoretical results presented show that the power augmentation obtainable with winglets is due to a reduction of tip-effects, and is not, as believed up to now, caused by the downwind vorticity shift due to downwind winglets. The numerical work includes optimization of the power coefficient for a given tip speed ratio and geometry of the span using a newly developed free wake lifting line code, which also takes into account viscous effects and self-induced forces. Validation of the new code with CFD results for a rotor without winglets showed very good agreement. Results from the new code with winglets indicate that downwind winglets are superior to upwind ones with respect to optimization of Cp, and that the increase in power production is less than what may be obtained by a simple extension of the wing in the radial direction. The computations also show that shorter downwind winglets (>2%) come close to the increase in Cp obtained by a radial extension of the wing. Lastly, the results from the code are used to design a rotor with a 2% downwind winglet, which is computed using the Navier-Stokes solver EllipSys3D. These computations show that further work is needed to validate the FWLL code for cases where the rotor is equipped with winglets.
Finding optimum airfoil shape to get maximum aerodynamic efficiency for a wind turbine
Sogukpinar, Haci; Bozkurt, Ismail
2017-02-01
In this study, the aerodynamic performance of the S-series wind turbine airfoil S 825 is investigated to find the optimum angle of attack. The aerodynamic performance calculations are carried out with a Computational Fluid Dynamics (CFD) method using a finite-volume approximation of the Reynolds-Averaged Navier-Stokes (RANS) equations. The lift and pressure coefficients and the lift-to-drag ratio of airfoil S 825 are analyzed with the SST turbulence model, and the obtained results are cross-checked against wind tunnel data to verify the precision of the CFD approximation. The comparison indicates that the SST turbulence model used in this study can predict the aerodynamic properties of the wind blade.
Rahat, Alma A M; Everson, Richard M; Fieldsend, Jonathan E
2015-01-01
Mesh network topologies are becoming increasingly popular in battery-powered wireless sensor networks, primarily because of the extension of network range. However, multihop mesh networks suffer from higher energy costs, and the routing strategy employed directly affects the lifetime of nodes with limited energy resources. Hence when planning routes there are trade-offs to be considered between individual and system-wide battery lifetimes. We present a multiobjective routing optimisation approach using hybrid evolutionary algorithms to approximate the optimal trade-off between the minimum lifetime and the average lifetime of nodes in the network. In order to accomplish this combinatorial optimisation rapidly, our approach prunes the search space using k-shortest path pruning and a graph reduction method that finds candidate routes promoting long minimum lifetimes. When arbitrarily many routes from a node to the base station are permitted, optimal routes may be found as the solution to a well-known linear program. We present an evolutionary algorithm that finds good routes when each node is allowed only a small number of paths to the base station. On a real network deployed in the Victoria & Albert Museum, London, these solutions, using only three paths per node, are able to achieve minimum lifetimes of over 99% of the optimum linear program solution's time to first sensor battery failure.
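The minimum-lifetime objective being optimised can be illustrated with a toy computation (the node names, battery capacities, and unit packet costs are assumptions): each chosen route drains every relaying node, and the network's lifetime is set by the most heavily loaded node.

```python
from collections import Counter

def min_lifetime(routes, battery, cost_per_packet=1.0):
    """Network lifetime = earliest battery exhaustion over all loaded nodes."""
    load = Counter()
    for route in routes:
        for node in route[:-1]:   # final hop (base station) is mains-powered
            load[node] += cost_per_packet
    return min(battery[n] / load[n] for n in load)

battery = {'s1': 100.0, 's2': 100.0, 'r': 60.0}
via_relay = [['s1', 'r', 'B'], ['s2', 'r', 'B']]   # both sensors relay via 'r'
direct = [['s1', 'B'], ['s2', 'B']]                # both send directly to base
```

Here relaying through 'r' exhausts it after 30 time units, while direct routes last 100; an optimiser trades such minimum-lifetime gains against average energy cost across many nodes and candidate routes.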
The Prediction of Maximum Amplitudes of Solar Cycles and the Maximum Amplitude of Solar Cycle 24
Anonymous
2002-01-01
We present a brief review of predictions of solar cycle maximum amplitude with a lead time of 2 years or more. It is pointed out that a precise prediction of the maximum amplitude with such a lead time is still an open question despite progress made since the 1960s. A method of prediction using statistical characteristics of solar cycles is developed: the solar cycles are divided into two groups, a high rising velocity (HRV) group and a low rising velocity (LRV) group, depending on the rising velocity in the ascending phase for a given duration of the ascending phase. The amplitude of Solar Cycle 24 can be predicted after the start of the cycle using the formula derived in this paper. Now, about 5 years before the start of the cycle, we can make a preliminary prediction of 83.2-119.4 for its maximum amplitude.
Global Maximum Power Point Tracking of Photovoltaic Array under Partial Shaded Conditions
G.Shobana, P. Sornadeepika, Dr. R. Ramaprabha
2013-07-01
The efficiency of a PV module can be improved by operating it at its peak power point, so that the maximum power can be delivered to the load under varying environmental conditions. This paper focuses on maximum power point tracking of a solar photovoltaic (PV) array under non-uniform insolation conditions. A maximum power point tracker (MPPT) is used for extracting the maximum power from the solar PV module and transferring that power to the load. Tracking the maximum power point (MPP) becomes difficult when the array receives non-uniform insolation: cells under shade absorb a large amount of the electric power generated by cells receiving high insolation and convert it into heat, which may damage the low-illuminated cells. To relieve the stress on shaded cells, bypass diodes are added across the modules. In such a case, multiple peaks in the voltage-power characteristic are observed. Classical MPPT methods are then ineffective because of their inability to discriminate between local and global maxima. In this paper, a global MPPT algorithm is proposed to track the global maximum power point of a PV array under partially shaded conditions.
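The local-versus-global failure mode under partial shading can be sketched with a toy two-peak curve: a classical hill-climbing tracker started near the wrong peak stalls there, while a coarse global sweep followed by local refinement recovers the true MPP. The curve shape and scan parameters are illustrative assumptions, not the paper's algorithm:

```python
import math

def multi_peak_power(v):
    """Toy two-peak P-V curve mimicking partial shading (global peak near v = 24)."""
    return 60 * math.exp(-((v - 8) / 4) ** 2) + 90 * math.exp(-((v - 24) / 3) ** 2)

def hill_climb(f, v0, step=0.1, lo=0.0, hi=30.0):
    """Classical local tracker: move while a neighbouring point yields more power."""
    v = v0
    while True:
        up, dn = min(hi, v + step), max(lo, v - step)
        best = max((f(up), up), (f(dn), dn), (f(v), v))
        if best[1] == v:
            return v, f(v)
        v = best[1]

def global_scan(f, lo=0.0, hi=30.0, coarse=1.0):
    """Coarse sweep to bracket the global peak, then local refinement."""
    v0 = max((f(lo + i * coarse), lo + i * coarse)
             for i in range(int((hi - lo) / coarse) + 1))[1]
    return hill_climb(f, v0)

v_local, p_local = hill_climb(multi_peak_power, 5.0)  # stalls on the local peak
v_glob, p_glob = global_scan(multi_peak_power)
```

The local tracker stops near v = 8 at roughly two-thirds of the available power, while the global scan reaches the true peak near v = 24.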
Atik, L.; Petit, P.; Sawicki, J. P.; Ternifi, Z. T.; Bachir, G.; Della, M.; Aillerie, M.
2017-02-01
Solar panels have a nonlinear voltage-current characteristic, with a distinct maximum power point (MPP), which depends on environmental factors such as temperature and irradiation. In order to continuously harvest maximum power from the solar panels, they have to operate at their MPP despite the inevitable changes in the environment. Various methods for maximum power point tracking (MPPT) were developed and finally implemented in solar power electronic controllers to increase the efficiency of the electricity production originating from renewables. In this paper we compare, using the Matlab Simulink toolset, two different MPP tracking methods, fuzzy logic control (FL) and sliding mode control (SMC), considering their efficiency in solar energy production.
Maximum Spin of Black Holes Driving Jets
Benson, Andrew J
2009-01-01
Unbounded outflows in the form of highly collimated jets and broad winds appear to be a ubiquitous feature of accreting black hole systems. The most powerful jets are thought to derive a significant fraction, if not the majority, of their power from the rotational energy of the black hole. Whatever the precise mechanism that causes them, these jets must therefore exert a braking torque on the black hole. We calculate the spin-up function for an accreting black hole, accounting for this braking torque. We find that the predicted black hole spin-up function depends only on the black hole spin and dimensionless parameters describing the accretion flow. Using recent relativistic magnetohydrodynamical numerical simulation results to calibrate the efficiency of angular momentum transfer in the flow, we find that an ADAF flow will spin a black hole up (or down) to an equilibrium value of about 96% of the maximal spin value in the absence of jets. Combining our ADAF system with a simple model for jet power, we demons...
Eliyahu, I., E-mail: ilan.eliyahu@gmail.com [Ben Gurion University of the Negev, Beersheva 84105 (Israel); Soreq Nuclear Research Center, Yavne 81800 (Israel); Horowitz, Y.S. [Ben Gurion University of the Negev, Beersheva 84105 (Israel); Oster, L. [Sami Shamoon College of Engineering, Beersheva 84100 (Israel); Weissman, L.; Kreisel, A. [Soreq Nuclear Research Center, Yavne 81800 (Israel); Girshevitz, O. [Bar Ilan University, Ramat Gan 5290002, Israel. (Israel); Marino, S. [Radiological Research Accelerator Facility, Irvington, New York (United States); Druzhyna, S. [Ben Gurion University of the Negev, Beersheva 84105 (Israel); Biderman, S. [Nuclear Research Center Negev, Beersheva (Israel); Mardor, I. [Soreq Nuclear Research Center, Yavne 81800 (Israel)
2015-04-15
A major objective of track structure theory (TST) is the calculation of heavy charged particle (HCP) induced effects. Previous calculations have been based exclusively on the radiation action/dose response of the released secondary electrons during the HCP slowing down. The validity of this presumption is investigated herein using optical absorption (OA) measurements on LiF:Mg,Ti (TLD-100) samples following irradiation with 1.4 MeV protons and 4 MeV He ions at levels of fluence from 10^10 cm^-2 to 2 × 10^14 cm^-2. The major bands in the OA spectrum are the 5.08 eV (F band), 4.77 eV, 5.45 eV and the 4.0 eV band (associated with the trapping structure leading to composite peak 5 in the thermoluminescence (TL) glow curve). The maximum intensity of composite peak 5 occurs at a temperature of ~200 °C in the glow curve and it is the glow peak used for most dosimetric applications. The TST calculations use experimentally measured OA dose response following low ionization density (LID) 60Co photon irradiation over the dose range 10-10^5 Gy for the simulation of the radiation action of the HCP-induced secondary electron spectrum. Following proton and He irradiation, the saturation levels of concentration for the F band and the 4.77 eV band are approximately one order of magnitude greater than following LID irradiation, indicating enhanced HCP creation of the relevant defects. Relative HCP OA efficiencies, η_HCP, are calculated by TST and are compared with experimentally measured values, η_m, at levels of fluence from 10^10 cm^-2 to 10^11 cm^-2, where the response is linear due to negligible track overlap. For the F band, values of η_m/η_HCP = 2.0 and 2.6 for the He ions and protons, respectively, arise from the neglect of enhanced fluorine vacancy/F center creation by the HCPs in the TST calculations. It is demonstrated that kinetic analysis simulating LID F band dose response with enhanced
Pattern formation, logistics, and maximum path probability
Kirkaldy, J. S.
1985-05-01
The concept of pattern formation, which to current researchers is a synonym for self-organization, carries the connotation of deductive logic together with the process of spontaneous inference. Defining a pattern as an equivalence relation on a set of thermodynamic objects, we establish that a large class of irreversible pattern-forming systems, evolving along idealized quasisteady paths, approaches the stable steady state as a mapping upon the formal deductive imperatives of a propositional function calculus. In the preamble the classical reversible thermodynamics of composite systems is analyzed as an externally manipulated system of space partitioning and classification based on ideal enclosures and diaphragms. The diaphragms have discrete classification capabilities which are designated in relation to conserved quantities by descriptors such as impervious, diathermal, and adiabatic. Differentiability in the continuum thermodynamic calculus is invoked as equivalent to analyticity and consistency in the underlying class or sentential calculus. The seat of inference, however, rests with the thermodynamicist. In the transition to an irreversible pattern-forming system the defined nature of the composite reservoirs remains, but a given diaphragm is replaced by a pattern-forming system which by its nature is a spontaneously evolving volume partitioner and classifier of invariants. The seat of volition or inference for the classification system is thus transferred from the experimenter or theoretician to the diaphragm, and with it the full deductive facility. The equivalence relations or partitions associated with the emerging patterns may thus be associated with theorems of the natural pattern-forming calculus. The entropy function, together with its derivatives, is the vehicle which relates the logistics of reservoirs and diaphragms to the analog logistics of the continuum. Maximum path probability or second-order differentiability of the entropy in isolation are
Energy efficiency standards and innovation
Morrison, Geoff
2015-01-01
Van Buskirk et al (2014 Environ. Res. Lett. 9 114010) demonstrate that the purchase price, lifecycle cost and price of improving efficiency (i.e. the incremental price of efficiency gain) decline at an accelerated rate following the adoption of the first energy efficiency standards for five consumer products. The authors show these trends using an experience curve framework (i.e. price/cost versus cumulative production). While the paper does not draw a causal link between standards and declining prices, they provide suggestive evidence using markets in the US and Europe. Below, I discuss the potential implications of the work.
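The experience-curve framework referred to above fits price against cumulative production as a power law, price = a·Q^(-b); the learning rate is then the fractional price drop per doubling of production. A minimal sketch on synthetic data (the 20% learning rate and price series are assumed for illustration):

```python
import math

def fit_experience_curve(cum_prod, price):
    """OLS fit of log(price) = log(a) - b*log(Q); returns a, b, learning rate."""
    lx = [math.log(q) for q in cum_prod]
    ly = [math.log(p) for p in price]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
             / sum((x - mx) ** 2 for x in lx))
    a, b = math.exp(my - slope * mx), -slope
    return a, b, 1 - 2 ** (-b)   # learning rate: price drop per doubling

# Synthetic series: price falls 20% with every doubling of cumulative production
Q = [1, 2, 4, 8, 16]
P = [100 * 0.8 ** math.log2(q) for q in Q]
a, b, lr = fit_experience_curve(Q, P)
```

On real market data the fit is noisy, and the cited work's observation is that the fitted decline steepens after standards are introduced.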
Cardiorespiratory Fitness of Inmates of a Maximum Security Prison ...
Maximum Security Prison; and also to determine the effects of age, gender, and period of incarceration on CRF. A total of 247 apparently healthy inmates of Maiduguri Maximum Security ... with different types of cardiovascular and metabolic.
Maximum likelihood polynomial regression for robust speech recognition
LU Yong; WU Zhenyang
2011-01-01
The linear hypothesis is the main disadvantage of maximum likelihood linear regression (MLLR). This paper applies the polynomial regression method to model adaptation and establishes a nonlinear model adaptation algorithm using maximum likelihood polynomial regression.
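Under Gaussian noise, the maximum likelihood polynomial fit coincides with ordinary least squares, so the core idea can be sketched by solving the normal equations directly (the toy data and degree are illustrative; this is not the paper's model-adaptation algorithm):

```python
def polyfit_ml(xs, ys, degree):
    """Least-squares polynomial fit; equals the ML fit under Gaussian noise."""
    m = degree + 1
    # Normal equations: (X^T X) c = X^T y
    A = [[sum(x ** (i + j) for x in xs) for j in range(m)] for i in range(m)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(m)]
    # Gaussian elimination with partial pivoting
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution on the upper-triangular system
    coef = [0.0] * m
    for r in range(m - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, m))) / A[r][r]
    return coef

xs = [-2, -1, 0, 1, 2, 3]
ys = [1 + 2 * x + 3 * x * x for x in xs]   # exact quadratic, coefficients [1, 2, 3]
coef = polyfit_ml(xs, ys, 2)
```

In the adaptation setting, the same machinery fits a polynomial transform of model parameters rather than a curve through raw data points.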
M. Mihelich
2014-11-01
We derive rigorous results on the link between the principle of maximum entropy production and the principle of maximum Kolmogorov–Sinai entropy using a Markov model of passive scalar diffusion called the Zero Range Process. We show analytically that both the entropy production and the Kolmogorov–Sinai entropy, seen as functions of f, admit a unique maximum, denoted fmaxEP and fmaxKS. The behavior of these two maxima is explored as a function of the system disequilibrium and the system resolution N. The main result of this article is that fmaxEP and fmaxKS have the same Taylor expansion at first order in the deviation from equilibrium. We find that fmaxEP hardly depends on N, whereas fmaxKS depends strongly on N. In particular, for a fixed difference of potential between the reservoirs, fmaxEP(N) tends towards a non-zero value, while fmaxKS(N) tends to 0 when N goes to infinity. For values of N typical of those adopted by Paltridge and climatologists (N ≈ 10-100), we show that fmaxEP and fmaxKS coincide even far from equilibrium. Finally, we show that one can find an optimal resolution N* such that fmaxEP and fmaxKS coincide, at least up to a second-order parameter proportional to the non-equilibrium fluxes imposed at the boundaries. We find that the optimal resolution N* depends on the non-equilibrium fluxes, so that deeper convection should be represented on finer grids. This result points to the inadequacy of using a single grid for representing convection in climate and weather models. Moreover, the application of this principle to passive scalar transport parametrization is therefore expected to provide both the value of the optimal flux and the optimal number of degrees of freedom (resolution) to describe the system.
20 CFR 617.14 - Maximum amount of TRA.
2010-04-01
... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Maximum amount of TRA. 617.14 Section 617.14... FOR WORKERS UNDER THE TRADE ACT OF 1974 Trade Readjustment Allowances (TRA) § 617.14 Maximum amount of TRA. (a) General rule. Except as provided under paragraph (b) of this section, the maximum amount of...
40 CFR 94.107 - Determination of maximum test speed.
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Determination of maximum test speed... Determination of maximum test speed. (a) Overview. This section specifies how to determine maximum test speed... specified in 40 CFR 1065.510. These data points form the lug curve. It is not necessary to generate the...
14 CFR 25.1505 - Maximum operating limit speed.
2010-01-01
... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Maximum operating limit speed. 25.1505... Operating Limitations § 25.1505 Maximum operating limit speed. The maximum operating limit speed (V MO/M MO airspeed or Mach Number, whichever is critical at a particular altitude) is a speed that may not...
Maximum Performance Tests in Children with Developmental Spastic Dysarthria.
Wit, J.; And Others
1993-01-01
Three Maximum Performance Tasks (Maximum Sound Prolongation, Fundamental Frequency Range, and Maximum Repetition Rate) were administered to 11 children (ages 6-11) with spastic dysarthria resulting from cerebral palsy and 11 controls. Despite intrasubject and intersubject variability in normal and pathological speakers, the tasks were found to be…
Maximum physical capacity testing in cancer patients undergoing chemotherapy
Knutsen, L.; Quist, M.; Midtgaard, J.
2006-01-01
BACKGROUND: Over the past few years there has been a growing interest in the field of physical exercise in rehabilitation of cancer patients, leading to requirements for objective maximum physical capacity measurement (maximum oxygen uptake (VO(2max)) and one-repetition maximum (1RM)) to determine...
Millisecond Pulsar Ages: Implications of Binary Evolution and a Maximum Spin Frequency
Kiziltan, Bulent
2009-01-01
In the absence of constraints from the binary companion or supernova remnant, the standard method for estimating pulsar ages is to infer an age from the rate of spin-down. While the generic spin-down age may give realistic estimates for normal pulsars, it can fail for pulsars with very short periods. Details of the spin-up process during the low mass X-ray binary phase pose additional constraints on the period (P) and spin-down rates (Pdot) that may consequently affect the age estimate. Here, we propose a new recipe to estimate millisecond pulsar (MSP) ages that parametrically incorporates constraints arising from binary evolution and limiting physics. We show that the standard method can be improved by this approach to achieve age estimates closer to the true age whilst the standard spin-down age may over- or under-estimate the age of the pulsar by more than a factor of ~10 in the millisecond regime. We use this approach to analyze the population on a broader scale. For instance, in order to understand the d...
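The standard spin-down age the abstract starts from can be sketched in a few lines: the characteristic age τ = P / (2 Ṗ), which assumes magnetic-dipole braking (braking index 3) and a birth period much shorter than the current one. The period and period derivative below are illustrative values, not from the paper:

```python
# Characteristic (spin-down) age of a pulsar: tau = P / (2 * Pdot).
# Assumes magnetic-dipole braking (index n = 3) and birth period << P,
# which is exactly the assumption that fails for recycled millisecond pulsars.
SECONDS_PER_YEAR = 3.156e7  # approximate

def spin_down_age_years(period_s: float, period_derivative: float) -> float:
    """Characteristic age in years, for period P (s) and Pdot (s/s)."""
    return period_s / (2.0 * period_derivative) / SECONDS_PER_YEAR

# Hypothetical millisecond pulsar: P = 5 ms, Pdot = 1e-20 s/s.
# The result (~8 Gyr) can exceed plausible true ages, the kind of
# overestimate the proposed recipe is meant to correct.
tau = spin_down_age_years(5e-3, 1e-20)
```

This makes concrete why the method "can fail for pulsars with very short periods": the formula ignores the spin-up history entirely.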
Schwickerath, Ulrich; Silva, Ricardo; Uria, Christian, E-mail: Ulrich.Schwickerath@cern.c, E-mail: Ricardo.Silva@cern.c [CERN IT, 1211 Geneve 23 (Switzerland)
2010-04-01
A frequent source of concern for resource providers is the efficient use of computing resources in their centers. This has a direct impact on requests for new resources. There are two different but strongly correlated aspects to be considered: while users are mostly interested in a good turn-around time for their jobs, resource providers are mostly interested in a high and efficient usage of their available resources. Both the box usage and the efficiency of individual user jobs need to be closely monitored so that the sources of inefficiency can be identified; at CERN, the Lemon monitoring system is used for both purposes. Examples of such sources are poorly written user code, inefficient access to mass storage systems, and dedication of resources to specific user groups. As a first step towards improvements, CERN has launched a project to develop a scheduler add-on that allows careful overloading of worker nodes that run idle jobs.
Feedback Limits to Maximum Seed Masses of Black Holes
Pacucci, Fabio; Natarajan, Priyamvada; Ferrara, Andrea
2017-02-01
The most massive black holes observed in the universe weigh up to ∼1010 M ⊙, nearly independent of redshift. Reaching these final masses likely required copious accretion and several major mergers. Employing a dynamical approach that rests on the role played by a new, relevant physical scale—the transition radius—we provide a theoretical calculation of the maximum mass achievable by a black hole seed that forms in an isolated halo, one that scarcely merged. Incorporating effects at the transition radius and their impact on the evolution of accretion in isolated halos, we are able to obtain new limits for permitted growth. We find that large black hole seeds (M • ≳ 104 M ⊙) hosted in small isolated halos (M h ≲ 109 M ⊙) accreting with relatively small radiative efficiencies (ɛ ≲ 0.1) grow optimally in these circumstances. Moreover, we show that the standard M •–σ relation observed at z ∼ 0 cannot be established in isolated halos at high-z, but requires the occurrence of mergers. Since the average limiting mass of black holes formed at z ≳ 10 is in the range 104–6 M ⊙, we expect to observe them in local galaxies as intermediate-mass black holes, when hosted in the rare halos that experienced only minor or no merging events. Such ancient black holes, formed in isolation with subsequent scant growth, could survive, almost unchanged, until present.
Constructing Maximum Entropy Language Models for Movie Review Subjectivity Analysis
Bo Chen; Hui He; Jun Guo
2008-01-01
Document subjectivity analysis has become an important aspect of web text content mining. This problem is similar to traditional text categorization, so many related classification techniques can be adapted here. However, there is one significant difference: more language or semantic information is required for better estimating the subjectivity of a document. Therefore, in this paper, our focus is mainly on two aspects. One is how to extract useful and meaningful language features, and the other is how to construct appropriate language models efficiently for this special task. For the first issue, we apply a Global-Filtering and Local-Weighting strategy to select and evaluate language features in a series of n-grams with different orders and within various distance-windows. For the second issue, we adopt Maximum Entropy (MaxEnt) modeling methods to construct our language model framework. Besides the classical MaxEnt models, we have also constructed two kinds of improved models with Gaussian and exponential priors, respectively. Detailed experiments given in this paper show that with well selected and weighted language features, MaxEnt models with exponential priors are significantly more suitable for the text subjectivity analysis task.
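A MaxEnt model with a Gaussian prior, as mentioned above, is equivalent to L2-regularized logistic regression. A minimal sketch, with toy features and data standing in for the paper's n-gram features (the feature design, learning rate, and prior variance here are all illustrative assumptions):

```python
import math

# Minimal binary maximum-entropy classifier with a Gaussian prior:
# maximize log-likelihood minus ||w||^2 / (2 * sigma2) by gradient ascent.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_maxent(xs, ys, sigma2=10.0, lr=0.1, epochs=500):
    w = [0.0] * len(xs[0])
    for _ in range(epochs):
        grad = [-wj / sigma2 for wj in w]          # Gaussian prior term
        for x, y in zip(xs, ys):
            err = y - sigmoid(sum(wj * xj for wj, xj in zip(w, x)))
            for j, xj in enumerate(x):
                grad[j] += err * xj                # log-likelihood term
        w = [wj + lr * gj for wj, gj in zip(w, grad)]
    return w

# Toy "subjectivity" data: feature 0 = count of opinion words, feature 1 = bias.
xs = [[3, 1], [2, 1], [0, 1], [1, 1]]
ys = [1, 1, 0, 0]                    # 1 = subjective, 0 = objective
w = train_maxent(xs, ys)
p_subjective = sigmoid(w[0] * 3 + w[1])   # document with 3 opinion words
```

An exponential prior, as used in the paper's second improved model, would replace the `-w/sigma2` term with a one-sided penalty; the structure of the training loop is unchanged.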
tmle : An R Package for Targeted Maximum Likelihood Estimation
Susan Gruber
2012-11-01
Targeted maximum likelihood estimation (TMLE) is a general approach for constructing an efficient double-robust semi-parametric substitution estimator of a causal effect parameter or statistical association measure. tmle is a recently developed R package that implements TMLE of the effect of a binary treatment at a single point in time on an outcome of interest, controlling for user-supplied covariates; it provides an additive treatment effect, relative risk, odds ratio, and the controlled direct effect of a binary treatment controlling for a binary intermediate variable on the pathway from treatment to the outcome. Estimation of the parameters of a marginal structural model is also available. The package allows outcome data with missingness, and experimental units that contribute repeated records of the point-treatment data structure, thereby allowing the analysis of longitudinal data structures. Relevant factors of the likelihood may be modeled or fit data-adaptively according to user specifications, or passed in from an external estimation procedure. Effect estimates, variances, p values, and 95% confidence intervals are provided by the software.
Maximum likelihood pedigree reconstruction using integer linear programming.
Cussens, James; Bartlett, Mark; Jones, Elinor M; Sheehan, Nuala A
2013-01-01
Large population biobanks of unrelated individuals have been highly successful in detecting common genetic variants affecting diseases of public health concern. However, they lack the statistical power to detect more modest gene-gene and gene-environment interaction effects or the effects of rare variants for which related individuals are ideally required. In reality, most large population studies will undoubtedly contain sets of undeclared relatives, or pedigrees. Although a crude measure of relatedness might sometimes suffice, having a good estimate of the true pedigree would be much more informative if this could be obtained efficiently. Relatives are more likely to share longer haplotypes around disease susceptibility loci and are hence biologically more informative for rare variants than unrelated cases and controls. Distant relatives are arguably more useful for detecting variants with small effects because they are less likely to share masking environmental effects. Moreover, the identification of relatives enables appropriate adjustments of statistical analyses that typically assume unrelatedness. We propose to exploit an integer linear programming optimisation approach to pedigree learning, which is adapted to find valid pedigrees by imposing appropriate constraints. Our method is not restricted to small pedigrees and is guaranteed to return a maximum likelihood pedigree. With additional constraints, we can also search for multiple high-probability pedigrees and thus account for the inherent uncertainty in any particular pedigree reconstruction. The true pedigree is found very quickly by comparison with other methods when all individuals are observed. Extensions to more complex problems seem feasible.
Berry, Vincent; Nicolas, François
2006-01-01
Given a set of evolutionary trees on the same set of taxa, the maximum agreement subtree problem (MAST), respectively, maximum compatible tree problem (MCT), consists of finding a largest subset of taxa such that all input trees restricted to these taxa are isomorphic, respectively compatible. These problems have several applications in phylogenetics such as the computation of a consensus of phylogenies obtained from different data sets, the identification of species subjected to horizontal gene transfers and, more recently, the inference of supertrees, e.g., Trees Of Life. We provide two linear time algorithms to check the isomorphism, respectively, compatibility, of a set of trees or otherwise identify a conflict between the trees with respect to the relative location of a small subset of taxa. Then, we use these algorithms as subroutines to solve MAST and MCT on rooted or unrooted trees of unbounded degree. More precisely, we give exact fixed-parameter tractable algorithms, whose running time is uniformly polynomial when the number of taxa on which the trees disagree is bounded. This improves on a known result for MAST and proves fixed-parameter tractability for MCT.
Blind Joint Maximum Likelihood Channel Estimation and Data Detection for SIMO Systems
Sheng Chen; Xiao-Chen Yang; Lei Chen; Lajos Hanzo
2007-01-01
A blind adaptive scheme is proposed for joint maximum likelihood (ML) channel estimation and data detection of single-input multiple-output (SIMO) systems. The joint ML optimisation over channel and data is decomposed into an iterative optimisation loop. An efficient global optimisation algorithm called the repeated weighted boosting search is employed at the upper level to optimally identify the unknown SIMO channel model, and the Viterbi algorithm is used at the lower level to produce the maximum likelihood sequence estimation of the unknown data sequence. A simulation example is used to demonstrate the effectiveness of this joint ML optimisation scheme for blind adaptive SIMO systems.
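The lower-level Viterbi step described above can be sketched for a toy case: maximum likelihood sequence estimation of BPSK symbols sent through a known two-tap channel, found by dynamic programming over the previous-symbol state. The channel taps and bit sequence are illustrative assumptions; the paper's upper-level repeated weighted boosting search is not reproduced here.

```python
# Viterbi MLSE for BPSK symbols {-1, +1} over a known 2-tap channel h = [h0, h1].
# State = previous symbol; branch metric = squared error against the received sample.

def viterbi_mlse(received, h):
    symbols = (-1, 1)
    # First sample: assume a zero symbol before the burst starts.
    costs = {s: (received[0] - h[0] * s) ** 2 for s in symbols}
    paths = {s: [s] for s in symbols}
    for r in received[1:]:
        new_costs, new_paths = {}, {}
        for cur in symbols:
            def branch(prev, cur=cur, r=r):
                return costs[prev] + (r - h[0] * cur - h[1] * prev) ** 2
            best_prev = min(symbols, key=branch)
            new_costs[cur] = branch(best_prev)
            new_paths[cur] = paths[best_prev] + [cur]
        costs, paths = new_costs, new_paths
    return paths[min(symbols, key=lambda s: costs[s])]

h = [1.0, 0.5]                              # hypothetical channel taps
bits = [1, -1, -1, 1, 1, -1]                # transmitted BPSK symbols
received = [h[0] * bits[i] + (h[1] * bits[i - 1] if i else 0.0)
            for i in range(len(bits))]      # noiseless channel output
detected = viterbi_mlse(received, h)
```

In the noiseless case the detected sequence matches the transmitted one exactly; with noise, the same recursion returns the sequence of minimum accumulated squared error, which is the ML estimate for Gaussian noise.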
Present and Last Glacial Maximum climates as states of maximum entropy production
Herbert, Corentin; Kageyama, Masa; Dubrulle, Berengere
2011-01-01
The Earth, like other planets with a relatively thick atmosphere, is not locally in radiative equilibrium and the transport of energy by the geophysical fluids (atmosphere and ocean) plays a fundamental role in determining its climate. Using simple energy-balance models, it was suggested a few decades ago that the meridional energy fluxes might follow a thermodynamic Maximum Entropy Production (MEP) principle. In the present study, we assess the MEP hypothesis in the framework of a minimal climate model based solely on a robust radiative scheme and the MEP principle, with no extra assumptions. Specifically, we show that by choosing an adequate radiative exchange formulation, the Net Exchange Formulation, a rigorous derivation of all the physical parameters can be performed. The MEP principle is also extended to surface energy fluxes, in addition to meridional energy fluxes. The climate model presented here is extremely fast, needs very little empirical data and does not rely on ad hoc parameterizations. We in...
Understanding the Role of Reservoir Size on Probable Maximum Precipitation
Woldemichael, A. T.; Hossain, F.
2011-12-01
This study addresses the question: does the surface area of an artificial reservoir matter in the estimation of probable maximum precipitation (PMP) for an impounded basin? The motivation of the study was the notion that the stationarity assumption implicit in PMP for dam design can be undermined in the post-dam era by an enhancement of extreme precipitation patterns due to an artificial reservoir. In addition, the study lays the foundation for use of regional atmospheric models as one way to perform life-cycle assessment for planned or existing dams and to formulate best management practices. The American River Watershed (ARW), with the Folsom dam at the confluence of the American River, was selected as the study region, and the Dec-Jan 1996-97 storm event was selected for the study period. The numerical atmospheric model used for the study was the Regional Atmospheric Modeling System (RAMS). First, RAMS was calibrated and validated with selected station and spatially interpolated precipitation data, and the best combinations of parameterization schemes in RAMS were accordingly selected. Second, to mimic the standard method of PMP estimation by the moisture maximization technique, relative humidity terms in the model were raised to 100% from the ground up to the 500 mb level. The obtained model-based maximum 72-hr precipitation values were named extreme precipitation (EP) as a distinction from the PMPs obtained by the standard methods. Third, six hypothetical reservoir size scenarios, ranging from no dam (all dry) to a reservoir submerging half of the basin, were established to test the influence of reservoir size variation on EP. For the case of the ARW, our study clearly demonstrated that the assumption of stationarity implicit in the traditional estimation of PMP can be rendered invalid in large part due to the very presence of the artificial reservoir. Cloud tracking procedures performed on the basin also give indication of the...
Rudin, A.
1995-05-01
This article is a review of utility policy and public opinion related to energy efficiency. The historical background and the current socioeconomic status are presented. Many fallacies of past utility policies intended to promote conservation are noted, and it is shown that past policies have not been effective, i.e., the cost of electricity has increased. Given the failure of past practices, fourteen recommendations for future practice are set forth.
Maximum Matchings of a Digraph Based on the Largest Geometric Multiplicity
Yunyun Yang
2016-01-01
Matching theory is one of the foremost topics in graph theory. Based on the largest geometric multiplicity, we develop an efficient approach to identify maximum matchings in a digraph. For a given digraph, it has been proved that the number of maximum matched nodes has a close relationship with the largest geometric multiplicity of the transpose of the adjacency matrix. Moreover, through fundamental column transformations, we can obtain the matched nodes and related matching edges. In particular, when a digraph contains a cycle factor, the largest geometric multiplicity is equal to one. In this case, the maximum matching is a perfect matching and each node in the digraph is a matched node. The method is validated by an example.
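Maximum matching of a digraph can also be computed by the classical route: split each node v into an out-copy and an in-copy, turn each edge u->w into a bipartite edge, and run augmenting-path search. This is a sketch of that standard alternative, not the paper's geometric-multiplicity method; the digraph below is a toy example with a cycle factor, matching the perfect-matching case the abstract mentions.

```python
# Maximum matching of a digraph via its bipartite representation:
# edge u -> w becomes an edge between out-copy u+ and in-copy w-.
# Classical augmenting-path (Hungarian-style) algorithm.

def max_matching(adj, n):
    """adj[u] = heads w of edges u->w; returns dict head -> matched tail."""
    match = {}  # in-copy (head node) -> out-copy (tail node)

    def augment(u, seen):
        for w in adj[u]:
            if w in seen:
                continue
            seen.add(w)
            # w is free, or its current tail can be re-routed elsewhere.
            if w not in match or augment(match[w], seen):
                match[w] = u
                return True
        return False

    for u in range(n):
        augment(u, set())
    return match

# Toy digraph containing the cycle factor 0 -> 1 -> 2 -> 0:
# every node is matched, i.e. the matching is perfect.
adj = {0: [1], 1: [2], 2: [0]}
matching = max_matching(adj, 3)
```

For this cycle-factor example the matching covers all three edges of the cycle, consistent with the abstract's claim that a cycle factor yields a perfect matching.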
Maximum-Entropy Method for Evaluating the Slope Stability of Earth Dams
Shuai Wang
2012-10-01
The slope stability is a very important problem in geotechnical engineering. This paper presents an approach for slope reliability analysis based on the maximum-entropy method. The key idea is to implement the maximum entropy principle in estimating the probability density function. The performance function is formulated by the Simplified Bishop's method to estimate the slope failure probability. The maximum-entropy method is used to estimate the probability density function (PDF) of the performance function subject to the moment constraints. A numerical example is calculated and compared to the Monte Carlo simulation (MCS) and the Advanced First Order Second Moment Method (AFOSM). The results show the accuracy and efficiency of the proposed method. The proposed method should be valuable for performing probabilistic analyses.
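The MCS baseline the abstract compares against can be sketched on a toy reliability problem: the failure probability P(g < 0) for a linear performance function g = R - S with independent normal resistance and load, where the exact answer is known in closed form. The distribution parameters are illustrative assumptions; the paper's Simplified Bishop formulation and maximum-entropy PDF fit are not reproduced here.

```python
import math
import random

# Monte Carlo estimate of the failure probability P(g < 0) for the toy
# linear performance function g = R - S, R ~ N(mu_r, sd_r), S ~ N(mu_s, sd_s).

random.seed(42)  # fixed seed for a reproducible estimate

def mcs_failure_probability(mu_r, sd_r, mu_s, sd_s, n=200_000):
    failures = sum(1 for _ in range(n)
                   if random.gauss(mu_r, sd_r) - random.gauss(mu_s, sd_s) < 0)
    return failures / n

pf = mcs_failure_probability(mu_r=3.0, sd_r=0.5, mu_s=2.0, sd_s=0.5)

# Exact value for this linear-normal case: Phi(-beta), with reliability index
# beta = (mu_r - mu_s) / sqrt(sd_r**2 + sd_s**2).
beta = (3.0 - 2.0) / math.sqrt(0.5 ** 2 + 0.5 ** 2)
pf_exact = 0.5 * math.erfc(beta / math.sqrt(2))
```

The appeal of moment-based methods such as the maximum-entropy fit is precisely that they avoid the large sample counts MCS needs for small failure probabilities.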
Extension Sliding Mode Controller for Maximum Power Point Tracking of Hydrogen Fuel Cells
Meng-Hui Wang
2013-01-01
Fuel cells (FCs) are characterized by low pollution, low noise, and high efficiency. However, the voltage-current response of an FC is nonlinear, with the result that there exists just one operating point that maximizes the output power for a particular set of operating conditions. Accordingly, the present study proposes a maximum power point tracking (MPPT) control scheme based on extension theory to stabilize the output of an FC at the point of maximum power. The simulation results confirm the ability of the controller to stabilize the output power at the maximum power point despite sudden changes in the temperature, hydrogen pressure, and membrane water content. Moreover, the transient response time of the proposed controller is shown to be faster than that of existing sliding mode (SM) and extremum seeking (ES) controllers.
Overview of Maximum Power Point Tracking Techniques for Photovoltaic Energy Production Systems
Koutroulis, Eftichios; Blaabjerg, Frede
2015-01-01
A substantial growth of the installed photovoltaic systems capacity has occurred around the world during the last decade, thus enhancing the availability of electric energy in an environmentally friendly way. The maximum power point tracking technique enables maximization of the energy production of photovoltaic sources during stochastically varying solar irradiation and ambient temperature conditions. Thus, the overall efficiency of the photovoltaic energy production system is increased. Numerous techniques have been presented during the last decade for implementing the maximum power point tracking process in a photovoltaic system. This article provides an overview of the operating principles of these techniques, which are suited for either uniform or non-uniform solar irradiation conditions. The operational characteristics and implementation requirements of these maximum power point tracking...
Efficient ICT for efficient smart grids
Smit, Gerardus Johannes Maria
2012-01-01
In this extended abstract the need for efficient and reliable ICT is discussed. Efficiency of ICT not only deals with energy-efficient ICT hardware, but also deals with efficient algorithms, efficient design methods, efficient networking infrastructures, etc. Efficient and reliable ICT is a
ORGANIZATIONAL ASSESSMENT: EFFECTIVENESS VS. EFFICIENCY
Ilona Bartuševičienė
2013-06-01
Purpose – Organizational assessment has always been a key element of the discussion among scientists as well as business people. While managers strive for better performance results, scientists search for the best ways to evaluate an organization. One of the most common ways to assess the performance of an entity is to measure its effectiveness or its efficiency. The two concepts might look synonymous, yet as the findings reveal they have distinct meanings. The purpose of this article is to reveal those differences and explore organizational assessment within the effectiveness and efficiency plane. Design/methodology/approach – Scientific literature analysis, comparative and summarization methods are used to better understand the challenges of the issue. Findings – Effectiveness and efficiency are exclusive performance measures that entities can use to assess their performance. Efficiency is oriented towards successful transformation of inputs into outputs, whereas effectiveness measures how outputs interact with the economic and social environment. Research limitations/implications – In some cases the effectiveness concept is used to reflect the overall performance of an organization, since it is a broader concept than efficiency; it then becomes challenging to isolate the efficiency factor when it is subsumed under effectiveness assessment. Practical implications – The assessment of organizational performance helps companies improve their reporting, assures smoother competition in the global market and creates a sustainable competitive advantage. Originality/Value – The paper reveals that an organization can be assessed from either an effectiveness or an efficiency perspective. An organization striving for excellent performance should be both effective and efficient, yet as the findings reveal, an inefficient but effective organization can still survive, albeit at a high cost. Keywords
Real-world maximum power point tracking simulation of PV system based on Fuzzy Logic control
Othman, Ahmed M.; El-arini, Mahdi M. M.; Ghitas, Ahmed; Fathy, Ahmed
2012-12-01
In recent years, solar energy has become one of the most important alternative sources of electric energy, so it is important to improve the efficiency and reliability of photovoltaic (PV) systems. Maximum power point tracking (MPPT) plays an important role in photovoltaic power systems because it maximizes the power output from a PV system for a given set of conditions, and therefore maximizes the array efficiency. This paper presents a maximum power point tracker (MPPT) using Fuzzy Logic theory for a PV system. The work is focused on the well-known Perturb and Observe (P&O) algorithm, which is compared to a designed fuzzy logic controller (FLC). The simulation work deals with the MPPT controller and a DC/DC Ćuk converter feeding a load. The results showed that the proposed Fuzzy Logic MPPT in the PV system is valid.
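The P&O baseline named above fits in a few lines: perturb the operating voltage, observe the power, and reverse direction when power drops. The sketch below runs P&O on a toy concave power-voltage curve with its maximum at an assumed 17 V; real PV curves and the fuzzy-logic controller are not modeled here.

```python
# Perturb and Observe (P&O) MPPT on a toy P-V curve (illustrative only).

def pv_power(v):
    """Toy concave P-V curve with a single maximum at v = 17 V."""
    return max(0.0, 60.0 - 0.5 * (v - 17.0) ** 2)

def perturb_and_observe(v0=10.0, step=0.5, iterations=60):
    v, direction = v0, +1
    p_prev = pv_power(v)
    for _ in range(iterations):
        v += direction * step          # perturb the operating voltage
        p = pv_power(v)
        if p < p_prev:                 # observe: power dropped,
            direction = -direction     # so reverse the perturbation
        p_prev = p
    return v

v_mpp = perturb_and_observe()
```

The run converges to within one step of the true maximum and then oscillates around it, which is exactly the steady-state ripple that motivates smoother controllers such as the FLC of the paper.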
A Pseudo-Boolean Solution to the Maximum Quartet Consistency Problem
Morgado, Antonio
2008-01-01
Determining the evolutionary history of given biological data is an important task in the biological sciences. Given a set of quartet topologies over a set of taxa, the Maximum Quartet Consistency (MQC) problem consists of computing a global phylogeny that satisfies the maximum number of quartets. A number of solutions have been proposed for the MQC problem, including Dynamic Programming, Constraint Programming, and more recently Answer Set Programming (ASP). ASP is currently the most efficient approach for optimally solving the MQC problem. This paper proposes encoding the MQC problem with pseudo-Boolean (PB) constraints. The use of PB allows solving the MQC problem with efficient PB solvers, and also allows considering different modeling approaches for the MQC problem. Initial results are promising, and suggest that PB can be an effective alternative for solving the MQC problem.
Maximum Energy Extraction Control for Wind Power Generation Systems Based on the Fuzzy Controller
Kamal, Elkhatib; Aitouche, Abdel; Mohammed, Walaa; Sobaih, Abdel Azim
2016-10-01
This paper presents a robust controller for a variable speed wind turbine with a squirrel cage induction generator (SCIG). For a variable speed wind energy conversion system, maximum power point tracking (MPPT) is a very important requirement in order to maximize efficiency. The system is nonlinear, with parametric uncertainty, and subject to large disturbances. A Takagi-Sugeno (TS) fuzzy model is used to describe the system dynamics. Based on the TS fuzzy model, a controller is developed for MPPT in the presence of disturbances and parametric uncertainties. The proposed technique ensures that the maximum power point (MPP) is determined, the generator speed is controlled and the closed-loop system is stable. Robustness of the controller is tested via variation of the model's parameters. Simulation studies clearly indicate the robustness and efficiency of the proposed control scheme compared to other techniques.
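The maximum power point being tracked above comes from the standard turbine power relation P = ½ ρ A Cp(λ) v³, maximized by holding the tip-speed ratio λ at its optimum, i.e. rotor speed ω = λ_opt · v / R. A minimal sketch with illustrative numbers (air density, rotor radius, and a toy Cp curve, none of them the paper's SCIG data):

```python
import math

# Optimal-tip-speed-ratio MPPT for a wind turbine (toy parameters).
RHO = 1.225        # air density, kg/m^3
RADIUS = 2.0       # rotor radius, m (hypothetical)
LAMBDA_OPT = 7.0   # tip-speed ratio where the toy Cp curve peaks
CP_MAX = 0.45

def cp(lmbda):
    """Toy power-coefficient curve peaking at LAMBDA_OPT (illustrative)."""
    return max(0.0, CP_MAX - 0.01 * (lmbda - LAMBDA_OPT) ** 2)

def optimal_rotor_speed(wind_speed):
    """Rotor speed (rad/s) that keeps the tip-speed ratio at LAMBDA_OPT."""
    return LAMBDA_OPT * wind_speed / RADIUS

def power(wind_speed, omega):
    """Extracted power P = 0.5 * rho * A * Cp(lambda) * v**3."""
    lmbda = omega * RADIUS / wind_speed
    area = math.pi * RADIUS ** 2
    return 0.5 * RHO * area * cp(lmbda) * wind_speed ** 3

v = 8.0                               # wind speed, m/s
omega_star = optimal_rotor_speed(v)
p_star = power(v, omega_star)         # power at the MPP
```

Running the rotor 20% above or below ω* lowers Cp and hence the extracted power, which is the loss an MPPT controller is designed to avoid.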