Osterloh, Frank E
2014-10-02
The Shockley-Queisser analysis provides a theoretical limit for the maximum energy conversion efficiency of single-junction photovoltaic cells. However, aside from the semiconductor band gap, no other semiconductor properties are considered in the analysis. Here, we show that the maximum conversion efficiency is limited further by the excited-state entropy of the semiconductors. The entropy loss can be estimated with the modified Sackur-Tetrode equation as a function of the curvature of the bands, the degeneracy of states near the band edges, the illumination intensity, the temperature, and the band gap. The application of the second law of thermodynamics to semiconductors provides a simple explanation for the observed high performance of group IV, III-V, and II-VI materials with strong covalent bonding and for the lower efficiency of transition metal oxides containing weakly interacting metal d orbitals. The model also predicts efficient energy conversion with quantum-confined and molecular structures in the presence of a light-harvesting mechanism.
Cushing, Scott K; Bristow, Alan D; Wu, Nianqiang
2015-11-28
Plasmonics can enhance solar energy conversion in semiconductors through light trapping, hot-electron transfer, and plasmon-induced resonance energy transfer (PIRET). The multifaceted response of the plasmon and its multiple interaction pathways with the semiconductor make optimization challenging, hindering the design of efficient plasmonic architectures. Therefore, in this paper we use a density matrix model to capture the interplay between scattering, hot electrons, and dipole-dipole coupling through the plasmon's dephasing, including both the coherent and incoherent dynamics necessary for interactions on the plasmon's timescale. The model is extended to Shockley-Queisser limit calculations for both photovoltaics and solar-to-chemical conversion, revealing the optimal application of each enhancement mechanism based on plasmon energy, semiconductor energy, and plasmon dephasing. The results guide the application of plasmonic solar-energy harvesting, showing which enhancement mechanism is most appropriate for a given semiconductor's weakness and what nanostructures can achieve the maximum enhancement.
Theoretical Estimate of Maximum Possible Nuclear Explosion
Bethe, H. A.
1950-01-31
The maximum nuclear accident which could occur in a Na-cooled, Be-moderated, Pu- and power-producing reactor is estimated theoretically. (T.R.H.)

Results of nuclear calculations for a variety of compositions of fast, heterogeneous, sodium-cooled, U-235-fueled, plutonium- and power-producing reactors are reported. Core compositions typical of plate-, pin-, or wire-type fuel elements and with uranium as metal, alloy, and oxide were considered. These compositions included atom ratios in the following ranges: U-238 to U-235 from 2 to 8; sodium to U-235 from 1.5 to 12; iron to U-235 from 5 to 18; and vanadium to U-235 from 11 to 33. Calculations were performed to determine the effect of lead and iron reflectors between the core and blanket. Both natural and depleted uranium were evaluated as the blanket fertile material. Reactors were compared on the basis of conversion ratio, specific power, and the product of both. The calculated results are in general agreement with the experimental results from fast reactor assemblies. An analysis of the effect of new cross-section values as they became available is included. (auth)
Efficiency at maximum power for an Otto engine with ideal feedback
Wang, Honghui; He, Jizhou; Wang, Jianhui; Wu, Zhaoqi
2016-10-01
We propose an Otto heat engine that undergoes processes involving a special class of feedback and analyze theoretically its response. We use stochastic thermodynamics to determine the performance characteristics of the heat engine and indicate the possibility that its maximum efficiency can surpass the Carnot value. The analytical expression for efficiency at maximum power, including the effects resulting from feedback, reduces to that previously derived based on an engine without feedback.
Efficiency at Maximum Power of Interacting Molecular Machines
Golubeva, Natalia; Imparato, Alberto
2012-01-01
We investigate the efficiency of systems of molecular motors operating at maximum power. We consider two models of kinesin motors on a microtubule: for both the simplified and the detailed model, we find that the many-body exclusion effect enhances the efficiency at maximum power of the many-motor system with respect to the single-motor case. Remarkably, we find that this effect occurs in a limited region of the system parameters, compatible with the biologically relevant range.
Efficiency of autonomous soft nanomachines at maximum power.
Seifert, Udo
2011-01-14
We consider nanosized artificial or biological machines working in steady state enforced by imposing nonequilibrium concentrations of solutes or by applying external forces, torques, or electric fields. For unicyclic and strongly coupled multicyclic machines, efficiency at maximum power is not bounded by the linear response value 1/2. For strong driving, it can even approach the thermodynamic limit 1. Quite generally, such machines fall into three different classes characterized, respectively, as "strong and efficient," "strong and inefficient," and "balanced." For weakly coupled multicyclic machines, efficiency at maximum power has lost any universality even in the linear response regime.
Size dependence of efficiency at maximum power of heat engine
Izumida, Y.
2013-10-01
We perform a molecular dynamics computer simulation of a heat engine model to study how the engine size affects its performance. Upon tactically increasing the size of the model anisotropically, we determine that there exists an optimum size at which the model attains the maximum power for the shortest working period. This optimum size lies between the ballistic heat transport region and the diffusive heat transport region. We also study the size dependence of the efficiency at the maximum power. Interestingly, we find that the efficiency at the maximum power around the optimum size attains a value that has been proposed as a universal upper bound, and it even begins to exceed the bound as the size further increases. We explain this behavior of the efficiency at maximum power by using a linear response theory for the heat engine operating under a finite working period, which naturally extends the low-dissipation Carnot cycle model [M. Esposito, R. Kawai, K. Lindenberg, C. Van den Broeck, Phys. Rev. Lett. 105, 150603 (2010)]. The theory also shows that the efficiency at the maximum power under an extreme condition may, in principle, reach the Carnot efficiency.
Network Decomposition and Maximum Independent Set Part Ⅰ: Theoretic Basis
朱松年; 朱嫱
2003-01-01
The structure and characteristics of a connected network are analyzed, and a special kind of sub-network, which can optimize the iteration processes, is discovered. Then, the sufficient and necessary conditions for obtaining the maximum independent set are deduced. It is found that the neighborhood of this sub-network possesses similar characteristics, but the two can never be merged together. In particular, it is identified that the network can be divided into two parts in a certain way, and both of them can then be transformed into a pair-sets network, where the special sub-networks and their neighborhoods appear alternately distributed throughout the entire pair-sets network. By use of this characteristic, a decomposition of the network that loses no solutions is obtained. All of the above prepares the ground for developing a much better algorithm with a polynomial time bound for an odd network in the application research part of this subject.
Recent advance on the efficiency at maximum power of heat engines
Tu Zhan-Chun
2012-01-01
This review reports several key advances in the theoretical investigation of efficiency at maximum power of heat engines over the past five years. The analytical results of efficiency at maximum power for the Curzon-Ahlborn heat engine, the stochastic heat engine constructed from a Brownian particle, and Feynman's ratchet as a heat engine are presented. It is found that the efficiency at maximum power exhibits universal behavior at small relative temperature differences; that lower and upper bounds might exist under quite general conditions; and that the problem of efficiency at maximum power comes down to seeking the minimum irreversible entropy production in each finite-time isothermal process for a given time.
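The universal behavior at small relative temperature differences can be checked directly from the Curzon-Ahlborn formula; a minimal numeric sketch (nothing beyond the CA expression is assumed):

```python
import math

def eta_CA(eta_C):
    """Curzon-Ahlborn efficiency written in terms of the Carnot efficiency:
    eta_CA = 1 - sqrt(1 - eta_C)."""
    return 1.0 - math.sqrt(1.0 - eta_C)

def universal_series(eta_C):
    """First two terms of the small-eta_C expansion: eta_C/2 + eta_C**2/8."""
    return eta_C / 2 + eta_C**2 / 8

# At small relative temperature differences the CA result collapses onto
# the universal series, as the review describes.
gap = abs(eta_CA(0.01) - universal_series(0.01))   # next term is ~6e-8
```

The gap shrinks as the cube of the Carnot efficiency, which is what makes the first two expansion terms "universal".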
Theoretical Evaluation of the Maximum Work of Free-Piston Engine Generators
Kojima, Shinji
2017-01-01
Utilizing the adjoint equations that originate from the calculus of variations, we have calculated the maximum thermal efficiency that is theoretically attainable by free-piston engine generators considering the work loss due to friction and Joule heat. Based on the adjoint equations with seven dimensionless parameters, the trajectory of the piston, the histories of the electric current, the work done, and the two kinds of losses have been derived in analytic forms. Using these we have conducted parametric studies for the optimized Otto and Brayton cycles. The smallness of the pressure ratio of the Brayton cycle makes the net work done negative even when the duration of heat addition is optimized to give the maximum amount of heat addition. For the Otto cycle, the net work done is positive, and both types of losses relative to the gross work done become smaller with the larger compression ratio. Another remarkable feature of the optimized Brayton cycle is that the piston trajectory of the heat addition/disposal process is expressed by the same equation as that of an adiabatic process. The maximum thermal efficiency of any combination of isochoric and isobaric heat addition/disposal processes, such as the Sabathe cycle, may be deduced by applying the methods described here.
Efficiency at maximum power of thermally coupled heat engines.
Apertet, Y; Ouerdane, H; Goupil, C; Lecoeur, Ph
2012-04-01
We study the efficiency at maximum power of two coupled heat engines, using thermoelectric generators (TEGs) as engines. Assuming that the heat and electric charge fluxes in the TEGs are strongly coupled, we simulate numerically the dependence of the behavior of the global system on the electrical load resistance of each generator in order to obtain the working condition that permits maximization of the output power. It turns out that this condition is not unique. We derive a simple analytic expression relating the electrical load resistances of the two generators that permits output power maximization. We then focus on the efficiency at maximum power (EMP) of the whole system to demonstrate that the Curzon-Ahlborn efficiency may not always be recovered: the EMP varies with the specific working conditions of each generator but remains in the range predicted by irreversible thermodynamics. We discuss our results in light of nonideal Carnot engine behavior.
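For a single TEG modeled as a Thevenin source, the electrical side of the power-maximization condition is simple load matching; a minimal sketch (idealized, with hypothetical values and no thermal feedback):

```python
def power(R_load, V=1.0, R_int=1.0):
    """Electrical power delivered to the load by a TEG modeled as a
    Thevenin source: P = V**2 * R_load / (R_int + R_load)**2."""
    return V**2 * R_load / (R_int + R_load) ** 2

# Scan load resistances: the maximum sits at R_load = R_int.
loads = [0.1 * k for k in range(1, 51)]
best = max(loads, key=power)
```

The coupled two-generator case studied above is richer precisely because the thermal working conditions shift each generator's effective internal resistance.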
Systematic measurement of maximum efficiencies and detuning lengths at the JAERI free-electron laser
Nishimori, N; Nagai, R; Minehara, E J
2002-01-01
We made a systematic measurement of efficiency detuning curves at several gain and loss parameters. The absolute detuning length (δL) of an optical cavity was measured within an accuracy of 0.1 μm around the maximum efficiency by a pulse-stacking method using an external laser. The FEL gain was controlled by the undulator gap instead of the bunch charge, because the gain can be changed rapidly while maintaining constant electron bunch conditions. For the high-gain and low-loss regions, the maximum efficiency is obtained at δL = 0 μm and is larger than the value derived from the theoretical scaling law in the superradiant regime, while for the low-gain region the maximum efficiency is obtained for δL shorter than 0 μm and is close to the value given by the scaling law.
A Realization of Theoretical Maximum Performance in IPSec on Gigabit Ethernet
Onuki, Atsushi; Takeuchi, Kiyofumi; Inada, Toru; Tokiniwa, Yasuhisa; Ushirozawa, Shinobu
This paper describes an "IPSec (IP Security) VPN system" and how it attains the theoretical maximum performance on Gigabit Ethernet. Conventional systems are implemented in software; however, such systems have several bottlenecks which must be overcome to realize the theoretical maximum performance on Gigabit Ethernet. Thus, we propose a new IPSec VPN system with an FPGA (Field Programmable Gate Array)-based hardware architecture, which transmits packets by pipelined flow processing and has six parallel encryption and authentication engines. We show that our system attains the theoretical maximum performance for short packets, which has been difficult to realize until now.
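The "theoretical maximum on Gigabit Ethernet" for short packets can be made concrete: on the wire, every frame carries a 7-byte preamble, a 1-byte start-of-frame delimiter, and a 12-byte interframe gap, so a minimum-size 64-byte frame occupies 84 byte times. A quick sketch of the resulting frame-rate ceiling:

```python
LINE_RATE = 1_000_000_000  # Gigabit Ethernet, bits per second

def max_frames_per_second(frame_bytes):
    """Wire-rate frame throughput: preamble (7) + SFD (1) + frame +
    interframe gap (12) must all fit on the wire."""
    wire_bits = (7 + 1 + frame_bytes + 12) * 8
    return LINE_RATE / wire_bits

# Minimum-size (64-byte) frames: the hardest case for an IPSec engine.
pps = max_frames_per_second(64)   # ~1.488 million frames/s
```

This ~1.488 Mpps figure is why short packets are singled out: an encryption pipeline must sustain a new frame roughly every 672 ns.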
Efficiency at Maximum Power of Low-Dissipation Carnot Engines
Esposito, Massimiliano; Kawai, Ryoichi; Lindenberg, Katja; van den Broeck, Christian
2010-10-01
We study the efficiency at maximum power, η*, of engines performing finite-time Carnot cycles between a hot and a cold reservoir at temperatures Th and Tc, respectively. For engines reaching Carnot efficiency ηC=1-Tc/Th in the reversible limit (long cycle time, zero dissipation), we find in the limit of low dissipation that η* is bounded from above by ηC/(2-ηC) and from below by ηC/2. These bounds are reached when the ratio of the dissipation during the cold and hot isothermal phases tends, respectively, to zero or infinity. For symmetric dissipation (ratio one) the Curzon-Ahlborn efficiency ηCA=1-√(Tc/Th) is recovered.
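The stated bounds are easy to check numerically against the Curzon-Ahlborn value; a sketch assuming only the formulas quoted above, with illustrative reservoir temperatures:

```python
import math

def carnot(Tc, Th):
    """Carnot efficiency eta_C = 1 - Tc/Th."""
    return 1 - Tc / Th

def curzon_ahlborn(Tc, Th):
    """Curzon-Ahlborn efficiency eta_CA = 1 - sqrt(Tc/Th)."""
    return 1 - math.sqrt(Tc / Th)

def low_dissipation_bounds(Tc, Th):
    """Lower and upper bounds on efficiency at maximum power:
    eta_C/2 <= eta* <= eta_C/(2 - eta_C)."""
    eta_C = carnot(Tc, Th)
    return eta_C / 2, eta_C / (2 - eta_C)

Tc, Th = 300.0, 500.0
lo, hi = low_dissipation_bounds(Tc, Th)
eta_CA = curzon_ahlborn(Tc, Th)   # the symmetric-dissipation case
assert lo <= eta_CA <= hi
```

For Tc = 300 K, Th = 500 K this gives 0.2 ≤ 0.2254 ≤ 0.25, with the CA value sitting between the two bounds as the paper states.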
Efficient maximum likelihood parameterization of continuous-time Markov processes
McGibbon, Robert T
2015-01-01
Continuous-time Markov processes over finite state-spaces are widely used to model dynamical processes in many fields of natural and social science. Here, we introduce a maximum likelihood estimator for constructing such models from data observed at a finite time interval. This estimator is drastically more efficient than prior approaches, enables the calculation of deterministic confidence intervals in all model parameters, and can easily enforce important physical constraints on the models such as detailed balance. We demonstrate and discuss the advantages of these models over existing discrete-time Markov models for the analysis of molecular dynamics simulations.
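A common (and simpler than the cited estimator) way to obtain a continuous-time generator from data observed at a fixed lag τ is the matrix logarithm of the empirical transition matrix; a minimal two-state sketch, presented as a naive baseline rather than the paper's constrained MLE:

```python
import numpy as np

def generator_from_transition(T, tau):
    """Estimate the rate matrix Q from a transition matrix T observed at
    lag tau, via the eigendecomposition form of the matrix logarithm.
    (A naive estimator; the cited work uses a constrained MLE instead.)"""
    vals, vecs = np.linalg.eig(T)
    Q = vecs @ np.diag(np.log(vals)) @ np.linalg.inv(vecs) / tau
    return Q.real

# Two-state chain with rates a (0 -> 1) and b (1 -> 0).
a, b = 2.0, 3.0
Q_true = np.array([[-a, a], [b, -b]])
tau = 0.1
# Closed-form expm(Q * tau) for a 2x2 generator:
s = a + b
e = np.exp(-s * tau)
T = np.array([[b + a * e, a - a * e],
              [b - b * e, a + b * e]]) / s
Q_est = generator_from_transition(T, tau)   # recovers Q_true
```

With noisy finite-sample transition matrices this naive inversion can produce negative off-diagonal rates, which is one motivation for the constrained estimator described in the abstract.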
Efficiency at maximum power of a discrete feedback ratchet
Jarillo, Javier; Tangarife, Tomás; Cao, Francisco J.
2016-01-01
Efficiency at maximum power is found to be of the same order for a feedback ratchet and for its open-loop counterpart. However, feedback increases the output power by up to a factor of five. This increase in output power is due to the increase in energy input and the effective entropy reduction obtained as a consequence of feedback. Optimal efficiency at maximum power is reached for time intervals between feedback actions two orders of magnitude smaller than the characteristic time of diffusion over a ratchet period length. The efficiency is computed consistently, taking into account the correlation between the control actions. We consider a feedback control protocol for a discrete feedback flashing ratchet, which works against an external load. We maximize the power output by optimizing the parameters of the ratchet, the controller, and the external load. The maximum power output is found to be upper bounded, so the attainable extracted power is limited. Afterwards, we compute an upper bound for the efficiency of this isothermal feedback ratchet at maximum power output. We make this computation by applying recent developments in the thermodynamics of feedback-controlled systems, which give an equation to compute the entropy reduction due to information. However, this equation requires the computation of the probability of each of the possible sequences of the controller's actions. This computation becomes involved when the sequence of the controller's actions is non-Markovian, as is the case in most feedback ratchets. We here introduce an alternative procedure to set strong bounds on the entropy reduction in order to compute its value. In this procedure the bounds are evaluated in a quasi-Markovian limit, which emerges when there are big differences between the stationary probabilities of the system states. These big differences are an effect of the potential strength, which minimizes the departures from the Markovianity of the sequence of control actions, allowing also to…
AN EFFICIENT APPROXIMATE MAXIMUM LIKELIHOOD SIGNAL DETECTION FOR MIMO SYSTEMS
Cao Xuehong
2007-01-01
This paper proposes an efficient approximate Maximum Likelihood (ML) detection method for Multiple-Input Multiple-Output (MIMO) systems, which searches a local area instead of performing an exhaustive search and selects valid search points in each transmit antenna's signal constellation instead of the whole hyperplane. Both the selection and the search complexity can be reduced significantly. The method trades off computational complexity against system performance by adjusting the neighborhood size used to select the valid search points. Simulation results show that the performance is comparable to that of ML detection while the complexity is only a small fraction of it.
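The idea of searching a neighborhood of a cheap initial estimate instead of the full constellation product can be sketched for a small real-valued system; this is a hypothetical 2×2 4-PAM example illustrating the principle, not the paper's algorithm:

```python
import itertools
import numpy as np

CONSTELLATION = np.array([-3.0, -1.0, 1.0, 3.0])  # 4-PAM per antenna

def ml_exhaustive(H, y):
    """Exact ML: minimize ||y - H s|| over the full constellation product."""
    best = min(itertools.product(CONSTELLATION, repeat=H.shape[1]),
               key=lambda s: np.linalg.norm(y - H @ np.array(s)))
    return np.array(best)

def ml_local(H, y, k=2):
    """Approximate ML: keep only the k constellation points nearest the
    zero-forcing estimate on each antenna, then search that reduced set."""
    s_zf = np.linalg.solve(H, y)
    cands = [CONSTELLATION[np.argsort(np.abs(CONSTELLATION - x))[:k]]
             for x in s_zf]
    best = min(itertools.product(*cands),
               key=lambda s: np.linalg.norm(y - H @ np.array(s)))
    return np.array(best)

rng = np.random.default_rng(0)
H = np.array([[1.0, 0.3], [0.2, 1.0]])
s_true = np.array([1.0, -3.0])
y = H @ s_true + 0.05 * rng.standard_normal(2)
```

Here the reduced search visits k² = 4 points instead of 4² = 16; the gap widens rapidly with more antennas and larger constellations, which is where the complexity saving claimed above comes from.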
On a robust and efficient maximum depth estimator
ZUO YiJun; LAI ShaoYong
2009-01-01
The best breakdown-point robustness is one of the most outstanding features of the univariate median. For this robustness property, however, the median has to pay the price of a low efficiency at normal and other light-tailed models. Affine equivariant multivariate analogues of the univariate median with high breakdown points were constructed in the past two decades. For their high breakdown robustness, most of them nevertheless also have to sacrifice efficiency at normal and other models. The affine equivariant maximum depth estimator proposed and studied in this paper turns out to be an exception. Like the univariate median, it possesses the highest breakdown point among all its multivariate competitors. Unlike the univariate median, it is also highly efficient relative to the sample mean at normal and various other distributions, overcoming the vital low-efficiency shortcoming of the univariate and other multivariate generalized medians. The paper also studies the asymptotics of the estimator and establishes its limit distribution without symmetry and other strong assumptions that are typically imposed on the underlying distribution.
Efficiency at maximum power of a chemical engine.
Hooyberghs, Hans; Cleuren, Bart; Salazar, Alberto; Indekeu, Joseph O; Van den Broeck, Christian
2013-10-01
A cyclically operating chemical engine is considered that converts chemical energy into mechanical work. The working fluid is a gas of finite-sized spherical particles interacting through elastic hard collisions. For a generic transport law for particle uptake and release, the efficiency at maximum power η(mp) takes the form 1/2 + cΔμ + O(Δμ²), with 1/2 a universal constant and Δμ the chemical potential difference between the particle reservoirs. The linear coefficient c is zero for engines featuring a so-called left/right symmetry or particle fluxes that are antisymmetric in the applied chemical potential difference. Remarkably, the leading constant in η(mp) is non-universal with respect to an exceptional modification of the transport law. For a nonlinear transport model, we obtain η(mp) = 1/(θ + 1), with θ > 0 the power of Δμ in the transport equation.
Theoretical study of rock mass investigation efficiency
Holmen, Johan G.; Outters, Nils [Golder Associates, Uppsala (Sweden)]
2002-05-01
The study concerns a mathematical modelling of a fractured rock mass and its investigation by use of theoretical boreholes and rock surfaces, with the purpose of analysing the efficiency (precision) of such investigations and determining the amount of investigation necessary to obtain reliable estimations of the structural-geological parameters of the studied rock mass. The study is not about estimating suitable sample sizes to be used in site investigations. The purpose of the study is to analyse the amount of information necessary for deriving estimates of the geological parameters studied, within defined confidence intervals and at a defined confidence level; in other words, how the confidence in models of the rock mass (considering a selected number of parameters) will change with the amount of information collected from boreholes and surfaces. The study is limited to a selected number of geometrical structural-geological parameters: fracture orientation, i.e. mean direction and dispersion (Fisher Kappa and SRI); different measures of fracture density (P10, P21 and P32); and fracture trace-length and strike distributions as seen on horizontal windows. A numerical Discrete Fracture Network (DFN) model was used for representation of the fractured rock mass. The DFN model was primarily based on the properties of an actual fracture network investigated at the Aespoe Hard Rock Laboratory. The rock mass studied (DFN model) contained three different fracture sets with different orientations and fracture densities. The rock unit studied was statistically homogeneous. The study includes a limited sensitivity analysis of the properties of the DFN model. The study is a theoretical and computer-based comparison between samples of fracture properties of a theoretical rock unit and the known true properties of the same unit. The samples are derived from numerically generated boreholes and surfaces that intersect the DFN network. Two different boreholes are analysed: a vertical borehole and a borehole that is…
Maximum efficiency of low-dissipation heat engines at arbitrary power
Holubec, Viktor; Ryabov, Artem
2016-07-01
We investigate maximum efficiency at a given power for low-dissipation heat engines. Close to maximum power, the maximum gain in efficiency scales as a square root of relative loss in power and this scaling is universal for a broad class of systems. For low-dissipation engines, we calculate the maximum gain in efficiency for an arbitrary fixed power. We show that engines working close to maximum power can operate at considerably larger efficiency compared to the efficiency at maximum power. Furthermore, we introduce universal bounds on maximum efficiency at a given power for low-dissipation heat engines. These bounds represent direct generalization of the bounds on efficiency at maximum power obtained by Esposito et al (2010 Phys. Rev. Lett. 105 150603). We derive the bounds analytically in the regime close to maximum power and for small power values. For the intermediate regime we present strong numerical evidence for the validity of the bounds.
Maximum herd efficiency in meat production I. Optima for slaughter ...
changes in product value are important, it is easy to join them to herd cost efficiency for … should be evaluated in terms of total herd or life cycle efficiency, and not only for a … The decline of herd efficiency with increases in b in Table 2 is in …
Efficiency at maximum power of a quantum heat engine based on two coupled oscillators.
Wang, Jianhui; Ye, Zhuolin; Lai, Yiming; Li, Weisheng; He, Jizhou
2015-06-01
We propose and theoretically investigate a system of two coupled harmonic oscillators as a heat engine. We show how these two coupled oscillators within the undamped regime can be controlled to realize an Otto cycle that consists of two adiabatic and two isochoric processes. During the two isochores the harmonic system is embedded in two heat reservoirs at constant temperatures T(h) and T(c) (< T(h)), respectively. We use a semigroup approach to model the thermal relaxation dynamics along the two isochoric processes, and we find the upper bound of the efficiency at maximum power (EMP) η* to be a function of the Carnot efficiency η(C) = 1 − T(c)/T(h): η* ≤ η(+) ≡ η(C)²/[η(C) − (1 − η(C))ln(1 − η(C))], identical to bounds previously derived for ideal (noninteracting) microscopic, mesoscopic, and macroscopic systems.
Catalytic efficiency of enzymes: a theoretical analysis.
Hammes-Schiffer, Sharon
2013-03-26
This brief review analyzes the underlying physical principles of enzyme catalysis, with an emphasis on the role of equilibrium enzyme motions and conformational sampling. The concepts are developed in the context of three representative systems, namely, dihydrofolate reductase, ketosteroid isomerase, and soybean lipoxygenase. All of these reactions involve hydrogen transfer, but many of the concepts discussed are more generally applicable. The factors that are analyzed in this review include hydrogen tunneling, proton donor-acceptor motion, hydrogen bonding, pKa shifting, electrostatics, preorganization, reorganization, and conformational motions. The rate constant for the chemical step is determined primarily by the free energy barrier, which is related to the probability of sampling configurations conducive to the chemical reaction. According to this perspective, stochastic thermal motions lead to equilibrium conformational changes in the enzyme and ligands that result in configurations favorable for the breaking and forming of chemical bonds. For proton, hydride, and proton-coupled electron transfer reactions, typically the donor and acceptor become closer to facilitate the transfer. The impact of mutations on the catalytic rate constants can be explained in terms of the factors enumerated above. In particular, distal mutations can alter the conformational motions of the enzyme and therefore the probability of sampling configurations conducive to the chemical reaction. Methods such as vibrational Stark spectroscopy, in which environmentally sensitive probes are introduced site-specifically into the enzyme, provide further insight into these aspects of enzyme catalysis through a combination of experiments and theoretical calculations.
Efficiency at maximum power output for an engine with a passive piston
Sano, Tomohiko G.; Hayakawa, Hisao
2016-08-01
Efficiency at maximum power (MP) output for an engine with a passive piston without mechanical controls between two reservoirs is studied theoretically. We enclose a hard core gas partitioned by a massive piston in a temperature-controlled container and analyze the efficiency at MP under a heating and cooling protocol without controlling the pressure acting on the piston from outside. We find the following three results: (i) The efficiency at MP for a dilute gas is close to the Chambadal-Novikov-Curzon-Ahlborn (CNCA) efficiency if we can ignore the sidewall friction and the loss of energy between a gas particle and the piston, while (ii) the efficiency for a moderately dense gas becomes smaller than the CNCA efficiency even when the temperature difference of the reservoirs is small. (iii) Introducing the Onsager matrix for an engine with a passive piston, we verify that the tight coupling condition for the matrix of the dilute gas is satisfied, while that of the moderately dense gas is not satisfied because of the inevitable heat leak. We confirm the validity of these results using the molecular dynamics simulation and introducing an effective mean-field-like model which we call the stochastic mean field model.
Chen, Jincan; Yan, Zijun; Wu, Liqing
1996-06-01
Considering a thermoelectric generator as a heat engine cycle, the general differential equations of the temperature field inside thermoelectric elements are established by means of nonequilibrium thermodynamics. These equations are used to study the influence of heat leak, Joule's heat, and Thomson heat on the performance of the thermoelectric generator. New expressions are derived for the power output and the efficiency of the thermoelectric generator. The maximum power output is calculated and the optimal matching condition of load is determined. The maximum efficiency is discussed by a representative numerical example. The aim of this research is to provide some novel conclusions and redress some errors existing in a related investigation.
Maximum herd efficiency in meat production II. The influence of ...
efficiency involves reproduction and replacement rates, early fertility, and degree of fertility at first mating. … For cattle and sheep, an estimate of the effect of early breeding … Genetic correlations among sex-limited traits in beef cattle.
Maziero, G C; Baunwart, C; Toledo, M C
2001-05-01
The theoretical maximum daily intakes (TMDI) of the phenolic antioxidants butylated hydroxyanisole (BHA), butylated hydroxytoluene (BHT) and tert-butylhydroquinone (TBHQ) in Brazil were estimated using food consumption data derived from a household economic survey and a packaged goods market survey. The estimates were based on the maximum levels of use of the food additives specified in national food standards. The calculated intakes of the three additives for the mean consumer were below the ADIs. Estimates of TMDI for BHA, BHT and TBHQ ranged from 0.09 to 0.15, 0.05 to 0.10 and 0.07 to 0.12 mg/kg of body weight, respectively. To check whether the additives are actually used at their maximum authorized levels, analytical determinations of these compounds in selected food categories were carried out using HPLC with UV detection. BHT and TBHQ concentrations in foodstuffs considered to be representative sources of these antioxidants in the diet were below the respective maximum permitted levels. BHA was not detected in any of the analysed samples. Based on the maximal approach and on the analytical data, it is unlikely that the current ADIs of BHA (0.5 mg/kg body weight), BHT (0.3 mg/kg body weight) and TBHQ (0.7 mg/kg body weight) will be exceeded in practice by the average Brazilian consumer.
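The TMDI arithmetic behind such estimates is a simple sum of (food consumption × maximum permitted additive level) over food categories, divided by body weight; the category numbers below are hypothetical illustrations, not the survey's data — only the 0.5 mg/kg ADI for BHA is taken from the abstract:

```python
def tmdi_mg_per_kg_bw(daily_intake_g, max_level_mg_per_kg_food,
                      body_weight_kg=60.0):
    """Theoretical maximum daily intake, in mg additive per kg body weight.
    The two lists are parallel, one entry per food category."""
    total_mg = sum(g / 1000.0 * lvl
                   for g, lvl in zip(daily_intake_g, max_level_mg_per_kg_food))
    return total_mg / body_weight_kg

# Hypothetical consumer: 30 g/day fats (BHA permitted at 200 mg/kg food)
# and 50 g/day snacks (BHA permitted at 100 mg/kg food).
tmdi = tmdi_mg_per_kg_bw([30.0, 50.0], [200.0, 100.0])  # mg/kg bw/day
ADI_BHA = 0.5  # mg/kg body weight, as cited above
```

For these made-up figures the TMDI is about 0.18 mg/kg bw, comfortably below the ADI, mirroring the paper's conclusion for the mean consumer.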
Ouerdane, H.; Apertet, Y.; Goupil, C.; Lecoeur, Ph.
2015-07-01
Classical equilibrium thermodynamics is a theory of principles, which was built from empirical knowledge and debates on the nature and the use of heat as a means to produce motive power. By the beginning of the 20th century, the principles of thermodynamics were summarized into the so-called four laws, which were, as it turns out, definitive negative answers to the doomed quests for perpetual motion machines. As a matter of fact, one result of Sadi Carnot's work was precisely that the heat-to-work conversion process is fundamentally limited; as such, it is considered a first version of the second law of thermodynamics. Although it was derived from Carnot's unrealistic model, the upper bound on the thermodynamic conversion efficiency, known as the Carnot efficiency, became a paradigm as the next target after the failure of the perpetual motion ideal. In the 1950s, Jacques Yvon published a conference paper containing the necessary ingredients for a new class of models, and even a formula, not so different from that of Carnot's efficiency, which later would become the new efficiency reference. Yvon's first analysis of a model of an engine producing power, connected to a heat source and sink through heat exchangers, went fairly unnoticed for twenty years, until Frank Curzon and Boye Ahlborn published their pedagogical paper about the effect of finite heat transfer on output power limitation and their derivation of the efficiency at maximum power, now mostly known as the Curzon-Ahlborn (CA) efficiency. The notion of finite rate explicitly introduced time into thermodynamics, and its significance cannot be overlooked, as shown by the wealth of works devoted to what is now known as finite-time thermodynamics since the end of the 1970s. The favorable comparison of the CA efficiency to actual values led many to consider it a universal upper bound for real heat engines, but things are not so straightforward that a simple formula may account for a variety of situations. …
Design of a wind turbine rotor for maximum aerodynamic efficiency
Johansen, Jeppe; Aagaard Madsen, Helge; Gaunaa, Mac;
2009-01-01
The design of a three-bladed wind turbine rotor is described, where the main focus has been highest possible mechanical power coefficient, CP, at a single operational condition. Structural, as well as off-design, issues are not considered, leading to a purely theoretical design for investigating...... and a full three-dimensional Navier-Stokes solver. Excellent agreement is obtained using the three models. Global CP reaches a value of slightly above 0.51, while global thrust coefficient CT is 0.87. The local power coefficient Cp increases to slightly above the Betz limit on the inner part of the rotor......; the local thrust coefficient Ct increases to a value above 1.1. This agrees well with the theory of de Vries, which states that including the effect of the low pressure behind the centre of the rotor stemming from the increased rotation, both Cp and Ct will increase towards the root. Towards the tip, both...
On the maximum efficiency of realistic heat engines
Miranda, E N
2012-01-01
In 1975, Curzon and Ahlborn studied a Carnot engine with thermal losses and obtained an expression for its efficiency that describes the performance of actual heat machines better than the traditional result due to Carnot. In their original derivation time appears explicitly, which is disappointing in the framework of classical thermodynamics. In this note a derivation is given without any explicit reference to time.
Study on maximum efficiency control strategy for induction motor
[No author listed]
2007-01-01
Two new techniques for efficiency-optimization control (EOC) of induction motor drives were proposed. The first method combined a loss model with the golden-section technique and was faster than the available methods. Second, the low-frequency torque ripple due to the decrease of rotor flux was compensated in a feedforward manner. If the load torque or speed command changed, the efficiency-search algorithm was abandoned and the rated flux was re-established to obtain the best transient response. Close agreement between the simulation and the experimental results confirmed the validity and usefulness of the proposed techniques.
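The golden-section technique named above can be illustrated with a minimal sketch; the loss model below is a hypothetical stand-in for the drive's loss model, not the controller from the abstract:

```python
# Golden-section search for the flux level that minimizes a convex loss model.
# The loss function here is an illustrative placeholder, not the motor model
# from the abstract.
import math

def golden_section_minimize(f, a, b, tol=1e-6):
    """Locate the minimizer of a unimodal f on [a, b]."""
    inv_phi = (math.sqrt(5) - 1) / 2  # 1/phi ~ 0.618
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2

# Toy loss: copper loss ~ 1/flux^2, iron loss ~ flux^2 (minimum at flux = 1).
loss = lambda flux: 1.0 / flux**2 + flux**2
optimal = golden_section_minimize(loss, 0.2, 3.0)
```

Each iteration shrinks the bracket by the golden ratio while reusing one interior evaluation, which is why the method needs fewer function calls than naive bisection-style scans.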
Haseli, Y
2016-05-01
The objective of this study is to investigate the thermal efficiency and power production of typical models of endoreversible heat engines in the regime of minimum entropy generation rate. The study considers the Curzon-Ahlborn engine, the Novikov engine, and the Carnot vapor cycle. The operational regimes at maximum thermal efficiency, maximum power output, and minimum entropy production rate are compared for each of these engines. The results reveal that in an endoreversible heat engine, a reduction in entropy production corresponds to an increase in thermal efficiency. The three criteria of minimum entropy production, maximum thermal efficiency, and maximum power may become equivalent under the condition of fixed heat input.
Efficiency at maximum power and efficiency fluctuations in a linear Brownian heat-engine model
Park, Jong-Min; Chun, Hyun-Myung; Noh, Jae Dong
2016-07-01
We investigate the stochastic thermodynamics of a two-particle Langevin system. Each particle is in contact with a heat bath at a different temperature, T1 and T2 (T1 > T2), so that the system constitutes an autonomous heat engine performing work against the external driving force. Linearity of the system enables us to examine the thermodynamic properties of the engine analytically. We find that the efficiency of the engine at maximum power, η_MP, is given by η_MP = 1 - √(T2/T1). This universal form has been known as a characteristic of endoreversible heat engines; our result extends the universal behavior of η_MP to non-endoreversible engines. We also obtain the large deviation function of the probability distribution of the stochastic efficiency in the overdamped limit. The large deviation function takes its minimum value at the macroscopic efficiency η = η̄ and increases monotonically until it reaches plateaus for η ≤ η_L and η ≥ η_R, with model-dependent parameters η_L and η_R.
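The closed forms quoted above are easy to check numerically; a small sketch follows (the reservoir temperatures are illustrative, not values from the paper):

```python
# Numerical illustration of the efficiency at maximum power,
# eta_MP = 1 - sqrt(T2/T1), versus the Carnot bound eta_C = 1 - T2/T1.
import math

def eta_carnot(t_hot, t_cold):
    """Carnot efficiency between reservoirs at t_hot > t_cold."""
    return 1.0 - t_cold / t_hot

def eta_max_power(t_hot, t_cold):
    """Efficiency at maximum power (Curzon-Ahlborn-type form)."""
    return 1.0 - math.sqrt(t_cold / t_hot)

t_hot, t_cold = 600.0, 300.0  # example temperatures in kelvin
# Operating at maximum power always costs efficiency relative to Carnot:
# here eta_C = 0.5 while eta_MP is roughly 0.29.
```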
Larsen, Ulrik; Pierobon, Leonardo; Wronski, Jorrit;
2014-01-01
...to power. In this study we propose four linear regression models to predict the maximum obtainable thermal efficiency for simple and recuperated ORCs. A previously derived methodology is able to determine the maximum thermal efficiency among many combinations of fluids and processes, given the boundary...
Quan, H T
2014-06-01
We study the maximum efficiency of a heat engine based on a small system. It is revealed that due to the finiteness of the system, irreversibility may arise when the working substance contacts with a heat reservoir. As a result, there is a working-substance-dependent correction to the Carnot efficiency. We derive a general and simple expression for the maximum efficiency of a Carnot cycle heat engine in terms of the relative entropy. This maximum efficiency approaches the Carnot efficiency asymptotically when the size of the working substance increases to the thermodynamic limit. Our study extends Carnot's result of the maximum efficiency to an arbitrary working substance and elucidates the subtlety of thermodynamic laws in small systems.
Abhijit Sinha
2014-01-01
A comparative analysis of thermodynamic efficiency at maximum power and maximum power density conditions has been performed for a solar-driven Carnot heat engine with internal irreversibility. In this analysis, heat transfer from the hot reservoir is taken to be in the radiation mode and heat transfer to the cold reservoir in the convection mode. The thermodynamic efficiency, power, and power density functions have been derived, and the power functions have been maximized for various design parameters. From the optimum conditions, the thermal efficiencies at maximum power and maximum power density have been obtained. The effects of design parameters such as internal irreversibility, extreme temperature ratio, and specific engine size (the area ratio between the hot and cold reservoirs) on the thermodynamic efficiencies have been investigated for both conditions. The efficiencies have been compared with the Curzon-Ahlborn and Carnot efficiencies, respectively. The analysis showed that the efficiency at maximum power output is greater than the efficiency at maximum power density, and that the efficiencies can exceed the Curzon-Ahlborn efficiency only for low values of the design parameters.
The maximum efficiency of nano heat engines depends on more than temperature
Woods, Mischa; Ng, Nelly; Wehner, Stephanie
Sadi Carnot's theorem regarding the maximum efficiency of heat engines is considered to be of fundamental importance in the theory of heat engines and thermodynamics. Here, we show that at the nano and quantum scale, this law needs to be revised in the sense that more information about the bath than its temperature is required to decide whether maximum efficiency can be achieved. In particular, we derive new fundamental limitations on the efficiency of heat engines at the nano and quantum scale, showing that the Carnot efficiency can only be achieved under special circumstances, and we derive a new maximum efficiency for the other cases. A preprint is available at arXiv:1506.02322 [quant-ph]. Supported by Singapore's MOE Tier 3A Grant and STW, Netherlands.
An efficient approximation algorithm for finding a maximum clique using Hopfield network learning.
Wang, Rong Long; Tang, Zheng; Cao, Qi Ping
2003-07-01
In this article, we present a solution to the maximum clique problem using a gradient-ascent learning algorithm of the Hopfield neural network. This method provides a near-optimum parallel algorithm for finding a maximum clique. To do this, we use the Hopfield neural network to generate a near-maximum clique and then modify weights in a gradient-ascent direction to allow the network to escape from the state of near-maximum clique to maximum clique or better. The proposed parallel algorithm is tested on two types of random graphs and some benchmark graphs from the Center for Discrete Mathematics and Theoretical Computer Science (DIMACS). The simulation results show that the proposed learning algorithm can find good solutions in reasonable computation time.
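As a rough illustration of a feasibility-preserving clique search, here is a much-simplified sketch; the paper's Hopfield gradient-ascent weight-learning scheme is not reproduced, and the toy graph is invented:

```python
# Toy stochastic search for a large clique. This is a simplified sketch in
# the spirit of energy-based clique search; it does NOT reproduce the
# Hopfield weight-learning algorithm from the article.
import random

def is_clique(nodes, adj):
    """True if every pair of nodes in `nodes` is adjacent in `adj`."""
    return all(v in adj[u] for u in nodes for v in nodes if u != v)

def stochastic_clique(adj, steps=2000, seed=0):
    rng = random.Random(seed)
    nodes = list(adj)
    state = {u: 0 for u in nodes}          # 1 = node is in the candidate clique
    best = set()
    for _ in range(steps):
        u = rng.choice(nodes)
        members = {v for v in nodes if state[v] == 1 and v != u}
        # Turn u on only if it is adjacent to every current member, otherwise
        # turn it off: the candidate set stays a clique at every step.
        state[u] = 1 if members <= adj[u] else 0
        current = {v for v in nodes if state[v] == 1}
        if len(current) > len(best):
            best = current
    return best

# 4-cycle plus a chord: the maximum cliques are the triangles {0,1,2}, {0,2,3}.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
clique = stochastic_clique(adj)
```

Because the candidate set only grows while it remains a clique, the search settles on a maximal clique; the article's learning step (modifying weights to escape near-maximum cliques) is the part omitted here.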
Ruikun Mai
2017-02-01
One of the most promising inductive power transfer applications is the wireless power supply for locomotives, which may eliminate the need for pantographs. In order to meet the dynamic and high-power demands of wireless power supplies for locomotives, a relatively long transmitter track and multiple receivers are usually adopted. However, during dynamic charging, the mutual inductances between the transmitter and receivers vary and the load of the locomotive also changes randomly, which dramatically affects the system efficiency. A maximum efficiency point tracking control scheme is proposed to improve the system efficiency against the variation of the load and of the mutual inductances between the transmitter and receivers, while considering the cross coupling between receivers. Firstly, a detailed theoretical analysis of dual receivers is carried out. Then a control scheme with three control loops is proposed: to regulate the receiver currents to be the same, to regulate the output voltage, and to search for the maximum efficiency point. Finally, a 2 kW prototype is established to validate the performance of the proposed method. The overall system (DC-DC) efficiency reaches 90.6% at rated power and is improved by 5.8% under light load with the proposed method, compared with the traditional constant-output-voltage control method.
An Efficient Algorithm for Maximum-Entropy Extension of Block-Circulant Covariance Matrices
Carli, Francesca P; Pavon, Michele; Picci, Giorgio
2011-01-01
This paper deals with maximum entropy completion of partially specified block-circulant matrices. Since positive definite symmetric circulants happen to be covariance matrices of stationary periodic processes, in particular of stationary reciprocal processes, this problem has applications in signal processing, in particular to image modeling. Maximum entropy completion is strictly related to maximum likelihood estimation subject to certain conditional independence constraints. The maximum entropy completion problem for block-circulant matrices is a nonlinear problem which has recently been solved by the authors, although leaving open the problem of an efficient computation of the solution. The main contribution of this paper is to provide an efficient algorithm for computing the solution. Simulation shows that our iterative scheme outperforms various existing approaches, especially for large dimensional problems. A necessary and sufficient condition for the existence of a positive definite circulant completion...
Efficiency at maximum power output of quantum heat engines under finite-time operation
Wang, Jianhui; He, Jizhou; Wu, Zhaoqi
2012-03-01
We study the efficiency at maximum power, η_m, of irreversible quantum Carnot engines (QCEs) that perform finite-time cycles between a hot and a cold reservoir at temperatures T_h and T_c, respectively. For QCEs in the reversible limit (long cycle period, zero dissipation), η_m becomes identical to the Carnot efficiency η_C = 1 - T_c/T_h. For QCE cycles in which nonadiabatic dissipation and the time spent on the two adiabats are included, the efficiency η_m at maximum power output is bounded from above by η_C/(2-η_C) and from below by η_C/2. In the case of symmetric dissipation, the Curzon-Ahlborn efficiency η_CA = 1 - √(T_c/T_h) is recovered under the condition that the time allocation between the adiabats and the contact time with the reservoirs satisfy a certain relation.
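The bounds quoted above, and the Curzon-Ahlborn value they bracket, can be checked with a short sketch (the reservoir temperatures are illustrative, not values from the paper):

```python
# Bounds on efficiency at maximum power for a finite-time Carnot-type cycle:
# eta_C/2 <= eta_m <= eta_C/(2 - eta_C), with the Curzon-Ahlborn value
# recovered for symmetric dissipation.
import math

def bounds_at_max_power(t_hot, t_cold):
    """(lower, upper) bounds on efficiency at maximum power."""
    eta_c = 1.0 - t_cold / t_hot
    return eta_c / 2.0, eta_c / (2.0 - eta_c)

def eta_curzon_ahlborn(t_hot, t_cold):
    return 1.0 - math.sqrt(t_cold / t_hot)

lower, upper = bounds_at_max_power(500.0, 300.0)
eta_ca = eta_curzon_ahlborn(500.0, 300.0)
# The Curzon-Ahlborn efficiency lies between the two bounds.
```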
Maximum efficiency of state-space models of nanoscale energy conversion devices.
Einax, Mario; Nitzan, Abraham
2016-07-07
The performance of nano-scale energy conversion devices is studied in the framework of state-space models, where a device is described by a graph comprising states and transitions between them, represented by nodes and links, respectively. Particular segments of this network represent input (driving) and output processes whose properly chosen flux ratio provides the energy conversion efficiency. Simple cyclical graphs yield the Carnot efficiency for the maximum conversion yield. We give a general proof that opening a link that separates the two driving segments always leads to reduced efficiency. We illustrate these general results with simple models of a thermoelectric nanodevice and an organic photovoltaic cell. In the latter, an intersecting link of the above type corresponds to non-radiative carrier recombination, and the reduced maximum efficiency is manifested as a smaller open-circuit voltage.
Latella Ivan
2014-01-01
We analyse the process of conversion of near-field thermal radiation into usable work by considering the radiation emitted between two planar sources supporting surface phonon-polaritons. The maximum work flux that can be extracted from the radiation is obtained taking into account that the spectral flux of modes is mainly dominated by these surface modes. The thermodynamic efficiencies are discussed and an upper bound for the first law efficiency is obtained for this process.
Ouerdane, Henni; Goupil, Christophe; Lecoeur, Philippe
2014-01-01
[...] By the beginning of the 20th century, the principles of thermodynamics were summarized into the so-called four laws, which were, as it turns out, definitive negative answers to the doomed quests for perpetual motion machines. As a matter of fact, one result of Sadi Carnot's work was precisely that the heat-to-work conversion process is fundamentally limited; as such, it is considered a first version of the second law of thermodynamics. Although it was derived from Carnot's unrealistic model, the upper bound on the thermodynamic conversion efficiency, known as the Carnot efficiency, became a paradigm as the next target after the failure of the perpetual motion ideal. In the 1950s, Jacques Yvon published a conference paper containing the necessary ingredients for a new class of models, and even a formula, not so different from Carnot's efficiency, which would later become the new efficiency reference. Yvon's first analysis [...] went fairly unnoticed for twenty years, until Frank Curzon and Boye Ahlborn [...]
Design of Asymmetrical Relay Resonators for Maximum Efficiency of Wireless Power Transfer
Bo-Hee Choi
2016-01-01
This paper presents a new design method for asymmetrical relay resonators for maximum wireless power transfer. A new design method for relay resonators is needed because the maximum power transfer efficiency (PTE) is not obtained at the resonant frequency of the unit resonator; the maximum PTE for relay resonators is obtained at resonances different from that of the unit resonator. The optimum design of the asymmetrical relay is carried out through both optimum placement and optimum capacitance of the resonators. The optimum placement is found by scanning the positions of the relays, and the optimum capacitance is found using a genetic algorithm (GA). The PTEs are enhanced when the capacitance is optimally designed by the GA according to the position of the relays, and the maximum efficiency is then obtained at the optimum placement of the relays. The capacitances of the second through nth resonators and the load resistance are determined for maximum efficiency, while the capacitance of the first resonator and the source resistance are obtained for impedance matching. The simulated and measured results are in good agreement.
Rocco, Paolo; Cilurzo, Francesco; Minghetti, Paola; Vistoli, Giulio; Pedretti, Alessandro
2017-10-01
The data presented in this article are related to the article titled "Molecular Dynamics as a tool for in silico screening of skin permeability" (Rocco et al., 2017) [1]. Knowledge of the confidence interval and maximum theoretical value of the correlation coefficient r can prove useful to estimate the reliability of developed predictive models, in particular when there is great variability in compiled experimental datasets. In this Data in Brief article, data from purposely designed numerical simulations are presented to show how much the maximum r value is worsened by increasing the data uncertainty. The corresponding confidence interval of r is determined by using the Fisher r→Z transform.
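The Fisher r→Z construction mentioned above can be sketched as follows (the sample size and r value are illustrative, not the article's data):

```python
# Confidence interval for a correlation coefficient r via the Fisher
# r -> Z transform: transform, build a normal interval, back-transform.
import math

def fisher_ci(r, n, z_crit=1.96):
    """Approximate 95% CI for a sample correlation r with sample size n > 3."""
    z = math.atanh(r)                    # Fisher r -> Z transform
    se = 1.0 / math.sqrt(n - 3)          # standard error on the Z scale
    lo_z, hi_z = z - z_crit * se, z + z_crit * se
    return math.tanh(lo_z), math.tanh(hi_z)  # back-transform to the r scale

lo, hi = fisher_ci(0.8, 50)
# The interval is asymmetric around r on the original scale, reflecting
# the bounded range of the correlation coefficient.
```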
3D Navier-Stokes Simulations of a rotor designed for Maximum Aerodynamic Efficiency
Johansen, Jeppe; Madsen, Helge. Aa.; Gaunaa, Mac
2007-01-01
The present paper describes the design of a three-bladed wind turbine rotor considering only maximum aerodynamic efficiency, not structural or off-design issues. The rotor was designed assuming constant induction for most of the blade span, but near the tip region a ...
Wu, Feilong; He, Jizhou; Ma, Yongli; Wang, Jianhui
2014-12-01
We consider the efficiency at maximum power of a quantum Otto engine, which uses a spin or a harmonic system as its working substance and works between two heat reservoirs at constant temperatures T_h and T_c (T_h > T_c), operating in the linear-response regime.
Wang, Jianhui; He, Jizhou
2012-11-01
We investigate the efficiency at maximum power output (EMP) of an irreversible Carnot engine performing finite-time cycles between two reservoirs at constant temperatures T_h and T_c (T_h > T_c). The EMP remains bounded from above by the Carnot efficiency, whether the internally dissipative friction is considered or not. When the dissipations of the two "isothermal" and the two "adiabatic" processes are respectively symmetric, and the time allocation between the adiabats and the contact time with the reservoirs satisfies a certain relation, the Curzon-Ahlborn (CA) efficiency η_CA = 1 - √(T_c/T_h) is derived.
Efficiency at and near maximum power of low-dissipation heat engines.
Holubec, Viktor; Ryabov, Artem
2015-11-01
A universality in optimization of trade-off between power and efficiency for low-dissipation Carnot cycles is presented. It is shown that any trade-off measure expressible in terms of efficiency and the ratio of power to its maximum value can be optimized independently of most details of the dynamics and of the coupling to thermal reservoirs. The result is demonstrated on two specific trade-off measures. The first one is designed for finding optimal efficiency for a given output power and clearly reveals diseconomy of engines working at maximum power. As the second example we derive universal lower and upper bounds on the efficiency at maximum trade-off given by the product of power and efficiency. The results are illustrated on a model of a diffusion-based heat engine. Such engines operate in the low-dissipation regime given that the used driving minimizes the work dissipated during the isothermal branches. The peculiarities of the corresponding optimization procedure are reviewed and thoroughly discussed.
Liarte, Danilo B; Transtrum, Mark K; Catelani, Gianluigi; Liepe, Matthias; Sethna, James P
2016-01-01
We review our work on theoretical limits to the performance of superconductors in high magnetic fields parallel to their surfaces. These limits are of key relevance to current and future accelerating cavities, especially those made of new higher-$T_c$ materials such as Nb$_3$Sn, NbN, and MgB$_2$. We summarize our calculations of the so-called superheating field $H_{\mathrm{sh}}$, beyond which flux will spontaneously penetrate even a perfect superconducting surface and ruin the performance. We briefly discuss experimental measurements of the superheating field, comparing to our estimates. We explore the effects of materials anisotropy and disorder. Will we need to control surface orientation in the layered compound MgB$_2$? Can we estimate theoretically whether dirt and defects make these new materials fundamentally more challenging to optimize than niobium? Finally, we discuss and analyze recent proposals to use thin superconducting layers or laminates to enhance the performance of superconducting cavities. T...
J. G. Dyke; Kleidon, A.
2010-01-01
The Maximum Entropy Production (MEP) principle has been remarkably successful in producing accurate predictions for non-equilibrium states. We argue that this is because the MEP principle is an effective inference procedure that produces the best predictions from the available information. Since all Earth system processes are subject to the conservation of energy, mass and momentum, we argue that in practical terms the MEP principle should be applied to Earth system processes in terms of the ...
Optimum air-demand ratio for maximum aeration efficiency in high-head gated circular conduits.
Ozkan, Fahri; Tuna, M Cihat; Baylar, Ahmet; Ozturk, Mualla
2014-01-01
Oxygen is an important component of water quality and its ability to sustain life. Water aeration is the process of introducing air into a body of water to increase its oxygen saturation. Water aeration can be accomplished in a variety of ways, for instance, closed-conduit aeration. High-speed flow in a closed conduit involves air-water mixture flow. The air flow results from the subatmospheric pressure downstream of the gate. The air entrained by the high-speed flow is supplied by the air vent. The air entrained into the flow in the form of a large number of bubbles accelerates oxygen transfer and hence also increases aeration efficiency. In the present work, the optimum air-demand ratio for maximum aeration efficiency in high-head gated circular conduits was studied experimentally. Results showed that aeration efficiency increased with the air-demand ratio to a certain point and then aeration efficiency did not change with a further increase of the air-demand ratio. Thus, there was an optimum value for the air-demand ratio, depending on the Froude number, which provides maximum aeration efficiency. Furthermore, a design formula for aeration efficiency was presented relating aeration efficiency to the air-demand ratio and Froude number.
Efficiency at maximum power of thermochemical engines with near-independent particles.
Luo, Xiaoguang; Liu, Nian; Qiu, Teng
2016-03-01
Two-reservoir thermochemical engines are established using near-independent particles (Maxwell-Boltzmann, Fermi-Dirac, and Bose-Einstein particles) as the working substance. Particle and heat fluxes form under the temperature and chemical-potential gradients between two different reservoirs. A rectangular energy filter of width Γ is introduced for each engine to weaken the coupling between the particle and heat fluxes. The efficiency at maximum power of each particle system decreases monotonically from an upper bound η_+ to a lower bound η_- as Γ increases from 0 to ∞. It is found that the η_+ values for all three systems are bounded by η_C/2 ≤ η_+ ≤ η_C/(2-η_C) due to strong coupling, where η_C is the Carnot efficiency. For the Bose-Einstein system, the upper bound is approximated by the Curzon-Ahlborn efficiency η_CA = 1 - √(1-η_C). When Γ → ∞, the intrinsic maximum powers are proportional to the square of the temperature difference between the two reservoirs for all three systems, and the corresponding lower bounds on the efficiency at maximum power can be written in the common form η_- = η_C/[1 + a_0(2-η_C)].
Liarte, Danilo B.; Posen, Sam; Transtrum, Mark K.; Catelani, Gianluigi; Liepe, Matthias; Sethna, James P.
2017-03-01
Theoretical limits to the performance of superconductors in high magnetic fields parallel to their surfaces are of key relevance to current and future accelerating cavities, especially those made of new higher-T_c materials such as Nb3Sn, NbN, and MgB2. Indeed, beyond the so-called superheating field H_sh, flux will spontaneously penetrate even a perfect superconducting surface and ruin the performance. We present intuitive arguments and simple estimates for H_sh, and combine them with our previous rigorous calculations, which we summarize. We briefly discuss experimental measurements of the superheating field, comparing to our estimates. We explore the effects of materials anisotropy and the danger of disorder in nucleating vortex entry. Will we need to control surface orientation in the layered compound MgB2? Can we estimate theoretically whether dirt and defects make these new materials fundamentally more challenging to optimize than niobium? Finally, we discuss and analyze recent proposals to use thin superconducting layers or laminates to enhance the performance of superconducting cavities. Flux entering a laminate can lead to so-called pancake vortices; we consider the physics of the dislocation motion and potential re-annihilation or stabilization of these vortices after their entry.
Theoretical considerations on maximum running speeds for large and small animals.
Fuentes, Mauricio A
2016-02-01
Mechanical equations for fast running speeds are presented and analyzed. One of the equations and its associated model predict that animals tend to experience larger mechanical stresses in their limbs (muscles, tendons and bones) as a result of larger stride lengths, suggesting a structural restriction entailing the existence of an absolute maximum possible stride length. The consequence for big animals is that an increasingly larger body mass implies decreasing maximal speeds, given that the stride frequency generally decreases for increasingly larger animals. Another restriction, acting on small animals, is discussed only in preliminary terms, but it seems safe to assume from previous studies that for a given range of body masses of small animals, those which are bigger are faster. The difference between speed scaling trends for large and small animals implies the existence of a range of intermediate body masses corresponding to the fastest animals.
Theoretical study of the seasonal behavior of the global ionosphere at solar maximum
Sojka, J. J.; Schunk, R. W.
1989-01-01
The seasonal behavior of the global ionosphere was studied using a time-dependent three-dimensional physical model (developed by Schunk and his coworkers) of the ionosphere at altitudes between 120 and 800 km. This model accounts for field-aligned diffusion, cross-field electrodynamic drifts in both the equatorial region and at high latitudes, interhemispheric flow, thermospheric winds, polar wind escape, energy-dependent chemical reactions, neutral composition changes, ion production due to solar EUV radiation and auroral precipitation, thermal conduction, diffusion-thermal heat flow, and local heating and cooling processes. The model studies were carried out for both June and December solstice conditions at solar maximum and for low geomagnetic activity. The ionospheric features predicted by the model agreed qualitatively with the available measurements.
The ACT² project: Demonstration of maximum energy efficiency in real buildings
Crawley, D.B. [Pacific Northwest Lab., Richland, WA (United States); Krieg, B.L. [Pacific Gas and Electric Co., San Ramon, CA (United States)
1991-11-01
A large US utility recently began a project to determine whether the use of new energy-efficient end-use technologies and systems would economically achieve substantial energy savings (perhaps as high as 75% over current practice). Using a field-based demonstration approach, the Advanced Customer Technology Test (ACT²) for Maximum Energy Efficiency is providing information on the maximum energy savings possible when integrated packages of new high-efficiency end-use technologies are incorporated into commercial and residential buildings and industrial and agricultural processes. This paper details the underlying rationale, approach, results to date, and future plans for ACT². The ultimate goal is energy efficiency (doing more with less energy) rather than energy conservation (freezing in the dark). In this paper, we first explain why a major United States utility is committed to pursuing demand-side management so aggressively. Next, we discuss the approach the utility chose for conducting the ACT² project. We then review results obtained to date from the project's pilot demonstration site. Last, we describe other related demonstration projects being proposed by the utility.
Efficiency at maximum power output of linear irreversible Carnot-like heat engines.
Wang, Yang; Tu, Z C
2012-01-01
The efficiency at maximum power output of linear irreversible Carnot-like heat engines is investigated based on the assumption that the rate of irreversible entropy production of the working substance in each "isothermal" process is a quadratic form of the heat exchange rate between the working substance and the reservoir. It is found that the maximum power output corresponds to minimizing the irreversible entropy production in the two isothermal processes of the Carnot-like cycle, and that the efficiency at maximum power output has the form η_mP = η_C/(2 − γη_C), where η_C is the Carnot efficiency, while γ depends on the heat transfer coefficients between the working substance and the two reservoirs. The value of η_mP is bounded between η_− ≡ η_C/2 and η_+ ≡ η_C/(2 − η_C). These results are consistent with those obtained by Chen and Yan [J. Chem. Phys. 90, 3740 (1989)] based on the endoreversible assumption, those obtained by Esposito et al. [Phys. Rev. Lett. 105, 150603 (2010)] based on the low-dissipation assumption, and those obtained by Schmiedl and Seifert [Europhys. Lett. 81, 20003 (2008)] for stochastic heat engines, which in fact also satisfy the low-dissipation assumption. Additionally, we find that the endoreversible assumption happens to hold for Carnot-like heat engines operating at maximum power output under our fundamental assumption, and that the Carnot-like heat engines we focused on do not strictly satisfy the low-dissipation assumption, which implies that the low-dissipation assumption or our fundamental assumption is a sufficient but not necessary condition for the validity of η_mP = η_C/(2 − γη_C) as well as for the existence of the two bounds η_− ≡ η_C/2 and η_+ ≡ η_C/(2 − η_C).
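The closed-form result and its two bounds are easy to explore numerically. A minimal sketch (the function name is mine; γ is taken to lie in [0, 1], consistent with the bounds being reached at its endpoints):

```python
def eta_mp(eta_c, gamma):
    """Efficiency at maximum power: eta_mP = eta_C / (2 - gamma * eta_C)."""
    return eta_c / (2.0 - gamma * eta_c)

eta_c = 0.5                       # Carnot efficiency of the two reservoirs
lower = eta_mp(eta_c, gamma=0.0)  # eta_- = eta_C / 2
upper = eta_mp(eta_c, gamma=1.0)  # eta_+ = eta_C / (2 - eta_C)
```

Any intermediate γ gives an efficiency strictly between the two bounds, mirroring the continuum of dissipation scenarios discussed in the abstract.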
The maximum power efficiency 1-√τ: Research, education, and bibliometric relevance
Calvo Hernández, A.; Roco, J. M. M.; Medina, A.; Velasco, S.; Guzmán-Vargas, L.
2015-07-01
The well-known efficiency at maximum power for a cyclic system working between hot (T_h) and cold (T_c) reservoir temperatures, given by the expression 1 − √τ (τ = T_c/T_h), has become a landmark result with regard to the thermodynamic optimization of a great variety of energy converters. Its wide applicability and sole dependence on the external heat bath temperatures (as with the Carnot efficiency) allow for an easy comparison with experimental efficiencies, leading to strikingly good agreement. Reversible, finite-time, and linear-irreversible derivations are analyzed in order to give a broader perspective on its meaning from both research and pedagogical points of view. Its scientific relevance and historical development are also analyzed in this work by means of some bibliometric data. This article is supplemented with comments by Hong Qian and a final reply by the authors.
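The comparison with the Carnot bound the abstract alludes to takes two lines; a sketch with illustrative steam-plant temperatures (the numerical values are my assumptions, not data from the article):

```python
import math

def eta_carnot(t_c, t_h):
    """Carnot efficiency between reservoirs at T_c and T_h (kelvin)."""
    return 1.0 - t_c / t_h

def eta_ca(t_c, t_h):
    """Efficiency at maximum power, 1 - sqrt(tau) with tau = T_c / T_h."""
    return 1.0 - math.sqrt(t_c / t_h)

# Illustrative steam-plant temperatures (assumed values).
t_h, t_c = 838.0, 298.0
```

The 1 − √τ value always sits below the Carnot efficiency for the same pair of temperatures, which is why it compares so much more fairly with observed plant efficiencies.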
Efficiency and its bounds for thermal engines at maximum power using Newton's law of cooling.
Yan, H; Guo, Hao
2012-01-01
We study a thermal engine model for which Newton's cooling law is obeyed during heat transfer processes. The thermal efficiency and its bounds at maximum output power are derived and discussed. This model, though quite simple, can be applied not only to Carnot engines but also to four other types of engines. For the long thermal contact time limit, new bounds, tighter than what were known before, are obtained. In this case, this model can simulate Otto, Joule-Brayton, Diesel, and Atkinson engines. While in the short contact time limit, which corresponds to the Carnot cycle, the same efficiency bounds as that from Esposito et al. [Phys. Rev. Lett. 105, 150603 (2010)] are derived. In both cases, the thermal efficiency decreases as the ratio between the heat capacities of the working medium during heating and cooling stages increases. This might provide instructions for designing real engines.
Wu, Feilong; He, Jizhou; Ma, Yongli; Wang, Jianhui
2014-12-01
We consider the efficiency at maximum power of a quantum Otto engine which uses a spin or a harmonic system as its working substance and works between two heat reservoirs at constant temperatures T_h and T_c (T_c < T_h). The efficiencies at maximum power based on these two different kinds of quantum systems are bounded from above by the same expression η_mp ≤ η_+ ≡ η_C²/[η_C − (1 − η_C)ln(1 − η_C)], with η_C = 1 − T_c/T_h the Carnot efficiency. This expression η_mp possesses the same universality as the CA efficiency η_CA = 1 − √(1 − η_C) at small relative temperature difference. Within the context of irreversible thermodynamics, we calculate the Onsager coefficients and show that the value of η_CA is indeed the upper bound of the EMP for an Otto engine working in the linear-response regime.
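The bound and its small-η_C universality can be checked directly: both η_+ and η_CA expand as η_C/2 + η_C²/8 + O(η_C³), so they agree closely at small relative temperature difference. A quick numerical sketch (function names are mine):

```python
import math

def eta_plus(eta_c):
    """Upper bound eta_+ = eta_C^2 / [eta_C - (1 - eta_C) ln(1 - eta_C)]."""
    return eta_c ** 2 / (eta_c - (1.0 - eta_c) * math.log(1.0 - eta_c))

def eta_ca(eta_c):
    """Curzon-Ahlborn efficiency written in terms of eta_C."""
    return 1.0 - math.sqrt(1.0 - eta_c)
```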
Selva, J
2011-01-01
This paper presents an efficient method to compute the maximum likelihood (ML) estimate of the parameters of a complex 2-D sinusoid, with the complexity order of the FFT. The method is based on an accurate barycentric formula for interpolating band-limited signals, and on the fact that the ML cost function can be viewed as a signal of this type if the time and frequency variables are switched. The method consists in first computing the DFT of the data samples, and then locating the maximum of the cost function by means of Newton's algorithm. The complexity of the latter step is small and independent of the data size, since it makes use of the barycentric formula for obtaining the values of the cost function and its derivatives. Thus, the total complexity order is that of the FFT. The method is validated in a numerical example.
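The two-step structure (coarse DFT peak, then Newton refinement of the cost function) can be sketched in a 1-D analogue. Here direct evaluation of the periodogram and its derivatives stands in for the paper's barycentric interpolation, and the function name is mine:

```python
import numpy as np

def refine_freq(x, w0, iters=8):
    """Newton iterations on the periodogram J(w) = |X(w)|^2, starting at w0."""
    n = np.arange(len(x))
    w = w0
    for _ in range(iters):
        e = np.exp(-1j * w * n)
        X = np.dot(x, e)                  # X(w) = sum_n x_n e^{-i w n}
        X1 = np.dot(x, -1j * n * e)       # dX/dw
        X2 = np.dot(x, -(n ** 2) * e)     # d^2X/dw^2
        g = 2.0 * np.real(X1 * np.conj(X))                         # J'(w)
        h = 2.0 * np.real(X2 * np.conj(X)) + 2.0 * np.abs(X1) ** 2  # J''(w)
        w -= g / h                        # Newton step (h < 0 near the peak)
    return w

# Coarse stage: pick the DFT bin with the largest magnitude, then refine.
n = np.arange(64)
x = np.exp(1j * 0.7 * n)                  # noiseless sinusoid at w = 0.7
k = int(np.argmax(np.abs(np.fft.fft(x))))
w_hat = refine_freq(x, 2.0 * np.pi * k / len(x))
```

As in the paper, the refinement cost per iteration is independent of the data size once the coarse DFT is available (here it is O(N) per step only because the cost function is evaluated naively).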
Apertet, Y; Ouerdane, H; Goupil, C; Lecoeur, Ph
2012-03-01
Energy conversion efficiency at maximum output power, which embodies the essential characteristics of heat engines, is the main focus of the present work. The so-called Curzon and Ahlborn efficiency η_CA is commonly believed to be an absolute reference for real heat engines; however, a different but general expression for the case of stochastic heat engines, η_SS, was recently found and then extended to low-dissipation engines. The discrepancy between η_CA and η_SS is here analyzed considering different irreversibility sources of heat engines, of both internal and external types. To this end, we choose a thermoelectric generator operating in the strong-coupling regime as a physical system to qualitatively and quantitatively study the impact of the nature of irreversibility on the efficiency at maximum output power. In the limit of pure external dissipation, we obtain η_CA, while η_SS corresponds to the case of pure internal dissipation. A continuous transition from one extreme to the other, which may be operated by tuning the different sources of irreversibility, is also evidenced.
Stysley, Paul; Coyle, Barry; Clarke, Greg; Poulios, Demetrios; Kay, Richard
2015-01-01
The Global Ecosystems Dynamics Investigation (GEDI) is a planned mission sending a LIDAR instrument to the International Space Station that will employ three NASA laser transmitters. This instrument will produce parallel tracks on the Earth's surface that will provide global 3D vegetation canopy measurements. To meet the mission goals, a total of 5 High Output Maximum Efficiency Resonator (HOMER) lasers will be built (1 ETU + 3 flight + 1 spare) in-house at NASA-GSFC. This presentation will summarize the HOMER design, the testing the design has completed in the past, and the plans to successfully build the units needed for the GEDI mission.
Efficient Photovoltaic System Maximum Power Point Tracking Using a New Technique
Mehdi Seyedmahmoudian
2016-03-01
Partial shading is an unavoidable condition which significantly reduces the efficiency and stability of a photovoltaic (PV) system. When partial shading occurs, the system has multiple-peak output power characteristics. In order to track the global maximum power point (GMPP) within an appropriate period, a reliable technique is required. Conventional techniques such as hill climbing and perturb and observe (P&O) are inadequate in tracking the GMPP under this condition, resulting in a dramatic reduction in the efficiency of the PV system. Recent artificial intelligence methods have been proposed; however, they have a higher computational cost, slower processing time, and increased oscillations, which results in further instability at the output of the PV system. This paper proposes a fast and efficient technique based on Radial Movement Optimization (RMO) for detecting the GMPP under partial shading conditions. The paper begins with a brief description of the behavior of PV systems under partial shading conditions, followed by the introduction of the new RMO-based technique for GMPP tracking. Finally, results are presented to demonstrate the performance of the proposed technique under different partial shading conditions. The results are compared with those of the PSO method, one of the most widely used methods in the literature. Four factors, namely convergence speed, efficiency (power loss reduction), stability (oscillation reduction), and computational cost, are considered in the comparison with the PSO technique.
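The multiple-peak problem can be illustrated with a toy P-V curve (the curve, its numbers, and the function names are all hypothetical, and this is not the paper's RMO algorithm): a hill-climbing tracker started on the wrong hill settles on a local peak, while even a coarse global scan finds the higher one.

```python
def power_curve(v):
    """Toy two-peak P-V curve mimicking partial shading (made-up numbers)."""
    peak1 = max(0.0, 40.0 - (v - 10.0) ** 2)        # shaded-string peak
    peak2 = max(0.0, 90.0 - (v - 30.0) ** 2 / 2.0)  # unshaded-string peak
    return peak1 + peak2

def hill_climb(p, v, step=0.5):
    """P&O-style local search: move while power keeps increasing."""
    while p(v + step) > p(v):
        v += step
    return v

v_local = hill_climb(power_curve, 8.0)   # trapped on the first hill, v = 10
v_global = max((i * 0.5 for i in range(81)), key=power_curve)  # coarse scan
```

Metaheuristics such as RMO or PSO are, in essence, smarter versions of the global stage that avoid exhaustively scanning the voltage range.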
Bounds and phase diagram of efficiency at maximum power for tight-coupling molecular motors.
Tu, Z C
2013-02-01
The efficiency at maximum power (EMP) for tight-coupling molecular motors is investigated within the framework of irreversible thermodynamics. It is found that the EMP depends merely on the constitutive relation between the thermodynamic current and force. The motors are classified into four generic types (linear, superlinear, sublinear, and mixed types) according to the characteristics of the constitutive relation, and then the corresponding ranges of the EMP for these four types of molecular motors are obtained. The exact bounds of the EMP are derived and expressed as the explicit functions of the free energy released by the fuel in each motor step. A phase diagram is constructed which clearly shows how the region where the parameters (the load distribution factor and the free energy released by the fuel in each motor step) are located can determine whether the value of the EMP is larger or smaller than 1/2. This phase diagram reveals that motors using ATP as fuel under physiological conditions can work at maximum power with higher efficiency (> 1/2) for a small load distribution factor (< 0.1).
Raynald Labrecque
2009-11-01
It is known that mechanical work, and in turn electricity, can be produced from a difference in chemical potential that may result from a salinity gradient. Such a gradient may be found, for instance, in an estuary where a stream of fresh water floods into a sink of salty water such as an ocean, gulf, or salt lake. Various technological approaches are proposed for the production of energy from a salinity gradient between a stream of fresh water and a source of salty water. Before considering the implementation of a typical technology, it is of utmost importance to be able to compare the various technological approaches on the same basis, using the appropriate variables and mathematical formulations. In this context, exergy balance can become a very useful tool for an easy and quick evaluation of the maximum thermodynamic work that can be produced from energy systems. In this short paper, we briefly introduce the use of exergy to easily and quickly assess the theoretical maximum power, or ideal reversible work, we may expect from typical salinity gradient energy systems.
THE EFFICIENCY OF TECHNOLOGY TRANSFER – THEORETICAL AND METHODOLOGICAL APPROACH
Andreea-Clara MUNTEANU
2006-06-01
As the importance and complexity of technology transfer have increased, the need for adequate systems of assessing the efficiency of this process has become all the more obvious. Introducing sustainability criteria requires the creation of a complex framework for analysing and studying efficiency that would incorporate the three dimensions of contemporary economic development: economic, social, and environmental.
Simulation of maximum light use efficiency for some typical vegetation types in China
[No author listed]
2006-01-01
Maximum light use efficiency (εmax) is a key parameter for the estimation of net primary productivity (NPP) derived from remote sensing data. There are still many divergences about its value for each vegetation type. The εmax for some typical vegetation types in China is simulated using a modified least squares function based on NOAA/AVHRR remote sensing data and field-observed NPP data. The vegetation classification accuracy is introduced to the process. A sensitivity analysis of εmax to vegetation classification accuracy is also conducted. The results show that the simulated values of εmax are greater than the value used in the CASA model and less than the values simulated with the BIOME-BGC model. This is consistent with some other studies. The relative error of εmax resulting from classification accuracy is −5.5% to 8.0%. This indicates that the simulated values of εmax are reliable and stable.
Smolin, John A; Gambetta, Jay M; Smith, Graeme
2012-02-17
We provide an efficient method for computing the maximum-likelihood mixed quantum state (with density matrix ρ) given a set of measurement outcomes in a complete orthonormal operator basis subject to Gaussian noise. Our method works by first changing basis, yielding a candidate density matrix μ which may have nonphysical (negative) eigenvalues, and then finding the nearest physical state under the 2-norm. Our algorithm takes at worst O(d^4) for the basis change plus O(d^3) for finding ρ, where d is the dimension of the quantum state. In the special case where the measurement basis is strings of Pauli operators, the basis change takes only O(d^3) as well. The workhorse of the algorithm is a new linear-time method for finding the closest probability distribution (in Euclidean distance) to a set of real numbers summing to one.
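The projection step can be sketched with NumPy. Here a standard sort-based O(d log d) Euclidean projection onto the probability simplex stands in for the paper's linear-time variant (it produces the same result); function names are mine:

```python
import numpy as np

def project_to_simplex(y):
    """Closest probability vector to y in Euclidean distance."""
    u = np.sort(y)[::-1]                     # eigenvalues, descending
    css = np.cumsum(u)
    j = np.arange(1, len(y) + 1)
    rho = j[u + (1.0 - css) / j > 0][-1]     # largest index kept positive
    shift = (1.0 - css[rho - 1]) / rho
    return np.maximum(y + shift, 0.0)

def nearest_density_matrix(mu):
    """Nearest unit-trace positive semidefinite matrix to Hermitian mu (2-norm)."""
    w, v = np.linalg.eigh(mu)
    p = project_to_simplex(w)                # fix the spectrum only
    return (v * p) @ v.conj().T              # rebuild in the same eigenbasis

# Example: a candidate mu with trace 1 but one negative eigenvalue.
mu = np.diag([0.6, 0.5, -0.1])
rho = nearest_density_matrix(mu)
```

For this example the negative eigenvalue is zeroed and its weight is shared equally by the remaining two (0.6, 0.5, −0.1 → 0.55, 0.45, 0), exactly the redistribution behavior the paper describes.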
Maximum Efficiency of Thermoelectric Heat Conversion in High-Temperature Power Devices
V. I. Khvesyuk
2016-01-01
Modern trends in aircraft engineering are moving toward fifth-generation vehicles, whose features motivate the use of new high-performance systems of onboard power supply. The operating temperature of the outer walls of engines is 800–1000 K, corresponding to a radiation heat flux of 10 kW/m². The thermal energy, including radiation from the engine wall, may potentially be converted into electricity. The main objective of this paper is to analyze whether it is possible to use high-efficiency thermoelectric conversion of heat into electricity. The paper considers issues such as working processes, choice of materials, and optimization of thermoelectric conversion. It presents the analysis results of operating conditions of a thermoelectric generator (TEG) used in advanced high-temperature power devices. A high-temperature heat source is a favorable factor for thermoelectric conversion of heat. It is shown that for existing thermoelectric materials a theoretical conversion efficiency can reach the level of 15–20% at temperatures up to 1500 K and available values of the Ioffe parameter ZT = 2–3 (Z is the figure of merit, T is temperature). To ensure the temperature regime and high-efficiency thermoelectric conversion simultaneously, it is necessary to have a certain match between TEG power, the temperatures of the hot and cold surfaces, and the heat transfer coefficient of the cooling system. The paper discusses a concept of a radiation absorber on the TEG hot surface. The analysis has demonstrated a number of possibilities for highly efficient conversion through using the TEG in high-temperature power devices. This work has been implemented under support of the Ministry of Education and Science of the Russian Federation, project No. 1145 (the programme "Organization of Research Engineering Activities").
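The quoted 15–20% range is reproduced by the standard constant-property expression for maximum TEG efficiency, η = η_C(√(1+ZT) − 1)/(√(1+ZT) + T_c/T_h). A sketch (the 800 K cold-side temperature and the function name are my assumptions, chosen to match the wall temperatures given above):

```python
import math

def teg_max_efficiency(t_h, t_c, zt):
    """Maximum efficiency of a TEG with constant properties and mean figure of merit ZT."""
    eta_c = 1.0 - t_c / t_h
    m = math.sqrt(1.0 + zt)
    return eta_c * (m - 1.0) / (m + t_c / t_h)

# T_h = 1500 K with an assumed 800 K cold side, ZT = 2..3.
eff_low = teg_max_efficiency(1500.0, 800.0, 2.0)   # ~15%
eff_high = teg_max_efficiency(1500.0, 800.0, 3.0)  # ~18%
```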
SONG HanJiang; CHEN LinGen; SUN FengRui
2008-01-01
Optimal configuration of a class of endoreversible heat engines with fixed duration, input energy, and radiative heat transfer law (q ∝ Δ(T⁴)) is determined. The optimal cycle that maximizes the efficiency of the heat engine is obtained by using optimal-control theory, and the differential equations are solved by Taylor series expansion. It is shown that the optimal cycle has eight branches, including two isothermal branches, four maximum-efficiency branches, and two adiabatic branches. The interval of each branch is obtained, as well as the solutions for the temperatures of the heat reservoirs and the working fluid. A numerical example is given. The obtained results are compared with those obtained with Newton's heat transfer law for the maximum efficiency objective, those with the linear phenomenological heat transfer law for the maximum efficiency objective, and those with the radiative heat transfer law for the maximum power output objective.
Theoretical efficiency of nanostructured graphene-based photovoltaics.
Yong, Virginia; Tour, James M
2010-01-01
Graphene-based organic photovoltaics (OPVs) have the potential for single-cell efficiencies exceeding 12% (and 24% in a stacked structure). A generalized equivalent circuit for OPVs is proposed and the validation of the proposed models is verified by simulation. The simulated short-circuit photocurrent density (computed using the simulated incident photon flux density and quantum yield), simulated current-voltage curve, and simulated 3D surface and 2D contour plots of solar-power-conversion efficiency versus carrier mobility and photoactive layer thickness are in good agreement with experimental observations. The results suggest that graphene renders a credible material for the construction of next-generation flexible solar-energy-conversion devices that are low-cost, high-efficiency, thermally stable, environmentally friendly, and lightweight.
Tripathi, Brijesh; Sircar, Ratna
2016-09-01
The maximum performance of a nc-Si:H/a-Si:H quantum well (QW) solar cell is theoretically evaluated by studying the spectral absorption of incident radiation with respect to the number of inserted nc-Si:H quantum well layers. Fundamental intrinsic properties of a-Si:H and nc-Si:H materials reported in the literature have been used to evaluate the performance parameters. Enhanced spectral absorption is recorded due to the insertion of nc-Si:H quantum well layers in the intrinsic region of the a-Si:H solar cell. By inserting 50 QW layers of nc-Si:H in the intrinsic region of the a-Si:H solar cell, the short-circuit current density (J_SC) increases by ∼100% compared to the baseline, whereas the open-circuit voltage (V_OC) decreases by ∼38%. The decrease in V_OC is explained on the basis of quasi-Fermi level separation under the illuminated state of the solar cell. The theoretical maximum efficiency, combining the increase in J_SC and the decrease in V_OC, increases by ∼24% in comparison with the baseline when calculated using an ideal carrier lifetime value. With a realistic carrier lifetime of state-of-the-art a-Si:H solar cells, the addition of QWs does not yield any significant gain. From this study, it is concluded that a high carrier lifetime is required to gain a noteworthy benefit from the nc-Si:H/a-Si:H QWs.
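The quoted numbers are mutually consistent if the fill factor is taken as unchanged (my simplifying assumption), since cell efficiency scales as J_SC × V_OC × FF:

```python
jsc_factor = 2.00   # short-circuit current density doubles (+100%)
voc_factor = 0.62   # open-circuit voltage falls by ~38%
eta_factor = jsc_factor * voc_factor  # fill factor assumed constant
# eta_factor = 1.24, i.e. the ~24% relative efficiency gain quoted above
```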
Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors
Langbein, John O.
2017-01-01
Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α, with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi:10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
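The covariance-based likelihood being accelerated can be written down directly for small n. This is a minimal dense-matrix sketch using the standard fractional-difference (Hosking) recursion for the power-law filter and the usual quadrature combination of noise sources (i.e., the baseline approach the paper improves upon, with no attempt at its efficiency gains); function names are mine:

```python
import numpy as np

def powerlaw_cov(n, alpha):
    """Covariance of unit-amplitude power-law (1/f^alpha) noise over n samples.

    Uses the fractional-difference filter h_0 = 1,
    h_k = h_{k-1} (k - 1 + alpha/2) / k, and C = T T^T, where T is the
    lower-triangular Toeplitz matrix built from the filter coefficients.
    """
    h = np.zeros(n)
    h[0] = 1.0
    for k in range(1, n):
        h[k] = h[k - 1] * (k - 1.0 + alpha / 2.0) / k
    t = np.zeros((n, n))
    for i in range(n):
        t[i, : i + 1] = h[: i + 1][::-1]
    return t @ t.T

def log_likelihood(r, sig_w, sig_pl, alpha):
    """Gaussian log-likelihood of residuals r under white + power-law noise."""
    n = len(r)
    c = sig_w ** 2 * np.eye(n) + sig_pl ** 2 * powerlaw_cov(n, alpha)
    l = np.linalg.cholesky(c)              # the O(n^3) step MLE must repeat
    z = np.linalg.solve(l, r)
    logdet = 2.0 * np.sum(np.log(np.diag(l)))
    return -0.5 * (n * np.log(2.0 * np.pi) + logdet + z @ z)

# White-noise-only sanity case: C = I, zero residuals.
ll0 = log_likelihood(np.zeros(4), 1.0, 0.0, 1.0)
```

An MLE driver would maximize `log_likelihood` over (sig_w, sig_pl, alpha); the Cholesky factorization inside the loop is exactly the cost that motivates the speedups discussed above.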
Theoretical Analysis of Measurement in Operation Efficiency in Optical Cable Transmission Networks
[No author listed]
2000-01-01
It is necessary to study the dynamic operation efficiency of transmission networks in order to realize high intensification of communication networks. The operation efficiency discussed here should exist not only in the logic-circuit layer, but also in both the path layer and the medium layer. A theoretical method for the measurement of layers and comprehensive evaluations is presented based on the concept of transmission efficiency.
Basko, M. M.
2016-08-01
Theoretical investigation has been performed on the conversion efficiency (CE) into the 13.5-nm extreme ultraviolet (EUV) radiation in a scheme where spherical microspheres of tin (Sn) are simultaneously irradiated by two laser pulses with substantially different wavelengths. The low-intensity short-wavelength pulse is used to control the rate of mass ablation and the size of the EUV source, while the high-intensity long-wavelength pulse provides efficient generation of the EUV light at λ = 13.5 nm. The problem of full optimization for maximizing the CE is formulated and solved numerically by performing two-dimensional radiation-hydrodynamics simulations with the RALEF-2D code under the conditions of steady-state laser illumination. It is shown that, within the implemented theoretical model, steady-state CE values approaching 9% are feasible; in a transient peak, the maximum instantaneous CE of 11.5% was calculated for the optimized laser-target configuration. The physical factors bringing down the fully optimized steady-state CE to about one half of the absolute theoretical maximum of CE ≈ 20% for the uniform static Sn plasma are analyzed in detail.
Murphy, Patrick Charles
1985-01-01
An algorithm for maximum likelihood (ML) estimation is developed with an efficient method for approximating the sensitivities. The algorithm was developed for airplane parameter estimation problems but is well suited for most nonlinear, multivariable, dynamic systems. The ML algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort. MNRES determines the sensitivities with less computational effort than using either a finite-difference method or integrating the analytically determined sensitivity equations. MNRES eliminates the need to derive sensitivity equations for each new model, thus eliminating algorithm reformulation with each new model and providing flexibility to use model equations in any format that is convenient. A random search technique for determining the confidence limits of ML parameter estimates is applied to nonlinear estimation problems for airplanes. The confidence intervals obtained by the search are compared with Cramer-Rao (CR) bounds at the same confidence level. It is observed that the degree of nonlinearity in the estimation problem is an important factor in the relationship between CR bounds and the error bounds determined by the search technique. The CR bounds were found to be close to the bounds determined by the search when the degree of nonlinearity was small. Beale's measure of nonlinearity is developed in this study for airplane identification problems; it is used to empirically correct confidence levels for the parameter confidence limits. The primary utility of the measure, however, was found to be in predicting the degree of agreement between Cramer-Rao bounds and search estimates.
Jacob N. Chung
2014-01-01
Two concept systems based on the thermochemical process of high-temperature steam gasification of lignocellulosic biomass and municipal solid waste are introduced. The primary objectives of the concept systems are (1) to develop the best scientific, engineering, and technology solutions for converting lignocellulosic biomass, as well as agricultural, forest, and municipal waste, to clean energy (pure hydrogen fuel), and (2) to minimize water consumption and the detrimental impacts of energy production on the environment (air pollution and global warming). In the first concept system, superheated steam is produced by combustion of recycled hydrogen, while in the second concept system concentrated solar energy is used for steam production. A membrane reactor that performs hydrogen separation and the water-gas shift reaction is included in both systems to produce more pure hydrogen and to enable CO2 sequestration. To obtain the maximum hydrogen production rate, the hydrogen recycle ratio is around 20% for the hydrogen-combustion steam-heating system. Combined with pure hydrogen production, both high-temperature steam gasification systems potentially achieve more than 80% overall first-law thermodynamic efficiency.
Adzhavenko Maryna M.
2014-02-01
Modern economic conditions pose a new problem for scientists, namely the capability of an enterprise to survive in an unfavourable external environment. This problem is systemic and complex, and its solution lies within the planes of management of capital, personnel, development, efficiency, etc. The article notes that efficiency is a cornerstone of modern economic science, which justifies studying the gnoseological essence of the efficiency category. The main goal of the article is to study the scientific and theoretical grounds of the formation of enterprise development efficiency under modern conditions of changing internal and external environments. Further goals are to identify the essence of the development efficiency category and to deepen the theoretical foundation for assessing the efficiency of enterprise development in modern economic science. The article conducts an ontological analysis of the essence and goals of the enterprise development efficiency notion, studies the evolution of scientific approaches, and systematizes the theoretical provisions of the specified category and their assessment in economic science. As a result of the study, the article identifies a new vector of theoretical grounds and a dominant logic for forming a methodology of assessing the efficiency of enterprises under conditions of the innovative development of the state; namely, it underlines the principles of systematicity, complexity, and self-organisation, and the significance of human capital as an important factor in increasing efficiency and development. Developing the methodological grounds for assessing the efficiency of enterprise innovation development is a promising direction for further studies.
Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates
Laurence, T; Chromy, B
2009-11-10
Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the Levenberg-Marquardt algorithm, commonly used for nonlinear least squares minimization, for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the MLE for Poisson distributed data, rather than the nonlinear least squares measure. The algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm and is simple to implement, quick, and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Nonlinear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian; however, this criterion is not easy to satisfy in practice, since it requires a large number of events. It has been known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper provides an extensive characterization of these biases for exponential fitting. The more appropriate measure is based on the maximum likelihood estimator (MLE).
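The extension described above can be sketched as follows: replace the least-squares merit function by the Poisson deviance D = 2Σ(m_i − n_i ln m_i) and take Fisher-weighted Gauss-Newton steps with the usual L-M damping. This is a minimal illustrative implementation consistent with the abstract, not the authors' code; the decay model and starting values are hypothetical.

```python
import numpy as np

def poisson_lm_fit(t, counts, model, jac, theta0, lam=1e-3, iters=200):
    """Levenberg-Marquardt minimisation of the Poisson MLE merit function
    D(theta) = 2*sum(m - n*ln m), dropping theta-independent terms."""
    theta = np.asarray(theta0, float)

    def deviance(m):
        return 2.0 * np.sum(m - counts * np.log(m))

    d_old = deviance(model(t, theta))
    for _ in range(iters):
        m = model(t, theta)
        J = jac(t, theta)                          # shape (n_bins, n_params)
        g = J.T @ (1.0 - counts / m)               # half-gradient of D
        A = J.T @ ((counts / m**2)[:, None] * J)   # Fisher (Gauss-Newton) approximation
        step = np.linalg.solve(A + lam * np.diag(np.diag(A)), -g)
        m_try = model(t, theta + step)
        if np.all(m_try > 0) and deviance(m_try) < d_old:
            theta, d_old, lam = theta + step, deviance(m_try), lam * 0.3
        else:
            lam *= 10.0                            # reject step, increase damping
    return theta

# Hypothetical usage: fluorescence-decay histogram m(t) = a*exp(-t/tau) + b
def decay(t, th):
    a, tau, b = th
    return a * np.exp(-t / tau) + b

def decay_jac(t, th):
    a, tau, b = th
    e = np.exp(-t / tau)
    return np.column_stack([e, a * e * t / tau**2, np.ones_like(t)])
```

Note that counts of zero are handled naturally, since the deviance only takes logarithms of the model values, never of the data.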
Izumida, Yuki; Okuda, Koji
2014-05-01
We formulate the work output and efficiency for linear irreversible heat engines working between a finite-sized hot heat source and an infinite-sized cold heat reservoir until the total system reaches the final thermal equilibrium state with a uniform temperature. We prove that when the heat engines operate at the maximum power under the tight-coupling condition without heat leakage the work output is just half of the exergy, which is known as the maximum available work extracted from a heat source. As a consequence, the corresponding efficiency is also half of its quasistatic counterpart.
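The factor of one half can be made plausible (a heuristic sketch under the linear-response assumption, for a source of constant heat capacity C) by integrating the efficiency at maximum power, η ≈ η_C/2, as the source cools from T_h to T_c:

```latex
W_{\max P} \;=\; \int_{T_c}^{T_h} \frac{1}{2}\Bigl(1-\frac{T_c}{T}\Bigr)\, C\,\mathrm{d}T
\;=\; \frac{1}{2}\Bigl[\,C\,(T_h-T_c)\;-\;C\,T_c\,\ln\frac{T_h}{T_c}\Bigr]
\;=\; \frac{X}{2},
```

where X = C(T_h − T_c) − C T_c ln(T_h/T_c) is exactly the exergy (maximum available work) of the finite source; the paper's proof for tight coupling without heat leakage is more general than this sketch.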
Maheshwari, Govind; Chaudhary, S; Somani, S.K
2010-01-01
The efficient power, defined as the product of power output and efficiency of the engine, is taken as the objective for performance analysis and optimization of an endoreversible combined Carnot heat...
Sheng, Shiqi; Tu, Z C
2015-02-01
We present a unified perspective on nonequilibrium heat engines by generalizing nonlinear irreversible thermodynamics. For tight-coupling heat engines, a generic constitutive relation for nonlinear response accurate up to the quadratic order is derived from the stalling condition and the symmetry argument. By applying this generic nonlinear constitutive relation to finite-time thermodynamics, we obtain the necessary and sufficient condition for the universality of efficiency at maximum power, which states that a tight-coupling heat engine takes the universal efficiency at maximum power up to the quadratic order if and only if either the engine symmetrically interacts with two heat reservoirs or the elementary thermal energy flowing through the engine matches the characteristic energy of the engine. Hence we solve the following paradox: On the one hand, the quadratic term in the universal efficiency at maximum power for tight-coupling heat engines turned out to be a consequence of symmetry [Esposito, Lindenberg, and Van den Broeck, Phys. Rev. Lett. 102, 130602 (2009); Sheng and Tu, Phys. Rev. E 89, 012129 (2014)]; On the other hand, typical heat engines such as the Curzon-Ahlborn endoreversible heat engine [Curzon and Ahlborn, Am. J. Phys. 43, 22 (1975)] and the Feynman ratchet [Tu, J. Phys. A 41, 312003 (2008)] recover the universal efficiency at maximum power regardless of any symmetry.
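The "universal efficiency at maximum power up to the quadratic order" referred to here is η = η_C/2 + η_C²/8 + O(η_C³); a quick numerical check against the Curzon-Ahlborn value 1 − √(1 − η_C):

```python
from math import sqrt

def eta_CA(eta_C):
    """Curzon-Ahlborn efficiency at maximum power."""
    return 1.0 - sqrt(1.0 - eta_C)

def eta_universal(eta_C):
    """Universal expansion up to quadratic order in the Carnot efficiency."""
    return eta_C / 2.0 + eta_C**2 / 8.0

# The difference is O(eta_C^3): the cubic term of eta_CA is eta_C^3/16.
for ec in (0.01, 0.05, 0.1, 0.2):
    print(ec, eta_CA(ec) - eta_universal(ec))
```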
Li, Yonghui; Wu, Qiuwei; Zhu, Haiyu
2015-01-01
Based on the benchmark solid oxide fuel cell (SOFC) dynamic model for power system studies and the analysis of the SOFC operating conditions, the nonlinear programming (NLP) optimization method was used to determine the maximum electrical efficiency of the grid-connected SOFC subject...
Boiroux, Dimitri; Juhl, Rune; Madsen, Henrik;
2016-01-01
This paper addresses maximum likelihood parameter estimation of continuous-time nonlinear systems with discrete-time measurements. We derive an efficient algorithm for the computation of the log-likelihood function and its gradient, which can be used in gradient-based optimization algorithms...
Bergboer, N.H; Verdult, V.; Verhaegen, M.H.G.
2002-01-01
We present a numerically efficient implementation of the nonlinear least squares and maximum likelihood identification of multivariable linear time-invariant (LTI) state-space models. This implementation is based on a local parameterization of the system and a gradient search in the resulting parameter space.
Maximum efficiency of steady-state heat engines at arbitrary power.
Ryabov, Artem; Holubec, Viktor
2016-05-01
We discuss the efficiency of a heat engine operating in a nonequilibrium steady state maintained by two heat reservoirs. Within the general framework of linear irreversible thermodynamics we derive a universal upper bound on the efficiency of the engine operating at arbitrary fixed power. Furthermore, we show that a slight decrease of the power below its maximal value can lead to a significant gain in efficiency. The presented analysis yields the exact expression for this gain and the corresponding upper bound.
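The square-root character of the gain can be illustrated with a toy tight-coupling linear-response model (an assumption for illustration, not the paper's exact bound): parametrizing the operating point by x ∈ [0, 1] with P(x) = 4 P_max x(1 − x) and η(x) = η_C x, the upper branch gives η(P) = (η_C/2)(1 + √(1 − P/P_max)), so a small power sacrifice δP buys an efficiency gain proportional to √(δP/P_max).

```python
from math import sqrt

def eta_at_power(p, eta_C):
    """Efficiency on the upper branch of a tight-coupling linear-response
    engine versus normalized power p = P/P_max (illustrative toy model)."""
    return 0.5 * eta_C * (1.0 + sqrt(1.0 - p))

eta_C = 0.5
at_max = eta_at_power(1.0, eta_C)   # eta_C/2 at maximum power
at_95 = eta_at_power(0.95, eta_C)   # after sacrificing 5% of the power
gain = (at_95 - at_max) / at_max    # relative efficiency gain, ~sqrt(0.05)
```

In this model a 5% power reduction yields roughly a 22% relative gain in efficiency, which is the qualitative effect the abstract describes.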
Ortega-Casanova, Joaquin; Fernandez-Feria, Ramon
2015-11-01
The thrust generated by two heaving plates in tandem is analysed for two particular sets of configurations of interest in forward flight: a plunging leading plate with the trailing plate at rest, and the two plates heaving with the same frequency and amplitude, but varying the phase difference. The thrust efficiency of the leading plate is augmented in relation to a single plate heaving with the same frequency and amplitude in most cases. In the first configuration, we characterize the range of nondimensional heaving frequencies and amplitudes of the leading plate for which the stationary trailing plate contributes positively to the global thrust. The maximum global thrust efficiency, reached for an advance ratio slightly less than unity and a reduced frequency close to 5, is about the same as the maximum efficiency for an isolated plate. But for low frequencies the tandem configuration with the trailing plate at rest is more thrust efficient than the isolated plate. In the second configuration, we find that the maximum thrust efficiency is reached for a phase lag of 180° (counterstroking), particularly for an advance ratio of unity and a reduced frequency of 4.4, and it is practically the same as in the other configuration and as that for a single plate. Supported by the Ministerio de Economía y Competitividad of Spain Grant no. DPI2013-40479-P.
Ruslana Sushko
2015-08-01
Purpose: to identify the factors of efficiency of competitive activity of highly skilled basketball players at the stage of maximum realization of individual potential. Material and Methods: in order to identify the factors that have supported the performance of Ukraine's men's national team in the European Championship, the following were used: analysis and generalization of data from scientific and technical literature and online sources, analysis of official protocols of competitive activities, analysis and generalization of best pedagogical practices, pedagogical observation, and methods of mathematical statistics. Results: the efficiency of competitive activity of the basketball players was analyzed using such indicators as team roles, won and lost matches, scored and conceded points, and technical, tactical, and age indicators. Conclusions: the factors of efficiency of competitive activity of highly skilled basketball players at the stage of maximum realization of individual potential were identified with regard to age indicators.
Design, Development and Testing of a PC Based One Axis Sun Tracking System for Maximum Efficiency
Sonu AGARWAL
2011-08-01
Solar energy is a clean source of energy, and a photovoltaic (PV) solar panel converts solar radiation into voltage. The PV solar panel produces maximum power when the incident angle of sunlight is 90°. In the present paper a PC-based one-axis sun tracking system is described that keeps the PV solar panel perpendicular to the incident sunlight for maximum solar power utilization. A computer-controlled stepper motor provides motion to the photovoltaic panel. An LDR (light-dependent resistor) is used as the photosensor to sense the incident solar radiation. The implementation comprises an optical-to-electrical signal conversion circuit, an analog-to-digital conversion circuit, a motor driving circuit, and a parallel-port interface to the PC. Experimental results are included to validate the system performance.
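The tracking decision itself reduces to comparing two photosensor readings and stepping toward the brighter side. The function below is a hypothetical simplification of such a control loop; the ADC and parallel-port details of the paper are omitted, and the deadband value is an assumption.

```python
def tracking_step(ldr_east, ldr_west, deadband=0.05):
    """Decide one stepper move from two normalized LDR readings (0..1).
    Returns +1 (step east), -1 (step west), or 0 (aligned within deadband).
    Hypothetical logic, not the paper's circuit-level implementation."""
    error = ldr_east - ldr_west
    if error > deadband:
        return +1
    if error < -deadband:
        return -1
    return 0
```

Repeating this decision at a fixed interval drives the panel until the two sensors are balanced, i.e., the panel faces the sun.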
Lemofouet, Sylvain; Rufer, Alfred
This paper presents a hybrid energy storage system mainly based on Compressed Air, where the storage and withdrawal of energy are done within maximum efficiency conditions. As these maximum efficiency conditions impose the level of converted power, an intermittent time-modulated operation mode is applied to the thermodynamic converter to obtain a variable converted power. A smoothly variable output power is achieved with the help of a supercapacitive auxiliary storage device used as a filter. The paper describes the concept of the system, the power-electronic interfaces and especially the Maximum Efficiency Point Tracking (MEPT) algorithm and the strategy used to vary the output power. In addition, the paper introduces more efficient hybrid storage systems where the volumetric air machine is replaced by an oil-hydraulics and pneumatics converter, used under isothermal conditions. Practical results are also presented, recorded from a low-power air motor coupled to a small DC generator, as well as from a first prototype of the hydro-pneumatic system. Some economical considerations are also made, through a comparative cost evaluation of the presented hydro-pneumatic systems and a lead acid batteries system, in the context of a stand alone photovoltaic home application. This evaluation confirms the cost effectiveness of the presented hybrid storage systems.
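The MEPT algorithm is described here only by name; a plausible perturb-and-observe sketch, analogous to MPPT hill climbing but tracking measured conversion efficiency rather than power, would look like the following. The function names and the toy efficiency curve are assumptions for illustration.

```python
def mept_step(power_setpoint, efficiency, state, delta=0.05):
    """One perturb-and-observe iteration of a hypothetical Maximum
    Efficiency Point Tracking loop: keep perturbing the converted-power
    setpoint in the direction that increased the measured efficiency.
    `state` carries (last_efficiency, last_direction)."""
    last_eff, direction = state
    if efficiency < last_eff:
        direction = -direction          # efficiency dropped: reverse direction
    return power_setpoint + direction * delta, (efficiency, direction)

# Toy efficiency curve of the thermodynamic converter, maximum at P = 1.0
def eff(p):
    return 0.7 - 0.3 * (p - 1.0) ** 2
```

Starting from an arbitrary setpoint, the loop climbs to the maximum-efficiency power level and then oscillates around it; the supercapacitive buffer described above absorbs the resulting power ripple.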
INVESTIGATION OF VEHICLE WHEEL ROLLING WITH MAXIMUM EFFICIENCY IN THE BRAKE MODE
D. Leontev
2011-01-01
Modern vehicles are equipped with various systems for automatic control of braking effort, the calculation of whose parameters does not, as a rule, have a rational solution. In order to increase the working efficiency of such systems it is necessary to have data concerning the impact of various operational factors on the processes occurring during braking of the object of adjustment (the vehicle wheel). Availability of such data allows the geometrical parameters of the adjustment devices (modulators) to be decreased while maintaining their efficient operation under the various driving conditions of the vehicle.
Maximum-Likelihood Detection for Energy-Efficient Timing Acquisition in NB-IoT
2016-01-01
Initial timing acquisition in narrow-band IoT (NB-IoT) devices is done by detecting a periodically transmitted known sequence. The detection has to be done at lowest possible latency, because the RF-transceiver, which dominates downlink power consumption of an NB-IoT modem, has to be turned on throughout this time. Auto-correlation detectors show low computational complexity from a signal processing point of view at the price of a higher detection latency. In contrast a maximum likelihood cro...
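The record is cut off mid-word ("cro..."), presumably "cross-correlation detector". For a known sequence in additive white Gaussian noise, the maximum-likelihood timing estimate is the lag that maximizes the cross-correlation with the received samples (a matched filter); a hedged sketch, with a random placeholder sequence rather than the actual NB-IoT synchronization signal:

```python
import numpy as np

def ml_timing(received, known):
    """Maximum-likelihood timing estimate for a known sequence in AWGN:
    the lag maximizing the cross-correlation magnitude (matched filter)."""
    n, m = len(received), len(known)
    scores = [np.abs(np.vdot(known, received[k:k + m])) for k in range(n - m + 1)]
    return int(np.argmax(scores))
```

Computing the full correlation at every lag is what gives the lower latency, and higher per-sample complexity, relative to the auto-correlation detectors mentioned above.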
Determination of the Maximum Aerodynamic Efficiency of Wind Turbine Rotors with Winglets
Gaunaa, Mac; Johansen, Jeppe [Senior Scientists, Risoe National Laboratory, Roskilde, DK-4000 (Denmark)
2007-07-15
The present work contains theoretical considerations and computational results on the use of winglets on wind turbines. The theoretical results show that the power augmentation obtainable with winglets is due to a reduction of tip effects, and is not, as believed up to now, caused by the downwind vorticity shift due to downwind winglets. The numerical work includes optimization of the power coefficient for a given tip speed ratio and span geometry using a newly developed free-wake lifting-line code, which also takes into account viscous effects and self-induced forces. Validation of the new code against CFD results for a rotor without winglets showed very good agreement. Results from the new code with winglets indicate that downwind winglets are superior to upwind ones with respect to optimization of Cp, and that the increase in power production is less than what may be obtained by a simple extension of the wing in the radial direction. The computations also show that shorter downwind winglets (>2%) come close to the increase in Cp obtained by a radial extension of the wing. Lastly, the results from the code are used to design a rotor with a 2% downwind winglet, which is computed using the Navier-Stokes solver EllipSys3D. These computations show that further work is needed to validate the FWLL code for cases where the rotor is equipped with winglets.
Toward Improved Rotor-Only Axial Fans—Part II: Design Optimization for Maximum Efficiency
Sørensen, Dan Nørtoft; Thompson, M. C.; Sørensen, Jens Nørkær
2000-01-01
Numerical design optimization of the aerodynamic performance of axial fans is carried out, maximizing the efficiency in a design interval of flow rates. Tip radius, number of blades, and angular velocity of the rotor are fixed, whereas the hub radius and spanwise distributions of chord length...
Efficient strategies for genome scanning using maximum-likelihood affected-sib-pair analysis
Holmans, P.; Craddock, N. [Univ. of Wales College of Medicine, Cardiff (United Kingdom)
1997-03-01
Detection of linkage with a systematic genome scan in nuclear families including an affected sibling pair is an important initial step on the path to cloning susceptibility genes for complex genetic disorders, and it is desirable to optimize the efficiency of such studies. The aim is to maximize power while simultaneously minimizing the total number of genotypings and probability of type I error. One approach to increase efficiency, which has been investigated by other workers, is grid tightening: a sample is initially typed using a coarse grid of markers, and promising results are followed up by use of a finer grid. Another approach, not previously considered in detail in the context of an affected-sib-pair genome scan for linkage, is sample splitting: a portion of the sample is typed in the screening stage, and promising results are followed up in the whole sample. In the current study, we have used computer simulation to investigate the relative efficiency of two-stage strategies involving combinations of both grid tightening and sample splitting and found that the optimal strategy incorporates both approaches. In general, typing half the sample of affected pairs with a coarse grid of markers in the screening stage is an efficient strategy under a variety of conditions. If Hardy-Weinberg equilibrium holds, it is most efficient not to type parents in the screening stage. If Hardy-Weinberg equilibrium does not hold (e.g., because of stratification) failure to type parents in the first stage increases the amount of genotyping required, although the overall probability of type I error is not greatly increased, provided the parents are used in the final analysis. 23 refs., 4 figs., 5 tabs.
Richards, V. M.; Dai, W.
2014-01-01
A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. The toolbox uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by toolbox use examples. Finally, guidelines and recommendations of parameter configurations are given. PMID:24671826
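The UML procedure itself is not spelled out in the abstract; a heavily simplified grid-based sketch of the idea is below, with the slope and lapse rate held fixed and the next stimulus placed at the running maximum-likelihood threshold. This is an illustrative assumption; the actual toolbox also estimates slope and lapse, and uses sweet-point stimulus placement.

```python
import numpy as np

def psychometric(x, alpha, beta, gamma=0.5, lam=0.02):
    """Logistic psychometric function with guess rate gamma and lapse rate lam."""
    return gamma + (1 - gamma - lam) / (1 + np.exp(-beta * (x - alpha)))

class UMLSketch:
    """Grid-based updated maximum-likelihood tracking of the threshold alpha.
    Hypothetical simplification of the UML procedure, not the toolbox API."""
    def __init__(self, alphas, beta=2.0):
        self.alphas = alphas            # candidate threshold grid
        self.beta = beta                # fixed slope (toolbox estimates it too)
        self.loglik = np.zeros_like(alphas)

    def next_stimulus(self):
        # place the next trial at the current ML threshold estimate
        return self.alphas[np.argmax(self.loglik)]

    def update(self, x, correct):
        # accumulate the log-likelihood of the observed response on the grid
        p = psychometric(x, self.alphas, self.beta)
        self.loglik += np.log(p if correct else 1 - p)
```

Each trial updates the likelihood over the whole grid, so the threshold estimate is refined after every response rather than after a completed staircase run.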
EFFICIENCY OF ISO 9001 IN PORTUGAL: A QUALITATIVE STUDY FROM A HOLISTIC THEORETICAL PERSPECTIVE
Alcina Dias
2013-03-01
The purpose of this paper is to analyse the efficiency of ISO 9001 from a holistic theoretical approach in which the Contingency theory, the Institutional theory, and the Resource-Based View are integrated. The study was carried out in companies from different sectors of activity in Portugal, based on a qualitative methodology (interviews). The fact that the interviews were conducted within an ISO 9001 structure made it easier for companies to grasp the issues under investigation. An ISO 9001 characterisation was carried out within a theoretical framework, and the findings point to efficiency gains and reveal that the absence of ISO 9001 would act as a competitive disadvantage. The contribution of this research is to reinforce the state of the art concerning the theoretical scope of analysis of these issues, enriched by the case study.
Rechenbach, Björn; Willatzen, Morten; Lassen, Benny
2016-01-01
The electromechanical efficiency of a loaded tubular dielectric elastomer actuator (DEA) is investigated theoretically. In previous studies, the external system, on which the DEA performs mechanical work, is implemented implicitly by prescribing the stroke of the DEA in a closed operation cycle...
A theoretical analysis of optical-to-THz conversion efficiency via optical rectification
2008-01-01
A theoretical analysis of an ultra-short pulse converted to terahertz radiation via optical rectification in a nonlinear optical crystal is presented, and several factors that affect the optical-to-THz conversion efficiency are discussed. Pulse duration affects the conversion efficiency markedly: when the crystal length equals the optimal crystal length l_c, the optical-to-THz conversion efficiency is highest, whereas for periodically inverted electro-optic crystals the conversion efficiency is almost proportional to the crystal length when absorption can be neglected. Taking the absorption of the crystal into account, the effective crystal length is L_eff = 0.63/α; beyond this there is no apparent further increase of the conversion efficiency, which eventually approaches a constant as the crystal length is increased.
Thien-Tong Nguyen; Doyoung Byun
2008-01-01
In the "modified quasi-steady" approach, two-dimensional (2D) aerodynamic models of flapping wing motions are analyzed with focus on different types of wing rotation and different positions of the rotation axis, to explain the force peak at the end of each half stroke. In this model, an additional velocity of the mid-chord position due to rotation is superimposed on the translational relative velocity of the air with respect to the wing. This modification produces augmented forces around the end of each stroke. For each case of the flapping wing motions, with various combinations of controlled translational and rotational velocities of the wing along inclined stroke planes with a thin figure-of-eight trajectory, discussion focuses on the lift-drag evolution during one stroke cycle and on the efficiency of the types of wing rotation. This "modified quasi-steady" approach provides a systematic analysis of various parameters and their effects on the efficiency of the flapping wing mechanism. A flapping mechanism with delayed rotation around the quarter-chord axis is an efficient one and can be implemented simply by a passive rotation mechanism, making it useful for robotic applications.
A solar photovoltaic system with ideal efficiency close to the theoretical limit.
Zhao, Yuan; Sheng, Ming-Yu; Zhou, Wei-Xi; Shen, Yan; Hu, Er-Tao; Chen, Jian-Bo; Xu, Min; Zheng, Yu-Xiang; Lee, Young-Pak; Lynch, David W; Chen, Liang-Yao
2012-01-02
In order to overcome some physical limits, a solar system consisting of five single-junction photocells with four optical filters is studied. The four filters divide the solar spectrum into five spectral regions. Each single-junction photocell with the highest photovoltaic efficiency in a narrower spectral region is chosen to optimally fit into the bandwidth of that spectral region. Under the condition of solar radiation ranging from 2.4 SUN to 3.8 SUN (AM1.5G), the measured peak efficiency under 2.8 SUN radiation reaches about 35.6%, corresponding to an ideal efficiency of about 42.7%, achieved for the photocell system with a perfect diode structure. Based on the detailed-balance model, the calculated theoretical efficiency limit for the system consisting of 5 single-junction photocells can be about 52.9% under 2.8 SUN (AM1.5G) radiation, implying that the ratio of the highest photovoltaic conversion efficiency for the ideal photodiode structure to the theoretical efficiency limit can reach about 80.7%. The results of this work will provide a way to further enhance the photovoltaic conversion efficiency for solar cell systems in future applications.
Efficient and exact maximum likelihood quantisation of genomic features using dynamic programming.
Song, Mingzhou; Haralick, Robert M; Boissinot, Stéphane
2010-01-01
An efficient and exact dynamic programming algorithm is introduced to quantise a continuous random variable into a discrete random variable that maximises the likelihood of the quantised probability distribution for the original continuous random variable. Quantisation is often useful before statistical analysis and modelling of large discrete network models from observations of multiple continuous random variables. The quantisation algorithm is applied to genomic features including the recombination rate distribution across the chromosomes and the non-coding transposable element LINE-1 in the human genome. The association pattern is studied between the recombination rate, obtained by quantisation at genomic locations around LINE-1 elements, and the length groups of LINE-1 elements, also obtained by quantisation on LINE-1 length. The exact and density-preserving quantisation approach provides an alternative superior to the inexact and distance-based univariate iterative k-means clustering algorithm for discretisation.
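A simplified version of such a dynamic program can be written directly from the description: restrict bin boundaries to midpoints between sorted observations and choose the k-bin partition maximizing the log-likelihood of the resulting piecewise-uniform density. This is an illustrative sketch under those assumptions, not the authors' algorithm.

```python
import numpy as np

def ml_quantise(x, k):
    """Exact O(k n^2) dynamic program quantising sorted 1-D data into k
    non-empty bins that maximise the piecewise-uniform log-likelihood
    sum_b c_b * log(c_b / (n * w_b)), with c_b counts and w_b bin widths."""
    x = np.sort(np.asarray(x, float))
    n = len(x)
    # candidate boundaries: the two endpoints plus midpoints between neighbours
    b = np.empty(n + 1)
    b[0], b[n] = x[0], x[-1]
    b[1:n] = 0.5 * (x[:-1] + x[1:])

    def cost(i, j):                    # log-likelihood of points i..j-1 in one bin
        c, w = j - i, b[j] - b[i]
        return -np.inf if w <= 0 else c * np.log(c / (n * w))

    L = np.full((k + 1, n + 1), -np.inf)
    back = np.zeros((k + 1, n + 1), dtype=int)
    L[0, 0] = 0.0
    for kk in range(1, k + 1):
        for j in range(kk, n + 1):
            for i in range(kk - 1, j):
                v = L[kk - 1, i] + cost(i, j)
                if v > L[kk, j]:
                    L[kk, j], back[kk, j] = v, i
    cuts, j = [], n                    # backtrack the optimal boundary indices
    for kk in range(k, 0, -1):
        cuts.append(j)
        j = back[kk, j]
    return [float(b[i]) for i in sorted(set(cuts + [0]))], float(L[k, n])
```

Because every k-bin partition refines some (k−1)-bin partition, the maximized likelihood is non-decreasing in k, which gives a simple sanity check on the implementation.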
Aragon-Gonzalez, G; Leon-Galicia, A; Morales-Gomez, J R
2007-01-01
In this work we include, for the Carnot cycle, irreversibilities of linear finite-rate heat transfer between the heat engine and its reservoirs, heat leak between the reservoirs, and internal dissipations of the working fluid. A first optimization of the power output, the efficiency, and the ecological function of an irreversible Carnot cycle is performed with respect to the internal temperature ratio, the time ratio for the heat exchange, and the allocation ratio of the heat exchangers. For the second and third optimizations, the optimum values for the time ratio and internal temperature ratio are substituted into the equation for power, and then the optimizations with respect to the cost and the effectiveness ratio of the heat exchangers are performed. Finally, a criterion of partial optimization for the class of irreversible Carnot engines is presented.
Quantum Coherent Three-Terminal Thermoelectrics: Maximum Efficiency at Given Power Output
Robert S. Whitney
2016-05-01
This work considers the nonlinear scattering theory for three-terminal thermoelectric devices used for power generation or refrigeration. Such systems are quantum phase-coherent versions of a thermocouple, and the theory applies to systems in which interactions can be treated at a mean-field level. It considers an arbitrary three-terminal system in any external magnetic field, including systems with broken time-reversal symmetry, such as chiral thermoelectrics, as well as systems in which the magnetic field plays no role. It is shown that the upper bound on efficiency at given power output is of quantum origin and is stricter than Carnot’s bound. The bound is exactly the same as previously found for two-terminal devices and can be achieved by three-terminal systems with or without broken time-reversal symmetry, i.e., chiral and non-chiral thermoelectrics.
Hapenciuc, C. L.; Borca-Tasciuc, T.; Mihailescu, I. N.
2017-04-01
Thermoelectric materials are used today in thermoelectric devices that convert heat to electricity (thermoelectric generators, TEGs) or electricity to heat (heat pumps) in a wide range of applications. For TEGs, the final measure of performance is the maximum efficiency, which quantifies how much of the heat input is converted into electrical power; knowing this efficiency accurately is therefore essential for commercial assessment. The concepts of an engineering figure of merit, Zeng, and an engineering power factor, Peng, were previously introduced to quantify the efficiency of a single material with temperature-dependent thermoelectric properties, but their derivation was limited to one leg of the thermoelectric generator. In this paper we extend the engineering figure of merit to a full thermoelectric generator by introducing a more general device engineering thermoelectric figure of merit, Zd,eng, which depends on the properties of both TEG materials and is the appropriate quantity for evaluating device efficiency. This work also accounts for the electrical contact resistance between the electrodes and the thermoelement legs in an attempt to quantify its influence on TEG performance. Finally, a new formula is proposed for the maximum efficiency of a TEG.
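For reference, the classical constant-property expression for the maximum efficiency of a TEG (which the paper generalizes via Zd,eng for temperature-dependent properties and contact resistance) can be sketched as:

```python
import math

def teg_max_efficiency(t_hot, t_cold, zT):
    """Textbook constant-property maximum efficiency of a TEG leg:
        eta_max = eta_carnot * (sqrt(1+ZT) - 1) / (sqrt(1+ZT) + Tc/Th)
    with ZT evaluated at the mean temperature. This is the classical
    formula, not the engineering (Zd,eng) variant derived in the paper."""
    eta_carnot = 1.0 - t_cold / t_hot
    root = math.sqrt(1.0 + zT)
    return eta_carnot * (root - 1.0) / (root + t_cold / t_hot)
```

The expression reduces to zero at ZT = 0 and approaches the Carnot efficiency as ZT grows without bound.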
Mixed Ge/Pb perovskite light absorbers with an ascendant efficiency explored from theoretical view.
Sun, Ping-Ping; Li, Quan-Song; Feng, Shuai; Li, Ze-Sheng
2016-06-07
Organic-inorganic methylammonium lead halide perovskites have recently attracted great interest as promising photovoltaic materials with a high efficiency of 20.8%, but lead pollution remains a problem that may hinder the development and widespread adoption of MAPbI3 perovskites. To reduce the use of lead, we theoretically investigated the structures and the electronic and optical properties of mixed MAGexPb(1-x)I3 using density functional theory methods at different levels of calculation. Results show that the mixed Ge/Pb perovskites exhibit a monotonic decrease in band energy, pushing the band gap deeper into the near-infrared region, and a red-shifted optical absorption as the proportion of Ge increases. The results also indicate that lattice distortion and spin-orbit coupling (SOC) strength play important roles in the band gap behavior of MAGexPb(1-x)I3 by affecting the bandwidths of the CBM and VBM. Calculations of the short-circuit current density, open-circuit voltage, and theoretical power conversion efficiency suggest that mixed Ge/Pb perovskite solar cells (PSCs), with efficiencies over 22%, are superior to MAPbI3 and MAGeI3. Notably, MAGe0.75Pb0.25I3 is a promising non-toxic absorber material for solar cells, with the highest theoretical efficiency of 24.24%. These findings are expected to be helpful for the further rational design of nontoxic light-absorbing layers for high-performance PSCs.
Optimizing WiMAX: Mitigating Co-Channel Interference for Maximum Spectral Efficiency
ABDUL QADIR ANSARI
2016-10-01
The efficient use of radio spectrum is one of the most important issues in wireless networks, because spectrum is limited and the wireless environment is constrained by channel interference. To make better use of the radio spectrum, wireless networks employ the frequency reuse technique, which allows the same frequency band to be used in different cells of the same network, subject to the inter-cell distance and the resulting interference level. The WiMAX (Worldwide Interoperability for Microwave Access) PHY profile is designed to use an FRF (Frequency Reuse Factor) of one. An FRF of one improves spectral efficiency but also produces CCI (Co-Channel Interference) at cell boundaries. The effect of this interference must be measured so that averaging or minimization techniques can keep it below an acceptable threshold. In this paper, we analyze how effectively the impact of CCI can be mitigated by using the different subcarrier permutation types defined in the IEEE 802.16 standard. A simulation-based analysis is presented of the impact on CCI of using the same versus different permutation bases in adjacent cells of a WiMAX network under varying load conditions. We further study the effect of the permutation base in an environment where frequency reuse is combined with cell sectoring for better utilization of the radio spectrum.
Higuita Cano, Mauricio; Mousli, Mohamed Islam Aniss; Kelouwani, Sousso; Agbossou, Kodjo; Hammoudi, Mhamed; Dubé, Yves
2017-03-01
This work investigates the design and validation of a fuel cell management system (FCMS) that can operate when the fuel cell is at water-freezing temperatures. The FCMS is based on a new tracking technique with intelligent prediction, which combines Maximum Efficiency Point Tracking with a variable perturbation-current step and fuzzy logic (MEPT-FL). Unlike conventional fuel cell control systems, the proposed FCMS accounts for cold-weather conditions and reduces fuel cell set-point oscillations. In addition, the FCMS is built to respond quickly and effectively to variations in the electric load. A temperature controller stage is designed in conjunction with the MEPT-FL to operate the fuel cell at low temperatures while simultaneously tracking the maximum efficiency point. Simulation results and experimental validation suggest that the proposed approach is effective and can achieve an average efficiency improvement of up to 8%. The MEPT-FL is validated using a 500 W Proton Exchange Membrane Fuel Cell (PEMFC).
Amauris Gilbert-Hernández
2016-05-01
A procedure for selecting the maximum pipe thickness that achieves efficient thermal insulation in piping with steam tracing was developed. The bibliographical review identified the limitations of previous investigations regarding the selection of pipe thickness in transfer systems with steam tracing. A model for calculating the overall heat loss was prepared. The procedure applies economic criteria to the selection of pipe thickness and establishes an optimal thickness value which guarantees a minimum total cost by balancing the expenditures resulting from heat loss against the project costs.
Gece, Goekhan, E-mail: gokhangc@gmail.co [Department of Physical Chemistry, Faculty of Science, Ankara University, Besevler, 06100 Ankara (Turkey); Bilgic, Semra [Department of Physical Chemistry, Faculty of Science, Ankara University, Besevler, 06100 Ankara (Turkey)
2010-10-15
To clarify the inhibition efficiencies of a total of 12 amino acids for the corrosion of nickel in acidic medium, a density functional theory (DFT) study was carried out using the B3LYP/LANL2DZ method. Quantum chemical descriptors such as the energy of the highest occupied molecular orbital (E_HOMO), the energy of the lowest unoccupied molecular orbital (E_LUMO), and the energy gap (ΔE) were calculated. Equations were proposed using linear regression analysis to determine the most effective parameter on inhibition efficiency. The theoretically obtained results were found to be consistent with the experimental data reported.
Thompson, William L.; Lee, Danny C.
2000-11-01
Many anadromous salmonid stocks in the Pacific Northwest are at their lowest recorded levels, which has raised questions regarding their long-term persistence under current conditions. There are a number of factors, such as freshwater spawning and rearing habitat, that could potentially influence their numbers. Therefore, we used the latest advances in information-theoretic methods in a two-stage modeling process to investigate relationships between landscape-level habitat attributes and maximum recruitment of 25 index stocks of chinook salmon (Oncorhynchus tshawytscha) in the Columbia River basin. Our first-stage model selection results indicated that the Ricker-type stock-recruitment model with a constant Ricker a (i.e., recruits-per-spawner at low numbers of fish) across stocks was the only plausible one given these data, which contrasted with previous unpublished findings. Our second-stage results revealed that maximum recruitment of chinook salmon had a strongly negative relationship with the percentage of surrounding subwatersheds categorized as predominantly containing U.S. Forest Service and private moderate-high impact managed forest. That is, our model predicted that average maximum recruitment of chinook salmon would decrease by at least 247 fish for every increase of 33% in surrounding subwatersheds categorized as predominantly containing U.S. Forest Service and privately managed forest. Conversely, mean annual air temperature had a positive relationship with salmon maximum recruitment, with an average increase of at least 179 fish for every 2 °C increase in mean annual air temperature.
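The Ricker stock-recruitment curve used in the first-stage model has a simple closed form; a minimal sketch (with invented parameter values, not the fitted Columbia River values) is:

```python
import math

def ricker(spawners, a, b):
    """Ricker curve R = a * S * exp(-b * S): 'a' is recruits-per-spawner
    in the limit of few fish, b sets density dependence; recruitment
    peaks at S = 1/b with R_max = a / (b * e)."""
    return a * spawners * math.exp(-b * spawners)
```

The peak location 1/b and peak height a/(b·e) follow from setting the derivative to zero, which is what makes "maximum recruitment" a well-defined response variable for the second-stage regression.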
Nimo, Antwi; Grgic, Dario; Reindl, Leonhard M.
2012-04-01
This work presents the optimization of radio frequency (RF) to direct current (DC) circuits using Schottky diodes for remote wireless energy harvesting applications. Since different applications require different wireless RF-to-DC circuits, RF harvesters are presented for different applications. Analytical parameters influencing the sensitivity and efficiency of the circuits are presented. The results shown in this report are analytical, simulated and measured. The presented circuits operate around 434 MHz. An L-matched RF-to-DC circuit operates at a maximum efficiency of 27% at -35 dBm input. A voltage multiplier achieves an open-circuit voltage of 6 V at 0 dBm input. A broadband circuit with a 300 MHz frequency band achieves an average efficiency of 5% at -30 dBm and an open-circuit voltage of 47 mV. A high-quality-factor (Q) circuit is also realized with PI-network matching for narrow-band applications.
Mehrotra, Shakti; Prakash, O; Khan, Feroz; Kukreja, A K
2013-02-01
KEY MESSAGE: An ANN-based combinatorial model is proposed and its efficiency is assessed for the prediction of optimal culture conditions to achieve maximum productivity in a bioprocess in terms of high biomass. A neural network approach is utilized in combination with the hidden Markov concept to assess the optimal values of different environmental factors that result in maximum biomass productivity of cultured tissues after a definite culture duration. Five hidden Markov models (HMMs) were derived for five test culture conditions, i.e. pH of the liquid growth medium, volume of medium per culture vessel, sucrose concentration (% w/v) in the growth medium, nitrate concentration (g/l) in the medium, and the density of the initial inoculum (g fresh weight) per culture vessel, together with the corresponding fresh-weight biomass. The artificial neural network (ANN) model was represented as a function of these five Markov models, and the overall simulation of fresh-weight biomass was done with this combinatorial ANN-HMM. Empirical results for Rauwolfia serpentina hairy roots were taken as the model system and compared with simulated results obtained from a pure ANN and from ANN-HMMs. Stochastic testing and the Cronbach's α-values of the pure and combinatorial models revealed greater internal consistency and a more skewed histogram for ANN-HMM (0.4635) than for the pure ANN (0.3804). The simulated optimal conditions for maximum fresh-weight production obtained from the ANN-HMM and ANN models closely resemble the experimentally optimized culture conditions under which the highest fresh weight was obtained. However, the combinatorial model deviated from the experimental values by only 2.99%, compared with 5.44% for the pure ANN model. This comparison showed that the combinatorial model has 45% better potential for predicting the optimal culture conditions for the best growth of hairy root cultures.
WANG Yang; TU Zhan-Chun
2013-01-01
The Carnot-like heat engines are classified into three types (normal-, sub- and super-dissipative) according to the relation between the minimum irreversible entropy production in the "isothermal" processes and the time for completing those processes. The efficiencies at maximum power of normal-, sub- and super-dissipative Carnot-like heat engines are proved to be bounded between ηc/2 and ηc/(2-ηc), between ηc/2 and ηc, and between 0 and ηc/(2-ηc), respectively. These bounds are also shared by linear, sub- and super-linear irreversible Carnot-like engines [Tu and Wang, Europhys. Lett. 98 (2012) 40001], although the dissipative engines and the irreversible ones are inequivalent to each other.
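The normal-dissipative bounds can be checked against the low-dissipation result of Esposito et al. (2010), where the efficiency at maximum power interpolates between ηc/2 and ηc/(2-ηc) as the ratio of the dissipation coefficients varies; a sketch:

```python
import math

def eta_at_max_power(t_hot, t_cold, sigma_h, sigma_c):
    """Efficiency at maximum power of a low-dissipation Carnot-like
    engine (Esposito et al. 2010):
        eta* = eta_c / (2 - eta_c/(1 + r)),  r = sqrt(Tc*sig_c/(Th*sig_h))
    The limits r -> 0 and r -> infinity recover the bounds
    eta_c/(2 - eta_c) and eta_c/2 quoted above."""
    eta_c = 1.0 - t_cold / t_hot
    r = math.sqrt(t_cold * sigma_c / (t_hot * sigma_h))
    return eta_c / (2.0 - eta_c / (1.0 + r))
```

For symmetric dissipation (sigma_h = sigma_c) the value lies strictly between the two bounds.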
Game-Theoretic Rate-Distortion-Complexity Optimization of High Efficiency Video Coding
Ukhanova, Ann; Milani, Simone; Forchhammer, Søren
2013-01-01
This paper presents an algorithm for rate-distortion-complexity optimization for the emerging High Efficiency Video Coding (HEVC) standard, whose high computational requirements urge the need for low-complexity optimization algorithms. Optimization approaches need to specify different complexity profiles in order to tailor the computational load to the different hardware and power-supply resources of devices. In this work, we focus on optimizing the quantization parameter and partition depth in HEVC via a game-theoretic approach. The proposed rate control strategy alone provides 0.2 dB improvement...
Lee, JongHyup; Pak, Dohyun
2016-08-29
For practical deployment of wireless sensor networks (WSNs), WSNs are organized into clusters, where a sensor node communicates with other nodes in its cluster and a cluster head supports connectivity between the sensor nodes and a sink node. In hybrid WSNs, cluster heads have cellular network interfaces for global connectivity. However, when WSNs are active and the load on the cellular networks is high, the optimal assignment of cluster heads to base stations becomes critical. Therefore, in this paper, we propose a game-theoretic model to find the optimal assignment of base stations for hybrid WSNs. Since the communication and energy costs differ across cellular systems, we devise two game models, for TDMA/FDMA and CDMA systems, employing power prices to adapt to the varying efficiency of recent wireless technologies. The proposed model is defined under the assumption of an ideal sensing field, but our evaluation shows that it is more adaptive and energy efficient than local selections.
Leclercq, C; Arcella, D; Turrini, A
2000-12-01
The three recent EU directives which fixed maximum permitted levels (MPL) of food additives for all member states also include a general obligation to establish national systems for monitoring the intake of these substances in order to evaluate the safety of their use. In this work, we considered additives with a primary antioxidant technological function for which an acceptable daily intake (ADI) has been established by the Scientific Committee for Food (SCF): gallates, butylated hydroxyanisole (BHA), butylated hydroxytoluene (BHT) and erythorbic acid. The potential intake of these additives in Italy was estimated by means of a hierarchical approach using, step by step, more refined methods. The likelihood of the current ADI being exceeded was very low for erythorbic acid, BHA and gallates. On the other hand, the theoretical maximum daily intake (TMDI) of BHT was above the current ADI. The three food categories found to be the main potential sources of BHT were "pastry, cake and biscuits", "chewing gums" and "vegetable oils and margarine"; together they contributed 74% of the TMDI. Actual use of BHT in these food categories is discussed, together with other aspects such as losses of this substance during processing and the fraction actually ingested in the case of chewing gums.
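The TMDI screening step assumes every relevant food category is consumed at the reported level and contains the additive at its full MPL; a minimal sketch with invented numbers (not the Italian survey data):

```python
def tmdi_mg_per_kg_bw(foods, body_weight_kg=60.0):
    """Theoretical maximum daily intake in mg per kg body weight:
    each entry is (daily consumption in kg of food, MPL in mg of
    additive per kg of food). Both the food entries and the 60 kg
    default body weight are illustrative assumptions."""
    return sum(kg * mpl for kg, mpl in foods) / body_weight_kg

# hypothetical screen: 0.1 kg/day at MPL 100 mg/kg plus 0.02 kg/day at 400 mg/kg
intake = tmdi_mg_per_kg_bw([(0.1, 100.0), (0.02, 400.0)])
```

Comparing this conservative upper bound with the ADI is the first, coarsest tier of the hierarchical approach; only additives whose TMDI exceeds the ADI need the more refined steps.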
A theoretical model for calculation of the detective quantum efficiency in granular scintillators
Cavouras, D. E-mail: cavouras@hol.gr; Kandarakis, I.; Tsoukos, S.; Kateris, A.; Nomicos, C.D.; Panayiotakis, G.S
2001-11-01
A theoretical model has been developed for calculating the detective quantum efficiency (DQE) of scintillators, by taking into account the internal structure of granular scintillators often used in medical imaging detectors. Scintillators were considered to consist of N elementary thin layers containing spherical scintillating grains of equal size. Grains were assumed to be regularly distributed within each thin layer, the thickness of the latter being equal to the grain diameter. Values of the X-ray absorption and X-ray attenuation coefficients, of the intrinsic X-ray to light conversion efficiency and of the optical scattering and absorption coefficients were used as input data to the model. Optical scattering and optical absorption coefficients were determined by fitting the model to experimental luminescence data. The model was employed to calculate the detective quantum efficiency of La2O2S:Tb, Y2O2S:Tb, Y2O2S:Eu, ZnSCdS:Ag and ZnSCdS:Au,Cu scintillators. Results of the calculations were found close to values published in previous studies.
A Game-Theoretic Approach to Energy-Efficient Modulation in CDMA Networks with Delay Constraints
Meshkati, Farhad; Poor, H Vincent; Schwartz, Stuart C
2007-01-01
A game-theoretic framework is used to study the effect of constellation size on the energy efficiency of wireless networks for M-QAM modulation. A non-cooperative game is proposed in which each user seeks to choose its transmit power (and possibly transmit symbol rate) as well as the constellation size in order to maximize its own utility while satisfying its delay quality-of-service (QoS) constraint. The utility function used here measures the number of reliable bits transmitted per joule of energy consumed, and is particularly suitable for energy-constrained networks. The best-response strategies and Nash equilibrium solution for the proposed game are derived. It is shown that in order to maximize its utility (in bits per joule), a user must choose the lowest constellation size that can accommodate the user's delay constraint. Using this framework, the tradeoffs among energy efficiency, delay, throughput and constellation size are also studied and quantified. The effect of trellis-coded modulation on energy...
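The bits-per-joule utility at the heart of this game can be sketched as follows. The efficiency function f(γ) = (1 - e^(-γ))^M is the standard packet-success approximation used in this literature; the rate, channel gain, noise and packet-size numbers are illustrative assumptions, not values from the paper:

```python
import math

def utility_bits_per_joule(power, rate, gain, noise, packet_bits=100):
    """Throughput-per-joule utility u(p) = R * f(gamma) / p, where
    gamma = gain * p / noise is the received SINR and
    f(x) = (1 - exp(-x))**M approximates the packet success rate.
    All numeric parameters here are illustrative assumptions."""
    gamma = gain * power / noise
    f = (1.0 - math.exp(-gamma)) ** packet_bits
    return rate * f / power

# best response in power: grid-search the power maximizing the utility
powers = [i * 0.001 for i in range(1, 2001)]
best_p = max(powers, key=lambda p: utility_bits_per_joule(p, 1e5, 0.1, 1e-3))
```

The utility is zero at both extremes (no packets get through at low power; energy is wasted at high power), so an interior power level maximizes bits per joule, which is what drives the best-response structure of the game.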
Arienzo Loredana
2010-01-01
The problem of collaborative tracking of mobile nodes in wireless sensor networks is addressed. Using a novel metric derived from the energy model in LEACH (W.B. Heinzelman, A.P. Chandrakasan and H. Balakrishnan, "Energy-Efficient Communication Protocol for Wireless Microsensor Networks", in: Proceedings of the 33rd Hawaii International Conference on System Sciences (HICSS '00), 2000) and aiming at a resource-efficient solution, the approach combines target tracking with node selection procedures in order to select informative sensors and minimize the energy consumption of the tracking task. We lay out a cluster-based architecture to address the sensor devices' limitations in computational power, battery capacity and communication capacity. The computation of the posterior Cramer-Rao bound (PCRB) based on received-signal-strength measurements is considered. To track mobile nodes, two particle filters are used: the bootstrap particle filter and the unscented particle filter, both in centralized and distributed forms. Their performance is compared with the theoretical lower bound, the PCRB. To save energy, a node selection procedure based on greedy algorithms is proposed; the node selection problem is formulated as a cross-layer optimization problem and solved using greedy algorithms.
Várnai, Csilla; Burkoff, Nikolas S; Wild, David L
2013-12-10
Maximum Likelihood (ML) optimization schemes are widely used for parameter inference. They iteratively maximize the likelihood of some experimentally observed data with respect to the model parameters, following the gradient of the logarithm of the likelihood. Here, we employ an ML inference scheme to infer a generalizable, physics-based coarse-grained protein model (which includes Gō-like biasing terms to stabilize secondary structure elements in room-temperature simulations), using native conformations of a training set of proteins as the observed data. Contrastive divergence, a novel statistical machine learning technique, is used to efficiently approximate the direction of the gradient ascent, which enables the use of a large training set of proteins. Unlike previous work, the generalizability of the protein model allows the folding of peptides and a protein (protein G) which are not part of the training set. We compare the same force field with different van der Waals (vdW) potential forms: a hard cutoff model, and a Lennard-Jones (LJ) potential with vdW parameters inferred or adopted from the CHARMM or AMBER force fields. Simulations of peptides and protein G show that the LJ model with inferred parameters outperforms the hard cutoff potential, which is consistent with previous observations. Simulations using the LJ potential with inferred vdW parameters also outperform the protein models with adopted vdW parameter values, demonstrating that model parameters generally cannot be used with force fields that have different energy functions. The software is available at https://sites.google.com/site/crankite/.
Elhkim, Mostafa Ould; Héraud, Fanny; Bemrah, Nawel; Gauchard, Françoise; Lorino, Tristan; Lambré, Claude; Frémy, Jean Marc; Poul, Jean-Michel
2007-04-01
Tartrazine is an artificial azo dye commonly used in human food and pharmaceutical products. Since the last assessment carried out by the JECFA in 1964, many new studies have been conducted, some of which have incriminated tartrazine in food intolerance reactions. The aims of this work are to update the hazard characterization and to revaluate the safety of tartrazine. Our bibliographical review of animal studies confirms the initial hazard assessment conducted by the JECFA, and accordingly the ADI established at 7.5mg/kg bw. From our data, in France, the estimated maximum theoretical intake of tartrazine in children is 37.2% of the ADI at the 97.5th percentile. It may therefore be concluded that from a toxicological point of view, tartrazine does not represent a risk for the consumer. It appears more difficult to show a clear relationship between ingestion of tartrazine and the development of intolerance reactions in patients. These reactions primarily occur in patients who also suffer from recurrent urticaria or asthma. The link between tartrazine consumption and these reactions is often overestimated, and the pathogenic mechanisms remain poorly understood. The prevalence of tartrazine intolerance is estimated to be less than 0.12% in the general population. Generally, the population at risk is aware of the importance of food labelling, with the view of avoiding consumption of tartrazine. However, it has to be mentioned that products such as ice creams, desserts, cakes and fine bakery are often sold loose without any labelling.
Rajagopal, Adharsh; Yang, Zhibin; Jo, Sae Byeok; Braly, Ian L; Liang, Po-Wei; Hillhouse, Hugh W; Jen, Alex K-Y
2017-09-01
Organic-inorganic hybrid perovskite multijunction solar cells have immense potential to realize power conversion efficiencies (PCEs) beyond the Shockley-Queisser limit of single-junction solar cells; however, they are limited by large nonideal photovoltage loss (V_oc,loss) in small- and large-bandgap subcells. Here, an integrated approach is utilized to improve the V_oc of subcells with optimized bandgaps and fabricate perovskite-perovskite tandem solar cells with small V_oc,loss. A fullerene variant, indene-C60 bis-adduct, is used to achieve optimized interfacial contact in a small-bandgap (≈1.2 eV) subcell, which facilitates higher quasi-Fermi level splitting, reduces nonradiative recombination, alleviates hysteresis instabilities, and improves V_oc to 0.84 V. Compositional engineering of the large-bandgap (≈1.8 eV) perovskite is employed to realize a subcell with a transparent top electrode and a photostabilized V_oc of 1.22 V. The resultant monolithic perovskite-perovskite tandem solar cell shows a high V_oc of 1.98 V (approaching 80% of the theoretical limit) and a stabilized PCE of 18.5%. The significantly minimized nonideal V_oc,loss is better than that of state-of-the-art silicon-perovskite tandem solar cells, which highlights the prospects of perovskite-perovskite tandems for solar-energy generation. It also unlocks opportunities for solar water splitting using hybrid perovskites with solar-to-hydrogen efficiencies beyond 15%.
DeVore, Matthew S; Gull, Stephen F; Johnson, Carey K
2012-04-05
We describe a method for analysis of single-molecule Förster resonance energy transfer (FRET) burst measurements using classic maximum entropy. Classic maximum entropy determines the Bayesian inference for the joint probability describing the total fluorescence photons and the apparent FRET efficiency. The method was tested with simulated data and then with DNA labeled with fluorescent dyes. The most probable joint distribution can be marginalized to obtain both the overall distribution of fluorescence photons and the apparent FRET efficiency distribution. This method proves to be ideal for determining the distance distribution of FRET-labeled biomolecules, and it successfully predicts the shape of the recovered distributions.
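The final mapping from the recovered apparent-efficiency distribution to a distance distribution uses the standard Förster relation; a minimal sketch:

```python
def fret_efficiency(r, r0):
    """Förster relation E = 1 / (1 + (r/R0)**6) linking FRET
    efficiency to the donor-acceptor distance r, with R0 the
    Förster radius (E = 0.5 at r = R0)."""
    return 1.0 / (1.0 + (r / r0) ** 6)

def fret_distance(e, r0):
    """Inverse relation r = R0 * ((1 - E) / E)**(1/6), used to turn a
    recovered efficiency distribution into a distance distribution."""
    return r0 * ((1.0 - e) / e) ** (1.0 / 6.0)
```

Marginalizing the maximum-entropy joint distribution gives the apparent-efficiency distribution, and applying the inverse relation point by point yields the distance distribution described in the abstract.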
Gallego, J. L.; Minotti, F.; Grondona, D.
2014-05-01
An experimental and theoretical study is presented on the efficiency of NO removal in an N2 atmosphere in a novel three-electrode reactor. This reactor combines a dielectric-barrier discharge with a corona discharge, and is designed to enhance streamer propagation over a relatively large region. Experimentally, the reactor has a good energy yield for NO removal compared with other discharge methods. A theoretical model is developed for the production of reactive species in the streamers by different reactions, which relates simple electrical measurements to the reactor efficiency. The theoretical efficiency is in good agreement with the experimental one, validating the model and allowing evaluation of the contributions of the different reactions involved in NO removal.
Kangawa, Yoshihiro; Ito, Tomonori; Koukitu, Akinori; Kakimoto, Koichi
2014-10-01
The surface stability, growth process, and structural stability of InGaN and InN are reviewed from a theoretical viewpoint. In 2001, a new theoretical approach based on ab initio calculations was developed. This approach enables investigation of the influence of growth conditions, such as partial pressure and temperature, on surface stability. The approach is applied to research on the In incorporation efficiency in InGaN grown on nonpolar and semipolar surfaces. The calculation results suggest that the N-H layer formed on such surfaces plays a crucial role in In incorporation. Moreover, the structural stability of InN grown by pressurized-reactor MOVPE is reviewed. It was found by the theoretical approach that {11̄1̄} facet formation causes the spontaneous formation of islands with the zinc-blende structure.
Energy-Efficient Resource Allocation in Wireless Networks: An Overview of Game-Theoretic Approaches
Meshkati, Farhad; Schwartz, Stuart C
2007-01-01
An overview of game-theoretic approaches to energy-efficient resource allocation in wireless networks is presented. Focusing on multiple-access networks, it is demonstrated that game theory can be used as an effective tool to study resource allocation in wireless networks with quality-of-service (QoS) constraints. A family of non-cooperative (distributed) games is presented in which each user seeks to choose a strategy that maximizes its own utility while satisfying its QoS requirements. The utility function considered here measures the number of reliable bits that are transmitted per joule of energy consumed and, hence, is particulary suitable for energy-constrained networks. The actions available to each user in trying to maximize its own utility are at least the choice of the transmit power and, depending on the situation, the user may also be able to choose its transmission rate, modulation, packet size, multiuser receiver, multi-antenna processing algorithm, or carrier allocation strategy. The best-response...
Dukka, Bahadur K C; Akutsu, Tatsuya; Tomita, Etsuji; Seki, Tomokazu; Fujiyama, Asao
2002-01-01
We developed maximum clique-based algorithms for spot matching for two-dimensional gel electrophoresis images, protein structure alignment and protein side-chain packing, where these problems are known to be NP-hard. Algorithms based on direct reductions to the maximum clique can find optimal solutions for instances of size (the number of points or residues) up to 50-150 using a standard PC. We also developed pre-processing techniques to reduce the sizes of graphs. Combined with some heuristics, many realistic instances can be solved approximately.
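The spot-matching reduction can be sketched on toy data: each candidate pair of spots becomes a vertex, geometrically consistent pairs are joined by edges, and a maximum clique is then a largest mutually consistent matching. A plain Bron-Kerbosch search is used here as a stand-in; the paper's exact solver and pre-processing are not reproduced:

```python
from itertools import combinations

def max_clique(adj):
    """Basic Bron-Kerbosch search for a maximum clique; exponential in
    the worst case but fine for small association graphs."""
    best = []
    def expand(r, p, x):
        nonlocal best
        if not p and not x:          # r is a maximal clique
            if len(r) > len(best):
                best = r[:]
            return
        for v in list(p):
            expand(r + [v], p & adj[v], x & adj[v])
            p = p - {v}
            x = x | {v}
    expand([], set(adj), set())
    return best

def match_spots(a, b, tol=0.1):
    """Spot matching as maximum clique: node (i, j) proposes matching
    point a[i] to point b[j]; two proposals are compatible when the
    within-image distances agree to within tol, so cliques are
    geometrically consistent matchings (a toy version of the paper's
    reduction)."""
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    nodes = [(i, j) for i in range(len(a)) for j in range(len(b))]
    adj = {v: set() for v in nodes}
    for (i1, j1), (i2, j2) in combinations(nodes, 2):
        if i1 != i2 and j1 != j2 and \
           abs(dist(a[i1], a[i2]) - dist(b[j1], b[j2])) <= tol:
            adj[(i1, j1)].add((i2, j2))
            adj[(i2, j2)].add((i1, j1))
    return max_clique(adj)
```

On a translated copy of a point set plus an outlier, the maximum clique recovers exactly the true correspondence and leaves the outlier unmatched.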
Rijmen, Frank
2009-01-01
Maximum marginal likelihood estimation of multidimensional item response theory (IRT) models has been hampered by the calculation of the multidimensional integral over the ability distribution. However, the researcher often has a specific hypothesis about the conditional (in)dependence relations among the latent variables. Exploiting these…
Boiroux, Dimitri; Juhl, Rune; Madsen, Henrik
2016-01-01
This algorithm uses UD decomposition of symmetric matrices and the array algorithm for covariance update and gradient computation. We test our algorithm on the Lotka-Volterra equations. Compared to maximum likelihood estimation based on finite-difference gradient computation, we get a significant speedup...
Kuznetsova M.M.
2014-08-01
The article presents results of theoretical and experimental research on the grinding of bulk materials in a ball mill. A new method for determining the energy-efficient mode of operation of ball mills in the grinding of cement clinker is proposed and experimentally tested.
Theoretical-game estimate of radiosystem's efficiency based on entropy approach
Marigodov, V. K.
2011-01-01
A game-theoretic synthesis of a radio communications system is presented for a conflict situation between the operators of a radio communications system and a radio masking system, taking into account the information limitations imposed on the differential entropies of the players' mixed strategies.
Kukush, Alexander; Schneeweiss, Hans
2004-01-01
We compare the asymptotic covariance matrix of the ML estimator in a nonlinear measurement error model to the asymptotic covariance matrices of the CS and SQS estimators studied in Kukush et al (2002). For small measurement error variances they are equal up to the order of the measurement error variance and thus nearly equally efficient.
Ye, Zhuo-Lin; Li, Wei-Sheng; Lai, Yi-Ming; He, Ji-Zhou; Wang, Jian-Hui
2015-12-01
We propose a quantum-mechanical Brayton engine model that works between two superposed states, employing a single particle confined in an arbitrary power-law trap as the working substance. Applying the superposition principle, we obtain the explicit expressions of the power and efficiency, and find that the efficiency at maximum power is bounded from above by the function: η+ = θ/(θ + 1), with θ being a potential-dependent exponent. Supported by the National Natural Science Foundation of China under Grant Nos. 11505091, 11265010, and 11365015, and the Jiangxi Provincial Natural Science Foundation under Grant No. 20132BAB212009
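The bound quoted in the abstract, η+ = θ/(θ + 1), is straightforward to evaluate; a minimal sketch (the θ values below are illustrative, not taken from the paper):

```python
def efficiency_bound(theta):
    """Upper bound on efficiency at maximum power for the quantum
    Brayton engine: eta_plus = theta / (theta + 1), where theta is a
    potential-dependent exponent of the power-law trap."""
    return theta / (theta + 1)

# illustrative exponents; theta depends on the trap's power law
for theta in (0.5, 1.0, 2.0):
    print(f"theta = {theta}: eta+ = {efficiency_bound(theta):.3f}")
# the bound increases monotonically with theta
```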
Radovcich, N. A.; Dreim, D.; Okeefe, D. A.; Linner, L.; Pathak, S. K.; Reaser, J. S.; Richardson, D.; Sweers, J.; Conner, F.
1985-01-01
Work performed in the design of a transport aircraft wing for maximum fuel efficiency is documented with emphasis on design criteria, design methodology, and three design configurations. The design database includes complete finite element model description, sizing data, geometry data, loads data, and inertial data. A design process which satisfies the economics and practical aspects of a real design is illustrated. The cooperative study relationship between the contractor and NASA during the course of the contract is also discussed.
Kaiadi, Mehrzad; Tunestål, Per; Johansson, Bengt
2010-01-01
High EGR rates combined with turbocharging have been identified as a promising way to increase the maximum load and efficiency of heavy-duty spark ignition natural gas engines. With stoichiometric conditions a three-way catalyst can be used, which means that regulated emissions can be kept at very low levels. Most heavy-duty NG engines are diesel engines converted for SI operation. These engines share components with the diesel engine, which puts limits on higher exh...
Haller, Michel; Cruickshank, Chynthia; Streicher, Wolfgang;
2009-01-01
This paper reviews, from a theoretical point of view, different methods that have been proposed to characterize thermal stratification in thermal energy stores. Specifically, it focuses on the methods that can be used to determine the ability of a store to promote and maintain stratification...
Sniegowski, Kristel; Bers, Karolien; Ryckeboer, Jaak; Jaeken, Peter; Spanoghe, Pieter; Springael, Dirk
2012-08-01
Addition of pesticide-primed soil containing adapted pesticide degrading bacteria to the biofilter matrix of on farm biopurification systems (BPS) which treat pesticide contaminated wastewater, has been recommended, in order to ensure rapid establishment of a pesticide degrading microbial community in BPS. However, uncertainties exist about the minimal soil inoculum density needed for successful bioaugmentation of BPS. Therefore, in this study, BPS microcosm experiments were initiated with different linuron primed soil inoculum densities ranging from 0.5 to 50 vol.% and the evolution of the linuron mineralization capacity in the microcosms was monitored during feeding with linuron. Successful establishment of a linuron mineralization community in the BPS microcosms was achieved with all inoculum densities including the 0.5 vol.% density with only minor differences in the time needed to acquire maximum degradation capacity. Moreover, once established, the robustness of the linuron degrading microbial community towards expected stress situations proved to be independent of the initial inoculum density. This study shows that pesticide-primed soil inoculum densities as low as 0.5 vol.% can be used for bioaugmentation of a BPS matrix and further supports the use of BPS for treatment of pesticide-contaminated wastewater at farmyards.
Theoretical and empirical approaches to using films as a means to increase communication efficiency.
Kiselnikova, N.V.
2016-07-01
The theoretical framework of this analytic study is based on studies in the field of film perception. Films are considered a communicative system that is encoded in an ordered series of shots, with decoding proceeding during perception. The shots are the elements of a cinematic message that must be "read" by the viewer. The objective of this work is to analyze the existing theoretical approaches to using films in psychotherapy and education. An original approach to film therapy, based on teaching clients to use new communicative sets and psychotherapeutic patterns through watching films, is presented. The article specifies the main points emphasized in theories of film therapy and education. It considers the specifics of film therapy in the process of increasing the effectiveness of communication and discusses the advantages and limitations of the proposed method. The contemporary forms of film therapy and the formats of cinema clubs are critically examined. The theoretical assumptions and empirical research that could serve as a basis for a method of developing effective communication by means of films are discussed. Our studies demonstrate that film therapy must include an educational stage to achieve more effective and stable results. This means teaching viewers how to recognize certain psychotherapeutic and communicative patterns in the material of films, practicing the skill of finding as many examples as possible for each pattern, and transferring the acquired schemes of analyzing and recognizing patterns to one's own life circumstances. The four stages of the film therapeutic process, as well as the effects achieved at each stage, are described in detail. In conclusion, the conditions under which the film therapy method would be most effective are discussed. Various properties of client groups and psychotherapeutic scenarios for using the method of active film therapy are described.
Usa, Hideyuki; Matsumura, Masashi; Ichikawa, Kazuna; Takei, Hitoshi
2017-01-01
This study attempted to develop a formula for predicting maximum muscle strength value for young, middle-aged, and elderly adults using theoretical Grade 3 muscle strength value (moment fair: Mf)—the static muscular moment to support a limb segment against gravity—from the manual muscle test by Daniels et al. A total of 130 healthy Japanese individuals divided by age group performed isometric muscle contractions at maximum effort for various movements of hip joint flexion and extension and knee joint flexion and extension, and the accompanying resisting force was measured and maximum muscle strength value (moment max, Mm) was calculated. Body weight and limb segment length (thigh and lower leg length) were measured, and Mf was calculated using anthropometric measures and theoretical calculation. There was a linear correlation between Mf and Mm in each of the four movement types in all groups, except for knee flexion in the elderly. However, the formula for predicting maximum muscle strength was not sufficiently compatible in middle-aged and elderly adults, suggesting that the formula obtained in this study is applicable in young adults only. PMID:28133549
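The quantity Mf is a static gravitational moment: segment weight times the lever arm of the segment's centre of mass about the joint. A minimal sketch, assuming horizontal limb position; the anthropometric fractions below are illustrative textbook-style values (shank mass as a fraction of body mass, centre of mass as a fraction of segment length), not the coefficients used in the study.

```python
G = 9.81  # gravitational acceleration, m/s^2

def moment_fair(body_mass_kg, segment_length_m,
                mass_fraction=0.0465, com_fraction=0.433):
    """Static gravitational moment Mf (N*m) needed to hold a limb
    segment horizontal against gravity: segment weight times the
    lever arm of its centre of mass about the joint.  The default
    fractions are illustrative anthropometric-table values."""
    segment_mass = mass_fraction * body_mass_kg
    lever_arm = com_fraction * segment_length_m
    return segment_mass * G * lever_arm

# e.g. a 60 kg person with a 0.38 m lower leg
print(round(moment_fair(60.0, 0.38), 2))
```

The study's linear Mf-Mm correlation then lets Mm be predicted from this purely anthropometric quantity, within the limits the abstract notes for older groups.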
Theoretical Investigations on the Efficiency and the Conditions for the Realization of Jet Engines
Roy, Maurice
1950-01-01
Contents: Preliminary notes on the efficiency of propulsion systems; Part I: Propulsion systems with direct axial reaction rockets and rockets with thrust augmentation; Part II: Helicoidal reaction propulsion systems; Appendix I: Steady flow of viscous gases; Appendix II: On the theory of viscous fluids in nozzles; and Appendix III: On thrust augmenters, and particularly gas augmenters
Oikonomou, V.; Jepma, C.J.; Becchis, F.; Russolillo, D.
2008-01-01
In this paper we analyze interactions of two energy policy instruments, namely a White Certificates (WhC) scheme as an innovative policy instrument for energy efficiency improvement and energy taxation. These policy instruments differ in terms of objectives and final impacts on the price of electric
Ramachandran, Hema; Pillai, K. P. P.; Bindu, G. R.
2016-08-01
A two-port network model for a wireless power transfer system, taking into account the distributed capacitances and using a PP network topology with top coupling, is developed in this work. The operating and maximum power transfer efficiencies are determined analytically in terms of S-parameters. The system performance predicted by the model is verified with an experiment consisting of a high-power household lighting load of 230 V, 100 W, tested at two forced resonant frequencies, namely 600 kHz and 1.2 MHz. The experimental results are in close agreement with the proposed model.
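As a simplified illustration of expressing link efficiency through S-parameters: with source and load matched to the reference impedance, the transducer power gain of a two-port reduces to |S21|². This matched-termination shortcut and the |S21| value below are assumptions of the sketch; the paper derives the operating and maximum efficiencies from the full S-matrix.

```python
import math

def matched_efficiency(s21_mag):
    """Transducer power gain |S21|^2 under matched source and load,
    a common first estimate of WPT link efficiency."""
    return s21_mag ** 2

def matched_efficiency_db(s21_mag):
    """Same quantity expressed in decibels."""
    return 10 * math.log10(matched_efficiency(s21_mag))

print(round(matched_efficiency(0.9), 2))     # 0.81 -> 81% delivered
print(round(matched_efficiency_db(0.9), 2))  # about -0.92 dB
```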
An Interval Maximum Entropy Method for Quadratic Programming Problem
RUI Wen-juan; CAO De-xin; SONG Xie-wu
2005-01-01
Using the ideas of the maximum entropy function and penalty function methods, we transform the quadratic programming problem into an unconstrained differentiable optimization problem, discuss the interval extension of the maximum entropy function, provide region deletion test rules, and design an interval maximum entropy algorithm for the quadratic programming problem. The convergence of the method is proved and numerical results are presented. Both theoretical and numerical results show that the method is reliable and efficient.
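The maximum entropy function at the heart of this reformulation replaces max_i g_i(x) with the smooth surrogate (1/p) ln Σ_i exp(p g_i(x)), which tends to the true maximum as p grows. A small sketch on a toy QP; the penalty weight, the value of p, and the grid search are illustrative stand-ins, since the paper instead uses interval arithmetic with region-deletion tests.

```python
import math

def smooth_max(vals, p=200.0):
    """Maximum entropy aggregation: (1/p) * ln(sum exp(p*v)) tends
    to max(vals) as p grows, and is differentiable everywhere."""
    m = max(vals)                       # shift for numerical stability
    return m + math.log(sum(math.exp(p * (v - m)) for v in vals)) / p

def objective(x, y, mu=100.0):
    """Toy QP: minimize x^2 + y^2 subject to x + y >= 1, rewritten
    as an unconstrained problem by penalizing the smoothed
    constraint violation."""
    violation = 1.0 - x - y             # feasible when <= 0
    return x * x + y * y + mu * smooth_max([0.0, violation])

# crude grid search, just to show that the reformulated problem is
# an ordinary unconstrained minimization
best = min(((objective(i / 100, j / 100), i / 100, j / 100)
            for i in range(101) for j in range(101)),
           key=lambda t: t[0])
print(best[1], best[2])   # close to the true minimizer (0.5, 0.5)
```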
Chen, Xi Lin; De Santis, Valerio; Umenei, Aghuinyue Esai
2014-07-07
In this study, the maximum received power obtainable through wireless power transfer (WPT) by a small receiver (Rx) coil from a relatively large transmitter (Tx) coil is numerically estimated in the frequency range from 100 kHz to 10 MHz based on human body exposure limits. Analytical calculations were first conducted to determine the worst-case coupling between a homogeneous cylindrical phantom with a radius of 0.65 m and a Tx coil positioned 0.1 m away with the radius ranging from 0.25 to 2.5 m. Subsequently, three high-resolution anatomical models were employed to compute the peak induced field intensities with respect to various Tx coil locations and dimensions. Based on the computational results, scaling factors which correlate the cylindrical phantom and anatomical model results were derived. Next, the optimal operating frequency, at which the highest transmitter source power can be utilized without exceeding the exposure limits, is found to be around 2 MHz. Finally, a formulation is proposed to estimate the maximum obtainable power of WPT in a typical room scenario while adhering to the human body exposure compliance mandates.
Barrera, Manuel; Suarez-Llorens, Alfonso; Casas-Ruiz, Melquiades; Alonso, José J.; Vidal, Juan
2017-05-01
A generic theoretical methodology for the calculation of the efficiency of gamma spectrometry systems is introduced in this work. The procedure is valid for any type of source and detector and can be applied to determine the full energy peak and the total efficiency of any source-detector system. The methodology is based on the idea of an underlying probability of detection, which describes the physical model for the detection of the gamma radiation in the particular situation studied. This probability depends explicitly on the direction of the gamma radiation, and this dependence allows the development of more realistic and complex models than the traditional models based on point-source integration. The probability function employed in practice must reproduce the relevant characteristics of the detection process occurring in the particular situation studied. Once the probability is defined, the efficiency calculations can in general be performed using numerical methods. The Monte Carlo integration procedure is especially useful for performing the calculations when complex probability functions are used. The methodology can be used for the direct determination of the efficiency and also for the calculation of corrections that require this determination, as is the case for coincidence summing, geometric, or self-attenuation corrections. In particular, we have applied the procedure to obtain some of the classical self-attenuation correction factors usually employed to correct for the sample attenuation of cylindrical geometry sources. The methodology clarifies the theoretical basis and approximations associated with each factor by making explicit the probability that is generally hidden and implicit in each model. It has been shown that most of these self-attenuation correction factors can be derived from a common underlying probability, with this probability having a growing level of complexity as it reproduces more precisely
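A minimal numerical sketch of the idea: define a probability of detection over emission directions and integrate it by Monte Carlo. Here the probability is purely geometric, for a hypothetical on-axis point source and a disk-shaped detector face, and is checked against the known solid-angle formula; a real model would fold in intrinsic efficiency and attenuation terms.

```python
import math, random

def geometric_efficiency_mc(d, r, n=200_000, seed=1):
    """Monte Carlo estimate of the geometric efficiency: the fraction
    of isotropically emitted photons whose direction intersects a
    detector disk of radius r at on-axis distance d.  This is the
    'underlying probability of detection' integrated numerically
    over emission directions."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        # isotropic direction: cos(theta) uniform in [-1, 1]
        cos_t = rng.uniform(-1.0, 1.0)
        if cos_t <= 0:
            continue                      # emitted away from detector
        tan_t = math.sqrt(1 - cos_t ** 2) / cos_t
        if d * tan_t <= r:                # direction crosses the disk
            hits += 1
    return hits / n

d, r = 5.0, 2.5
est = geometric_efficiency_mc(d, r)
analytic = 0.5 * (1 - d / math.hypot(d, r))  # solid-angle fraction
print(f"MC: {est:.4f}  analytic: {analytic:.4f}")
```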
Efficient Theoretical Screening of Solid Sorbents for CO2 Capture Applications*
Duan, Yuhua; Luebke, David; Pennline, Henry
2012-03-31
By combining thermodynamic database mining with first-principles density functional theory and phonon lattice dynamics calculations, a theoretical screening methodology to identify the most promising CO2 sorbent candidates from the vast array of possible solid materials has been proposed and validated. The ab initio thermodynamic technique has the advantage of allowing identification of the thermodynamic properties of CO2 capture reactions without any experimental input beyond crystallographic structural information of the solid phases involved. For a given solid, the first step is to attempt to extract thermodynamic properties from thermodynamic databases and the available literature. If the thermodynamic properties of the compound of interest are unknown, an ab initio thermodynamic approach is used to calculate them. These properties, expressed conveniently as chemical potentials and heats of reaction and obtained either from databases or from calculations, are further used for computing the thermodynamic reaction equilibrium properties of the CO2 absorption/desorption cycles. Only those solid materials for which lower capture energy costs are predicted at the desired process conditions are selected as CO2 sorbent candidates and are further considered for experimental validation. Solid sorbents containing alkali and alkaline earth metals have been reported in several previous studies to be good candidates for CO2 sorbent applications due to their high CO2 absorption capacity at moderate working temperatures. In addition to introducing our computational screening procedure, in this presentation we summarize our results for solid systems composed of alkali and alkaline earth metal oxides, hydroxides, and carbonates/bicarbonates to validate our methodology. Additionally, we have applied our computational method to mixed solid systems of Li2O with SiO2/ZrO2 at different mixing ratios; our preliminary results showed that increasing the Li2O/SiO2 ratio in
Busarov, S. S.; Vasil'ev, V. K.; Busarov, I. S.; Titov, D. S.; Panin, Ju. N.
2017-08-01
The technology for calculating the operating processes of slow-speed long-stroke reciprocating stages, developed earlier and tested with air as the working fluid, allowed the authors to obtain successful results concerning compression of gases to medium pressures in one stage. This raises the question of the efficiency of applying slow-speed long-stroke stages in various fields of technology and the national economy where the working fluid is another gas or gas mixture. The article presents the results of an efficiency evaluation of single-stage compressor units based on such stages for cases when ammonia, hydrogen, helium, or a propane-butane mixture is used as the working fluid.
Abdul Hadi, Sabina; Fitzgerald, Eugene A.; Nayfeh, Ammar
2016-02-01
Here we present the detailed balance efficiency limit for a novel two-terminal dual- and triple-junction "step-cell" under AM 1.5G and AM 0 incident spectra. The step-cell is a multi-junction (MJ) solar cell in which part of the top cell is removed, exposing some of the bottom cell area to unfiltered incident light, thus increasing the bottom cell's photogenerated current. Optical generation of the bottom cell is modeled in two parts: the step part, limited by the bottom cell bandgap, and the conventional part, additionally limited by the top cell absorption. Our results show that a conventionally designed MJ cell with an optimized bandgap combination of 1.64 eV/0.96 eV for dual junction and 1.91 eV/1.37 eV/0.93 eV for triple junction has the highest theoretical efficiency limit. However, the step-cell design provides significant efficiency improvement for cells with non-optimum bandgap values. For example, for a 1.41 eV (˜GaAs)/Si dual junction under AM 1.5G, the efficiency limit increases from ˜21% in a conventional design to 38.7% for an optimized step-cell. Similar benefits are observed for the three-junction step-cell and for the AM 0 spectrum studied here. The step-cell relaxes bandgap requirements for efficient MJ solar cells, providing an opportunity for a wider selection of materials and cost reduction.
Wen, Long; Chen, Qin; Sun, Fuhe; Song, Shichao; Jin, Lin; Yu, Yan
2014-11-13
Solar cells incorporating multi-coloring capability not only offer an aesthetic solution to bridge the gap between solar modules and building decorations but also open up the possibility of self-powered colorful displays. In this paper, we propose, based on theoretical considerations, a multi-colored semi-transparent organic solar cell (TOSC) design containing metallic nanostructures with both high color purity and high efficiency. By employing the guided mode resonance effect, the multi-colored TOSC behaves like an efficient color filter that selectively transmits light at the desired wavelengths and generates electricity from light at other wavelengths. A broad range of coloring and luminosity adjustment for the transmitted light can be achieved by simply tuning the period and the duty cycle of the metallic nanostructures. Furthermore, accompanying the efficient color filtering characteristics, the optical absorption of the TOSCs is improved due to the marked suppression of transmission loss at off-resonance wavelengths and the increased light trapping in the TOSCs. The mechanisms of light guiding in the photoactive layer and broadband backward scattering from the metallic nanostructures are identified as making an essential contribution to the improved light harvesting. By enabling efficient color control and high efficiency simultaneously, this approach holds great promise for future versatile photovoltaic energy utilization.
M. Girotto
2012-06-01
This research aimed to evaluate the speed and intensity of action of hexazinone, alone and in mixture with other photosystem II inhibitors, through the photosynthetic efficiency of Panicum maximum in post-emergence. The assay consisted of six treatments: hexazinone (250 g ha-1), tebuthiuron (1.0 kg ha-1), hexazinone + tebuthiuron (125 g ha-1 + 0.5 kg ha-1), diuron (2,400 g ha-1), hexazinone + diuron (125 + 1,200 g ha-1), metribuzin (1,440 g ha-1), hexazinone + metribuzin (125 + 720 g ha-1), and an untreated control. The experiment was set up in a completely randomized design with four replications. After application of the treatments, the plants were moved to a greenhouse under controlled temperature and humidity conditions, where they remained for the experimental period, during which the following evaluations were performed: electron transport rate and visual assessment of intoxication. Fluorometer readings were taken 1, 2, 6, 24, 48, 72, 120, and 168 hours after application, and visual evaluations were made at three and seven days after application. The results showed differences among treatments, notably for diuron, which reduced electron transport slowly compared with the other herbicides and, in mixture with hexazinone, showed a synergistic effect. Using the fluorometer, early intoxication was detected in P. maximum plants after application of photosystem II-inhibiting herbicides both alone and in mixture.
Dehdab, Maryam; Shahraki, Mehdi; Habibi-Khorassani, Sayyed Mostafa
2016-01-01
The inhibition efficiencies of three amino acids [tryptophan (B), tyrosine (C), and serine (A)] have been studied as green corrosion inhibitors for carbon steel using the density functional theory (DFT) method in the gas and aqueous phases. Quantum chemical parameters such as EHOMO (highest occupied molecular orbital energy), ELUMO (lowest unoccupied molecular orbital energy), hardness (η), polarizability, total negative charge on atoms (TNC), molecular volume (MV), and total energy (TE) have been calculated at the B3LYP level of theory with the 6-311++G** basis set. Consistent with experimental data, the theoretical results showed that the order of inhibition efficiency is tryptophan (B) > tyrosine (C) > serine (A). In order to determine the possible sites of nucleophilic and electrophilic attack, local reactivity has been evaluated through Fukui indices.
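The global hardness mentioned above is commonly computed from the frontier orbital energies via a Koopmans-type approximation, η ≈ (E_LUMO - E_HOMO)/2. A sketch; the orbital energies below are invented for illustration and are not the paper's B3LYP/6-311++G** values.

```python
def global_hardness(e_homo_ev, e_lumo_ev):
    """Conceptual-DFT global hardness via a Koopmans-type
    approximation: eta = (E_LUMO - E_HOMO) / 2, in eV.  Softer
    molecules (smaller eta) are generally the better corrosion
    inhibitors."""
    return (e_lumo_ev - e_homo_ev) / 2

# hypothetical frontier-orbital energies (eV), chosen only so that
# the hardness ordering matches the reported inhibition order
for name, homo, lumo in [("tryptophan", -5.5, -0.3),
                         ("tyrosine",   -5.8, -0.2),
                         ("serine",     -6.6,  0.1)]:
    print(name, round(global_hardness(homo, lumo), 2))
```

With these made-up inputs tryptophan comes out softest and serine hardest, mirroring the efficiency order tryptophan > tyrosine > serine reported in the abstract.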
Theoretical evaluation of the efficiency of catalysts with the aid of deforming forces
Avakyan, V.G.
1983-02-10
The purpose of the present work was to investigate the applicability of deforming forces to the characterization of catalysts and to compare their efficiency. A method for calculating the deforming forces appearing as a result of the action of a catalyst on a reactant has been proposed. It has been shown that the magnitudes and directions of the deforming forces correlate with the acceptor properties of the catalyst and can be used for comparing the effects of different catalysts on a reactant. The greater sensitivity of the forces in comparison to the traditional parameters E^R_AB has been demonstrated.
Chimienti, Marianna; Bartoń, Kamil A; Scott, Beth E; Travis, Justin M J
2014-01-01
Foraging in the marine environment presents particular challenges for air-breathing predators. Information about prey capture rates and the strategies that diving predators use to maximise prey encounter rates and foraging success is still largely lacking and difficult to observe. Moreover, with growing awareness of potential climate change impacts and increasing interest in the development of renewable energy sources, it is unknown how the foraging activity of diving predators such as seabirds will respond to both the presence of underwater structures and the potential corresponding changes in prey distributions. Motivated by this issue, we developed a theoretical model to gain a general understanding of how the foraging efficiency of diving predators may vary according to landscape structure and foraging strategy. Our theoretical model highlights that animal movements, intervals between prey captures and foraging efficiency are likely to depend critically on the distribution of the prey resource and the size and distribution of introduced underwater structures. For multiple prey loaders, changes in prey distribution affected the searching time necessary to catch a set amount of prey, which in turn affected foraging efficiency. The spatial aggregation of prey around small devices (∼9 × 9 m) created a valuable habitat for successful foraging, resulting in shorter intervals between prey captures and higher foraging efficiency. The presence of large devices (∼24 × 24 m), however, represented an obstacle to predator movement, thus increasing the intervals between prey captures. In contrast, for single prey loaders the introduction of spatial aggregation of the resources did not represent an advantage, suggesting that their foraging efficiency is more strongly affected by other factors, such as the time to find the first prey item, which was found to be shorter in the presence of large devices. The development of this theoretical model represents a useful
A Game-Theoretical Approach for Spectrum Efficiency Improvement in Cloud-RAN
Zhuofu Zhou
2016-01-01
As tremendous numbers of mobile devices access the Internet in the future, cells that can provide high data rates and more capacity are expected to be deployed. Specifically, in the next generation of mobile communication, 5G, cloud computing is expected to be applied to the radio access network. In the cloud radio access network (Cloud-RAN), the traditional base station is divided into two parts, that is, remote radio heads (RRHs) and baseband units (BBUs). RRHs are geographically distributed and densely deployed, so as to achieve high data rates and low latency. However, the ultradense deployment inevitably deteriorates spectrum efficiency due to the more severe intercell interference among RRHs. In this paper, the downlink spectrum efficiency is improved through cooperative transmission based on forming coalitions of RRHs. We formulate the problem as a coalition formation game in partition form. In the process of coalition formation, each RRH can join or leave a coalition to maximize its own individual utility while taking the coalition utility into account at the same time. Moreover, the convergence and stability of the resulting coalition structure are studied. Numerical simulation results demonstrate that the proposed approach based on the coalition formation game is superior to the noncooperative method in terms of aggregate coalition utility.
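A toy sketch of join/leave coalition-formation dynamics of the kind described. The per-RRH utility below is invented (a logarithmic cooperation gain minus a linear signalling cost); the paper's utilities and its partition-form stability analysis are richer, so this only illustrates the iterate-until-stable mechanic.

```python
import math

def member_utility(size):
    """Invented per-RRH utility: cooperation gain (less intercell
    interference) grows with coalition size, while signalling
    overhead grows linearly with it."""
    return math.log2(1 + size) - 0.25 * (size - 1)

def form_coalitions(n_rrh):
    """Greedy join/leave dynamics: each RRH moves to whichever option
    (another coalition, or standing alone) maximizes its own utility;
    repeat until no RRH wants to move, i.e. a stable partition."""
    coalitions = [{i} for i in range(n_rrh)]    # start as singletons
    changed = True
    while changed:
        changed = False
        for i in range(n_rrh):
            cur = next(c for c in coalitions if i in c)
            target, best_u = None, member_utility(len(cur))
            if len(cur) > 1 and member_utility(1) > best_u + 1e-12:
                target, best_u = "alone", member_utility(1)
            for c in coalitions:
                if c is not cur and member_utility(len(c) + 1) > best_u + 1e-12:
                    target, best_u = c, member_utility(len(c) + 1)
            if target is not None:
                cur.discard(i)
                if target == "alone":
                    coalitions.append({i})
                else:
                    target.add(i)
                coalitions = [c for c in coalitions if c]
                changed = True
    return [sorted(c) for c in coalitions]

print([len(c) for c in form_coalitions(6)])
```

The loop terminates at a partition from which no single RRH can profitably deviate, the same stability notion the abstract studies for its coalition structure.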
Ren, Xin-Yao; Wu, Yong; Wang, Li; Zhao, Liang; Zhang, Min; Geng, Yun; Su, Zhong-Min
2014-06-01
Density functional theory/time-dependent density functional theory was used to investigate the synthesized guanidinate-based iridium(III) complex [(ppy)2Ir{(N(i)Pr)2C(NPh2)}] (1) and two designed derivatives (2 and 3) to determine the influence of different cyclometalated ligands on photophysical properties. Beyond the conventional discussion of geometric relaxations and absorption and emission properties, many relevant parameters, including spin-orbit coupling (SOC) matrix elements, zero-field-splitting parameters, radiative rate constants (kr) and so on, were quantitatively evaluated. The results reveal that replacing the pyridine ring in the 2-phenylpyridine ligand with different diazole rings can not only enlarge the frontier molecular orbital energy gaps, resulting in a blue-shift of the absorption spectra for 2 and 3, but also enhance the absorption intensity of 3 in the lower-energy region. Furthermore, it is intriguing to note that the photoluminescence quantum efficiency (ΦPL) of 3 is significantly higher than that of 1. This can be explained by its large SOC value (n=3-4) and large transition electric dipole moment (μS3), which could significantly contribute to a larger kr. Besides, compared with 1, the higher emitting energy (ET1) and smaller (2) value for 3 may lead to a smaller non-radiative decay rate. Additionally, the detailed results also indicate that, compared to 1 with a pyridine ring, 3 with an imidazole ring exhibits better hole injection ability. Therefore, the designed complex 3 can be expected to be a promising candidate as a highly efficient guanidinate-based phosphorescence emitter for OLED applications.
G. D. Liakhevich
2014-01-01
In Belarus, concrete with strength up to 60 MPa is used for construction. At the same time, high-strength concrete with compressive strength above 60 MPa is widely used in all industrially developed countries. High-strength concrete is included in regulatory documents of the European Union, and that fact has laid a solid foundation for its application. High-strength concrete is produced using highly dispersed silica additives, such as micro-silica, and plasticizers (super-plasticizers), with a water/cement (w/c) ratio not greater than 0.4. Theoretical aspects of high-strength concrete for bridge structures have been studied in the paper. The paper shows the positive impact of highly dispersed additives on the structure and physico-mechanical properties of cement compositions, namely: reduction of the total porosity of the cement stone in concrete while increasing the volumetric concentration and dispersion of a filler; binding of calcium hydroxide by amorphised micro-silica; increased activity of mineral additives upon fine grinding; acceleration of the initial stage of chemical hardening of cement compositions with highly dispersed particle additives that serve as centers of crystallization; "binder-additive" cluster formation due to the high surface energy of highly dispersed additive particles; hardening of the interface between the cement stone and aggregates in concrete; and the fact that high-strength concretes gain strength much faster than conventional concretes. A technology for preparation and a composition of high-strength concrete using highly dispersed mineral additives and a super-plasticizer have been developed in the paper. This concrete will ensure higher density, water- and gas-tightness, increased resistance to aggressive environments, reduced consumption of concrete and reinforcement, reduced transport and installation weight, increased initial strength, earlier striking of formwork and preliminary compression, and increased length of bridge spans
Efficient energy management: theoretical basis of the financial activity of energy service companies
I.M. Sotnyk
2015-09-01
Full Text Available The aim of this article is to research the implementation of different types of energy service contracts and the financial mechanisms of energy service companies (ESCOs) in domestic economic conditions, in order to improve approaches to energy management and to activate energy-efficient and resource-saving processes in Ukraine. The results of the analysis. High energy consumption in the economy of Ukraine is the result of inefficient use of energy resources, which has a negative effect on the energy safety of the country, deteriorates the environment and undermines people's health. The potential to decrease energy consumption in Ukraine is so high that decisive actions in this sphere, as experts claim, can lead to a 20-30% decrease in annually consumed energy resources in the country. Consequently, there is a great necessity to activate energy and resource saving by developing energy service activity and creating a chain of specialized ESCOs in Ukraine. The ESCO is one of the most effective and widespread organizational forms in the world for increasing the energy efficiency of a national economy; its efficiency has been proved both in developed and in developing countries. The implementation of different types of energy performance contracts of specialized ESCOs was studied in the article. Considering the imperfection of Ukrainian legislation, which does not yet accommodate energy performance contracting, the authors offer recommendations concerning changes to some legislative acts. At the same time, financial support of ESCOs' activity is really important for the development of the energy service market. In this research, possible financial mechanisms of ESCOs' work were analyzed. There is a need to diversify the sources of funding of energy saving measures that are adopted by specialized ESCOs by means of attracting funding from state and local budgets, creating conditions for attracting resources of international financial
The Efficiency of a Hybrid Flapping Wing Structure—A Theoretical Model Experimentally Verified
Yuval Keren
2016-07-01
Full Text Available To propel a lightweight structure, a hybrid wing structure was designed; the wing's geometry resembled a rotor blade, and its flexibility resembled an insect's flapping wing. The wing was designed to be flexible in twist and spanwise rigid, thus maintaining the aeroelastic advantages of a flexible wing. The use of a relatively "thick" airfoil enabled a higher strength-to-weight ratio by increasing the wing's moment of inertia. The optimal design was based on a simplified quasi-steady inviscid mathematical model that approximately resembles the aerodynamic and inertial behavior of the flapping wing. A flapping mechanism that imitates the insects' flapping pattern was designed and manufactured, and a set of experiments for various parameters was performed. The simplified analytical model was updated according to the test results, compensating for the viscous increase in drag and decrease in lift that were neglected in the simplified calculations. The propelling efficiency of the hovering wing at various design parameters was calculated using the updated model. It was further validated by testing a smaller wing flapping at a higher frequency. Good and consistent test results were obtained in line with the updated model, yielding a simple yet accurate tool for flapping-wing design.
Koyama, Shinsuke; Paninski, Liam
2010-08-01
A number of important data analysis problems in neuroscience can be solved using state-space models. In this article, we describe fast methods for computing the exact maximum a posteriori (MAP) path of the hidden state variable in these models, given spike train observations. If the state transition density is log-concave and the observation model satisfies certain standard assumptions, then the optimization problem is strictly concave and can be solved rapidly with Newton-Raphson methods, because the Hessian of the loglikelihood is block tridiagonal. We can further exploit this block-tridiagonal structure to develop efficient parameter estimation methods for these models. We describe applications of this approach to neural decoding problems, with a focus on the classic integrate-and-fire model as a key example.
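The block-tridiagonal structure described in the abstract above can be sketched for the simplest case: a 1-D random-walk state with Gaussian pseudo-observations (a stand-in for the log-concave spike-train likelihood, chosen here so that one Newton step, a single banded linear solve, gives the exact MAP path). The noise variances are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_banded

def map_path(y, obs_var=0.5, trans_var=0.1):
    """MAP path of a 1-D random-walk state under Gaussian observations.

    The negative log-posterior is quadratic, so its tridiagonal Hessian
    makes the MAP path a single banded solve in O(T) time -- the same
    structure the paper exploits with Newton-Raphson for spike data.
    """
    T = len(y)
    # Hessian diagonal: data term plus one transition term per neighbour.
    diag = np.full(T, 1.0 / obs_var + 2.0 / trans_var)
    diag[0] -= 1.0 / trans_var    # boundary states have only one neighbour
    diag[-1] -= 1.0 / trans_var
    off = np.full(T - 1, -1.0 / trans_var)
    # Banded storage (superdiagonal, main, subdiagonal) for solve_banded.
    ab = np.zeros((3, T))
    ab[0, 1:] = off
    ab[1, :] = diag
    ab[2, :-1] = off
    b = y / obs_var               # gradient of the data term at x = 0
    return solve_banded((1, 1), ab, b)

y = np.sin(np.linspace(0, 3, 200)) + 0.3 * np.random.default_rng(0).standard_normal(200)
x_map = map_path(y)
```

For an actual point-process likelihood the Hessian stays (block) tridiagonal but the Newton iteration must be repeated until convergence.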
A Game-Theoretic Approach to Energy-Efficient Modulation in CDMA Networks with Delay QoS Constraints
Meshkati, Farhad; Poor, H Vincent; Schwartz, Stuart C
2007-01-01
A game-theoretic framework is used to study the effect of constellation size on the energy efficiency of wireless networks for M-QAM modulation. A non-cooperative game is proposed in which each user seeks to choose its transmit power (and possibly transmit symbol rate) as well as the constellation size in order to maximize its own utility while satisfying its delay quality-of-service (QoS) constraint. The utility function used here measures the number of reliable bits transmitted per joule of energy consumed, and is particularly suitable for energy-constrained networks. The best-response strategies and Nash equilibrium solution for the proposed game are derived. It is shown that in order to maximize its utility (in bits per joule), a user must choose the lowest constellation size that can accommodate the user's delay constraint. This strategy is different from one that would maximize spectral efficiency. Using this framework, the tradeoffs among energy efficiency, delay, throughput and constellation size are ...
Efficient and Low-Cost 3D Structured Light System Based on a Modified Number-Theoretic Approach
Salvi Joaquim
2010-01-01
Full Text Available 3D scanning based on structured light (SL) has been proven to be a powerful tool to measure the three-dimensional shape of surfaces, especially in biomechanics. We define a set of conditions that an optimal SL strategy should fulfill in the case of static scenes and then we present an efficient solution based on improving the number-theoretic approach (NTA). The proposal is compared to the well-known Gray code (GC) plus phase shift (PS) technique and the original NTA, all satisfying the same set of conditions but obtaining significant improvements with our implementation. The technique is validated in biomechanical applications such as the scanning of a footprint left on a "foam box" typically made for that purpose, where one of the ultimate goals could be the production of a shoe insole.
Ogawa, Akira; Anzou, Hideki; Yamamoto, So; Shimagaki, Mituru
2015-11-01
In order to control the maximum tangential velocity Vθm (m/s) of the turbulent rotational air flow and the collection efficiency ηc (%), using fly ash of mean diameter XR50=5.57 µm, two secondary jet nozzles were installed on the body of an axial flow cyclone dust collector with body diameter D1=99 mm. To estimate Vθm (m/s), the conservation theory of angular momentum flux with the Ogawa combined vortex model was applied. Comparisons of the estimated Vθm (m/s) with results measured by a cylindrical Pitot tube showed good agreement. The collection efficiencies ηcth (%) estimated from the cut-size Xc (µm), which was calculated using the estimated Vθm (m/s) and the particle size distribution R(Xp), were slightly higher than the experimental results due to re-entrainment of the collected dust. The best method for adjusting ηc (%) through the contribution of the secondary jet flow is principally to apply the centrifugal effect Φc (1). The above results are described in detail.
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data? (2) Goodness-of-fit: How concordant is this distribution with the observed data? (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions, called "maximum fidelity", is presented. Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...
1993-07-01
This document provides an analysis of the potential impacts associated with the proposed action, which is continued operation of Naval Petroleum Reserve No. 1 (NPR-1) at the Maximum Efficient Rate (MER) as authorized by Public Law 94-258, the Naval Petroleum Reserves Production Act of 1976 (Act). The document also provides a similar analysis of alternatives to the proposed action, which also involve continued operations, but under lower development scenarios and lower rates of production. NPR-1 is a large oil and gas field jointly owned and operated by the federal government and Chevron U.S.A. Inc. (CUSA) pursuant to a Unit Plan Contract that became effective in 1944; the government's interest is approximately 78% and CUSA's interest is approximately 22%. The government's interest is under the jurisdiction of the United States Department of Energy (DOE). The facility is approximately 17,409 acres (74 square miles), and it is located in Kern County, California, about 25 miles southwest of Bakersfield and 100 miles north of Los Angeles in the south central portion of the state. The environmental analysis presented herein is a supplement to the NPR-1 Final Environmental Impact Statement that was issued by DOE in 1979 (the 1979 EIS). As such, this document is a Supplemental Environmental Impact Statement (SEIS).
Faramarz Faghihi
2013-12-01
Full Text Available Fruit flies (Drosophila melanogaster) rely on their olfactory system to process environmental information. This information has to be transmitted without system-relevant loss by the olfactory system to deeper brain areas for learning. Here we study the role of several parameters of the fly's olfactory system and the environment and how they influence olfactory information transmission. We have designed an abstract model of the antennal lobe, the mushroom body and the inhibitory circuitry. Mutual information between the olfactory environment, simulated in terms of different odor concentrations, and a sub-population of intrinsic mushroom body neurons (Kenyon cells) was calculated to quantify the efficiency of information transmission. With this method we study, on the one hand, the effect of different connectivity rates between olfactory projection neurons and firing thresholds of Kenyon cells. On the other hand, we analyze the influence of inhibition on mutual information between environment and mushroom body. Our simulations show an expected linear relation between the connectivity rate between the antennal lobe and the mushroom body and the firing threshold of the Kenyon cells to obtain maximum mutual information for both low and high odor concentrations. However, contradicting everyday experience, high odor concentrations cause a drastic, and unrealistic, decrease in mutual information for all connectivity rates compared to low concentrations. But when inhibition on the mushroom body is included, mutual information remains at high levels independent of other system parameters. This finding points to a pivotal role of inhibition in fly information processing, without which the system's efficiency would be substantially reduced.
Martí, Sergio; Andrés, Juan; Moliner, Vicent; Silla, Estanislao; Tuñón, Iñaki; Bertrán, Juan
2008-01-01
The Diels-Alder reaction is one of the most important and versatile transformations available to organic chemists for the construction of complex natural products, therapeutic agents, and synthetic materials. Given the lack of efficient enzymes capable of catalyzing this kind of reaction, it is of interest to ask whether a biological catalyst could be designed from an antibody-combining site. In the present work, a theoretical study of the different behavior of a germline catalytic antibody (CA) and its matured form, 39 A-11, that catalyze a Diels-Alder reaction has been carried out. A free-energy perturbation technique based on a hybrid quantum-mechanics/molecular-mechanics scheme, together with internal energy minimizations, has allowed free-energy profiles to be obtained for both CAs. The profiles show a smaller barrier for the matured form, which is in agreement with the experimental observation. Free-energy profiles were obtained with this methodology, thereby avoiding the much more demanding two-dimensional calculations of the energy surfaces that are normally required to study this kind of reaction. Structural analysis and energy evaluations of substrate-protein interactions have been performed from averaged structures, which allows understanding of how the single mutations carried out during the maturation process can be responsible for the observed fourfold enhancement of the catalytic rate constant. The conclusion is that the mutation effect in this studied germline CA produces a complex indirect effect through coupled movements of the backbone of the protein and the substrate.
Wang, Zhiwei; Wang, Zhenling; Jiang, Bo; Zhang, Fan; Li, Peng; Cao, Wei
For typical residential buildings and for small- and large-scale public buildings, this study supplements the missing data of the calculation benchmark in China's Technical Guide for the Energy Efficiency Labeling of Civil Buildings and determines the boundary conditions for calculating the theoretical values of civil building energy efficiency. Based on the equivalent full-load-hours method, a modular program is developed to calculate building energy consumption for dynamic cooling, heating, and lighting demands, and the correspondence between the star-level theoretical value of the energy-saving rate and the limiting values specified in the Guide is identified. Using orthogonal experimental design and multiple linear regression, a quantitative function relating the theoretical energy-saving rate to the main factor parameters is established, the impact of each control parameter on the energy-saving rate is analyzed, and the law of variation of the theoretical energy-saving rate with the control parameters is revealed. For upgrading building energy efficiency labels, the technical measures needed are presented and their feasibility is analyzed. The results of the study can provide theoretical guidance for energy-saving design or retrofitting of civil buildings.
唐治德; 徐阳阳; 赵茂; 彭一灵
2015-01-01
By applying lumped-parameter circuit theory and coupled-mode theory, the efficiency of a wireless power transfer system via magnetic resonant coupling is investigated, and the concept of a transfer-efficiency-maximum frequency, at which the transfer efficiency is maximized, is proposed. The influence of system parameters and load on the transfer-efficiency-maximum frequency and on transfer efficiency is analyzed. A two-coil transfer system was set up, and the relationships between frequency and transfer efficiency, between load and the transfer-efficiency-maximum frequency and transfer efficiency, and between distance and the transfer-efficiency-maximum frequency and transfer efficiency were studied experimentally and by simulation. Experiments and simulation prove that there is a transfer-efficiency-maximum frequency in a wireless power transfer system; this frequency is approximately proportional to the load and inversely proportional to the mutual inductance; it increases with distance; and when the system works at the transfer-efficiency-maximum frequency and the load resistance is much greater than the coil resistance, the transfer efficiency of the wireless power transfer system is maximized.
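The lumped-parameter picture above can be sketched numerically for a series-series compensated two-coil link. All component values below are illustrative assumptions, not taken from the paper; the sweep simply shows that the efficiency has a single maximizing frequency near the coils' resonance, whose location shifts with the load.

```python
import numpy as np

def wpt_efficiency(f, M, L=24e-6, C=1e-9, R_coil=0.5, R_load=10.0):
    """Coil-to-load efficiency of a series-series two-coil link.

    From Kirchhoff's laws on the secondary loop, I2 = j*w*M*I1 / Z2, so
    eta = |I2|^2 R_load / (|I1|^2 R1 + |I2|^2 (R2 + R_load)).
    """
    w = 2 * np.pi * f
    Z2 = R_coil + R_load + 1j * (w * L - 1.0 / (w * C))
    ratio = (w * M) ** 2 / np.abs(Z2) ** 2          # |I2 / I1|^2
    return ratio * R_load / (R_coil + ratio * (R_coil + R_load))

f = np.linspace(0.5e6, 2.0e6, 4000)   # sweep around f0 = 1/(2*pi*sqrt(LC)) ~ 1.03 MHz
M = 2e-6                               # illustrative mutual inductance
f_opt = {RL: f[np.argmax(wpt_efficiency(f, M, R_load=RL))] for RL in (5.0, 10.0, 20.0)}
```

In this simplified model the optimum sits just above resonance and moves upward as the load resistance grows, consistent with the trend reported in the abstract.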
CORA: Emission Line Fitting with Maximum Likelihood
Ness, Jan-Uwe; Wichmann, Rainer
2011-12-01
CORA analyzes emission line spectra with low count numbers and fits them to a line using the maximum likelihood technique. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise, the software derives the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. CORA has been applied to an X-ray spectrum with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory.
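CORA's core idea, maximizing a Poisson likelihood for a line flux instead of least-squares fitting, can be sketched as below. The Gaussian line profile, flat background, and use of a generic 1-D optimizer (rather than CORA's fixed-point equation) are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
# Synthetic low-count spectrum: a Gaussian line on a flat background.
x = np.linspace(-5, 5, 81)
profile = np.exp(-0.5 * (x / 0.8) ** 2)          # known, normalized below
true_flux, background = 30.0, 0.5
counts = rng.poisson(true_flux * profile / profile.sum() + background)

def neg_loglike(flux):
    """Poisson negative log-likelihood of the line flux (background held fixed).

    Constant terms (log factorials) are dropped; they do not affect the optimum.
    """
    mu = flux * profile / profile.sum() + background
    return np.sum(mu - counts * np.log(mu))

fit = minimize_scalar(neg_loglike, bounds=(0.0, 500.0), method="bounded")
flux_ml = fit.x
```

With counts this low, a least-squares fit would be biased; the Poisson likelihood handles the many zero-count bins correctly.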
Yayama, Tomoe; Kangawa, Yoshihiro; Kakimoto, Koichi
2013-08-01
The effect of growth orientation on In incorporation efficiency in InGaN films grown by metal-organic vapor phase epitaxy (MOVPE) is theoretically investigated. We propose a new theoretical model that explains the role of the surface N-H layer in In incorporation based on first-principles calculations. During III-nitride MOVPE, N-terminated reconstruction with N dangling bonds passivated by H is stable. A surface N-H layer that covers a group-III (In, Ga) atomic layer prevents In atoms from desorbing and being replaced by Ga atoms. In incorporation is therefore more efficient for higher N-H layer coverage and stability. To investigate this relationship, the enthalpy change for the decomposition of the N-H layer was calculated. This enthalpy change, which depends on growth orientation, is in good agreement with the experimental In content.
Guynn, Mark D.
2015-01-01
There are many trade-offs in aircraft design that ultimately impact the overall performance and characteristics of the final design. One well recognized and well understood trade-off is that of wing weight and aerodynamic efficiency. Higher aerodynamic efficiency can be obtained by increasing wing span, usually at the expense of higher wing weight. The proper balance of these two competing factors depends on the objectives of the design. For example, aerodynamic efficiency is preeminent for sailplanes and long slender wings result. Although the wing weight-drag trade is universally recognized, aerodynamic efficiency and structural efficiency are not usually considered in combination. This paper discusses the concept of "aero-structural efficiency," which combines weight and drag characteristics. A metric to quantify aero-structural efficiency, termed effective L/D, is then derived and tested with various scenarios. Effective L/D is found to be a practical and robust means to simultaneously characterize aerodynamic and structural efficiency in the context of aircraft design. The primary value of the effective L/D metric is as a means to better communicate the combined system level impacts of drag and structural weight.
Theoretical study on optimization of high efficiency GaInP/GaInAs/Ge tandem solar cells
Lin, Gui Jiang; Huang, Sheng Rong; Wu, Jyh Chiarng; Huang, Mei Chun
2009-08-01
This paper investigates which doping concentrations and layer thicknesses should be used to design practical GaInP/GaInAs/Ge triple-junction cells in order to optimize their performance. A rigorous model that includes optical and electrical modules is developed to simulate the external quantum efficiency, photocurrent and photovoltage of the GaInP/GaInAs/Ge tandem solar cells. It is found that cell efficiency depends strongly on the top cell thickness and on the doping concentrations of the base and emitter layers. Proper structures of the tandem cell operating under AM0 ("air mass zero") illumination are suggested to obtain high efficiency.
F. Topsøe
2001-09-01
Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information-theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game-theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also here serve as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
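The Mean Energy Model mentioned above, entropy maximization under a single moment constraint, yields the Gibbs form p_i = exp(-b E_i)/Z. A minimal sketch on a finite alphabet, solving for the Lagrange multiplier numerically (the energy values and target mean are illustrative):

```python
import numpy as np
from scipy.optimize import brentq

def maxent_distribution(energies, mean_energy):
    """Maximum-entropy distribution on a finite alphabet subject to a
    mean-energy constraint: the multiplier b is chosen so that the
    Gibbs distribution's mean energy equals the target."""
    E = np.asarray(energies, dtype=float)

    def gap(b):
        w = np.exp(-b * (E - E.min()))   # shift by E.min() for stability
        p = w / w.sum()
        return p @ E - mean_energy

    b = brentq(gap, -50.0, 50.0)         # gap is monotone decreasing in b
    w = np.exp(-b * (E - E.min()))
    return w / w.sum()

p = maxent_distribution([1.0, 2.0, 3.0, 4.0], mean_energy=1.7)
entropy = -(p * np.log(p)).sum()
```

In the game-theoretic reading of the paper, this p is also the optimal strategy of the Code Length Game: the code matched to p has minimal worst-case redundancy over all distributions meeting the constraint.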
Penalized maximum likelihood estimation and variable selection in geostatistics
Chu, Tingjin; Wang, Haonan (doi:10.1214/11-AOS919)
2012-01-01
We consider the problem of selecting covariates in spatial linear models with Gaussian process errors. Penalized maximum likelihood estimation (PMLE) that enables simultaneous variable selection and parameter estimation is developed and, for ease of computation, PMLE is approximated by one-step sparse estimation (OSE). To further improve computational efficiency, particularly with large sample sizes, we propose penalized maximum covariance-tapered likelihood estimation (PMLE$_{\\mathrm{T}}$) and its one-step sparse estimation (OSE$_{\\mathrm{T}}$). General forms of penalty functions with an emphasis on smoothly clipped absolute deviation are used for penalized maximum likelihood. Theoretical properties of PMLE and OSE, as well as their approximations PMLE$_{\\mathrm{T}}$ and OSE$_{\\mathrm{T}}$ using covariance tapering, are derived, including consistency, sparsity, asymptotic normality and the oracle properties. For covariance tapering, a by-product of our theoretical results is consistency and asymptotic normal...
Denisov, S. L.; Korolkov, A. I.
2017-07-01
A study of the diffraction of acoustic waves, applied to the problem of noise shielding, has been carried out using the method of maximum-length sequences. Rectangular plates and an aircraft model of integrated layout are used as the screens. In the study of noise shielding by the aircraft model, the reciprocity theorem is used. A comparison of experimental results with calculations performed in the framework of the geometrical theory of diffraction (GTD) is presented. On the basis of the calculations, the contributions of different areas of the shielding surface to the full acoustic field are identified. For the aircraft model, the shielding factor is calculated as a function of frequency.
Sauer, T. [ebm-papst Mulfingen GmbH und Co. KG, Mulfingen (Germany)
2006-03-15
Blowers are often powered by three-phase asynchronous motors with squirrel-cage rotors, which are robust, simple and reliable. Today, specifications have become more demanding. For example, economic efficiency and low noise - combined with speed control which again should be as simple as possible - are now required. Asynchronous motors are hardly capable of meeting these requirements, so they are being replaced in many applications by electronically commutated permanent magnet motors, so-called EC drives. (orig.)
Morillon Galvez, David [Comision Nacional para el Ahorro de Energia, Mexico, D. F. (Mexico)
1999-07-01
An analysis of the elements and factors that the architecture of buildings must have to be sustainable is presented, such as: a design suited to the environment, saving and efficient use of energy, the use of alternative energies, and self-supply. In addition, a methodology for the natural air conditioning (bioclimatic architecture) of buildings is proposed, as well as ideas for the saving and efficient use of energy, with the objective of contributing to the adequate use of building components (walls, ceilings, floors, etc.) which, when interacting with the environment, take advantage of it without deteriorating it, thereby achieving energy-efficient designs.
Maximum Spectral Luminous Efficacy of White Light
Murphy, T W
2013-01-01
As lighting efficiency improves, it is useful to understand the theoretical limits to luminous efficacy for light that we perceive as white. Independent of the efficiency with which photons are generated, there exists a spectrally-imposed limit to the luminous efficacy of any source of photons. We find that, depending on the acceptable bandpass and---to a lesser extent---the color temperature of the light, the ideal white light source achieves a spectral luminous efficacy of 250--370 lm/W. This is consistent with previous calculations, but here we explore the maximum luminous efficacy as a function of photopic sensitivity threshold, color temperature, and color rendering index; deriving peak performance as a function of all three parameters. We also present example experimental spectra from a variety of light sources, quantifying the intrinsic efficacy of their spectral distributions.
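The kind of calculation described above can be sketched by integrating a bandpass-truncated blackbody spectrum against the photopic response. The Gaussian approximation to the CIE V(λ) curve (peak 683 lm/W near 555 nm) and the specific bandpass are assumptions for illustration, not the paper's exact CIE data.

```python
import numpy as np

h, c, k = 6.626e-34, 2.998e8, 1.381e-23  # SI constants

def planck(lam_nm, T):
    """Blackbody spectral radiance (arbitrary scale) at wavelength in nm."""
    lam = lam_nm * 1e-9
    return lam ** -5 / np.expm1(h * c / (lam * k * T))

def luminous_efficacy(T=5800.0, band=(400.0, 700.0)):
    """Spectral luminous efficacy (lm/W) of a blackbody truncated to `band`.

    Uses a Gaussian approximation to the photopic curve; on a uniform
    wavelength grid the ratio of sums equals the ratio of integrals.
    """
    lam = np.linspace(band[0], band[1], 2000)
    V = np.exp(-0.5 * ((lam - 559.0) / 41.9) ** 2)   # approximate V(lambda)
    S = planck(lam, T)
    return 683.0 * (V * S).sum() / S.sum()

efficacy = luminous_efficacy()
```

With these assumptions a 5800 K spectrum clipped to 400-700 nm lands in the low end of the 250-370 lm/W range quoted in the abstract; narrowing the bandpass toward the photopic peak raises the efficacy at the cost of color rendering.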
Modified maximum likelihood registration based on information fusion
Yongqing Qi; Zhongliang Jing; Shiqiang Hu
2007-01-01
The bias estimation of passive sensors is considered based on information fusion in a multi-platform multi-sensor tracking system. The unobservability problem of bearings-only tracking in the blind spot is analyzed. A modified maximum likelihood method, which uses the redundant information of the multi-sensor system to calculate the target position, is investigated to estimate the biases. Monte Carlo simulation results show that the modified method eliminates the effect of the unobservability problem in the blind spot and can estimate the biases more rapidly and accurately than the maximum likelihood method. It is statistically efficient since the standard deviation of the bias estimation errors meets the theoretical lower bound.
Glanzmann, Livia Noëmi; Mowbray, Duncan John
2016-01-01
The internal quantum efficiency (IQE) of an organic photovoltaic device (OPV) is proportional to the number of free charge carriers generated and their conductivity, per absorbed photon. However, both the IQE and the quantities that determine it, for example, electron-hole binding, charge separation, electron-hole recombination, and conductivity, can only be inferred indirectly from experiments. Using density functional theory, we calculate the excited-state formation energy, charge transfer,...
I.P. van Staveren (Irene)
2009-01-01
The dominant economic theory, neoclassical economics, employs a single economic evaluative criterion: efficiency. Moreover, it assigns this criterion a very specific meaning. Other – heterodox – schools of thought in economics tend to use more open concepts of efficiency, related to comm
Costa, Rui J.; Wilkinson-Herbots, Hilde
2017-01-01
The isolation-with-migration (IM) model is commonly used to make inferences about gene flow during speciation, using polymorphism data. However, it has been reported that the parameter estimates obtained by fitting the IM model are very sensitive to the model’s assumptions—including the assumption of constant gene flow until the present. This article is concerned with the isolation-with-initial-migration (IIM) model, which drops precisely this assumption. In the IIM model, one ancestral population divides into two descendant subpopulations, between which there is an initial period of gene flow and a subsequent period of isolation. We derive a very fast method of fitting an extended version of the IIM model, which also allows for asymmetric gene flow and unequal population sizes. This is a maximum-likelihood method, applicable to data on the number of segregating sites between pairs of DNA sequences from a large number of independent loci. In addition to obtaining parameter estimates, our method can also be used, by means of likelihood-ratio tests, to distinguish between alternative models representing the following divergence scenarios: (a) divergence with potentially asymmetric gene flow until the present, (b) divergence with potentially asymmetric gene flow until some point in the past and in isolation since then, and (c) divergence in complete isolation. We illustrate the procedure on pairs of Drosophila sequences from ∼30,000 loci. The computing time needed to fit the most complex version of the model to this data set is only a couple of minutes. The R code to fit the IIM model can be found in the supplementary files of this article. PMID:28193727
乞炳蔚; 王越; 王照成; 张燕平; 徐世昌; 王世昌
2013-01-01
This work is focused on the theoretical investigation of the internal leakage of a newly developed pilot-scale fluid switcher-energy recovery device (FS-ERD) for reverse osmosis (RO) systems. For the purpose of increasing FS-ERD efficiency and reducing the operating cost of RO, it is required to keep the internal leakage at a low level. In this work, the internal leakage rates at different leakage gaps and retentate brine pressures are investigated by the computational fluid dynamics (CFD) method and validating experiments. It is found that the internal leakage has a linear relationship with the retentate brine pressure and a polynomial relationship with the scale of the leakage gap. The results of the present work imply that low internal leakage and high retentate brine pressure bring benefits in achieving high FS-ERD efficiency.
Tavangar, Zahra; Zareie, Nazanin
2016-10-01
A series of metal-free tetrathienoacene-based (TTA-based) organic dyes are designed and investigated as sensitizers for application in dye-sensitized solar cells (DSSCs). Density functional theory and time-dependent density functional theory calculations were performed on these dyes in vacuum and in ortho-dichlorobenzene as the solvent. The effects of changing the π-conjugation bridges and of different functional groups in the acceptor and donor units were investigated. UV-Vis absorption spectra were simulated to show the wavelength shifts and absorption properties. Inserting nitro and acyl chloride functional groups in the acceptor unit and NH2 in the donor unit reduces the HOMO-LUMO gap by lowering the lowest unoccupied molecular orbital (LUMO) energy level and raising the highest occupied molecular orbital (HOMO) energy level, improving the parameters that govern DSSC efficiency. The results show that changing the spacer units from thiophene to furan has a great effect on the electronic structure and absorption spectra. Investigation of the electron distributions of the frontier orbitals shows that the HOMO and LUMO localize on the donor and acceptor, respectively. Key parameters studied here include the light-harvesting efficiency, the free energy of electron injection, and the open-circuit photovoltage.
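One of the key parameters named above, the light-harvesting efficiency, has the standard closed form LHE = 1 − 10^(−f) in terms of the oscillator strength f of the dominant transition; a minimal sketch with hypothetical oscillator strengths:

```python
def light_harvesting_efficiency(oscillator_strength):
    """LHE = 1 - 10**(-f), a standard figure of merit in DSSC dye screening."""
    return 1.0 - 10.0 ** (-oscillator_strength)

# Hypothetical oscillator strengths for two candidate dyes.
lhe_strong = light_harvesting_efficiency(1.5)
lhe_weak = light_harvesting_efficiency(0.2)
```

A stronger absorber saturates toward LHE = 1, which is why increasing the oscillator strength of the charge-transfer band matters for DSSC efficiency.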
Piyush Sabharwall; Fred Gunnerson; Akira Tokuhiro; Vivek Utgiker; Kevan Weaver; Steven Sherman
2007-10-01
The work reported here is a preliminary analysis of two-phase thermosyphon heat transfer performance with various alkali metals. A thermosyphon is a device for transporting heat from one point to another with quite extraordinary properties. Heat transport occurs via evaporation and condensation, and the heat transport fluid is re-circulated by gravitational force. With this mode of heat transfer, the thermosyphon has the capability to transport heat at high rates over appreciable distances, virtually isothermally and without any requirement for external pumping devices. For process heat, intermediate heat exchangers (IHX) are required to transfer heat from the NGNP to the hydrogen plant in the most efficient way possible. The production of power at higher efficiency using the Brayton cycle, and hydrogen production, require both heat at higher temperatures (up to 1000 °C) and high-effectiveness compact heat exchangers to transfer heat to either the power or process cycle. The purpose of selecting a compact heat exchanger is to maximize the heat transfer surface area per volume of heat exchanger; this has the benefit of reducing heat exchanger size and heat losses. The IHX design requirements are governed by the allowable temperature drop between the outlet of the NGNP (900 °C, based on the current capabilities of the NGNP) and the temperatures in the hydrogen production plant. Spiral heat exchangers (SHEs) have superior heat transfer characteristics and are less susceptible to fouling. Further, heat losses to the surroundings are minimized because of their compact configuration. SHEs have never been examined for phase-change heat transfer applications. The research presented provides useful information for thermosyphon and spiral heat exchanger design.
楚双霞; 刘林华
2011-01-01
There are five typical approximate formulae for the maximum conversion efficiency that are often used in second-law analysis of the utilization of terrestrial solar radiation. Based on Candau's definition of radiative exergy and the solar spectral radiation databank developed by Gueymard, the maximum conversion efficiencies (exergy-to-energy ratios) of terrestrial solar radiation under different air masses and tilt angles were obtained and taken as the benchmark solution. The accuracies of the five typical approximate formulae were compared and analyzed under different atmospheric conditions and tilt angles. The results show that the formulae proposed by Petela, Spanner, Parrot, and Jeter overestimate the maximum conversion efficiency of terrestrial solar radiation, while that proposed by Badescu underestimates it substantially. Atmospheric conditions heavily affect the maximum conversion efficiency of terrestrial solar radiation and should be taken into account in its exact computation for second-law analysis of solar energy conversion systems.
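Of the five formulae, Petela's is the best known; as a worked sketch (with nominal values of 300 K for the ambient and 5800 K for the solar radiation temperature, both chosen here for illustration), the exergy-to-energy ratio of blackbody radiation is:

```python
def petela_efficiency(t_ambient, t_sun):
    """Petela's maximum conversion (exergy-to-energy) ratio for
    blackbody radiation: 1 - (4/3)x + (1/3)x**4 with x = T0/Ts."""
    x = t_ambient / t_sun
    return 1.0 - (4.0 / 3.0) * x + (1.0 / 3.0) * x ** 4

eta = petela_efficiency(300.0, 5800.0)
```

The benchmark computed in the paper differs from this idealized value because real terrestrial spectra depend on air mass and atmospheric conditions.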
Maximum energy output of a DFIG wind turbine using an improved MPPT-curve method
Dinh-Chung Phan; Shigeru Yamamoto
2015-01-01
A new method is proposed for obtaining the maximum power output of a doubly-fed induction generator (DFIG) wind turbine to control the rotor- and grid-side converters. The efficiency of maximum power point tracking that is obtained by the proposed method is theoretically guaranteed under assumptions that represent physical conditions. Several control parameters may be adjusted to ensure the quality of control performance. In particular, a DFIG state-space model and a control technique based o...
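The classic MPPT-curve law underlying such methods tracks a cubic power reference against rotor speed; a minimal sketch (the gain k_opt is hypothetical and turbine-specific, and this is the generic law rather than the paper's improved method):

```python
def optimal_power_reference(omega, k_opt):
    """MPPT power-curve law: P_ref = k_opt * omega**3, so the converter
    drives the turbine toward the speed of maximum aerodynamic power."""
    return k_opt * omega ** 3

def optimal_torque_reference(omega, k_opt):
    """Equivalent torque form of the same law: T_ref = k_opt * omega**2."""
    return k_opt * omega ** 2

# Hypothetical gain and rotor speed (per-unit values for illustration).
p_ref = optimal_power_reference(2.0, 0.5)
t_ref = optimal_torque_reference(2.0, 0.5)
```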
Wang, Zhiqiang; Ji, Mingfei; Deng, Jianming; Milne, Richard I; Ran, Jinzhi; Zhang, Qiang; Fan, Zhexuan; Zhang, Xiaowei; Li, Jiangtao; Huang, Heng; Cheng, Dongliang; Niklas, Karl J
2015-06-01
Simultaneous and accurate measurements of whole-plant instantaneous carbon-use efficiency (ICUE) and annual total carbon-use efficiency (TCUE) are difficult to make, especially for trees. One usually estimates ICUE based on the net photosynthetic rate or the assumed proportional relationship between growth efficiency and ICUE. However, thus far, protocols for easily estimating annual TCUE remain problematic. Here, we present a theoretical framework (based on the metabolic scaling theory) to predict whole-plant annual TCUE by directly measuring instantaneous net photosynthetic and respiratory rates. This framework makes four predictions, which were evaluated empirically using seedlings of nine Picea taxa: (i) the flux rates of CO2 and energy will scale isometrically as a function of plant size, (ii) whole-plant net and gross photosynthetic rates and the net primary productivity will scale isometrically with respect to total leaf mass, (iii) these scaling relationships will be independent of ambient temperature and humidity fluctuations (as measured within an experimental chamber) regardless of the instantaneous net photosynthetic rate or dark respiratory rate, or overall growth rate and (iv) TCUE will scale isometrically with respect to instantaneous efficiency of carbon use (i.e., the latter can be used to predict the former) across diverse species. These predictions were experimentally verified. We also found that the ranking of the nine taxa based on net photosynthetic rates differed from ranking based on either ICUE or TCUE. In addition, the absolute values of ICUE and TCUE significantly differed among the nine taxa, with both ICUE and temperature-corrected ICUE being highest for Picea abies and lowest for Picea schrenkiana. Nevertheless, the data are consistent with the predictions of our general theoretical framework, which can be used to assess annual carbon-use efficiency of different species at the level of an individual plant based on simple, direct
Hernández-Salcedo, P.G.; Amézaga-Madrid, P., E-mail: patricia.amezaga@cimav.edu.mx; Monárrez-Cordero, B.E.; Antúnez-Flores, W.; Pizá-Ruiz, P.; Leyva-Porras, C.; Ornelas-Gutiérrez, C.; Miki-Yoshida, M.
2015-09-15
The development and optimization of methodologies to generate magnetite nanoparticles is currently an innovation topic. For a desired application such as arsenic removal from waste water, the generation of these nanostructures with specific microstructural properties is determinant. It is therefore necessary to understand the phenomena occurring during the nanoparticle formation process. Thus, this work reports the influence of the synthesis parameters of the AACVD technique on the formation of magnetite nanoparticles. The parameters were: (1) synthesis temperature, (2) tubular reactor diameter, (3) concentration of the precursor solution and type of solvent, (4) carrier gas flow, and (5) solvent type in the collection process. The effect of these synthesis parameters on the morphology, size, and microstructure is discussed in detail and related to the mechanism of particle formation. Theoretical simulations were performed on two of these parameters (1 and 4). The microstructure and surface morphology of the different nanostructures obtained were characterized by field emission scanning electron and transmission electron microscopy. Subsequently, two materials were selected for further microstructural analysis. Finally, to determine the removal efficiency of the two materials, the arsenic adsorption was evaluated. A major contribution of this work is the calculation of the number of spherical particles formed from a single drop of precursor solution; this calculation matched the value found experimentally.
Consensus theoretic classification methods
Benediktsson, Jon A.; Swain, Philip H.
1992-01-01
Consensus theory is adopted as a means of classifying geographic data from multiple sources. The foundations and usefulness of different consensus theoretic methods are discussed in conjunction with pattern recognition. Weight selections for different data sources are considered and modeling of non-Gaussian data is investigated. The application of consensus theory in pattern recognition is tested on two data sets: 1) multisource remote sensing and geographic data and 2) very-high-dimensional remote sensing data. The results obtained using consensus theoretic methods are found to compare favorably with those obtained using well-known pattern recognition methods. The consensus theoretic methods can be applied in cases where the Gaussian maximum likelihood method cannot. Also, the consensus theoretic methods are computationally less demanding than the Gaussian maximum likelihood method and provide a means for weighting data sources differently.
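A simple instance of a consensus rule is the linear opinion pool, in which the class-probability vectors from each data source are averaged with source-specific weights; the probabilities and weights below are hypothetical, and this sketch stands in for the more general consensus theoretic methods discussed in the paper:

```python
def linear_opinion_pool(probabilities, weights):
    """Combine per-source class-probability vectors by a weighted average."""
    n_classes = len(probabilities[0])
    combined = [0.0] * n_classes
    for probs, w in zip(probabilities, weights):
        for c in range(n_classes):
            combined[c] += w * probs[c]
    return combined

# Two hypothetical sources voting over three classes; the first source
# (e.g. the more reliable sensor) is weighted higher.
pool = linear_opinion_pool(
    probabilities=[[0.7, 0.2, 0.1], [0.3, 0.5, 0.2]],
    weights=[0.6, 0.4],
)
decision = max(range(len(pool)), key=pool.__getitem__)
```

Because the weights sum to one, the pooled vector is itself a probability distribution, and classification picks its argmax.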
Marocico, Cristian A.; Zhang, Xia; Bradley, A. Louise, E-mail: bradlel@tcd.ie [Semiconductor Photonics Group, School of Physics and CRANN, Trinity College Dublin, College Green, Dublin 2 (Ireland)
2016-01-14
We present in this contribution a comprehensive investigation of the effect of the size of gold nanospheres on the decay and energy transfer rates of quantum systems placed close to these nanospheres. These phenomena have been investigated before, theoretically and experimentally, but no comprehensive study exists of the influence of the nanoparticle size on important dependences of the decay and energy transfer rates, such as the dependence on the donor-acceptor spectral overlap and the relative positions of the donor, acceptor, and nanoparticle. As such, different accounts of the energy transfer mechanism have been presented in the literature. We perform an investigation of the energy transfer mechanisms between emitters and gold nanospheres and between donor-acceptor pairs in the presence of the gold nanospheres using a Green's tensor formalism, experimentally verified in our lab. We find that the energy transfer rate to small nanospheres is greatly enhanced, leading to a strong quenching of the emission of the emitter. When the nanosphere size is increased, it acts as an antenna, increasing the emission of the emitter. We also investigate the emission wavelength and intrinsic quantum yield dependence of the energy transfer to the nanosphere. As evidenced from the literature, the energy transfer process between the quantum system and the nanosphere can have a complicated distance dependence, with an r^(−6) regime, characteristic of the Förster energy transfer mechanism, but also exhibiting other distance dependences. In the case of a donor-acceptor pair of quantum systems in the presence of a gold nanosphere, when the donor couples strongly to the nanosphere, acting as an enhanced dipole, the donor-acceptor energy transfer rate follows a Förster trend with an increased Förster radius. The coupling of the acceptor to the nanosphere has a different distance dependence. The angular dependence of the energy transfer efficiency between donor and
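The r^(−6) Förster regime discussed above corresponds to a simple closed-form efficiency; the sketch below is the textbook relation, not code from the paper, and the distances (in nm) are hypothetical:

```python
def transfer_efficiency(r, r0):
    """Foerster energy-transfer efficiency: E = 1 / (1 + (r/r0)**6)."""
    return 1.0 / (1.0 + (r / r0) ** 6)

# At the Foerster radius r0 the efficiency is exactly one half.
e_at_r0 = transfer_efficiency(5.0, 5.0)
# Well inside r0 the r**-6 falloff makes transfer dominate emission.
e_close = transfer_efficiency(2.5, 5.0)
```

An "increased Förster radius", as reported for the plasmon-enhanced donor, shifts the whole efficiency curve to longer donor-acceptor separations.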
Namuangruk, Supawadee; Sirithip, Kanokkorn; Rattanatwan, Rattanawelee; Keawin, Tinnagon; Kungwan, Nawee; Sudyodsuk, Taweesak; Promarak, Vinich; Surakhot, Yaowarat; Jungsuttiwong, Siriporn
2014-06-28
The charge transfer effect of different meso-substituted linkages on porphyrin analogue 1 (A1, B1 and C1) was theoretically investigated using density functional theory (DFT) and time-dependent DFT (TDDFT) calculations. The calculated geometry parameters and natural bond orbital analysis reveal that the twisted conformation between the porphyrin macrocycle and the meso-substituted linkages blocks the conjugation of the conjugated backbone, and the frontier molecular orbital plot shows that intramolecular charge transfer in A1, B1 and C1 hardly takes place. In an attempt to improve the photoinduced intramolecular charge transfer ability of the meso-linked zinc porphyrin sensitizer, a strong electron-withdrawing group (CN) was introduced into the anchoring group of analogue 1, forming analogue 2 (A2, B2 and C2). The density difference plots of A2, B2 and C2 show that the charge transfer properties are dramatically improved. The electron injection process was studied using TDDFT, and a direct charge-transfer transition takes place in the A2-(TiO2)38 interacting system. Our results strongly indicate that introducing electron-withdrawing groups into the acceptor part of porphyrin dyes can fine-tune the effective conjugation length of the π-spacer and improve the intramolecular charge transfer properties, consequently inducing the electron injection process from the anchoring group of the porphyrin dye to the (TiO2)38 surface, which may improve the conversion efficiency of DSSCs. Our calculated results can provide valuable information and a promising outlook for computation-aided sensitizer design with anticipated good properties for further experimental synthesis.
Basak, Tanmay [Department of Chemical Engineering, Indian Institute of Technology Madras, Chennai 600036 (India)], E-mail: tanmay@iitm.ac.in
2008-02-21
A theoretical analysis has been carried out to analyse the efficient heating process of long rectangular samples with various orientations of square cross sections in the presence of lateral and radial irradiation. Lateral irradiation represents the sample incident at one direction with the source at infinity, whereas radial irradiation represents the situation where the sample is incident with microwave radiation from the coaxial cylindrical cavity at infinity. Electric field equations have been solved with a hypothetical circular domain which surrounds the square cross sections and facilitates the solution of the field equations with the radiation boundary condition. The electric field and temperature have been solved using the finite element method for the composite domain. Generalized characteristics of the power absorption and temperature distribution as functions of the wave number (N_w) and the penetration number (N_p) have been obtained. Radial irradiation gives a larger power absorption for N_w ≤ 0.56, and either lateral or radial irradiation is favoured for N_w ≥ 0.56 based on various N_p values. The aligned square cross section is found to give larger heating rates in the presence of dominant lateral irradiation. The detailed spatial distributions of power and temperature are extensively studied and the suitability of either radial or lateral irradiation for a specific cross section has been recommended. The large heating rate as well as minimal thermal runaway become the competing factors for the selection of a specific heating strategy. The case studies are demonstrated for high and low lossy substances (beef and bread).
Sefkow, Adam B.; Bennett, Guy R.
2010-09-01
Under the auspices of the Science of Extreme Environments LDRD program, a <2 year theoretical- and computational-physics study was performed (LDRD Project 130805) by Guy R. Bennett (formerly in Center-01600) and Adam B. Sefkow (Center-01600) to investigate novel target designs by which a short-pulse, PW-class beam could create a brighter Kα x-ray source than by simple, direct laser irradiation of a flat foil (Direct-Foil-Irradiation, DFI). The computational studies, which are still ongoing at this writing, were performed primarily on the RedStorm supercomputer at Sandia National Laboratories' Albuquerque site. The motivation for a higher-efficiency Kα emitter is very clear: as the backlighter flux for any x-ray imaging technique on the Z accelerator increases, the signal-to-noise and signal-to-background ratios improve. This ultimately allows the imaging system to reach its full quantitative potential as a diagnostic. Depending on the particular application/experiment, this would imply, for example, that the system would have reached its full design spatial resolution and thus the capability to see features that might otherwise be indiscernible with a traditional DFI-like x-ray source. This LDRD began in FY09 and ended in FY10.
The inverse maximum dynamic flow problem
Bagherian, Mehri
2010-01-01
We consider the inverse maximum dynamic flow (IMDF) problem. The IMDF problem can be described as follows: how to change the capacity vector of a dynamic network as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow. After discussing some characteristics of this problem, it is converted to a constrained minimum dynamic cut problem. Then an efficient algorithm which uses two maximum dynamic flow algorithms is proposed to solve the problem.
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis contrary to ordinary non-spatial factor analysis gives an objective discrimina...
张玖霞; 方杰
2011-01-01
Taking the notable achievements of Meihekou City in intensive, large-scale farmland management as its starting point, this paper analyzes how these results were obtained: the government guided and promoted land transfer, formulated preferential support policies to create conditions for large-scale operation, and accelerated the transfer of rural labor to open up space for expanding the scale of operation. At the same time, addressing the problems that exist in Meihekou's land transfer process, and from the perspective of maximizing the efficiency of land use, the paper proposes measures for how Meihekou can carry out large-scale land operation well.
Sharma, Pankaz K; De Visser, Sam P; Ogliaro, François; Shaik, Sason
2003-02-26
High-valent metal-oxo complexes catalyze C-H bond activation by oxygen insertion, with an efficiency that depends on the identity of the transition metal and its oxidation state. Our study uses density functional calculations and theoretical analysis to derive fundamental factors of catalytic activity, by comparison of a ruthenium-oxo catalyst with its iron-oxo analogue toward methane hydroxylation. The study focuses on the ruthenium analogue of the active species of the enzyme cytochrome P450, which is known to be among the most potent catalysts for C-H activation. The computed reaction pathways reveal one high-spin (HS) and two low-spin (LS) mechanisms, all nascent from the low-lying states of the ruthenium-oxo catalyst (Ogliaro, F.; de Visser, S. P.; Groves, J. T.; Shaik, S. Angew. Chem. Int. Ed. 2001, 40, 2874-2878). These mechanisms involve a bond activation phase, in which the transition states (TS's) appear as hydrogen abstraction species, followed by a C-O bond making phase, through a rebound of the methyl radical on the metal-hydroxo complex. However, while the HS mechanism has a significant rebound barrier, and hence a long lifetime of the radical intermediate, by contrast, the LS ones are effectively concerted with small barriers to rebound, if at all. Unlike the iron catalyst, the hydroxylation reaction for the ruthenium analogue is expected to follow largely a single-state reactivity on the LS surface, due to a very large rebound barrier of the HS process and to the more efficient spin crossover expected for ruthenium. As such, ruthenium-oxo catalysts (Groves, J. T.; Shalyaev, K.; Lee, J. In The Porphyrin Handbook; Biochemistry and Binding: Activation of Small Molecules, Vol. 4; Kadish, K. M., Smith, K. M., Guilard, R., Eds.; Academic Press: New York, 2000; pp 17-40) are expected to lead to more stereoselective hydroxylations compared with the corresponding iron-oxo reactions. It is reasoned that the ruthenium-oxo catalyst should have larger turnover
Maximum Work of Free-Piston Stirling Engine Generators
Kojima, Shinji
2017-04-01
Using the method of adjoint equations described in Ref. [1], we have calculated the maximum thermal efficiencies that are theoretically attainable by free-piston Stirling and Carnot engine generators by considering the work loss due to friction and Joule heat. The net work done by the Carnot cycle is negative even when the duration of heat addition is optimized to give the maximum amount of heat addition, which is the same situation for the Brayton cycle described in our previous paper. For the Stirling cycle, the net work done is positive, and the thermal efficiency is greater than that of the Otto cycle described in our previous paper by a factor of about 2.7-1.4 for compression ratios of 5-30. The Stirling cycle is much better than the Otto, Brayton, and Carnot cycles. We have found that the optimized piston trajectories of the isothermal, isobaric, and adiabatic processes are the same when the compression ratio and the maximum volume of the same working fluid of the three processes are the same, which has facilitated the present analysis because the optimized piston trajectories of the Carnot and Stirling cycles are the same as those of the Brayton and Otto cycles, respectively.
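For orientation only, the ideal-cycle efficiencies that the abstract compares can be written in closed form; this is a generic air-standard thermodynamics sketch (with hypothetical temperatures and the abstract's compression-ratio range), not the adjoint-equation analysis of the paper:

```python
def carnot_efficiency(t_cold, t_hot):
    """Carnot limit between two reservoirs (temperatures in kelvin)."""
    return 1.0 - t_cold / t_hot

def otto_efficiency(compression_ratio, gamma=1.4):
    """Air-standard Otto-cycle efficiency: 1 - r**(1 - gamma)."""
    return 1.0 - compression_ratio ** (1.0 - gamma)

# The abstract's compression-ratio range, evaluated at its endpoints.
eta_otto_5 = otto_efficiency(5.0)
eta_otto_30 = otto_efficiency(30.0)
```

These ideal values exclude the friction and Joule-heat work losses that the paper's adjoint-equation method accounts for.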
CORA - emission line fitting with Maximum Likelihood
Ness, J.-U.; Wichmann, R.
2002-07-01
The advent of pipeline-processed data both from space- and ground-based observatories often disposes of the need of full-fledged data reduction software with its associated steep learning curve. In many cases, a simple tool doing just one task, and doing it right, is all one wishes. In this spirit we introduce CORA, a line fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has successfully been used in several publications. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise we derive the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. As an example we demonstrate the functionality of the program with an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory and choose the analysis of the Ne IX triplet around 13.5 Å.
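The Poisson-likelihood idea behind CORA can be sketched as follows; the Gaussian line model, bin grid, and parameter values are hypothetical, and the crude amplitude scan stands in for the paper's fixed-point optimization:

```python
import math

def poisson_neg_log_likelihood(model_counts, observed_counts):
    """-ln L for Poissonian bins, dropping the model-independent ln(n!) term."""
    nll = 0.0
    for mu, n in zip(model_counts, observed_counts):
        nll += mu - n * math.log(mu)
    return nll

def gaussian_line(bins, amplitude, center, sigma, background):
    """Simple emission-line model: Gaussian profile on a flat background."""
    return [background + amplitude * math.exp(-0.5 * ((b - center) / sigma) ** 2)
            for b in bins]

# Hypothetical wavelength grid (Angstrom) and noiseless synthetic "data".
bins = [13.3 + 0.02 * i for i in range(20)]
observed = gaussian_line(bins, amplitude=30.0, center=13.5, sigma=0.03,
                         background=2.0)

# 1-D scan over the line amplitude with the other parameters held fixed;
# the Poisson NLL is minimized where the model matches the data.
best = min((poisson_neg_log_likelihood(
                gaussian_line(bins, a, 13.5, 0.03, 2.0), observed), a)
           for a in [10.0, 20.0, 30.0, 40.0])
```

Using the Poisson NLL directly, rather than chi-squared, is what makes this approach valid at low count numbers.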
Bruus, Henrik
in complexity, a proper theoretical understanding becomes increasingly important. The basic idea of the book is to provide a self-contained formulation of the theoretical framework of microfluidics, and at the same time give physical motivation and examples from lab-on-a-chip technology. After three chapters...
Training Concept, Evolution Time, and the Maximum Entropy Production Principle
Alexey Bezryadin
2016-04-01
The maximum entropy production principle (MEPP) is a type of entropy optimization which demands that complex non-equilibrium systems should organize such that the rate of entropy production is maximized. Our take on this principle is that to prove or disprove the validity of the MEPP and to test the scope of its applicability, it is necessary to conduct experiments in which the entropy produced per unit time is measured with high precision. Thus we study electric-field-induced self-assembly in suspensions of carbon nanotubes and realize precise measurements of the entropy production rate (EPR). As a strong voltage is applied, the suspended nanotubes merge together into a conducting cloud which produces Joule heat and, correspondingly, produces entropy. We introduce two types of EPR, which have qualitatively different significance: the global EPR (g-EPR) and the entropy production rate of the dissipative cloud itself (DC-EPR). The following results are obtained: (1) as the system reaches the maximum of the DC-EPR, it becomes stable because the applied voltage acts as a stabilizing thermodynamic potential; (2) we discover metastable states characterized by high, near-maximum values of the DC-EPR; under certain conditions, such efficient entropy-producing regimes can only be achieved if the system is allowed to initially evolve under mildly non-equilibrium conditions, namely at a reduced voltage; (3) without such a "training" period the system typically is not able to reach the allowed maximum of the DC-EPR if the bias is high; (4) we observe that the DC-EPR maximum is achieved within a time T_e, the evolution time, which scales as a power-law function of the applied voltage; (5) finally, we present a clear example in which the g-EPR theoretical maximum can never be achieved. Yet, under a wide range of conditions, the system can self-organize and achieve a dissipative regime in which the DC-EPR equals its theoretical maximum.
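Since the dissipative cloud produces entropy through Joule heating, its EPR follows directly from P/T; a minimal sketch with hypothetical voltage, resistance, and temperature (not the paper's measured values):

```python
def entropy_production_rate(voltage, resistance, temperature):
    """EPR of a Joule-heating element: P/T = V**2 / (R * T), in W/K."""
    power = voltage ** 2 / resistance  # dissipated Joule power, W
    return power / temperature

# Hypothetical dissipative cloud: 100 V across 1 kOhm at room temperature.
epr = entropy_production_rate(voltage=100.0, resistance=1000.0,
                              temperature=300.0)
```

Measuring the cloud's effective resistance as it self-assembles is what lets the EPR be tracked precisely over time.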
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et. al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...
Fuel Application Efficiency in Ideal Cycle of Gas Turbine Plant with Isobaric Heat Supply
A. Nesenchuk
2013-01-01
The paper shows that it is expedient, in the future, to use fuels with the maximum value of Qнр∑Vi and the minimum theoretical burning temperature in order to obtain the maximum efficiency of the ideal cycle in a GTP with isobaric heat supply.
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.
李姗姗; 熊超; 吴亦农; 党海政
2011-01-01
In order to increase the total efficiency of Stirling cryocoolers and Stirling-type pulse tube cryocoolers, a theoretical analysis was carried out of the motor efficiency and of the efficiency of converting electrical power into acoustic power at the piston surface of the linear compressor, based on the force-balance and voltage-balance equations. The two efficiencies of two different linear compressors connected with cold fingers were measured. The calculated results agreed well with the test values: the differences between the experimental results and the theoretical values were within 3% for the motor efficiency and within 7% for the electrical-to-acoustic conversion efficiency at the piston surface. Finally, the optimal design method for a linear compressor with a known load impedance is summarized.
Qingwen, Deng; Xiaoliang, Wang; Hongling, Xiao; Zeyu, Ma; Xiaobin, Zhang; Qifeng, Hou; Jinmin, Li; Zhanguo, Wang
2010-10-01
A solar cell with a novel structure is investigated by means of the analysis of microelectronic and photonic structures (AMPS). The power conversion efficiency is investigated with variations in the interface recombination velocity, the thicknesses of the p-type, intrinsic, and n-type layers, and the doping density. Results show that it is feasible, and in theory preferable, to employ a-SiC:H as a window layer in p-a-SiC:H/i-a-Si:H/n-μc-Si solar cells, which provides a new approach to improving the power conversion efficiency of amorphous silicon solar cells.
Akimov, Alexey V; Jinnouchi, R; Shirai, S; Asahi, R; Prezhdo, Oleg V
2015-06-18
We present a computational study of the dynamical and electronic structure origins of the impact of anchoring groups, PO3H2, COOH, and OH, on the efficiency of photochemical CO2 reduction in Ru(di-X-bpy)(CO)2Cl2/Ta2O5 systems. Recent experimental studies indicate that the efficiency may not directly correlate with the driving force for electron transfer (ET) in these systems, prompting the need for further investigation of the role of anchor groups. Our analysis shows that there are at least two key roles of the anchor in determining the efficiency of CO2 reduction by the Ru complex. First, depending on local steric interactions, different tilting angles and their fluctuations may emerge for different anchors, affecting the magnitude of the donor-acceptor coupling. Second, depending on localization of acceptor states on the anchor, determined by the anchor's tendency to form conjugate subsystems, the yields of ET to the catalytic center may vary, directly affecting the photocatalytic efficiency. Finally, our calculations indicate that surface modeling with N-doping and many-body effects are needed to describe the ET process in the systems properly. N-doping imparts the Ta2O5 surface with a dipole moment, while Coulomb and exchange contributions to the electron-hole interaction can produce excitons that should be taken into account.
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the use of the regularized correntropy framework for learning classifiers from noisy labels. Class label predictors learned by minimizing traditional loss functions are sensitive to noisy and outlying labels of training samples, because traditional loss functions are applied equally to all samples. To solve this problem, we propose to learn class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criterion (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. Experiments on two challenging pattern classification tasks show that it significantly outperforms machines with traditional loss functions.
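The core idea — maximizing a Gaussian correntropy instead of minimizing a squared loss, so that outlying labels are automatically down-weighted — can be sketched for a one-parameter linear predictor. Data and hyperparameters below are made up; this is not the paper's algorithm, only the criterion it builds on:

```python
import math

# Toy learning under the Maximum Correntropy Criterion (MCC): fit y ~ w*x
# by gradient ascent on the Gaussian correntropy between predictions and
# labels, minus an L2 penalty on w. Outliers contribute almost nothing
# because their kernel value exp(-e^2 / 2 sigma^2) is nearly zero.

def mcc_fit(xs, ys, sigma=1.0, lam=0.01, lr=0.05, steps=2000):
    w = 0.0
    for _ in range(steps):
        grad = -2.0 * lam * w               # gradient of the L2 penalty
        for x, y in zip(xs, ys):
            e = y - w * x
            k = math.exp(-e * e / (2 * sigma * sigma))
            grad += k * e * x / (sigma * sigma)
        w += lr * grad / len(xs)
    return w

xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 8.0, -50.0]    # last label is a gross outlier
print(round(mcc_fit(xs, ys), 2))    # near 2, despite the outlier
```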
Atena Naeimi; Samira Saeednia; Mehdi Yoosefian; Hadi Amiri Rudbari; Viviana Mollica Nardo
2015-07-01
An environmentally friendly protocol is described for an economic, practical laboratory-scale oxidation of primary and secondary alcohols to aldehydes and ketones, using a bis-chloro-bridged binuclear Cu(II) complex [(HL)Cu(μ2-Cl)2Cu(HL)]·1.5CH3OH as catalyst. The catalyst was prepared in situ from commercially available reagents and characterized by single-crystal X-ray analysis, FT-IR, UV-visible spectra, mass spectrometry, and powder X-ray diffraction (PXRD). The geometry of the complex has been optimized using the B3LYP level of theory, confirming the experimental data. Our results demonstrate the efficiency, selectivity and stability of this new catalyst in the oxidation of alcohols, with ethanol and tert-butyl hydroperoxide (tBuOOH) as a green solvent and oxidant, respectively. Turnover number and reusability have proven the high efficiency and relative stability of the catalyst.
On the sufficiency of the linear maximum principle
Vidal, Rene Victor Valqui
1987-01-01
Presents a family of linear maximum principles for the discrete-time optimal control problem, derived from the saddle-point theorem of mathematical programming. Some simple examples illustrate the applicability of the main theoretical results...
Equalized near maximum likelihood detector
2012-01-01
This paper presents a new detector that mitigates intersymbol interference introduced by bandlimited channels. The detector, named the equalized near maximum likelihood detector, combines a nonlinear equalizer with a near maximum likelihood detector. Simulation results show that its performance is better than that of the nonlinear equalizer alone but worse than that of the near maximum likelihood detector.
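The benchmark that "near maximum likelihood" detectors approximate is full maximum-likelihood sequence detection over the intersymbol-interference channel, sketched here by brute force for a short block. Channel taps, data, and noise are illustrative, not from the paper:

```python
from itertools import product

# Brute-force ML sequence detection for a bandlimited (ISI) channel:
# enumerate all candidate +/-1 sequences and pick the one whose channel
# output is closest (in Euclidean distance) to the received samples.

H = [1.0, 0.5]                      # two-tap ISI channel, illustrative

def channel(bits):
    out = []
    for n in range(len(bits)):
        y = 0.0
        for k, h in enumerate(H):
            if n - k >= 0:
                y += h * bits[n - k]
        out.append(y)
    return out

def ml_detect(received, n):
    def dist(cand):
        return sum((r - c) ** 2 for r, c in zip(received, channel(list(cand))))
    return list(min(product([-1, 1], repeat=n), key=dist))

sent = [1, -1, -1, 1, 1]
received = [y + e for y, e in zip(channel(sent), [0.1, -0.2, 0.05, 0.1, -0.1])]
print(ml_detect(received, 5))       # recovers the transmitted sequence
```

Practical near-ML detectors trade this exponential search for a limited look-ahead, which is where the performance gap noted in the abstract comes from.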
Cheeseman, Peter; Stutz, John
2005-01-01
A long standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty, and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
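The classic point-valued MaxEnt computation that the paper generalizes can be sketched with the standard example of a six-sided die constrained to a given mean; the generalization would average this result over a density on the constraint value. A minimal sketch:

```python
import math

# Classic MaxEnt for a die with a fixed mean constraint: the maximum
# entropy distribution has the exponential form p_i ~ exp(-lam * i),
# with the Lagrange multiplier lam chosen by bisection so that the
# constraint holds exactly.

def maxent_die(mean, lo=-20.0, hi=20.0, iters=200):
    def moment(lam):
        ws = [math.exp(-lam * i) for i in range(1, 7)]
        z = sum(ws)
        return sum(i * w for i, w in zip(range(1, 7), ws)) / z
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if moment(mid) > mean:
            lo = mid        # moment decreases as lam increases
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    ws = [math.exp(-lam * i) for i in range(1, 7)]
    z = sum(ws)
    return [w / z for w in ws]

p = maxent_die(4.5)
print([round(q, 3) for q in p])     # skewed toward the high faces
```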
David, Aurelien; Hurni, Christophe A.; Young, Nathan G.; Craven, Michael D.
2016-08-01
The current-voltage characteristic and ideality factor of III-Nitride quantum well light-emitting diodes (LEDs) grown on bulk GaN substrates are investigated. At operating temperature, these electrical properties exhibit a simple behavior. A model in which only active-region recombinations have a contribution to the LED current is found to account for experimental results. The limit of LED electrical efficiency is discussed based on the model and on thermodynamic arguments, and implications for electroluminescent cooling are examined.
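The electrical behaviour discussed here is governed by the diode law I = I0·exp(qV/(nkT)), whose ideality factor n follows from the slope of V against ln I. A sketch with synthetic data generated at n = 2 (values are illustrative, not the paper's measurements):

```python
import math

# Ideality factor from two points on an I-V curve:
#     n = (q/(k*T)) * dV / d(ln I)
# The synthetic data below are generated with n = 2 so the extraction
# can be checked against a known answer.

Q_OVER_KT = 1.0 / 0.02585           # q/(kT) at T = 300 K, in 1/V

def ideality(v1, i1, v2, i2):
    return Q_OVER_KT * (v2 - v1) / (math.log(i2) - math.log(i1))

def diode_current(v, n=2.0, i0=1e-18):
    return i0 * math.exp(Q_OVER_KT * v / n)

n_est = ideality(2.8, diode_current(2.8), 2.9, diode_current(2.9))
print(round(n_est, 3))              # ~2.0 by construction
```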
Moiseyev, V. A.; Nazarov, V. P.; Zhuravlev, V. Y.; Zhuykov, D. A.; Kubrikov, M. V.; Klokotov, Y. N.
2016-12-01
The development of new technological equipment for the implementation of highly effective methods of recovering highly viscous oil from deep reservoirs is an important scientific and technical challenge. Thermal recovery methods are promising approaches to solving the problem. It is necessary to carry out theoretical and experimental research aimed at developing oil-well tubing (OWT) with composite heat-insulating coatings on the basis of basalt and glass fibers. We used the method of finite element analysis in Nastran software, which implements complex scientific and engineering calculations, including the calculation of the stress-strain state of mechanical systems, the solution of heat-transfer problems, the study of nonlinear statics, the dynamic transient analysis of frequency characteristics, etc. As a result, we obtained a mathematical model of thermal conductivity which describes the steady-state temperature field and its changes in the fibrous, highly porous material, with heat loss by Stefan-Boltzmann radiation. This has been performed for the first time using the method of computer modeling in the Nastran software environment. The results give grounds for further implementation of the real design of the OWT when implementing thermal methods for increasing the rates of oil production and mitigating environmental impacts.
Li, Jieqiong [Institute of Environmental and Analytical Sciences, College of Chemistry and Chemical Engineering, Henan University, Kaifeng, Henan 475004 (China); Wang, Li, E-mail: chemwangl@henu.edu.cn [Institute of Environmental and Analytical Sciences, College of Chemistry and Chemical Engineering, Henan University, Kaifeng, Henan 475004 (China); Wang, Xin [Institute of Environmental and Analytical Sciences, College of Chemistry and Chemical Engineering, Henan University, Kaifeng, Henan 475004 (China); He, Chaozheng, E-mail: hecz2013@nynu.edu.cn [College of Physics and Electronic Engineering, Nanyang Normal University, Nanyang 473061 (China); Zhang, Jinglai, E-mail: zhangjinglai@henu.edu.cn [Institute of Environmental and Analytical Sciences, College of Chemistry and Chemical Engineering, Henan University, Kaifeng, Henan 475004 (China)
2015-08-01
The phosphorescent properties of three synthesized and three newly designed platinum(II) complexes are the focus of this work. To reveal their structure-property relationships, a density functional theory/time-dependent density functional theory (DFT/TDDFT) investigation is performed on the geometric and electronic structures and the absorption and emission spectra. The electroluminescent (EL) properties are evaluated by the ionization potential (IP), electron affinity (EA), and reorganization energy (λ). Furthermore, the radiative rate constant (k_r) is qualitatively elucidated by various factors including the strength of the SOC interaction between the higher-lying singlet excited states (S_n) and the T_1 state, the oscillator strength (f) of the S_n states that can couple with the T_1 state, and the energy separation between the coupled states. A combined analysis of the various elements that could affect the phosphorescent efficiency is beneficial for exploring efficient triplet phosphors in OLEDs. Consequently, complexes Pt-1 and 1 would be the more suitable blue-emitting phosphorescent materials, with a balance of EL properties and acceptable quantum yields. - Highlights: • The absorption and phosphorescence spectra of Pt(II) complexes are investigated. • Their Φ_em, IP, EA, and reorganization energies are compared. • Three new Pt(II) complexes are designed.
Janjua, Muhammad Ramzan Saeed Ashraf
2012-11-05
This work was inspired by a previous report (Janjua et al. J. Phys. Chem. A 2009, 113, 3576-3587) in which the nonlinear-optical (NLO) response strikingly improved with an increase in the conjugation path of the ligand and the nature of hexamolybdates (polyoxometalates, POMs) was changed into a donor by altering the direction of charge transfer with a second aromatic ring. Herein, the first theoretical framework of POM-based heteroaromatic rings is found to be another class of excellent NLO materials having double heteroaromatic rings. First hyperpolarizabilities of a large number of push-pull-substituted conjugated systems with heteroaromatic rings have been calculated. The β components were computed at the density functional theory (DFT) level (BP86 geometry optimizations and LB94 time-dependent DFT). The largest β values are obtained with a donor (hexamolybdates) on the benzene ring and an acceptor (-NO2) on pyrrole, thiophene, and furan rings. The pyrrole imido-substituted hexamolybdate (system 1c) has a considerably large first hyperpolarizability, 339.00 × 10^-30 esu, and it is larger than that of (arylimido)hexamolybdate, calculated as 0.302 × 10^-30 esu (reference system 1), because of the double aromatic rings in the heteroaromatic imido-substituted hexamolybdates. The heteroaromatic rings act as a conjugation bridge between the electron acceptor (-NO2) and donor (polyanion). The introduction of an electron donor into heteroaromatic rings significantly enhances the first hyperpolarizabilities because the electron-donating ability is substantially enhanced when the electron donor is attached to the heterocyclic aromatic rings. Interposing five-membered auxiliary fragments between strong donor (polyanion) or acceptor (-NO2) groups results in a large computed second-order NLO response. The present investigation provides important insight into the NLO properties of (heteroaromatic) imido-substituted hexamolybdate derivatives because these compounds
Starodub, Nickolaj F.; Slyshyk, Nelya F.; Shavanova, Kateryna E.; Karpyuk, Andrij; Mel'nichenko, Mykola M.; Zherdev, Anatolij V.; Dzantiev, Boris B.
2014-10-01
Experimental results are presented on the efficiency of structured nano-porous silicon (sNPS) as a transducer in immune biosensors designed for the control of retroviral bovine leucosis (RBL) and for determining the levels of the mycotoxins T2 and patulin in environmental objects. Today there is an arsenal of traditional immunological methods for the biochemical diagnostics of the above diseases and the control of toxins, but they are deeply routine and cannot meet the practical requirements of express analysis, low cost, and simplicity. To meet practical demands we earlier developed immune biosensors based on SPR, TIRE, and thermistors. To find a simpler variant of the assay, we studied the efficiency of sNPS as a transducer in an immune biosensor. The specific signals were registered by measuring the level of chemiluminescence (ChL) or photocurrent. For both variants of signal registration, the sensitivity of the biosensor in determining T2 and patulin was about 10-20 ng/ml. The sensitivity of RBL analysis by this immune biosensor exceeds that of traditionally used approaches, including the ELISA method. The optimal dilution of blood serum for leukemia screening should be no less than 1:100, or even 1:500. The immune biosensor may also be applied for express screening of leucosis through the analysis of milk; in this case the optimal dilution of milk should be about 1:20. The total time of analysis, including all steps (immobilization of specific antibodies or antigens on the transducer surface and measurements), was about 40 min, and it may decline sharply if the above-mentioned sensitive elements are immobilized prior to measurements. It is concluded that the proposed type of transducer for immune biosensors is effective for the analysis of mycotoxins in a screening regime.
Marc Vanderhaeghen
2007-04-01
The theoretical issues in the interpretation of the precision measurements of the nucleon-to-Delta transition by means of electromagnetic probes are highlighted. The results of these measurements are confronted with the state-of-the-art calculations based on chiral effective-field theories (EFT), lattice QCD, large-Nc relations, perturbative QCD, and QCD-inspired models. The link of the nucleon-to-Delta form factors to generalized parton distributions (GPDs) is also discussed.
Wang, Lijuan; Xu, Bin; Zhang, Jibo; Dong, Yujie; Wen, Shanpeng; Zhang, Houyu; Tian, Wenjing
2013-02-21
The electronic structure and charge transport properties of 9,10-distyrylanthracene (DSA) and its derivatives with high solid-state luminescent efficiency were investigated by using density functional theory (DFT). The impact of substituents on the optimized structure, reorganization energy, ionization potential (IP) and electron affinity (EA), frontier orbitals, crystal packing, transfer integrals and charge mobility was explored based on Marcus theory. It was found that the hole mobility of DSA was 0.21 cm^2 V^-1 s^-1 while the electron mobility was 0.026 cm^2 V^-1 s^-1, which were relatively high due to the low reorganization energies and high transfer integrals. The calculated results showed that the charge transport properties of these compounds can be significantly tuned via introducing different substituents to DSA. When one electron-withdrawing group (cyano group) was introduced into DSA, DSA-CN exhibited a hole mobility of 0.14 cm^2 V^-1 s^-1, on the same order as that of DSA. However, the electron mobility of DSA-CN decreased to 8.14 × 10^-4 cm^2 V^-1 s^-1 due to the relatively large reorganization energy and disadvantageous transfer integral. The effect of electron-donating substituents was investigated by introducing a methoxy group and a tertiary butyl group into DSA. DSA-OCH3 and DSA-TBU showed much lower charge mobility than DSA, resulting from the steric hindrance of the substituents. On the other hand, both of them exhibited balanced transport properties (for DSA-OCH3, the hole and electron mobilities were 0.0026 and 0.0027 cm^2 V^-1 s^-1; for DSA-TBU, 0.045 and 0.012 cm^2 V^-1 s^-1) because of their similar transfer integrals for both holes and electrons. DSA and its derivatives are supposed to be among the most excellent emissive materials for organic electroluminescent applications because of their high charge mobility and high solid-state luminescent efficiency.
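The hopping mobilities quoted above come from a Marcus-theory rate combined with the Einstein relation. A sketch with illustrative parameters (the transfer integral and reorganization energy below are assumptions, not the DSA values computed in the paper):

```python
import math

# Marcus-theory hopping rate and Einstein-relation mobility:
#     k = (2*pi/hbar) * t**2 * (4*pi*lam*kB*T)**-0.5 * exp(-lam/(4*kB*T))
#     D ~ a**2 * k / 2      (one-dimensional hop of length a)
#     mu = e * D / (kB*T)
# t (transfer integral) and lam (reorganization energy) are illustrative.

HBAR = 6.582e-16      # eV*s
KB_T = 0.02585        # eV at 300 K

def marcus_rate(t_ev, lam_ev):
    return (2 * math.pi / HBAR) * t_ev ** 2 \
        * (4 * math.pi * lam_ev * KB_T) ** -0.5 \
        * math.exp(-lam_ev / (4 * KB_T))

def mobility_cm2(rate_s, a_cm=4e-8):
    d = 0.5 * a_cm ** 2 * rate_s    # diffusion constant, cm^2/s
    return d / KB_T                 # kB*T in eV, so e cancels

rate = marcus_rate(t_ev=0.05, lam_ev=0.2)
print(f"rate = {rate:.2e} 1/s, mu = {mobility_cm2(rate):.3f} cm^2/Vs")
```

With these assumed inputs the mobility lands in the 0.1-1 cm^2 V^-1 s^-1 range, the same order as the values reported for DSA.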
张艳超; 何济洲
2014-01-01
Based on the low-dissipation Carnot heat engine model, the influence of heat leak on the efficiency at maximum power of a low-dissipation Carnot heat engine, and on its bounds, is discussed further. Under Carnot-like cycle conditions, expressions for the efficiency at maximum power of the low-dissipation Carnot heat engine are derived in the presence of a heat leak between the hot and cold reservoirs during the isothermal expansion and isothermal compression processes, and compared with the classical CA (Curzon-Ahlborn) efficiency in the symmetric case. It is found that, when there is no heat leak, the efficiency at maximum power of the low-dissipation Carnot heat engine is equal to the CA efficiency. In the presence of a heat leak, the efficiency at maximum power is lower than the CA efficiency and decreases as the heat leak increases. In the asymmetric case, the upper and lower bounds and the observable range of the efficiency at maximum power with heat leak are obtained and compared with the efficiencies of different kinds of actual heat engines. The results show that, when heat leak is taken into account, the efficiency at maximum power of the low-dissipation Carnot heat engine and its bounds agree better with the observed values of actual heat engines.
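The quantities compared in this abstract have simple closed forms in the no-leak case, which can be checked directly. With Carnot efficiency η_C = 1 - Tc/Th, the CA efficiency is 1 - sqrt(Tc/Th), and the low-dissipation bounds are η_C/2 ≤ η* ≤ η_C/(2 - η_C). A sketch with illustrative temperatures:

```python
import math

# Curzon-Ahlborn efficiency and the low-dissipation bounds on the
# efficiency at maximum power (no heat leak). The symmetric-dissipation
# case reproduces eta_CA exactly, as the abstract notes.

def eta_carnot(tc, th):
    return 1.0 - tc / th

def eta_ca(tc, th):
    return 1.0 - math.sqrt(tc / th)

def low_dissipation_bounds(tc, th):
    ec = eta_carnot(tc, th)
    return ec / 2.0, ec / (2.0 - ec)

tc, th = 300.0, 600.0               # illustrative reservoir temperatures
lo, hi = low_dissipation_bounds(tc, th)
print(f"eta_CA = {eta_ca(tc, th):.4f}, bounds = ({lo:.4f}, {hi:.4f})")
```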
Manfred PRETIS
2012-03-01
Early Childhood Intervention (ECI) for vulnerable children between the ages of 0-3 and 6 can be seen as a well-established preventive service in Europe. Even though recent epidemiologic data indicate higher rates of vulnerability during childhood and adolescence, traditionally up to 6% of children are eligible for ECI treatment. Definitions of ECI range from stable or ad hoc trans-disciplinary teams helping the child to specific professional profiles. There is a scientific consensus regarding the effects of ECI upon the child's development and the family dynamics. ECI itself is responsible for a more stable impact on the socio-emotional development of the child and the parent-child relationship. Specific focus in the research is given to the role of the parents as primary caregivers. Based on the importance of enhancing the interactions between the parents and the children, this paper discusses strategies that help increase the efficiency of ECI through parental involvement. Special attention is dedicated to mutual understanding, transparency, and the use of a common language such as the ICF.
Marchal, J [Diamond Light Source Ltd, Harwell Science and Innovation Campus, Didcot, Oxfordshire OX11 0DE (United Kingdom)], E-mail: julien.marchal@diamond.ac.uk
2010-01-15
A cascaded detector model is proposed to describe the charge-sharing effect in single-photon counting segmented silicon detectors. Linear system theory is applied to this cascaded model in order to derive detector performance parameters such as large-area gain, presampling Modulation Transfer Function (MTF), Noise Power Spectrum (NPS) and Detective Quantum Efficiency (DQE) as a function of the energy detection threshold. This theory is used to model one-dimensional detectors (i.e. strip detectors) where X-ray-generated charge can be shared between two sampling elements, but the concepts developed in this article can be generalized to two-dimensional arrays of detecting elements (i.e. pixel detectors). The zero-frequency DQE derived from this model is consistent with expressions reported in the literature using a different method. The ability of this model to simulate the effect of charge sharing on image quality in the spatial frequency domain is demonstrated by applying it to a hypothetical one-dimensional single-photon counting detector illuminated with a typical mammography spectrum.
Saha, Sourav Kr; Dutta, Alokdut; Ghosh, Pritam; Sukul, Dipankar; Banerjee, Priyabrata
2016-07-21
In order to evaluate the effect of the functional group present in the ligand backbone on corrosion inhibition performance, three Schiff-base molecules, namely (E)-4-((2-(2,4-dinitrophenyl)hydrazono)methyl)pyridine (L(1)), (E)-4-(2-(pyridin-4-ylmethylene)hydrazinyl)benzonitrile (L(2)) and (E)-4-((2-(2,4-dinitrophenyl)hydrazono)methyl)phenol (L(3)), were synthesized and used as corrosion inhibitors on mild steel in 1 M HCl medium. The corrosion inhibition effectiveness of the studied inhibitors was investigated by weight loss and by analytical tools such as potentiodynamic polarization and electrochemical impedance spectroscopy measurements. The experimentally obtained results revealed that the corrosion inhibition efficiencies followed the sequence L(3) > L(1) > L(2). Electrochemical findings showed that the inhibitors impart high resistance to charge transfer across the metal-electrolyte interface and behave as mixed-type inhibitors. Scanning electron microscopy (SEM) was also employed to examine the protective film formed on the mild steel surface. The adsorption and inhibition ability of the inhibitor molecules on the mild steel surface were investigated by quantum chemical calculation and molecular dynamics (MD) simulation. In the quantum chemical calculations, the geometry-optimized structures of the Schiff-base inhibitors, the electron density distributions in the HOMO and LUMO, and the Fukui indices of each atom were employed to assess their possible modes of interaction with the mild steel surface. MD simulations revealed that all the inhibitor molecules adsorb in a parallel orientation with respect to the Fe(110) surface.
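Inhibition efficiency in such studies is conventionally computed from blank and inhibited corrosion rates (from weight loss or corrosion current). A sketch with hypothetical rates, chosen only to reproduce the reported ordering L(3) > L(1) > L(2); they are not the paper's data:

```python
# Inhibition efficiency from corrosion rates:
#     IE% = 100 * (CR_blank - CR_inhibited) / CR_blank

def inhibition_efficiency(rate_blank: float, rate_inhibited: float) -> float:
    return 100.0 * (rate_blank - rate_inhibited) / rate_blank

# Hypothetical corrosion rates (mg cm^-2 h^-1) for the blank and L1-L3:
rates = {"L1": 0.40, "L2": 0.55, "L3": 0.22}
blank = 2.5
for name, r in sorted(rates.items(), key=lambda kv: kv[1]):
    print(name, round(inhibition_efficiency(blank, r), 1))
```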
Wang, Yi; Antonuk, Larry E.; El-Mohri, Youcef; Sawant, Amit; Zhao, Qihua; Du, Hong; Li, Yixin
2006-03-01
Megavoltage cone-beam computed tomography (CBCT) using active matrix flat-panel imagers (AMFPIs) is a promising candidate for providing image guidance in radiation therapy. Unfortunately, the practical clinical implementation of this technique is limited by the relatively low detective quantum efficiency (DQE) of conventional megavoltage AMFPIs. This limitation is due to the modest thickness of the phosphor screen employed to convert incident x-rays to optical photons and the trade-off that exists between phosphor thickness and spatial resolution. Recently, our group has begun pursuing the development of thick crystalline segmented scintillating detectors as x-ray converters for AMFPIs so as to circumvent this limitation. In order to examine the potential of such detectors for providing soft-tissue visualization by means of CBCT at megavoltage energies, a Monte Carlo-based method was used to simulate the acquisition of projection images of a contrast phantom. These images were used to perform CT reconstructions by means of a Feldkamp-based algorithm. In this study, various detector configurations involving CsI and BGO scintillators at thicknesses of 10 mm and 40 mm were evaluated. In addition, since the simulations only considered energy deposition, and did not include optical phenomena, both segmented and non-segmented (continuous) detector configurations were evaluated. For the segmented CsI detectors, septal wall materials with densities lower, equivalent and higher than that of the scintillator were considered. Performance was quantified in terms of the contrast-to-noise ratio obtained for low-contrast, soft-tissue-equivalent objects (i.e., liver, brain, and breast) embedded in the phantom. The results obtained from these early studies suggest that such segmented converters can provide visualization of soft-tissue contrast in tomographic images at clinically practical doses. It is anticipated that the realization of optimized segmented detector designs will lead
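The figure of merit used above, the contrast-to-noise ratio, is straightforward to compute from region statistics in a reconstructed image. A sketch with made-up pixel values:

```python
import math

# Contrast-to-noise ratio for a low-contrast object:
#     CNR = |S_obj - S_bg| / sigma_bg
# where S are mean pixel values in the object and background regions and
# sigma_bg is the background standard deviation. Values are illustrative.

def cnr(obj_pixels, bg_pixels):
    m_obj = sum(obj_pixels) / len(obj_pixels)
    m_bg = sum(bg_pixels) / len(bg_pixels)
    var = sum((p - m_bg) ** 2 for p in bg_pixels) / len(bg_pixels)
    return abs(m_obj - m_bg) / math.sqrt(var)

obj = [104, 103, 105, 104]          # hypothetical object-region pixels
bg = [100, 101, 99, 100]            # hypothetical background pixels
print(round(cnr(obj, bg), 2))
```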
Mikeš, Daniel
2010-05-01
Theoretical geology. Present-day geology is mostly empirical in nature. I claim that geology is by nature complex and that the empirical approach is bound to fail. Let us consider the input to be the set of ambient conditions and the output to be the sedimentary rock record. I claim that the output can only be deduced from the input if the relation from input to output is known. The fundamental question is therefore the following: can one predict the output from the input, i.e. can one predict the behaviour of a sedimentary system? If one can, then the empirical/deductive method stands a chance; if one cannot, then that method is bound to fail. The fundamental problem to solve is therefore the following: how to predict the behaviour of a sedimentary system? It is interesting to observe that this question is never asked, and many a study is conducted by the empirical/deductive method; it seems that the empirical method has been accepted as appropriate without question. It is, however, easy to argue that a sedimentary system is by nature complex, that several input parameters vary at the same time, and that they can create similar output in the rock record. It follows trivially from these first principles that in such a case the deductive solution cannot be unique. At the same time, several geological methods depart precisely from the assumption that one particular variable is the dictator/driver and that the others are constant, even though the data do not support such an assumption. The method of "sequence stratigraphy" is a typical example of such a dogma. It can easily be argued that all interpretation resulting from a method built on uncertain or wrong assumptions is erroneous. Still, this method has survived for many years, notwithstanding all the criticism it has received. This is just one example from the present-day geological world and is not unique. Even the alternative methods criticising sequence stratigraphy actually depart from the same
Joos, Georg
1986-01-01
Among the finest, most comprehensive treatments of theoretical physics ever written, this classic volume comprises a superb introduction to the main branches of the discipline and offers solid grounding for further research in a variety of fields. Students will find no better one-volume coverage of so many essential topics; moreover, since its first publication, the book has been substantially revised and updated with additional material on Bessel functions, spherical harmonics, superconductivity, elastomers, and other subjects. The first four chapters review mathematical topics needed by theo
Li, Youyong; Lin, Shiang-Tai; Goddard, William A
2004-02-18
preferred lattice is not determined during the packing process. Both enthalpy and entropy decrease as the density increases. Free energy change with volume shows two stable phases: the condensed phase and the isolated micelle phase. The interactions between the soft dendrimer balls are found to be lattice dependent when described by a two-body potential because the soft ball self-adjusts its shape and interaction in different lattices. The shape of the free energy potential is similar to that of the "square shoulder potential". A model explaining the packing efficiency of ideal soft balls in various lattices is proposed in terms of geometrical consideration.
Robustness - theoretical framework
Sørensen, John Dalsgaard; Rizzuto, Enrico; Faber, Michael H.
2010-01-01
More frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure, combined with increased requirements to efficiency in design and execution followed by increased risk of human errors, has made the need of requirements to robustness of new structures evident. The purpose of this fact sheet is to describe a theoretical and risk-based framework to form the basis for quantification of robustness and for pre-normative guidelines.
Theoretical Delay Time Distributions
Nelemans, Gijs; Bours, Madelon
2012-01-01
We briefly discuss the method of population synthesis to calculate theoretical delay time distributions of type Ia supernova progenitors. We also compare the results of the different research groups and conclude that although one of the main differences in the results for single degenerate progenitors is the retention efficiency with which accreted hydrogen is added to the white dwarf core, this cannot explain all the differences.
Theoretical Delay Time Distributions
Nelemans, Gijs; Toonen, Silvia; Bours, Madelon
2013-01-01
We briefly discuss the method of population synthesis to calculate theoretical delay time distributions of Type Ia supernova progenitors. We also compare the results of different research groups and conclude that, although one of the main differences in the results for single degenerate progenitors is the retention efficiency with which accreted hydrogen is added to the white dwarf core, this alone cannot explain all the differences.
Stöltzner, Michael
Responding to the double-faced influence of string theory on mathematical practice and rigour, the mathematical physicists Arthur Jaffe and Frank Quinn have contemplated the idea that there exists a `theoretical' mathematics (alongside `theoretical' physics) whose basic structures and results still require independent corroboration by mathematical proof. In this paper, I shall take the Jaffe-Quinn debate mainly as a problem of mathematical ontology and analyse it against the backdrop of two philosophical views that are appreciative towards informal mathematical development and conjectural results: Lakatos's methodology of proofs and refutations and John von Neumann's opportunistic reading of Hilbert's axiomatic method. The comparison of both approaches shows that mitigating Lakatos's falsificationism makes his insights about mathematical quasi-ontology more relevant to 20th century mathematics in which new structures are introduced by axiomatisation and not necessarily motivated by informal ancestors. The final section discusses the consequences of string theorists' claim to finality for the theory's mathematical make-up. I argue that ontological reductionism as advocated by particle physicists and the quest for mathematically deeper axioms do not necessarily lead to identical results.
Nonparametric Maximum Entropy Estimation on Information Diagrams
Martin, Elliot A; Meinke, Alexander; Děchtěrenko, Filip; Davidsen, Jörn
2016-01-01
Maximum entropy estimation is of broad interest for inferring properties of systems across many different disciplines. In this work, we significantly extend a technique we previously introduced for estimating the maximum entropy of a set of random discrete variables when conditioning on bivariate mutual informations and univariate entropies. Specifically, we show how to apply the concept to continuous random variables and vastly expand the types of information-theoretic quantities one can condition on. This allows us to establish a number of significant advantages of our approach over existing ones. Not only does our method perform favorably in the undersampled regime, where existing methods fail, but it also can be dramatically less computationally expensive as the cardinality of the variables increases. In addition, we propose a nonparametric formulation of connected informations and give an illustrative example showing how this agrees with the existing parametric formulation in cases of interest. We furthe...
The Sherpa Maximum Likelihood Estimator
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
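The background-only versus background-plus-source comparison described above can be sketched as a Poisson likelihood-ratio test. This is a toy illustration with invented counts, rates, and helper names, not the actual Sherpa/MLE implementation:

```python
import math

def poisson_loglike(counts, rates):
    # Sum of Poisson log-likelihoods: log P(n | mu) = n*log(mu) - mu - log(n!)
    return sum(n * math.log(mu) - mu - math.lgamma(n + 1)
               for n, mu in zip(counts, rates))

def likelihood_ratio(counts, background, source):
    # Test statistic comparing the background-only hypothesis against
    # the background-plus-source hypothesis on the same counts
    ll_bkg = poisson_loglike(counts, background)
    ll_src = poisson_loglike(counts, [b + s for b, s in zip(background, source)])
    return 2.0 * (ll_src - ll_bkg)

# Toy pixels: flat background of 0.5 expected counts, plus a "source" bump
counts = [1, 0, 4, 5, 3, 0, 1]
background = [0.5] * 7
source = [0.0, 0.0, 2.0, 3.0, 2.0, 0.0, 0.0]
ts = likelihood_ratio(counts, background, source)  # positive: source favored
```

A large positive statistic indicates the candidate region is better described by background plus source, which is the decision the MLE tool automates across stacked observations with per-observation PSFs.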
Theoretical Physics 1. Theoretical Mechanics
Dreizler, Reiner M.; Luedde, Cora S. [Frankfurt Univ. (Germany). Inst. fuer Theoretische Physik]
2010-07-01
After an introduction to basic concepts of mechanics more advanced topics build the major part of this book. Interspersed is a discussion of selected problems of motion. This is followed by a concise treatment of the Lagrangian and the Hamiltonian formulation of mechanics, as well as a brief excursion on chaotic motion. The last chapter deals with applications of the Lagrangian formulation to specific systems (coupled oscillators, rotating coordinate systems, rigid bodies). The level of this textbook is advanced undergraduate. The authors combine teaching experience of more than 40 years in all fields of Theoretical Physics and related mathematical disciplines and thorough knowledge in creating advanced eLearning content. The text is accompanied by an extensive collection of online material, in which the possibilities of the electronic medium are fully exploited, e.g. in the form of applets, 2D- and 3D-animations. (orig.)
Barney G. Glaser, Ph.D., Hon. Ph.D.
2009-11-01
Theoretical sorting has brought the analyst to the point of pent-up pressure to write: to see the months of work actualized in a “piece.” But this is only a personal pressure. The goal of grounded theory methodology, above all, is to offer the results to the public, usually through one or more publications. We will focus on writing for publication, which is the most frequent way that the analyst can tell how people are “buying” what really matters in sociology, or in other fields. Both feedback on and use of publications will be the best evaluation of the analyst’s grounded theory. It will be his main source of criticism, constructive critique, and frequently of career rewards. In any case, he has to write to expand his audience beyond the limited number of close colleagues and students. Unless there is a publication, his work will be relegated to limited discussion, classroom presentation, or even private fantasy. The rigor and value of grounded theory work deserves publication. And many analysts have a stake in reaching wider publics, which makes their substantive grounded theory count.
Borkowski, Andrzej; Kosek, Wiesław
2015-12-01
The paper presents a summary of research activities concerning theoretical geodesy performed in Poland in the period 2011-2014. It contains the results of research on new methods of parameter estimation, a study on robustness properties of the M-estimation, control network and deformation analysis, and geodetic time series analysis. The main achievements in geodetic parameter estimation involve a new model of the M-estimation with probabilistic models of geodetic observations, a new Shift-Msplit estimation, which allows estimation of a vector of parameter differences, and Shift-Msplit(+), a generalisation of Shift-Msplit estimation for the case where the design matrix A of the functional model does not have full column rank. New algorithms for coordinate conversion between Cartesian and geodetic coordinates, on both the rotational and the triaxial ellipsoid, can be mentioned as highlights of the research of the last four years. The new parameter estimation models developed have been adopted and successfully applied to control network and deformation analysis. New algorithms based on the wavelet, Fourier and Hilbert transforms were applied to find time-frequency characteristics of geodetic and geophysical time series as well as time-frequency relations between them. Statistical properties of these time series are also presented using different statistical tests as well as 2nd, 3rd and 4th moments about the mean. New forecasting methods are presented which enable prediction of the considered time series in different frequency bands.
Implementation of GAMMON - An efficient load balancing strategy for a local computer system
Baumgartner, Katherine M.; Kling, Ralph M.; Wah, Benjamin W.
1989-01-01
GAMMON (Global Allocation from Maximum to Minimum in cONstant time), an efficient load-balancing algorithm, is described. GAMMON uses the available broadcast capability of multiaccess networks to implement an efficient search technique for finding hosts with maximal and minimal loads. The search technique has an average overhead which is independent of the number of participating stations. The transition from the theoretical concept to a practical, reliable, and efficient implementation is described.
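The max-to-min idea behind GAMMON can be illustrated with a single balancing step. This is a rough sketch with invented names; the real algorithm finds the extremes via the network's broadcast capability rather than a dictionary scan:

```python
def gammon_step(loads):
    # Find the most- and least-loaded hosts (simulating the broadcast
    # search over a dict of host -> load), then migrate one unit of work
    # from the maximum to the minimum if the imbalance is large enough.
    busiest = max(loads, key=loads.get)
    idlest = min(loads, key=loads.get)
    if loads[busiest] - loads[idlest] > 1:
        loads[busiest] -= 1
        loads[idlest] += 1
    return busiest, idlest

hosts = {"a": 9, "b": 2, "c": 5}
moved = gammon_step(hosts)  # migrates one unit from "a" to "b"
```

The point of the broadcast-based search in the paper is precisely that this max/min identification costs an average overhead independent of the number of participating stations, unlike the linear scan simulated here.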
Optimal Tuning of Amplitude Proportional Coulomb Friction Damper for Maximum Cable Damping
Weber, Felix; Høgsberg, Jan Becker; Krenk, Steen
2010-01-01
This paper investigates numerically the optimal tuning of Coulomb friction dampers on cables, where the optimality criterion is maximum additional damping in the first vibration mode. The expression for the optimal friction force level of Coulomb friction dampers follows from the linear viscous...... damper via harmonic averaging. It turns out that the friction force level has to be adjusted in proportion to cable amplitude at damper position which is realized by amplitude feedback in real time. The performance of this adaptive damper is assessed by simulated free decay curves from which the damping...... is estimated. It is found that the damping efficiency agrees well with the expected value at the theoretical optimum. However, maximum damping is larger and achieved at a force to amplitude ratio of 1.4 times the analytical value. Investigations show that the increased damping results from energy spillover...
Theoretical Physics 1. Theoretical Mechanics
Dreizler, Reiner M
2011-01-01
After an introduction to basic concepts of mechanics more advanced topics build the major part of this book. Interspersed is a discussion of selected problems of motion. This is followed by a concise treatment of the Lagrangian and the Hamiltonian formulation of mechanics, as well as a brief excursion on chaotic motion. The last chapter deals with applications of the Lagrangian formulation to specific systems (coupled oscillators, rotating coordinate systems, rigid bodies). The level of this textbook is advanced undergraduate. The authors combine teaching experience of more than 40 years in all fields of Theoretical Physics and related mathematical disciplines and thorough knowledge in creating advanced eLearning content. The text is accompanied by an extensive collection of online material, in which the possibilities of the electronic medium are fully exploited, e.g. in the form of applets, 2D- and 3D-animations. - A collection of 74 problems with detailed step-by-step guidance towards the solutions. - A col...
OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Cooperation and Development, the OECD has developed an MRL Calculator. View the calculator.
Efficient estimation of the maximum metabolic productivity of batch systems
St. John, Peter C.; Crowley, Michael F.; Bomble, Yannick J.
2017-01-31
Production of chemicals from engineered organisms in a batch culture involves an inherent trade-off between productivity, yield, and titer. Existing strategies for strain design typically focus on designing mutations that achieve the highest yield possible while maintaining growth viability. While these methods are computationally tractable, an optimum productivity could be achieved by a dynamic strategy in which the intracellular division of resources is permitted to change with time. New methods for the design and implementation of dynamic microbial processes, both computational and experimental, have therefore been explored to maximize productivity. However, solving for the optimal metabolic behavior under the assumption that all fluxes in the cell are free to vary is a challenging numerical task. Previous studies have therefore typically focused on simpler strategies that are more feasible to implement in practice, such as the time-dependent control of a single flux or control variable.
Dithering Digital Ripple Correlation Control for Photovoltaic Maximum Power Point Tracking
Barth, C; Pilawa-Podgurski, RCN
2015-08-01
This study demonstrates a new method for rapid and precise maximum power point tracking in photovoltaic (PV) applications using dithered PWM control. Constraints imposed by efficiency, cost, and component size limit the available PWM resolution of a power converter, and may in turn limit the MPP tracking efficiency of the PV system. In these scenarios, PWM dithering can be used to improve average PWM resolution. In this study, we present a control technique that uses ripple correlation control (RCC) on the dithering ripple, thereby achieving simultaneous fast tracking speed and high tracking accuracy. Moreover, the proposed method solves some of the practical challenges that have to date limited the effectiveness of RCC in solar PV applications. We present a theoretical derivation of the principles behind dithering digital ripple correlation control, as well as experimental results that show excellent tracking speed and accuracy with basic hardware requirements.
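The core of ripple correlation control, that the sign of the correlation between the power ripple and the injected duty-cycle dither reveals which side of the maximum power point the converter sits on, can be sketched as a toy hill-climber. The PV curve and all constants below are invented for illustration; real RCC operates on measured converter ripple:

```python
def pv_power(duty):
    # Toy concave PV power curve with an assumed maximum at duty = 0.6
    return 100.0 - 400.0 * (duty - 0.6) ** 2

def rcc_track(duty, step=0.01, dither=0.005, iters=200):
    # Dithered RCC sketch: the power difference across the dither ripple
    # plays the role of the power/duty correlation; its sign tells us
    # which direction moves toward the maximum power point.
    for _ in range(iters):
        ripple = pv_power(duty + dither) - pv_power(duty - dither)
        duty += step if ripple > 0 else -step
    return duty

d = rcc_track(0.3)  # converges to within one step of the assumed MPP at 0.6
```

In hardware, the dither doubles as the probing signal that the limited PWM resolution would otherwise deny, which is why the paper can claim fast tracking with basic hardware.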
Maximum margin Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian
2012-03-01
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Drug discovery applies multidisciplinary approaches either experimentally, computationally or both ways to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes’ pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
李勇汇; 冉兵; 朱海昱
2012-01-01
The maximum efficiency control scheme for a solid oxide fuel cell (SOFC) distributed generator (DG) in the grid-connected condition was proposed. By introducing the steady-state equations which govern the complex electrochemical, thermodynamic and electrical processes of the SOFC DG, the relationship between the AC and DC sides of the SOFC DG was established. Analyses indicate that the control variables of the power conditioning unit are dependent on the control variables of the cell stack if the constant unity power factor operating scheme for the SOFC DG is chosen. However, the operating states of the SOFC DG under this control scheme must be subject to the operating constraints denoted as the feasible operating space (FOS). The non-linear programming method was then used to determine the maximum efficiency and the optimal control variables. Simulation results show that the SOFC DG operating at maximum efficiency should maintain three DC-side operating variables constant simultaneously, namely, the fuel utilization factor, the excess oxygen ratio and the stack operating temperature.
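The "search the feasible operating space for the efficiency maximum" step can be illustrated with a crude grid search. The efficiency surrogate, variable ranges, and box-shaped FOS below are invented stand-ins, not the paper's electrochemical model or its nonlinear programming formulation:

```python
def efficiency(u_f, lam):
    # Toy stand-in for stack efficiency as a function of fuel utilization
    # u_f and excess-oxygen ratio lam (assumed shape: best near lam = 2.0,
    # with a penalty for pushing utilization toward 1)
    return u_f * (1.0 - 0.1 * abs(lam - 2.0)) * (1.0 - 0.2 * u_f ** 4)

def maximize_over_fos():
    # Exhaustive search over a box-shaped "feasible operating space"
    best = (0.0, None)
    for i in range(60, 96):          # u_f in [0.60, 0.95]
        for j in range(15, 31):      # lam in [1.5, 3.0]
            u_f, lam = i / 100.0, j / 10.0
            eta = efficiency(u_f, lam)
            if eta > best[0]:
                best = (eta, (u_f, lam))
    return best

best_eta, best_point = maximize_over_fos()
```

A real solver would replace the grid with a constrained nonlinear program, but the structure is the same: an objective on the DC-side operating variables, maximized subject to FOS constraints.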
Feedback Limits to Maximum Seed Masses of Black Holes
Pacucci, Fabio; Natarajan, Priyamvada; Ferrara, Andrea
2017-02-01
The most massive black holes observed in the universe weigh up to ~10^10 M_⊙, nearly independent of redshift. Reaching these final masses likely required copious accretion and several major mergers. Employing a dynamical approach that rests on the role played by a new, relevant physical scale, the transition radius, we provide a theoretical calculation of the maximum mass achievable by a black hole seed that forms in an isolated halo, one that scarcely merged. Incorporating effects at the transition radius and their impact on the evolution of accretion in isolated halos, we are able to obtain new limits for permitted growth. We find that large black hole seeds (M_• ≳ 10^4 M_⊙) hosted in small isolated halos (M_h ≲ 10^9 M_⊙) accreting with relatively small radiative efficiencies (ε ≲ 0.1) grow optimally in these circumstances. Moreover, we show that the standard M_•–σ relation observed at z ~ 0 cannot be established in isolated halos at high z, but requires the occurrence of mergers. Since the average limiting mass of black holes formed at z ≳ 10 is in the range 10^4–10^6 M_⊙, we expect to observe them in local galaxies as intermediate-mass black holes, when hosted in the rare halos that experienced only minor or no merging events. Such ancient black holes, formed in isolation with subsequent scant growth, could survive, almost unchanged, until present.
Maximum power operation of interacting molecular motors
Golubeva, Natalia; Imparato, Alberto
2013-01-01
We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors......, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics....
Energy-Efficient Transmission Schemes in Cooperative Cellular Systems
Yang, Wei; Wang, Ying; Sun, Wanlu
2010-01-01
Energy-efficient communication is an important requirement for mobile devices, as the battery technology has not kept up with the growing requirements stemming from ubiquitous multimedia applications. This paper considers energy-efficient transmission schemes in cooperative cellular systems with unbalanced traffic between uplink and downlink. Theoretically, we derive the optimal transmission data rate, which minimizes the total energy consumption of battery-powered terminals per information bit. The energy-efficient cooperation regions are then investigated to illustrate the effects of relay locations on the energy-efficiency of the systems, and the optimal relay location is found for maximum energy-efficiency. Finally, numerical results are provided to demonstrate the tradeoff between energy-efficiency and spectral efficiency.
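The trade-off behind an optimal, finite transmission rate can be sketched with a toy model in which transmit power grows exponentially with rate while circuit power is fixed. All constants and the model form below are assumptions for illustration, not the paper's cooperative-system model:

```python
def energy_per_bit(rate, p_circuit=0.1):
    # Toy AWGN-flavored model: transmit power ~ 2^R - 1 (Shannon-style),
    # plus a fixed circuit power; dividing total power by the rate R
    # gives energy per information bit.
    return (2.0 ** rate - 1.0 + p_circuit) / rate

def optimal_rate():
    # Coarse one-dimensional search for the energy-minimizing rate
    rates = [r / 100.0 for r in range(10, 501)]
    return min(rates, key=energy_per_bit)

r_opt = optimal_rate()  # finite optimum: too slow wastes circuit energy,
                        # too fast wastes transmit energy
```

The same U-shaped structure is what makes relay location matter in the paper: relaying changes the effective transmit-power term, shifting the energy-optimal operating point.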
Online Stochastic Ad Allocation: Efficiency and Fairness
Feldman, Jon; Korula, Nitish; Mirrokni, Vahab S; Stein, Cliff
2010-01-01
We study the efficiency and fairness of online stochastic display ad allocation algorithms from a theoretical and practical standpoint. In particular, we study the problem of maximizing efficiency in the presence of stochastic information. In this setting, each advertiser has a maximum demand for impressions of display ads that will arrive online. In our model, inspired by the concept of free disposal in economics, we assume that impressions that are given to an advertiser above her demand are given to her for free. Our main theoretical result is to present a training-based algorithm that achieves a (1-ε)-approximation guarantee in the random order stochastic model. In the corresponding online matching problem, we learn a dual variable for each advertiser, based on data obtained from a sample of impressions. We also discuss different fairness measures in online ad allocation, based on comparison to an ideal offline fair solution, and develop algorithms to compute "fair" allocations. We then discuss sev...
Quantum-dot Carnot engine at maximum power.
Esposito, Massimiliano; Kawai, Ryoichi; Lindenberg, Katja; Van den Broeck, Christian
2010-04-01
We evaluate the efficiency at maximum power of a quantum-dot Carnot heat engine. The universal values of the coefficients at the linear and quadratic order in the temperature gradient are reproduced. Curzon-Ahlborn efficiency is recovered in the limit of weak dissipation.
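The quantities in this abstract are easy to make concrete: the Curzon-Ahlborn efficiency at maximum power versus the Carnot bound, and the universal expansion whose linear and quadratic coefficients the paper reproduces. The temperatures below are arbitrary example values:

```python
import math

def carnot(tc, th):
    # Reversible (Carnot) efficiency between cold and hot reservoirs
    return 1.0 - tc / th

def curzon_ahlborn(tc, th):
    # Efficiency at maximum power recovered in the weak-dissipation limit
    return 1.0 - math.sqrt(tc / th)

tc, th = 300.0, 600.0
eta_c = carnot(tc, th)
eta_ca = curzon_ahlborn(tc, th)
# Universal small-eta_C expansion: eta_CA = eta_C/2 + eta_C^2/8 + ...
approx = eta_c / 2 + eta_c ** 2 / 8
```

For this temperature ratio the efficiency at maximum power (~0.293) sits well below the Carnot value (0.5), and the two-term expansion already tracks it closely.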
Greenslade, Thomas B., Jr.
1985-01-01
Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form as well as guidelines and recent data collected from the author provide background for duplicating Hope's experiments in the classroom. (JN)
Abolishing the maximum tension principle
Dabrowski, Mariusz P
2015-01-01
We find the series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$ represented by the entropic force can be abolished. Among them are the varying constants theories, some generalized entropy models applied both for cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Abolishing the maximum tension principle
Mariusz P. Da̧browski
2015-09-01
We find the series of example theories for which the relativistic limit of maximum tension F_max = c^4/4G represented by the entropic force can be abolished. Among them are the varying constants theories, some generalized entropy models applied both for cosmological and black hole horizons, as well as some generalized uncertainty principle models.
A Note on k-Limited Maximum Base
Yang Ruishun; Yang Xiaowei
2006-01-01
The problem of the k-limited maximum base was specialized into two cases: the subset D of the problem is taken to be an independent set and a circuit of the matroid, respectively. It was proved that in these circumstances the collections of k-limited bases satisfy the base axioms. A new matroid is thereby determined, and the problem of the k-limited maximum base is transformed into the problem of a maximum base of this new matroid. For each of the two special problems, an algorithm, in essence a greedy algorithm based on the original matroid, was presented. Both algorithms were proved correct and more efficient, in terms of algorithmic complexity, than the algorithm presented by Ma Zhongfan.
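The greedy matroid algorithm at the heart of the approach can be sketched generically. The independence oracle and the toy uniform matroid below are my own illustration of the general scheme, not the paper's k-limited construction:

```python
def greedy_max_base(elements, weight, independent):
    # Generic matroid greedy: scan elements in decreasing weight and keep
    # each one that preserves independence. For a matroid, the matroid
    # exchange property guarantees this yields a maximum-weight base.
    base = []
    for e in sorted(elements, key=weight, reverse=True):
        if independent(base + [e]):
            base.append(e)
    return base

# Toy example: uniform matroid of rank 3 (any set of <= 3 elements is
# independent), so the greedy simply keeps the 3 heaviest elements.
elems = ["a", "b", "c", "d", "e"]
w = {"a": 5, "b": 9, "c": 1, "d": 7, "e": 3}.get
result = greedy_max_base(elems, w, lambda s: len(s) <= 3)
```

The paper's contribution is to show that the k-limited problem induces a matroid of its own, so this same greedy template applies once the right independence oracle is in hand.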
Maximum entropy production and the fluctuation theorem
Dewar, R C [Unite EPHYSE, INRA Centre de Bordeaux-Aquitaine, BP 81, 33883 Villenave d'Ornon Cedex (France)]
2005-05-27
Recently the author used an information theoretical formulation of non-equilibrium statistical mechanics (MaxEnt) to derive the fluctuation theorem (FT) concerning the probability of second law violating phase-space paths. A less rigorous argument leading to the variational principle of maximum entropy production (MEP) was also given. Here a more rigorous and general mathematical derivation of MEP from MaxEnt is presented, and the relationship between MEP and the FT is thereby clarified. Specifically, it is shown that the FT allows a general orthogonality property of maximum information entropy to be extended to entropy production itself, from which MEP then follows. The new derivation highlights MEP and the FT as generic properties of MaxEnt probability distributions involving anti-symmetric constraints, independently of any physical interpretation. Physically, MEP applies to the entropy production of those macroscopic fluxes that are free to vary under the imposed constraints, and corresponds to selection of the most probable macroscopic flux configuration. In special cases MaxEnt also leads to various upper bound transport principles. The relationship between MaxEnt and previous theories of irreversible processes due to Onsager, Prigogine and Ziegler is also clarified in the light of these results. (letter to the editor)
Maximum Genus of Strong Embeddings
Er-ling Wei; Yan-pei Liu; Han Ren
2003-01-01
The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse is not true. But for a 3-regular graph, the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given. In addition, some graphs with the property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is presented too. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.
Remizov, Ivan D
2009-01-01
In this note, we represent a subdifferential of a maximum functional defined on the space of all real-valued continuous functions on a given metric compact set. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity, Shephard's lemma, as well as duality theory in production and linear programming.
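The stated result can be written compactly (my paraphrase, with $K$ the metric compact set, $C(K)$ the continuous functions on it, and $P(K)$ the Borel probability measures on $K$):

```latex
F(f) = \max_{x \in K} f(x), \qquad
\partial F(f) = \Bigl\{\, \mu \in P(K) \;:\; \operatorname{supp}\mu \subseteq \operatorname*{arg\,max}_{x \in K} f(x) \,\Bigr\}.
```

In particular, when the maximizer is unique the subdifferential collapses to the single Dirac measure at that point, which is the situation in which Roy's identity and Shephard's lemma are usually stated.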
The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future we suggest that PSHA modelers be brutally honest about the uncertainty of M estimates, or must find a way to decrease its influence on the estimated hazard.
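The extreme-value step used in the study, turning a magnitude distribution into a probability distribution for the largest event in a future window, can be sketched with a Gutenberg-Richter model. The completeness magnitude, b-value, and event count below are assumed example values:

```python
def gr_cdf(m, m0=4.0, b=1.0):
    # Gutenberg-Richter magnitude CDF above an assumed completeness
    # magnitude m0: P(M <= m) = 1 - 10^(-b (m - m0))
    return 1.0 - 10.0 ** (-b * (m - m0))

def max_magnitude_cdf(m, n_events, m0=4.0, b=1.0):
    # Extreme-value step: CDF of the largest of n independent magnitudes
    return gr_cdf(m, m0, b) ** n_events

# Probability that the largest of 1000 events exceeds magnitude 7
p_exceed = 1.0 - max_magnitude_cdf(7.0, 1000)
```

Even with 1000 events, the exceedance probability for a rare large magnitude remains of order one-half, which illustrates why testing an M estimate against the handful of extreme events actually observed is so difficult.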
Alternative Multiview Maximum Entropy Discrimination.
Chao, Guoqing; Sun, Shiliang
2016-07-01
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, the multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we try to explore a more natural MVMED framework by assuming two separate distributions p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new MVMED framework alternative MVMED (AMVMED), which enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED because, compared with MVMED, which optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between the two views. We give the detailed solving procedure, which can be divided into two steps. The first step is solving our optimization problem without considering the equal margin posteriors from the two views; then, in the second step, we consider the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of AMVMED, and comparisons with MVMED are also reported.
Combining Experiments and Simulations Using the Maximum Entropy Principle
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-01-01
Given the limited accuracy of force fields, macromolecular simulations sometimes produce results...... that are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy...... in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges....
Cacti with maximum Kirchhoff index
Wang, Wen-Rui; Pan, Xiang-Feng
2015-01-01
The concept of resistance distance was first proposed by Klein and Randić. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of the resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\leq t \leq \lfloor\frac{n-1}{2}\rfloor$. In this paper, the cacti in $Cat(n;t)$ with maximum Kirchhoff index are characterized, as well...
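The Kirchhoff index defined above can be computed directly from the Moore-Penrose pseudoinverse of the graph Laplacian, since the effective resistance between vertices $i$ and $j$ is $r_{ij} = L^+_{ii} + L^+_{jj} - 2L^+_{ij}$. A minimal sketch (illustrative only; function name and example are my own, not the paper's characterization technique):

```python
import numpy as np

def kirchhoff_index(adj):
    """Kirchhoff index: sum of effective resistances over all vertex pairs."""
    adj = np.asarray(adj, dtype=float)
    n = adj.shape[0]
    L = np.diag(adj.sum(axis=1)) - adj   # graph Laplacian
    Lp = np.linalg.pinv(L)               # Moore-Penrose pseudoinverse
    kf = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            kf += Lp[i, i] + Lp[j, j] - 2 * Lp[i, j]  # effective resistance r_ij
    return kf

# 4-cycle C4: resistance at distance d is d*(4-d)/4, so Kf = 4*(3/4) + 2*1 = 5
c4 = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
print(round(kirchhoff_index(c4), 6))  # → 5.0
```

The same value follows from the spectral formula $Kf(G) = n \sum_i 1/\mu_i$ over the nonzero Laplacian eigenvalues (for $C_4$ these are 2, 2, 4, giving $4 \cdot 5/4 = 5$).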
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus is on second-order moments of multiple measurement outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders.
Zolfigol, Mohammad Ali; Kiafar, Mahya; Yarie, Meysam; Taherpour, Avat(Arman); Fellowes, Thomas; Nicole Hancok, Amber; Yari, Ako
2017-06-01
Experimental and computational studies in the synthesis of 2-amino-4,6-diphenylnicotinonitrile using HBF4 as an oxidizing promoter catalyst under mild and solvent free conditions were carried out. The suggested anomeric based oxidation (ABO) mechanism is supported by experimental and theoretical evidence. The theoretical study shows that the intermediate isomers with 5R- and 5S- chiral positions have suitable structures for the aromatization through an anomeric based oxidation in the final step of the mechanistic pathway.
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow, and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency of systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link between 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices are not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages via prices the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
STUDY ON MAXIMUM HYDROGEN CAPACITY FOR Zr-Ni AMORPHOUS ALLOY
Anonymous
2000-01-01
To design amorphous hydrogen storage alloys efficiently, the maximum hydrogen capacities of Zr-Ni amorphous alloys were calculated. Based on the Rhomb Unit Structure Model (RUSM) for amorphous alloys and the experimental finding that hydrogen atoms occupy 3Zr1Ni and 4Zr tetrahedral interstices in Zr-Ni amorphous alloys, the numbers of 3Zr1Ni and 4Zr tetrahedral interstices in a RUSM, which correspond to the hydrogen capacity, were calculated. Two extreme Zr distribution states were considered: a highly heterogeneous Zr distribution and a homogeneous one. The calculated curves of hydrogen capacity versus Zr content for the two states indicate that the hydrogen capacity increases with increasing Zr content and reaches its maximum when the Zr content is 75%. The theoretical maximum hydrogen capacity of Zr-Ni amorphous alloy is 2.0 (H/M). Moreover, at the same Zr content, the hydrogen capacity of an alloy with heterogeneous Zr distribution is higher than that of a homogeneous one. The experimental results confirm that the calculated results are reasonable and, accordingly, explain the observation that the Zr distribution in the amorphous alloy becomes heterogeneous after a few hydrogen absorption-desorption cycles.
On the maximum grain size entrained by photoevaporative winds
Hutchison, Mark A; Maddison, Sarah T
2016-01-01
We model the behaviour of dust grains entrained by photoevaporation-driven winds from protoplanetary discs assuming a non-rotating, plane-parallel disc. We obtain an analytic expression for the maximum entrainable grain size in extreme-UV radiation-driven winds, which we demonstrate to be proportional to the mass loss rate of the disc. When compared with our hydrodynamic simulations, the model reproduces almost all of the wind properties for the gas and dust. In typical turbulent discs, the entrained grain sizes in the wind are smaller than the theoretical maximum everywhere but the inner disc due to dust settling.
A discussion on maximum entropy production and information theory
Bruers, Stijn [Instituut voor Theoretische Fysica, Celestijnenlaan 200D, Katholieke Universiteit Leuven, B-3001 Leuven (Belgium)]
2007-07-06
We will discuss the maximum entropy production (MaxEP) principle based on Jaynes' information theoretical arguments, as was done by Dewar (2003 J. Phys. A: Math. Gen. 36 631-41, 2005 J. Phys. A: Math. Gen. 38 371-81). With the help of a simple mathematical model of a non-equilibrium system, we will show how to derive minimum and maximum entropy production. Furthermore, the model will help us to clarify some confusing points and to see differences between some MaxEP studies in the literature.
Objects of maximum electromagnetic chirality
Fernandez-Corbaton, Ivan
2015-01-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
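The mutual information term used as the regularizer above can be estimated from empirical counts via $I(R;Y) = H(R) + H(Y) - H(R,Y)$. A minimal sketch for discrete responses and labels (the function name and toy data are my own, not from the paper):

```python
import math
from collections import Counter

def mutual_information(responses, labels):
    """I(R; Y) = H(R) + H(Y) - H(R, Y), estimated from empirical counts (bits)."""
    n = len(responses)
    def entropy(counts):
        return -sum(c / n * math.log2(c / n) for c in counts.values())
    h_r = entropy(Counter(responses))            # marginal entropy of responses
    h_y = entropy(Counter(labels))               # marginal entropy of labels
    h_ry = entropy(Counter(zip(responses, labels)))  # joint entropy
    return h_r + h_y - h_ry

# A perfect classifier: responses determine labels, so I(R;Y) = H(Y) = 1 bit
perfect = mutual_information([0, 0, 1, 1], [0, 0, 1, 1])
# An uninformative classifier: responses independent of labels, I(R;Y) = 0
useless = mutual_information([0, 1, 0, 1], [0, 0, 1, 1])
print(perfect, useless)  # → 1.0 0.0
```

Maximizing this quantity over classifier parameters pushes responses toward being maximally informative about the labels, which is the intuition behind the regularizer.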
The strong maximum principle revisited
Pucci, Patrizia; Serrin, James
In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.
Smoothed log-concave maximum likelihood estimation with applications
Chen, Yining
2011-01-01
We study the smoothed log-concave maximum likelihood estimator of a probability distribution on $\\mathbb{R}^d$. This is a fully automatic nonparametric density estimator, obtained as a canonical smoothing of the log-concave maximum likelihood estimator. We demonstrate its attractive features both through an analysis of its theoretical properties and a simulation study. Moreover, we show how the estimator can be used as an intermediate stage of more involved procedures, such as constructing a classifier or estimating a functional of the density. Here again, the use of the estimator can be justified both on theoretical grounds and through its finite sample performance, and we illustrate its use in a breast cancer diagnosis (classification) problem.
Theoretical studies on the solar cell parameters of n-C/p-Si heterojunction
Gupta, B.; Shishodia, P.K.; Kapoor, A.; Mehra, R.M. [Department of Electronic Science, University of Delhi, South Campus, Benito Juarez Road, 110021 New Delhi (India); Krishna, K.M.; Umeno, M. [Research Center for Microstructure Devices, Nagoya Institute of Technology, 466 8555 Nagoya (Japan); Soga, T.; Jimbo, T. [Department of Environmental Technology and Urban Planning, Nagoya Institute of Technology, 466 8555 Nagoya (Japan)
2002-01-01
Amorphous carbon (a-C) is a potential material for the development of low cost solar cells. The heterojunction n-C/p-Si solar cell has recently been developed by Krishna et al. It has been shown that the maximum quantum efficiency (25%) appears at a wavelength λ of 600 nm. In the present work, the theoretical quantum efficiency has been calculated taking into account the contributions of the hole photocurrent density, the electron photocurrent density and the photocurrent within the depletion region. The variation of quantum efficiency with wavelength is found to be qualitatively similar to the experimentally observed variation. The solar cell parameters, namely V_oc, I_sc, FF and efficiency, have also been calculated and compared with the experimental values.
Maximum Entropy Estimation of Transition Probabilities of Reversible Markov Chains
Erik Van der Straeten
2009-11-01
In this paper, we develop a general theory for the estimation of the transition probabilities of reversible Markov chains using the maximum entropy principle. A broad range of physical models can be studied within this approach. We use one-dimensional classical spin systems to illustrate the theoretical ideas. The examples studied in this paper are: the Ising model, the Potts model and the Blume-Emery-Griffiths model.
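As a toy illustration of the maximum entropy principle underlying such estimators (not the paper's reversible Markov chain machinery), one can find the maximum entropy distribution of a die face subject to a mean constraint: the solution is exponential in the constraint function, $p_k \propto e^{\lambda k}$, with the multiplier $\lambda$ fixed by the observed mean. A stdlib-only sketch, solving for $\lambda$ by bisection (all names and parameters are my own):

```python
import math

def maxent_die(target_mean, lo=-10.0, hi=10.0, tol=1e-12):
    """Maximum entropy distribution on faces 1..6 with a fixed mean:
    p_k ∝ exp(lam * k); the multiplier lam is found by bisection,
    since the mean is strictly increasing in lam."""
    def mean(lam):
        w = [math.exp(lam * k) for k in range(1, 7)]
        z = sum(w)
        return sum(k * wk for k, wk in zip(range(1, 7), w)) / z
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * k) for k in range(1, 7)]
    z = sum(w)
    return [wk / z for wk in w]

p = maxent_die(4.5)  # Jaynes' classic "Brandeis dice" example: mean 4.5 > 3.5
```

The resulting probabilities increase monotonically with the face value, as the exponential form requires when the constrained mean exceeds the uniform mean of 3.5.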
Exploring the efficiency potential for an active magnetic regenerator
Eriksen, Dan; Engelbrecht, Kurt; Haffenden Bahl, Christian Robert
2016-01-01
A novel rotary state-of-the-art active magnetic regenerator refrigeration prototype was used in an experimental investigation with special focus on efficiency. Based on an applied cooling load, measured shaft power, and pumping power applied to the active magnetic regenerator, a maximum second-law efficiency of 18% was obtained at a cooling load of 81.5 W, resulting in a temperature span of 15.5 K and a coefficient of performance of 3.6. A loss analysis is given, based on measured pumping power and shaft power together with the theoretically estimated regenerator pressure drop. It is shown that...
Proposed principles of maximum local entropy production.
Ross, John; Corlan, Alexandru D; Müller, Stefan C
2012-07-12
Articles have appeared that rely on the application of some form of "maximum local entropy production principle" (MEPP). This is usually an optimization principle that is supposed to compensate for the lack of structural information and measurements about complex systems, even systems as complex and as little characterized as the whole biosphere or the atmosphere of the Earth or even of less known bodies in the solar system. We select a number of claims from a few well-known papers that advocate this principle and we show that they are in error with the help of simple examples of well-known chemical and physical systems. These erroneous interpretations can be attributed to ignoring well-established and verified theoretical results such as (1) entropy does not necessarily increase in nonisolated systems, such as "local" subsystems; (2) macroscopic systems, as described by classical physics, are in general intrinsically deterministic: there are no "choices" in their evolution to be selected by using supplementary principles; (3) macroscopic deterministic systems are predictable to the extent to which their state and structure are sufficiently well known; usually they are not sufficiently known, and probabilistic methods need to be employed for their prediction; and (4) there is no causal relationship between the thermodynamic constraints and the kinetics of reaction systems. In conclusion, any predictions based on MEPP-like principles should not be considered scientifically founded.
Maximum entropy production and plant optimization theories.
Dewar, Roderick C
2010-05-12
Plant ecologists have proposed a variety of optimization theories to explain the adaptive behaviour and evolution of plants from the perspective of natural selection ('survival of the fittest'). Optimization theories identify some objective function--such as shoot or canopy photosynthesis, or growth rate--which is maximized with respect to one or more plant functional traits. However, the link between these objective functions and individual plant fitness is seldom quantified and there remains some uncertainty about the most appropriate choice of objective function to use. Here, plants are viewed from an alternative thermodynamic perspective, as members of a wider class of non-equilibrium systems for which maximum entropy production (MEP) has been proposed as a common theoretical principle. I show how MEP unifies different plant optimization theories that have been proposed previously on the basis of ad hoc measures of individual fitness--the different objective functions of these theories emerge as examples of entropy production on different spatio-temporal scales. The proposed statistical explanation of MEP, that states of MEP are by far the most probable ones, suggests a new and extended paradigm for biological evolution--'survival of the likeliest'--which applies from biomacromolecules to ecosystems, not just to individuals.
Maximum likelihood estimates of pairwise rearrangement distances.
Serdoz, Stuart; Egri-Nagy, Attila; Sumner, Jeremy; Holland, Barbara R; Jarvis, Peter D; Tanaka, Mark M; Francis, Andrew R
2017-06-21
Accurate estimation of evolutionary distances between taxa is important for many phylogenetic reconstruction methods. Distances can be estimated using a range of different evolutionary models, from single nucleotide polymorphisms to large-scale genome rearrangements. Corresponding corrections for genome rearrangement distances fall into 3 categories: Empirical computational studies, Bayesian/MCMC approaches, and combinatorial approaches. Here, we introduce a maximum likelihood estimator for the inversion distance between a pair of genomes, using a group-theoretic approach to modelling inversions introduced recently. This MLE functions as a corrected distance: in particular, we show that because of the way sequences of inversions interact with each other, it is quite possible for minimal distance and MLE distance to differently order the distances of two genomes from a third. The second aspect tackles the problem of accounting for the symmetries of circular arrangements. While, generally, a frame of reference is locked, and all computation made accordingly, this work incorporates the action of the dihedral group so that distance estimates are free from any a priori frame of reference. The philosophy of accounting for symmetries can be applied to any existing correction method, for which examples are offered.
Theoretical Study of One-Intermediate Band Quantum Dot Solar Cell
Abou El-Maaty Aly
2014-01-01
The intermediate bands (IBs) between the valence and conduction bands play an important role in solar cells, because photons with energies smaller than the bandgap can be used to promote charge carriers to the conduction band, thereby increasing the total output current while maintaining a large open-circuit voltage. In this paper, the influence of the new band on the power conversion efficiency of the quantum dot intermediate band solar cell (QDIBSC) structure is theoretically investigated. The time-independent Schrödinger equation is used to determine the optimum width and location of the intermediate band. Accordingly, the achievement of maximum efficiency by changing the width of the quantum dots and the barrier distances is studied. A theoretical determination of the power conversion efficiency under two different ranges of QD width is presented. From the obtained results, the maximum power conversion efficiency is about 70.42% for a simple cubic quantum dot crystal under fully concentrated light, and it depends strongly on the width of the quantum dots and the barrier distances.
Maximum entropy production in daisyworld
Maunu, Haley A.; Knuth, Kevin H.
2012-05-01
Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.
Maximum stellar iron core mass
F W Giacobbe
2003-03-01
An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10^30 kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10^30 kg.
Maximum Matchings via Glauber Dynamics
Jindal, Anant; Pal, Manjish
2011-01-01
In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem to date runs in $O(m \sqrt{n})$ time, due to Micali and Vazirani [MV80]. Even for general bipartite graphs this is the best known running time (the algorithm of Hopcroft and Karp [HK73] also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has been recently improved to $O(n \log n)$ by Goel, Kapralov and Khanna (STOC 2010) [GKK10]. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \log^2 n)$ time, thereby obtaining a significant improvement over [MV80]. We use a Markov chain similar to the hard-core model for Glauber dynamics with fugacity parameter $\lambda$, which is used to sample independent sets in a graph from the Gibbs distribution [V99], to design a faster algorithm...
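For contrast with the MCMC approach above, the classical augmenting-path idea behind algorithms such as Hopcroft-Karp can be sketched in its simplest form (Kuhn's algorithm) for bipartite graphs; this is a standard textbook method, not the paper's Glauber dynamics sampler:

```python
def max_bipartite_matching(n_left, n_right, edges):
    """Kuhn's augmenting-path algorithm for maximum bipartite matching."""
    adj = [[] for _ in range(n_left)]
    for u, v in edges:
        adj[u].append(v)
    match_right = [-1] * n_right  # match_right[v] = left vertex matched to v

    def try_augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            # v is free, or its current partner can be re-matched elsewhere
            if match_right[v] == -1 or try_augment(match_right[v], seen):
                match_right[v] = u
                return True
        return False

    return sum(try_augment(u, set()) for u in range(n_left))

# Path-like bipartite graph on 3+3 vertices: maximum matching has size 3
print(max_bipartite_matching(3, 3, [(0, 0), (1, 0), (1, 1), (2, 1), (2, 2)]))  # → 3
```

Each call to `try_augment` either extends the matching or leaves it unchanged, so the loop terminates with a maximum matching by Berge's theorem (no augmenting path remains).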
2011-01-10
...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...
A Maximum Entropy Estimator for the Aggregate Hierarchical Logit Model
Pedro Donoso
2011-08-01
A new approach for estimating the aggregate hierarchical logit model is presented. Though usually derived from random utility theory assuming correlated stochastic errors, the model can also be derived as a solution to a maximum entropy problem. Under the latter approach, the Lagrange multipliers of the optimization problem can be understood as parameter estimators of the model. Based on theoretical analysis and Monte Carlo simulations of a transportation demand model, it is demonstrated that the maximum entropy estimators have statistical properties that are superior to classical maximum likelihood estimators, particularly for small or medium-size samples. The simulations also generated reduced bias in the estimates of the subjective value of time and consumer surplus.
Vestige: Maximum likelihood phylogenetic footprinting
Maxwell Peter
2005-05-01
Background: Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package, built on the PyEvolve toolkit, that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the definition of a phylogenetic footprint to be expanded to include variation in the distribution of any molecular evolutionary process. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results: Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process, Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified, illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion: Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational...
Maximum super angle optimization method for array antenna pattern synthesis
Wu, Ji; Roederer, A. G
1991-01-01
Different optimization criteria related to antenna pattern synthesis are discussed. Based on the maximum criteria and a vector space representation, a simple and efficient optimization method is presented for array and array-fed reflector power pattern synthesis. A sector pattern synthesized by a 20...
Valéria Pacheco Batista Euclides
2007-09-01
The objective of this work was to evaluate live weight gain, carrying capacity, and bioeconomic efficiency of Panicum maximum cv. Tanzânia pastures receiving a second application of nitrogen fertilizer at the end of summer (March). Maintenance fertilizer of 50, 17.5 and 33.2 kg ha-1 of N, P and K, respectively, was applied annually in November; in addition, half of the area received an extra 50 kg ha-1 of N in March. The treatments were thus Tanzânia pastures fertilized with 50 or 100 kg ha-1 of N per year, with the paddocks managed under rotational grazing. Four steers were kept in each paddock, and additional steers were added or removed to maintain similar post-grazing residues. Nitrogen fertilization had no effect on average daily gain. However, the pasture fertilized with 100 kg ha-1 of N supported a higher carrying capacity (1.8 AU ha-1) and higher productivity (780 kg ha-1 per year of live weight) than the pasture fertilized with 50 kg ha-1 of N (1.5 AU ha-1 and 690 kg ha-1 per year of live weight, on average). The efficiency of conversion of N into animal product was 1.8 kg of live weight per hectare for each additional kilogram of N applied. Nitrogen fertilization at the end of summer is a bioeconomically viable alternative for sustainable beef production.
Theoretical aspects evaluation of the effectiveness stimulation of innovative activity
N. S. Talalaeva
2013-01-01
This article examines the theoretical foundations of evaluating the effectiveness of incentives for innovative activity, and describes the essence of the quantitative and qualitative approaches to efficiency assessment in the innovation sphere.
Time-optimal excitation of maximum quantum coherence: Physical limits and pulse sequences
Köcher, S. S.; Heydenreich, T.; Zhang, Y.; Reddy, G. N. M.; Caldarelli, S.; Yuan, H.; Glaser, S. J.
2016-04-01
Here we study the optimum efficiency of the excitation of maximum quantum (MaxQ) coherence using analytical and numerical methods based on optimal control theory. The theoretical limit of the achievable MaxQ amplitude and the minimum time to achieve this limit are explored for a set of model systems consisting of up to five coupled spins. In addition to arbitrary pulse shapes, two simple pulse sequence families of practical interest are considered in the optimizations. Compared to conventional approaches, substantial gains were found both in terms of the achieved MaxQ amplitude and in pulse sequence durations. For a model system, theoretically predicted gains of a factor of three compared to the conventional pulse sequence were experimentally demonstrated. Motivated by the numerical results, also two novel analytical transfer schemes were found: Compared to conventional approaches based on non-selective pulses and delays, double-quantum coherence in two-spin systems can be created twice as fast using isotropic mixing and hard spin-selective pulses. Also it is proved that in a chain of three weakly coupled spins with the same coupling constants, triple-quantum coherence can be created in a time-optimal fashion using so-called geodesic pulses.
Optimal design of the gerotor (2-ellipses) for reducing maximum contact stress
Kwak, Hyo Seo; Li, Sheng Huan [Dept. of Mechanical Convergence Technology, Pusan National University, Busan (Korea, Republic of); Kim, Chul [School of Mechanical Design and Manufacturing, Busan Institute of Science and Technology, Busan (Korea, Republic of)
2016-12-15
The oil pump, which serves as the lubricator of engines and automatic transmissions, supplies working oil to the rotating elements to prevent wear. The gerotor pump is widely used in the automobile industry. When wear occurs due to contact between the inner rotor and the outer rotor, the efficiency of the gerotor pump decreases rapidly, and elastic deformation from the contacts also causes vibration and noise. This paper reports the optimal design of a gerotor with a 2-ellipses combined lobe shape that reduces the maximum contact stress. An automatic program was developed in Matlab to calculate the Hertzian contact stress of the gerotor, and the effect of the design parameters on the maximum contact stress was analyzed. In addition, the theoretical analysis used to obtain the contact stress was verified by a coupled fluid-structural analysis, performed with the commercial software Ansys, that considers both the driving force of the inner rotor and the fluid pressure generated by the working oil.
Maximum Energy Output of a DFIG Wind Turbine Using an Improved MPPT-Curve Method
Dinh-Chung Phan
2015-10-01
A new method is proposed for obtaining the maximum power output of a doubly-fed induction generator (DFIG) wind turbine by controlling the rotor- and grid-side converters. The efficiency of the maximum power point tracking obtained by the proposed method is theoretically guaranteed under assumptions that represent physical conditions. Several control parameters may be adjusted to ensure the quality of the control performance. In particular, a DFIG state-space model and a control technique based on the Lyapunov function are adopted to derive the control method. The effectiveness of the proposed method is verified via numerical simulations of a 1.5-MW DFIG wind turbine using MATLAB/Simulink. The simulation results show that when the proposed method is used, the wind turbine is capable of properly tracking the optimal operation point, and the generator's available energy output is higher than with the conventional method.
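A much simpler baseline than the Lyapunov-based controller described above is the classical perturb-and-observe hill-climbing scheme often used for maximum power point tracking; a toy sketch (the power curve, names, and parameters here are invented for illustration and are not from the paper):

```python
def perturb_and_observe(power, w0, step=0.05, iters=200):
    """Hill-climbing MPPT: perturb the operating point, keep the
    direction if the measured power rose, otherwise reverse it."""
    w, p_prev, direction = w0, power(w0), 1.0
    for _ in range(iters):
        w += direction * step
        p = power(w)
        if p < p_prev:           # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return w

# Toy concave power curve with its maximum at w = 1.2 (purely illustrative)
power_curve = lambda w: -(w - 1.2) ** 2 + 1.0
w_opt = perturb_and_observe(power_curve, w0=0.5)
```

The scheme converges to a small oscillation around the maximum power point of width set by the perturbation step, which is the classic tracking-accuracy trade-off that model-based controllers such as the one in this paper aim to avoid.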
Finding maximum JPEG image block code size
Lakhani, Gopal
2012-07-01
We present a study of JPEG baseline coding. It aims to determine the minimum storage needed to buffer the JPEG Huffman code bits of 8-bit image blocks. Since DC is coded separately, and the encoder represents each AC coefficient by a run-length/level pair, the net problem is to perform an efficient search for the optimal run-level pair sequence. We formulate it as a two-dimensional, nonlinear, integer programming problem and solve it using a branch-and-bound search method. We derive two types of constraints to prune the search space. The first is an upper bound on the sum of squares of the AC coefficients of a block, used to discard sequences that cannot represent valid DCT blocks. The second type of constraints is based on some interesting properties of the Huffman code table, and these are used to prune sequences that cannot be part of optimal solutions. Our main result is that if the default JPEG compression setting is used, a minimum of 346 bits and a maximum of 433 bits suffice to buffer the AC code bits of 8-bit image blocks. Our implementation also pruned the search space extremely well; the first constraint reduced the initial search space of 4 nodes down to less than 2 nodes, and the second set of constraints reduced it further by 97.8%.
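The run-length/level pairing that the search optimizes over can be illustrated with a minimal sketch (simplified: real JPEG baseline also emits ZRL codes for zero runs longer than 15 and size categories for the levels):

```python
def run_level_pairs(ac):
    """Convert a zig-zag ordered list of 63 AC coefficients into JPEG-style
    (run, level) pairs, where run counts the zeros preceding each nonzero
    coefficient. Trailing zeros collapse into an end-of-block marker."""
    pairs, run = [], 0
    for c in ac:
        if c == 0:
            run += 1
        else:
            pairs.append((run, c))
            run = 0
    if run:                 # any trailing zeros -> end-of-block
        pairs.append("EOB")
    return pairs
```

For example, the AC sequence `[0, 5, 0, 0, 3, 0, 0, ...]` becomes `[(1, 5), (2, 3), "EOB"]`; the buffer-size question above is how many Huffman code bits such pair sequences can require in the worst case.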
Mozharov, A. M.; Bolshakov, A. D.; Kudryashov, D. A.; Kryzhanovskaya, N. V.; Cirlin, G. E.; Mukhin, I. S.; Harmand, J. C.; Tchernysheva, M.
2015-11-01
In this letter we investigate the photovoltaic properties of a GaN nanowire (NW)/Si substrate heterostructure obtained by molecular beam epitaxy (MBE). The antireflection properties of the NW array were studied theoretically and experimentally, showing an order-of-magnitude improvement over the bare Si surface (2.5% vs. 33.8% reflectance). To determine the optimal morphology and doping levels of the structure with the maximum possible efficiency, we simulated its properties using a finite difference method. The simulations showed a maximum efficiency of 20%.
Improving irrigation efficiency will be insufficient to meet future water demand in the Nile Basin
S. Multsch
2017-08-01
We found that water savings from improved irrigation technology will not be able to meet the additional needs of planned areas. Under a theoretical scenario of maximum possible efficiency, the deficit would still be 5 km³ yr⁻¹. For more likely efficiency improvement scenarios, the deficit ranges between 23 and 29 km³ yr⁻¹. Our results suggest that improving irrigation efficiency may substantially contribute to decreasing water stress on the Nile system but would not completely meet the demand.
Parameter estimation in X-ray astronomy using maximum likelihood
Wachter, K.; Leach, R.; Kellogg, E.
1979-01-01
Methods for estimating parameter values and confidence regions by maximum likelihood and Fisher efficient scores, starting from Poisson probabilities, are developed for the nonlinear spectral functions commonly encountered in X-ray astronomy. It is argued that these methods offer significant advantages over the commonly used minimum chi-squared alternatives because they rely on less pervasive statistical approximations and so may be expected to remain valid for data of poorer quality. Extensive numerical simulations of the maximum likelihood method are reported, verifying that the best-fit parameter value and confidence region calculations are correct over a wide range of input spectra.
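As a toy illustration of the Poisson maximum-likelihood approach (not the authors' code: the power-law spectral model, the grid search, and all numbers are illustrative assumptions):

```python
import numpy as np

energies = np.linspace(1.0, 10.0, 20)       # channel energies (keV), illustrative
rng = np.random.default_rng(0)
expected = 100.0 * energies ** (-1.5)       # "true" power-law spectrum
counts = rng.poisson(expected)              # simulated Poisson counts per channel

def neg_log_like(params):
    """Poisson negative log-likelihood, dropping the constant log(n!) term."""
    norm, index = params
    mu = norm * energies ** (-index)
    return float(np.sum(mu - counts * np.log(mu)))

# Crude grid search for the maximum-likelihood estimate (a real fit would use
# a proper optimizer, but the likelihood surface is the same object).
grid = [(n, g) for n in np.linspace(50, 150, 101) for g in np.linspace(0.5, 2.5, 101)]
norm_hat, index_hat = min(grid, key=neg_log_like)
```

Unlike minimum chi-squared, nothing here assumes the counts are Gaussian, which is exactly why the method degrades gracefully for sparse spectra.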
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented as a method to estimate the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to derive the iterative formula of the error-predicting filter, from which the receiver function is estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps the maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring receiver functions in the time domain.
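The Levinson recursion at the heart of this scheme can be sketched as follows; note how the reflection (PARCOR) coefficients with magnitude below 1 keep the recursion stable, as the abstract notes (illustrative implementation, not the authors' code):

```python
def levinson_durbin(r, order):
    """Solve the Toeplitz normal equations of an error-predicting filter.
    r: autocorrelation sequence r[0..order].
    Returns (prediction coefficients, reflection coefficients, final error)."""
    a = [0.0] * order
    err = r[0]                       # zeroth-order prediction error
    refl = []
    for m in range(order):
        # reflection (PARCOR) coefficient for order m+1
        acc = r[m + 1] - sum(a[i] * r[m - i] for i in range(m))
        k = acc / err
        refl.append(k)
        new_a = a[:]
        new_a[m] = k
        for i in range(m):           # update lower-order coefficients
            new_a[i] = a[i] - k * a[m - 1 - i]
        a = new_a
        err *= (1.0 - k * k)         # error shrinks while |k| < 1
    return a, refl, err
```

For an AR(1) autocorrelation r[k] = 0.5^k the recursion recovers the single coefficient 0.5 and a zero second coefficient, as expected.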
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One technique for maximizing the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current at maximum power. These quantities are determined by differentiating the equation for power and locating its maximum. After the maximum values are found for each time of day, the voltage at maximum power, the current at maximum power, and the maximum power itself are each plotted as functions of the time of day.
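The differentiation step can be mimicked numerically with a single-diode panel model; the parameter values below are hypothetical, chosen only to produce a realistic-looking I-V curve:

```python
import math

I_L, I_0, V_T = 5.0, 1e-10, 0.925   # hypothetical: photocurrent (A),
                                     # saturation current (A), panel thermal voltage (V)

def current(v):
    """Single-diode I-V characteristic."""
    return I_L - I_0 * (math.exp(v / V_T) - 1.0)

def power(v):
    return v * current(v)

def dP_dV(v, h=1e-6):
    """Numerical derivative of power; the MPP is where this crosses zero."""
    return (power(v + h) - power(v - h)) / (2 * h)

# Bisection on dP/dV between short circuit (0 V) and near open circuit.
lo, hi = 0.0, 22.8
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if dP_dV(mid) > 0:
        lo = mid
    else:
        hi = mid
v_mp = 0.5 * (lo + hi)              # voltage of maximum power
i_mp = current(v_mp)                # current of maximum power
```

Repeating this for the irradiance at each time of day would reproduce the project's plots of V, I, and P at maximum power versus time.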
Sundarraj, Pradeepkumar; Taylor, Robert A.; Banerjee, Debosmita; Maity, Dipak; Sinha Roy, Susanta
2017-01-01
Hybrid solar thermoelectric generators (HSTEGs) have garnered significant research attention recently due to their potential ability to cogenerate heat and electricity. In this paper, theoretical and experimental investigations of the electrical and thermal performance of a HSTEG system are reported. In order to validate the theoretical model, a laboratory scale HSTEG system (based on forced convection cooling) is developed. The HSTEG consists of six thermoelectric generator modules, an electrical heater, and a stainless steel cooling block. Our experimental analysis shows that the HSTEG is capable of producing a maximum electrical power output of 4.7 W, an electrical efficiency of 1.2% and thermal efficiency of 61% for an average temperature difference of 92 °C across the TEG modules with a heater power input of 382 W. These experimental results of the HSTEG system are found to be in good agreement with the theoretical prediction. This experimental/theoretical analysis can also serve as a guide for evaluating the performance of the HSTEG system with forced convection cooling.
Machine learning a theoretical approach
Natarajan, Balas K
2014-01-01
This is the first comprehensive introduction to computational learning theory. The author's uniform presentation of fundamental results and their applications offers AI researchers a theoretical perspective on the problems they study. The book presents tools for the analysis of probabilistic models of learning, tools that crisply classify what is and is not efficiently learnable. After a general introduction to Valiant's PAC paradigm and the important notion of the Vapnik-Chervonenkis dimension, the author explores specific topics such as finite automata and neural networks. The presentation
Robust recognition via information theoretic learning
He, Ran; Yuan, Xiaotong; Wang, Liang
2014-01-01
This Springer Brief represents a comprehensive review of information theoretic methods for robust recognition. A variety of information theoretic methods have been proffered in the past decade, in a large variety of computer vision applications; this work brings them together and attempts to impart the theory, optimization and usage of information entropy. The authors resort to a new information theoretic concept, correntropy, as a robust measure and apply it to solve robust face recognition and object recognition problems. For computational efficiency, the brief introduces the additive and multip
The subsequence weight distribution of summed maximum length digital sequences
Weathers, G. D.; Graf, E. R.; Wallace, G. R.
1974-01-01
An attempt is made to develop mathematical formulas to provide the basis for the design of pseudorandom signals intended for applications requiring accurate knowledge of the statistics of the signals. The analysis approach involves calculating the first five central moments of the weight distribution of subsequences of hybrid-sum sequences. The hybrid-sum sequence is formed from the modulo-two sum of k maximum length sequences and is an extension of the sum sequences formed from two maximum length sequences that Gilson (1966) evaluated. The weight distribution of the subsequences serves as an approximation to the filtering process. The basic reason for the analysis of hybrid-sum sequences is to establish a large group of sequences with good statistical properties. It is shown that this can be accomplished much more efficiently using the hybrid-sum approach rather than forming the group strictly from maximum length sequences.
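A minimal sketch of the ingredients described above: maximum length (m-) sequences generated by linear feedback shift registers, and a hybrid-sum sequence formed as their modulo-two sum. The tap sets correspond to the primitive polynomials x³+x+1 and x⁴+x+1; everything else is an illustrative choice:

```python
def lfsr_sequence(taps, nbits, seed=1):
    """One period (2**nbits - 1 bits) of an m-sequence from a Fibonacci LFSR.
    taps are 1-indexed register positions XORed to form the feedback bit."""
    state, out = seed, []
    for _ in range(2 ** nbits - 1):
        out.append(state & 1)                  # output the low bit
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1       # feedback from tapped bits
        state = (state >> 1) | (fb << (nbits - 1))
    return out

s1 = lfsr_sequence([3, 1], 3)                  # x^3 + x + 1, period 7
s2 = lfsr_sequence([4, 1], 4)                  # x^4 + x + 1, period 15

# Hybrid-sum sequence: modulo-two sum of the two m-sequences, period lcm(7, 15)
hybrid = [s1[i % 7] ^ s2[i % 15] for i in range(105)]
```

The balance property of m-sequences (2^{n-1} ones per period) is one of the statistical properties whose subsequence behavior the analysis above characterizes via central moments.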
Development of a Hybrid Ejector-Compressor Refrigeration System with Improved Efficiency
Gutiérrez Ortiz, Alejandro
2016-01-01
The present doctoral dissertation addresses the design of an ejector suitable for a thermally driven hybrid ejector-compressor cooling system; research was aimed at improving the performance of the ejector in terms of both critical backpressure and entrainment ratio. An ejector efficiency analysis is presented to establish a theoretical limit for the maximum achievable entrainment ratio of an ejector undergoing a fully reversible process without entropy generation; the main sources of irre...
Robust Hammerstein Adaptive Filtering under Maximum Correntropy Criterion
Zongze Wu
2015-10-01
The maximum correntropy criterion (MCC) has recently been applied successfully to adaptive filtering. Adaptive algorithms under MCC show strong robustness against large outliers. In this work, we apply the MCC to develop a robust Hammerstein adaptive filter. Compared with traditional Hammerstein adaptive filters, which are usually derived from the well-known mean square error (MSE) criterion, the proposed algorithm achieves better convergence performance, especially in the presence of impulsive non-Gaussian (e.g., α-stable) noise. Additionally, some theoretical results concerning the convergence behavior are obtained. Simulation examples are presented to confirm the superior performance of the new algorithm.
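The core idea, weighting each adaptive update by a Gaussian kernel of the error so that impulsive outliers are suppressed, can be sketched for a plain FIR filter (the paper's Hammerstein model adds a memoryless nonlinearity in front of the FIR part); all signals and parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
w_true = np.array([0.6, -0.3, 0.1])    # unknown system to identify
x = rng.standard_normal(2000)          # input signal

w = np.zeros(3)
mu, sigma = 0.05, 1.0                  # step size and correntropy kernel bandwidth
for n in range(3, len(x)):
    u = x[n - 3:n][::-1]               # regressor, most recent sample first
    d = w_true @ u                     # desired signal
    if rng.random() < 0.05:            # 5% impulsive outliers
        d += rng.standard_normal() * 50.0
    e = d - w @ u
    # MCC update: the Gaussian kernel exp(-e^2 / 2sigma^2) gates the LMS step,
    # so a huge error (an outlier) contributes almost nothing.
    w += mu * np.exp(-e * e / (2 * sigma * sigma)) * e * u
```

With an MSE (plain LMS) update the same outliers would repeatedly kick the weights away from `w_true`; the kernel weighting is what buys the robustness the abstract describes.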
Maximum Likelihood Localization of Radiation Sources with unknown Source Intensity
Baidoo-Williams, Henry E
2016-01-01
In this paper, we consider a novel and robust maximum likelihood approach to localizing radiation sources with unknown statistics of the source signal strength. The result utilizes the smallest number of sensors theoretically required to localize the source. It is shown that, should the source lie in the open convex hull of the sensors, precisely $N+1$ sensors are required in $\mathbb{R}^N$, $N \in \{1,\dots,3\}$. It is further shown that the region of interest, the open convex hull of the sensors, is entirely devoid of false stationary points. An augmented gradient ascent algorithm with random projections, applied should an estimate escape the convex hull, is presented.
Exploiting Maximum Parallelism in Loop Using Heterogeneous Computing
ZENG Guosun
2001-01-01
In this paper, we present the definition of maximum loop speedup, which is the metric of parallelism hidden in a loop body. We also study the classes of Do-loops, their dependences, and the parallelism they contain. How can such parallelism be exploited in a heterogeneous computing environment? The paper proposes several approaches: eliminating serial bottlenecks by means of heterogeneous computing, heterogeneous Do-all-loop scheduling, and heterogeneous Do-across scheduling. We find, through both theoretical analysis and experimental results, that these schemes achieve better performance than homogeneous computing.
Maximum energy yield approach for CPV tracker design
Aldaiturriaga, E.; González, O.; Castro, M.
2012-10-01
Foton HC Systems has developed a new CPV tracker model, focused on its tracking efficiency and on the effect of the tracker control techniques on the final energy yield of the system. This paper presents the theoretical work carried out to determine the energy yield of a CPV system, and illustrates the steps involved in calculating and understanding how the energy consumed by tracking trades off against tracker pointing errors. Additionally, the expressions used to compute the optimum parameters are presented and discussed.
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine the maximum permissible voltage of three kinds of tape. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I_c degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • We relate the maximum permissible voltage to resistance and temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important steps in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (I_c) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I_c degradation or burnout occurs. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. For a quenching duration of 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC, and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm, and 1.2 V/cm, respectively. Based on these sample results, the total length of CC needed in the design of an SFCL can be determined.
Blatt, John M
2010-01-01
A classic work by two leading physicists and scientific educators endures as an uncommonly clear and cogent investigation and correlation of key aspects of theoretical nuclear physics. It is probably the most widely adopted book on the subject. The authors approach the subject as "the theoretical concepts, methods, and considerations which have been devised in order to interpret the experimental material and to advance our ability to predict and control nuclear phenomena." The present volume does not pretend to cover all aspects of theoretical nuclear physics. Its coverage is restricted to
MAXIMUM POWER POINT TRACKING SYSTEM FOR PHOTOVOLTAIC STATION: A REVIEW
I. Elzein
2015-01-01
In recent years there has been growing attention to the use of renewable energy sources. Among them, solar energy is one of the most promising green energy resources due to its environmental sustainability and inexhaustibility. However, photovoltaic (PV) systems suffer from high equipment costs and low efficiency. Moreover, the solar cell V-I characteristic is nonlinear and varies with irradiation and temperature. In general, there is a unique point of PV operation, called the Maximum Power Point (MPP), at which the PV system operates with maximum efficiency and produces its maximum output power. The location of the MPP is not known in advance, but it can be located either through calculation models or by search algorithms. MPPT techniques are therefore important for maintaining the PV array's high efficiency. Many different techniques for MPPT are discussed. This review will hopefully serve as a convenient tool for future work in PV power conversion.
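One of the search algorithms such reviews cover, perturb-and-observe (P&O), can be sketched as follows; the PV curve and all parameters here are hypothetical, not from the paper:

```python
import math

def pv_power(v, i_sc=8.0, i_0=1e-9, v_t=1.0):
    """Hypothetical panel P-V curve from a single-diode model."""
    return v * (i_sc - i_0 * (math.exp(v / v_t) - 1.0))

def perturb_and_observe(v0=10.0, step=0.05, iters=500):
    """Climb the P-V curve: keep perturbing in the same direction while power
    rises, reverse direction when it falls. Settles oscillating at the MPP."""
    v, p_prev, direction = v0, pv_power(v0), 1.0
    for _ in range(iters):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:
            direction = -direction   # power fell: reverse the perturbation
        p_prev = p
    return v

v_mpp = perturb_and_observe()
```

P&O needs no panel model, which is its appeal; the cost is the steady-state oscillation of one perturbation step around the MPP, visible in `v_mpp` never settling exactly.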
2002-01-01
The proceedings contain 8 papers from the Conference on Theoretical Computer Science. Topics discussed include: query by committee, linear separation and random walks; hardness results for neural network approximation problems; a geometric approach to leveraging weak learners; mind change...
Order-theoretical connectivity
T. A. Richmond
1990-01-01
Order-theoretically connected posets are introduced and applied to create the notion of T-connectivity in ordered topological spaces. As special cases, T-connectivity contains classical connectivity, order-connectivity, and link-connectivity.
Theoretical and computational chemistry.
Meuwly, Markus
2010-01-01
Computer-based and theoretical approaches to chemical problems can provide atomistic understanding of complex processes at the molecular level. Examples ranging from rates of ligand-binding reactions in proteins to structural and energetic investigations of diastereomers relevant to organo-catalysis are discussed in the following. They highlight the range of application of theoretical and computational methods to current questions in chemical research.
Theoretical physics and astrophysics
Ginzburg, VL
1979-01-01
The aim of this book is to present, on the one hand various topics in theoretical physics in depth - especially topics related to electrodynamics - and on the other hand to show how these topics find applications in various aspects of astrophysics. The first text on theoretical physics and astrophysical applications, it covers many recent advances including those in X-ray, γ-ray and radio-astronomy, with comprehensive coverage of the literature
Damasen Ikwaba Paul
2015-01-01
This paper presents a theoretical and experimental optical evaluation and comparison of a symmetric Compound Parabolic Concentrator (CPC) and a V-trough collector. For direct comparison of their optical properties, both concentrators were deliberately designed to have the same geometrical concentration ratio (1.96), aperture area, absorber area, and maximum concentrator length. The theoretical optical evaluation of the CPC and V-trough collector was carried out using a ray-trace technique, while the experimental optical efficiency and solar energy flux distributions were analysed using an isolated-cell PV module method. Simulation results showed that for the CPC the highest optical efficiency was 95%, achieved in the interval 0° to ±20°, whereas the highest outdoor experimental optical efficiency was 94% in the same interval. For the V-trough collector, the highest optical efficiencies in simulation and outdoor experiments were about 96% and 93%, respectively, both in the interval 0° to ±5°. Simulation results also showed that the CPC and V-trough exhibit greater non-uniformity of illumination intensity over the PV module surface at larger incidence angles than at smaller ones. On the other hand, the maximum power output of the cells with concentrators varied depending on the location of each cell in the PV module.
Segmentation of low‐cost high efficiency oxide‐based thermoelectric materials
Le, Thanh Hung; Van Nong, Ngo; Linderoth, Søren;
2015-01-01
…efficiency of TE oxides has been a major drawback limiting these materials to broader applications. In this work, theoretical calculations are used to predict how segmentation of oxide and semimetal materials, utilizing the benefits of both types of materials, can provide high efficiency, high temperature… …segmented legs based on p-type Ca3Co4O9 and n-type ZnO oxides, excluding electrical and thermal losses. It is found that the maximum efficiency of the segmented unicouple decreases linearly with increasing interfacial contact resistance. The obtained results provide a useful tool for designing a low… …oxide-based segmented legs. The materials for segmentation are selected by their compatibility factors and their conversion efficiency versus material cost, i.e., "efficiency ratio". Numerical modelling results showed that conversion efficiency could reach values of more than 10% for unicouples using…
Gritti, Fabrice; Pynt, Jarrod; Soliven, Arianne; Dennis, Gary R; Shalliker, R Andrew; Guiochon, Georges
2014-03-14
The effects of column length on performance in segmented flow chromatography were tested. Column efficiencies were measured for 4.6 mm I.D. 3, 5, 7.5 and 10 cm long columns packed with 3.0 μm Hypurity-C18 fully porous particles and for 4.6 mm I.D. 5, 10, 15 and 25 cm long columns packed with 5 μm Hypersil GOLD C18 particles. For each column length and particle type, two different configurations were tested: (1) both the inlet and outlet column endfittings were standard and (2) the inlet endfitting was standard but the outlet endfitting allowed parallel segmentation of the exiting flow into a central and a peripheral coaxial region. The segmentation flow ratio was set at 45% (for 3 μm) and at 43% or 21% (for 5 μm). Four samples were used: naphthalene, toluene, butylbenzene, and insulin, which has a ten times smaller diffusion coefficient than the small molecules. The column performance for the low molecular weight compounds is significantly improved at velocities above the optimum value when the outlet flow rate is segmented, because longitudinal diffusion and mass transfer resistance of these compounds in the stationary phase are negligible sources of band broadening at reduced linear velocities between 5 and 25. At high flow rate (4 mL/min), the long-range eddy dispersion terms are about 3.9, 3.2, 2.6, and 1.8 h units lower for the 3, 5, 7.5 and 10 cm long columns, respectively. The longer the column, the lower the efficiency improvement, because the border effects are smaller. This result was not systematically observed for the columns packed with 5 μm particles because the transverse dispersion is larger. In contrast, the gain in column efficiency is marginal for insulin because the mass transfer mechanism of this compound is mostly controlled by the slow diffusivity of insulin across Hypurity-C18 particles.
Generalised maximum entropy and heterogeneous technologies
Oude Lansink, A.G.J.M.
1999-01-01
Generalised maximum entropy methods are used to estimate a dual model of production on panel data of Dutch cash crop farms over the period 1970-1992. The generalised maximum entropy approach allows a coherent system of input demand and output supply equations to be estimated for each farm in the sample.
20 CFR 229.48 - Family maximum.
2010-04-01
... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion cannot, however, be applied to small or low surface brightness (LSB) galaxies, because such systems show, in general, a continuously
Refinery Efficiency Improvement
WRI
2002-05-15
Refinery processes that convert heavy oils to lighter distillate fuels require heating for distillation, hydrogen addition or carbon rejection (coking). Efficiency is limited by the formation of insoluble carbon-rich coke deposits. Heat exchangers and other refinery units must be shut down for mechanical coke removal, resulting in a significant loss of output and revenue. When a residuum is heated above the temperature at which pyrolysis occurs (340 °C, 650 °F), there is typically an induction period before coke formation begins (Magaril and Aksenova 1968, Wiehe 1993). To avoid fouling, refiners often stop heating a residuum before coke formation begins, using arbitrary criteria. In many cases, this heating is stopped sooner than necessary, resulting in less than maximum product yield. Western Research Institute (WRI) has developed innovative Coking Index concepts (patent pending) which refiners can use for process control to heat residua up to, but not beyond, the threshold at which coke formation begins when petroleum residua are heated at pyrolysis temperatures (Schabron et al. 2001). The development of this universal predictor solves a long-standing problem in petroleum refining. These Coking Indexes have great potential value in improving the efficiency of distillation processes. The Coking Indexes were found to apply to residua in a universal manner, and the theoretical basis for the indexes has been established (Schabron et al. 2001a, 2001b, 2001c). For the first time, a few simple measurements indicate how close undesired coke formation is on the coke-formation induction timeline. The Coking Indexes can lead to new process controls that can improve refinery distillation efficiency by several percentage points. Petroleum residua consist of an ordered continuum of solvated polar materials usually referred to as asphaltenes dispersed in a lower polarity solvent phase held together by intermediate polarity materials usually referred to as
Esfandiar, Habib; KoraYem, Moharam Habibnejad [Islamic Azad University, Tehran (Iran, Islamic Republic of)
2015-09-15
In this study, the researchers examine nonlinear dynamic analysis and determine the Dynamic load carrying capacity (DLCC) of flexible manipulators. Manipulator modeling is based on Timoshenko beam theory (TBT), considering the effects of shear and rotational inertia. To avoid the risk of shear locking, a new procedure is presented based on a mixed finite element formulation. In the proposed method, shear deformation is free from the risk of shear locking and independent of the number of integration points along the element axis. Dynamic modeling of the manipulators is carried out by taking into account small and large deformation models and using the extended Hamilton method. The system equations of motion are obtained using the nonlinear displacement-strain relationship and the second Piola-Kirchhoff stress tensor. In addition, a comprehensive formulation is developed to calculate the DLCC of the flexible manipulators along a prescribed path, considering constraints on end-effector accuracy, maximum motor torque, and maximum stress in the manipulators. Simulation studies are conducted to evaluate the efficiency of the proposed method, considering a two-link flexible, fixed-base manipulator on linear and circular paths. Experimental results are also provided to validate the theoretical model. The findings demonstrate the efficiency and appropriate performance of the proposed method.
Duality of Maximum Entropy and Minimum Divergence
Shinto Eguchi
2014-06-01
We discuss a special class of generalized divergence measures defined by the use of generator functions. Any divergence measure in the class separates into the difference between cross and diagonal entropy. The diagonal entropy measure in the class is associated with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization, for an arbitrarily given statistical model. The dualistic relationship between the maximum entropy model and minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized as totally geodesic with respect to the linear connection associated with the divergence. A natural extension of the classical theory of the maximum likelihood method under the maximum entropy model, in terms of the Boltzmann-Gibbs-Shannon entropy, is given. We discuss the duality in detail for Tsallis entropy as a typical example.
Payoff-monotonic game dynamics and the maximum clique problem.
Pelillo, Marcello; Torsello, Andrea
2006-05-01
Evolutionary game-theoretic models and, in particular, the so-called replicator equations have recently proven to be remarkably effective at approximately solving the maximum clique and related problems. The approach is centered around a classic result from graph theory that formulates the maximum clique problem as a standard (continuous) quadratic program and exploits the dynamical properties of these models, which, under a certain symmetry assumption, possess a Lyapunov function. In this letter, we generalize previous work along these lines in several respects. We introduce a wide family of game-dynamic equations known as payoff-monotonic dynamics, of which replicator dynamics are a special instance, and show that they enjoy precisely the same dynamical properties as standard replicator equations. These properties make any member of this family a potential heuristic for solving standard quadratic programs and, in particular, the maximum clique problem. Extensive simulations, performed on random as well as DIMACS benchmark graphs, show that this class contains dynamics that are considerably faster than and at least as accurate as replicator equations. One problem associated with these models, however, relates to their inability to escape from poor local solutions. To overcome this drawback, we focus on a particular subclass of payoff-monotonic dynamics used to model the evolution of behavior via imitation processes and study the stability of their equilibria when a regularization parameter is allowed to take on negative values. A detailed analysis of these properties suggests a whole class of annealed imitation heuristics for the maximum clique problem, which are based on the idea of varying the parameter during the imitation optimization process in a principled way, so as to avoid unwanted inefficient solutions. Experiments show that the proposed annealing procedure does help to avoid poor local optima by initially driving the dynamics toward promising regions in
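The basic (non-annealed) discrete replicator heuristic on the Motzkin-Straus quadratic program can be sketched as follows; the graph is a small illustrative example, not one of the DIMACS benchmarks:

```python
import numpy as np

# Adjacency matrix of a small graph whose unique maximum clique is {0, 1, 2};
# {2, 3} and {3, 4} are the only other maximal cliques.
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)

# Motzkin-Straus: maximizing x^T A x over the simplex attains 1 - 1/k on the
# characteristic vector of a maximum clique of size k.
x = np.full(5, 0.2)                 # start at the simplex barycenter
for _ in range(1000):
    Ax = A @ x
    x = x * Ax / (x @ Ax)           # discrete-time replicator update

clique = {i for i, xi in enumerate(x) if xi > 1e-3}   # recovered clique support
```

Here the dynamics converge to the uniform vector on {0, 1, 2} with objective 1 - 1/3 = 2/3; the annealed imitation variants discussed above exist precisely because on harder graphs this plain update can stall at a smaller maximal clique.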
张瑞娟; 张丽琍
2015-01-01
As more and more women enter the workplace and take up leadership positions, leadership style and effectiveness have attracted wide attention, generating a large body of empirical research on the two sexes' leadership styles and effectiveness from a gender perspective. According to how gender is defined, existing research can be summarized into three theoretical perspectives: the psychological perspective, the social-structure perspective, and the interpersonal-interaction perspective. Empirical studies of leadership style and effectiveness have mainly examined whether men and women differ in leadership style/effectiveness and which leadership styles are more effective. Although existing research has not reached unanimous conclusions, the relatively stable findings are that women leaders more often display democratic, participative, and transformational leadership behaviors, and that male and female leaders do not differ significantly in effectiveness, with some differences emerging only under particular contextual conditions. These studies not only lay a foundation for subsequent research on leadership style and effectiveness from a gender perspective, but also offer guidance in management practice for female leaders to choose leadership styles suited to their context and exercise effective leadership.
Hierarchical Maximum Margin Learning for Multi-Class Classification
Yang, Jian-Bo
2012-01-01
With myriad classes, designing accurate and efficient classifiers becomes very challenging for multi-class classification. Recent research has shown that class structure learning can greatly facilitate multi-class learning. In this paper, we propose a novel method to learn the class structure for multi-class classification problems. The class structure is assumed to be a binary hierarchical tree. To learn such a tree, we propose a maximum separating margin method to determine the child nodes of any internal node. The proposed method ensures that the two class groups represented by any two sibling nodes are maximally separable. In the experiments, we evaluate the accuracy and efficiency of the proposed method against other multi-class classification methods on real-world large-scale problems. The results show that the proposed method outperforms benchmark methods in terms of accuracy for most datasets and performs comparably with other class structure learning methods in terms of efficiency for all datasets.
A High Efficiency Boost Converter with MPPT Scheme for Low Voltage Thermoelectric Energy Harvesting
Guan, Mingjie; Wang, Kunpeng; Zhu, Qingyuan; Liao, Wei-Hsin
2016-11-01
Using thermoelectric elements to harvest energy from heat has been of great interest during the last decade. This paper presents a direct current-direct current (DC-DC) boost converter with a maximum power point tracking (MPPT) scheme for low-input-voltage thermoelectric energy harvesting applications. A zero-current switching technique is applied in the proposed MPPT scheme. A theoretical analysis of the converter circuits is carried out to derive the equations for the parameters needed in the design of the boost converter. Simulations and experiments are carried out to verify the theoretical analysis and equations. A prototype of the designed converter is built using discrete components and a low-power microcontroller. The results show that the designed converter can achieve a high efficiency at low input voltage. The experimental efficiency of the designed converter is compared with a commercial converter solution; the designed converter has a higher efficiency than the commercial solution in the considered voltage range.
The Betz-Joukowsky limit for the maximum power coefficient of wind turbines
Okulov, Valery; van Kuik, G.A.M.
2009-01-01
The article addresses the history of an important scientific result in wind energy. The maximum efficiency of an ideal wind turbine rotor is well known as the 'Betz limit', named after the German scientist who formulated this maximum in 1920. Lanchester, a British scientist, is also associated...
Influence of Pareto optimality on the maximum entropy methods
Peddavarapu, Sreehari; Sunil, Gujjalapudi Venkata Sai; Raghuraman, S.
2017-07-01
Galerkin meshfree schemes are emerging as a viable substitute for the finite element method in solving partial differential equations for large-deformation and crack-propagation problems. The introduction of the Shannon-Jaynes entropy principle into scattered-data approximation departed from the usual way of defining approximation functions, yielding maximum entropy approximants. In addition, an objective functional that controls the degree of locality leads to local maximum entropy approximants. These are based on an information-theoretic Pareto optimality between entropy and degree of locality, which defines the basis functions on the scattered nodes. The degree of locality in turn relies on the choice of the locality parameter and the prior (weight) function, and the proper choice of both plays a vital role in attaining the desired accuracy. The present work focuses on the effect of the locality parameter, which defines the degree of locality, and of the priors (Gaussian, cubic spline, and quartic spline functions) on the behavior of local maximum entropy approximants.
A dual method for maximum entropy restoration
Smith, C. B.
1979-01-01
A simple iterative dual algorithm for maximum entropy image restoration is presented. The dual algorithm involves fewer parameters than conventional minimization in the image space. Minicomputer test results for Fourier synthesis with inadequate phantom data are given.
Maximum Throughput in Multiple-Antenna Systems
Zamani, Mahdi
2012-01-01
The point-to-point multiple-antenna channel is investigated in an uncorrelated block fading environment with Rayleigh distribution. The maximum throughput and maximum expected-rate of this channel are derived under the assumption that the transmitter is oblivious to the channel state information (CSI), while the receiver has perfect CSI. First, we prove that in multiple-input single-output (MISO) channels, the optimum transmission strategy maximizing the throughput is to use all available antennas and perform equal power allocation with uncorrelated signals. Furthermore, to increase the expected-rate, multi-layer coding is applied. Analogously, we establish that sending uncorrelated signals and performing equal power allocation across all available antennas at each layer is optimum. A closed-form expression for the maximum continuous-layer expected-rate of MISO channels is also obtained. Moreover, we investigate multiple-input multiple-output (MIMO) channels, and formulate the maximum throughput in the asympt...
Photoemission spectromicroscopy with MAXIMUM at Wisconsin
Ng, W.; Ray-Chaudhuri, A.K.; Cole, R.K.; Wallace, J.; Crossley, S.; Crossley, D.; Chen, G.; Green, M.; Guo, J.; Hansen, R.W.C.; Cerrina, F.; Margaritondo, G. (Dept. of Electrical Engineering, Dept. of Physics and Synchrotron Radiation Center, Univ. of Wisconsin, Madison (USA)); Underwood, J.H.; Korthright, J.; Perera, R.C.C. (Center for X-ray Optics, Accelerator and Fusion Research Div., Lawrence Berkeley Lab., CA (USA))
1990-06-01
We describe the development of the scanning photoemission spectromicroscope MAXIMUM at the Wisconsin Synchrotron Radiation Center, which uses radiation from a 30-period undulator. The article includes a discussion of the first tests after the initial commissioning.
Maximum-likelihood method in quantum estimation
Paris, M G A; Sacchi, M F
2001-01-01
The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of the density matrix of spin and radiation systems, as well as to the determination of several parameters of interest in quantum optics.
An improved maximum power point tracking method for photovoltaic systems
Tafticht, T.; Agbossou, K.; Doumbia, M.L.; Cheriti, A. [Institut de recherche sur l' hydrogene, Departement de genie electrique et genie informatique, Universite du Quebec a Trois-Rivieres, C.P. 500, Trois-Rivieres (QC) (Canada)
2008-07-15
In most of the maximum power point tracking (MPPT) methods currently described in the literature, the optimal operating point of photovoltaic (PV) systems is estimated by linear approximations. These approximations can lead to less-than-optimal operating conditions and hence considerably reduce the performance of the PV system. This paper proposes a new approach to determine the maximum power point (MPP) based on measurements of the open-circuit voltage of the PV modules, and a nonlinear expression for the optimal operating voltage is developed from this open-circuit voltage. The approach is thus a combination of the nonlinear and perturb-and-observe (P&O) methods. The experimental results show that the approach clearly improves the tracking efficiency of the maximum power available at the output of the PV modules. The new method reduces the oscillations around the MPP and increases the average efficiency of the MPPT. The new MPPT method will deliver more power to any generic load or energy storage medium.
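For context, the linear baseline the paper improves on can be sketched as follows. This is the classic "fractional open-circuit voltage" rule, which estimates the maximum-power-point voltage as a fixed fraction of a measured V_oc; the factor k is a typical empirical value assumed here for illustration, not the paper's nonlinear expression.

```python
# Hedged sketch of the fractional open-circuit-voltage rule: the MPP
# voltage is approximated as a fixed fraction k of the measured V_oc.
# k = 0.76 is an illustrative assumption (values around 0.7-0.8 are
# commonly quoted for silicon modules).

def vmpp_from_voc(v_oc, k=0.76):
    """Linear approximation V_mpp ~= k * V_oc."""
    return k * v_oc

v_estimate = vmpp_from_voc(21.0)   # roughly 16 V for a nominal 21 V module
```

The paper's contribution is to replace this single fixed fraction with a nonlinear function of V_oc and to combine it with P&O refinement.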
A viable method for goodness-of-fit test in maximum likelihood fit
ZHANG Feng; GAO Yuan-Ning; HUO Lei
2011-01-01
A test statistic is proposed to perform the goodness-of-fit test in the unbinned maximum likelihood fit. Without using a detailed expression of the efficiency function, the test statistic is found to be strongly correlated with the maximum likelihood function if the efficiency function varies smoothly. We point out that the correlation coefficient can be estimated by the Monte Carlo technique. With the established method, two examples are given to illustrate the performance of the test statistic.
The maximum entropy technique. System's statistical description
Belashev, B Z
2002-01-01
The maximum entropy technique (MENT) is applied to find the distribution functions of physical quantities. MENT naturally takes into account the demand of maximum entropy, the characteristics of the system, and the constraint conditions. This allows MENT to be applied to the statistical description of both closed and open systems. Examples are considered in which MENT is used to describe equilibrium states, nonequilibrium states, and states far from thermodynamic equilibrium.
19 CFR 114.23 - Maximum period.
2010-04-01
§ 114.23 Maximum period. (a) A.T.A. carnet. No A.T.A. carnet with a period of validity exceeding 1 year from date of issue shall be accepted. This period of validity cannot be...
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors are proposed for use in maximum-likelihood sequence detection of symbols, from an alphabet of size M, transmitted by uncoded full-response continuous phase modulation (CPM) over a radio channel with additive white Gaussian noise. The receiver structures are derived from a particular interpretation of the maximum-likelihood metrics. The receivers include front ends, whose structure depends only on M, analogous to those in receivers of coherent CPM. The parts of the receivers following the front ends have structures whose complexity depends on N.
SEXUAL DIMORPHISM OF MAXIMUM FEMORAL LENGTH
Pandya A M
2011-04-01
Sexual identification from skeletal parts has medicolegal and anthropological importance. The present study aims to obtain values of maximum femoral length and to evaluate their possible usefulness in determining correct sexual identification. The study sample consisted of 184 dry, normal, adult human femora (136 male and 48 female) from the skeletal collections of the Anatomy department, M. P. Shah Medical College, Jamnagar, Gujarat. Maximum length of the femur was taken as the maximum vertical distance between the upper end of the head of the femur and the lowest point on the femoral condyle, measured with an osteometric board. The mean values obtained were 451.81 mm and 417.48 mm for right male and female femora, and 453.35 mm and 420.44 mm for left male and female femora, respectively. The higher value in males was statistically highly significant (P < 0.001) on both sides. Demarking point (D.P.) analysis of the data showed that right femora with maximum length greater than 476.70 mm were definitely male and those less than 379.99 mm were definitely female; for left bones, femora with maximum length greater than 484.49 mm were definitely male and those less than 385.73 mm were definitely female. Maximum length identified 13.43% of right male femora, 4.35% of right female femora, 7.25% of left male femora, and 8% of left female femora. [National J of Med Res 2011; 1(2): 67-70]
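The demarking-point analysis used above can be sketched as follows, under the usual D.P. convention that a bone is called definitely male only above the female mean plus 3 SD, and definitely female only below the male mean minus 3 SD. The right-side means come from the abstract; the standard deviations are illustrative placeholders, not the study's values.

```python
# Sketch of demarking-point (D.P.) classification for long-bone sexing.
# The means (mm) are the right-side values from the abstract; the SDs
# are assumed purely for illustration.

def demarking_points(mean_male, sd_male, mean_female, sd_female):
    dp_male = mean_female + 3.0 * sd_female   # above this: definitely male
    dp_female = mean_male - 3.0 * sd_male     # below this: definitely female
    return dp_male, dp_female

def classify(length_mm, dp_male, dp_female):
    if length_mm > dp_male:
        return "male"
    if length_mm < dp_female:
        return "female"
    return "indeterminate"

dp_m, dp_f = demarking_points(451.81, 24.0, 417.48, 20.0)
```

The "indeterminate" band between the two demarking points explains why only a modest percentage of bones is positively identified by this single measurement.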
Gueret, Robin; Castillo, Carmen E; Rebarz, Mateusz; Thomas, Fabrice; Hargrove, Aaron-Albert; Pécaut, Jacques; Sliwa, Michel; Fortage, Jérôme; Collomb, Marie-Noëlle
2015-11-01
We recently reported a very efficient homogeneous system for visible-light-driven hydrogen production in water based on the cobalt(III) tetraaza-macrocyclic complex [Co(CR)Cl2]+ (1) (CR = 2,12-dimethyl-3,7,11,17-tetraazabicyclo[11.3.1]heptadeca-1(17),2,11,13,15-pentaene) as a noble-metal-free catalyst, with [Ru(II)(bpy)3]2+ (Ru) as photosensitizer and ascorbate/ascorbic acid (HA-/H2A) as a sacrificial electron donor and buffer (PhysChemChemPhys 2013, 15, 17544). This catalyst has the particularity of achieving very high turnover numbers (TONs) (up to 1000) at pH 4.0 at a relatively high concentration (0.1 mM), generating a large amount of hydrogen with long-term stability. A similar activity was observed for the aquo derivative [Co(III)(CR)(H2O)2]3+ (2), owing to substitution of the chloro ligands by water molecules in aqueous solution. In this work, the geometry and electronic structures of 2 and its analog [Zn(II)(CR)Cl]+ (3), containing the redox-innocent Zn(II) metal ion, have been investigated by DFT calculations in various oxidation states. We also further studied the photocatalytic activity of this system and evaluated the influence of varying the relative concentrations of the different components on the H2-evolving activity. Turnover numbers versus catalyst (TON_Cat) were found to depend on the catalyst concentration, with the highest value of 1130 obtained at 0.05 mM. Interestingly, the analogous nickel derivative, [Ni(II)(CR)Cl2] (4), when tested under the same experimental conditions, was found to be fully inactive for H2 production. Nanosecond transient absorption spectroscopy measurements have revealed that the first electron-transfer steps of the photocatalytic H2-evolution mechanism with the Ru/cobalt tetraaza/HA-/H2A system involve a reductive quenching of the excited state of the photosensitizer by ascorbate (kq = 2.5×10^7 M^-1 s^-1) followed by an electron transfer from the reduced photosensitizer to the catalyst (ket = 1.4×10^9 M
Triadic conceptual structure of the maximum entropy approach to evolution.
Herrmann-Pillath, Carsten; Salthe, Stanley N
2011-03-01
Many problems in evolutionary theory are cast in dyadic terms, such as the polar oppositions of organism and environment. We argue that a triadic conceptual structure offers an alternative perspective under which the information-generating role of evolution as a physical process can be analyzed, and propose a new diagrammatic approach. Peirce's natural philosophy was deeply influenced by his reception of both Darwin's theory and thermodynamics. Thus, we elaborate a new synthesis which puts together his theory of signs and modern Maximum Entropy approaches to evolution in a process discourse. Following recent contributions to the naturalization of Peircean semiosis, pointing towards 'physiosemiosis' or 'pansemiosis', we show that triadic structures involve the conjunction of three different kinds of causality: efficient, formal, and final. In this, we accommodate the state-centered thermodynamic framework to a process approach. We apply this to Ulanowicz's analysis of autocatalytic cycles as primordial patterns of life. This paves the way for a semiotic view of thermodynamics which is built on the idea that Peircean interpretants are systems of physical inference devices evolving under natural selection. In this view, the principles of Maximum Entropy, Maximum Power, and Maximum Entropy Production work together to drive the emergence of information-carrying structures, which at the same time maximize information capacity as well as the gradients of energy flows, such that ultimately, contrary to Schrödinger's seminal contribution, the evolutionary process is seen to be a physical expression of the Second Law.
Reflections on theoretical pragmatics
黄衍
2001-01-01
This paper provides a critical survey of theoretical pragmatics in contemporary linguistics. Topics addressed in the essay include the Anglo-American and European Continental schools of thought; neo-Gricean pragmatics and Relevance theory; the pragmatics-semantics interface; and the pragmatics-syntax interface.
Theoretical aspects of Chiral Dynamics
Leutwyler, H
2015-01-01
Many of the quantities of interest at the precision frontier in particle physics require a good understanding of the strong interaction at low energies. The present talk reviews the theoretical framework used in this context. In particular, I draw attention to the fact that applications of effective field theory methods in the low energy domain involve two different aspects: dependence of the quantities of interest on the quark masses and dependence on the momenta. While the lattice approach gives an excellent handle on the low energy constants that govern the quark mass dependence, the most efficient tool to pin down the momentum dependence is dispersion theory. At the same time, the dispersive analysis enlarges the energy range where the effective theory applies. In the meson sector, the interplay of the various sources of information has led to a coherent framework that describes the low energy structure at remarkably high resolution. The understanding of the low energy properties in the baryon sector is l...
Theoretical information reuse and integration
Rubin, Stuart
2016-01-01
Information Reuse and Integration addresses the efficient extension and creation of knowledge through the exploitation of Kolmogorov complexity in the extraction and application of domain symmetry. Knowledge that seems to be novel can more often than not be recast as the image of a sequence of transformations which yield symmetric knowledge. When the size of those transformations and/or the length of that sequence of transforms exceeds the size of the image, that image is said to be novel or random. It may also be that the new knowledge is random in that no sequence of transforms producing it exists, or at least none is known. The nine chapters comprising this volume incorporate symmetry, reuse, and integration as overt operational procedures or as operations built into the formal representations of data and operators employed. Either way, the aforementioned theoretical underpinnings of information reuse and integration are supported.
Cohen, Andrew [Boston Univ., MA (United States); Schmaltz, Martin [Boston Univ., MA (United States); Katz, Emmanuel [Boston Univ., MA (United States); Rebbi, Claudio [Boston Univ., MA (United States); Glashow, Sheldon [Boston Univ., MA (United States); Brower, Richard [Boston Univ., MA (United States); Pi, So-Young [Boston Univ., MA (United States)
2016-09-30
interactions between quark and gluon particles, we have no clear idea how to express the proton state in terms of these quarks and gluons. This is because the proton, though a bound state of quarks and gluons, is not a state of a fixed number of particles, due to strong interactions. Yet, understanding the proton state is very important in order to theoretically predict the reaction rates observed at the LHC in Geneva, which is a proton-proton collider. Katz has formulated a new approach to QFT, which among other things offers a way to adequately approximate the quantum wave function of a bound state at strong coupling. The approximation scheme is related to the fact that any sensible QFT (including that of the strong interactions) is at short distances approximately self-similar upon rescaling of space and time. It turns out that keeping track of the response upon this rescaling is important in efficiently parameterizing the state. Katz and collaborators have used this observation to approximate the state of the proton in toy versions of the strong force. In the late 1960s Sheldon Glashow, Abdus Salam and Steven Weinberg (1979 Nobel Prize awardees) proposed a theory unifying the weak and electromagnetic interactions which assumed the existence of new particles, the W and Z bosons. The W and Z bosons were eventually detected in high-energy collisions in a particle accelerator at CERN, and the recent discovery of the Higgs meson at the Large Hadron Collider (LHC), also at CERN, completed the picture. However, deep theoretical considerations indicate that the theory by Glashow, Weinberg and Salam, often referred to as "the standard model", cannot be the whole story: the existence of new particles and new interactions at yet higher energies is widely anticipated. The experiments at the LHC are looking for these, while theorists, like Brower, Rebbi and collaborators, are investigating models for these new interactions.
Working in a large national collaboration with access to the most
Maximum-power quantum-mechanical Carnot engine.
Abe, Sumiyoshi
2011-04-01
In their work [J. Phys. A 33, 4427 (2000)], Bender, Brody, and Meister showed, by employing a two-state model of a particle confined in a one-dimensional infinite potential well, that it is possible to construct a quantum-mechanical analog of the Carnot engine through changes of both the width of the well and the quantum state in a specific manner. Here, a discussion is developed of realizing the maximum power of such an engine, where the width of the well changes at low but finite speed. The efficiency of the engine at maximum power output is found to be universal, independent of any of the parameters contained in the model.
Prediction of Double Layer Grids' Maximum Deflection Using Neural Networks
Reza K. Moghadas
2008-01-01
Efficient neural network models are trained to predict the maximum deflection of two-way on two-way grids with variable geometrical parameters (span and height) as well as cross-sectional areas of the element groups. Backpropagation (BP) and radial basis function (RBF) neural networks are employed for this purpose. The inputs of the neural networks are the length of the spans, L, the height, h, and the cross-sectional areas of all the groups, A; the outputs are the maximum deflections of the corresponding double layer grids. The numerical results indicate that the RBF neural network is better than BP in terms of training time and generality of performance.
Maximum Correntropy Unscented Kalman Filter for Spacecraft Relative State Estimation.
Liu, Xi; Qu, Hua; Zhao, Jihong; Yue, Pengcheng; Wang, Meng
2016-09-20
A new algorithm called the maximum correntropy unscented Kalman filter (MCUKF) is proposed and applied to relative state estimation in space communication networks. As is well known, the unscented Kalman filter (UKF) provides an efficient tool for solving nonlinear state estimation problems. However, the UKF usually performs well only under Gaussian noise; its performance may deteriorate substantially in the presence of non-Gaussian noise, especially when the measurements are disturbed by heavy-tailed impulsive noise. By making use of the maximum correntropy criterion (MCC), the proposed algorithm enhances the robustness of the UKF against impulsive noise. In the MCUKF, the unscented transformation (UT) is applied to obtain a predicted state estimate and covariance matrix, and a nonlinear regression method with the MCC cost is then used to reformulate the measurement information. Finally, the UT is applied to the measurement equation to obtain the filter state and covariance matrix. Illustrative examples demonstrate the superior performance of the new algorithm.
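The robustness argument behind the MCC can be illustrated with a minimal sketch of the correntropy measure: residuals are scored by a Gaussian kernel, so a huge outlier contributes almost nothing instead of dominating the cost. The kernel bandwidth and the sample residuals below are arbitrary illustrative choices.

```python
import math

# Correntropy scores errors with a bounded Gaussian kernel; the kernel
# bandwidth sigma is a tuning parameter, chosen arbitrarily here.

def correntropy(errors, sigma=2.0):
    return sum(math.exp(-e * e / (2.0 * sigma * sigma)) for e in errors) / len(errors)

def mse(errors):
    return sum(e * e for e in errors) / len(errors)

clean = [0.1, -0.2, 0.15, -0.05]
spiky = [0.1, -0.2, 0.15, -50.0]   # one impulsive outlier

mse_ratio = mse(spiky) / mse(clean)                   # squared error explodes
corr_drop = correntropy(clean) - correntropy(spiky)   # bounded change in [0, 1]
```

The squared-error cost grows without bound under the outlier, while the correntropy changes by a bounded amount; this saturation is what makes an MCC-based filter robust to impulsive measurement noise.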
Novel TPPO Based Maximum Power Point Method for Photovoltaic System
ABBASI, M. A.
2017-08-01
Photovoltaic (PV) systems have great potential and are now installed more widely than other renewable energy sources. However, a PV system cannot perform optimally due to its strong dependence on climate conditions, and as a result it does not always operate at its maximum power point (MPP). Many MPP tracking methods have been proposed for this purpose. One of these is the perturb and observe (P&O) method, the most popular due to its simplicity, low cost, and fast tracking; however, it deviates from the MPP in continuously changing weather conditions, especially under rapidly changing irradiance. A new maximum power point tracking (MPPT) method, Tetra Point Perturb and Observe (TPPO), is proposed to improve PV system performance under changing irradiance conditions, and the effects of varying irradiance on the characteristic curves of a PV array module are delineated. The proposed MPPT method has shown better results in increasing the efficiency of a PV system.
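The plain P&O baseline this abstract builds on can be sketched as a hill-climbing loop: perturb the operating voltage, observe the power change, and keep or reverse the perturbation direction accordingly. The toy power curve, step size, and iteration count below are illustrative assumptions; this is the classic P&O, not the proposed TPPO variant.

```python
# Hedged sketch of the perturb-and-observe (P&O) MPPT loop. Once near
# the peak, the algorithm oscillates around the MPP, which is the very
# weakness (under fast irradiance changes) that variants like TPPO
# address.

def perturb_and_observe(measure_power, v_start, step=0.05, iters=50):
    v = v_start
    p = measure_power(v)
    direction = 1.0
    for _ in range(iters):
        v_new = v + direction * step
        p_new = measure_power(v_new)
        if p_new < p:              # power dropped: reverse the perturbation
            direction = -direction
        v, p = v_new, p_new
    return v

# Toy PV power curve with a maximum near 17 V (purely illustrative).
power = lambda v: max(0.0, v * (5.0 - 5.0 * (v / 21.0) ** 12))
v_mpp = perturb_and_observe(power, v_start=16.0)
```

Because the loop always takes a step and only reverses on a power drop, it converges to a small oscillation band around the MPP rather than settling exactly on it.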
Combining experiments and simulations using the maximum entropy principle.
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-02-01
A key component of computational biology is to compare the results of computer modelling with experimental measurements. Despite substantial progress in the models and algorithms used in many areas of computational biology, such comparisons sometimes reveal that the computations are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy applications in our field has grown steadily in recent years, in areas as diverse as sequence analysis, structural modelling, and neurobiology. In this Perspectives article, we give a broad introduction to the method, in an attempt to encourage its further adoption. The general procedure is explained in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in complete quantitative accordance with experiments. A common solution to this problem is to explicitly ensure agreement between the two by perturbing the potential energy function towards the experimental data. So far, a general consensus for how such perturbations should be implemented has been lacking. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges.
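The maximum entropy procedure described above can be sketched in a minimal form: among all weight sets that reproduce a target experimental average, the exponential family is the one closest (in relative entropy) to the original uniform weights. The data and the target value below are invented for illustration, and the single-constraint bisection solver is a simplification of what practical implementations do.

```python
import math

# Minimal maximum entropy reweighting: find weights w_i proportional to
# exp(lambda * x_i) whose weighted mean matches a target. lambda is
# found by bisection, since the weighted mean increases monotonically
# with lambda.

def maxent_weights(x, target, lo=-50.0, hi=50.0, n_iter=200):
    def weighted_mean(lam):
        shift = max(x)                        # shift for numerical stability
        ws = [math.exp(lam * (xi - shift)) for xi in x]
        total = sum(ws)
        ws = [w / total for w in ws]
        return ws, sum(w * xi for w, xi in zip(ws, x))
    ws = None
    for _ in range(n_iter):
        mid = 0.5 * (lo + hi)
        ws, m = weighted_mean(mid)
        if m < target:
            lo = mid
        else:
            hi = mid
    return ws

x = [1.0, 2.0, 3.0, 4.0]            # simulated values of an observable
w = maxent_weights(x, target=3.2)   # uniform mean is 2.5; pull it to 3.2
```

The exponential form keeps every configuration's weight positive while perturbing the original ensemble as little as the constraint allows, which is the essence of the maximum entropy (minimum relative entropy) prescription.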
Theoretical analyses of testing efficiency in long-term breeding of poplar
Li Huogen; Dag Lindgren; Darius Danusevicius; Cui Jianguo
2005-01-01
The focus was a comparison between three different testing scenarios for selecting the parents mated to create future breeding generations: selection based on phenotype, clone test, or progeny test. For the main scenario, the highest GMG/Y values and the optimal selection ages for the clone, phenotype, and progeny strategies were 0.7480%, 0.6989%, and 0.4675%, and 7, 6, and 11 years, respectively. The clone test was best except when heritability was high, plant price was high, or the total budget was low; the phenotype strategy was second except in the case of extremely low narrow-sense heritability, for which the progeny strategy was slightly more efficient. GMG/Y was markedly affected by narrow-sense heritability, additive variance at mature age, rotation age, plant-dependent cost, total budget, and the time needed to produce the test plants, while diversity loss and recombination cost had a rather weak effect on GMG/Y. Short rotation age and cheap testing cost favoured all three testing strategies. A comparably short rotation age, low plant-dependent cost, and high total budget seem to promote early selection for the progeny strategy.
Measuring of the maximum measurable velocity for dual-frequency laser interferometer
Zhiping Zhang; Zhaogu Cheng; Zhaoyu Qin; Jianqiang Zhu
2007-01-01
There is an increasing demand on the measurable velocity of laser interferometers in manufacturing technologies. The maximum measurable velocity is limited by the frequency difference of the laser source, the optical configuration, and the electronics bandwidth. An experimental setup based on free-fall motion has been demonstrated to measure the maximum measurable velocity of interferometers. Measurement results show that the maximum measurable velocity is less than its theoretical value. Moreover, the effect of various factors on the measurement results is analyzed, and the results can serve as a reference for industrial applications.
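The theoretical limit the experiment compares against follows from a standard heterodyne argument: the Doppler shift 2v/λ of the measurement beam must stay below the beat (split) frequency Δf of the dual-frequency source, giving v_max = λ·Δf/2. The He-Ne wavelength and 20 MHz split used below are illustrative values, not the paper's.

```python
# Sketch of the dual-frequency interferometer velocity bound:
# v_max = lambda * delta_f / 2, where delta_f is the frequency
# difference of the two laser components. Inputs are illustrative.

def max_measurable_velocity(wavelength_m, delta_f_hz):
    return wavelength_m * delta_f_hz / 2.0

v_max = max_measurable_velocity(632.8e-9, 20e6)   # about 6.3 m/s
```

This is why the abstract singles out the laser's frequency difference, alongside the optics and electronics bandwidth, as the factors limiting measurable velocity.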
Sustainable and efficient biohydrogen production via electrohydrogenesis.
Cheng, Shaoan; Logan, Bruce E
2007-11-20
Hydrogen gas has tremendous potential as an environmentally acceptable energy carrier for vehicles, but most hydrogen is generated from nonrenewable fossil fuels such as natural gas. Here, we show that efficient and sustainable hydrogen production is possible from any type of biodegradable organic matter by electrohydrogenesis. In this process, protons and electrons released by exoelectrogenic bacteria in specially designed reactors (based on modifying microbial fuel cells) are catalyzed to form hydrogen gas through the addition of a small voltage to the circuit. By improving the materials and reactor architecture, hydrogen gas was produced at yields of 2.01-3.95 mol/mol (50-99% of the theoretical maximum) at applied voltages of 0.2 to 0.8 V using acetic acid, a typical dead-end product of glucose or cellulose fermentation. At an applied voltage of 0.6 V, the overall energy efficiency of the process was 288% based solely on electricity applied, and 82% when the heat of combustion of acetic acid was included in the energy balance, at a gas production rate of 1.1 m³ of H₂ per cubic meter of reactor per day. Direct high-yield hydrogen gas production was further demonstrated by using glucose, several volatile acids (acetic, butyric, lactic, propionic, and valeric), and cellulose at maximum stoichiometric yields of 54-91% and overall energy efficiencies of 64-82%. This electrohydrogenic process thus provides a highly efficient route for producing hydrogen gas from renewable and carbon-neutral biomass resources.
Sustainable and efficient biohydrogen production via electrohydrogenesis
Cheng, S.; Logan, B.E. [Pennsylvania State Univ., University Park, PA (United States). Dept. of Civil and Environmental Engineering
2007-11-20
Hydrogen gas has tremendous potential as an environmentally acceptable energy carrier for vehicles, but most hydrogen is generated from nonrenewable fossil fuels such as natural gas. Here, the authors show that efficient and sustainable hydrogen production is possible from any type of biodegradable organic matter by electrohydrogenesis. In this process, protons and electrons released by exoelectrogenic bacteria in specially designed reactors (based on modifying microbial fuel cells) are catalyzed to form hydrogen gas through the addition of a small voltage to the circuit. By improving the materials and reactor architecture, hydrogen gas was produced at yields of 2.01-3.95 mol/mol (50-99% of the theoretical maximum) at applied voltages of 0.2 to 0.8 V using acetic acid, a typical dead-end product of glucose or cellulose fermentation. At an applied voltage of 0.6 V, the overall energy efficiency of the process was 288% based solely on electricity applied, and 82% when the heat of combustion of acetic acid was included in the energy balance, at a gas production rate of 1.1 m³ of H₂ per cubic meter of reactor per day. Direct high-yield hydrogen gas production was further demonstrated by using glucose, several volatile acids (acetic, butyric, lactic, propionic, and valeric), and cellulose at maximum stoichiometric yields of 54-91% and overall energy efficiencies of 64-82%. This electrohydrogenic process thus provides a highly efficient route for producing hydrogen gas from renewable and carbon-neutral biomass resources.
Knissel, Jens; Grossklos, Marc [Institut Wohnen und Umwelt GmbH, Darmstadt (Germany); Werner, Johannes [Ingenieurbuero fuer Energieberatung, Haustechnik und Oekologische Konzepte GbR (eboek), Tuebingen (Germany)
2011-05-15
In energy-efficient buildings with mechanical ventilation and heat recovery, the ventilation heat losses of a building can be increased by additionally opened windows. This causes a significant rise in the heating demand. The Drd method (pressure difference method) assumes that the negative pressure in a building after switching off the supply air fan depends on whether all windows are closed or at least one window is open. In the research project under consideration, the determination of the window opening position using the Drd method is to be developed further. The operating conditions of the Drd method are investigated theoretically, and questions of the required building airtightness and plant characteristics are clarified.
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion can, however, not be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously rising rotation curve until the outermost measured radial position. That is why a general relation has been derived, giving the maximum rotation for a disc depending on the luminosity, surface brightness, and colour of the disc. As a physical basis of this relation serves an adopted fixed mass-to-light ratio as a function of colour. That functionality is consistent with results from population synthesis models and its absolute value is determined from the observed stellar velocity dispersions. The derived maximum disc rotation is compared with a number of observed maximum rotations, clearly demonstrating the need for appreciable amounts of dark matter in the disc region and even more so for LSB galaxies. Matters h...
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z.; Hong, Z.; Wang, D.; Zhou, H.; Shen, X.; Shen, C.
2014-06-01
A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important tasks in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (Ic) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until Ic degradation or burnout occurs. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC, and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm, and 1.2 V/cm, respectively. Based on the results for these samples, the whole length of the CCs used in the design of an SFCL can be determined.
Computing Rooted and Unrooted Maximum Consistent Supertrees
van Iersel, Leo
2009-01-01
A chief problem in phylogenetics and database theory is the computation of a maximum consistent tree from a set of rooted or unrooted trees. Standard inputs are triplets, rooted binary trees on three leaves, or quartets, unrooted binary trees on four leaves. We give exact algorithms constructing rooted and unrooted maximum consistent supertrees in time O(2^n n^5 m^2 log(m)) for a set of m triplets (quartets), each one distinctly leaf-labeled by some subset of n labels. The algorithms extend to weighted triplets (quartets). We further present fast exact algorithms for constructing rooted and unrooted maximum consistent trees in polynomial space. Finally, for a set T of m rooted or unrooted trees with maximum degree D and distinctly leaf-labeled by some subset of a set L of n labels, we compute, in O(2^{mD} n^m m^5 n^6 log(m)) time, a tree distinctly leaf-labeled by a maximum-size subset X of L such that all trees in T, when restricted to X, are consistent with it.
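The consistency notion the abstract relies on can be made concrete for a single triplet. A minimal sketch, not from the paper: a rooted tree is consistent with the triplet ab|c exactly when the lowest common ancestor (LCA) of a and b lies strictly below the LCA of the whole triple. The tree encoding as a parent map and the helper names are illustrative assumptions.

```python
def ancestors(parent, v):
    """Path from v up to the root in a rooted tree given as a parent map."""
    path = [v]
    while path[-1] in parent:
        path.append(parent[path[-1]])
    return path

def lca(parent, u, v):
    """Lowest common ancestor: first vertex on u's root path also above v."""
    above_v = set(ancestors(parent, v))
    return next(x for x in ancestors(parent, u) if x in above_v)

def depth(parent, v):
    return len(ancestors(parent, v)) - 1

def consistent_with_triplet(parent, a, b, c):
    """The tree displays ab|c iff LCA(a,b) is strictly deeper than LCA(a,c)."""
    return depth(parent, lca(parent, a, b)) > depth(parent, lca(parent, a, c))

# Tree ((a,b),c): leaves a and b under internal node x, c attached at root r
tree = {"a": "x", "b": "x", "x": "r", "c": "r"}
```

This tree is consistent with ab|c but not with ac|b, which is the elementary check a supertree algorithm applies to each input triplet.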
Maximum magnitude earthquakes induced by fluid injection
McGarr, Arthur F.
2014-01-01
Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
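The abstract's bound says the maximum seismic moment is limited to the injected volume times the modulus of rigidity. A small sketch of that arithmetic follows; the shear modulus of 3×10¹⁰ Pa is a typical crustal value assumed here (the abstract does not specify one), and the conversion to moment magnitude uses the standard Hanks-Kanamori relation.

```python
import math

def max_moment(injected_volume_m3, shear_modulus_pa=3.0e10):
    """McGarr-style bound: maximum seismic moment (N*m) equals the injected
    fluid volume times the modulus of rigidity. The default shear modulus
    is an assumed typical value for crustal rock."""
    return shear_modulus_pa * injected_volume_m3

def moment_magnitude(m0_newton_meters):
    """Standard Hanks-Kanamori moment magnitude for a moment in N*m."""
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

# Hypothetical project injecting 10,000 cubic meters of fluid
m0 = max_moment(1.0e4)      # 3.0e14 N*m
mw = moment_magnitude(m0)   # about Mw 3.6
```

Scaling the volume up by a factor of 1000 (10⁷ m³, closer to large wastewater-disposal operations) raises the bound by two magnitude units, consistent with the abstract's observation that such operations sometimes exceed magnitude 5.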
Maximum Multiflow in Wireless Network Coding
Zhou, Jin-Yi; Jiang, Yong; Zheng, Hai-Tao
2012-01-01
In a multihop wireless network, wireless interference is crucial to the maximum multiflow (MMF) problem, which studies the maximum throughput between multiple pairs of sources and sinks. In this paper, we observe that network coding can help to decrease the impact of wireless interference, and propose a framework to study the MMF problem for multihop wireless networks with network coding. First, a network model is set up to describe the new conflict relations modified by network coding. Then, we formulate a linear programming problem to compute the maximum throughput and show its superiority over that of networks without coding. Finally, the MMF problem in wireless network coding is shown to be NP-hard and a polynomial approximation algorithm is proposed.
Friedrich, Harald
2017-01-01
This expanded and updated well-established textbook contains an advanced presentation of quantum mechanics adapted to the requirements of modern atomic physics. It includes topics of current interest such as semiclassical theory, chaos, atom optics and Bose-Einstein condensation in atomic gases. In order to facilitate the consolidation of the material covered, various problems are included, together with complete solutions. The emphasis on theory enables the reader to appreciate the fundamental assumptions underlying standard theoretical constructs and to embark on independent research projects. The fourth edition of Theoretical Atomic Physics contains an updated treatment of the sections involving scattering theory and near-threshold phenomena manifest in the behaviour of cold atoms (and molecules). Special attention is given to the quantization of weakly bound states just below the continuum threshold and to low-energy scattering and quantum reflection just above. Particular emphasis is laid on the fundamen...
Compendium of theoretical physics
Wachter, Armin
2006-01-01
Mechanics, Electrodynamics, Quantum Mechanics, and Statistical Mechanics and Thermodynamics comprise the canonical undergraduate curriculum of theoretical physics. In Compendium of Theoretical Physics, Armin Wachter and Henning Hoeber offer a concise, rigorous and structured overview that will be invaluable for students preparing for their qualifying examinations, readers needing a supplement to standard textbooks, and research or industrial physicists seeking a bridge between extensive textbooks and formula books. The authors take an axiomatic-deductive approach to each topic, starting the discussion of each theory with its fundamental equations. By subsequently deriving the various physical relationships and laws in logical rather than chronological order, and by using a consistent presentation and notation throughout, they emphasize the connections between the individual theories. The reader’s understanding is then reinforced with exercises, solutions and topic summaries. Unique Features: Every topic is ...
Application of Maximum Entropy Distribution to the Statistical Properties of Wave Groups
Anonymous
2007-01-01
New distributions for the statistics of wave groups, based on the maximum entropy principle, are presented. The maximum entropy distributions appear to be superior to conventional distributions when only a limited amount of information is available. Applications to wave group properties show the effectiveness of the maximum entropy distribution. An FFT filtering method is employed to obtain the wave envelope quickly and efficiently. Comparisons of both the maximum entropy distribution and the distribution of Longuet-Higgins (1984) with laboratory wind-wave data show that the former gives a better fit.
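One common way to realize the "FFT filtering" envelope extraction the abstract mentions is via the analytic signal: zero out the negative-frequency half of the spectrum, invert the FFT, and take the magnitude. The paper's exact filter is not specified, so this is a sketch of the standard analytic-signal construction, with an illustrative amplitude-modulated test wave.

```python
import numpy as np

def envelope_fft(x):
    """Wave envelope via the analytic signal, computed with the FFT:
    double the positive frequencies, drop the negative ones, and take
    the magnitude of the inverse transform."""
    n = len(x)
    spectrum = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(spectrum * h))

# Illustrative record: a 5 Hz carrier slowly modulated at 0.3 Hz,
# sampled so both components are periodic on the window (no edge effects)
t = np.linspace(0.0, 10.0, 2000, endpoint=False)
modulation = 1.0 + 0.5 * np.sin(2 * np.pi * 0.3 * t)
wave = modulation * np.sin(2 * np.pi * 5.0 * t)
env = envelope_fft(wave)   # recovers the slow modulation
```

For a narrow-band wave whose sidebands stay at positive frequencies, the recovered envelope matches the true modulation essentially exactly, which is why the method is fast and reliable for wave-group statistics.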
Electrochemical kinetics theoretical aspects
Vetter, Klaus J
1967-01-01
Electrochemical Kinetics: Theoretical Aspects focuses on the processes, methodologies, reactions, and transformations in electrochemical kinetics. The book first offers information on electrochemical thermodynamics and the theory of overvoltage. Topics include equilibrium potentials, concepts and definitions, electrical double layer and electrocapillarity, and charge-transfer, diffusion, and reaction overvoltage. Crystallization overvoltage, total overvoltage, and resistance polarization are also discussed. The text then examines the methods of determining electrochemical reaction mechanisms
Silicene: Recent theoretical advances
Lew Yan Voon, L. C.
2016-04-14
Silicene is a two-dimensional allotrope of silicon with a puckered hexagonal structure closely related to the structure of graphene and that has been predicted to be stable. To date, it has been successfully grown in solution (functionalized) and on substrates. The goal of this review is to provide a summary of recent theoretical advances in the properties of both free-standing silicene as well as in interaction with molecules and substrates, and of proposed device applications.
MARKETING MIX THEORETICAL ASPECTS
Margarita Išoraitė
2016-01-01
The aim of the article is to analyze theoretical aspects of the marketing mix. The article discusses how the elements of the marketing mix are used for setting objectives and marketing budget measures. The importance of each element depends not only on the company and its activities, but also on the competition and time. All marketing elements are interrelated and should be seen as a whole in their actions. Some items may have greater importance than others; it depends main...
Theoretical numerical analysis
Wendroff, Burton
1966-01-01
Theoretical Numerical Analysis focuses on the presentation of numerical analysis as a legitimate branch of mathematics. The publication first elaborates on interpolation and quadrature and approximation. Discussions focus on the degree of approximation by polynomials, Chebyshev approximation, orthogonal polynomials and Gaussian quadrature, approximation by interpolation, nonanalytic interpolation and associated quadrature, and Hermite interpolation. The text then ponders on ordinary differential equations and solutions of equations. Topics include iterative methods for nonlinear systems, matri
Maximum Likelihood Joint Tracking and Association in Strong Clutter
Leonid I. Perlovsky
2013-01-01
We have developed a maximum likelihood formulation for a joint detection, tracking and association problem. An efficient non-combinatorial algorithm for this problem is developed for the case of strong clutter in radar data. By using an iterative procedure of the dynamic logic process “from vague-to-crisp” explained in the paper, the new tracker overcomes the combinatorial complexity of tracking in highly-cluttered scenarios and results in an orders-of-magnitude improvement in signal-to-clutter ratio.
Improving predictability of time series using maximum entropy methods
Chliamovitch, G.; Dupuis, A.; Golub, A.; Chopard, B.
2015-04-01
We discuss how maximum entropy methods may be applied to the reconstruction of Markov processes underlying empirical time series and compare this approach to usual frequency sampling. It is shown that, in low dimension, there exists a subset of the space of stochastic matrices for which the MaxEnt method is more efficient than sampling, in the sense that shorter historical samples have to be considered to reach the same accuracy. Considering short samples is of particular interest when modelling smoothly non-stationary processes, which provides, under some conditions, a powerful forecasting tool. The method is illustrated for a discretized empirical series of exchange rates.
The Wiener maximum quadratic assignment problem
Cela, Eranda; Woeginger, Gerhard J
2011-01-01
We investigate a special case of the maximum quadratic assignment problem where one matrix is a product matrix and the other matrix is the distance matrix of a one-dimensional point set. We show that this special case, which we call the Wiener maximum quadratic assignment problem, is NP-hard in the ordinary sense and solvable in pseudo-polynomial time. Our approach also yields a polynomial time solution for the following problem from chemical graph theory: Find a tree that maximizes the Wiener index among all trees with a prescribed degree sequence. This settles an open problem from the literature.
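The Wiener index that names the problem is the sum of shortest-path distances over all vertex pairs. A minimal sketch, not from the paper: computing it by breadth-first search and comparing two trees on four vertices, where the path attains the larger index and the star the smaller.

```python
from collections import deque
from itertools import combinations

def wiener_index(adj):
    """Wiener index: sum of shortest-path distances over all unordered
    vertex pairs. adj maps each vertex to its list of neighbours."""
    def bfs_distances(src):
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return dist

    return sum(bfs_distances(u)[v] for u, v in combinations(adj, 2))

path = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}   # path on 4 vertices
star = {1: [2, 3, 4], 2: [1], 3: [1], 4: [1]}   # star on 4 vertices
# Among trees on 4 vertices the path maximizes the index (10 vs 9),
# illustrating why maximizing over trees with a prescribed degree
# sequence is a meaningful optimization problem.
```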
Maximum confidence measurements via probabilistic quantum cloning
Zhang Wen-Hai; Yu Long-Bao; Cao Zhuo-Liang; Ye Liu
2013-01-01
Probabilistic quantum cloning (PQC) cannot copy a set of linearly dependent quantum states. In this paper, we show that if incorrect copies are allowed to be produced, linearly dependent quantum states may also be cloned by the PQC. By exploiting this kind of PQC to clone a special set of three linearly dependent quantum states, we derive the upper bound of the maximum confidence measure of a set. An explicit transformation of the maximum confidence measure is presented.
Maximum floodflows in the conterminous United States
Crippen, John R.; Bue, Conrad D.
1977-01-01
Peak floodflows from thousands of observation sites within the conterminous United States were studied to provide a guide for estimating potential maximum floodflows. Data were selected from 883 sites with drainage areas of less than 10,000 square miles (25,900 square kilometers) and were grouped into regional sets. Outstanding floods for each region were plotted on graphs, and envelope curves were computed that offer reasonable limits for estimates of maximum floods. The curves indicate that floods may occur that are two to three times greater than those known for most streams.
Revealing the Maximum Strength in Nanotwinned Copper
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
The strength of polycrystalline materials increases with decreasing grain size. Below a critical size, smaller grains might lead to softening, as suggested by atomistic simulations. The strongest size should arise at a transition in deformation mechanism from lattice dislocation activities to grain boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced…
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used … algorithms, First-Fit-Increasing and First-Fit-Decreasing, for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find…
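The two algorithms the snippet names differ only in the order items are fed to First Fit. A toy sketch, assuming unit-capacity bins and an illustrative item list not taken from the paper, shows why increasing order can leave more bins open, which is what the maximum resource objective rewards.

```python
def first_fit(items, capacity=1.0):
    """First Fit: each item goes into the first open bin it fits in;
    a new bin is opened only when no existing bin can take it."""
    bins = []
    for x in items:
        for b in bins:
            if sum(b) + x <= capacity + 1e-12:  # tolerance for float sums
                b.append(x)
                break
        else:
            bins.append([x])
    return bins

items = [0.6, 0.5, 0.4, 0.3, 0.2]
ffi_bins = first_fit(sorted(items))                  # First-Fit-Increasing
ffd_bins = first_fit(sorted(items, reverse=True))    # First-Fit-Decreasing
# On this instance FFI opens 3 bins while FFD packs tightly into 2:
# small items placed first clog the early bins, forcing more to open.
```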
Maximum entropy analysis of EGRET data
Pohl, M.; Strong, A.W.
1997-01-01
EGRET data are usually analysed on the basis of the Maximum-Likelihood method \cite{ma96} in a search for point sources in excess of a model for the background radiation (e.g. \cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background like the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.
Maximum phytoplankton concentrations in the sea
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collected in the North Atlantic as part of the Bermuda Atlantic Time Series program as well as data collected off Southern California as part of the Southern California Bight Study program. The observed maximum particulate organic carbon and volumetric particle concentrations are consistent with the predictions…
Theoretical Developments in SUSY
Shifman, M.
2009-01-01
I am proud that I was personally acquainted with Julius Wess. We first met in 1999 when I was working on the Yuri Golfand Memorial Volume (The Many Faces of the Superworld, World Scientific, Singapore, 2000). I invited him to contribute, and he accepted this invitation with enthusiasm. After that, we met many times, mostly at various conferences in Germany and elsewhere. I was lucky to discuss with Julius questions of theoretical physics, and hear his recollections on how supersymmetry was born. In physics Julius was a visionary, who paved the way to generations of followers. In everyday life he was a kind and modest person, always ready to extend a helping hand to people who were in need of his help. I remember him telling me how concerned he was about the fate of theoretical physicists in Eastern Europe after the demise of communism. His ties with Israeli physicists bore a special character. I am honored by the opportunity to contribute an article to the Julius Wess Memorial Volume. I will review theoretical developments of the recent years in non-perturbative supersymmetry.
Park, Hyunbin; Sim, Minseob; Kim, Shiho
2015-06-01
We propose a way of achieving maximum power and power-transfer efficiency from thermoelectric generators by optimized selection of maximum-power-point-tracking (MPPT) circuits composed of a boost-cascaded-with-buck converter. We investigated the effect of switch resistance on the MPPT performance of thermoelectric generators. The on-resistances of the switches affect the decrease in the conversion gain and reduce the maximum output power obtainable. Although the incremental values of the switch resistances are small, the resulting difference in the maximum duty ratio between the input and output powers is significant. For an MPPT controller composed of a boost converter with a practical nonideal switch, we need to monitor the output power instead of the input power to track the maximum power point of the thermoelectric generator. We provide a design strategy for MPPT controllers by considering the compromise in which a decrease in switch resistance causes an increase in the parasitic capacitance of the switch.
Working Hard and Working Smart: Motivation and Ability during Typical and Maximum Performance
Klehe, Ute-Christine; Anderson, Neil
2007-01-01
The distinction between what people "can" do (maximum performance) and what they "will" do (typical performance) has received considerable theoretical but scant empirical attention in industrial-organizational psychology. This study of 138 participants performing an Internet-search task offers an initial test and verification of P. R. Sackett, S.…
Attainability of Carnot efficiency with autonomous engines.
Shiraishi, Naoto
2015-11-01
The maximum efficiency of autonomous engines with a finite chemical potential difference is investigated. We show that, without a particular type of singularity, autonomous engines cannot attain the Carnot efficiency. This singularity is realized in two ways: single particle transports and the thermodynamic limit. We demonstrate that both of these ways actually lead to the Carnot efficiency in concrete setups. Our results clearly illustrate that the singularity plays a crucial role in the maximum efficiency of autonomous engines.
Maximum Entropy Production and Non-Gaussian Climate Variability
Sura, Philip
2016-01-01
Earth's atmosphere is in a state far from thermodynamic equilibrium. For example, the large scale equator-to-pole temperature gradient is maintained by tropical heating, polar cooling, and a midlatitude meridional eddy heat flux predominantly driven by baroclinically unstable weather systems. Based on basic thermodynamic principles, it can be shown that the meridional heat flux, in combination with the meridional temperature gradient, acts to maximize entropy production of the atmosphere. In fact, maximum entropy production (MEP) has been successfully used to explain the observed mean state of the atmosphere and other components of the climate system. However, one important feature of the large scale atmospheric circulation is its often non-Gaussian variability about the mean. This paper presents theoretical and observational evidence that some processes in the midlatitude atmosphere are significantly non-Gaussian to maximize entropy production. First, after introducing the basic theory, it is shown that the ...
Arbutina Bojan
2011-01-01
AM CVn-type stars and ultra-compact X-ray binaries are extremely interesting semi-detached close binary systems in which the Roche lobe filling component is a white dwarf transferring mass to another white dwarf, neutron star or black hole. Earlier theoretical considerations show that there is a maximum mass ratio of AM CVn-type binary systems (qmax ≈ 2/3) below which the mass transfer is stable. In this paper we derive a slightly different value for qmax and, more interestingly, by applying the same procedure, we find the maximum expected white dwarf mass in ultra-compact X-ray binaries.
Adaptive edge image enhancement based on maximum fuzzy entropy
ZHANG Xiu-hua; YANG Kun-tao
2006-01-01
Based on the maximum fuzzy entropy principle, an edge image with low contrast is optimally and adaptively classified into two classes, under the conditions of probability partition and fuzzy partition. The optimal threshold is used as the classification threshold, and a local parametric gray-level transformation is applied to the obtained classes. By means of two representative parameters, the homogeneity of the regions in the edge image is improved. Simulations on a set of test images show that the proposed technique possesses excellent performance in homogeneity, and that the extracted and enhanced edges provide an efficient edge representation of images.
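The fuzzy method in the abstract generalizes the classical maximum-entropy threshold of Kapur et al.: pick the gray level that maximizes the summed Shannon entropies of the two classes it induces. A minimal sketch of that classical (non-fuzzy) criterion on a toy histogram, which is an illustration rather than the paper's algorithm:

```python
import math

def kapur_threshold(hist):
    """Classical maximum-entropy threshold: choose t maximizing the sum
    of the Shannon entropies of classes [0..t] and [t+1..L-1]."""
    total = sum(hist)
    p = [h / total for h in hist]

    def class_entropy(probs):
        mass = sum(probs)
        if mass == 0:
            return 0.0
        return -sum(q / mass * math.log(q / mass) for q in probs if q > 0)

    best_t, best_h = None, -1.0
    for t in range(len(hist) - 1):
        h = class_entropy(p[: t + 1]) + class_entropy(p[t + 1:])
        if h > best_h:
            best_t, best_h = t, h
    return best_t

# Toy bimodal gray-level histogram (levels 0..7, modes at levels 1 and 6):
# the selected threshold lands in the valley between the modes.
hist = [10, 30, 10, 1, 1, 10, 30, 10]
t = kapur_threshold(hist)
```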
Analysis of Photovoltaic Maximum Power Point Trackers
Veerachary, Mummadi
The photovoltaic generator exhibits a non-linear i-v characteristic and its maximum power point (MPP) varies with solar insolation. An intermediate switch-mode dc-dc converter is required to extract maximum power from the photovoltaic array. In this paper buck, boost and buck-boost topologies are considered and a detailed mathematical analysis, both for continuous and discontinuous inductor current operation, is given for MPP operation. The conditions on the connected load values and duty ratio are derived for achieving satisfactory maximum power point operation. Further, it is shown that certain load values, falling outside the optimal range, will drive the operating point away from the true maximum power point. A detailed comparison of the various topologies for MPPT is given. Selection of the converter topology for a given loading is discussed. A detailed discussion of circuit-oriented model development is given, and the MPPT effectiveness of the various converter systems is then verified through simulations. The proposed theory and analysis are validated through experimental investigations.
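The tracking loop that sits on top of such a converter is often the classic perturb-and-observe (P&O) hill climb: nudge the operating voltage, and keep the direction only if the measured power rose. This is a generic sketch, not the paper's analysis; the quadratic power curve with its peak at 17 V is a purely illustrative stand-in for the real non-linear i-v characteristic.

```python
def pv_power(v):
    """Hypothetical PV power curve (watts) with its maximum power point
    at 17 V; a stand-in for a real panel's non-linear characteristic."""
    return max(0.0, 120.0 - (v - 17.0) ** 2)

def perturb_and_observe(v_start=10.0, step=0.2, iterations=200):
    """Classic P&O MPPT: perturb the operating voltage and reverse
    direction whenever the observed power decreases."""
    v, direction = v_start, +1.0
    p_prev = pv_power(v)
    for _ in range(iterations):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:
            direction = -direction   # overshot the peak: turn around
        p_prev = p
    return v

v_mpp = perturb_and_observe()   # settles into oscillation around 17 V
```

The steady-state oscillation of roughly one perturbation step around the MPP is the well-known cost of P&O, one reason converter topology and duty-ratio conditions, as analyzed in the paper, matter for tracking quality.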
On maximum cycle packings in polyhedral graphs
Peter Recht
2014-04-01
This paper addresses upper and lower bounds for the cardinality of a maximum vertex-/edge-disjoint cycle packing in a polyhedral graph G. Bounds on the cardinality of such packings are provided that depend on the size, the order, or the number of faces of G, respectively. Polyhedral graphs are constructed that attain these bounds.
Hard graphs for the maximum clique problem
Hoede, Cornelis
1988-01-01
The maximum clique problem is one of the NP-complete problems. There are graphs for which a reduction technique exists that transforms the problem for these graphs into one for graphs with specific properties in polynomial time. The resulting graphs do not grow exponentially in order and number. Gra
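For context on what the reductions above operate on, here is the problem itself in its most naive form: a brute-force maximum clique search, fine for the tiny instances used to illustrate the problem but exponential in general, which is exactly why NP-completeness and reduction techniques matter. The example graph is illustrative, not from the paper.

```python
from itertools import combinations

def max_clique(vertices, edges):
    """Brute-force maximum clique: try vertex subsets from largest to
    smallest and return the first one whose vertices are pairwise adjacent.
    Exponential time; only suitable for small illustrative graphs."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    for k in range(len(vertices), 0, -1):
        for candidate in combinations(vertices, k):
            if all(v in adj[u] for u, v in combinations(candidate, 2)):
                return set(candidate)
    return set()

# A triangle {1,2,3} with a pendant vertex 4: maximum clique has size 3
clique = max_clique([1, 2, 3, 4], [(1, 2), (2, 3), (1, 3), (3, 4)])
```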
Maximum Likelihood Estimation of Search Costs
J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)
2006-01-01
In a recent paper Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online p
Weak Scale From the Maximum Entropy Principle
Hamada, Yuta; Kawana, Kiyoharu
2015-01-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model are fixed in such a way that the radiation of the $S^{3}$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the Standard Model, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_{h}$, and show that it becomes maximum around $v_{h}=\mathcal{O}(300\,\text{GeV})$ when the dimensionless couplings in the Standard Model, that is, the Higgs self-coupling, the gauge couplings, and the Yukawa couplings are fixed. Roughly speaking, we find that the weak scale is given by $v_{h}\sim T_{BBN}^{2}/(M_{pl}\,y_{e}^{5})$, where $y_{e}$ is the Yukawa coupling of the electron, $T_{BBN}$ is the temperature at which Big Bang nucleosynthesis starts, and $M_{pl}$ is the Planck mass.
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the $S^{3}$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_{h}$, and show that it becomes maximum around $v_{h} = \mathcal{O}(300\,\text{GeV})$ when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by $v_{h}\sim T_{BBN}^{2}/(M_{pl}y_{e}^{5})$, where $y_{e}$ is the Yukawa coupling of the electron, $T_{BBN}$ is the temperature at which Big Bang nucleosynthesis starts, and $M_{pl}$ is the Planck mass.
Global characterization of the Holocene Thermal Maximum
Renssen, H.; Seppä, H.; Crosta, X.; Goosse, H.; Roche, D.M.V.A.P.
2012-01-01
We analyze the global variations in the timing and magnitude of the Holocene Thermal Maximum (HTM) and their dependence on various forcings in transient simulations covering the last 9000 years (9 ka), performed with a global atmosphere-ocean-vegetation model. In these experiments, we consider the i
Instance Optimality of the Adaptive Maximum Strategy
L. Diening; C. Kreuzer; R. Stevenson
2016-01-01
In this paper, we prove that the standard adaptive finite element method with a (modified) maximum marking strategy is instance optimal for the total error, being the square root of the squared energy error plus the squared oscillation. This result will be derived in the model setting of Poisson’s e
Maximum phonation time: variability and reliability.
Speyer, Renée; Bogaardt, Hans C A; Passos, Valéria Lima; Roodenburg, Nel P H D; Zumach, Anne; Heijnen, Mariëlle A M; Baijens, Laura W J; Fleskens, Stijn J H M; Brunings, Jan W
2010-05-01
The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia versus a group of healthy control subjects matched by age and gender. Over a period of maximally 6 weeks, three video recordings were made of each subject's five maximum phonation time trials. A panel of five experts was responsible for all measurements, including a repeated measurement of the subjects' first recordings. Patients showed significantly shorter maximum phonation times compared with healthy controls (on average, 6.6 seconds shorter). The averaged intraclass correlation coefficient (ICC) over all raters per trial for the first day was 0.998. The averaged reliability coefficient per rater and per trial for repeated measurements of the first day's data was 0.997, indicating high intrarater reliability. The mean reliability coefficient per day for one trial was 0.939. When using five trials, the reliability increased to 0.987. The reliability over five trials for a single day was 0.836; for 2 days, 0.911; and for 3 days, 0.935. To conclude, the maximum phonation time has proven to be a highly reliable measure in voice assessment. A single rater is sufficient to provide highly reliable measurements.
Maximum likelihood estimation of fractionally cointegrated systems
Lasak, Katarzyna
In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointegration relations, the degree of fractional cointegration, the matrix of the speed of adjustment...
Maximum likelihood estimation for integrated diffusion processes
Baltazar-Larios, Fernando; Sørensen, Michael
EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...
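As a self-contained illustration of likelihood-based estimation for diffusions, consider a far simpler setting than the integrated, partially observed case treated above: when a zero-mean Ornstein-Uhlenbeck process is observed directly at equidistant times, the exact transition is Gaussian AR(1) and the ML estimators have closed form. A minimal sketch (all function and parameter names are illustrative, not from the paper):

```python
import math
import random

def simulate_ou(theta, sigma, dt, n, x0=0.0, seed=1):
    """Exact simulation of a zero-mean Ornstein-Uhlenbeck process
    observed at time step dt (Gaussian AR(1) transition)."""
    rng = random.Random(seed)
    a = math.exp(-theta * dt)
    sd = sigma * math.sqrt((1.0 - a * a) / (2.0 * theta))
    xs = [x0]
    for _ in range(n):
        xs.append(a * xs[-1] + sd * rng.gauss(0.0, 1.0))
    return xs

def ou_mle(xs, dt):
    """Closed-form ML estimates of (theta, sigma) from a directly
    observed OU path: AR(1) regression, then invert the transition."""
    num = sum(x * y for x, y in zip(xs[:-1], xs[1:]))
    den = sum(x * x for x in xs[:-1])
    a_hat = num / den
    theta_hat = -math.log(a_hat) / dt
    resid = [y - a_hat * x for x, y in zip(xs[:-1], xs[1:])]
    var_eps = sum(r * r for r in resid) / len(resid)
    sigma_hat = math.sqrt(var_eps * 2.0 * theta_hat / (1.0 - a_hat ** 2))
    return theta_hat, sigma_hat
```

On a long simulated path the estimates recover the true parameters; the integrated-observation case above requires the EM machinery precisely because this direct regression is no longer available.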
Maximum gain of Yagi-Uda arrays
Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.
1971-01-01
Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum. Yagi-Uda arrays with equal and unequal spacing have also been optimised, with experimental verification.
Meng-Hui Wang
2015-08-01
Sliding mode strategy (SMS) for maximum power point tracking (MPPT) is used in this study of a human power generation system. This approach ensures maximum power at different rotation speeds to increase efficiency and corrects for the lack of robustness in traditional methods. The intelligent extension theory is used to reduce input saturation and high-frequency switching in the sliding mode strategy, as well as to increase the efficiency and response speed. The experimental results show that the efficiency of the extension SMS (ESMS) is 5% higher than in traditional SMS, and the response is 0.5 s faster.
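For context, the baseline such controllers improve on is plain perturb-and-observe hill climbing toward the maximum power point: perturb the operating point, and reverse direction whenever the measured power drops. A minimal sketch (the power curve and all names are invented for illustration; this is not the paper's sliding-mode controller):

```python
def mppt_hill_climb(power, x0=0.0, step=0.05, n=200):
    """Perturb-and-observe MPPT: step the operating point x, reverse
    direction whenever power decreases; oscillates around the MPP."""
    x, p = x0, power(x0)
    direction = 1.0
    for _ in range(n):
        x_new = x + direction * step
        p_new = power(x_new)
        if p_new < p:          # power dropped: we passed the peak
            direction = -direction
        x, p = x_new, p_new
    return x, p
```

On a concave power curve the tracker settles into a small limit cycle of width one step around the maximum, which is exactly the steady-state chattering that sliding-mode and extension-theory methods aim to reduce.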
Theoretical Astrophysics at Fermilab
2004-01-01
The Theoretical Astrophysics Group works on a broad range of topics ranging from string theory to data analysis in the Sloan Digital Sky Survey. The group is motivated by the belief that a deep understanding of fundamental physics is necessary to explain a wide variety of phenomena in the universe. During the three years 2001-2003 of our previous NASA grant, over 120 papers were written; ten of our postdocs went on to faculty positions; and we hosted or organized many workshops and conferences. Kolb and collaborators focused on the early universe, in particular models and ramifications of the theory of inflation. They also studied models with extra dimensions, new types of dark matter, and the second-order effects of super-horizon perturbations. Stebbins, Frieman, Hui, and Dodelson worked on phenomenological cosmology, extracting cosmological constraints from surveys such as the Sloan Digital Sky Survey. They also worked on theoretical topics such as weak lensing, reionization, and dark energy. This work has proved important to a number of experimental groups [including those at Fermilab] planning future observations. In general, the work of the Theoretical Astrophysics Group has served as a catalyst for experimental projects at Fermilab. An example of this is the Joint Dark Energy Mission. Fermilab is now a member of SNAP, and much of the work done here is by people formerly working on the accelerator. We have created an environment where many of these people made the transition from physics to astronomy. We also worked on many other topics related to NASA's focus: cosmic rays, dark matter, the Sunyaev-Zel'dovich effect, the galaxy distribution in the universe, and the Lyman-alpha forest. The group organized and hosted a number of conferences and workshops over the years covered by the grant. Among them were:
PTree: pattern-based, stochastic search for maximum parsimony phylogenies
Ivan Gregor
2013-06-01
Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next-generation sequencing technologies producing sets comprising thousands of sequences, robust identification, with phylogenetic inference methods, of the tree topology that is optimal according to standard criteria such as maximum parsimony, maximum likelihood, or posterior probability is a computationally very demanding task. Here, we describe a stochastic search method for a maximum parsimony tree, implemented in a software package we named PTree. Our method is based on a new pattern-based technique that enables us to infer intermediate sequences efficiently, where the incorporation of these sequences in the current tree topology yields a phylogenetic tree with a lower cost. Evaluation across multiple datasets showed that our method is comparable to the algorithms implemented in PAUP* or TNT, which are widely used by the bioinformatics community, in terms of topological accuracy and runtime. We show that our method can process large-scale datasets of 1,000-8,000 sequences. We believe that our novel pattern-based method enriches the current set of tools and methods for phylogenetic tree inference. The software is available at: http://algbio.cs.uni-duesseldorf.de/webapps/wa-download/.
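The parsimony cost that such searches minimise can itself be computed cheaply for a fixed topology with Fitch's small-parsimony algorithm: intersect child state sets when possible, otherwise take the union and count one change. A minimal single-character sketch (PTree's actual pattern-based machinery is far more elaborate):

```python
def fitch_score(tree, leaf_states):
    """Fitch's small-parsimony algorithm on a rooted binary tree.
    `tree` is a nested tuple of leaf names, `leaf_states` maps each
    leaf to its character state; returns (root state set, min changes)."""
    def rec(node):
        if isinstance(node, str):                 # leaf
            return {leaf_states[node]}, 0
        l_set, l_cost = rec(node[0])
        r_set, r_cost = rec(node[1])
        inter = l_set & r_set
        if inter:                                 # states agree: no change
            return inter, l_cost + r_cost
        return l_set | r_set, l_cost + r_cost + 1 # forced substitution
    return rec(tree)
```

For the tree ((A,B),(C,D)) with states G, G, T, G the minimum number of changes is 1, as expected: a single substitution on the branch leading to C suffices.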
Exact parallel maximum clique algorithm for general and protein graphs.
Depolli, Matjaž; Konc, Janez; Rozman, Kati; Trobec, Roman; Janežič, Dušanka
2013-09-23
A new exact parallel maximum clique algorithm, MaxCliquePara, which finds the maximum clique (the fully connected subgraph) in undirected general and protein graphs, is presented. First, a new branch-and-bound algorithm for finding a maximum clique on a single computer core, which builds on ideas presented in two published state-of-the-art sequential algorithms, is implemented. The new sequential MaxCliqueSeq algorithm is faster than the reference algorithms on both DIMACS benchmark graphs and protein-derived product graphs used for protein structural comparisons. Next, the MaxCliqueSeq algorithm is parallelized by splitting the branch-and-bound search tree across multiple cores, resulting in the MaxCliquePara algorithm. The ability to exploit all cores efficiently makes the new parallel MaxCliquePara algorithm markedly superior to other tested algorithms. On a 12-core computer, the parallelization provides up to 2 orders of magnitude faster execution on the large DIMACS benchmark graphs and up to an order of magnitude faster execution on protein product graphs. The algorithms are freely accessible at http://commsys.ijs.si/~matjaz/maxclique.
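The branch-and-bound idea underlying such solvers fits in a few lines: grow a clique, keep only candidates adjacent to every chosen vertex, and prune a branch when even taking all remaining candidates cannot beat the incumbent. A toy sequential sketch (not the published MaxCliqueSeq/MaxCliquePara code, which uses much stronger colouring bounds):

```python
def max_clique(adj):
    """Branch-and-bound maximum clique search on an undirected graph
    given as {vertex: set(neighbours)}; returns one maximum clique."""
    best = []
    def expand(clique, cand):
        nonlocal best
        if len(clique) > len(best):
            best = list(clique)
        cand = set(cand)
        while cand:
            if len(clique) + len(cand) <= len(best):
                return                       # bound: branch cannot improve
            v = cand.pop()
            expand(clique + [v], cand & adj[v])  # candidates stay adjacent to all
    expand([], set(adj))
    return best
```

Parallel versions such as the one above distribute the subtrees rooted at different `v` choices across cores; the sequential bound is unchanged.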
Institute for Theoretical Physics
Giddings, S.B.; Ooguri, H.; Peet, A.W.; Schwarz, J.H.
1998-06-01
String theory is the only serious candidate for a unified description of all known fundamental particles and interactions, including gravity, in a single theoretical framework. Over the past two years, activity in this subject has grown rapidly, thanks to dramatic advances in understanding the dynamics of supersymmetric field theories and string theories. The cornerstone of these new developments is the discovery of duality which relates apparently different string theories and transforms difficult strongly coupled problems of one theory into weakly coupled problems of another theory.
Theoretical astrophysics an introduction
Bartelmann, Matthias
2013-01-01
A concise yet comprehensive introduction to the central theoretical concepts of modern astrophysics, presenting hydrodynamics, radiation, and stellar dynamics all in one textbook. Adopting a modular structure, the author illustrates a small number of fundamental physical methods and principles, which are sufficient to describe and understand a wide range of seemingly very diverse astrophysical phenomena and processes. For example, the formulae that define the macroscopic behavior of stellar systems are all derived in the same way from the microscopic distribution function. This function it
Shivamoggi, Bhimsen K
1998-01-01
"Although there are many texts and monographs on fluid dynamics, I do not know of any which is as comprehensive as the present book. It surveys nearly the entire field of classical fluid dynamics in an advanced, compact, and clear manner, and discusses the various conceptual and analytical models of fluid flow." - Foundations of Physics on the first edition. Theoretical Fluid Dynamics functions equally well as a graduate-level text and a professional reference. Steering a middle course between the empiricism of engineering and the abstractions of pure mathematics, the author focuses
Theoretical Optics An Introduction
Römer, Hartmann
2004-01-01
Starting from basic electrodynamics, this volume provides a solid, yet concise introduction to theoretical optics, containing topics such as nonlinear optics, light-matter interaction, and modern topics in quantum optics, including entanglement, cryptography, and quantum computation. The author, with many years of experience in teaching and research, goes way beyond the scope of traditional lectures, enabling readers to keep up with the current state of knowledge. Both content and presentation make it essential reading for graduate and PhD students as well as a valuable reference for researchers
Theoretical solid state physics
Haug, Albert
2013-01-01
Theoretical Solid State Physics, Volume 1 focuses on the study of solid state physics. The volume first takes a look at the basic concepts and structures of solid state physics, including potential energies of solids, concept and classification of solids, and crystal structure. The book then explains single-electron approximation wherein the methods for calculating energy bands; electron in the field of crystal atoms; laws of motion of the electrons in solids; and electron statistics are discussed. The text describes general forms of solutions and relationships, including collective electron i
Stimulus-dependent maximum entropy models of neural population codes.
Granot-Atedgi, Einat; Tkačik, Gašper; Segev, Ronen; Schneidman, Elad
2013-01-01
Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. For large populations, direct sampling of these distributions is impossible, and so we must rely on constructing appropriate models. We show here that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. We introduce the stimulus-dependent maximum entropy (SDME) model, a minimal extension of the canonical linear-nonlinear model of a single neuron to a pairwise-coupled neural population. We find that the SDME model gives a more accurate account of single cell responses and in particular significantly outperforms uncoupled models in reproducing the distributions of population codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like average surprise and information transmission in a neural population.
On the Threshold of Maximum-Distance Separable Codes
Kindarji, Bruno; Chabanne, Hervé
2010-01-01
Starting from a practical use of Reed-Solomon codes in a cryptographic scheme published in Indocrypt'09, this paper deals with the threshold of linear $q$-ary error-correcting codes. The security of this scheme is based on the intractability of polynomial reconstruction when there is too much noise in the vector. Our approach switches from this paradigm to an information-theoretic point of view: is there a class of elements that are so far away from the code that the list size is always superpolynomial? Or, dually speaking, is Maximum-Likelihood decoding almost surely impossible? We relate this issue to the decoding threshold of a code, and show that when the minimal distance of the code is high enough, the threshold effect is very sharp. In the second part, we exhibit explicit lower bounds on the threshold of Maximum-Distance Separable codes such as Reed-Solomon codes, and compute the threshold for the toy example that motivates this study.
Neutrino Mixing: Theoretical Overview
Altarelli, Guido
2013-01-01
We present a concise review of the recent important experimental developments on neutrino mixing (hints for sterile neutrinos, large $\theta_{13}$, possibly non-maximal $\theta_{23}$, approaching sensitivity on $\delta_{CP}$) and their implications for models of neutrino mixing. The new data disfavour many models, but the surviving ones still span a wide range, going from Anarchy (no structure, no symmetry in the lepton sector) to a maximum of symmetry, as for the models based on discrete non-abelian flavour groups, which can be improved following the indications from the data.
Maximum entropy reconstruction of spin densities involving non uniform prior
Schweizer, J.; Ressouche, E. [DRFMC/SPSMS/MDN CEA-Grenoble (France); Papoular, R.J. [CEA-Saclay, Gif sur Yvette (France). Lab. Leon Brillouin; Tasset, F. [Inst. Laue Langevin, Grenoble (France); Zheludev, A.I. [Brookhaven National Lab., Upton, NY (United States). Physics Dept.
1997-09-01
Diffraction experiments give microscopic information on structures in crystals. A method which uses the concept of maximum entropy (MaxEnt) appears to be a formidable improvement in the treatment of diffraction data. This method is based on a Bayesian approach: among all the maps compatible with the experimental data, it selects the one which has the highest prior (intrinsic) probability. Considering that all the points of the map are equally probable, this probability (flat prior) is expressed via the Boltzmann entropy of the distribution. This method has been used for the reconstruction of charge densities from X-ray data, for maps of nuclear densities from unpolarized neutron data, as well as for distributions of spin density. The density maps obtained by this method, as compared to those resulting from the usual inverse Fourier transformation, are tremendously improved. In particular, any substantial deviation from the background is really contained in the data, as it costs entropy compared to a map that would ignore such features. However, in most cases, before the measurements are performed, some knowledge exists about the distribution which is investigated. It can range from the simple information of the type of scattering electrons to an elaborate theoretical model. In these cases, the uniform prior, which considers all the different pixels as equally likely, is too weak a requirement and has to be replaced. In a rigorous Bayesian analysis, Skilling has shown that prior knowledge can be encoded into the Maximum Entropy formalism through a model $m(\vec r)$, via a new definition for the entropy given in this paper. In the absence of any data, the maximum of the entropy functional is reached for $\rho(\vec r) = m(\vec r)$. Any substantial departure from the model, observed in the final map, is really contained in the data as, with the new definition, it costs entropy. This paper presents illustrations of model testing.
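The role of the non-uniform prior can be illustrated numerically with a simplified, normalised form of the entropy functional, $S(\rho) = -\sum_i \rho_i \ln(\rho_i/m_i)$ (the negative Kullback-Leibler divergence against the model): $S \le 0$, with equality exactly at $\rho = m$, so with no data the MaxEnt map reproduces the model, as stated above:

```python
import math

def skilling_entropy(rho, m):
    """Simplified entropy against a prior model m (both normalised):
    S(rho) = -sum_i rho_i * ln(rho_i / m_i) = -KL(rho || m) <= 0,
    with the maximum S = 0 attained only at rho = m."""
    return -sum(r * math.log(r / mi) for r, mi in zip(rho, m))
```

Any map that departs from the model has strictly negative entropy, i.e. the departure "costs entropy" and must be paid for by the data.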
Maximum detection range limitation of pulse laser radar with Geiger-mode avalanche photodiode array
Luo, Hanjun; Xu, Benlian; Xu, Huigang; Chen, Jingbo; Fu, Yadan
2015-05-01
When designing and evaluating the performance of a laser radar system, the achievable maximum detection range is an essential parameter. The purpose of this paper is to propose a theoretical model of maximum detection range for simulating the Geiger-mode laser radar's ranging performance. Based on the laser radar equation and the requirement of the minimum acceptable detection probability, and assuming that the primary electrons triggered by the echo photons obey Poisson statistics, the maximum-range theoretical model is established. By using the system design parameters, the influence of five main factors, namely emitted pulse energy, noise, echo position, atmospheric attenuation coefficient, and target reflectivity, on the maximum detection range is investigated. The results show that stronger emitted pulse energy, a lower noise level, an earlier echo position in the range gate, a lower atmospheric attenuation coefficient, and higher target reflectivity result in a greater maximum detection range. It is also shown that it is important to select the minimum acceptable detection probability, which is equivalent to a system signal-to-noise ratio, so as to produce a greater maximum detection range and a lower false-alarm probability.
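Under the Poisson assumption above, the single-gate detection probability is $P_d = 1 - e^{-n_{pe}}$ in the mean primary-electron count; combining this with a radar-equation scaling for $n_{pe}(R)$, the maximum range meeting a minimum acceptable detection probability can be found by a simple scan. A sketch under simplified assumptions (the scaling law and all parameter names are illustrative, not the paper's full model):

```python
import math

def detection_prob(n_pe):
    """Poisson statistics: probability that at least one primary
    electron triggers the Geiger-mode APD."""
    return 1.0 - math.exp(-n_pe)

def max_range(n_ref, r_ref, gamma, p_min, r_max=20000.0, step=1.0):
    """Largest range R (metres) with detection_prob >= p_min, for a
    simplified radar-equation scaling
    n_pe(R) = n_ref * (r_ref/R)**2 * exp(-2*gamma*(R - r_ref)),
    i.e. inverse-square spreading plus two-way atmospheric loss."""
    best = 0.0
    r = r_ref
    while r <= r_max:
        n_pe = n_ref * (r_ref / r) ** 2 * math.exp(-2.0 * gamma * (r - r_ref))
        if detection_prob(n_pe) >= p_min:
            best = r
        r += step
    return best
```

The scan reproduces the qualitative trends reported above: more pulse energy (larger `n_ref`) extends the maximum range, while stronger attenuation (larger `gamma`) shortens it.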
Model Selection Through Sparse Maximum Likelihood Estimation
Banerjee, Onureena; D'Aspremont, Alexandre
2007-01-01
We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan, 2006), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...
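The l_1-penalised regression building block behind the block-coordinate-descent interpretation amounts to cyclic soft-thresholding updates. A generic lasso coordinate-descent sketch on plain least squares (an illustration of the penalised-regression step, not the authors' covariance-selection code):

```python
def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for the lasso:
    minimise 0.5 * ||y - X w||^2 + lam * ||w||_1
    by cycling over coordinates with soft-thresholding updates."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    col_sq = [sum(X[i][j] ** 2 for i in range(n)) for j in range(p)]
    for _ in range(n_iter):
        for j in range(p):
            # correlation of feature j with the partial residual
            rho = sum(X[i][j] * (y[i] - sum(X[i][k] * w[k]
                                            for k in range(p) if k != j))
                      for i in range(n))
            if rho < -lam:                    # soft-thresholding
                w[j] = (rho + lam) / col_sq[j]
            elif rho > lam:
                w[j] = (rho - lam) / col_sq[j]
            else:
                w[j] = 0.0                    # coefficient set exactly to zero
    return w
```

The exact zeros produced by the threshold are what make the fitted graphical model sparse in the covariance-selection setting.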
Maximum-entropy description of animal movement.
Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M
2015-03-01
We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.
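Of the processes in this class, integrated Ornstein-Uhlenbeck motion is easy to simulate, and the fluctuation-dissipation requirement mentioned above fixes the stationary velocity variance at $\sigma^2/(2\theta)$. A minimal Euler-Maruyama sketch with illustrative parameter names:

```python
import math
import random

def integrated_ou(theta, sigma, dt, n, seed=0):
    """Euler-Maruyama simulation of integrated Ornstein-Uhlenbeck
    motion: velocity v follows an OU process (relaxation rate theta,
    noise intensity sigma), position x integrates v.  A toy version
    of the correlated-velocity movement models discussed above."""
    rng = random.Random(seed)
    x, v = 0.0, 0.0
    xs, vs = [x], [v]
    sq = math.sqrt(dt)
    for _ in range(n):
        v += -theta * v * dt + sigma * sq * rng.gauss(0.0, 1.0)
        x += v * dt
        xs.append(x)
        vs.append(v)
    return xs, vs
```

On a long run the sample variance of the velocity approaches $\sigma^2/(2\theta)$, the fluctuation-dissipation balance that keeps the process inside the maximum-entropy class.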
Pareto versus lognormal: a maximum entropy test.
Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano
2011-08-01
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating process even when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
Maximum Variance Hashing via Column Generation
Lei Luo
2013-01-01
item search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by the classic nonlinear dimensionality reduction algorithm—maximum variance unfolding, we propose a novel unsupervised hashing method, named maximum variance hashing, in this work. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used. We analyze the algorithms First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find the competitive ratio of various natural algorithms. We study the general versions of the problems as well as the parameterized versions where there is an upper bound of 1/k on the item sizes, for some integer k.
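For reference, the First-Fit-Decreasing heuristic named above is only a few lines: sort items by non-increasing size and place each in the first bin with room, opening a new bin when none fits (First-Fit-Increasing is the same with an ascending sort). The maximum resource variants then ask how many bins such rules end up using:

```python
def first_fit_decreasing(items, capacity=1.0):
    """First-Fit-Decreasing bin packing: sort items by non-increasing
    size, put each item in the first bin it fits, open a new bin if
    none fits.  Returns the list of bins (each a list of item sizes)."""
    bins = []
    for size in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + size <= capacity + 1e-12:  # tolerance for float sums
                b.append(size)
                break
        else:
            bins.append([size])
    return bins
```

For example, items of size 0.7, 0.6, 0.4, 0.3 with unit capacity pack into two bins, {0.7, 0.3} and {0.6, 0.4}.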
Zipf's law, power laws and maximum entropy
Visser, Matt
2013-04-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
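The single-constraint construction described above is easy to reproduce numerically: maximising the Shannon entropy on {1, ..., n} subject to a fixed value of E[ln k] gives $p_k \propto k^{-\lambda}$, and the exponent $\lambda$ is pinned down by bisection on the constraint. A sketch of the idea (not the RGF model itself; function names are illustrative):

```python
import math

def mean_log(lam, n):
    """E[ln k] under p_k proportional to k**(-lam) on {1..n};
    this is monotonically decreasing in lam."""
    w = [k ** -lam for k in range(1, n + 1)]
    z = sum(w)
    return sum(math.log(k) * wk for k, wk in zip(range(1, n + 1), w)) / z

def zipf_maxent(n, target_mean_log, tol=1e-12):
    """Maximum-entropy distribution on {1..n} with E[ln k] fixed:
    the solution is a power law p_k ~ k**(-lam); find lam by
    bisection on the constraint."""
    lo, hi = -10.0, 10.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_log(mid, n) > target_mean_log:
            lo = mid      # mean_log too large: increase lam
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [k ** -lam for k in range(1, n + 1)]
    z = sum(w)
    return lam, [x / z for x in w]
```

Feeding back the E[ln k] of an exact Zipf distribution (lam = 1) recovers the unit exponent, illustrating that the single logarithmic constraint suffices to produce the power law.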
Zipf's law, power laws, and maximum entropy
Visser, Matt
2012-01-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines - from astronomy to demographics to economics to linguistics to zoology, and even warfare. A recent model of random group formation [RGF] attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present article I argue that the cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
Regions of constrained maximum likelihood parameter identifiability
Lee, C.-H.; Herget, C. J.
1975-01-01
This paper considers the parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian-white distributed measurement errors. The system parameterization is assumed to be known. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems. It is shown that if the vector of true parameters is locally CML identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the CML estimation sequence will converge to the true parameters.
A Maximum Radius for Habitable Planets.
Alibert, Yann
2015-09-01
We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: (1) a surface temperature and pressure compatible with the existence of liquid water, and (2) no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the overall maximum radius that a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced when considering planets with higher Fe/Si ratios, and taking into account irradiation effects on the structure of the gas envelope.
Approximating the maximum weight clique using replicator dynamics.
Bomze, I R; Pelillo, M; Stix, V
2000-01-01
Given an undirected graph with weights on the vertices, the maximum weight clique problem (MWCP) is to find a subset of mutually adjacent vertices (i.e., a clique) having the largest total weight. This is a generalization of the classical problem of finding the maximum cardinality clique of an unweighted graph, which arises as a special case of the MWCP when all the weights associated to the vertices are equal. The problem is known to be NP-hard for arbitrary graphs and, according to recent theoretical results, so is the problem of approximating it within a constant factor. Although there has recently been much interest around neural-network algorithms for the unweighted maximum clique problem, no effort has been directed so far toward its weighted counterpart. In this paper, we present a parallel, distributed heuristic for approximating the MWCP based on dynamics principles developed and studied in various branches of mathematical biology. The proposed framework centers around a recently introduced continuous characterization of the MWCP which generalizes an earlier remarkable result by Motzkin and Straus. This allows us to formulate the MWCP (a purely combinatorial problem) in terms of a continuous quadratic programming problem. One drawback associated with this formulation, however, is the presence of "spurious" solutions, and we present characterizations of these solutions. To avoid them we introduce a new regularized continuous formulation of the MWCP inspired by previous works on the unweighted problem, and show how this approach completely solves the problem. The continuous formulation of the MWCP naturally maps onto a parallel, distributed computational network whose dynamical behavior is governed by the so-called replicator equations. These are dynamical systems introduced in evolutionary game theory and population genetics to model evolutionary processes on a macroscopic scale. We present theoretical results which guarantee that the solutions provided by
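The replicator/Motzkin-Straus connection can be demonstrated on an unweighted toy graph: iterating $x_i \leftarrow x_i (Ax)_i / (x^T A x)$ on the simplex drives $x^T A x$ toward $1 - 1/\omega(G)$, from which the clique number is read off. A toy unweighted sketch (not the paper's regularised weighted formulation, which additionally suppresses the spurious solutions discussed above):

```python
def motzkin_straus_clique_size(adj, n, iters=2000):
    """Replicator dynamics on the Motzkin-Straus program
    max x'Ax over the simplex, whose optimum is 1 - 1/omega(G).
    `adj` maps each vertex 0..n-1 to its neighbour list; returns the
    estimated clique number and the limit point x."""
    # slightly perturbed barycenter so the dynamics leave saddle points
    x = [1.0 + 1e-3 * i for i in range(n)]
    s = sum(x)
    x = [v / s for v in x]
    for _ in range(iters):
        ax = [sum(x[j] for j in adj[i]) for i in range(n)]   # (A x)_i
        f = sum(x[i] * ax[i] for i in range(n))              # x' A x
        x = [x[i] * ax[i] / f for i in range(n)]             # replicator step
    ax = [sum(x[j] for j in adj[i]) for i in range(n)]
    f = sum(x[i] * ax[i] for i in range(n))
    return round(1.0 / (1.0 - f)), x
```

On a graph whose only triangle is {0, 1, 2}, the dynamics concentrate the mass on those three vertices and $x^T A x \to 2/3$, giving clique number 3.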
Maximum entropy models of ecosystem functioning
Bertram, Jason, E-mail: jason.bertram@anu.edu.au [Research School of Biology, The Australian National University, Canberra ACT 0200 (Australia)
2014-12-05
Using organism-level traits to deduce community-level relationships is a fundamental problem in theoretical ecology. This problem parallels the physical one of using particle properties to deduce macroscopic thermodynamic laws, which was successfully achieved with the development of statistical physics. Drawing on this parallel, theoretical ecologists from Lotka onwards have attempted to construct statistical mechanistic theories of ecosystem functioning. Jaynes’ broader interpretation of statistical mechanics, which hinges on the entropy maximisation algorithm (MaxEnt), is of central importance here because the classical foundations of statistical physics do not have clear ecological analogues (e.g. phase space, dynamical invariants). However, models based on the information theoretic interpretation of MaxEnt are difficult to interpret ecologically. Here I give a broad discussion of statistical mechanical models of ecosystem functioning and the application of MaxEnt in these models. Emphasising the sample frequency interpretation of MaxEnt, I show that MaxEnt can be used to construct models of ecosystem functioning which are statistical mechanical in the traditional sense using a savanna plant ecology model as an example.
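The MaxEnt recipe described here can be made concrete: maximizing entropy subject to a mean constraint yields a Gibbs-form distribution whose Lagrange multiplier is fixed by the constraint. The states and target mean below are toy assumptions, not the paper's savanna model.

```python
import numpy as np
from scipy.optimize import brentq

# MaxEnt over five abundance states subject to a prescribed mean
# (toy numbers, purely for illustration).
states = np.arange(5)
target_mean = 1.2

def mean_at(beta):
    w = np.exp(-beta * states)   # Gibbs form solves the MaxEnt problem
    return (w / w.sum()) @ states

# Tune the Lagrange multiplier beta so the constraint is met exactly.
beta = brentq(lambda b: mean_at(b) - target_mean, -10.0, 10.0)
p = np.exp(-beta * states)
p /= p.sum()
print(np.round(p, 3), round(float(p @ states), 3))
```

Among all distributions with this mean, the exponential family member found here has the largest Shannon entropy, which is exactly the MaxEnt selection principle.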
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-01-01
This paper investigates commercial engines with finite-capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ (P m)] in commodity flow processes, incorporating the effects of the price elasticities of supply and demand. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by t...
A stochastic maximum principle via Malliavin calculus
Øksendal, Bernt; Zhou, Xun Yu; Meyer-Brandis, Thilo
2008-01-01
This paper considers a controlled Itô-Lévy process where the information available to the controller is possibly less than the overall information. All the system coefficients and the objective performance functional are allowed to be random, possibly non-Markovian. Malliavin calculus is employed to derive a maximum principle for the optimal control of such a system where the adjoint process is explicitly expressed.
Tissue radiation response with maximum Tsallis entropy.
Sotolongo-Grau, O; Rodríguez-Pérez, D; Antoranz, J C; Sotolongo-Costa, Oscar
2010-10-08
The expression of survival factors for radiation damaged cells is currently based on probabilistic assumptions and experimentally fitted for each tumor, radiation, and conditions. Here, we show how the simplest of these radiobiological models can be derived from the maximum entropy principle of the classical Boltzmann-Gibbs expression. We extend this derivation using the Tsallis entropy and a cutoff hypothesis, motivated by clinical observations. The obtained expression shows a remarkable agreement with the experimental data found in the literature.
Maximum Estrada Index of Bicyclic Graphs
Wang, Long; Wang, Yi
2012-01-01
Let $G$ be a simple graph of order $n$, and let $\lambda_1(G),\lambda_2(G),\ldots,\lambda_n(G)$ be the eigenvalues of the adjacency matrix of $G$. The Estrada index of $G$ is defined as $EE(G)=\sum_{i=1}^{n}e^{\lambda_i(G)}$. In this paper we determine the unique graph with maximum Estrada index among bicyclic graphs with fixed order.
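The definition translates directly into code: compute the adjacency spectrum and sum the exponentials. The sketch below checks the 4-cycle C4, whose spectrum {2, 0, 0, -2} is known in closed form (an arbitrary toy graph, not one of the extremal bicyclic graphs from the paper).

```python
import numpy as np

# Estrada index EE(G) = sum_i e^{lambda_i(G)} over adjacency eigenvalues.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)   # 4-cycle C4

def estrada_index(adj):
    eigvals = np.linalg.eigvalsh(adj)   # symmetric matrix -> real spectrum
    return float(np.exp(eigvals).sum())

ee = estrada_index(A)
print(round(ee, 4))   # e^2 + 2*e^0 + e^-2 -> 9.5244
```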
Maximum privacy without coherence, zero-error
Leung, Debbie; Yu, Nengkun
2016-09-01
We study the possible difference between the quantum and the private capacities of a quantum channel in the zero-error setting. For a family of channels introduced by Leung et al. [Phys. Rev. Lett. 113, 030512 (2014)], we demonstrate an extreme difference: the zero-error quantum capacity is zero, whereas the zero-error private capacity is maximum given the quantum output dimension.
Direct maximum parsimony phylogeny reconstruction from genotype data
Ravi R
2007-12-01
Abstract Background Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data are more commonly available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data. Phylogenetic applications for autosomal data must therefore rely on other methods for first computationally inferring haplotypes from genotypes. Results In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Conclusion Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower bound on the number of mutations that the genetic region has undergone.
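The minimum-mutation criterion at the heart of maximum parsimony can be illustrated with Fitch's small-parsimony algorithm, which scores a fixed tree with known leaf states. This is a generic sketch with a made-up tree and states, not the paper's genotype-based reconstruction method.

```python
# Fitch's algorithm: count the minimum number of mutations needed to
# explain observed leaf states on a fixed binary tree.
def fitch(node):
    """Return (candidate state set, mutation count) for a subtree.

    A node is either a leaf state string or a (left, right) pair.
    """
    if isinstance(node, str):              # leaf: observed state
        return {node}, 0
    (ls, lc), (rs, rc) = fitch(node[0]), fitch(node[1])
    if ls & rs:                            # intersection: no extra mutation
        return ls & rs, lc + rc
    return ls | rs, lc + rc + 1            # union: charge one mutation

# Hypothetical tree over five leaves with states A/G at one site.
tree = (("A", "A"), ("G", ("A", "G")))
states, score = fitch(tree)
print(score)   # -> 2
```

The full maximum parsimony problem additionally searches over tree topologies, which is what makes it hard; Fitch scoring is the inner loop of that search.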
Automatic maximum entropy spectral reconstruction in NMR.
Mobli, Mehdi; Maciejewski, Mark W; Gryk, Michael R; Hoch, Jeffrey C
2007-10-01
Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system.
Maximum entropy analysis of cosmic ray composition
Nosek, Dalibor; Vícha, Jakub; Trávníček, Petr; Nosková, Jana
2016-01-01
We focus on the primary composition of cosmic rays with the highest energies that cause extensive air showers in the Earth's atmosphere. A way of examining the two lowest order moments of the sample distribution of the depth of shower maximum is presented. The aim is to show that useful information about the composition of the primary beam can be inferred with the limited knowledge we have about the processes underlying these observations. In order to describe how the moments of the depth of shower maximum depend on the type of primary particles and their energies, we utilize a superposition model. Using the principle of maximum entropy, we are able to determine what trends in the primary composition are consistent with the input data, while relying on a limited amount of information from shower physics. Some capabilities and limitations of the proposed method are discussed. In order to achieve a realistic description of the primary mass composition, we pay special attention to the choice of the parameters of the sup...
A Maximum Resonant Set of Polyomino Graphs
Zhang Heping
2016-05-01
A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square) with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.
The maximum rate of mammal evolution
Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.
2012-03-01
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, 1,000-, and 5,000-fold, respectively. Values for whales were about half those lengths (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
Minimal Length, Friedmann Equations and Maximum Density
Awad, Adel
2014-01-01
Inspired by Jacobson's thermodynamic approach [gr-qc/9504004], Cai et al. [hep-th/0501055, hep-th/0609128] have shown the emergence of Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation [hep-th/0609128] of Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure $p(\rho,a)$ leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature $k$. As an example w...
Maximum saliency bias in binocular fusion
Lu, Yuhao; Stafford, Tom; Fox, Charles
2016-07-01
Subjective experience at any instant consists of a single ("unitary"), coherent interpretation of sense data rather than a "Bayesian blur" of alternatives. However, computation of Bayes-optimal actions has no role for unitary perception, instead being required to integrate over every possible action-percept pair to maximise expected utility. So what is the role of unitary coherent percepts, and how are they computed? Recent work provided objective evidence for non-Bayes-optimal, unitary coherent, perception and action in humans; and further suggested that the percept selected is not the maximum a posteriori percept but is instead affected by utility. The present study uses a binocular fusion task first to reproduce the same effect in a new domain, and second, to test multiple hypotheses about exactly how utility may affect the percept. After accounting for high experimental noise, it finds that both Bayes optimality (maximise expected utility) and the previously proposed maximum-utility hypothesis are outperformed in fitting the data by a modified maximum-salience hypothesis, using unsigned utility magnitudes in place of signed utilities in the bias function.
Maximum-biomass prediction of homofermentative Lactobacillus.
Cui, Shumao; Zhao, Jianxin; Liu, Xiaoming; Chen, Yong Q; Zhang, Hao; Chen, Wei
2016-07-01
Fed-batch and pH-controlled cultures have been widely used for industrial production of probiotics. The aim of this study was to systematically investigate the relationship between the maximum biomass of different homofermentative Lactobacillus and lactate accumulation, and to develop a prediction equation for the maximum biomass concentration in such cultures. The accumulation of the end products and the depletion of nutrients by various strains were evaluated. In addition, the minimum inhibitory concentrations (MICs) of acid anions for various strains at pH 7.0 were examined. The lactate concentration at the point of complete inhibition was not significantly different from the MIC of lactate for all of the strains, although the inhibition mechanism of lactate and acetate on Lactobacillus rhamnosus was different from the other strains, which were inhibited by the osmotic pressure caused by acid anions at pH 7.0. When the lactate concentration accumulated to the MIC, the strains stopped growing. The maximum biomass was closely related to the biomass yield per unit of lactate produced (Y_X/P) and the MIC (C) of lactate for different homofermentative Lactobacillus. Based on the experimental data obtained using different homofermentative Lactobacillus, a prediction equation was established as follows: X_max - X_0 = (0.59 ± 0.02) · Y_X/P · C.
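Given strain-specific measurements of the yield Y_X/P and the MIC, the fitted relation is straightforward to apply. All numerical values below are invented for illustration; only the coefficient 0.59 comes from the abstract.

```python
# Fitted relation from the study: X_max - X_0 = (0.59 ± 0.02) * Y_XP * C.
# Inputs are hypothetical, not values reported in the paper.
Y_XP = 0.12    # biomass yield per unit lactate, g/g (assumed)
C = 90.0       # MIC of lactate at pH 7.0, g/L (assumed)
X_0 = 0.05     # inoculum biomass concentration, g/L (assumed)

X_max = X_0 + 0.59 * Y_XP * C   # predicted maximum biomass, g/L
print(round(X_max, 3))   # -> 6.422
```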
On the thermodynamic efficiency of Ca²⁺-ATPase molecular machines.
Lervik, Anders; Bresme, Fernando; Kjelstrup, Signe; Rubí, J Miguel
2012-09-19
Experimental studies have shown that the activity of the reconstituted molecular pump Ca(2+)-ATPase strongly depends on the thickness of the supporting bilayer. It is thus expected that the bilayer structure will have an impact on the thermodynamic efficiency of this nanomachine. Here, we introduce a nonequilibrium-thermodynamics theoretical approach to estimate the thermodynamic efficiency of the Ca(2+)-ATPase from analysis of available experimental data about ATP hydrolysis and Ca(2+) transport. We find that the entropy production, i.e., the heat released to the surroundings under working conditions, is approximately constant for bilayers containing phospholipids with hydrocarbon chains of 18-22 carbon atoms. Our estimates for the heat released during the pump operation agree with results obtained from separate calorimetric experiments on the Ca(2+)-ATPase derived from sarcoplasmic reticulum. We show that the thermodynamic efficiency of the reconstituted Ca(2+)-ATPase reaches a maximum for bilayer thicknesses corresponding to maximum activity. Surprisingly, the estimated thermodynamic efficiency is very low, ∼12%. We discuss the significance of this result as representative of the efficiency of other nanomachines, and we address the influence of the experimental set-up on such a low efficiency. Overall, our approach provides a general route to estimate thermodynamic efficiencies and heat dissipation in experimental studies of nanomachines.
A game theoretic approach for trading discharge permits in rivers.
Niksokhan, Mohammad Hossein; Kerachian, Reza; Karamouz, Mohammad
2009-01-01
In this paper, a new Cooperative Trading Discharge Permit (CTDP) methodology is designed for estimating equitable and efficient treatment cost allocation among dischargers in a river system considering their conflicting interests. The methodology consists of two main steps: (1) initial treatment cost allocation and (2) equitable treatment cost reallocation. In the first step, a Pareto front among objectives is developed using a powerful and recently developed multi-objective genetic algorithm known as Nondominated Sorting Genetic Algorithm-II (NSGA-II). The objectives of the optimization model are considered to be the average treatment level of dischargers and a fuzzy risk of violating the water quality standards. The fuzzy risk is evaluated using the Monte Carlo analysis. The best non-dominated solution on the Pareto front, which provides the initial cost allocation to dischargers, is selected using the Young Bargaining Theory (YBT). In the second step, some cooperative game theoretic approaches are utilized to investigate how the maximum saving cost of participating dischargers in a coalition can be fairly allocated to them. The final treatment cost allocation provides the optimal trading discharge permit policies. The practical utility of the proposed methodology for river water quality management is illustrated through a realistic case study of the Zarjub river in the northern part of Iran.
Theoretical and Experimental Spectroscopic Analysis of Cyano-Substituted Styrylpyridine Compounds
Jorge Lopez-Cruz
2013-02-01
A combined theoretical and experimental study on the structure, infrared, UV-Vis and 1H NMR data of trans-2-(m-cyanostyryl)pyridine, trans-2-[3-methyl-(m-cyanostyryl)]pyridine and trans-4-(m-cyanostyryl)pyridine is presented. The synthesis was carried out with an efficient Knoevenagel condensation using green chemistry conditions. Theoretical geometry optimizations and their IR spectra were carried out using the Density Functional Theory (DFT) in both gas and solution phases. For theoretical UV-Vis and 1H NMR spectra, the Time-Dependent DFT (TD-DFT) and the Gauge-Including Atomic Orbital (GIAO) methods were used, respectively. The theoretical characterization matched the experimental measurements, showing a good correlation. The effect of cyano- and methyl-substituents, as well as of the N-atom position in the pyridine ring on the UV-Vis, IR and NMR spectra, was evaluated. The UV-Vis results showed no significant effect due to electron-withdrawing cyano- and electron-donating methyl-substituents. The N-atom position, however, caused a slight change in the maximum absorption wavelengths. The IR normal modes were assigned for the cyano- and methyl-groups. 1H NMR spectra showed the typical doublet signals due to protons in the trans position of a double bond. The theoretical characterization was visibly useful to assign accurately the signals in IR and 1H NMR spectra, as well as to identify the most probable conformation that could be present in the formation of the styrylpyridine-like compounds.
Theoretical Particle Astrophysics
Kamionkowski, Marc
2013-08-07
The research carried out under this grant encompassed work on the early Universe, dark matter, and dark energy. We developed CMB probes for primordial baryon inhomogeneities, primordial non-Gaussianity, cosmic birefringence, gravitational lensing by density perturbations and gravitational waves, and departures from statistical isotropy. We studied the detectability of wiggles in the inflation potential in string-inspired inflation models. We studied novel dark-matter candidates and their phenomenology. This work helped advance the DoE's Cosmic Frontier (and also Energy and Intensity Frontiers) by finding synergies between a variety of different experimental efforts, by developing new searches, science targets, and analyses for existing/forthcoming experiments, and by generating ideas for new next-generation experiments.
Theoretical physics 5 thermodynamics
Nolting, Wolfgang
2017-01-01
This concise textbook offers a clear and comprehensive introduction to thermodynamics, one of the core components of undergraduate physics courses. It follows on naturally from the previous volumes in this series, defining macroscopic variables, such as internal energy, entropy and pressure, together with thermodynamic principles. The first part of the book introduces the laws of thermodynamics and thermodynamic potentials. More complex themes are covered in the second part of the book, which describes phases and phase transitions in depth. Ideally suited to undergraduate students with some grounding in classical mechanics, the book is enhanced throughout with learning features such as boxed inserts and chapter summaries, with key mathematical derivations highlighted to aid understanding. The text is supported by numerous worked examples and end of chapter problem sets. About the Theoretical Physics series Translated from the renowned and highly successful German editions, the eight volumes of this series cove...
Theoretical Molecular Biophysics
Scherer, Philipp
2010-01-01
"Theoretical Molecular Biophysics" is an advanced study book for students, shortly before or after completing undergraduate studies, in physics, chemistry or biology. It provides the tools for an understanding of elementary processes in biology, such as photosynthesis on a molecular level. A basic knowledge in mechanics, electrostatics, quantum theory and statistical physics is desirable. The reader will be exposed to basic concepts in modern biophysics such as entropic forces, phase separation, potentials of mean force, proton and electron transfer, heterogeneous reactions, coherent and incoherent energy transfer, as well as molecular motors. Basic concepts such as phase transitions of biopolymers, electrostatics, protonation equilibria, ion transport, radiationless transitions as well as energy- and electron transfer are discussed within the frame of simple models.
Social Security: Theoretical Aspects
O. I. Kashnik
2013-01-01
The paper looks at the phenomena of security and social security from the philosophical, sociological and psychological perspective. The undertaken analysis of domestic and foreign scientific materials demonstrates the need for interdisciplinary studies, including pedagogy and education, aimed at developing guidelines for protecting the social system from destruction. The paper defines the indicators, security level indices and their assessment methods singled out from the analytical reports and security studies by the leading Russian sociological centers and international expert organizations, including the United Nations. The research is aimed at identifying adequate models of personal and social security control systems at various social levels. The theoretical concepts can be applied by teachers of the Bases of Life Safety course, and by managers and researchers developing the assessment criteria and security indices for evaluating educational environments, as well as methods of diagnostics and expertise of educational establishments from the security standpoint.
Theoretical physics 3 electrodynamics
Nolting, Wolfgang
2016-01-01
This textbook offers a clear and comprehensive introduction to electrodynamics, one of the core components of undergraduate physics courses. It follows on naturally from the previous volumes in this series. The first part of the book describes the interaction of electric charges and magnetic moments by introducing electro- and magnetostatics. The second part of the book establishes deeper understanding of electrodynamics with the Maxwell equations, quasistationary fields and electromagnetic fields. All sections are accompanied by a detailed introduction to the math needed. Ideally suited to undergraduate students with some grounding in classical and analytical mechanics, the book is enhanced throughout with learning features such as boxed inserts and chapter summaries, with key mathematical derivations highlighted to aid understanding. The text is supported by numerous worked examples and end of chapter problem sets. About the Theoretical Physics series Translated from the renowned and highly successful Germa...
Asymptotic properties of maximum likelihood estimators in models with multiple change points
He, Heping; 10.3150/09-BEJ232
2011-01-01
Models with multiple change points are used in many fields; however, the theoretical properties of maximum likelihood estimators of such models have received relatively little attention. The goal of this paper is to establish the asymptotic properties of maximum likelihood estimators of the parameters of a multiple change-point model for a general class of models in which the form of the distribution can change from segment to segment and in which, possibly, there are parameters that are common to all segments. Consistency of the maximum likelihood estimators of the change points is established and the rate of convergence is determined; the asymptotic distribution of the maximum likelihood estimators of the parameters of the within-segment distributions is also derived. Since the approach used in single change-point models is not easily extended to multiple change-point models, these results require the introduction of new tools for analyzing the likelihood function in a multiple change-point model.
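With the number of change points fixed, maximizing the likelihood over change-point locations reduces to minimizing a sum of per-segment costs, which a dynamic program solves exactly. The sketch below illustrates this for Gaussian segments; the cost function and synthetic data are illustrative assumptions, not the paper's estimator.

```python
import numpy as np

def segment_cost(x):
    # Negative maximized Gaussian log-likelihood of one segment,
    # up to additive constants: (n/2) * log(sample variance)
    var = max(float(np.var(x)), 1e-8)
    return 0.5 * len(x) * np.log(var)

def mle_change_points(x, k):
    """Exact dynamic program for the ML estimate of k change points
    (k + 1 segments) of a one-dimensional series x."""
    n = len(x)
    cost = {(i, j): segment_cost(x[i:j])
            for i in range(n) for j in range(i + 1, n + 1)}
    best = [[np.inf] * (n + 1) for _ in range(k + 2)]
    back = [[0] * (n + 1) for _ in range(k + 2)]
    best[0][0] = 0.0
    for seg in range(1, k + 2):
        for j in range(seg, n + 1):
            for i in range(seg - 1, j):
                c = best[seg - 1][i] + cost[(i, j)]
                if c < best[seg][j]:
                    best[seg][j], back[seg][j] = c, i
    cps, j = [], n
    for seg in range(k + 1, 1, -1):   # backtrack the segment starts
        j = back[seg][j]
        cps.append(j)
    return sorted(cps)

# Two Gaussian segments with a mean shift at index 50
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 50), rng.normal(5, 1, 50)])
cps = mle_change_points(x, 1)
```

The O(n^2) cost table dominates the running time; the within-segment distribution (here a Gaussian with free mean and variance) can be swapped for any family with a closed-form maximized likelihood.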
Probable Maximum Earthquake Magnitudes for the Cascadia Subduction
Rong, Y.; Jackson, D. D.; Magistrale, H.; Goldfinger, C.
2013-12-01
The concept of maximum earthquake magnitude (mx) is widely used in seismic hazard and risk analysis. However, absolute mx lacks a precise definition and cannot be determined from a finite earthquake history. The surprising magnitudes of the 2004 Sumatra and the 2011 Tohoku earthquakes showed that most methods for estimating mx underestimate the true maximum if it exists. Thus, we introduced the alternate concept of mp(T), the probable maximum magnitude within a time interval T. mp(T) can be computed from theoretical magnitude-frequency distributions such as the tapered Gutenberg-Richter (TGR) distribution. The two TGR parameters, the β-value (which equals 2/3 of the b-value in the GR distribution) and the corner magnitude (mc), can be obtained by applying the maximum likelihood method to earthquake catalogs with an additional constraint from the tectonic moment rate. Here, we integrate the paleoseismic data in the Cascadia subduction zone to estimate mp. The Cascadia subduction zone has been seismically quiescent since at least 1900. Fortunately, turbidite studies have unearthed a 10,000 year record of great earthquakes along the subduction zone. We thoroughly investigate the earthquake magnitude-frequency distribution of the region by combining instrumental and paleoseismic data, and using the tectonic moment rate information. To use the paleoseismic data, we first estimate event magnitudes, which we achieve by using the time interval between events, rupture extent of the events, and turbidite thickness. We estimate three sets of TGR parameters: for the first two sets, we consider a geographically large Cascadia region that includes the subduction zone, and the Explorer, Juan de Fuca, and Gorda plates; for the third set, we consider a narrow geographic region straddling the subduction zone. In the first set, the β-value is derived using the GCMT catalog. In the second and third sets, the β-value is derived using both the GCMT and paleoseismic data. Next, we calculate the corresponding mc
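A minimal numerical sketch of the mp(T) idea: given TGR parameters, find the magnitude whose expected exceedance count in T years equals 1 (one common convention; the paper's exact definition may differ). All parameter values below are illustrative, not the Cascadia estimates.

```python
import math

def moment(m):
    # Seismic moment (N*m) from moment magnitude (Hanks-Kanamori relation)
    return 10.0 ** (1.5 * m + 9.05)

def tgr_survival(m, m_t, beta, m_corner):
    """Tapered Gutenberg-Richter survival function: fraction of events
    above the threshold magnitude m_t whose magnitude exceeds m."""
    M, Mt, Mc = moment(m), moment(m_t), moment(m_corner)
    return (Mt / M) ** beta * math.exp((Mt - M) / Mc)

def probable_max_magnitude(T, rate_t, m_t, beta, m_corner):
    """mp(T): magnitude whose expected exceedance count in T years is 1,
    found by bisection; rate_t is the annual rate of events with
    magnitude >= m_t."""
    lo, hi = m_t, 11.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if rate_t * T * tgr_survival(mid, m_t, beta, m_corner) > 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative parameters: 10 events/yr above m 5, beta = 0.65, mc = 8.5
mp_50 = probable_max_magnitude(50, 10.0, 5.0, 0.65, 8.5)
mp_500 = probable_max_magnitude(500, 10.0, 5.0, 0.65, 8.5)
```

As expected from the definition, mp(T) grows with the observation window T, while the corner magnitude's exponential taper keeps it bounded well below the pure power-law extrapolation.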
Favarel, C.; Champier, D.; Bédécarrats, J. P.; Kousksou, T.; Strub, F.
2012-06-01
According to the International Energy Agency, 1.4 billion people are without electricity in the poorest countries and 2.5 billion people rely on biomass to meet their energy needs for cooking in developing countries. The use of cooking stoves equipped with a small thermoelectric generator to provide electricity for basic needs (LED, cell phone and radio charging) is probably a solution for houses far from the power grid. The cost of connecting every house with a landline is much higher than installing a thermoelectric generator in each house. Thermoelectric generators have very low efficiency, but for isolated houses they might become really competitive. Our laboratory works in collaboration with planète-bois (a non-governmental organization) which has developed energy-efficient multifunction (cooking and hot water) stoves based on traditional stove designs. A prototype of a thermoelectric generator (bismuth telluride) has been designed to convert a small part of the energy heating the sanitary water into electricity. This generator can produce up to 10 watts on an adapted load. Storing this energy in a battery is necessary, as the cooking stove only works a few hours each day. As the working point of the stove varies a lot during use, it is also necessary to regulate the electrical power. A DC-DC converter has been developed with a maximum power point tracker (MPPT) in order to obtain good efficiency of the electronic part of the thermoelectric generator. The theoretical efficiency of the MPPT converter is discussed. First results obtained with a hot gas generator simulating the exhaust of the combustion chamber of a cooking stove are presented in the paper.
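The MPPT stage can be illustrated with the classic perturb-and-observe rule (the abstract does not specify which tracking algorithm the converter uses; the power curve below is a toy stand-in for the thermoelectric module):

```python
def perturb_and_observe(power_at, v0=1.0, dv=0.05, steps=200):
    """Perturb-and-observe MPPT: keep stepping the operating voltage
    in whichever direction last increased the output power."""
    v, step = v0, dv
    p_prev = power_at(v)
    for _ in range(steps):
        v += step
        p = power_at(v)
        if p < p_prev:            # power fell: reverse the perturbation
            step = -step
        p_prev = p
    return v

# Toy generator power curve with a single maximum at 3.0 V (hypothetical)
mpp = perturb_and_observe(lambda v: 9.0 - (v - 3.0) ** 2)
```

In steady state the operating point oscillates within one perturbation step of the true maximum power point, which is the known trade-off of this algorithm between tracking speed and ripple.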
Artificial Neural Network In Maximum Power Point Tracking Algorithm Of Photovoltaic Systems
Modestas Pikutis
2014-05-01
Scientists are constantly looking for ways to improve the efficiency of solar cells. The efficiency of solar cells available to the general public is up to 20%. Part of the solar energy is left unused and the capacity of a solar power plant is significantly reduced if a slow controller, or one that cannot stay at the maximum power point of the solar modules, is used. Various maximum power point tracking algorithms have been created, but most are slow or error-prone. In the literature, artificial neural networks (ANN) are mentioned more and more often in the maximum power point tracking process, in order to improve the performance of the controller. A self-learning artificial neural network and the IncCond algorithm were used for maximum power point tracking in the created solar power plant model. The control algorithm was created. The solar power plant model is implemented in the Matlab/Simulink environment.
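The IncCond (incremental conductance) rule named above exploits the fact that at the maximum power point dP/dV = 0, equivalently dI/dV = -I/V. A minimal sketch, with a toy linear I-V curve standing in for the photovoltaic module (the Simulink model itself is not reproduced here):

```python
def inc_cond_step(v, i, v_prev, i_prev, dv=0.05):
    """One incremental-conductance update: compare the incremental
    conductance dI/dV against the instantaneous conductance -I/V."""
    dV, dI = v - v_prev, i - i_prev
    if dV == 0.0:
        if dI == 0.0:
            return v                      # operating point unchanged: hold
        return v + dv if dI > 0 else v - dv
    if dI / dV > -i / v:                  # left of the MPP: raise voltage
        return v + dv
    if dI / dV < -i / v:                  # right of the MPP: lower voltage
        return v - dv
    return v                              # condition met: at the MPP

def track(current_at, v0=1.0, steps=200, dv=0.05):
    v_prev, i_prev = v0, current_at(v0)
    v = v0 + dv
    for _ in range(steps):
        i = current_at(v)
        v_next = inc_cond_step(v, i, v_prev, i_prev, dv)
        v_prev, i_prev, v = v, i, v_next
    return v

# Toy linear I-V curve: I = 6 - V, so P = 6V - V^2 peaks at V = 3
v_mpp = track(lambda v: 6.0 - v)
```

Unlike perturb-and-observe, IncCond can in principle stop perturbing once the MPP condition holds, which is why it is often preferred under rapidly changing irradiance.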
Practical and theoretical improvements for bipartite matching using the pseudoflow algorithm
Chandran, Bala G
2011-01-01
We show that the pseudoflow algorithm for maximum flow is particularly efficient for the bipartite matching problem both in theory and in practice. We develop several implementations of the pseudoflow algorithm for bipartite matching, and compare them over a wide set of benchmark instances to state-of-the-art implementations of push-relabel and augmenting path algorithms that are specifically designed to solve these problems. The experiments show that the pseudoflow variants are in most cases faster than the other algorithms. We also show that one particular implementation---the matching pseudoflow algorithm---is theoretically efficient. For a graph with $n$ nodes, $m$ arcs, $n_1$ the size of the smaller set in the bipartition, and the maximum matching value $\kappa \leq n_1$, the algorithm's complexity given input in the form of adjacency lists is $O(\min\{n_1\kappa, m\} + \sqrt{\kappa}\,\min\{\kappa^2, m\})$. Similar algorithmic ideas are shown to work for an adaptation of Hopcroft and Karp's bipartite matching alg...
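For reference, the augmenting-path family that the pseudoflow variants are compared against can be sketched in its simplest form (Kuhn's algorithm; the instance below is a made-up example, not one of the paper's benchmarks):

```python
def max_bipartite_matching(adj, n_left, n_right):
    """Kuhn's augmenting-path algorithm for maximum bipartite matching.
    adj[u] lists the right-side vertices adjacent to left vertex u."""
    match_right = [-1] * n_right          # right vertex -> matched left vertex

    def try_augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                # v is free, or its current partner can be re-matched
                if match_right[v] == -1 or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    return sum(try_augment(u, set()) for u in range(n_left))

# 3x3 instance with a perfect matching
size = max_bipartite_matching({0: [0, 1], 1: [0], 2: [1, 2]}, 3, 3)
print(size)  # 3
```

This baseline runs in O(nm); Hopcroft-Karp improves it to O(m sqrt(n)) by augmenting along many shortest paths per phase, which is the bound the matching pseudoflow complexity above is competing with.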
Maximum a posteriori decoder for digital communications
Altes, Richard A. (Inventor)
1997-01-01
A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
Kernel-based Maximum Entropy Clustering
JIANG Wei; QU Jiao; LI Benxi
2007-01-01
With the development of the Support Vector Machine (SVM), the "kernel method" has been widely studied. In this paper, we present a novel Kernel-based Maximum Entropy Clustering algorithm (KMEC). Using Mercer kernel functions, the proposed algorithm first maps the data from their original space to a high-dimensional feature space where the data are expected to be more separable, then performs MEC clustering in the feature space. The experimental results show that the proposed method has better performance on non-hyperspherical and complex data structures.
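A sketch of the kernelized scheme (my reading of the approach, not the authors' code): membership weights follow a Gibbs form u_ik ∝ exp(-d_ik/β), and the squared distances to cluster prototypes are evaluated in feature space through the kernel trick, so the mapping is never computed explicitly. The RBF kernel, β, and the toy data are assumptions.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    sq = np.sum(X ** 2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))

def kernel_mec(K, n_clusters, beta=0.5, iters=100, seed=0):
    """Kernel maximum-entropy clustering sketch: alternate between
    Gibbs-style memberships and feature-space prototype distances
    expanded via the kernel matrix K."""
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U / U.sum(axis=0)             # per-cluster prototype weights
        # ||phi(x_i) - c_k||^2 = K_ii - 2 (K W)_ik + W_k' K W_k
        d = (np.diag(K)[:, None] - 2.0 * K @ W
             + np.einsum('ik,ij,jk->k', W, K, W)[None, :])
        U = np.exp(-d / beta)
        U /= U.sum(axis=1, keepdims=True)
    return U.argmax(axis=1)

# Two well-separated 2-D blobs of 20 points each
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 0.2, (20, 2)), rng.normal(3.0, 0.2, (20, 2))])
labels = kernel_mec(rbf_kernel(X, gamma=0.5), 2)
```

The parameter β plays the role of a temperature: large β flattens the memberships toward maximum entropy, small β hardens them toward a k-means-like assignment.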
The sun and heliosphere at solar maximum.
Smith, E J; Marsden, R G; Balogh, A; Gloeckler, G; Geiss, J; McComas, D J; McKibben, R B; MacDowall, R J; Lanzerotti, L J; Krupp, N; Krueger, H; Landgraf, M
2003-11-14
Recent Ulysses observations from the Sun's equator to the poles reveal fundamental properties of the three-dimensional heliosphere at the maximum in solar activity. The heliospheric magnetic field originates from a magnetic dipole oriented nearly perpendicular to, instead of nearly parallel to, the Sun's rotation axis. Magnetic fields, solar wind, and energetic charged particles from low-latitude sources reach all latitudes, including the polar caps. The very fast high-latitude wind and polar coronal holes disappear and reappear together. Solar wind speed continues to be inversely correlated with coronal temperature. The cosmic ray flux is reduced symmetrically at all latitudes.
Conductivity maximum in a charged colloidal suspension
Bastea, S
2009-01-27
Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.
Maximum entropy signal restoration with linear programming
Mastin, G.A.; Hanson, R.J.
1988-05-01
Dantzig's bounded-variable method is used to express the maximum entropy restoration problem as a linear programming problem. This is done by approximating the nonlinear objective function with piecewise linear segments, then bounding the variables as a function of the number of segments used. The use of a linear programming approach allows equality constraints found in the traditional Lagrange multiplier method to be relaxed. A robust revised simplex algorithm is used to implement the restoration. Experimental results from 128- and 512-point signal restorations are presented.
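The key linearization step can be illustrated by approximating the concave entropy term h(x) = -x ln x with tangent lines: their pointwise minimum is a piecewise-linear over-approximation an LP can optimize, and the error shrinks as segments are added. This tangent construction is one standard choice, not necessarily the paper's exact segmentation; the sample points below are arbitrary.

```python
import numpy as np

def tangent_approx(ps):
    """Piecewise-linear over-approximation of the concave entropy term
    h(x) = -x ln x: the pointwise minimum of the tangent lines drawn
    at the sample points ps."""
    slopes = -np.log(ps) - 1.0                  # h'(p) at each sample point
    intercepts = -ps * np.log(ps) - slopes * ps
    return lambda x: np.min(slopes[:, None] * x[None, :]
                            + intercepts[:, None], axis=0)

x = np.linspace(0.01, 1.0, 1000)
h = -x * np.log(x)
# worst-case approximation gap for 5, 10 and 40 tangent segments
err = [float(np.max(tangent_approx(np.linspace(0.01, 1.0, k))(x) - h))
       for k in (5, 10, 40)]
```

Because h is concave, every tangent lies above it, so the gap is always nonnegative and the bound on each variable can indeed be tied to the number of segments used, as in the abstract.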
COMPARISON BETWEEN FORMULAS OF MAXIMUM SHIP SQUAT
PETRU SERGIU SERBAN
2016-06-01
Ship squat is a combined effect of the increase of a ship's draft and trim due to its motion in restricted navigation conditions. Over time, researchers have conducted tests on models and ships to find a mathematical formula that can define squat. Various formulas for calculating squat can be found in the literature. Among the most commonly used are those of Barrass, Millward, Eryuzlu and ICORELS. This paper presents a comparison between the squat formulas to show the differences between them and which one provides the most satisfactory results. In this respect, a cargo ship at different speeds was considered as a model for maximum squat calculations in canal navigation conditions.
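Two of the compared formulas can be sketched as follows, using the coefficients commonly quoted in the literature (the exact forms and the ship particulars used in the paper may differ; the vessel below is hypothetical):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def squat_barrass(cb, blockage, v_knots):
    """Barrass's empirical maximum squat (m) in confined water, as
    commonly quoted: Cb * S^(2/3) * Vk^2.08 / 30, where S is the
    blockage factor (ship section area / canal section area) and
    Vk the speed in knots."""
    return cb * blockage ** (2.0 / 3.0) * v_knots ** 2.08 / 30.0

def squat_icorels(volume, lpp, v_ms, depth):
    """ICORELS bow squat (m): 2.4 * (vol / Lpp^2) * Fnh^2 / sqrt(1 - Fnh^2),
    with Fnh = V / sqrt(g*h) the depth Froude number."""
    fnh = v_ms / math.sqrt(G * depth)
    return 2.4 * (volume / lpp ** 2) * fnh ** 2 / math.sqrt(1.0 - fnh ** 2)

# Hypothetical cargo ship in a canal: Cb = 0.75, blockage 0.18, 8 knots,
# displaced volume 25000 m^3, Lpp 140 m, water depth 12 m
s_b = squat_barrass(0.75, 0.18, 8.0)
s_i = squat_icorels(25000.0, 140.0, 8.0 * 0.5144, 12.0)
```

Both formulas grow rapidly with speed (roughly as V^2), which is why speed reduction is the primary operational defense against squat in shallow canals.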
Multi-Channel Maximum Likelihood Pitch Estimation
Christensen, Mads Græsbøll
2012-01-01
In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...
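The shared-fundamental, per-channel-amplitude structure can be approximated with a simple grid search: for each candidate f0, least-squares fit a harmonic model to every channel separately and sum the explained energy. This is a sketch of the modeling idea, not the paper's estimator; the sampling rate, grid, and test signal are assumptions.

```python
import numpy as np

def multichannel_pitch(channels, fs, f0_grid, n_harmonics=3):
    """For each candidate fundamental f0, project every channel onto a
    harmonic sine/cosine basis (so each channel gets its own amplitudes
    and phases) and pick the f0 with the most explained energy."""
    t = np.arange(channels.shape[1]) / fs
    scores = []
    for f0 in f0_grid:
        basis = np.column_stack([fn(2 * np.pi * f0 * h * t)
                                 for h in range(1, n_harmonics + 1)
                                 for fn in (np.cos, np.sin)])
        coef = np.linalg.lstsq(basis, channels.T, rcond=None)[0]
        scores.append(float(np.sum((basis @ coef) ** 2)))
    return float(f0_grid[int(np.argmax(scores))])

fs, f_true = 8000, 220.0
t = np.arange(800) / fs
rng = np.random.default_rng(0)
# two channels: different amplitude, phase and noise, same fundamental
ch = np.vstack([np.sin(2 * np.pi * f_true * t) + 0.1 * rng.normal(size=t.size),
                0.5 * np.cos(2 * np.pi * f_true * t) + 0.1 * rng.normal(size=t.size)])
f_hat = multichannel_pitch(ch, fs, np.arange(150.0, 400.0, 5.0))
```

Because the amplitudes and phases are fit per channel, the estimator tolerates different SNRs and microphone responses across channels, which is precisely the flexibility the abstract emphasizes.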
Maximum entropy PDF projection: A review
Baggenstoss, Paul M.
2017-06-01
We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
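The sampling property can be illustrated with a toy feature z = T(x) = ||x|| in R^2: among all p(x) consistent with p(z), the entropy-maximizing choice is uniform on each level set of T (here, a circle of radius z), so sampling is two-stage. The feature and the choice of p(z) below are assumptions for illustration only.

```python
import numpy as np

def maxent_projection_sample(sample_z, n, rng):
    """Sample the MaxEnt PDF projection of p(z) for z = T(x) = ||x||,
    x in R^2: draw z from p(z), then place x uniformly on the circle
    {||x|| = z}, the maximum-entropy distribution on that level set."""
    z = sample_z(n, rng)
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    return np.column_stack([z * np.cos(theta), z * np.sin(theta)])

rng = np.random.default_rng(0)
# p(z) uniform on [1, 2]; samples land in the annulus 1 <= ||x|| <= 2
X = maxent_projection_sample(lambda n, r: r.uniform(1.0, 2.0, n), 5000, rng)
r = np.hypot(X[:, 0], X[:, 1])     # recovered feature, distributed as p(z)
```

By construction the feature of every sample follows the prescribed p(z) exactly, which is what makes the projected density usable inside Monte Carlo methods as the review notes.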