Energy Technology Data Exchange (ETDEWEB)
Enrique, J.M.; Duran, E.; Andujar, J.M. [Departamento de Ingenieria Electronica, de Sistemas Informaticos y Automatica, Universidad de Huelva (Spain); Sidrach-de-Cardona, M. [Departamento de Fisica Aplicada, II, Universidad de Malaga (Spain)
2007-01-15
The operating point of a photovoltaic generator that is connected to a load is determined by the intersection point of its characteristic curves. In general, this point is not the same as the generator's maximum power point, and the difference translates into performance losses for the system. DC/DC converters together with maximum power point tracking (MPPT) systems are used to avoid these losses. Different algorithms have been proposed for maximum power point tracking. Nevertheless, the choice of the right converter configuration has not been studied as widely, although this choice, as demonstrated in this work, has an important influence on the optimum performance of the photovoltaic system. In this article, we conduct a study of the three basic topologies of DC/DC converters with resistive load connected to photovoltaic modules. This article demonstrates that there is a limitation on the system's performance according to the type of converter used. Two fundamental conclusions are derived from this study: (1) the buck-boost DC/DC converter topology is the only one that allows the PV module's maximum power point to be tracked regardless of temperature, irradiance and connected load, and (2) connecting a buck-boost DC/DC converter to the panel output in a photovoltaic facility could be good practice to improve performance. (author)
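The topology limitation this abstract describes follows from the ideal continuous-conduction-mode conversion ratios: a buck converter reflects a resistive load R_L to its input as R_L/D², a boost as R_L(1−D)², and a buck-boost as R_L((1−D)/D)², so only the buck-boost spans all positive input resistances as the duty cycle D varies in (0, 1). A minimal sketch, assuming idealized lossless converters (the function name and interface are illustrative, not from the paper):

```python
import math

def duty_for_match(r_mpp, r_load, topology):
    """Duty cycle D in (0, 1) at which an ideal CCM converter presents
    input resistance r_mpp when loaded with r_load, or None when the
    topology cannot reach that operating point."""
    if topology == "buck":        # R_in = r_load / D**2  ->  R_in >= r_load
        return math.sqrt(r_load / r_mpp) if r_mpp >= r_load else None
    if topology == "boost":      # R_in = r_load * (1 - D)**2  ->  R_in <= r_load
        return 1.0 - math.sqrt(r_mpp / r_load) if r_mpp <= r_load else None
    if topology == "buck-boost": # R_in = r_load * ((1 - D) / D)**2, any R_in > 0
        return 1.0 / (1.0 + math.sqrt(r_mpp / r_load))
    raise ValueError(topology)
```

For example, a module whose MPP resistance is 10 Ω feeding a 40 Ω load is unreachable for a buck converter, while the buck-boost reaches it at D = 2/3.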
Efficient heuristics for maximum common substructure search.
Englert, Péter; Kovács, Péter
2015-05-26
Maximum common substructure search is a computationally hard optimization problem with diverse applications in the field of cheminformatics, including similarity search, lead optimization, molecule alignment, and clustering. Most of these applications have strict constraints on running time, so heuristic methods are often preferred. However, the development of an algorithm that is both fast enough and accurate enough for most practical purposes is still a challenge. Moreover, in some applications, the quality of a common substructure depends not only on its size but also on various topological features of the one-to-one atom correspondence it defines. Two state-of-the-art heuristic algorithms for finding maximum common substructures have been implemented at ChemAxon Ltd., and effective heuristics have been developed to improve both their efficiency and the relevance of the atom mappings they provide. The implementations have been thoroughly evaluated and compared with existing solutions (KCOMBU and Indigo). The heuristics have been found to greatly improve the performance and applicability of the algorithms. The purpose of this paper is to introduce the applied methods and present the experimental results.
THEORETICAL FOUNDATIONS OF EFFICIENT MASS VALUATION
Directory of Open Access Journals (Sweden)
Koshel A.
2016-08-01
Full Text Available In this article, the theoretical basis for determining the effectiveness of mass valuation of land under present-day conditions is described. The definitions of effect and effectiveness as economic categories, and their classification as applied to mass valuation of land, are presented. The effectiveness of mass valuation of land in settlements shapes the structure of the local budget and the economic activities undertaken by local authorities on the basis of the results of the mass appraisal of real estate. Mass valuation is carried out regularly and is characterized by a high degree of standardization of procedures and a significantly increased role for statistical methods in processing the data that relate the most significant factors influencing an object to its value, as well as by the need to determine objective laws of value formation, which is only possible using economic-mathematical methods and statistical analysis. Quality control of mass valuation results must be carried out by fundamentally different means, since results obtained with statistical machinery can only be checked by statistical methods. This shows the relevance of the research topic and the lack of elaboration, for Ukraine, of the problems of the efficiency of mass land valuation. The research was conducted using the dialectical method and the techniques of abstraction, comparative analysis and synthesis; various models and methods of valuation of land for taxation purposes are analyzed, and methods of grouping and comparison are also applied. In economic theory and practice, the problem of determining effect and efficiency aimed at profit is quite relevant. Economists treat cost-effectiveness as economic efficiency; in this case, the organization and conduct of mass valuation of land can be regarded as the production activity. This gives rise to many different positions on the criteria and indicators of economic efficiency, the
Design of a wind turbine rotor for maximum aerodynamic efficiency
DEFF Research Database (Denmark)
Johansen, Jeppe; Aagaard Madsen, Helge; Gaunaa, Mac
2009-01-01
The design of a three-bladed wind turbine rotor is described, where the main focus has been the highest possible mechanical power coefficient, CP, at a single operational condition. Structural, as well as off-design, issues are not considered, leading to a purely theoretical design for investigating...... maximum aerodynamic efficiency. The rotor is designed assuming constant induction for most of the blade span, but near the tip region, a constant load is assumed instead. The rotor design is obtained using an actuator disc model, and is subsequently verified using both a free-wake lifting line method...
Emf, maximum power and efficiency of fuel cells
International Nuclear Information System (INIS)
Gaggioli, R.A.; Dunbar, W.R.
1990-01-01
This paper discusses the ideal voltage of steady-flow fuel cells, usually expressed by Emf = -ΔG/nF, where ΔG is the Gibbs free energy of reaction for the oxidation of the fuel at the assumed operating temperature of the cell. Furthermore, the ideal power of the cell is expressed as the product of the fuel flow rate with this emf, and the efficiency of a real fuel cell, sometimes called the Gibbs efficiency, is defined as the ratio of the actual power output to this ideal power. Such viewpoints are flawed in several respects. While it is true that if a cell operates isothermally the maximum conceivable work output is equal to the difference between the Gibbs free energy of the incoming reactants and that of the leaving products, nevertheless, even if the cell operates isothermally, the use of the conventional ΔG of reaction assumes that the products of reaction leave separately from one another (and from any unused fuel). When ΔS of reaction is positive it assumes that a free heat source exists at the operating temperature, whereas if ΔS is negative it neglects the potential power which theoretically could be obtained from the heat released during oxidation. Moreover, the usual cell does not operate isothermally but (virtually) adiabatically.
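The conventional ideal-voltage expression the authors critique is simple to evaluate; a quick sketch, assuming the standard textbook case of hydrogen oxidation to liquid water at 25 °C:

```python
FARADAY = 96485.0  # C per mol of electrons

def ideal_emf(delta_g_j_per_mol, n_electrons):
    """Conventional ideal cell voltage, Emf = -dG / (n F)."""
    return -delta_g_j_per_mol / (n_electrons * FARADAY)

# H2 + 1/2 O2 -> H2O(liquid) at 25 deg C: dG ~ -237.1 kJ/mol, n = 2,
# giving the familiar ~1.23 V figure whose underlying assumptions
# (isothermal operation, separately leaving products) the paper questions.
emf_h2 = ideal_emf(-237.1e3, 2)
```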
Efficiency of autonomous soft nanomachines at maximum power.
Seifert, Udo
2011-01-14
We consider nanosized artificial or biological machines working in steady state enforced by imposing nonequilibrium concentrations of solutes or by applying external forces, torques, or electric fields. For unicyclic and strongly coupled multicyclic machines, efficiency at maximum power is not bounded by the linear response value 1/2. For strong driving, it can even approach the thermodynamic limit 1. Quite generally, such machines fall into three different classes characterized, respectively, as "strong and efficient," "strong and inefficient," and "balanced." For weakly coupled multicyclic machines, efficiency at maximum power has lost any universality even in the linear response regime.
Size dependence of efficiency at maximum power of heat engine
Izumida, Y.; Ito, N.
2013-01-01
We perform a molecular dynamics computer simulation of a heat engine model to study how the engine size difference affects its performance. Upon tactically increasing the size of the model anisotropically, we determine that there exists an optimum size at which the model attains the maximum power for the shortest working period. This optimum size lies between the ballistic heat transport region and the diffusive heat transport one. We also study the size dependence of the efficiency at the maximum power. Interestingly, we find that the efficiency at the maximum power around the optimum size attains a value that has been proposed as a universal upper bound, and it even begins to exceed the bound as the size further increases. We explain this behavior of the efficiency at maximum power by using a linear response theory for the heat engine operating under a finite working period, which naturally extends the low-dissipation Carnot cycle model [M. Esposito, R. Kawai, K. Lindenberg, C. Van den Broeck, Phys. Rev. Lett. 105, 150603 (2010)]. The theory also shows that the efficiency at the maximum power under an extreme condition may reach the Carnot efficiency in principle. © EDP Sciences, Società Italiana di Fisica, Springer-Verlag 2013.
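The "universal upper bound" referenced here comes from the cited low-dissipation Carnot model, whose bounds on efficiency at maximum power are ηC/2 ≤ η* ≤ ηC/(2−ηC). A sketch of those bounds, with the symmetric-dissipation (Curzon-Ahlborn) value included for reference (the 300 K / 500 K reservoirs are illustrative numbers, not from the paper):

```python
import math

def carnot(t_c, t_h):
    return 1.0 - t_c / t_h

def curzon_ahlborn(t_c, t_h):
    """Efficiency at maximum power of the endoreversible model."""
    return 1.0 - math.sqrt(t_c / t_h)

def low_dissipation_bounds(t_c, t_h):
    """Bounds on efficiency at maximum power in the low-dissipation
    Carnot model: eta_C / 2 <= eta* <= eta_C / (2 - eta_C)."""
    eta_c = carnot(t_c, t_h)
    return eta_c / 2.0, eta_c / (2.0 - eta_c)

lo, hi = low_dissipation_bounds(300.0, 500.0)
# the symmetric-dissipation (Curzon-Ahlborn) value falls inside the bounds
assert lo <= curzon_ahlborn(300.0, 500.0) <= hi
```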
Size dependence of efficiency at maximum power of heat engine
Izumida, Y.
2013-10-01
We perform a molecular dynamics computer simulation of a heat engine model to study how the engine size difference affects its performance. Upon tactically increasing the size of the model anisotropically, we determine that there exists an optimum size at which the model attains the maximum power for the shortest working period. This optimum size lies between the ballistic heat transport region and the diffusive heat transport one. We also study the size dependence of the efficiency at the maximum power. Interestingly, we find that the efficiency at the maximum power around the optimum size attains a value that has been proposed as a universal upper bound, and it even begins to exceed the bound as the size further increases. We explain this behavior of the efficiency at maximum power by using a linear response theory for the heat engine operating under a finite working period, which naturally extends the low-dissipation Carnot cycle model [M. Esposito, R. Kawai, K. Lindenberg, C. Van den Broeck, Phys. Rev. Lett. 105, 150603 (2010)]. The theory also shows that the efficiency at the maximum power under an extreme condition may reach the Carnot efficiency in principle. © EDP Sciences, Società Italiana di Fisica, Springer-Verlag 2013.
Theoretical Evaluation of the Maximum Work of Free-Piston Engine Generators
Kojima, Shinji
2017-01-01
Utilizing the adjoint equations that originate from the calculus of variations, we have calculated the maximum thermal efficiency that is theoretically attainable by free-piston engine generators considering the work loss due to friction and Joule heat. Based on the adjoint equations with seven dimensionless parameters, the trajectory of the piston, the histories of the electric current, the work done, and the two kinds of losses have been derived in analytic forms. Using these we have conducted parametric studies for the optimized Otto and Brayton cycles. The smallness of the pressure ratio of the Brayton cycle makes the net work done negative even when the duration of heat addition is optimized to give the maximum amount of heat addition. For the Otto cycle, the net work done is positive, and both types of losses relative to the gross work done become smaller with the larger compression ratio. Another remarkable feature of the optimized Brayton cycle is that the piston trajectory of the heat addition/disposal process is expressed by the same equation as that of an adiabatic process. The maximum thermal efficiency of any combination of isochoric and isobaric heat addition/disposal processes, such as the Sabathe cycle, may be deduced by applying the methods described here.
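The compression-ratio trend noted for the optimized Otto cycle is consistent with the textbook air-standard relation η = 1 − r^(1−γ); a sketch of that ideal, loss-free relation (which bounds the paper's friction- and Joule-loss results from above):

```python
def otto_efficiency(compression_ratio, gamma=1.4):
    """Air-standard Otto-cycle thermal efficiency: 1 - r**(1 - gamma)."""
    return 1.0 - compression_ratio ** (1.0 - gamma)

# Efficiency rises with compression ratio, consistent with the paper's
# finding that losses relative to gross work shrink as the ratio grows.
eta_r10 = otto_efficiency(10.0)  # ~0.60 for r = 10, gamma = 1.4
```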
Theoretical and observational assessments of flare efficiencies
International Nuclear Information System (INIS)
Leahey, D.M.; Preston, K.; Strosher, M.
2000-01-01
During the processing of hydrocarbon materials, gaseous wastes are flared in an effort to burn the waste material completely and thereby leave behind very few by-products. Complete combustion, however, is rarely achieved because entrainment of air into the region of combusting gases restricts flame sizes to less than optimum values. The resulting flames are often too small to dissipate the amount of heat associated with complete (100 per cent) combustion efficiency. Flaring, therefore, often results in emissions of gases with more complex molecular structures than just carbon dioxide and water. Polycyclic aromatic hydrocarbons and volatile organic compounds, which are indicative of incomplete combustion, are often associated with flaring. This theoretical study of flame efficiencies was based on knowledge of the full range of chemical reactions and associated kinetics. In this study, equations developed by Leahey and Schroeder were used to estimate flame lengths, areas and volumes as functions of flare stack exit velocity, stoichiometric mixing ratio and wind speed. This was followed by an estimate of the heat released as part of the combustion process, derived from the flame dimensions together with an assumed flame temperature of 1200 K. Combustion efficiencies were then obtained by taking the ratio of estimated actual heat release values to those associated with complete combustion. It was concluded that combustion efficiency decreases significantly as wind speed increases from 1 to 6 m/s; beyond that, combustion efficiencies level off at values between 10 and 15 per cent. Propane and ethane were found to burn more efficiently than methane or hydrogen sulfide. 24 refs., 4 tabs., 1 fig., 1 append
Theoretical efficiency limits for thermoradiative energy conversion
International Nuclear Information System (INIS)
Strandberg, Rune
2015-01-01
A new method to produce electricity from heat, called thermoradiative energy conversion, is analyzed. The method is based on sustaining a difference in the chemical potential for electron populations above and below an energy gap, and letting this difference drive a current through an electric circuit. The difference in chemical potential originates from an imbalance in the excitation and de-excitation of electrons across the energy gap. The method has similarities to thermophotovoltaics and conventional photovoltaics. While photovoltaic cells absorb thermal radiation from a body with higher temperature than the cell itself, thermoradiative cells are hot during operation and emit a net outflow of photons to colder surroundings. A thermoradiative cell with an energy gap of 0.25 eV at a temperature of 500 K in surroundings at 300 K is found to have a theoretical efficiency limit of 33.2%. For a high-temperature thermoradiative cell with an energy gap of 0.4 eV, a theoretical efficiency close to 50% is found while the cell produces 1000 W/m², has a temperature of 1000 K, and is placed in surroundings with a temperature of 300 K. Some aspects related to the practical implementation of the concept are discussed and some challenges are addressed. It is, for example, obvious that there is an upper boundary for the temperature under which solid state devices can work properly over time. No conclusions are drawn with regard to such practical boundaries, because the work is aimed at establishing upper limits for ideal thermoradiative devices
International Nuclear Information System (INIS)
Kareim, Ameer A; Mansor, Muhamad Bin
2013-01-01
The aim of this paper is to improve the efficiency of maximum power point tracking (MPPT) for PV systems. A Support Vector Machine (SVM) is proposed to implement the MPPT controller. The theoretical, perturb and observe (P&O), and incremental conductance (IC) algorithms were used for comparison with the proposed SVM algorithm. MATLAB models for the PV module and for the theoretical, SVM, P&O, and IC algorithms are implemented. The improved MPPT uses the SVM method to predict the optimum voltage of the PV system in order to extract the maximum power point (MPP). The SVM technique uses two inputs, the solar radiation and the ambient temperature of the modeled PV module. The results show that the proposed SVM technique has a lower Root Mean Square Error (RMSE) and higher efficiency than the P&O and IC methods.
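For reference, the P&O baseline such SVM controllers are compared against is only a few lines: perturb the operating voltage, observe the power change, and keep or reverse the perturbation direction. A sketch on a toy concave power curve (the curve and its 17 V maximum are illustrative, not from the paper):

```python
def perturb_and_observe(v, i, prev_v, prev_p, v_ref, step=0.1):
    """One iteration of the classic P&O MPPT rule."""
    p = v * i
    if p > prev_p:
        # power rose: keep perturbing in the same direction
        v_ref += step if v > prev_v else -step
    else:
        # power fell: reverse the perturbation direction
        v_ref += -step if v > prev_v else step
    return v_ref, p

def toy_current(v):
    # toy concave P(V) curve standing in for a PV module; MPP at 17 V
    return (-(v - 17.0) ** 2 + 60.0) / v

v_prev, p_prev, v_ref = 12.0, 0.0, 12.0
for _ in range(200):
    v = v_ref
    v_ref, p = perturb_and_observe(v, toy_current(v), v_prev, p_prev, v_ref)
    v_prev, p_prev = v, p
# v_prev now oscillates within one step of the 17 V maximum
```

The steady-state oscillation around the MPP visible here is exactly the behavior that model-based predictors such as the paper's SVM aim to avoid.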
An Efficient Algorithm for the Maximum Distance Problem
Directory of Open Access Journals (Sweden)
Gabrielle Assunta Grün
2001-12-01
Full Text Available Efficient algorithms for temporal reasoning are essential in knowledge-based systems. This is central in many areas of Artificial Intelligence including scheduling, planning, plan recognition, and natural language understanding. As such, scalability is a crucial consideration in temporal reasoning. While reasoning in the interval algebra is NP-complete, reasoning in the less expressive point algebra is tractable. In this paper, we explore an extension to the work of Gerevini and Schubert which is based on the point algebra. In their seminal framework, temporal relations are expressed as a directed acyclic graph partitioned into chains and supported by a metagraph data structure, where time points or events are represented by vertices, and directed edges are labelled with < or ≤. They are interested in fast algorithms for determining the strongest relation between two events. They begin by developing fast algorithms for the case where all points lie on a chain. In this paper, we are interested in a generalization of this, namely we consider the problem of finding the maximum "distance" between two vertices in a chain; this problem arises in real world applications such as in process control and crew scheduling. We describe an O(n) time preprocessing algorithm for the maximum distance problem on chains. It allows queries for the maximum number of < edges between two vertices to be answered in O(1) time. This matches the performance of the algorithm of Gerevini and Schubert for determining the strongest relation holding between two vertices in a chain.
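For the single-chain case described above, an O(n)-preprocessing / O(1)-query scheme amounts to prefix counts of the strict < edges. A sketch of the chain case only, under that reading of the problem (the paper's full algorithm also handles the metagraph connecting chains):

```python
def preprocess(labels):
    """labels[k] is '<' or '<=' for the chain edge from vertex k to k+1.
    O(n) preprocessing: prefix counts of the strict '<' edges."""
    prefix = [0]
    for lab in labels:
        prefix.append(prefix[-1] + (1 if lab == '<' else 0))
    return prefix

def max_distance(prefix, i, j):
    """O(1) query: maximum number of '<' edges between vertices i <= j."""
    return prefix[j] - prefix[i]
```

Usage: for the chain 0 <' 1 <=' 2 <' 3 <' 4, `max_distance(preprocess(['<', '<=', '<', '<']), 0, 4)` counts the three strict edges in constant time.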
Theoretical study of rock mass investigation efficiency
International Nuclear Information System (INIS)
Holmen, Johan G.; Outters, Nils
2002-05-01
The study concerns a mathematical modelling of a fractured rock mass and its investigation by use of theoretical boreholes and rock surfaces, with the purpose of analysing the efficiency (precision) of such investigations and determining the amount of investigation necessary to obtain reliable estimations of the structural-geological parameters of the studied rock mass. The study is not about estimating suitable sample sizes to be used in site investigations. The purpose of the study is to analyse the amount of information necessary for deriving estimates of the geological parameters studied, within defined confidence intervals and at a defined confidence level. In other words, how the confidence in models of the rock mass (considering a selected number of parameters) will change with the amount of information collected from boreholes and surfaces. The study is limited to a selected number of geometrical structural-geological parameters: fracture orientation, i.e. mean direction and dispersion (Fisher Kappa and SRI); different measures of fracture density (P10, P21 and P32); and fracture trace-length and strike distributions as seen on horizontal windows. A numerical Discrete Fracture Network (DFN) was used for representation of a fractured rock mass. The DFN model was primarily based on the properties of an actual fracture network investigated at the Aespoe Hard Rock Laboratory. The rock mass studied (DFN model) contained three different fracture sets with different orientations and fracture densities. The rock unit studied was statistically homogeneous. The study includes a limited sensitivity analysis of the properties of the DFN model. The study is a theoretical and computer-based comparison between samples of fracture properties of a theoretical rock unit and the known true properties of the same unit. The samples are derived from numerically generated boreholes and surfaces that intersect the DFN network. Two different boreholes are analysed: a vertical borehole and a borehole that is
International Nuclear Information System (INIS)
Wenzel, H; Crump, P; Pietrzak, A; Wang, X; Erbert, G; Traenkle, G
2010-01-01
The factors that limit both the continuous wave (CW) and the pulsed output power of broad-area laser diodes driven at very high currents are investigated theoretically and experimentally. The decrease in the gain due to self-heating under CW operation and spectral holeburning under pulsed operation, as well as heterobarrier carrier leakage and longitudinal spatial holeburning, are the dominant mechanisms limiting the maximum achievable output power.
DEFF Research Database (Denmark)
Bjørk, Rasmus; Nielsen, Kaspar Kirstein
2017-01-01
The maximum efficiency for photovoltaic (PV) and thermoelectric generator (TEG) systems without concentration is investigated. Both a combined system where the TEG is mounted directly on the back of the PV and a tandem system where the incoming sunlight is split, and the short wavelength radiation...
Estimation of the Maximum Theoretical Productivity of Fed-Batch Bioreactors
Energy Technology Data Exchange (ETDEWEB)
Bomble, Yannick J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); St. John, Peter C [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Crowley, Michael F [National Renewable Energy Laboratory (NREL), Golden, CO (United States)
2017-10-18
A key step towards the development of an integrated biorefinery is the screening of economically viable processes, which depends sharply on the yields and productivities that can be achieved by an engineered microorganism. In this study, we extend an earlier method which used dynamic optimization to find the maximum theoretical productivity of batch cultures to explicitly include fed-batch bioreactors. In addition to optimizing the intracellular distribution of metabolites between cell growth and product formation, we calculate the optimal control trajectory of feed rate versus time. We further analyze how sensitive the productivity is to substrate uptake and growth parameters.
Maximum herd efficiency in meat production II. The influence of ...
African Journals Online (AJOL)
surface in terms of plots of total efficiency against percentages of mature body .... Dickerson (1978) shows that, for cattle and sheep, the energy .... protein metabolism. ... metric slope b is a scale-free parameter is convenient and .... Simulation.
International Nuclear Information System (INIS)
Fathabadi, Hassan
2016-01-01
Highlights: • Novel sensorless MPPT technique without the drawbacks of other sensor/sensorless methods. • Tracks the actual MPP of WECSs, not the MPP of their wind turbines. • Actually extracts the highest output power from WECSs. • MPPT efficiency of more than 98.5% for WECSs. • Short convergence time for WECSs. - Abstract: In this study, a novel, highly accurate sensorless maximum power point tracking (MPPT) method is proposed. The technique tracks the actual maximum power point of a wind energy conversion system (WECS), at which maximum output power is extracted from the system, not the maximum power point of its wind turbine, at which maximum mechanical power is obtained from the turbine, so it actually extracts the highest output power from the system. The technique only uses the input voltage and current of the converter used in the system; it neither needs any speed sensors (anemometer and tachometer) nor has the drawbacks of other sensor/sensorless MPPT methods. The technique has been implemented as an MPPT controller by constructing a WECS. Theoretical results, the technique's performance, and its advantages are validated by presenting real experimental results. The real static-dynamic response of the MPPT controller, obtained experimentally, verifies that the proposed MPPT technique accurately extracts the highest instantaneous power from wind energy conversion systems, with an MPPT efficiency of more than 98.5% and a short convergence time of only 25 s for the constructed system, which has a total inertia of 3.93 kg·m² and a friction coefficient of 0.014 N·m·s.
Theoretical study on device efficiency of pulsed liquid jet pump
International Nuclear Information System (INIS)
Gao Chuanchang; Lu Hongqi; Wang Shicheng; Cheng Mingchuan
2001-01-01
The influence of the main factors on the device efficiency of a pulsed liquid jet pump with a gas-liquid piston is analysed, and the theoretical equation for the device efficiency of the pulsed liquid jet pump, together with its time-averaged solution, is derived. The theoretical and experimental results show that, for the same liquid jet pump device, using a pulsed jet instead of a steady jet greatly raises the efficiency of energy and mass transfer. The calculated time-averaged efficiencies of the pulsed liquid jet pump are approximately in agreement with experimental results obtained both domestically and abroad.
Maximum herd efficiency in meat production I. Optima for slaughter ...
African Journals Online (AJOL)
Profit rate for a meat production enterprise can be decomposedinto the unit price for meat and herd ... supply and demand, whereas breeding improvement is gen- ... Herd efficiency is total live mass for slaughter divided by costs .... tenance and above-maintenance components by Dickerson, and ..... Growth and productivity.
Ouerdane, H.; Apertet, Y.; Goupil, C.; Lecoeur, Ph.
2015-07-01
Classical equilibrium thermodynamics is a theory of principles, which was built from empirical knowledge and debates on the nature and the use of heat as a means to produce motive power. By the beginning of the 20th century, the principles of thermodynamics were summarized into the so-called four laws, which were, as it turns out, definitive negative answers to the doomed quests for perpetual motion machines. As a matter of fact, one result of Sadi Carnot's work was precisely that the heat-to-work conversion process is fundamentally limited; as such, it is considered as a first version of the second law of thermodynamics. Although it was derived from Carnot's unrealistic model, the upper bound on the thermodynamic conversion efficiency, known as the Carnot efficiency, became a paradigm as the next target after the failure of the perpetual motion ideal. In the 1950s, Jacques Yvon published a conference paper containing the necessary ingredients for a new class of models, and even a formula, not so different from that of Carnot's efficiency, which later would become the new efficiency reference. Yvon's first analysis of a model of engine producing power, connected to a heat source and sink through heat exchangers, went fairly unnoticed for twenty years, until Frank Curzon and Boye Ahlborn published their pedagogical paper about the effect of finite heat transfer on output power limitation and their derivation of the efficiency at maximum power, now mostly known as the Curzon-Ahlborn (CA) efficiency. The notion of finite rate explicitly introduced time in thermodynamics, and its significance cannot be overlooked, as shown by the wealth of works devoted to what is now known as finite-time thermodynamics since the end of the 1970s. The favorable comparison of the CA efficiency to actual values led many to consider it as a universal upper bound for real heat engines, but things are not so straightforward that a simple formula may account for a variety of situations. The
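The CA formula differs from Carnot's only by a square root; a quick numerical comparison (the reservoir temperatures are illustrative round numbers of the kind used in Curzon and Ahlborn's plant comparisons, not figures from this text):

```python
import math

def carnot_efficiency(t_cold, t_hot):
    return 1.0 - t_cold / t_hot

def curzon_ahlborn_efficiency(t_cold, t_hot):
    """Efficiency at maximum power: eta_CA = 1 - sqrt(Tc/Th)."""
    return 1.0 - math.sqrt(t_cold / t_hot)

# Reservoirs at roughly 298 K and 838 K:
eta_c = carnot_efficiency(298.0, 838.0)            # ~0.64
eta_ca = curzon_ahlborn_efficiency(298.0, 838.0)   # ~0.40
```

The CA value sits much closer to typical observed plant efficiencies than the Carnot value, which is the "favorable comparison" the passage refers to.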
Efficient algorithms for maximum likelihood decoding in the surface code
Bravyi, Sergey; Suchara, Martin; Vargo, Alexander
2014-09-01
We describe two implementations of the optimal error correction algorithm known as the maximum likelihood decoder (MLD) for the two-dimensional surface code with noiseless syndrome extraction. First, we show how to implement MLD exactly in time O(n²), where n is the number of code qubits. Our implementation uses a reduction from MLD to the simulation of matchgate quantum circuits. This reduction, however, requires a special noise model with independent bit-flip and phase-flip errors. Second, we show how to implement MLD approximately for more general noise models using matrix product states (MPS). Our implementation has running time O(nχ³), where χ is a parameter that controls the approximation precision. The key step of our algorithm, borrowed from the density matrix renormalization-group method, is a subroutine for contracting a tensor network on the two-dimensional grid. The subroutine uses MPS with a bond dimension χ to approximate the sequence of tensors arising in the course of contraction. We benchmark the MPS-based decoder against the standard minimum weight matching decoder, observing a significant reduction of the logical error probability for χ ≥ 4.
Combustion phasing for maximum efficiency for conventional and high efficiency engines
International Nuclear Information System (INIS)
Caton, Jerald A.
2014-01-01
Highlights: • Combustion phasing for max efficiency is a function of engine parameters. • Combustion phasing is most affected by heat transfer, compression ratio, burn duration. • Combustion phasing is less affected by speed, load, equivalence ratio and EGR. • Combustion phasing for a high efficiency engine was more advanced. • Exergy destruction during combustion as functions of combustion phasing is reported. - Abstract: The importance of the phasing of the combustion event for internal-combustion engines is well appreciated, but quantitative details are sparse. The objective of the current work was to examine the optimum combustion phasing (based on maximum bmep) as functions of engine design and operating variables. A thermodynamic engine cycle simulation was used to complete this assessment. As metrics for the combustion phasing, both the crank angle for 50% fuel mass burned (CA50) and the crank angle for peak pressure (CApp) are reported as functions of the engine variables. In contrast to common statements in the literature, the optimum CA50 and CApp vary depending on the design and operating variables. Optimum, as used in this paper, refers to the combustion timing that provides the maximum bmep and brake thermal efficiency (MBT timing). For this work, the variables with the greatest influence on the optimum CA50 and CApp were the heat transfer level, the burn duration and the compression ratio. Other variables such as equivalence ratio, EGR level, engine speed and engine load had a much smaller impact on the optimum CA50 and CApp. For the conventional engine, for the conditions examined, the optimum CA50 varied between about 5 and 11°aTDC, and the optimum CApp varied between about 9 and 16°aTDC. For a high efficiency engine (high dilution, high compression ratio), the optimum CA50 was 2.5°aTDC, and the optimum CApp was 7.8°aTDC. These more advanced values for the optimum CA50 and CApp for the high efficiency engine were
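In cycle simulations of this kind, CA50 is typically read off a Wiebe mass-fraction-burned profile. A sketch, assuming illustrative Wiebe parameters (a = 5, m = 2) and a −10°aTDC start with a 30° burn duration, none of which are values from this paper:

```python
import math

def wiebe_mfb(theta, theta0, duration, a=5.0, m=2.0):
    """Wiebe-function mass fraction burned at crank angle theta (degrees)."""
    if theta <= theta0:
        return 0.0
    x = min((theta - theta0) / duration, 1.0)
    return 1.0 - math.exp(-a * x ** (m + 1.0))

def ca50(theta0, duration, a=5.0, m=2.0):
    """Crank angle at which half the fuel mass is burned, by bisection."""
    lo, hi = theta0, theta0 + duration
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if wiebe_mfb(mid, theta0, duration, a, m) < 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Start of combustion -10 deg aTDC with a 30 deg burn gives CA50 ~ 5.5 deg
# aTDC, inside the 5-11 deg aTDC range reported for the conventional engine.
```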
Li, Zijian
2018-08-01
To evaluate whether pesticide maximum residue limits (MRLs) can protect public health, a deterministic dietary risk assessment of maximum pesticide legal exposure was conducted to convert global MRLs to theoretical maximum dose intake (TMDI) values by estimating the average food intake rate and human body weight for each country. A total of 114 nations (58% of the total nations in the world) and two international organizations, including the European Union (EU) and Codex (WHO) have regulated at least one of the most currently used pesticides in at least one of the most consumed agricultural commodities. In this study, 14 of the most commonly used pesticides and 12 of the most commonly consumed agricultural commodities were identified and selected for analysis. A health risk analysis indicated that nearly 30% of the computed pesticide TMDI values were greater than the acceptable daily intake (ADI) values; however, many nations lack common pesticide MRLs in many commonly consumed foods and other human exposure pathways, such as soil, water, and air were not considered. Normality tests of the TMDI values set indicated that all distributions had a right skewness due to large TMDI clusters at the low end of the distribution, which were caused by some strict pesticide MRLs regulated by the EU (normally a default MRL of 0.01 mg/kg when essential data are missing). The Box-Cox transformation and optimal lambda (λ) were applied to these TMDI distributions, and normality tests of the transformed data set indicated that the power transformed TMDI values of at least eight pesticides presented a normal distribution. It was concluded that unifying strict pesticide MRLs by nations worldwide could significantly skew the distribution of TMDI values to the right, lower the legal exposure to pesticide, and effectively control human health risks. Copyright © 2018 Elsevier Ltd. All rights reserved.
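The TMDI conversion itself is a weighted sum of MRLs over consumed commodities, normalized by body weight. A minimal sketch (the MRLs, intake rates, and 60 kg body weight below are hypothetical illustration values, not figures from the study):

```python
def tmdi(mrls_mg_per_kg, intakes_kg_per_day, body_weight_kg):
    """Theoretical maximum daily intake of one pesticide, in mg per kg
    body weight per day: sum over commodities of MRL_i * intake_i,
    divided by body weight."""
    total = sum(m * f for m, f in zip(mrls_mg_per_kg, intakes_kg_per_day))
    return total / body_weight_kg

# Hypothetical single-pesticide example over three commodities:
exposure = tmdi([0.05, 0.01, 0.5], [0.2, 0.3, 0.05], 60.0)
# the resulting mg/kg bw/day figure is then compared against the ADI
```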
Inform: Efficient Information-Theoretic Analysis of Collective Behaviors
Directory of Open Access Journals (Sweden)
Douglas G. Moore
2018-06-01
Full Text Available The study of collective behavior has traditionally relied on a variety of different methodological tools, ranging from more theoretical methods such as population or game-theoretic models to empirical ones like Monte Carlo or multi-agent simulations. An approach that is increasingly being explored is the use of information theory as a methodological framework to study the flow of information and the statistical properties of collectives of interacting agents. While a few general-purpose toolkits exist, most of the existing software for information-theoretic analysis of collective systems is limited in scope. We introduce Inform, an open-source framework for efficient information-theoretic analysis that exploits the computational power of a C library while simplifying its use through a variety of wrappers for common higher-level scripting languages. We focus on two such wrappers here: PyInform (Python) and rinform (R). Inform and its wrappers are cross-platform and general-purpose. They include classical information-theoretic measures, measures of information dynamics, and information-based methods to study the statistical behavior of collective systems, and they expose a lower-level API that allows users to construct measures of their own. We describe the architecture of the Inform framework, study its computational efficiency, and use it to analyze three different case studies of collective behavior: biochemical information storage in regenerating planaria, nest-site selection in the ant Temnothorax rugatulus, and collective decision making in multi-agent simulations.
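As an illustration of the kind of classical information-theoretic measure such toolkits expose, here is a minimal plug-in estimator of mutual information between two discrete time series. It is a from-scratch sketch, not the Inform or PyInform API:

```python
import math
from collections import Counter

def mutual_info(xs, ys):
    """Plug-in (maximum-likelihood) estimate of the mutual information,
    in bits, between two equal-length discrete time series."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))          # joint counts
    px, py = Counter(xs), Counter(ys)   # marginal counts
    mi = 0.0
    for (x, y), c in pxy.items():
        # (c/n) * log2( p(x,y) / (p(x) p(y)) ), written with raw counts
        mi += (c / n) * math.log2(c * n / (px[x] * py[y]))
    return mi
```

Two identical balanced binary series give 1 bit; an independent constant series gives 0, matching the intuition that mutual information quantifies shared structure between agents' state sequences.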
International Nuclear Information System (INIS)
Kempf, Nicholas; Zhang, Yanliang
2016-01-01
Highlights: • A three-dimensional automotive thermoelectric generator (TEG) model is developed. • Heat exchanger design and TEG configuration are optimized for maximum fuel efficiency increase. • Heat exchanger conductivity has a strong influence on maximum fuel efficiency increase. • TEG aspect ratio and fin height increase with heat exchanger thermal conductivity. • A 2.5% fuel efficiency increase is attainable with nanostructured half-Heusler modules. - Abstract: Automotive fuel efficiency can be increased by thermoelectric power generation using exhaust waste heat. A high-temperature thermoelectric generator (TEG) that converts engine exhaust waste heat into electricity is simulated based on a light-duty passenger vehicle with a 4-cylinder gasoline engine. Strategies to optimize TEG configuration and heat exchanger design for maximum fuel efficiency improvement are provided. Through comparison of stainless steel and silicon carbide heat exchangers, it is found that both the optimal TEG design and the maximum fuel efficiency increase are highly dependent on the thermal conductivity of the heat exchanger material. A significantly higher fuel efficiency increase can be obtained using silicon carbide heat exchangers with taller fins and a longer TEG along the exhaust flow direction when compared to stainless steel heat exchangers. Accounting for major parasitic losses, a maximum fuel efficiency increase of 2.5% is achievable using newly developed nanostructured bulk half-Heusler thermoelectric modules.
Rocco, Paolo; Cilurzo, Francesco; Minghetti, Paola; Vistoli, Giulio; Pedretti, Alessandro
2017-10-01
The data presented in this article are related to the article titled "Molecular Dynamics as a tool for in silico screening of skin permeability" (Rocco et al., 2017) [1]. Knowledge of the confidence interval and maximum theoretical value of the correlation coefficient r can prove useful to estimate the reliability of developed predictive models, in particular when there is great variability in compiled experimental datasets. In this Data in Brief article, data from purposely designed numerical simulations are presented to show how much the maximum r value is worsened by increasing the data uncertainty. The corresponding confidence interval of r is determined by using the Fisher r → Z transform.
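The Fisher r → Z transform mentioned above yields a confidence interval for a correlation coefficient directly; a minimal sketch (the default critical value 1.96 assumes a 95% interval and approximately normal Z):

```python
import math

def r_confidence_interval(r, n, z_crit=1.959964):
    """Confidence interval for a Pearson correlation r estimated from n
    paired observations, via the Fisher r -> Z transform."""
    z = math.atanh(r)               # Fisher Z
    se = 1.0 / math.sqrt(n - 3)     # standard error of Z
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)
```

For r = 0.8 from n = 50 pairs this gives roughly (0.67, 0.88), illustrating how wide the interval remains even for a strong correlation, which is the kind of reliability question the article's simulations address.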
Design of Asymmetrical Relay Resonators for Maximum Efficiency of Wireless Power Transfer
Directory of Open Access Journals (Sweden)
Bo-Hee Choi
2016-01-01
Full Text Available This paper presents a new design method of asymmetrical relay resonators for maximum wireless power transfer. A new design method for relay resonators is demanded because the maximum power transfer efficiency (PTE) is not obtained at the resonant frequency of the unit resonator. The maximum PTE for relay resonators is obtained at resonances different from those of the unit resonator. The optimum design of the asymmetrical relay is conducted by both the optimum placement and the optimum capacitance of the resonators. The optimum placement is found by scanning the positions of the relays, and the optimum capacitance can be found by using a genetic algorithm (GA). The PTEs are enhanced when the capacitance is optimally designed by the GA according to the position of the relays, and maximum efficiency is then obtained at the optimum placement of the relays. The capacitance of the second to nth resonators and the load resistance should be determined for maximum efficiency, while the capacitance of the first resonator and the source resistance are obtained for impedance matching. The simulated and measured results are in good agreement.
Directory of Open Access Journals (Sweden)
Axel Kleidon
2010-03-01
Full Text Available The Maximum Entropy Production (MEP) principle has been remarkably successful in producing accurate predictions for non-equilibrium states. We argue that this is because the MEP principle is an effective inference procedure that produces the best predictions from the available information. Since all Earth system processes are subject to the conservation of energy, mass and momentum, we argue that in practical terms the MEP principle should be applied to Earth system processes in terms of the already established framework of non-equilibrium thermodynamics, with the assumption of local thermodynamic equilibrium at the appropriate scales.
Liarte, Danilo B.; Posen, Sam; Transtrum, Mark K.; Catelani, Gianluigi; Liepe, Matthias; Sethna, James P.
2017-03-01
Theoretical limits to the performance of superconductors in high magnetic fields parallel to their surfaces are of key relevance to current and future accelerating cavities, especially those made of new higher-Tc materials such as Nb3Sn, NbN, and MgB2. Indeed, beyond the so-called superheating field H_sh, flux will spontaneously penetrate even a perfect superconducting surface and ruin the performance. We present intuitive arguments and simple estimates for H_sh, and combine them with our previous rigorous calculations, which we summarize. We briefly discuss experimental measurements of the superheating field, comparing to our estimates. We explore the effects of materials anisotropy and the danger of disorder in nucleating vortex entry. Will we need to control surface orientation in the layered compound MgB2? Can we estimate theoretically whether dirt and defects make these new materials fundamentally more challenging to optimize than niobium? Finally, we discuss and analyze recent proposals to use thin superconducting layers or laminates to enhance the performance of superconducting cavities. Flux entering a laminate can lead to so-called pancake vortices; we consider the physics of the dislocation motion and potential re-annihilation or stabilization of these vortices after their entry.
Theoretical considerations on maximum running speeds for large and small animals.
Fuentes, Mauricio A
2016-02-07
Mechanical equations for fast running speeds are presented and analyzed. One of the equations and its associated model predict that animals tend to experience larger mechanical stresses in their limbs (muscles, tendons and bones) as a result of larger stride lengths, suggesting a structural restriction entailing the existence of an absolute maximum possible stride length. The consequence for big animals is that an increasingly larger body mass implies decreasing maximal speeds, given that the stride frequency generally decreases for increasingly larger animals. Another restriction, acting on small animals, is discussed only in preliminary terms, but it seems safe to assume from previous studies that for a given range of body masses of small animals, those which are bigger are faster. The difference between speed scaling trends for large and small animals implies the existence of a range of intermediate body masses corresponding to the fastest animals. Copyright © 2015 Elsevier Ltd. All rights reserved.
Parametric characteristics of a solar thermophotovoltaic system at the maximum efficiency
International Nuclear Information System (INIS)
Liao, Tianjun; Chen, Xiaohang; Yang, Zhimin; Lin, Bihong; Chen, Jincan
2016-01-01
Graphical abstract: A model of the far-field TPVC driven by solar energy, which consists of an optical concentrator, an absorber, an emitter, and a PV cell and is simply referred to as the far-field STPVS. - Highlights: • A model of the far-field solar thermophotovoltaic system (STPVS) is established. • External and internal irreversible losses are considered. • The maximum efficiency of the STPVS is calculated. • Optimal values of key parameters at the maximum efficiency are determined. • Effects of the concentrator factor on the performance of the system are discussed. - Abstract: A model of the solar thermophotovoltaic system (STPVS) consisting of an optical concentrator, a thermal absorber, an emitter, and a photovoltaic (PV) cell is proposed, where the far-field thermal emission between the emitter and the PV cell, the radiation losses from the absorber and emitter to the environment, the reflected loss from the absorber, and the finite-rate heat exchange between the PV cell and the environment are taken into account. Analytical expressions for the power output and overall efficiency of the STPVS are derived. By solving thermal equilibrium equations, the operating temperatures of the emitter and PV cell are determined and the maximum efficiency of the system is calculated numerically for given values of the output voltage of the PV cell and the ratio of the front surface area of the absorber to that of the emitter. For different bandgaps, the maximum efficiencies of the system are calculated and the corresponding optimum values of several operating parameters are obtained. The effects of the concentrator factor on the optimum performance of the system are also discussed.
Energy-Efficient Algorithm for Sensor Networks with Non-Uniform Maximum Transmission Range
Directory of Open Access Journals (Sweden)
Yimin Yu
2011-06-01
Full Text Available In wireless sensor networks (WSNs), the energy hole problem is a key factor affecting the network lifetime. In a circular multi-hop sensor network (modeled as concentric coronas), choosing optimal transmission ranges for all coronas can effectively improve network lifetime. In this paper, we investigate WSNs with non-uniform maximum transmission ranges, where sensor nodes deployed in different regions may differ in their maximum transmission range. We then propose an Energy-efficient algorithm for Non-uniform Maximum Transmission range (ENMT), which searches for approximately optimal transmission ranges of all coronas in order to prolong network lifetime. Furthermore, the simulation results indicate that ENMT performs better than other algorithms.
Toward Improved Rotor-Only Axial Fans—Part II: Design Optimization for Maximum Efficiency
DEFF Research Database (Denmark)
Sørensen, Dan Nørtoft; Thompson, M. C.; Sørensen, Jens Nørkær
2000-01-01
Numerical design optimization of the aerodynamic performance of axial fans is carried out, maximizing the efficiency in a design interval of flow rates. Tip radius, number of blades, and angular velocity of the rotor are fixed, whereas the hub radius and spanwise distributions of chord length, stagger angle, and camber angle are varied to find the optimum rotor geometry. Constraints ensure a pressure rise above a specified target and an angle of attack on the blades below stall. The optimization scheme is used to investigate the dependence of maximum efficiency on the width of the design interval…
International Nuclear Information System (INIS)
Dong, Qingchun; Liao, Tianjun; Yang, Zhimin; Chen, Xiaohang; Chen, Jincan
2017-01-01
Graphical abstract: The overall model of the solar thermophotovoltaic cell (STPVC) composed of an optical lens, an absorber, an emitter, and a photovoltaic (PV) cell with an integrated back-side reflector is updated to include various irreversible losses. - Highlights: • A new model of the irreversible solar thermophotovoltaic system is proposed. • The material and structure parameters of the system are considered. • The performance characteristics at the maximum efficiency are revealed. • The optimal values of key parameters are determined. • The system can obtain a large efficiency under a relatively low concentration ratio. - Abstract: The overall model of the solar thermophotovoltaic cell (STPVC) composed of an optical lens, an absorber, an emitter, and a photovoltaic (PV) cell with an integrated back-side reflector is updated to include various irreversible losses. The power output and efficiency of the cell are analytically derived. The performance characteristics of the STPVC at the maximum efficiency are revealed. The optimum values of several important parameters, such as the voltage output of the PV cell, the area ratio of the absorber to the emitter, and the band-gap of the semiconductor material, are determined. It is found that under the condition of a relatively low concentration ratio, the optimally designed STPVC can obtain a relatively large efficiency.
Dang Chien, Nguyen; Shih, Chun-Hsing; Hoa, Phu Chi; Minh, Nguyen Hong; Thi Thanh Hien, Duong; Nhung, Le Hong
2016-06-01
The two-band Kane model has been popularly used to calculate the band-to-band tunneling (BTBT) current in the tunnel field-effect transistor (TFET), which is currently considered a promising candidate for low power applications. This study theoretically clarifies the maximum electric field approximation (MEFA) of the direct BTBT Kane model and evaluates its appropriateness for low bandgap semiconductors. By analysing the physical origin of each electric field term in the Kane model, it is elucidated that in the MEFA the local electric field term must be retained while the nonlocal electric field terms are assigned the maximum value of the electric field at the tunnel junction. Mathematical investigations have shown that the MEFA is more appropriate for low bandgap semiconductors than for high bandgap materials because of the enhanced tunneling probability in low field regions. The appropriateness of the MEFA is very useful for practical use in quickly estimating the direct BTBT current in low bandgap TFET devices.
Iyyappan, I.; Ponmurugan, M.
2018-03-01
A trade-off figure of merit (Ω̇) criterion accounts for the best compromise between the useful input energy and the lost input energy of heat devices. When the heat engine is working at the maximum Ω̇ criterion, its efficiency increases significantly over the efficiency at maximum power. We derive the general relations between the power, the efficiency at maximum Ω̇ criterion, and the minimum dissipation for the linear irreversible heat engine. The efficiency at maximum Ω̇ criterion has the lower bound…
Yang, Li; Wang, Guobao; Qi, Jinyi
2016-04-01
Detecting cancerous lesions is a major clinical application of emission tomography. In a previous work, we studied penalized maximum-likelihood (PML) image reconstruction for lesion detection in static PET. Here we extend our theoretical analysis of static PET reconstruction to dynamic PET. We study both the conventional indirect reconstruction and direct reconstruction for Patlak parametric image estimation. In indirect reconstruction, Patlak parametric images are generated by first reconstructing a sequence of dynamic PET images, and then performing Patlak analysis on the time activity curves (TACs) pixel-by-pixel. In direct reconstruction, Patlak parametric images are estimated directly from raw sinogram data by incorporating the Patlak model into the image reconstruction procedure. PML reconstruction is used in both the indirect and direct reconstruction methods. We use a channelized Hotelling observer (CHO) to assess lesion detectability in Patlak parametric images. Simplified expressions for evaluating the lesion detectability have been derived and applied to the selection of the regularization parameter value to maximize detection performance. The proposed method is validated using computer-based Monte Carlo simulations. Good agreements between the theoretical predictions and the Monte Carlo results are observed. Both theoretical predictions and Monte Carlo simulation results show the benefit of the indirect and direct methods under optimized regularization parameters in dynamic PET reconstruction for lesion detection, when compared with the conventional static PET reconstruction.
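The pixel-by-pixel Patlak analysis used in the indirect method is an ordinary least-squares line fit of C(t)/Cp(t) against the normalized running integral of the plasma input, over the late (linear-phase) frames. A minimal single-voxel sketch, with illustrative variable names and a trapezoidal input integral standing in for the authors' implementation:

```python
def patlak_fit(t, ct, cp, t_star):
    """Single-voxel Patlak fit: regress C(t)/Cp(t) on cumint(Cp)/Cp(t)
    using frames with t >= t_star. Returns (Ki, V): the net influx rate
    (slope) and the intercept. Inputs are plain lists: frame times t,
    tissue TAC ct, plasma input cp, and linear-phase start t_star."""
    # running trapezoidal integral of the plasma input Cp
    cum = [0.0]
    for i in range(1, len(t)):
        cum.append(cum[-1] + 0.5 * (cp[i] + cp[i - 1]) * (t[i] - t[i - 1]))
    pts = [(cum[i] / cp[i], ct[i] / cp[i])
           for i in range(len(t)) if t[i] >= t_star]
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    ki = (sum((x - mx) * (y - my) for x, y in pts)
          / sum((x - mx) ** 2 for x, _ in pts))
    return ki, my - ki * mx
```

In the direct method described above, this linear model is folded into the reconstruction itself rather than applied to reconstructed TACs.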
Maximum Efficiency per Torque Control of Permanent-Magnet Synchronous Machines
Directory of Open Access Journals (Sweden)
Qingbo Guo
2016-12-01
Full Text Available High-efficiency permanent-magnet synchronous machine (PMSM) drive systems need not only optimally designed motors but also efficiency-oriented control strategies. However, the existing control strategies focus only on partial loss optimization. This paper proposes a novel analytic loss model of a PMSM in either sine-wave pulse-width modulation (SPWM) or space vector pulse-width modulation (SVPWM) which can take into account both the fundamental loss and the harmonic loss. The fundamental loss is divided into fundamental copper loss and fundamental iron loss, which is estimated by the average flux density in the stator tooth and yoke. In addition, the harmonic loss is obtained from the Bertotti iron loss formula by the harmonic voltages of the three-phase inverter in either SPWM or SVPWM, which are calculated by double Fourier integral analysis. Based on the analytic loss model, this paper proposes a maximum efficiency per torque (MEPT) control strategy which can minimize the electromagnetic loss of the PMSM over the whole operation range. As the loss model of the PMSM is too complicated to obtain an analytical solution for the optimal loss, a golden section method is applied to locate the optimal operation point accurately, which makes the PMSM work at maximum efficiency. The optimized results between SPWM and SVPWM show that MEPT in SVPWM has a better optimization performance. Both the theoretical analysis and experimental results show that MEPT control can significantly improve the efficiency of the PMSM in each operating condition with satisfactory dynamic performance.
DEFF Research Database (Denmark)
Danieli, Matteo; Forchhammer, Søren; Andersen, Jakob Dahl
2010-01-01
Modern mobile telecommunication systems, such as 3GPP LTE, make use of Hybrid Automatic Repeat reQuest (HARQ) for efficient and reliable communication between base stations and mobile terminals. To this purpose, marginal posterior probabilities of the received bits are stored in the form of log-likelihood ratio (LLR) values… analysis leads to using maximum mutual information (MMI) as the optimality criterion and in turn Kullback-Leibler (KL) divergence as the distortion measure. Simulations run based on an LTE-like system have proven that VQ can be implemented in a computationally simple way at low rates of 2-3 bits per LLR value…
Efficiency of Photovoltaic Maximum Power Point Tracking Controller Based on a Fuzzy Logic
Directory of Open Access Journals (Sweden)
Ammar Al-Gizi
2017-07-01
Full Text Available This paper examines the efficiency of a fuzzy logic control (FLC) based maximum power point tracking (MPPT) of a photovoltaic (PV) system under variable climate conditions and connected load requirements. The PV system, including a PV module BP SX150S, a buck-boost DC-DC converter, an MPPT, and a resistive load, is modeled and simulated using the Matlab/Simulink package. In order to compare the performance of the FLC-based MPPT controller with the conventional perturb and observe (P&O) method at different irradiation (G), temperature (T), and connected load (RL) variations, the rising time (tr), recovering time, total average power, and MPPT efficiency metrics are calculated. The simulation results show that the FLC-based MPPT method can quickly track the maximum power point (MPP) of the PV module in the transient state and effectively eliminates the power oscillation around the MPP of the PV module in the steady state; hence more average power can be extracted, in comparison with the conventional P&O method.
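The conventional P&O baseline perturbs the operating voltage and keeps the perturbation direction while measured power rises, reversing it when power drops. A minimal sketch, with an illustrative concave power curve standing in for the PV module and converter (not the paper's Simulink model):

```python
def perturb_and_observe(measure, v0=12.0, dv=0.1, steps=200):
    """Minimal P&O tracker: walk the operating voltage toward the
    maximum power point. `measure(v)` returns PV power at voltage v."""
    v, direction = v0, 1.0
    p_prev = measure(v)
    for _ in range(steps):
        v += direction * dv
        p = measure(v)
        if p < p_prev:          # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

# illustrative concave power curve with its maximum at 17.2 V
v_mpp = perturb_and_observe(lambda v: 100.0 - (v - 17.2) ** 2)
```

The tracker settles into a small oscillation of width dv around the maximum power point; that steady-state oscillation is exactly what the FLC-based controller is reported to eliminate.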
Efficient Photovoltaic System Maximum Power Point Tracking Using a New Technique
Directory of Open Access Journals (Sweden)
Mehdi Seyedmahmoudian
2016-03-01
Full Text Available Partial shading is an unavoidable condition which significantly reduces the efficiency and stability of a photovoltaic (PV) system. When partial shading occurs, the system has multiple-peak output power characteristics. In order to track the global maximum power point (GMPP) within an appropriate period, a reliable technique is required. Conventional techniques such as hill climbing and perturbation and observation (P&O) are inadequate in tracking the GMPP under this condition, resulting in a dramatic reduction in the efficiency of the PV system. Recent artificial intelligence methods have been proposed; however, they have a higher computational cost, slower processing time, and increased oscillations, which result in further instability at the output of the PV system. This paper proposes a fast and efficient technique based on Radial Movement Optimization (RMO) for detecting the GMPP under partial shading conditions. The paper begins with a brief description of the behavior of PV systems under partial shading conditions, followed by the introduction of the new RMO-based technique for GMPP tracking. Finally, results are presented to demonstrate the performance of the proposed technique under different partial shading conditions. The results are compared with those of the PSO method, one of the most widely used methods in the literature. Four factors, namely convergence speed, efficiency (power loss reduction), stability (oscillation reduction), and computational cost, are considered in the comparison with the PSO technique.
Quamruzzaman, M.; Mohammad, Nur; Matin, M. A.; Alam, M. R.
2016-10-01
Solar photovoltaics (PVs) have nonlinear voltage-current characteristics, with a distinct maximum power point (MPP) depending on factors such as solar irradiance and operating temperature. To extract maximum power from the PV array at any environmental condition, DC-DC converters are usually used as MPP trackers. This paper presents the performance analysis of a coupled inductor single-ended primary inductance converter for maximum power point tracking (MPPT) in a PV system. A detailed model of the system has been designed and developed in MATLAB/Simulink. The performance evaluation has been conducted on the basis of stability, current ripple reduction and efficiency at different operating conditions. Simulation results show considerable ripple reduction in the input and output currents of the converter. Both the MPPT and converter efficiencies are significantly improved. The obtained simulation results validate the effectiveness and suitability of the converter model in MPPT and show reasonable agreement with the theoretical analysis.
Smolin, John A; Gambetta, Jay M; Smith, Graeme
2012-02-17
We provide an efficient method for computing the maximum-likelihood mixed quantum state (with density matrix ρ) given a set of measurement outcomes in a complete orthonormal operator basis subject to Gaussian noise. Our method works by first changing basis yielding a candidate density matrix μ which may have nonphysical (negative) eigenvalues, and then finding the nearest physical state under the 2-norm. Our algorithm takes at worst O(d^4) for the basis change plus O(d^3) for finding ρ, where d is the dimension of the quantum state. In the special case where the measurement basis is strings of Pauli operators, the basis change takes only O(d^3) as well. The workhorse of the algorithm is a new linear-time method for finding the closest probability distribution (in Euclidean distance) to a set of real numbers summing to one.
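The "closest probability distribution" step can be sketched directly from the description: sort the candidate values, zero out those that would remain negative even after sharing the accumulated deficit, and spread that deficit evenly over the rest. A small Python sketch of the projection, applied here to plain numbers rather than eigenvalues of a density matrix:

```python
def closest_distribution(mu):
    """Euclidean projection of real numbers mu (summing to one) onto the
    probability simplex: zero the smallest entries that would stay
    negative and spread the accumulated deficit evenly over the rest."""
    d = len(mu)
    order = sorted(range(d), key=lambda i: mu[i])   # indices, ascending
    out = list(mu)
    acc, k = 0.0, 0
    # zero entries that remain negative even after sharing the deficit
    while k < d and out[order[k]] + acc / (d - k) < 0.0:
        acc += out[order[k]]
        out[order[k]] = 0.0
        k += 1
    shift = acc / (d - k) if k < d else 0.0
    for i in order[k:]:
        out[i] += shift
    return out
```

In the state-reconstruction setting, the same step runs on the eigenvalues of the candidate matrix μ while the eigenvectors are left untouched.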
Simulation of maximum light use efficiency for some typical vegetation types in China
Institute of Scientific and Technical Information of China (English)
Anonymous
2006-01-01
Maximum light use efficiency (εmax) is a key parameter for the estimation of net primary productivity (NPP) derived from remote sensing data. There are still many divergences about its value for each vegetation type. The εmax for some typical vegetation types in China is simulated using a modified least squares function based on NOAA/AVHRR remote sensing data and field-observed NPP data. The vegetation classification accuracy is introduced into the process. A sensitivity analysis of εmax to vegetation classification accuracy is also conducted. The results show that the simulated values of εmax are greater than the value used in the CASA model, and less than the values simulated with the BIOME-BGC model. This is consistent with some other studies. The relative error of εmax resulting from classification accuracy is −5.5% to 8.0%. This indicates that the simulated values of εmax are reliable and stable.
Maximum Efficiency of Thermoelectric Heat Conversion in High-Temperature Power Devices
Directory of Open Access Journals (Sweden)
V. I. Khvesyuk
2016-01-01
Full Text Available Modern trends in aircraft engineering are moving toward fifth-generation vehicles, whose features motivate the use of new high-performance onboard power supply systems. The operating temperature of the outer walls of engines is 800–1000 K, which corresponds to a radiation heat flux of 10 kW/m2. The thermal energy, including radiation from the engine wall, may potentially be converted into electricity. The main objective of this paper is to analyze whether high-efficiency thermoelectric conversion of this heat into electricity is possible. The paper considers issues such as working processes, choice of materials, and optimization of thermoelectric conversion. It presents the analysis results of operating conditions of a thermoelectric generator (TEG) used in advanced high-temperature power devices. A high-temperature heat source is a favorable factor for thermoelectric conversion. It is shown that for existing thermoelectric materials a theoretical conversion efficiency can reach the level of 15–20% at temperatures up to 1500 K and available values of the Ioffe parameter ZT = 2–3 (Z is the figure of merit, T is the temperature). To ensure the temperature regime and high-efficiency thermoelectric conversion simultaneously, it is necessary to have a certain match between TEG power, the temperatures of the hot and cold surfaces, and the heat transfer coefficient of the cooling system. The paper discusses a concept of a radiation absorber on the TEG hot surface. The analysis has demonstrated a number of potentialities for highly efficient conversion through using the TEG in high-temperature power devices. This work has been implemented with the support of the Ministry of Education and Science of the Russian Federation, project No. 1145 (the programme "Organization of Research Engineering Activities").
An efficient genetic algorithm for maximum coverage deployment in wireless sensor networks.
Yoon, Yourim; Kim, Yong-Hyuk
2013-10-01
Sensor networks have many applications, such as battlefield surveillance, environmental monitoring, and industrial diagnostics. Coverage is one of the most important performance metrics for sensor networks since it reflects how well a sensor field is monitored. In this paper, we introduce the maximum coverage deployment problem in wireless sensor networks and analyze the properties of the problem and its solution space. Random deployment is the simplest way to deploy sensor nodes but may cause unbalanced deployment; therefore, we need a more intelligent way of deploying sensors. We found that the phenotype space of the problem is, in a mathematical view, a quotient space of the genotype space. Based on this property, we propose an efficient genetic algorithm using a novel normalization method. A Monte Carlo method is adopted to design an efficient evaluation function, and its computation time is decreased without loss of solution quality using a method that starts from a small number of random samples and gradually increases the number for subsequent generations. The proposed genetic algorithm could be further improved by combining it with a well-designed local search. The performance of the proposed genetic algorithm is shown by a comparative experimental study. When compared with random deployment and existing methods, our genetic algorithm was not only about twice as fast, but also showed significant improvement in solution quality.
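The Monte Carlo evaluation function rests on a simple idea: sample random points in the field and count the fraction that fall inside at least one sensing disc. A minimal sketch under assumed disc-coverage geometry (not the paper's code):

```python
import random

def coverage_fraction(sensors, radius, width, height,
                      n_samples=20000, seed=0):
    """Monte Carlo estimate of the fraction of a width x height field
    covered by disc sensors; `sensors` is a list of (x, y) positions."""
    rng = random.Random(seed)
    r2 = radius * radius
    hits = 0
    for _ in range(n_samples):
        px, py = rng.uniform(0.0, width), rng.uniform(0.0, height)
        if any((px - sx) ** 2 + (py - sy) ** 2 <= r2 for sx, sy in sensors):
            hits += 1
    return hits / n_samples
```

Inside a genetic algorithm this serves as the fitness function; the paper's refinement of starting with few samples and adding more in later generations maps directly onto the n_samples parameter.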
Efficiency of working memory: Theoretical concept and practical application
Directory of Open Access Journals (Sweden)
Lalović Dejan
2008-01-01
Full Text Available Efficiency of working memory is a concept which connects the psychology of memory with different fields of cognitive, differential, and applied psychology. In this paper, the history of interest in assessing the capacity of short-term memory is presented in brief, as well as the different methods used nowadays to assess individual differences in the efficiency of working memory. What follows is a consideration of studies that indicate the existence of significant links between the efficiency of working memory and general intelligence, the ability of reasoning, personality variables, as well as some socio-psychological phenomena. Special emphasis is placed on the links between the efficiency of working memory and certain aspects of pedagogical practice: acquiring the skill of reading, learning arithmetic, and shedding light on the causes of general failure in learning at school. Finally, suggestions are provided that, in light of what is known about the development and limitations of working memory at school age, can be useful for teaching practice.
Near Theoretical Gigabit Link Efficiency for Distributed Data Acquisition Systems.
Abu-Nimeh, Faisal T; Choong, Woon-Seng
2017-03-01
Link efficiency, data integrity, and continuity for high-throughput and real-time systems are crucial. Most of these applications require specialized hardware and operating systems as well as extensive tuning in order to achieve high efficiency. Here, we present an implementation of gigabit Ethernet data streaming which can achieve 99.26% link efficiency with no packet loss. The design and implementation are built on OpenPET, an open-source data acquisition platform for nuclear medical imaging, where (a) a crate hosting multiple OpenPET detector boards uses a User Datagram Protocol over Internet Protocol (UDP/IP) Ethernet soft-core, capable of understanding PAUSE frames, to stream data out to a computer workstation; (b) the receiving computer uses Netmap to allow the processing software (i.e., user space), which is written in Python, to directly receive and manage the network card's ring buffers, bypassing the operating system kernel's networking stack; and (c) a multi-threaded application using synchronized queues is implemented in the processing software (Python) to free up the ring buffers as quickly as possible while preserving data integrity and flow continuity.
Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors
Langbein, John
2017-08-01
Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white noise and power-law noise with a 1/f^α spectrum, where f is frequency. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi: 10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet it is only an approximate solution for power-law indices >1.0 since it requires the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
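A minimal sketch of the filter-based construction of power-law noise alluded to above, using a Hosking-style fractional-differencing recursion; the filter form is an assumption for illustration, not the paper's actual implementation:

```python
import numpy as np

def powerlaw_filter(alpha, n):
    """Impulse response h such that h * white_noise has a 1/f^alpha spectrum."""
    h = np.zeros(n)
    h[0] = 1.0
    d = alpha / 2.0                       # fractional-differencing order
    for k in range(1, n):
        h[k] = h[k - 1] * (k - 1 + d) / k
    return h

n = 1000
h = powerlaw_filter(1.0, n)               # alpha = 1: flicker noise
rng = np.random.default_rng(0)
x = np.convolve(h, rng.standard_normal(n))[:n]   # power-law noise sample

# A combined white + power-law process can then be represented by a single
# time-domain filter, which keeps the data covariance matrix easy to build.
print(h[:4])   # first coefficients: 1, 0.5, 0.375, 0.3125
```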
International Nuclear Information System (INIS)
1993-07-01
This document provides an analysis of the potential impacts associated with the proposed action, which is continued operation of Naval Petroleum Reserve No. 1 (NPR-1) at the Maximum Efficient Rate (MER) as authorized by Public Law 94-258, the Naval Petroleum Reserves Production Act of 1976 (Act). The document also provides a similar analysis of alternatives to the proposed action, which also involve continued operations, but under lower development scenarios and lower rates of production. NPR-1 is a large oil and gas field jointly owned and operated by the federal government and Chevron U.S.A. Inc. (CUSA) pursuant to a Unit Plan Contract that became effective in 1944; the government's interest is approximately 78% and CUSA's interest is approximately 22%. The government's interest is under the jurisdiction of the United States Department of Energy (DOE). The facility is approximately 47,409 acres (74 square miles), and it is located in Kern County, California, about 25 miles southwest of Bakersfield and 100 miles north of Los Angeles in the south central portion of the state. The environmental analysis presented herein is a supplement to the NPR-1 Final Environmental Impact Statement that was issued by DOE in 1979 (1979 EIS). As such, this document is a Supplemental Environmental Impact Statement (SEIS).
Directory of Open Access Journals (Sweden)
Jacob N. Chung
2014-01-01
Full Text Available Two concept systems that are based on the thermochemical process of high-temperature steam gasification of lignocellulosic biomass and municipal solid waste are introduced. The primary objectives of the concept systems are (1) to develop the best scientific, engineering, and technology solutions for converting lignocellulosic biomass, as well as agricultural, forest and municipal waste to clean energy (pure hydrogen fuel), and (2) to minimize water consumption and detrimental impacts of energy production on the environment (air pollution and global warming). The production of superheated steam is by hydrogen combustion using recycled hydrogen produced in the first concept system, while in the second concept system concentrated solar energy is used for the steam production. A membrane reactor that performs the hydrogen separation and water gas shift reaction is involved in both systems for producing more pure hydrogen and CO2 sequestration. Based on obtaining the maximum hydrogen production rate, the hydrogen recycled ratio is around 20% for the hydrogen combustion steam heating system. Combined with pure hydrogen production, both high-temperature steam gasification systems potentially possess more than 80% first-law overall system thermodynamic efficiencies.
Energy Technology Data Exchange (ETDEWEB)
Chung, J. N., E-mail: jnchung@ufl.edu [Department of Mechanical and Aerospace Engineering, University of Florida, Gainesville, FL (United States)
2014-01-02
Two concept systems that are based on the thermochemical process of high temperature steam gasification of lignocellulosic biomass and municipal solid waste are introduced. The primary objectives of the concept systems are (1) to develop the best scientific, engineering, and technology solutions for converting lignocellulosic biomass, as well as agricultural, forest, and municipal waste to clean energy (pure hydrogen fuel), and (2) to minimize water consumption and detrimental impacts of energy production on the environment (air pollution and global warming). The production of superheated steam is by hydrogen combustion using recycled hydrogen produced in the first concept system while in the second concept system concentrated solar energy is used for the steam production. A membrane reactor that performs the hydrogen separation and water gas shift reaction is involved in both systems for producing more pure hydrogen and CO{sub 2} sequestration. Based on obtaining the maximum hydrogen production rate the hydrogen recycled ratio is around 20% for the hydrogen combustion steam heating system. Combined with pure hydrogen production, both high temperature steam gasification systems potentially possess more than 80% in first law overall system thermodynamic efficiencies.
Theoretical Grounds of Formation of the Efficient State Economic Policy
Directory of Open Access Journals (Sweden)
Semyrak Oksana S.
2013-12-01
Full Text Available The article conducts a historical and analytical review of the views held by various schools of economic thought on the role of state administration in the sphere of economic relations, in order to identify traditional and newer essential reference points of the modern theory of state regulation of the economy. It identifies specific features of modern models of economic policy that envisage the setting of goals by the state, the selection of relevant efficient tools, and a mathematical function that would describe the dependencies between them. It considers the concept of the basic theory of economic policy of Jan Tinbergen, its advantages and shortcomings. It studies prerequisites of and analyses the modern concept of the role of the state in the economy as a subject of the market. It considers the modern concept of economic socio-dynamics, pursuant to which the main task of the state is maximisation of social usefulness and permanent improvement of the Pareto optimum. It considers the “socio-dynamic multiplicator” notion, which envisages the availability of three main components: a social effect from the activity of the state, the yearning of individuals to create something new, and the availability of formal and informal institutions that unite the first two elements.
Jiamjitrpanich, Waraporn; Parkpian, Preeda; Polprasert, Chongrak; Laurent, François; Kosanlavit, Rachain
2012-01-01
This study was designed to compare initial methods for phytoremediation involving germination and transplantation. The study also aimed to determine the tolerance efficiency of Panicum maximum (purple guinea grass) and Helianthus annuus (sunflower) in TNT-contaminated soil and nZVI-contaminated soil. It was found that the transplantation of Panicum maximum and Helianthus annuus was more suitable than germination as the initial method for the nano-phytoremediation potting test. The study also showed that Panicum maximum was more tolerant than Helianthus annuus in TNT- and nZVI-contaminated soil. Therefore, Panicum maximum in the transplantation method should be selected as a hyperaccumulator plant for nano-phytoremediation potting tests. The maximum tolerated dosage for Panicum maximum was 320 mg/kg in TNT-contaminated soil and 1000 mg/kg in nZVI-contaminated soil in the transplantation method.
Directory of Open Access Journals (Sweden)
Adzhavenko Maryna M.
2014-02-01
Full Text Available Modern economic conditions pose a new problem for scientists, namely: the capability of an enterprise to survive in an unfavourable external environment. This problem is a systemic and complex one, and its solution lies within the plane of management of capital, personnel, development, efficiency, etc. The article notes that efficiency is a cornerstone of modern economic science, which justifies studies of the gnoseological essence of the efficiency category. The main goal of the article lies in the study of the scientific and theoretical grounds of formation of enterprise development efficiency under modern conditions of changing internal and external environments. The other goals of the article are identification of the essence of the development efficiency category and deepening the theoretical foundation of assessment of efficiency of enterprise development in modern economic science. The article conducts an ontological analysis of the essence and goals of the enterprise development efficiency notion, studies the evolution of scientific approaches, and systemises theoretical provisions of the specified category and their assessment in economic science. As a result of the study, the article identifies a new vector of theoretical grounds and a dominating logic for forming the methodology of assessment of efficiency of enterprises under conditions of innovation development of the state, namely: it underlines the principles of systemicity, complexity, self-organisation, and the significance of human capital as an important factor in increasing efficiency and development. Development of methodological grounds for assessment of efficiency of enterprise innovation development is a prospective direction for further studies.
Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates
International Nuclear Information System (INIS)
Laurence, T.; Chromy, B.
2010-01-01
Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the Levenberg-Marquardt algorithm commonly used for nonlinear least squares minimization for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the non-linear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, and is simple to implement, quick and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Non-linear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, it is not easy to satisfy this criterion in practice, which requires a large number of events. It has been well known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper provides extensive characterization of these biases in exponential fitting. The more appropriate measure based on the maximum likelihood estimator (MLE
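As a rough illustration of the Poisson MLE idea, one can minimize the Poisson negative log-likelihood of a decaying histogram directly with a generic optimizer; the paper's contribution is doing this efficiently inside Levenberg-Marquardt, which this sketch does not reproduce. The constant log(counts!) term is dropped since it does not affect the minimizer:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
t = np.arange(50)
counts = rng.poisson(100.0 * np.exp(-t / 10.0))   # simulated decay histogram

def poisson_nll(params):
    """Poisson negative log-likelihood, up to a data-only constant."""
    amp, tau = params
    if amp <= 0 or tau <= 0:
        return np.inf
    model = amp * np.exp(-t / tau)
    return float(np.sum(model - counts * np.log(model)))

fit = minimize(poisson_nll, x0=[50.0, 5.0], method="Nelder-Mead")
amp_hat, tau_hat = fit.x
print(round(amp_hat, 1), round(tau_hat, 1))   # estimates near the true 100 and 10
```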
Theoretical analysis of the switching efficiency of a grating-based laser beam modulator
International Nuclear Information System (INIS)
Ramachandran, V.
1983-03-01
A theoretical interpretation of the digital beam deflection efficiency of an electro-optic modulator is described. Calculated switching voltages are in good agreement with the experimentally observed values. The computed percentage efficiencies for three successive positions are 57, 48 and 43, respectively. (author)
Shin, Hyun; Lee, Sunghun; Kim, Kwon-Hyeon; Moon, Chang-Ki; Yoo, Seung-Jun; Lee, Jeong-Hwan; Kim, Jang-Joo
2014-07-16
A high-efficiency blue-emitting organic light-emitting diode (OLED) approaching theoretical efficiency using an exciplex-forming co-host composed of N,N'-dicarbazolyl-3,5-benzene (mCP) and bis-4,6-(3,5-di-3-pyridylphenyl)- 2-methylpyrimidine (B3PYMPM) is fabricated. Iridium(III)bis[(4,6-difluorophenyl)- pyridinato-N,C2']picolinate (FIrpic) is used as the emitter, which turns out to have a preferred horizontal dipole orientation in the emitting layer. The OLED shows a maximum external quantum efficiency of 29.5% (a maximum current efficiency of 62.2 cd A(-1) ), which is in perfect agreement with the theoretical prediction. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
DEFF Research Database (Denmark)
Li, Yonghui; Wu, Qiuwei; Zhu, Haiyu
2015-01-01
Based on the benchmark solid oxide fuel cell (SOFC) dynamic model for power system studies and the analysis of the SOFC operating conditions, the nonlinear programming (NLP) optimization method was used to determine the maximum electrical efficiency of the grid-connected SOFC subject to the constraints of fuel utilization factor, stack temperature and output active power. With the optimal operating conditions of the SOFC for the maximum efficiency operation obtained at different active power output levels, a hierarchical load tracking control scheme for the grid-connected SOFC was proposed to realize the maximum electrical efficiency operation with the stack temperature bounded. The hierarchical control scheme consists of a fast active power control and a slower stack temperature control. The active power control was developed by using a decentralized control method. The efficiency of the proposed hierarchical control scheme was demonstrated by case studies using the benchmark SOFC dynamic model.
Bergboer, N.H.; Verdult, V.; Verhaegen, M.H.G.
2002-01-01
We present a numerically efficient implementation of the nonlinear least squares and maximum likelihood identification of multivariable linear time-invariant (LTI) state-space models. This implementation is based on a local parameterization of the system and a gradient search in the resulting
DEFF Research Database (Denmark)
Larsen, Ulrik; Pierobon, Leonardo; Wronski, Jorrit
2014-01-01
Much attention is focused on increasing the energy efficiency to decrease fuel costs and CO2 emissions throughout industrial sectors. The ORC (organic Rankine cycle) is a relatively simple but efficient process that can be used for this purpose by converting low and medium temperature waste heat ...
A theoretical model for prediction of deposition efficiency in cold spraying
International Nuclear Information System (INIS)
Li Changjiu; Li Wenya; Wang Yuyue; Yang Guanjun; Fukanuma, H.
2005-01-01
The deposition behavior of a spray particle stream with a particle size distribution was theoretically examined for cold spraying in terms of deposition efficiency as a function of particle parameters and spray angle. A theoretical relation was established between the deposition efficiency and the spray angle. Experiments were conducted by measuring deposition efficiency at different driving gas conditions and different spray angles using gas-atomized copper powder. It was found that the theoretically estimated results agreed reasonably well with the experimental ones. Based on the theoretical model and experimental results, it was revealed that the distribution of particle velocity resulting from the particle size distribution significantly influences the deposition efficiency in cold spraying. It was necessary for the majority of particles to achieve a velocity higher than the critical velocity in order to improve the deposition efficiency. The normal component of particle velocity contributed to the deposition of the particle under off-normal spray conditions. The deposition efficiency of sprayed particles decreased owing to the decrease of the normal velocity component as spraying was performed at an off-normal angle.
Search for the maximum efficiency of a ribbed-surfaces device, providing a tight seal
International Nuclear Information System (INIS)
Boutin, Jeanne.
1977-04-01
The purpose of this experiment was to determine the geometrical characteristics of ribbed surfaces used to equip devices in translation or slow rotation motion that must form an acceptable seal between slightly viscous fluids. It systematically studies the pressure loss coefficient lambda as a function of the different parameters setting the form of the ribs and their relative position on the opposite sides. It shows that passages with two ribbed surfaces lead to far better results than those with only one, the maximum value of lambda, equal to 0.5, being obtained with the ratios pitch/clearance = 5 and depth of groove/clearance = 1.2, and with the teeth face to face on the two opposite ribbed surfaces. With certain shapes, an alternate position of the ribs can approach the maximum of lambda, yet remain lower than 0.5 [fr
DEFF Research Database (Denmark)
Boiroux, Dimitri; Juhl, Rune; Madsen, Henrik
2016-01-01
This paper addresses maximum likelihood parameter estimation of continuous-time nonlinear systems with discrete-time measurements. We derive an efficient algorithm for the computation of the log-likelihood function and its gradient, which can be used in gradient-based optimization algorithms. This algorithm uses UD decomposition of symmetric matrices and the array algorithm for covariance update and gradient computation. We test our algorithm on the Lotka-Volterra equations. Compared to the maximum likelihood estimation based on finite difference gradient computation, we get a significant speedup.
International Nuclear Information System (INIS)
Bizon, Nicu
2014-01-01
Highlights: • The Maximum Efficiency Point (MEP) is tracked based on air flow rate. • The proposed Extremum Seeking (ES) control assures high performance. • About 10 kW/s search speed and 99.99% stationary accuracy can be obtained. • The energy efficiency increases by 3–12%, according to the power losses. • The control strategy is robust, based on the proposed self-optimizing ES scheme. - Abstract: An advanced control of the air compressor for the Proton Exchange Membrane Fuel Cell (PEMFC) system is proposed in this paper based on an Extremum Seeking (ES) control scheme. The FC net power mainly depends on the air and hydrogen flow rates and pressures, and on heat and water management. This paper proposes to compute the optimal value of the air flow rate based on the advanced ES control scheme in order to maximize the FC net power. In this way, the Maximum Efficiency Point (MEP) will be tracked in real time, with about 10 kW/s search speed and a stationary accuracy of 99.99%. Thus, energy efficiency will be close to the maximum value that can be obtained for a given PEMFC stack and compressor group under dynamic load. It is shown that MEP tracking allows an increase of the FC net power by 3–12%, depending on the percentage of the FC power supplied to the compressor and the level of the load power. Simulations show that the performance figures mentioned above are achieved.
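The core ES idea can be sketched as a toy perturb-demodulate-integrate loop on a made-up objective; the plant, gains and filter below are illustrative, not the paper's PEMFC model:

```python
import math

def plant(u):
    """Toy efficiency curve with its maximum at u = 2."""
    return 5.0 - (u - 2.0) ** 2

u = 0.0                                   # initial operating point
dt, a, omega, k = 0.01, 0.1, 10.0, 2.0    # step, dither amplitude/frequency, gain
j_lp = plant(u)                           # slow low-pass estimate of the objective
for i in range(20000):
    t = i * dt
    j = plant(u + a * math.sin(omega * t))    # perturbed measurement
    j_hp = j - j_lp                           # crude high-pass: remove DC part
    j_lp += 0.01 * (j - j_lp)
    u += dt * k * j_hp * math.sin(omega * t)  # demodulate and integrate
print(round(u, 2))   # settles near the optimum u = 2
```

Correlating the measured objective with the dither yields a gradient estimate, so the loop climbs toward the extremum without an explicit plant model.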
Directory of Open Access Journals (Sweden)
Yonghui Li
2015-03-01
Full Text Available Based on the benchmark solid oxide fuel cell (SOFC dynamic model for power system studies and the analysis of the SOFC operating conditions, the nonlinear programming (NLP optimization method was used to determine the maximum electrical efficiency of the grid-connected SOFC subject to the constraints of fuel utilization factor, stack temperature and output active power. The optimal operating conditions of the grid-connected SOFC were obtained by solving the NLP problem considering the power consumed by the air compressor. With the optimal operating conditions of the SOFC for the maximum efficiency operation obtained at different active power output levels, a hierarchical load tracking control scheme for the grid-connected SOFC was proposed to realize the maximum electrical efficiency operation with the stack temperature bounded. The hierarchical control scheme consists of a fast active power control and a slower stack temperature control. The active power control was developed by using a decentralized control method. The efficiency of the proposed hierarchical control scheme was demonstrated by case studies using the benchmark SOFC dynamic model.
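The NLP formulation can be caricatured as a small bounded maximization of net efficiency with compressor power subtracted from stack power; all model functions and numbers below are illustrative placeholders, not the benchmark SOFC model:

```python
import numpy as np
from scipy.optimize import minimize

LHV = 1000.0  # fuel energy content per unit flow (arbitrary units)

def net_efficiency(x):
    """Placeholder net electrical efficiency over fuel and air flow."""
    fuel, air = x
    stack_power = 0.6 * LHV * fuel * (1.0 - 0.3 * np.exp(-air / fuel))
    compressor_power = 5.0 * air ** 1.5       # parasitic air compressor load
    return (stack_power - compressor_power) / (LHV * fuel)

x0 = np.array([1.0, 2.0])
res = minimize(lambda x: -net_efficiency(x), x0,
               bounds=[(0.5, 2.0), (0.5, 10.0)])   # operating-range constraints
fuel_opt, air_opt = res.x
print(round(net_efficiency(res.x), 3))
```

Even in this toy model the optimum balances higher air flow (better stack utilization) against the compressor's parasitic consumption, which is the trade-off the abstract describes.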
Dutta, Rohan; Ghosh, Parthasarathi; Chowdhury, Kanchan
2017-12-01
The diverse power generation sector requires energy storage due to the penetration of variable renewable energy sources and the use of CO2 capture plants with fossil-fuel-based power plants. Cryogenic energy storage, being a large-scale, decoupled system capable of producing power in the range of MWs, is one of the options. The drawback of these systems is low turnaround efficiency, because the liquefaction processes are highly energy-intensive. In this paper, the scopes for improving the turnaround efficiency of such a plant based on liquid nitrogen were identified and some of them were addressed. A method using multiple stages of reheat and expansion was proposed, improving the turnaround efficiency from 22% to 47% using four such stages in the cycle. The novelty here is the application of reheating in a cryogenic system and the utilization of waste heat for that purpose. Based on the study, process conditions for a laboratory-scale setup were determined and are presented here.
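The benefit of staged reheat can be illustrated with a back-of-envelope ideal-gas estimate: expanding nitrogen over a fixed overall pressure ratio in n isentropic stages, reheating to the same temperature before each stage, multiplies the specific work extracted. The property values are illustrative, not the paper's process model:

```python
gamma, cp = 1.4, 1040.0        # nitrogen: heat-capacity ratio, cp [J/(kg K)]
T_hot, p_ratio = 300.0, 100.0  # reheat temperature [K], overall pressure ratio

def specific_work(n_stages):
    """Total isentropic turbine work with reheat to T_hot before each stage."""
    r = p_ratio ** (1.0 / n_stages)               # pressure ratio per stage
    per_stage = cp * T_hot * (1.0 - r ** (-(gamma - 1.0) / gamma))
    return n_stages * per_stage

for n in (1, 2, 4):
    print(n, round(specific_work(n) / 1000.0, 1), "kJ/kg")
```

The work output rises monotonically with the number of stages, which is why adding reheat stages (fed by waste heat) lifts the turnaround efficiency.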
INVESTIGATION OF VEHICLE WHEEL ROLLING WITH MAXIMUM EFFICIENCY IN THE BRAKE MODE
Directory of Open Access Journals (Sweden)
D. Leontev
2011-01-01
Full Text Available Up-to-date vehicles are equipped with various systems of automatic braking effort control, the calculation of whose parameters does not, as a rule, have a rational solution. In order to increase the working efficiency of such systems it is necessary to have data concerning the impact of various operational factors on the processes occurring during braking of the object of adjustment (the vehicle wheel). The availability of data concerning the impact of operational factors allows decreasing the geometrical parameters of the adjustment devices (modulators) and maintaining their efficient operation under the various exploitation conditions of vehicle motion.
Maximum efficiency of wind turbine rotors using Joukowsky and Betz approaches
DEFF Research Database (Denmark)
Okulov, Valery; Sørensen, Jens Nørkær
2010-01-01
On the basis of the concepts outlined by Joukowsky nearly a century ago, an analytical aerodynamic optimization model is developed for rotors with a finite number of blades and constant circulation distribution. In the paper, we show the basics of the new model and compare its efficiency with res...
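For reference, the classical actuator-disc benchmark against which such rotor models are measured is the Betz limit: with axial induction factor a, the power coefficient is Cp(a) = 4a(1-a)², maximized at a = 1/3 to give Cp = 16/27:

```python
def cp(a):
    """Power coefficient of an ideal actuator disc at axial induction a."""
    return 4.0 * a * (1.0 - a) ** 2

# Coarse grid search over the physically meaningful range 0 <= a < 0.5.
best_a = max((a / 1000.0 for a in range(500)), key=cp)
print(round(best_a, 3), round(cp(best_a), 4))   # 0.333 0.5926
```

Finite blade number and tip losses push real rotors below this bound, which is what the Joukowsky-based model above quantifies.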
Determination of the Maximum Aerodynamic Efficiency of Wind Turbine Rotors with Winglets
International Nuclear Information System (INIS)
Gaunaa, Mac; Johansen, Jeppe
2007-01-01
The present work contains theoretical considerations and computational results on the nature of using winglets on wind turbines. The theoretical results presented show that the power augmentation obtainable with winglets is due to a reduction of tip-effects, and is not, as believed up to now, caused by the downwind vorticity shift due to downwind winglets. The numerical work includes optimization of the power coefficient for a given tip speed ratio and geometry of the span using a newly developed free wake lifting line code, which takes into account also viscous effects and self induced forces. Validation of the new code with CFD results for a rotor without winglets showed very good agreement. Results from the new code with winglets indicate that downwind winglets are superior to upwind ones with respect to optimization of Cp, and that the increase in power production is less than what may be obtained by a simple extension of the wing in the radial direction. The computations also show that shorter downwind winglets (>2%) come close to the increase in Cp obtained by a radial extension of the wing. Lastly, the results from the code are used to design a rotor with a 2% downwind winglet, which is computed using the Navier-Stokes solver EllipSys3D. These computations show that further work is needed to validate the FWLL code for cases where the rotor is equipped with winglets
Determination of the Maximum Aerodynamic Efficiency of Wind Turbine Rotors with Winglets
Energy Technology Data Exchange (ETDEWEB)
Gaunaa, Mac; Johansen, Jeppe [Senior Scientists, Risoe National Laboratory, Roskilde, DK-4000 (Denmark)
2007-07-15
The present work contains theoretical considerations and computational results on the nature of using winglets on wind turbines. The theoretical results presented show that the power augmentation obtainable with winglets is due to a reduction of tip-effects, and is not, as believed up to now, caused by the downwind vorticity shift due to downwind winglets. The numerical work includes optimization of the power coefficient for a given tip speed ratio and geometry of the span using a newly developed free wake lifting line code, which takes into account also viscous effects and self induced forces. Validation of the new code with CFD results for a rotor without winglets showed very good agreement. Results from the new code with winglets indicate that downwind winglets are superior to upwind ones with respect to optimization of Cp, and that the increase in power production is less than what may be obtained by a simple extension of the wing in the radial direction. The computations also show that shorter downwind winglets (>2%) come close to the increase in Cp obtained by a radial extension of the wing. Lastly, the results from the code are used to design a rotor with a 2% downwind winglet, which is computed using the Navier-Stokes solver EllipSys3D. These computations show that further work is needed to validate the FWLL code for cases where the rotor is equipped with winglets.
de Janvry, Alain; Sadoulet, Elisabeth
2006-01-01
Conditional cash transfer programs are now used extensively to encourage poor parents to increase investments in their children's human capital. These programs can be large and expensive, motivating a quest for greater efficiency through increased impact of the programs' imposed conditions on human capital formation. This requires designing the programs' targeting and calibration rules spe...
International Nuclear Information System (INIS)
Han, In-Su; Park, Sang-Kyun; Chung, Chang-Bock
2016-01-01
Highlights: • A proton exchange membrane fuel cell system is operationally optimized. • A constrained optimization problem is formulated to maximize fuel cell efficiency. • Empirical and semi-empirical models for most system components are developed. • Sensitivity analysis is performed to elucidate the effects of major operating variables. • The optimization results are verified by comparison with actual operation data. - Abstract: This paper presents an operation optimization method and demonstrates its application to a proton exchange membrane fuel cell system. A constrained optimization problem was formulated to maximize the efficiency of a fuel cell system by incorporating practical models derived from actual operations of the system. Empirical and semi-empirical models for most of the system components were developed based on artificial neural networks and semi-empirical equations. Prior to system optimizations, the developed models were validated by comparing simulation results with the measured ones. Moreover, sensitivity analyses were performed to elucidate the effects of major operating variables on the system efficiency under practical operating constraints. Then, the optimal operating conditions were sought at various system power loads. The optimization results revealed that the efficiency gaps between the worst and best operation conditions of the system could reach 1.2–5.5% depending on the power output range. To verify the optimization results, the optimal operating conditions were applied to the fuel cell system, and the measured results were compared with the expected optimal values. The discrepancies between the measured and expected values were found to be trivial, indicating that the proposed operation optimization method was quite successful for a substantial increase in the efficiency of the fuel cell system.
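The sensitivity-analysis step can be sketched with central finite differences on a placeholder efficiency model; the model, variables and coefficients below are purely illustrative, not the paper's neural-network models:

```python
import numpy as np

def efficiency(x):
    """Illustrative system-efficiency surrogate (not the paper's model)."""
    stoich_air, temperature, pressure = x
    return (0.50
            - 0.02 * (stoich_air - 2.0) ** 2   # quadratic optimum in air stoich
            + 0.001 * (temperature - 65.0)     # weak linear temperature effect
            + 0.01 * np.log(pressure))         # diminishing returns in pressure

x0 = np.array([2.0, 65.0, 1.5])                # nominal operating point
names = ["air stoichiometry", "stack temperature", "operating pressure"]
sens = {}
for i, name in enumerate(names):
    dx = np.zeros(3)
    dx[i] = 0.01 * x0[i]                       # 1% perturbation
    sens[name] = (efficiency(x0 + dx) - efficiency(x0 - dx)) / (2 * dx[i])
    print(f"{name}: d(eta)/dx ~ {sens[name]:.4f}")
```

Ranking the operating variables by such sensitivities is what singles out the ones worth optimizing under practical constraints.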
EFFICIENCY OF ISO 9001 IN PORTUGAL: A QUALITATIVE STUDY FROM A HOLISTIC THEORETICAL PERSPECTIVE
Directory of Open Access Journals (Sweden)
Alcina Dias
2013-03-01
Full Text Available The purpose of this paper is to analyse the efficiency of ISO 9001 from a holistic theoretical approach in which the Contingency theory, the Institutional theory and the Resource-Based View are integrated. The study was carried out in companies from different sectors of activity in Portugal, based on a qualitative methodology (interviews). The fact that the interviews were undertaken under an ISO 9001 structure made it easier for companies to grasp the issues under investigation. An ISO 9001 characterisation was carried out using a theoretical framework approach, and the findings point out efficiency gains and reveal that the absence of ISO 9001 would work as a competitive disadvantage. The contribution of this research aims to reinforce the state of the art regarding the theoretical scope of analysis of these issues, enriched by the case study achievement.
Quantum Coherent Three-Terminal Thermoelectrics: Maximum Efficiency at Given Power Output
Directory of Open Access Journals (Sweden)
Robert S. Whitney
2016-05-01
Full Text Available This work considers the nonlinear scattering theory for three-terminal thermoelectric devices used for power generation or refrigeration. Such systems are quantum phase-coherent versions of a thermocouple, and the theory applies to systems in which interactions can be treated at a mean-field level. It considers an arbitrary three-terminal system in any external magnetic field, including systems with broken time-reversal symmetry, such as chiral thermoelectrics, as well as systems in which the magnetic field plays no role. It is shown that the upper bound on efficiency at given power output is of quantum origin and is stricter than Carnot’s bound. The bound is exactly the same as previously found for two-terminal devices and can be achieved by three-terminal systems with or without broken time-reversal symmetry, i.e., chiral and non-chiral thermoelectrics.
Directory of Open Access Journals (Sweden)
Hongmin Meng
2017-07-01
Full Text Available In wind turbine control, maximum power point tracking (MPPT) control is the main control mode for partial-load regimes. Improving the efficiency of energy conversion and smoothing the output power are both important control objectives in the partial-load regime. However, on the one hand, low power fluctuation signifies inefficiency of energy conversion. On the other hand, enhancing efficiency may increase output power fluctuation as well. Thus the two objectives are contradictory and difficult to balance. This paper proposes a flexible MPPT control framework to improve the performance of both conversion efficiency and power smoothing by adaptively compensating the torque reference value. The compensation is determined by a proposed model predictive control (MPC) method with dynamic weights in the cost function, which improves control performance. The computational burden of the MPC solver is reduced by transforming the cost function representation. Theoretical analysis proved good stability and robustness. Simulation results showed that the proposed method not only kept efficiency at a high level, but also reduced power fluctuations as much as possible. Therefore, the proposed method can improve wind farm profits and power grid reliability.
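For context, the baseline below-rated MPPT law that such MPC schemes refine is the standard optimal torque control T = k_opt·ω², derived from P = ½ρπR²v³Cp and λ = ωR/v; the turbine parameters here are illustrative:

```python
import math

rho, R = 1.225, 40.0             # air density [kg/m^3], rotor radius [m]
cp_max, lambda_opt = 0.48, 8.0   # peak power coefficient, optimal tip-speed ratio

# Substituting v = omega*R/lambda into the power equation gives
# P = 0.5*rho*pi*R^5*Cp_max/lambda_opt^3 * omega^3, hence T = k_opt*omega^2.
k_opt = 0.5 * rho * math.pi * R ** 5 * cp_max / lambda_opt ** 3

omega = 1.6                      # measured rotor speed [rad/s]
torque_ref = k_opt * omega ** 2  # torque command in the partial-load regime
print(round(k_opt), round(torque_ref))
```

The MPC scheme above effectively adds a time-varying compensation to this torque reference to trade off tracking efficiency against power smoothing.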
Optimizing WiMAX: Mitigating Co-Channel Interference for Maximum Spectral Efficiency
International Nuclear Information System (INIS)
Ansari, A.Q.; Memon, A.L.; Qureshi, I.A.
2016-01-01
The efficient use of the radio spectrum is one of the most important issues in wireless networks, because spectrum is limited and the wireless environment is prone to channel interference. To cope with this and make better use of the radio spectrum, wireless networks employ frequency reuse. The frequency reuse technique allows the same frequency band to be used in different cells of the same network, subject to the inter-cell distance and the resulting interference level. The WiMAX (Worldwide Interoperability for Microwave Access) PHY profile is designed to use an FRF (Frequency Reuse Factor) of one. An FRF of one improves spectral efficiency but also produces CCI (Co-Channel Interference) at cell boundaries. The interference level must be measured so that averaging or minimization techniques can keep it below an acceptable threshold in the wireless environment. In this paper, we analyze how effectively the impact of CCI can be mitigated by using the different subcarrier permutation types defined in the IEEE 802.16 standard. A simulation-based analysis is presented of the impact on CCI of using the same or different permutation bases in adjacent cells of a WiMAX network under varying load conditions. We further study the effect of the permutation base when frequency reuse is combined with cell sectoring for better utilization of the radio spectrum. (author)
Efficient Maximum Likelihood Estimation for Pedigree Data with the Sum-Product Algorithm.
Engelhardt, Alexander; Rieger, Anna; Tresch, Achim; Mansmann, Ulrich
2016-01-01
We analyze data sets consisting of pedigrees with age at onset of colorectal cancer (CRC) as phenotype. The occurrence of familial clusters of CRC suggests the existence of a latent, inheritable risk factor. We aimed to compute the probability of a family possessing this risk factor as well as the hazard rate increase for these risk factor carriers. Due to the inheritability of this risk factor, the estimation necessitates a costly marginalization of the likelihood. We propose an improved EM algorithm by applying factor graphs and the sum-product algorithm in the E-step. This reduces the computational complexity from exponential to linear in the number of family members. Our algorithm is as precise as a direct likelihood maximization in a simulation study and a real family study on CRC risk. For 250 simulated families of size 19 and 21, the runtime of our algorithm is faster by a factor of 4 and 29, respectively. On the largest family (23 members) in the real data, our algorithm is 6 times faster. We introduce a flexible and runtime-efficient tool for statistical inference in biomedical event data with latent variables that opens the door for advanced analyses of pedigree data. © 2017 S. Karger AG, Basel.
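The complexity reduction described above comes from factor-graph message passing, which replaces one exponential sum over all latent configurations with a chain of local sums. A minimal sketch of the idea, using a chain of binary variables with randomly chosen pairwise factors as a stand-in for the parent-child inheritance factors of a pedigree (the names and setup are illustrative, not the authors' model):

```python
import itertools
import numpy as np

# Pairwise factors on a chain x1 - x2 - ... - xn of binary variables
# (a stand-in for parent-child inheritance factors in a pedigree).
rng = np.random.default_rng(0)
n = 10
factors = [rng.uniform(0.1, 1.0, size=(2, 2)) for _ in range(n - 1)]

def partition_brute_force():
    """Sum the unnormalized joint over all 2**n configurations."""
    total = 0.0
    for xs in itertools.product((0, 1), repeat=n):
        p = 1.0
        for i, f in enumerate(factors):
            p *= f[xs[i], xs[i + 1]]
        total += p
    return total

def partition_sum_product():
    """Forward message passing: cost linear in n instead of exponential."""
    msg = np.ones(2)      # message flowing into x1
    for f in factors:
        msg = msg @ f     # marginalize out the previous variable locally
    return float(msg.sum())
```

The forward pass touches each factor exactly once, so the cost grows linearly with the number of variables, while the brute-force sum doubles with every variable added.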
Energy Technology Data Exchange (ETDEWEB)
Thompson, William L.; Lee, Danny C.
2000-11-01
Many anadromous salmonid stocks in the Pacific Northwest are at their lowest recorded levels, which has raised questions regarding their long-term persistence under current conditions. There are a number of factors, such as freshwater spawning and rearing habitat, that could potentially influence their numbers. Therefore, we used the latest advances in information-theoretic methods in a two-stage modeling process to investigate relationships between landscape-level habitat attributes and maximum recruitment of 25 index stocks of chinook salmon (Oncorhynchus tshawytscha) in the Columbia River basin. Our first-stage model selection results indicated that the Ricker-type stock-recruitment model with a constant Ricker a (i.e., recruits-per-spawner at low numbers of fish) across stocks was the only plausible one given these data, which contrasted with previous unpublished findings. Our second-stage results revealed that maximum recruitment of chinook salmon had a strongly negative relationship with the percentage of surrounding subwatersheds categorized as predominantly containing U.S. Forest Service and private moderate-high impact managed forest. That is, our model predicted that average maximum recruitment of chinook salmon would decrease by at least 247 fish for every 33% increase in surrounding subwatersheds categorized as predominantly containing U.S. Forest Service and privately managed forest. Conversely, mean annual air temperature had a positive relationship with salmon maximum recruitment, with an average increase of at least 179 fish for every 2 °C increase in mean annual air temperature.
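The Ricker stock-recruitment form favored by the first-stage model selection can be written R = a·S·exp(−b·S), where a is recruits-per-spawner at low abundance and 1/b sets the spawner level at which recruitment peaks. A small sketch with illustrative parameters (not the estimates from the paper):

```python
import math

def ricker_recruits(spawners, a, b):
    """Ricker stock-recruitment: R = a * S * exp(-b * S).
    'a' is recruits-per-spawner at low stock size; recruitment
    peaks at S = 1/b with maximum R = a / (b * e)."""
    return a * spawners * math.exp(-b * spawners)

# Hypothetical values: a constant Ricker a shared across stocks,
# with a stock-specific density-dependence parameter b.
a = 3.0
b = 0.001
peak_spawners = 1.0 / b
peak_recruits = ricker_recruits(peak_spawners, a, b)
```

The dome shape is the key qualitative feature: beyond S = 1/b, adding spawners reduces recruitment, which is why "maximum recruitment" is a well-defined quantity for each stock.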
Theoretical determination of the neutron detection efficiency of plastic track detectors. Pt. 1
International Nuclear Information System (INIS)
Pretzsch, G.
1982-01-01
A theoretical model to determine the neutron detection efficiency of organic solid state nuclear track detectors without external radiator is described. The model involves the following calculation steps: production of heavy charged particles within the detector volume, characterization of the charged particles by appropriate physical quantities, application of suitable registration criteria, formation of etch pits. The etch pits formed are described by means of a distribution function which is doubly differential in both diameter and depth of the etch pits. The distribution function serves as the input value for the calculation of the detection efficiency. The detection efficiency is defined as the measured effect per neutron fluence. Hence it depends on the evaluation technique considered. The calculation of the distribution function is carried out for cellulose triacetate. The determination of the concrete detection efficiency using the light microscope and light transmission measurements as the evaluation technique will be described in further publications. (orig.)
DEFF Research Database (Denmark)
Haller, Michel; Cruickshank, Chynthia; Streicher, Wolfgang
2009-01-01
This paper reviews different methods that have been proposed to characterize thermal stratification in energy storages from a theoretical point of view. Specifically, this paper focuses on the methods that can be used to determine the ability of a storage to promote and maintain stratification...... during charging, storing and discharging, and represent this ability with a single numerical value in terms of a stratification efficiency for a given experiment or under given boundary conditions. Existing methods for calculating stratification efficiencies have been applied to hypothetical storage...
Elsyad, Moustafa Abdou; Khairallah, Ahmed Samir
2017-06-01
This crossover study aimed to evaluate and compare chewing efficiency and maximum bite force (MBF) with resilient telescopic and bar attachment systems for implant overdentures in patients with atrophied mandibles. Ten participants with severely resorbed mandibles and persistent denture problems received new maxillary and mandibular conventional dentures (control, CD). After 3 months of adaptation, two implants were inserted in the canine region of the mandible. Using a quasi-random method, overdentures were connected to the implants with either bar overdenture (BOD) or resilient telescopic overdenture (TOD) attachment systems. Chewing efficiency in terms of unmixed fraction (UF) was measured using chewing gum (after 5, 10, 20, 30 and 50 strokes), and MBF was measured using a bite force transducer. Measurements were performed 3 months after use of each of the following prostheses: CD, BOD and TOD. Chewing efficiency and MBF increased significantly with BOD and TOD compared to CD. As the number of chewing cycles increased, the UF decreased. TOD recorded significantly higher chewing efficiency and MBF than BOD. Resilient telescopic attachments are associated with increased chewing efficiency and MBF compared with bar attachments when used to retain implant overdentures in patients with atrophied mandibles. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Manzoni, Stefano; Vico, Giulia; Katul, Gabriel; Palmroth, Sari; Jackson, Robert B; Porporato, Amilcare
2013-04-01
Soil and plant hydraulics constrain ecosystem productivity by setting physical limits to water transport and hence carbon uptake by leaves. While more negative xylem water potentials provide a larger driving force for water transport, they also cause cavitation that limits hydraulic conductivity. An optimum balance between driving force and cavitation occurs at intermediate water potentials, thus defining the maximum transpiration rate the xylem can sustain (denoted as E(max)). The presence of this maximum raises the question as to whether plants regulate transpiration through stomata to function near E(max). To address this question, we calculated E(max) across plant functional types and climates using a hydraulic model and a global database of plant hydraulic traits. The predicted E(max) compared well with measured peak transpiration across plant sizes and growth conditions (R = 0.86, P efficiency trade-off in plant xylem. Stomatal conductance allows maximum transpiration rates despite partial cavitation in the xylem thereby suggesting coordination between stomatal regulation and xylem hydraulic characteristics. © 2013 The Authors. New Phytologist © 2013 New Phytologist Trust.
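The balance described above can be made concrete with the standard supply-function formulation: the transpiration the xylem can sustain at a given leaf water potential is the integral of the hydraulic conductivity over the soil-to-leaf potential drop, with conductivity declining as cavitation accumulates. A numerical sketch under assumed parameter values (the Weibull vulnerability curve and all numbers below are illustrative, not the database traits used in the paper):

```python
import math

def k(psi, kmax=5.0, d=2.0, c=3.0):
    """Weibull-type vulnerability curve: hydraulic conductivity falls
    as xylem water potential psi (MPa, negative) becomes more negative.
    Parameter values are illustrative, not from the paper."""
    return kmax * math.exp(-((-psi / d) ** c))

def supply(psi_leaf, psi_soil=-0.1, steps=2000):
    """Transpiration sustainable at a given leaf water potential:
    E = integral of k(psi) from psi_leaf up to psi_soil (midpoint rule)."""
    h = (psi_soil - psi_leaf) / steps
    return sum(k(psi_leaf + (i + 0.5) * h) for i in range(steps)) * h

# E rises as psi_leaf becomes more negative, but saturates once
# cavitation has killed most of the conductivity: the plateau is E_max.
E = [supply(p) for p in (-1.0, -2.0, -4.0, -8.0)]
```

The saturation of E illustrates why operating near E_max requires stomatal regulation: pulling the leaf potential further down buys almost no extra water transport while deepening cavitation.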
Zhang, Yao; Xiao, Xiangming; Wolf, Sebastian; Wu, Jin; Wu, Xiaocui; Gioli, Beniamino; Wohlfahrt, Georg; Cescatti, Alessandro; van der Tol, Christiaan; Zhou, Sha; Gough, Christopher M.; Gentine, Pierre; Zhang, Yongguang; Steinbrecher, Rainer; Ardö, Jonas
2018-04-01
Light-use efficiency (LUE), which quantifies the plants' efficiency in utilizing solar radiation for photosynthetic carbon fixation, is an important factor for gross primary production estimation. Here we use satellite-based solar-induced chlorophyll fluorescence as a proxy for photosynthetically active radiation absorbed by chlorophyll (APARchl) and derive an estimation of the fraction of APARchl (fPARchl) from four remotely sensed vegetation indicators. By comparing maximum LUE estimated at different scales from 127 eddy flux sites, we found that the maximum daily LUE based on PAR absorption by canopy chlorophyll (ɛmaxchl), unlike other expressions of LUE, tends to converge across biome types. The photosynthetic seasonality in tropical forests can also be tracked by the change of fPARchl, suggesting the corresponding ɛmaxchl to have less seasonal variation. This spatio-temporal convergence of LUE derived from fPARchl can be used to build simple but robust gross primary production models and to better constrain process-based models.
Harne, Ryan L
2012-07-01
Conversion of ambient vibrational energy into electric power has been the impetus of much modern research. Traditional analysis has focused on the absolute electrical power output of the harvesting devices and on efficiency defined as the convertibility of an infinite resource of vibration excitation into power. This perspective does not extend well to resonant harvesters attached to resonant host structures, where the inertial influence of the harvester is more significant. Instead, this work pursues a fundamental understanding of the coupled dynamics of a main mass-spring-damper system to which an electromagnetic or piezoelectric mass-spring-damper is attached. The governing equations are derived, a metric of efficiency is presented, and an analysis is undertaken. It is found that electromagnetic energy harvesting efficiency and maximum power output are limited by the strength of the coupling, such that no split system resonances are induced for a given mass ratio. For piezoelectric harvesters, only the coupling strength and certain design requirements dictate the maximum power and efficiency achievable. Since the harvesting circuitry must "follow" the split resonances as piezoelectric harvesters become more massive, the optimum design of piezoelectric harvesters appears to be more involved than that of electromagnetic devices.
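The "split system resonances" at issue can be illustrated in the undamped mechanical limit: attaching a harvester tuned to the host frequency splits the single host resonance into two peaks whose separation grows with the mass ratio. A two-degree-of-freedom sketch with illustrative parameters (this is the generic vibration-absorber picture, not the paper's electromechanical model):

```python
import numpy as np

def split_resonances(mu, m1=1.0, k1=1.0):
    """Undamped natural frequencies of a main oscillator (m1, k1)
    carrying an attached mass mu*m1 tuned to the same uncoupled
    frequency. Returns (lower, upper) split frequencies."""
    m2 = mu * m1
    k2 = mu * k1                      # equal uncoupled natural frequencies
    M = np.diag([m1, m2])
    K = np.array([[k1 + k2, -k2], [-k2, k2]])
    w2 = np.sort(np.linalg.eigvals(np.linalg.solve(M, K)).real)
    return np.sqrt(w2)

lo_light, hi_light = split_resonances(0.02)   # light harvester: narrow split
lo_heavy, hi_heavy = split_resonances(0.2)    # heavy harvester: wide split
```

The widening split with mass ratio is what forces the harvesting circuitry to "follow" the moving resonances as the harvester becomes more massive.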
High-efficiency dielectric barrier Xe discharge lamp: theoretical and experimental investigations
International Nuclear Information System (INIS)
Beleznai, Sz; Mihajlik, G; Agod, A; Maros, I; Juhasz, R; Nemeth, Zs; Jakab, L; Richter, P
2006-01-01
A dielectric barrier Xe discharge lamp producing vacuum-ultraviolet radiation with high efficiency was investigated theoretically and experimentally. The cylindrical glass body of the lamp is equipped with thin strips of metal electrodes applied to diametrically opposite sides of the outer surface. We performed a simulation of discharge plasma properties based on one-dimensional fluid dynamics and also assessed the lamp characteristics experimentally. Simulation and experimental results are analysed and compared in terms of voltage and current characteristics, power input and discharge efficiency. Using the proposed lamp geometry and fast rise-time short square pulses of the driving voltage, an intrinsic discharge efficiency around 56% was predicted by simulation, and more than 60 lm W⁻¹ lamp efficacy (for radiation converted into visible green light by phosphor coating) was demonstrated experimentally.
DEFF Research Database (Denmark)
Rechenbach, Björn; Willatzen, Morten; Lassen, Benny
2016-01-01
The electromechanical efficiency of a loaded tubular dielectric elastomer actuator (DEA) is investigated theoretically. In previous studies, the external system, on which the DEA performs mechanical work, is implemented implicitly by prescribing the stroke of the DEA in a closed operation cycle....... Here, a more generic approach, modelling the external system by a frequency-dependent mechanical impedance which exerts a certain force on the DEA depending on its deformation, is chosen. It admits studying the dependence of the electromechanical efficiency of the DEA on the external system. A closed...... operation cycle is realized by exciting the DEA electrically by a sinusoidal voltage around a bias voltage. A detailed parametric study shows that the electromechanical efficiency is highly dependent on the frequency, amplitude, and bias of the excitation voltage and the mechanical impedance of the external...
Theoretical Bound of CRLB for Energy Efficient Technique of RSS-Based Factor Graph Geolocation
Kahar Aziz, Muhammad Reza; Heriansyah; Saputra, EfaMaydhona; Musa, Ardiansyah
2018-03-01
To support the growth of wireless geolocation as a key technology of the future, this paper derives a theoretical bound, i.e., the Cramer-Rao lower bound (CRLB), for the energy-efficient received signal strength (RSS)-based factor graph wireless geolocation technique. The theoretical bound derivation is crucially important for evaluating whether the energy-efficient RSS-based factor graph technique is effective, and it opens the way to further innovation on the technique. The CRLB is derived using the Fisher information matrix (FIM) of the main formula of the RSS-based factor graph geolocation technique, which relies on the Jacobian matrix. The simulation results show that the derived CRLB is the tightest bound, exhibiting the lowest root mean squared error (RMSE) curve compared with the RMSE curve of the RSS-based factor graph geolocation technique itself. Hence, the derived CRLB serves as the lower bound for the energy-efficient RSS-based factor graph wireless geolocation technique.
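The CRLB-from-FIM recipe sketched above can be shown on the simplest RSS setting: a log-distance path-loss model with Gaussian shadowing, where the FIM is a sum of outer products of the mean-RSS gradients at each anchor and the position-error bound is the trace of its inverse. All parameters below (path-loss exponent, shadowing standard deviation, anchor layout) are illustrative assumptions, not the paper's factor-graph formulation:

```python
import math
import numpy as np

def rss_crlb(target, anchors, n_pl=3.0, sigma_db=4.0):
    """Lower bound on 2-D position RMSE for RSS ranging under the
    log-distance path-loss model with i.i.d. Gaussian shadowing."""
    c = 10.0 * n_pl / math.log(10.0)      # dB-per-decade in natural log units
    J = np.zeros((2, 2))                  # Fisher information matrix
    for a in anchors:
        diff = target - a
        d2 = float(diff @ diff)
        g = -c * diff / d2                # gradient of the mean RSS w.r.t. position
        J += np.outer(g, g) / sigma_db**2
    return math.sqrt(np.trace(np.linalg.inv(J)))   # CRLB on RMSE

anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
bound = rss_crlb(np.array([50.0, 50.0]), anchors)
```

Because the FIM scales as 1/σ², the RMSE bound scales linearly with the shadowing standard deviation, which is a quick sanity check on any implementation.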
Game-Theoretic Rate-Distortion-Complexity Optimization of High Efficiency Video Coding
DEFF Research Database (Denmark)
Ukhanova, Ann; Milani, Simone; Forchhammer, Søren
2013-01-01
This paper presents an algorithm for rate-distortion-complexity optimization for the emerging High Efficiency Video Coding (HEVC) standard, whose high computational requirements urge the need for low-complexity optimization algorithms. Optimization approaches need to specify different complexity profiles in order to tailor the computational load to the different hardware and power-supply resources of devices. In this work, we focus on optimizing the quantization parameter and partition depth in HEVC via a game-theoretic approach. The proposed rate control strategy alone provides 0.2 dB improvement......
Xu, Jun; Dang, Chao; Kong, Fan
2017-10-01
This paper presents a new method for efficient structural reliability analysis. In this method, a rotational quasi-symmetric point method (RQ-SPM) is proposed for evaluating the fractional moments of the performance function. Then, the derivation of the performance function's probability density function (PDF) is carried out based on the maximum entropy method in which constraints are specified in terms of fractional moments. In this regard, the probability of failure can be obtained by a simple integral over the performance function's PDF. Six examples, including a finite element-based reliability analysis and a dynamic system with strong nonlinearity, are used to illustrate the efficacy of the proposed method. All the computed results are compared with those by Monte Carlo simulation (MCS). It is found that the proposed method can provide very accurate results with low computational effort.
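A fractional moment M_α = E[|Y|^α] generalizes the usual integer moments to non-integer α; a few well-chosen fractional moments can carry as much distributional shape information as many integer ones, which is why they make good maximum-entropy constraints. A quick sketch that checks a sampling estimate against the closed form for a lognormal variable (the lognormal test case and all values are illustrative, not the paper's RQ-SPM scheme):

```python
import math
import numpy as np

def frac_moment(samples, alpha):
    """Sample estimate of the fractional moment M_alpha = E[|Y|**alpha]."""
    return float(np.mean(np.abs(samples) ** alpha))

# Lognormal test case with a known closed form:
# E[Y**a] = exp(a*mu + a*a*s*s/2).
rng = np.random.default_rng(1)
mu, s = 0.5, 0.3
y = rng.lognormal(mu, s, size=200_000)

def exact_moment(alpha):
    return math.exp(alpha * mu + alpha * alpha * s * s / 2.0)
```

In a reliability setting the samples would be evaluations of the performance function at the quasi-symmetric points, and the estimated fractional moments would then constrain the maximum-entropy density.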
Directory of Open Access Journals (Sweden)
JongHyup Lee
2016-08-01
Full Text Available For practical deployment of wireless sensor networks (WSN), WSNs construct clusters, where a sensor node communicates with the other nodes in its cluster and a cluster head supports connectivity between the sensor nodes and a sink node. In hybrid WSNs, cluster heads have cellular network interfaces for global connectivity. However, when WSNs are active and the load of cellular networks is high, the optimal assignment of cluster heads to base stations becomes critical. Therefore, in this paper, we propose a game-theoretic model to find the optimal assignment of base stations for hybrid WSNs. Since communication and energy costs differ across cellular systems, we devise two game models, for TDMA/FDMA and CDMA systems, employing power prices to adapt to the varying efficiency of recent wireless technologies. The proposed model is defined under the assumption of an ideal sensing field, but our evaluation shows that it is more adaptive and energy efficient than local selections.
Lee, JongHyup; Pak, Dohyun
2016-01-01
Theoretical and methodological grounds of formation of the efficient system of higher education
Directory of Open Access Journals (Sweden)
Raevneva Elena V.
2013-03-01
Full Text Available The goal of the article is to generalise the modern theoretical, methodological, methodical and instrumental provisions for building an efficient system of higher education. Analysis of the literature on the problems of building educational systems shows that the issue has been studied at both the theoretical-methodological and the instrumental level. The article considers the theoretical and methodological level of the study and specifies the theories and philosophical schools, concepts, educational paradigms and scientific approaches used in the formation of the educational paradigm. It considers models of education and models and technologies of learning as instrumental provision. As a result of the analysis, the article concludes that the reform of the system of higher education should be founded on the humanistic paradigm, which is based on the competency-building approach and assumes the use of modern (innovative) technologies of learning. The prospect of further studies in this direction is the formation of competences of potential specialists (graduates of higher educational establishments) with consideration of the requirements of employers and the market in general.
Maximum Exergetic Efficiency Operation of a Solar Powered H2O-LiBr Absorption Cooling System
Directory of Open Access Journals (Sweden)
Camelia Stanciu
2017-12-01
Full Text Available A solar driven cooling system consisting of a single-effect H2O-LiBr absorption cooling module (ACS), a parabolic trough collector (PTC), and a storage tank (ST) module is analyzed during one full day of operation. Pressurized water is used to transfer heat from the PTC to the ST and to feed the ACS desorber. The system is constrained to operate at the maximum ACS exergetic efficiency, under a time-dependent cooling load computed for 15 July for a one-storey house located near Bucharest, Romania. To set up the solar assembly, two commercial PTCs were selected, namely the PT1-IST and the PTC 1800 Solitem, and a single-unit ST was initially considered. The mathematical model, relying on the energy balance equations, was coded in the Engineering Equation Solver (EES) environment. The solar data were obtained from the Meteonorm database. The numerical simulations proved that the system cannot cover the imposed cooling load all day long, due to the large variation of the water temperature inside the ST. By splitting the ST into two units, the results revealed that the PT1-IST collector drives the ACS only between 9 am and 4:30 pm, while the PTC 1800 covers the entire cooling period (9 am–6 pm) for optimum ST capacities of 90 kg/90 kg and 90 kg/140 kg, respectively.
Elhkim, Mostafa Ould; Héraud, Fanny; Bemrah, Nawel; Gauchard, Françoise; Lorino, Tristan; Lambré, Claude; Frémy, Jean Marc; Poul, Jean-Michel
2007-04-01
Tartrazine is an artificial azo dye commonly used in human food and pharmaceutical products. Since the last assessment, carried out by the JECFA in 1964, many new studies have been conducted, some of which have incriminated tartrazine in food intolerance reactions. The aims of this work are to update the hazard characterization and to re-evaluate the safety of tartrazine. Our bibliographical review of animal studies confirms the initial hazard assessment conducted by the JECFA, and accordingly the ADI established at 7.5 mg/kg bw. From our data, in France, the estimated maximum theoretical intake of tartrazine in children is 37.2% of the ADI at the 97.5th percentile. It may therefore be concluded that, from a toxicological point of view, tartrazine does not represent a risk for the consumer. It appears more difficult to show a clear relationship between ingestion of tartrazine and the development of intolerance reactions in patients. These reactions primarily occur in patients who also suffer from recurrent urticaria or asthma. The link between tartrazine consumption and these reactions is often overestimated, and the pathogenic mechanisms remain poorly understood. The prevalence of tartrazine intolerance is estimated to be less than 0.12% in the general population. Generally, the population at risk is aware of the importance of food labelling, with a view to avoiding consumption of tartrazine. However, it has to be noted that products such as ice creams, desserts, cakes and fine bakery are often sold loose without any labelling.
Rajagopal, Adharsh; Yang, Zhibin; Jo, Sae Byeok; Braly, Ian L; Liang, Po-Wei; Hillhouse, Hugh W; Jen, Alex K-Y
2017-09-01
Organic-inorganic hybrid perovskite multijunction solar cells have immense potential to realize power conversion efficiencies (PCEs) beyond the Shockley-Queisser limit of single-junction solar cells; however, they are limited by large nonideal photovoltage loss (V oc,loss ) in small- and large-bandgap subcells. Here, an integrated approach is utilized to improve the V oc of subcells with optimized bandgaps and fabricate perovskite-perovskite tandem solar cells with small V oc,loss . A fullerene variant, Indene-C 60 bis-adduct, is used to achieve optimized interfacial contact in a small-bandgap (≈1.2 eV) subcell, which facilitates higher quasi-Fermi level splitting, reduces nonradiative recombination, alleviates hysteresis instabilities, and improves V oc to 0.84 V. Compositional engineering of large-bandgap (≈1.8 eV) perovskite is employed to realize a subcell with a transparent top electrode and photostabilized V oc of 1.22 V. The resultant monolithic perovskite-perovskite tandem solar cell shows a high V oc of 1.98 V (approaching 80% of the theoretical limit) and a stabilized PCE of 18.5%. The significantly minimized nonideal V oc,loss is better than state-of-the-art silicon-perovskite tandem solar cells, which highlights the prospects of using perovskite-perovskite tandems for solar-energy generation. It also unlocks opportunities for solar water splitting using hybrid perovskites with solar-to-hydrogen efficiencies beyond 15%. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Directory of Open Access Journals (Sweden)
Shevchenko O.
2017-08-01
Full Text Available The article examines modern scientific and theoretical positions on determining the effectiveness of soil protection measures on agricultural lands. Protecting land from degradation is one of the most important problems of agriculture, as degradation leads to a significant decrease in soil fertility and crop yields. In today's conditions, when the protection of agricultural land has become an urgent priority, what is needed is a scientific substantiation of the economic assessment of the damage that land degradation causes to agriculture, together with methods for determining the economic efficiency of the most progressive soil protection measures, technologies and complexes on the basis of their overall comparative evaluation. Soil protection measures are a system of various measures aimed at reducing the negative effect of degradation on the soil cover, ensuring the preservation and reproduction of soil fertility and integrity, and increasing soil productivity through rational use. The economic essence of soil protection measures is the economic effect achieved by preventing the damage caused by land degradation to agriculture and by obtaining additional profit as a result of their action. The economic effectiveness of soil protection measures means their effectiveness, that is, the correlation between the results and the costs that secured them. An excess of the economic result over the cost of achieving it indicates the economic efficiency of soil protection measures, and the difference between the result and the expenditure characterizes the economic effect. Ecological efficiency is characterized by the environmental parameters of the soil cover, namely: the weakening of degradation effects on soils; the improvement of their qualitative properties; and an increase in production without violation of environmental standards, etc.
DeVore, Matthew S; Gull, Stephen F; Johnson, Carey K
2012-04-05
We describe a method for analysis of single-molecule Förster resonance energy transfer (FRET) burst measurements using classic maximum entropy. Classic maximum entropy determines the Bayesian inference for the joint probability describing the total fluorescence photons and the apparent FRET efficiency. The method was tested with simulated data and then with DNA labeled with fluorescent dyes. The most probable joint distribution can be marginalized to obtain both the overall distribution of fluorescence photons and the apparent FRET efficiency distribution. This method proves to be ideal for determining the distance distribution of FRET-labeled biomolecules, and it successfully predicts the shape of the recovered distributions.
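The quantities at the core of burst analysis are simple to state: the apparent (proximity-ratio) FRET efficiency for a burst is the acceptor fraction of detected photons, and a dye separation follows from the Förster relation E = 1/(1 + (r/R0)^6). A minimal sketch, with the gamma and crosstalk corrections omitted and an illustrative R0 (not a value from the paper):

```python
def apparent_fret_efficiency(n_acceptor, n_donor):
    """Apparent (proximity-ratio) FRET efficiency from burst photon
    counts; gamma and crosstalk corrections are omitted in this sketch."""
    return n_acceptor / (n_acceptor + n_donor)

def fret_distance(E, R0=5.0):
    """Invert the Förster relation E = 1 / (1 + (r/R0)**6) for the
    dye separation r, in the same units as R0 (here nm, illustrative)."""
    return R0 * (1.0 / E - 1.0) ** (1.0 / 6.0)
```

At E = 0.5 the separation equals R0 by definition, which is the standard sanity check; the sixth-power dependence is what makes FRET such a sensitive molecular ruler near R0.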
International Nuclear Information System (INIS)
Ye Zhuo-Lin; Li Wei-Sheng; Lai Yi-Ming; He Ji-Zhou; Wang Jian-Hui
2015-01-01
We propose a quantum-mechanical Brayton engine model that works between two superposed states, employing a single particle confined in an arbitrary power-law trap as the working substance. Applying the superposition principle, we obtain the explicit expressions of the power and efficiency, and find that the efficiency at maximum power is bounded from above by the function: η_+ = θ/(θ + 1), with θ being a potential-dependent exponent. (paper)
Directory of Open Access Journals (Sweden)
Kuznetsova M.M.
2014-08-01
Full Text Available The article presents the results of theoretical and experimental research on the grinding of bulk materials in a ball mill. A new method for determining the energy-efficient operating mode of ball mills in cement clinker grinding is proposed and experimentally tested.
Oliehoek, F.A.; Visser, A.; Babuška, R.; Groen, F.C.A
2010-01-01
This chapter gives an overview of the state of the art in decision-theoretic models that describe cooperation between multiple agents in a dynamic environment. Making (near-)optimal decisions in such settings gets harder as the number of agents grows or the uncertainty about the environment increases.
International Nuclear Information System (INIS)
Uosif, M.A.; El-Taher, A.
2005-01-01
A new fit function has been developed to calculate theoretically the absolute gamma-ray detection efficiency (ηTh) of a cylindrical NaI(Tl) crystal at any gamma energy of interest in the range 10-1300 keV and at any distance between 0 and 8 cm. The total absolute gamma-ray detection efficiencies have been calculated for five detectors: four 2×2 in. crystals and one 3×3 in. NaI(Tl) crystal, at different distances. The absolute efficiency of each detector was calculated at the specific energies of the standard sources for each measuring distance. Both experimental (ηExp) and theoretical (ηTh) efficiencies were determined, and the uncertainties of the efficiency calibration were also calculated for quality control. Measurements were performed with calibrated point sources. The gamma-ray energies under consideration were 0.356, 0.662, 1.17 and 1.33 MeV. The differences between ηExp and ηTh at these energies are 1.30E-06, 7.99E-05, 2.29E-04 and 2.42E-04, respectively. The results obtained on the basis of ηExp and ηTh are in very good agreement.
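A common empirical form for this kind of efficiency curve is a low-order polynomial in ln(η) versus ln(E), fitted at the calibration energies and then evaluated at any energy of interest. A sketch with hypothetical calibration points (the energies and efficiencies below are illustrative, not the paper's measurements, and the polynomial is a generic choice rather than the authors' fit function):

```python
import numpy as np

# Hypothetical calibration points: (energy in keV, absolute efficiency).
energies = np.array([59.5, 356.0, 662.0, 1173.0, 1332.0])
eff = np.array([0.12, 0.051, 0.032, 0.021, 0.019])

# Fit ln(eta) as a quadratic in ln(E) -- a standard empirical form for
# gamma-ray efficiency curves -- then evaluate at any energy of interest.
coeffs = np.polyfit(np.log(energies), np.log(eff), deg=2)

def efficiency(e_kev):
    """Interpolated absolute efficiency at an arbitrary energy (keV)."""
    return float(np.exp(np.polyval(coeffs, np.log(e_kev))))
```

Working in log-log space keeps the fitted efficiency positive by construction and captures the roughly power-law fall-off of detection efficiency with energy.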
Design and modeling of an SJ infrared solar cell approaching upper limit of theoretical efficiency
Sahoo, G. S.; Mishra, G. P.
2018-01-01
Recent trends in photovoltaics focus on the conversion-efficiency limit in order to make cells more cost effective. To achieve this, we have to leave the golden era of the silicon cell and move towards the III-V compound semiconductors, which offer advantages such as bandgap engineering by alloying. In this work we have used the low-bandgap material GaSb and designed a single-junction (SJ) cell with a conversion efficiency of 32.98%. The SILVACO ATLAS TCAD simulator has been used to simulate the proposed model using both the Ray Tracing and Transfer Matrix Methods (under 1 sun and 1000 sun of the AM1.5G spectrum). Detailed analyses of the photogeneration rate, spectral response, developed potential, external quantum efficiency (EQE), internal quantum efficiency (IQE), short-circuit current density (Jsc), open-circuit voltage (Voc), fill factor (FF) and conversion efficiency (η) are discussed. The obtained results are compared with previously reported SJ solar cells.
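The figures of merit listed above combine into the conversion efficiency through the standard relation η = Jsc · Voc · FF / Pin. A one-line sketch with hypothetical J-V values under 1 sun (Pin = 100 mW/cm²); these numbers are illustrative and are not the paper's simulated GaSb results:

```python
def conversion_efficiency(jsc_ma_cm2, voc_v, ff, p_in_mw_cm2=100.0):
    """Cell efficiency from its J-V figures of merit:
    eta = Jsc * Voc * FF / P_in. With Jsc in mA/cm^2 and Voc in V,
    the numerator is in mW/cm^2, so the result is a fraction."""
    return jsc_ma_cm2 * voc_v * ff / p_in_mw_cm2

# Hypothetical figures for a low-bandgap single-junction cell.
eta = conversion_efficiency(42.0, 0.95, 0.82)
```

This relation also explains why concentration (the 1000-sun case) helps: Voc grows logarithmically with illumination while Jsc grows linearly, so η rises until series resistance losses take over.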
Energy Technology Data Exchange (ETDEWEB)
Ma Haifeng; Zhou Feng, E-mail: fengzhou@fudan.edu.c [State Key Laboratory of ASIC and System, Fudan University, Shanghai 201203 (China)
2010-01-15
A fully on-chip, area-efficient low-dropout linear regulator (LDO) is presented. Using the proposed adaptive frequency compensation (AFC) technique, full on-chip integration is achieved without compromising the LDO's stability over the full output current range. Meanwhile, the use of a compact pass transistor (which serves as the fast gain roll-off output stage in the AFC technique) makes the LDO very area-efficient. The proposed LDO is implemented in standard 0.35 μm CMOS technology and occupies an active area as small as 220 × 320 μm², a reduction to 58% of the area of state-of-the-art designs using technologies with the same feature size. Measurement results show that the LDO can deliver 0-60 mA of output current with 54 μA quiescent current consumption; the regulated output voltage is 1.8 V over an input voltage range from 2 to 3.3 V. (semiconductor integrated circuits)
Theoretical investigation of anomalously high efficiency in a three cavity gyroklystron amplifier
International Nuclear Information System (INIS)
Latham, P.E.; Koc, U.V.; Main, W.; Tantawi, S.G.
1992-01-01
The University of Maryland's three-cavity gyroklystron amplifier, operating at a frequency of 10 GHz, a voltage of 425 kV, a current of 160 A, and a pitch angle (v⊥/v_z) near 0.82, has demonstrated an efficiency of 35%. The authors' simulations using fixed field profiles predict a significantly lower efficiency, primarily because of the small pitch angle in the experiment. They will investigate two methods of improving the efficiency in their simulations: beam-wave interaction after the output cavity, and modification of the Qs of the first two cavities due to beam loading. Results of their nonlinear code are given for both cases
The Concept of Resource Use Efficiency as a Theoretical Basis for Promising Coal Mining Technologies
Mikhalchenko, Vadim
2017-11-01
The article is devoted to solving one of the most pressing problems of the coal mining industry: its low resource use efficiency, which results in high environmental and economic costs for operating enterprises. It is shown that it is precisely the low resource use efficiency of traditional, historically developed coal production systems that generates a conflict between indicators of economic efficiency and indicators of resilience to the uncertainty and variability of market parameters. The traditional technological paradigm of exploiting coal deposits also predetermines high, technology-driven economic risks. A solution is presented, and a real example of solving the problem is considered.
Comparison of experimental and theoretical efficiency of HPGe X-ray detector
International Nuclear Information System (INIS)
Mohanty, B.P.; Balouria, P.; Garg, M.L.; Nandi, T.K.; Mittal, V.K.; Govil, I.M.
2008-01-01
Low-energy high-purity germanium (HPGe) detectors are being increasingly used for the quantitative estimation of elements by X-ray spectrometric techniques. The software used for quantitative estimation normally evaluates a model-based detector efficiency from the manufacturer-supplied detector physical parameters. The present work shows that the manufacturer-supplied parameters of low-energy HPGe detectors need to be verified by comparing the model-based efficiency with experimental values. This is particularly crucial for detectors with ion-implanted p-type contacts
Iwahashi, Toshihiko; Ogawa, Makoto; Hosokawa, Kiyohito; Kato, Chieri; Inohara, Hidenori
2017-10-01
The hypotheses of the present study were that the maximum frequency of fluctuation of electroglottographic (EGG) signals in the expiration phase of volitional cough (VC) reflects cough efficiency, and that this EGG parameter is affected by impaired laryngeal closure, expiratory effort strength, and gender. Twenty normal healthy adults and 20 patients diagnosed with unilateral vocal fold paralysis (UVFP) were each fitted with EGG electrodes on the neck, had a transnasal laryngo-fiberscope inserted, and were asked to perform weak/strong VC tasks while EGG signals and high-speed digital images (HSDIs) of the larynx were recorded. The maximum frequency was calculated in the EGG fluctuation region coinciding with vigorous vocal fold vibration in the laryngeal HSDIs. In addition, each participant underwent spirometry for measurement of three aerodynamic parameters, including peak expiratory airflow (PEAF), during weak/strong VC tasks. Significant differences were found for both maximum EGG frequency and PEAF between the healthy and UVFP groups and between the weak and strong VC tasks. Among the three cough aerodynamic parameters, PEAF showed the highest positive correlation with the maximum EGG frequency. The correlation coefficients between the maximum EGG frequency and simultaneously recorded PEAF were 0.574 for the whole group, and 0.782/0.717/0.823/0.688 for the male/female/male-healthy/male-UVFP subgroups, respectively. Consequently, the maximum EGG frequency measured in the expiration phase of VC was shown to reflect the velocity of expiratory airflow to some extent, and is suggested to be affected by vocal fold physical properties, glottal closure condition, and expiratory function.
International Nuclear Information System (INIS)
Mohan, V.; Chudalayandi, K.; Sundaram, M.; Krishnamony, S.
1996-01-01
Estimation of gaseous activity forms an important component of air monitoring at the Madras Atomic Power Station (MAPS). The gases of importance are argon-41, an air activation product, and the fission-product noble gas xenon-133. To estimate the concentration, an experimental method is used in which a grab sample is collected in a 100 ml volumetric standard flask. The activity of the gas is then computed by gamma spectrometry using a predetermined, experimentally estimated efficiency. An attempt is made to validate this experimental method of efficiency estimation with a theoretical approach. Two analytical models, named the relative flux model and the absolute activity model, were developed independently of each other. Attention is focused on the efficiencies for 41Ar and 133Xe. Results show that the present method of sampling and analysis using a 100 ml volumetric flask is adequate and acceptable. (author). 5 refs., 2 tabs
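The grab-sample computation described above follows the standard gamma spectrometry relation (activity concentration = net counts over efficiency, gamma intensity, counting time, and sample volume); the sketch below illustrates the bookkeeping with hypothetical values, not the MAPS calibration data.

```python
def activity_concentration(net_counts, eff, gamma_intensity,
                           live_time_s, sample_volume_m3):
    """Gaseous activity concentration (Bq/m^3) from a grab-sample spectrum.

    Assumption-laden sketch: efficiency, counts, and times are illustrative.
    """
    decays = net_counts / (eff * gamma_intensity)   # decays during counting
    return decays / (live_time_s * sample_volume_m3)

# Hypothetical 41Ar measurement: 1293.6 keV line (emission probability ~0.991),
# a 100 ml flask, and an assumed detection efficiency of 1e-2.
conc = activity_concentration(net_counts=5000, eff=1e-2, gamma_intensity=0.991,
                              live_time_s=600, sample_volume_m3=100e-6)
```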
Directory of Open Access Journals (Sweden)
Hideyuki Usa
2017-01-01
Full Text Available This study attempted to develop a formula for predicting maximum muscle strength values in young, middle-aged, and elderly adults using the theoretical Grade 3 muscle strength value (moment fair, Mf: the static muscular moment needed to support a limb segment against gravity) from the manual muscle test by Daniels et al. A total of 130 healthy Japanese individuals, divided by age group, performed isometric muscle contractions at maximum effort for various movements of hip joint flexion and extension and knee joint flexion and extension; the accompanying resisting force was measured and the maximum muscle strength value (moment max, Mm) was calculated. Body weight and limb segment lengths (thigh and lower leg) were measured, and Mf was calculated from the anthropometric measures by theoretical calculation. There was a linear correlation between Mf and Mm for each of the four movement types in all groups, except knee flexion in the elderly group. However, the formula for predicting maximum muscle strength was not sufficiently accurate in middle-aged and elderly adults, suggesting that the formula obtained in this study is applicable to young adults only.
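The study's core idea, a linear formula predicting the maximum moment Mm from the theoretical Grade 3 moment Mf, can be sketched with an ordinary least-squares fit; all sample moments below are invented for illustration and are not the paper's measurements.

```python
import numpy as np

# Hypothetical (Mf, Mm) pairs in N*m for one movement type in one age group.
Mf = np.array([20.0, 25.0, 30.0, 35.0, 40.0])
Mm = np.array([55.0, 68.0, 83.0, 95.0, 110.0])

# Least-squares line Mm = slope * Mf + intercept.
slope, intercept = np.polyfit(Mf, Mm, 1)

def predict_mm(mf: float) -> float:
    """Predicted maximum moment from the theoretical Grade-3 moment."""
    return slope * mf + intercept
```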
Theoretical and empirical approaches to using films as a means to increase communication efficiency.
Directory of Open Access Journals (Sweden)
Kiselnikova, N.V.
2016-07-01
Full Text Available The theoretical framework of this analytic study is based on research in the field of film perception. Films are considered a communicative system encrypted in an ordered series of shots, and decoding proceeds during perception; the shots are the elements of a cinematic message that must be "read" by the viewer. The objective of this work is to analyze the existing theoretical approaches to using films in psychotherapy and education. An original approach to film therapy is presented, based on teaching clients to use new communicative sets and psychotherapeutic patterns through watching films. The article specifies the main points emphasized in theories of film therapy and education, considers the specifics of film therapy in the process of increasing the effectiveness of communication, and discusses the advantages and limitations of the proposed method. The contemporary forms of film therapy and the formats of cinema clubs are criticized. The theoretical assumptions and empirical research that could serve as the basis for a method of developing effective communication by means of films are discussed. Our studies demonstrate that film therapy must include an educational stage to achieve more effective and stable results. This means teaching viewers how to recognize certain psychotherapeutic and communicative patterns in the material of films, practicing the skill of finding as many examples as possible for each pattern, and transferring the acquired schemes of analyzing and recognizing patterns to one's own life circumstances. The four stages of the film-therapeutic process, as well as the effects achieved at each stage, are described in detail. In conclusion, the conditions under which the film therapy method would be most effective are discussed. Various properties of client groups and psychotherapeutic scenarios for using the method of active film therapy are described.
International Nuclear Information System (INIS)
Chen, Xi Lin; De Santis, Valerio; Umenei, Aghuinyue Esai
2014-01-01
In this study, the maximum received power obtainable through wireless power transfer (WPT) by a small receiver (Rx) coil from a relatively large transmitter (Tx) coil is numerically estimated in the frequency range from 100 kHz to 10 MHz based on human body exposure limits. Analytical calculations were first conducted to determine the worst-case coupling between a homogeneous cylindrical phantom with a radius of 0.65 m and a Tx coil positioned 0.1 m away with the radius ranging from 0.25 to 2.5 m. Subsequently, three high-resolution anatomical models were employed to compute the peak induced field intensities with respect to various Tx coil locations and dimensions. Based on the computational results, scaling factors which correlate the cylindrical phantom and anatomical model results were derived. Next, the optimal operating frequency, at which the highest transmitter source power can be utilized without exceeding the exposure limits, is found to be around 2 MHz. Finally, a formulation is proposed to estimate the maximum obtainable power of WPT in a typical room scenario while adhering to the human body exposure compliance mandates. (paper)
Oikonomou, V.; Jepma, C.J.; Becchis, F.; Russolillo, D.
2008-01-01
In this paper we analyze interactions of two energy policy instruments, namely a White Certificates (WhC) scheme as an innovative policy instrument for energy efficiency improvement and energy taxation. These policy instruments differ in terms of objectives and final impacts on the price of
Ramachandran, Hema; Pillai, K. P. P.; Bindu, G. R.
2017-08-01
A two-port network model of a wireless power transfer system that takes into account the distributed capacitances, using a PP network topology with top coupling, is developed in this work. The operating and maximum power transfer efficiencies are determined analytically in terms of S-parameters. The system performance predicted by the model is verified experimentally with a high-power household lighting load of 230 V, 100 W, tested at two forced resonant frequencies, 600 kHz and 1.2 MHz. The experimental results are in close agreement with the proposed model.
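One common way to express a two-port power transfer efficiency in terms of S-parameters, which may differ in detail from the paper's derivation, is power delivered to the load divided by power actually accepted at the input port:

```python
def power_transfer_efficiency(s11: complex, s21: complex) -> float:
    """Efficiency eta = |S21|^2 / (1 - |S11|^2).

    Generic sketch only: it assumes a matched load and ignores port-2
    reflections; the paper's exact S-parameter expressions are not given
    in the abstract.
    """
    accepted = 1.0 - abs(s11) ** 2   # fraction of incident power accepted
    return abs(s21) ** 2 / accepted

# Illustrative values for a reasonably matched link at resonance.
eta = power_transfer_efficiency(s11=0.1 + 0.05j, s21=0.85 + 0.1j)
```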
Gritti, Fabrice
2017-02-17
Superficially porous particles (SPPs) can be prepared by a pseudomorphic transformation (PMT), which produces straight, unconnected, and radially oriented mesopores (ROMs). ROMs can be either open at both ends, in fully porous particles (FPPs), or closed at one end, in SPPs. The impact of ROMs on the longitudinal diffusion (B/u), solid-liquid mass transfer resistance (C_s u), and eddy dispersion (A(u)) contributions to the height equivalent to a theoretical plate (HETP) of 3D randomly packed columns was investigated from a theoretical viewpoint. Torquato's theory of effective diffusion in packed beds (B term), Giddings' coupling theory of eddy dispersion (A term), and Giddings' generalized nonequilibrium theory (C_s term) are applied to make predictions. First, the A term is found to be nearly independent of the internal structure of the particle. Secondly, in the absence of flow, infinitely narrow ROMs open at both ends (no constriction effect) induce an internal hindrance factor of 23 regarding diffusion along the axial direction. Experimental data reveal that one-end-closed, 80 Å wide ROMs in SPPs lead to a measurable internal hindrance factor of 27 regarding diffusion in the porous shell. Thirdly, above the optimum speed, the C_s coefficient depends on the geometry (cylinders, cones, etc.) of the ROMs: when ROMs are conical in SPPs, C_s is expected to decrease by 80% with respect to cylindrical ROMs. From an application perspective, PMT-SPPs prepared with narrow ROMs are well suited for the analysis of small molecules at or below the optimum speed (lowest B term), while PMT-SPPs made of wide, conical ROMs are ideal for the analysis of large molecules above the optimum speed (smallest C_s term). Copyright © 2017. Published by Elsevier B.V.
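The interplay of the B/u and C_s u plate-height contributions mentioned above can be illustrated with a simplified model in which a constant A term stands in for Giddings' full coupling expression; the coefficients are arbitrary illustrative numbers.

```python
import math

def hetp(u, A, B, Cs):
    """Simplified van Deemter-type plate height H(u) = A + B/u + Cs*u.

    Sketch only: the paper's A(u) follows Giddings' coupling theory and is
    velocity-dependent, unlike the constant A used here.
    """
    return A + B / u + Cs * u

def optimum_velocity(B, Cs):
    # dH/du = -B/u^2 + Cs = 0  ->  u_opt = sqrt(B/Cs)
    return math.sqrt(B / Cs)

# Arbitrary coefficients in consistent (reduced) units.
A, B, Cs = 2.0, 10.0, 0.1
u_opt = optimum_velocity(B, Cs)
H_min = hetp(u_opt, A, B, Cs)   # equals A + 2*sqrt(B*Cs)
```

This makes the application note concrete: a smaller B lowers H below the optimum speed, while a smaller C_s flattens the rise above it.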
Busarov, S. S.; Vasil'ev, V. K.; Busarov, I. S.; Titov, D. S.; Panin, Ju. N.
2017-08-01
The technology for calculating the operating processes of slow-speed long-stroke reciprocating stages, developed earlier and tested with air as the working fluid, has allowed the authors to obtain successful results for compressing gases to medium pressures in a single stage. In this connection, the efficiency of applying slow-speed long-stroke stages in various fields of technology and the national economy, where the working fluid is another gas or gas mixture, is a relevant question. The article presents the results of an efficiency evaluation of single-stage compressor units based on such stages for cases where ammonia, hydrogen, helium or a propane-butane mixture is used as the working fluid.
Yilmaz, Y. A.; Tandogan, S. E.; Hayran, Z.; Giden, I. H.; Turduev, M.; Kurt, H.
2017-07-01
Integrated photonic systems require efficient, compact, and broadband solutions for strongly coupling light into and out of optical waveguides. The present work investigates the problem of efficient optical power transfer between optical waveguides with different input/output terminal widths. We propose a practical and feasible concept for implementing and designing an optical coupler by introducing gradual index modulation in the coupler section. The index profile of the coupler section is modulated with a Gaussian function with the help of striped waveguides. Effective medium theory is used to replace the original spatially varying index profile with dielectric stripes of finite length/width having a constant effective refractive index. 2D and 3D finite-difference time-domain analyses are used to investigate the sampling effect of the designed optical coupler and to determine the parameters that play a crucial role in enhancing the optical power transfer performance. Compared with the coupling performance of conventional benchmark adiabatic and butt couplers, the coupling efficiency of the designed striped waveguide coupler increases from approximately 30% to 95% over a wide frequency interval. In addition, to realize a realistic optical coupler suitable for integrated photonic applications, the proposed structure is numerically designed on a silicon-on-insulator (SOI) wafer. The implemented SOI-platform optical coupler operates in the telecom wavelength regime (λ = 1.55 μm), and the dimensions of the striped coupler are kept at 9.77 μm (transverse to the propagation direction) and 7.69 μm (along the propagation direction), with the unit distance fixed at 465 nm. Finally, to demonstrate the operating design principle, microwave experiments were conducted: a spot-size conversion ratio as high as 7.1:1 was measured, and a coupling efficiency over 60% in the frequency range of 5.0-16.0 GHz has also been
International Nuclear Information System (INIS)
Rodriguez-Rodriguez, A.; Correa-Alfonso, C.M.; Lopez-Pino, N.; Padilla-Cabal, F.; D'Alessandro, K.; Corrales, Y.; Garcia-Alvarez, J. A.; Perez-Mellor, A.; Baly-Gil, L.; Machado, A.
2011-01-01
A highly detailed characterization of a 130 cm³ n-type HPGe detector, employed in low-background gamma spectrometry measurements, was performed. Precise measured data and several Monte Carlo (MC) calculations have been combined to optimize the detector parameters. The location of the HPGe crystal inside the aluminum end-cap, as well as its dimensions, including the borehole radius and height, were determined from frontal and lateral scans. Additionally, X-ray radiography and computed axial tomography (CT) studies were carried out to complement the information about detector features. Using seven calibrated point sources (241Am, 133Ba, 57,60Co, 137Cs, 22Na and 152Eu), photo-peak efficiency curves at three different source-detector distances (SDD) were obtained. Taking the experimental values into account, an optimization procedure by means of MC simulations (MCNPX 2.6 code) was performed. MC efficiency curves were calculated by specifying the optimized detector parameters in the MCNPX input files. The calculated efficiencies agree with the empirical data, showing relative deviations of less than 10%. (Author)
Energy Technology Data Exchange (ETDEWEB)
Barrera, Manuel, E-mail: manuel.barrera@uca.es [Escuela Superior de Ingeniería, University of Cadiz, Avda, Universidad de Cadiz 10, 11519 Puerto Real, Cadiz (Spain); Suarez-Llorens, Alfonso [Facultad de Ciencias, University of Cadiz, Avda, Rep. Saharaui s/n, 11510 Puerto Real, Cadiz (Spain); Casas-Ruiz, Melquiades; Alonso, José J.; Vidal, Juan [CEIMAR, University of Cadiz, Avda, Rep. Saharaui s/n, 11510 Puerto Real, Cádiz (Spain)
2017-05-11
A generic theoretical methodology for calculating the efficiency of gamma spectrometry systems is introduced in this work. The procedure is valid for any type of source and detector and can be applied to determine the full-energy-peak and total efficiency of any source-detector system. The methodology is based on the idea of an underlying probability of detection, which describes the physical model for the detection of the gamma radiation in the particular situation studied. This probability depends explicitly on the direction of the gamma radiation, and this dependence allows the development of more realistic and complex models than the traditional models based on point-source integration. The probability function employed in practice must reproduce the relevant characteristics of the detection process in the particular situation studied. Once the probability is defined, the efficiency calculations can in general be performed using numerical methods; Monte Carlo integration is especially useful when complex probability functions are involved. The methodology can be used for the direct determination of the efficiency and also for the calculation of corrections that require this determination, such as coincidence-summing, geometric, or self-attenuation corrections. In particular, we have applied the procedure to obtain some of the classical self-attenuation correction factors usually employed to correct for sample attenuation in cylindrical-geometry sources. The methodology clarifies the theoretical basis and the approximations associated with each factor by making explicit the probability that is generally hidden and implicit in each model. It has been shown that most of these self-attenuation correction factors can be derived from a common underlying probability, this probability having a growing level of complexity as it reproduces more precisely
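A minimal sketch of the Monte Carlo integration idea, assuming the simplest possible underlying probability: 1 if an isotropically emitted ray from an on-axis point source hits a circular detector face, 0 otherwise, with no interaction physics. This reduces the efficiency to the geometric solid-angle fraction, for which a closed form exists to check against.

```python
import math
import random

def mc_geometric_efficiency(distance, radius, n=200_000, seed=1):
    """Monte Carlo estimate of the solid-angle fraction subtended by a
    circular detector face seen from an on-axis point source.

    Illustrative only: a realistic underlying probability would also model
    interaction depth, scattering, and attenuation.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        # Isotropic emission: cos(theta) uniform in [-1, 1].
        cos_t = rng.uniform(-1.0, 1.0)
        if cos_t <= 0.0:
            continue  # emitted away from the detector
        # Ray crosses the detector plane at radial offset distance*tan(theta).
        tan_t = math.sqrt(1.0 - cos_t**2) / cos_t
        if distance * tan_t <= radius:
            hits += 1
    return hits / n

est = mc_geometric_efficiency(distance=5.0, radius=3.0)
exact = 0.5 * (1.0 - 5.0 / math.sqrt(5.0**2 + 3.0**2))  # analytic check
```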
Directory of Open Access Journals (Sweden)
M. Girotto
2012-06-01
Full Text Available This study aimed to evaluate the speed and intensity of action of hexazinone, applied alone and in mixture with other photosystem II inhibitors, through the photosynthetic efficiency of Panicum maximum in post-emergence. The trial comprised the following treatments: hexazinone (250 g ha-1), tebuthiuron (1.0 kg ha-1), hexazinone + tebuthiuron (125 g ha-1 + 0.5 kg ha-1), diuron (2,400 g ha-1), hexazinone + diuron (125 + 1,200 g ha-1), metribuzin (1,440 g ha-1), hexazinone + metribuzin (125 + 720 g ha-1), and an untreated control. The experiment was set up in a completely randomized design with four replications. After application of the treatments, the plants were moved to a greenhouse under controlled temperature and humidity conditions, where they remained for the experimental period; the evaluations performed were electron transport rate and visual assessment of intoxication. Fluorometer readings were taken 1, 2, 6, 24, 48, 72, 120 and 168 hours after application, and visual evaluations were made three and seven days after application. The results showed differences among the treatments, notably for diuron, which reduced electron transport slowly compared with the other herbicides and, in mixture with hexazinone, showed a synergistic effect. Use of the fluorometer revealed early intoxication in P. maximum plants after the application of photosystem II-inhibiting herbicides, both alone and in mixture.
International Nuclear Information System (INIS)
Sarr, Joachim-André Raymond; Mathieu-Potvin, François
2016-01-01
Highlights: • A new stratagem is proposed to improve thermal efficiency of Rankine cycles. • Three new configurations are optimized by means of numerical simulations. • The Rankine-1SCR design is advantageous for 1338 different fluid combinations. • The Rankine-2SCR design is advantageous for 772 different fluid combinations. • The Rankine-3SCR design is advantageous for 768 different fluid combinations. - Abstract: In this paper, three different modifications of the basic Rankine thermodynamic cycle are proposed. The objective is to increase the thermal efficiency of power systems based on Rankine cycles. The three new systems are named “Rankine-1SCR”, “Rankine-2SCR”, and “Rankine-3SCR” cycles, and they consist of linking a refrigeration cycle to the basic Rankine cycle. The idea is to use the refrigeration cycle to create a low temperature heat sink for the Rankine cycle. These three new power plant configurations are modeled and optimized with numerical tools, and then they are compared with the basic Rankine cycle. The objective function is the thermal efficiency of the systems (i.e., net power output (kW) divided by heat rate (kW) entering the system), and the design variables are the operating temperatures within the systems. Among the 84 × 84 (i.e., 7056) possible combinations of working and cooling fluids investigated in this paper, it is shown that: (i) the Rankine-1SCR system is advantageous for 1338 different fluid combinations, (ii) the Rankine-2SCR system is advantageous for 772 different fluid combinations, and (iii) the Rankine-3SCR system is advantageous for 768 different fluid combinations.
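The objective function above (net power output divided by the heat rate entering the system) amounts to simple bookkeeping that must also charge the refrigeration loop's work input against the net output; all numbers below are invented for illustration and are not the paper's results.

```python
def thermal_efficiency(net_power_kw: float, heat_input_kw: float) -> float:
    """Objective function from the abstract: net power over heat rate in."""
    return net_power_kw / heat_input_kw

# Hypothetical comparison: a basic Rankine plant vs. a modified cycle in
# which a refrigeration loop lowers the condensing temperature. The turbine
# gains more work than the refrigerator consumes in this invented example.
basic = thermal_efficiency(net_power_kw=350.0, heat_input_kw=1000.0)

turbine_kw, pump_kw, refrigerator_kw = 420.0, 10.0, 40.0
modified = thermal_efficiency(
    net_power_kw=turbine_kw - pump_kw - refrigerator_kw,
    heat_input_kw=1000.0,
)
```

Whether the modified cycle actually wins depends on the working/cooling fluid pair, which is why the paper sweeps all 84 × 84 combinations.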
International Nuclear Information System (INIS)
Smith, D.F.; Orwig, L.E.
1982-01-01
A method for predicting the hard X-ray spectrum in the 10-100 keV range for compact flares during their initial rise is developed on the basis of a thermal model. Observations of the flares of 1980 April 13, 4:05 UT, and 1980 May 9, 7:12 UT are given, and their combined spectra from the Hard X-ray Burst Spectrometer and the Hard X-ray Imaging Spectrometer on the Solar Maximum Mission are deduced. Constraints on the cross-sectional area of the supposed emitting arch are obtained from Hard X-ray Imaging Spectrometer data. A power-law spectrum is predicted for the rise of the April 13 flare for initial arch densities less than 10^10 cm^-3, and also for the May 9 flare for initial arch densities less than 5.4 × 10^10 cm^-3. In both cases power-law spectra are observed. Limitations and implications of these results are discussed
A Game-Theoretical Approach for Spectrum Efficiency Improvement in Cloud-RAN
Directory of Open Access Journals (Sweden)
Zhuofu Zhou
2016-01-01
Full Text Available As a tremendous number of mobile devices will access the Internet in the future, cells that can provide high data rates and more capacity are expected to be deployed. Specifically, in the next generation of mobile communication (5G), cloud computing is expected to be applied to the radio access network. In the cloud radio access network (Cloud-RAN), the traditional base station is divided into two parts: remote radio heads (RRHs) and baseband units (BBUs). RRHs are geographically distributed and densely deployed so as to achieve high data rates and low latency. However, ultradense deployment inevitably degrades spectrum efficiency because of severe intercell interference among RRHs. In this paper, downlink spectrum efficiency is improved through cooperative transmission based on forming coalitions of RRHs. We formulate the problem as a coalition formation game in partition form. In the process of coalition formation, each RRH can join or leave a coalition to maximize its own individual utility while also taking the coalition utility into account. Moreover, the convergence and stability of the resulting coalition structure are studied. Numerical simulation results demonstrate that the proposed coalition-formation-game approach is superior to the noncooperative method in terms of aggregate coalition utility.
Lin, Ronghui; Galan, Sergio Valdes; Sun, Haiding; Hu, Yangrui; Alias, Mohd Sharizal; Janjua, Bilal; Ng, Tien Khee; Ooi, Boon S.; Li, Xiaohang
2018-01-01
A nanowire (NW) structure provides an alternative scheme for deep ultraviolet light emitting diodes (DUV-LEDs) that promises high material quality and better light extraction efficiency (LEE). In this report, we investigate the influence of the tapering angle of closely packed AlGaN NWs, which is found to exist naturally in molecular beam epitaxy (MBE) grown NW structures, on the LEE of NW DUV-LEDs. It is observed that, by having a small tapering angle, the vertical extraction is greatly enhanced for both transverse magnetic (TM) and transverse electric (TE) polarizations. Most notably, the vertical extraction of TM emission increased from 4.8% to 24.3%, which makes the LEE reasonably large to achieve high-performance DUV-LEDs. This is because the breaking of symmetry in the vertical direction changes the propagation of the light significantly to allow more coupling into radiation modes. Finally, we introduce errors to the NW positions to show the advantages of the tapered NW structures can be projected to random closely packed NW arrays. The results obtained in this paper can provide guidelines for designing efficient NW DUV-LEDs.
Theoretical Comparison of the Energy Conversion Efficiencies of Electrostatic Energy Harvesters
Energy Technology Data Exchange (ETDEWEB)
Kim, Chang-Kyu [Korea Polytechnic University, Siheung (Korea, Republic of)
2017-02-15
The characteristics of a new type of electrostatic energy harvesting device, called an out-of-plane overlap harvester, are analyzed for the first time. This device utilizes a movable part that vibrates up and down relative to the surface of a wafer, changing the overlapping area between the vertical comb fingers. This operating principle enables the minimum capacitance to be close to zero and significantly increases the energy conversion efficiency per unit volume. The characteristics of the out-of-plane overlap harvester, an in-plane gap-closing harvester, and an in-plane overlap harvester are compared in terms of the length, height, and width of the comb fingers and the parasitic capacitance. The efficiency improves as the length or height increases and as the width or parasitic capacitance decreases. In every case, the out-of-plane overlap harvester is able to generate more energy and is thus preferable to the other designs. It is also free from collisions between the two electrodes caused by random vibration amplitudes and generates more energy from off-axis perturbations. Given its small feature size, this device is expected to provide more energy to various types of wireless electronic devices and to offer high compatibility with other integrated circuits and ease of embedding.
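The advantage of driving the minimum capacitance toward zero can be illustrated with the standard constant-voltage electrostatic conversion energy, E = ½(Cmax − Cmin)V²; the capacitance and voltage values below are illustrative assumptions, and the paper's full models also include parasitic capacitance and comb-finger geometry.

```python
def energy_per_cycle(c_max_f: float, c_min_f: float, v_volt: float) -> float:
    """Energy converted per vibration cycle by a constant-voltage
    electrostatic harvester: E = 0.5 * (Cmax - Cmin) * V^2."""
    return 0.5 * (c_max_f - c_min_f) * v_volt ** 2

# The out-of-plane overlap design drives Cmin toward zero, maximising the
# capacitance swing for a given Cmax (values in farads, illustrative only).
e_overlap = energy_per_cycle(c_max_f=100e-12, c_min_f=1e-12, v_volt=10.0)
e_gap     = energy_per_cycle(c_max_f=100e-12, c_min_f=30e-12, v_volt=10.0)
```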
Directory of Open Access Journals (Sweden)
Marianna Chimienti
2014-09-01
Full Text Available Foraging in the marine environment presents particular challenges for air-breathing predators. Information about prey capture rates and the strategies that diving predators use to maximise prey encounter rates and foraging success is still largely lacking and difficult to obtain. Moreover, with the growing awareness of potential climate change impacts and the increasing interest in the development of renewable energy sources, it is unknown how the foraging activity of diving predators such as seabirds will respond to both the presence of underwater structures and the potential corresponding changes in prey distributions. Motivated by this issue, we developed a theoretical model to gain general understanding of how the foraging efficiency of diving predators may vary according to landscape structure and foraging strategy. Our theoretical model highlights that animal movements, intervals between prey capture and foraging efficiency are likely to depend critically on the distribution of the prey resource and the size and distribution of introduced underwater structures. For multiple prey loaders, changes in prey distribution affected the searching time necessary to catch a set amount of prey, which in turn affected the foraging efficiency. The spatial aggregation of prey around small devices (∼ 9 × 9 m) created a valuable habitat for successful foraging activity, resulting in shorter intervals between prey captures and higher foraging efficiency. The presence of large devices (∼ 24 × 24 m), however, represented an obstacle for predator movement, thus increasing the intervals between prey captures. In contrast, for single prey loaders the introduction of spatial aggregation of the resources did not represent an advantage, suggesting that their foraging efficiency is more strongly affected by other factors such as the time to find the first prey item, which was found to occur sooner in the presence of large devices. The development of this theoretical model
International Nuclear Information System (INIS)
Bravo, Ivan; Marston, George; Nutt, David R.; Shine, Keith P.
2011-01-01
Integrated infrared cross-sections and wavenumber positions for the vibrational modes of a range of hydrofluoroethers (HFEs) and hydrofluoropolyethers (HFPEs) have been calculated. Spectra were determined using a density functional method with an empirically derived correction for the wavenumbers of band positions. Radiative efficiencies (REs) were determined using the Pinnock et al. method and were used with atmospheric lifetimes from the literature to determine global warming potentials (GWPs). For the HFEs and the majority of the molecules in the HG series of HFPEs, theoretically determined absorption cross-sections and REs lie within ca. 10% of those determined using measured spectra. For the larger molecules in the HG series and the HG' series of HFPEs, agreement is less good, with theoretical values for the integrated cross-sections being up to 35% higher than the experimental values; REs are up to 45% higher. Our method gives better results than previous theoretical approaches, because of the level of theory chosen and, for REs, because an empirical wavenumber correction derived for perfluorocarbons is effective in predicting the positions of C-F stretching frequencies at around 1250 cm⁻¹ for the molecules considered here.
White Certificates for energy efficiency improvement with energy taxes: A theoretical economic model
International Nuclear Information System (INIS)
Oikonomou, Vlasis; Jepma, Catrinus; Becchis, Franco; Russolillo, Daniele
2008-01-01
In this paper we analyze the interactions of two energy policy instruments, namely a White Certificates (WhC) scheme as an innovative policy instrument for energy efficiency improvement and energy taxation. These policy instruments differ in terms of objectives and final impacts on the price of electricity. We examine the effect of these policy instruments in the electricity sector, focusing on electricity producers and suppliers in a competitive market. Using microeconomic theory, we identify synergies between market players and demonstrate the total effect on the electricity price when suppliers internalize the behaviour of producers in their decisions. This model refers to an ideal market situation of full liberalization. The cases we examine consist of electricity producers with and without a carbon tax, and electricity suppliers with and without an electricity tax and with WhC obligations. Furthermore, we present a parallel implementation of WhC for electricity suppliers with a carbon tax on electricity producers, and an electricity tax with WhC obligations on electricity suppliers. We demonstrate differences in the optimization behaviour of producers and suppliers. Based on a couple of cases of WhC with carbon and electricity taxes, various positive and negative effects of both schemes in terms of target achievement and efficiency are present, which can lead to an added value of such schemes in the policy mix, although the uncertainties of outcomes are quite high. A basic finding is that in a merit order several parameters can increase the final electricity price after the implementation of different policies: demand for electricity and electricity supply cost to a large extent, followed by the level of obligation for energy saving, the level of the penalty, and the price of WhC (representing the marginal costs of energy saving projects). The impact magnitude of the parameters depends on the values chosen and on the initial position of suppliers (i.e. if their actual behaviour deviates
Ren, Xin-Yao; Wu, Yong; Wang, Li; Zhao, Liang; Zhang, Min; Geng, Yun; Su, Zhong-Min
2014-06-01
Density functional theory/time-dependent density functional theory was used to investigate the synthesized guanidinate-based iridium(III) complex [(ppy)2Ir{(N(i)Pr)2C(NPh2)}] (1) and two designed derivatives (2 and 3) to determine the influences of different cyclometalated ligands on photophysical properties. In addition to the conventional discussion of geometric relaxations and absorption and emission properties, many relevant parameters, including spin-orbit coupling (SOC) matrix elements, zero-field-splitting parameters, radiative rate constants (kr) and so on, were quantitatively evaluated. The results reveal that the replacement of the pyridine ring in the 2-phenylpyridine ligand with different diazole rings can not only enlarge the frontier molecular orbital energy gaps, resulting in a blue-shift of the absorption spectra for 2 and 3, but also enhance the absorption intensity of 3 in the lower-energy region. Furthermore, it is intriguing to note that the photoluminescence quantum efficiency (ΦPL) of 3 is significantly higher than that of 1. This can be explained by its large SOC value (n = 3-4) and large transition electric dipole moment (μS3), which could significantly contribute to a larger kr. Besides, compared with 1, the higher emitting energy (ET1) and smaller (2) value for 3 may lead to a smaller non-radiative decay rate. Additionally, the detailed results also indicate that, compared to 1 with a pyridine ring, 3 with an imidazole ring shows better hole-injection ability. Therefore, the designed complex 3 can be expected to be a promising candidate as a highly efficient guanidinate-based phosphorescence emitter for OLED applications. Copyright © 2014 Elsevier Inc. All rights reserved.
The Efficiency of a Hybrid Flapping Wing Structure—A Theoretical Model Experimentally Verified
Directory of Open Access Journals (Sweden)
Yuval Keren
2016-07-01
Full Text Available To propel a lightweight structure, a hybrid wing structure was designed; the wing's geometry resembled a rotor blade, and its flexibility resembled an insect's flapping wing. The wing was designed to be flexible in twist and spanwise rigid, thus maintaining the aeroelastic advantages of a flexible wing. The use of a relatively "thick" airfoil enabled the achievement of a higher strength-to-weight ratio by increasing the wing's moment of inertia. The optimal design was based on a simplified quasi-steady inviscid mathematical model that approximately resembles the aerodynamic and inertial behavior of the flapping wing. A flapping mechanism that imitates the insects' flapping pattern was designed and manufactured, and a set of experiments for various parameters was performed. The simplified analytical model was updated according to the test results, compensating for the viscous increase of drag and decrease of lift that were neglected in the simplified calculations. The propelling efficiency of the hovering wing at various design parameters was calculated using the updated model. It was further validated by testing a smaller wing flapping at a higher frequency. Good and consistent test results were obtained in line with the updated model, yielding a simple yet accurate tool for flapping-wing design.
Theoretical evidence of PtSn alloy efficiency for CO oxidation.
Dupont, Céline; Jugnet, Yvette; Loffreda, David
2006-07-19
The efficiency of PtSn alloy surfaces toward CO oxidation is demonstrated from first-principles theory. Oxidation kinetics based on atomistic density-functional theory calculations shows that the Pt3Sn surface alloy exhibits a promising catalytic activity for fuel cells. At room temperature, the corresponding rate outstrips the activity of Pt(111) by several orders of magnitude. According to the oxidation pathways, the activation barriers are actually lower on Pt3Sn(111) and Pt3Sn/Pt(111) surfaces than on Pt(111). A generalization of Hammer's model is proposed to elucidate the key role of tin in lowering the barriers. Among the energy contributions, a correlation is evidenced between the decrease of the barrier and the strengthening of the attractive interaction energy between the CO and O moieties. The presence of tin also modifies the symmetry of the transition states, which are composed of a CO adsorbate on a Pt near-top position and an O atom adsorbed on an asymmetric mixed PtSn bridge site. Along the reaction pathways, a chemisorbed CO2 surface intermediate is obtained on all the surfaces. These results are supported by a thorough vibrational analysis, including the coupling with the surface phonons, which reveals the existence of a stretching frequency between the metal substrate and the CO2 molecule.
International Nuclear Information System (INIS)
Rognon, F.
2005-06-01
This comprehensive report for the Swiss Federal Office of Energy (SFOE) takes a look at how the efficiency potential of heat pumps, together with combined heat and power systems, can help maximise the reduction of CO₂ emissions from fossil-fuelled heat and electricity generation in Switzerland. In Switzerland, approximately 80% of the low-temperature heat required for space-heating and for the heating-up of hot water is produced by burning combustibles. Around a million gas and oil boilers were in use in Switzerland in 2000, and these accounted for approximately half the country's 41.1 million tonnes of CO₂ emissions. The authors state that there is a more efficient solution with lower CO₂ emissions: the heat pump. With the enormous potential of our environment, it would be possible to replace half the total number of boilers in use today with heat pumps. This would be equivalent to 90 PJ p.a. of useful heat, or 500,000 systems. The power source for heat pumps should come from the substitution of electric heating systems (electric resistor-based systems) and from the replacement of boilers. This should be done by using combined heat and power systems with full heat utilisation. This means, according to the authors, that the entire required power source can be provided without the need to construct new electricity production plants. The paper examines and discusses the theoretical, technical, market and realisable potentials
Zhao, Caibin; Jin, Lingxia; Ge, Hongguang; Guo, Xiaohua; Zhang, Qiang; Wang, Wenliang
2018-02-01
In this work, to develop efficient organic dye sensitisers, a series of novel donor-acceptor-π-acceptor metal-free dyes were designed based on the C217 dye by modifying different auxiliary acceptors, and their photovoltaic performances were theoretically investigated with systematic density functional theory calculations coupled with the incoherent charge-hopping model. Results showed that the designed dyes possess lower highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO) levels as well as narrower HOMO-LUMO gaps compared to C217, which indicates their higher light-harvesting efficiency. In addition, using the (TiO2)38 cluster and a bidentate bridging model, we predicted that the photoelectric conversion efficiency (PCE) for the C217 dye is as high as 9.92% under air mass (AM) 1.5 illumination (100 mW cm⁻²), which is in good agreement with its experimental value (9.60%-9.90%). More interestingly, the cell sensitised by dye 7, designed in this work, exhibits a middle-sized open-circuit voltage of 0.737 V, a large short-circuit photocurrent density of 21.16 mA cm⁻² and a fill factor of 0.801, corresponding to a quite high PCE of 12.49%, indicating that dye 7 is a more promising sensitiser candidate than C217 and is worth further experimental study.
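The figures reported for dye 7 are internally consistent with the standard solar-cell relation PCE = Voc × Jsc × FF / Pin. A quick arithmetic check (the function below is our illustration, not code from the study):

```python
def pce(voc_v, jsc_ma_cm2, ff, p_in_mw_cm2=100.0):
    """Power conversion efficiency (%) from open-circuit voltage (V),
    short-circuit current density (mA/cm^2), fill factor, and incident
    power (mW/cm^2; 100 for AM 1.5): PCE = Voc * Jsc * FF / Pin."""
    return 100.0 * voc_v * jsc_ma_cm2 * ff / p_in_mw_cm2

print(round(pce(0.737, 21.16, 0.801), 2))  # 12.49, matching the PCE quoted for dye 7
```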
International Nuclear Information System (INIS)
Palfalvi, J.
1983-04-01
The neutron sensitivity of Kodak-Pathe LR 115 II type cellulose nitrate track detectors with different (n,α) radiators was investigated by calculations and measurements. The α counting efficiency using an optical microscope is 95% for α particles with a maximum energy of 2 MeV. When using an image analyzer, the etched through-tracks (holes) with diameters greater than 2 μm are counted. The efficiency then depends only on the original and removed layer thickness, but not on the etching temperature within the range of 40 to 60 °C or the 2.5 to 6 N normality of the NaOH etchant. Efficiency varies from about 3 to 20% for alphas from the ⁶Li(n,α)T reaction if the removed layer lies in the range of 7 to 10 μm, and varies from 2 to 10% for ¹⁰B(n,α)⁷Li reaction alphas when the layer removal is 8 to 10 μm. (author)
International Nuclear Information System (INIS)
Ganjefar, Soheil; Ghassemi, Ali Akbar; Ahmadi, Mohamad Mehdi
2014-01-01
In this paper, a quantum neural network (QNN) is used as a controller in adaptive control structures to improve the efficiency of maximum power point tracking (MPPT) methods in a wind turbine system. For this purpose, direct and indirect adaptive control structures equipped with a QNN are used in the tip-speed ratio (TSR) and optimum torque (OT) MPPT methods. The proposed control schemes are evaluated on a battery-charging windmill system equipped with a PMSG (permanent magnet synchronous generator) at random wind speeds to demonstrate their superior effectiveness compared to a PID controller and a conventional neural network controller (CNNC). - Highlights: • Using a new control method to harvest the maximum power from a wind energy system. • Using an adaptive control scheme based on a quantum neural network (QNN). • Improving the MPPT-TSR method by a direct adaptive control scheme based on QNN. • Improving the MPPT-OT method by an indirect adaptive control scheme based on QNN. • Using a windmill system based on PMSG to evaluate the proposed control schemes
Directory of Open Access Journals (Sweden)
J. Pablo Arroyo-Mora
2018-04-01
Full Text Available Peatlands cover a large area in Canada and globally (12% and 3% of the landmass, respectively). These ecosystems play an important role in climate regulation through the sequestration of carbon dioxide from, and the release of methane to, the atmosphere. Monitoring approaches, required to understand the response of peatlands to climate change at large spatial scales, are challenged by their unique vegetation characteristics, intrinsic hydrological complexity, and rapid changes over short periods of time (e.g., seasonality). In this study, we demonstrate the use of multitemporal, high spatial resolution (1 m²) hyperspectral airborne imagery (Compact Airborne Spectrographic Imager (CASI) and Shortwave Airborne Spectrographic Imager (SASI) sensors) for assessing maximum instantaneous gross photosynthesis (PGmax) in hummocks, and gravimetric water content (GWC) and carbon uptake efficiency in hollows, at the Mer Bleue ombrotrophic bog. We applied empirical models (i.e., in situ data and spectral indices) and we derived spatial and temporal trends for the aforementioned variables. Our findings revealed the distribution of hummocks (51.2%), hollows (12.7%), and tree cover (33.6%), which is the first high spatial resolution map of this nature at Mer Bleue. For hummocks, we found growing season PGmax values between 8 μmol m−2 s−1 and 12 μmol m−2 s−1 were predominant (86.3% of the total area). For hollows, our results revealed, for the first time, the spatial heterogeneity and seasonal trends for gravimetric water content and carbon uptake efficiency for the whole bog.
Energy Technology Data Exchange (ETDEWEB)
1993-07-01
This document provides an analysis of the potential impacts associated with the proposed action, which is continued operation of Naval Petroleum Reserve No. 1 (NPR-1) at the Maximum Efficient Rate (MER) as authorized by Public Law 94-258, the Naval Petroleum Reserves Production Act of 1976 (Act). The document also provides a similar analysis of alternatives to the proposed action, which also involve continued operations, but under lower development scenarios and lower rates of production. NPR-1 is a large oil and gas field jointly owned and operated by the federal government and Chevron U.S.A. Inc. (CUSA) pursuant to a Unit Plan Contract that became effective in 1944; the government's interest is approximately 78% and CUSA's interest is approximately 22%. The government's interest is under the jurisdiction of the United States Department of Energy (DOE). The facility is approximately 17,409 acres (74 square miles), and it is located in Kern County, California, about 25 miles southwest of Bakersfield and 100 miles north of Los Angeles in the south central portion of the state. The environmental analysis presented herein is a supplement to the NPR-1 Final Environmental Impact Statement that was issued by DOE in 1979 (the 1979 EIS). As such, this document is a Supplemental Environmental Impact Statement (SEIS).
Faghihi, Faramarz; Kolodziejski, Christoph; Fiala, André; Wörgötter, Florentin; Tetzlaff, Christian
2013-12-20
Fruit flies (Drosophila melanogaster) rely on their olfactory system to process environmental information. This information has to be transmitted without system-relevant loss by the olfactory system to deeper brain areas for learning. Here we study the role of several parameters of the fly's olfactory system and the environment and how they influence olfactory information transmission. We have designed an abstract model of the antennal lobe, the mushroom body and the inhibitory circuitry. Mutual information between the olfactory environment, simulated in terms of different odor concentrations, and a sub-population of intrinsic mushroom body neurons (Kenyon cells) was calculated to quantify the efficiency of information transmission. With this method we study, on the one hand, the effect of different connectivity rates between olfactory projection neurons and firing thresholds of Kenyon cells. On the other hand, we analyze the influence of inhibition on mutual information between environment and mushroom body. Our simulations show the expected linear relation between the antennal lobe-to-mushroom body connectivity rate and the Kenyon cell firing threshold that yields maximum mutual information for both low and high odor concentrations. However, contradicting everyday experience, high odor concentrations cause a drastic, and unrealistic, decrease in mutual information for all connectivity rates compared to low concentrations. But when inhibition of the mushroom body is included, mutual information remains at high levels independent of the other system parameters. This finding points to a pivotal role of inhibition in fly information processing, without which the system efficiency would be substantially reduced.
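The mutual-information measure used above can be computed directly from a joint probability table over stimuli (odor concentrations) and responses (Kenyon-cell activity patterns). The sketch below is a generic illustration of that calculation, not the paper's network model:

```python
import math

def mutual_information(joint):
    """Mutual information I(X;Y) in bits from a joint probability table.

    joint[x][y] holds P(X=x, Y=y); rows index stimuli (e.g., odor
    concentrations), columns index responses (e.g., activity patterns).
    """
    px = [sum(row) for row in joint]              # marginal P(X)
    py = [sum(col) for col in zip(*joint)]        # marginal P(Y)
    mi = 0.0
    for x, row in enumerate(joint):
        for y, pxy in enumerate(row):
            if pxy > 0.0:
                mi += pxy * math.log2(pxy / (px[x] * py[y]))
    return mi

# Perfectly informative channel: the response identifies the stimulus -> 1 bit.
print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))   # 1.0
# Uninformative channel: response independent of stimulus -> 0 bits.
print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))  # 0.0
```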
Maximum Likelihood, Consistency and Data Envelopment Analysis: A Statistical Foundation
Rajiv D. Banker
1993-01-01
This paper provides a formal statistical basis for the efficiency evaluation techniques of data envelopment analysis (DEA). DEA estimators of the best practice monotone increasing and concave production function are shown to be also maximum likelihood estimators if the deviation of actual output from the efficient output is regarded as a stochastic variable with a monotone decreasing probability density function. While the best practice frontier estimator is biased below the theoretical front...
Zheng, T.; Chen, J. M.
2016-12-01
The maximum carboxylation rate (Vcmax), despite its importance in terrestrial carbon cycle modelling, remains challenging to obtain at large scales. In this study, an attempt has been made to invert Vcmax using the gross primary productivity from sunlit leaves (GPPsun), on the physiological basis that the photosynthesis rate of leaves exposed to high solar radiation is mainly determined by Vcmax. Since GPPsun can be calculated through the sunlit light use efficiency (ɛsun), the main focus becomes the acquisition of ɛsun. Previous studies using site-level reflectance observations have shown the ability of the photochemical reflectance ratio (PRR, defined as the ratio between the reflectance from an effective band centered around 531 nm and a reference band) to track the variation of ɛsun for an evergreen coniferous stand and a deciduous broadleaf stand separately, and the potential of an NDVI-corrected PRR (NPRR, defined as the product of NDVI and PRR) to produce a general expression describing the NPRR-ɛsun relationship across different plant functional types. In this study, a significant correlation (R² = 0.67, p < 0.001) between the MODIS-derived NPRR and the site-level ɛsun calculated using flux data for four Canadian flux sites has been found for the year 2010. For validation purposes, the ɛsun values in 2009 for the same sites were calculated using the MODIS NPRR and the expression from 2010. The MODIS-derived ɛsun agrees well with the flux-calculated ɛsun (R² = 0.57, p < 0.001). The same expression was then applied over a 217 km × 193 km area in Saskatchewan, Canada to obtain ɛsun and thus GPPsun for the region during the growing season in 2008 (day 150 to day 260). The Vcmax for the region was inverted using the GPPsun and the result validated at three flux sites inside the area. The results show that the approach is able to obtain good estimates of Vcmax with R² = 0.68 and RMSE = 8.8 μmol m⁻² s⁻¹.
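The NPRR-ɛsun expression described above is an empirical linear fit scored by R². A minimal ordinary-least-squares sketch is given below; the NPRR and ɛsun numbers are hypothetical illustrations, not the study's measurements:

```python
def ols_fit(x, y):
    """Fit y ~ a*x + b by ordinary least squares; return (a, b, r_squared)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx                      # slope
    b = my - a * mx                    # intercept
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot

# Hypothetical NPRR vs. sunlit light-use-efficiency samples (illustrative only):
nprr = [0.10, 0.15, 0.20, 0.25, 0.30]
eps_sun = [0.21, 0.30, 0.42, 0.49, 0.62]
a, b, r2 = ols_fit(nprr, eps_sun)
print(a, b, r2)  # slope, intercept, and R-squared of the hypothetical fit
```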
Maximum entropy decomposition of quadrupole mass spectra
International Nuclear Information System (INIS)
Toussaint, U. von; Dose, V.; Golan, A.
2004-01-01
We present an information-theoretic method called generalized maximum entropy (GME) for decomposing mass spectra of gas mixtures from noisy measurements. In this GME approach to the noisy, underdetermined inverse problem, the joint entropies of concentration, cracking, and noise probabilities are maximized subject to the measured data. This provides a robust estimation of the unknown cracking patterns and the concentrations of the contributing molecules. The method is applied to mass spectroscopic data of hydrocarbons, and the estimates are compared with those obtained from a Bayesian approach. We show that the GME method is efficient and computationally fast
Guynn, Mark D.
2015-01-01
There are many trade-offs in aircraft design that ultimately impact the overall performance and characteristics of the final design. One well recognized and well understood trade-off is that of wing weight and aerodynamic efficiency. Higher aerodynamic efficiency can be obtained by increasing wing span, usually at the expense of higher wing weight. The proper balance of these two competing factors depends on the objectives of the design. For example, aerodynamic efficiency is preeminent for sailplanes and long slender wings result. Although the wing weight-drag trade is universally recognized, aerodynamic efficiency and structural efficiency are not usually considered in combination. This paper discusses the concept of "aero-structural efficiency," which combines weight and drag characteristics. A metric to quantify aero-structural efficiency, termed effective L/D, is then derived and tested with various scenarios. Effective L/D is found to be a practical and robust means to simultaneously characterize aerodynamic and structural efficiency in the context of aircraft design. The primary value of the effective L/D metric is as a means to better communicate the combined system level impacts of drag and structural weight.
Directory of Open Access Journals (Sweden)
F. Topsøe
2001-09-01
Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also here serve as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
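For the Mean Energy Model mentioned above, the entropy-maximizing distribution under a mean-energy constraint takes the Gibbs form p_i ∝ exp(-βE_i). The sketch below finds β by bisection on discrete states; it is a generic illustration of the principle, not the authors' game-theoretic machinery:

```python
import math

def maxent_mean_energy(energies, target_mean, lo=-50.0, hi=50.0, iters=200):
    """Maximum-entropy distribution over discrete states with a fixed mean
    energy. The solution has the Gibbs form p_i ~ exp(-beta * E_i); beta is
    found by bisection so that sum_i p_i * E_i matches target_mean."""
    def mean_at(beta):
        w = [math.exp(-beta * e) for e in energies]
        z = sum(w)
        return sum(wi * e for wi, e in zip(w, energies)) / z

    # mean_at is decreasing in beta, so bisect on the sign of the residual.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean_at(mid) > target_mean:
            lo = mid
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    w = [math.exp(-beta * e) for e in energies]
    z = sum(w)
    return [wi / z for wi in w]

p = maxent_mean_energy([0.0, 1.0, 2.0], target_mean=1.0)
print(p)  # ~ [1/3, 1/3, 1/3]: the uniform distribution already has mean 1
```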
Energy Technology Data Exchange (ETDEWEB)
Baudrit, Mathieu; Algora, Carlos [Instituto de Energia Solar, Universidad Politecnica de Madrid (Spain)
2010-02-15
A theoretical conversion efficiency of 36.4% at 1000 suns concentration has been determined by means of realistic models and an improved optimization routine. The starting point device was the recent world-record monolithic GaInP/GaAs dual-junction solar cell that was grown lattice matched on a GaAs substrate by MOVPE, which has an efficiency of 32.6% at 1000 suns. Using previously calibrated models developed at our institution, IES-UPM, together with Silvaco ATLAS TCAD software, we reproduced the characteristics of the world-record solar cell, and then determined a cell configuration that would yield greater efficiency by using an optimization routine to hone the doping concentration and the thickness of each layer. (Abstract Copyright [2010], Wiley Periodicals, Inc.)
Energy Technology Data Exchange (ETDEWEB)
Morillon Galvez, David [Comision Nacional para el Ahorro de Energia, Mexico, D. F. (Mexico)
1999-07-01
An analysis of the elements and factors that the architecture of buildings must have in order to be sustainable is presented, such as: a design suited to the environment; energy saving and efficient energy use; the use of alternative energy sources; and self-sufficiency of supply. In addition, a methodology for the natural air conditioning (bioclimatic architecture) of buildings is proposed, together with ideas for saving energy and using it efficiently, with the objective of contributing to the adequate use of building components (walls, ceilings, floors, etc.) which, when interacting with the environment, take advantage of it without deteriorating it, achieving energy-efficient designs.
International Nuclear Information System (INIS)
Berrichon, J.D.; Louahlia-Gualous, H.; Bandelier, Ph.; Bariteau, N.
2014-01-01
Highlights: • A theoretical model for condensation heat transfer at very low pressure is developed using only one iterative loop. • Experimental results on steam and steam-air condensation heat transfer at very low pressure are presented. • The developed model gives good predictions for local condensation heat transfer at low pressure. • A maximal deterioration of 50% in condensation heat transfer is obtained at low pressure for an air fraction of 4%. • A new correlation including the effect of a wavy film surface for steam condensation at low pressure is suggested. - Abstract: This paper presents an experimental investigation of the influence of very low pressure on local and average condensation heat transfer in a vertical tube. Furthermore, it develops an analytical study of the film condensation heat transfer coefficient in the presence of non-condensable gas inside a vertical tube. The condensate film thickness is calculated at each location in the tube using the mass and heat transfer analogy. The effects of interfacial shear stress and waves on the condensate film surface are included in the model. The comparative studies show that the present model predicts well the experimental data of Kuhn et al. [1] for local condensation of a steam-air mixture at high pressure. Different correlations defined for condensation heat transfer are evaluated. It is found that the correlations of Cavallini and Zecchin [2] and Shah [3] are the closest to the calculated local steam condensation heat transfer coefficient. The model gives satisfactory accuracy against the experimental results for condensation heat transfer at very low pressure. The mean deviation between the predictions of the theoretical model and the measurements for pure saturated vapor is 12%. Experimental data show that increasing the air fraction to 4% deteriorates condensation heat transfer at low pressure by up to 50%
International Nuclear Information System (INIS)
Hoogenboom, J. E.
2004-01-01
Although Russian roulette is applied very often in Monte Carlo calculations, not much literature exists on its quantitative influence on the variance and efficiency of a Monte Carlo calculation. Elaborating on the work of Lux and Koblinger using moment equations, new relevant equations are derived to calculate the variance of a Monte Carlo simulation using Russian roulette. To demonstrate its practical application, the theory is applied to a simplified transport model, resulting in explicit analytical expressions for the variance of a Monte Carlo calculation and for the expected number of collisions per history. From these expressions numerical results are shown and compared with actual Monte Carlo calculations, showing excellent agreement. By considering the number of collisions in a Monte Carlo calculation as a measure of the CPU time, the efficiency of Russian roulette can also be studied. This opens the way for further investigations, including optimization of Russian roulette parameters. (authors)
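The mechanics of Russian roulette can be illustrated with a toy weight-based history (our simplified example, not the paper's transport model): implicit capture decays the particle weight at each collision, and roulette terminates low-weight histories while boosting survivors' weights so that the expected score is preserved.

```python
import random

def walk_with_roulette(absorb_prob, weight_cutoff=0.1, survive_prob=0.5, rng=random):
    """One implicit-capture history. The weight decays by (1 - absorb_prob)
    per collision; once it falls below weight_cutoff, Russian roulette kills
    the history with probability 1 - survive_prob and divides survivors'
    weight by survive_prob, which keeps the estimator unbiased.
    Returns the total weight scored as absorbed over the history."""
    w, score = 1.0, 0.0
    while True:
        score += w * absorb_prob        # implicit capture: score expected absorption
        w *= 1.0 - absorb_prob
        if w < weight_cutoff:
            if rng.random() < survive_prob:
                w /= survive_prob       # survivor: restore the expected weight
            else:
                return score            # rouletted: history ends

random.seed(0)
n = 200_000
est = sum(walk_with_roulette(0.3) for _ in range(n)) / n
print(est)  # ~ 1.0: all weight is eventually absorbed; roulette leaves the mean unchanged
```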
Ganguly, Gaurab; Sultana, Munia; Paul, Ankan
2018-01-18
Molecular solar thermal storage (MOST) systems have been largely limited to three classes of molecular motifs: azobenzene, norbornadiene, and transition-metal-based fulvalene-tetracarbonyl systems. Photodimerization of anthracene has been known for a century; however, this photoprocess has not been successfully exploited for MOST purposes because of its poor energy storage. Using well-calibrated theoretical methods on a series of [n.n](9,10)bis-anthracene cyclophanes, we have shown that they can store solar energy in chemical bonds and release it as heat on demand under mild conditions. The storage is mainly attributed to the strain in the rings formed by the alkyl linkers upon photoexcitation. Our results demonstrate that the gravimetric energy storage densities for longer alkyl-chain linkers (n > 3) are comparable to those of the best-known candidates, while lacking some of the deleterious attributes of known systems, thus making the proposed molecules desirable targets for MOST applications.
International Nuclear Information System (INIS)
Enslin, J.H.R.
1990-01-01
A well-engineered renewable remote energy system, utilizing the principle of maximum power point tracking, can be more cost-effective, has higher reliability, and can improve the quality of life in remote areas. This paper reports that a high-efficiency power electronic converter, for converting the output voltage of a solar panel or wind generator to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximization of the output current in a battery charging regulator, using an optimized, inexpensive, microprocessor-based hill-climbing algorithm. Practical field measurements show that a minimum input source saving of 15% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost of these systems is achievable for relatively small Remote Area Power Supply systems. The advantages at larger temperature variations and larger power ratings are much greater. Other advantages include optimal sizing and system monitoring and control.
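The hill-climbing maximum power point tracker described above can be sketched in a few lines. The PV power curve and step size below are illustrative stand-ins, not the paper's measured panel:

```python
def po_mppt_step(v, p, v_prev, p_prev, step=0.1):
    """One perturb-and-observe (hill-climbing) MPPT update.

    If the last voltage perturbation increased output power, keep
    perturbing in the same direction; otherwise reverse direction.
    Returns the next operating voltage.
    """
    direction = 1.0 if v >= v_prev else -1.0
    if p < p_prev:
        direction = -direction
    return v + direction * step

def pv_power(v):
    """Toy PV power curve with its maximum power point at v = 17 V."""
    return max(0.0, -0.05 * (v - 17.0) ** 2 + 60.0)

# Start away from the maximum and let the controller climb.
v_prev, v = 10.0, 10.1
p_prev = pv_power(v_prev)
for _ in range(200):
    p = pv_power(v)
    v_next = po_mppt_step(v, p, v_prev, p_prev)
    v_prev, p_prev, v = v, p, v_next
```

The operating point climbs to the maximum power point and then oscillates around it by one step, which is the characteristic behavior of hill-climbing trackers.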
International Nuclear Information System (INIS)
Piyush Sabharwall; Fred Gunnerson; Akira Tokuhiro; Vivek Utgiker; Kevan Weaver; Steven Sherman
2007-01-01
The work reported here is a preliminary analysis of two-phase thermosyphon heat transfer performance with various alkali metals. A thermosyphon is a device for transporting heat from one point to another with quite extraordinary properties. Heat transport occurs via evaporation and condensation, and the heat transport fluid is re-circulated by gravitational force. With this mode of heat transfer, the thermosyphon has the capability to transport heat at high rates over appreciable distances, virtually isothermally and without any requirement for external pumping devices. For process heat, intermediate heat exchangers (IHX) are required to transfer heat from the NGNP to the hydrogen plant in the most efficient way possible. The production of power at higher efficiency using the Brayton cycle, and hydrogen production, require both heat at higher temperatures (up to 1000 C) and high-effectiveness compact heat exchangers to transfer heat to either the power or process cycle. The purpose of selecting a compact heat exchanger is to maximize the heat transfer surface area per volume of heat exchanger; this has the benefit of reducing heat exchanger size and heat losses. The IHX design requirements are governed by the allowable temperature drop between the outlet of the NGNP (900 C, based on the current capabilities of the NGNP) and the temperatures in the hydrogen production plant. Spiral heat exchangers (SHEs) have superior heat transfer characteristics and are less susceptible to fouling. Further, heat losses to the surroundings are minimized because of their compact configuration. SHEs have never been examined for phase-change heat transfer applications. The research presented provides useful information for thermosyphon and spiral heat exchanger design.
Shoyama, Taiji; Yoshioka, Yoshio
To improve NO removal performance in the silent discharge process, we investigated the influence of physical parameters such as current density, channel radius, and pulse duration of a single micro discharge under constant reduced electric field strength. The influence of the micro discharge occurrence locations was also discussed. To analyze the NO removal process, we assumed that pulsed micro discharges occur repeatedly at the same location in static gas and that the chemical reactions induced by a micro discharge form many radicals, which react with pollutants and by-products. We conclude that lower current density, smaller discharge radius, and shorter discharge duration improve NO removal efficiency. These results also mean that lower discharge energy per micro discharge and a larger number of parallel micro discharges increase NO removal performance. Therefore, making the area of one micro discharge small is a desirable way to improve NO removal performance, and a glow-like discharge mode might be more effective than a streamer-like discharge mode. Next, using a two-dimensional model that considers the influence of gas flow, we found that repeated micro discharges at different positions are very effective in increasing De-NOx performance. The reason is that the reaction NO2 + O → NO + O2 and ozone dissociation reactions are suppressed by the movement of the location of the micro discharges.
Shih, Ko-Han; Chang, Yin-Jung
2018-01-01
Solar energy conversion via internal photoemission (IPE) across a planar p-type Schottky junction is quantified for aluminum (Al) and copper (Cu) in the framework of direct transitions with non-constant matrix elements. Transition probabilities and k-resolved group velocities are obtained based on pseudo-wavefunction expansions and realistic band structures using the pseudopotential method. The k-resolved number of direct transitions, hole photocurrent density, quantum yield (QY), and power conversion efficiency (PCE) under AM1.5G solar irradiance are subsequently calculated and analyzed. For Al, the parabolic and "parallel-band" effect along the U-W-K path significantly enhances the transition rate, with final hole energies mainly within 1.41 eV below the Fermi energy. For Cu, d-state hot holes mostly generated near the upper edge of the 3d bands dominate the hole photocurrent and are weakly (strongly) dependent on the barrier height (metal film thickness). Hot holes produced in the 4s band behave oppositely to their d-state counterparts. Non-constant matrix elements are shown to be necessary for calculations of transitions due to time-harmonic perturbation in Cu. Compared with Cu, Al-based IPE in a planar p-type Schottky junction shows the highest PCE (QY), up to about 0.2673% (5.2410%) at ΦB = 0.95 eV (0.5 eV) and a film thickness of 11 nm (20 nm). It is predicted that metals with relatively dispersionless d bands (such as Cu) in most cases do not outperform metals with photon-accessible parallel bands (such as Al) in photon energy conversion using a planar p-type Schottky junction.
Wang, Zhiqiang; Ji, Mingfei; Deng, Jianming; Milne, Richard I; Ran, Jinzhi; Zhang, Qiang; Fan, Zhexuan; Zhang, Xiaowei; Li, Jiangtao; Huang, Heng; Cheng, Dongliang; Niklas, Karl J
2015-06-01
Simultaneous and accurate measurements of whole-plant instantaneous carbon-use efficiency (ICUE) and annual total carbon-use efficiency (TCUE) are difficult to make, especially for trees. One usually estimates ICUE based on the net photosynthetic rate or the assumed proportional relationship between growth efficiency and ICUE. However, thus far, protocols for easily estimating annual TCUE remain problematic. Here, we present a theoretical framework (based on metabolic scaling theory) to predict whole-plant annual TCUE by directly measuring instantaneous net photosynthetic and respiratory rates. This framework makes four predictions, which were evaluated empirically using seedlings of nine Picea taxa: (i) the flux rates of CO2 and energy will scale isometrically as a function of plant size, (ii) whole-plant net and gross photosynthetic rates and the net primary productivity will scale isometrically with respect to total leaf mass, (iii) these scaling relationships will be independent of ambient temperature and humidity fluctuations (as measured within an experimental chamber) regardless of the instantaneous net photosynthetic rate, dark respiratory rate, or overall growth rate, and (iv) TCUE will scale isometrically with respect to the instantaneous efficiency of carbon use (i.e., the latter can be used to predict the former) across diverse species. These predictions were experimentally verified. We also found that the ranking of the nine taxa based on net photosynthetic rates differed from the ranking based on either ICUE or TCUE. In addition, the absolute values of ICUE and TCUE differed significantly among the nine taxa, with both ICUE and temperature-corrected ICUE being highest for Picea abies and lowest for Picea schrenkiana. Nevertheless, the data are consistent with the predictions of our general theoretical framework, which can be used to assess the annual carbon-use efficiency of different species at the level of an individual plant based on simple, direct measurements.
International Nuclear Information System (INIS)
Kiran Kumar, J.K.; Sharma, S.; Chakraborty, D.; Singh, B.; Bhattacharaya, A.; Mittal, B.R.; Gayana, S.
2010-01-01
Full text: A generator is constructed on the principle of the decay-growth relationship between a long-lived parent radionuclide and a short-lived daughter radionuclide. The difference in chemical properties of the daughter and parent radionuclides enables efficient separation of the two. Aim and Objectives: The present study was designed to calculate the elution efficiency of the generator using the traditional formula-based method and a free web-based software method. Materials and Methods: A 99Mo/99mTc MON.TEK (Monrol, Gebze) generator, a sterile 0.9% NaCl vial, and a vacuum vial in a lead shield were used for the elution. A new 99Mo/99mTc generator (calibrated activity 30 GBq), calibrated for Thursday, was received on Monday morning in our department. The generator was placed behind lead bricks in a fume hood. The rubber plugs of both the vacuum and 0.9% NaCl vials were wiped with 70% isopropyl alcohol swabs. The vacuum vial, placed inside the lead shield, was inserted in the vacuum position while the 10 ml NaCl vial was inserted in the second slot. After 1-2 min the vacuum vial was removed without moving the emptied 0.9% NaCl vial. The vacuum slot was covered with another sterile vial to maintain sterility. The RAC was measured in a calibrated dose calibrator (Capintec CRC-15). The elution efficiency was calculated theoretically and using free web-based software (Apache web server, www.apache.org, and PHP, www.php.net) on the web site of the Italian Association of Nuclear Medicine and Molecular Imaging (www.aimn.it). Results: The mean elution efficiency calculated by the theoretical method was 93.95% ± 0.61. The mean elution efficiency as calculated by the software was 92.85% ± 0.89. There was no statistical difference between the two methods. Conclusion: The free web-based software provides precise and reproducible results and thus saves time and mathematical calculation steps. This enables a rational use of available activity and also enables a selection of the type and number of
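For reference, the "traditional formula based method" divides the eluted 99mTc activity by the activity theoretically available from 99Mo decay-growth (the Bateman equation for a parent-daughter pair). A minimal sketch; the half-lives and the ~87.5% branching fraction are standard physical constants, while the activities in the example are illustrative, not the study's measurements:

```python
import math

MO99_HALFLIFE_H = 65.94   # 99Mo half-life, hours
TC99M_HALFLIFE_H = 6.01   # 99mTc half-life, hours
BRANCHING = 0.875         # fraction of 99Mo decays that feed 99mTc

def theoretical_tc99m_activity(a_mo_0, t_h):
    """99mTc activity (same units as a_mo_0) grown in t_h hours since
    the last elution, from the Bateman equation."""
    lam_mo = math.log(2) / MO99_HALFLIFE_H
    lam_tc = math.log(2) / TC99M_HALFLIFE_H
    return (BRANCHING * a_mo_0 * lam_tc / (lam_tc - lam_mo)
            * (math.exp(-lam_mo * t_h) - math.exp(-lam_tc * t_h)))

def elution_efficiency(measured_activity, a_mo_0, t_h):
    """Elution efficiency in percent: eluted / theoretically available."""
    return 100.0 * measured_activity / theoretical_tc99m_activity(a_mo_0, t_h)

# Illustrative example: 30 GBq of 99Mo, 24 h of ingrowth, 19 GBq eluted.
theory = theoretical_tc99m_activity(30.0, 24.0)
eff = elution_efficiency(19.0, 30.0, 24.0)
```

With these numbers roughly 20.6 GBq of 99mTc is theoretically available, giving an efficiency in the low nineties, i.e. the same order as the study's reported values.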
Dragu, Sebastian - Mihai
2014-01-01
Nowadays, the use of the Internet and smart technology on a daily basis is not just about being faster and more efficient in communication. It has become a way of living that has changed the way people think, read, play, shop, spend free time, meet people, etc. Having many choices and greater access to a large online information pool, people have become diligent researchers, always considering what a good investment is. Since there are many different products offering more or less the same functional benefits, a de...
Yang, Lei; Lindblad, Rebecka; Gabrielsson, Erik; Boschloo, Gerrit; Rensmo, Håkan; Sun, Licheng; Hagfeldt, Anders; Edvinsson, Tomas; Johansson, Erik M J
2018-04-11
4-tert-Butylpyridine (t-BP) is commonly used in solid-state dye-sensitized solar cells (ssDSSCs) to increase the photovoltaic performance. In this report, the mechanism by which t-BP functions as a favorable additive is investigated comprehensively. ssDSSCs were prepared with different concentrations of t-BP; a clear increase in efficiency was observed up to an optimal concentration, beyond which the efficiency decreased. The energy level alignment in the complete devices was measured using hard X-ray photoelectron spectroscopy (HAXPES). The results show that the energy levels of titanium dioxide shift further away from the energy levels of spiro-OMeTAD as the t-BP concentration is increased, which explains the higher photovoltage obtained in the devices with higher t-BP concentration. In addition, the electron lifetime was measured for the devices and found to increase when t-BP was added, which can be explained by a recombination-blocking effect at the surface of TiO2. The results from the HAXPES measurements agree with those obtained from density functional theory calculations and give an understanding of the mechanism for the improvement, which is an important step for the future development of solar cells including t-BP.
International Nuclear Information System (INIS)
Mazzanti, Massimiliano; Zoboli, Roberto
2009-01-01
In this paper we test an adapted EKC hypothesis to verify the relationship between 'environmental efficiency' (namely emissions per unit of value added) and labour productivity (value added per employee). We exploit NAMEA data on Italy for 29 sector branches and 6 categories of air emissions for the period 1991-2001. We employ data on capital stock and trade openness to test the robustness of our results. On the basis of the theoretical and empirical analyses focusing on innovation, firm performance and environmental externalities, we would expect a positive correlation between environmental efficiency and labour productivity - a negative correlation between the emissions intensity of value added and labour productivity - which departs from the conventional mainstream view. The hypothesis tested is a critical one within the longstanding debate on the potential trade-off or complementarity between environmental preservation and economic performance, which is strictly associated with the role of technological innovation. We find that for most air emission categories there is a positive relationship between labour productivity and environmental efficiency. Labour productivity dynamics, then, seem to be complementary to a decreasing emissions intensity in the production process. Taking a disaggregated sector perspective, we show that the macro-aggregate evidence is driven by sector dynamics in a non-homogeneous way across pollutants. Services always tend to show a 'complementary' relationship, while industry seems to be associated with inverted-U-shaped dynamics for greenhouse gases and nitrogen oxides. This is in line with our expectations. In any case, EKC shapes appear to drive such productivity links towards complementarity. The extent to which this evidence derives from endogenous market forces, industrial and structural change, and policy effects is discussed by taking an evolutionary perspective on innovation and by referring to impure public goods arguments
Approximate maximum parsimony and ancestral maximum likelihood.
Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat
2010-01-01
We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.
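For context, once a tree topology is fixed, the MP score of a single character can be computed exactly in linear time by Fitch's small-parsimony algorithm; the NP-hard part approximated above is the search over topologies. A minimal sketch on a hypothetical four-taxon tree:

```python
def fitch_score(tree, leaf_states, root="root"):
    """Small-parsimony (Fitch) score of one character on a rooted binary tree.

    `tree` maps each internal node to its two children; leaves appear only
    as children and carry states in `leaf_states`. Returns the candidate
    state set at the root and the number of mutations implied. Maximum
    parsimony minimizes this count over tree topologies.
    """
    def visit(node):
        if node in leaf_states:
            return {leaf_states[node]}, 0
        left, right = tree[node]
        s1, c1 = visit(left)
        s2, c2 = visit(right)
        inter = s1 & s2
        if inter:                       # children agree: no extra mutation
            return inter, c1 + c2
        return s1 | s2, c1 + c2 + 1     # disagreement costs one mutation

    return visit(root)

# Four taxa, one aligned character each (illustrative data):
tree = {"root": ("x", "y"), "x": ("A", "B"), "y": ("C", "D")}
states = {"A": "G", "B": "G", "C": "T", "D": "G"}
root_set, mutations = fitch_score(tree, states)
```

Here a single T-to-G change on the branch to taxon C explains the data, so the score is 1.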
International Nuclear Information System (INIS)
Anon.
1979-01-01
This chapter presents a historical overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed.
Directory of Open Access Journals (Sweden)
Volodymyr Bazyliuk
2016-11-01
Full Text Available The purpose of the paper is the theoretical study and analysis of the basic methodological approaches to assessing the effectiveness of the transformation of key business processes in PPA (publishing and printing activity) in the region, in order to choose the best option. Methodology. An overview of the main methods for assessing the effectiveness of business processes is provided: EVA (Economic Value Added), ABC (Activity-Based Costing), Tableau de bord, and BSC (Balanced Scorecard). In order to formalize the integrated assessment of the effectiveness of business processes in publishing and printing activity in the region, it is suggested to apply the methodological apparatus of fuzzy sets. Statistical analysis, comparison, and synthesis are used to study the efficiency of the transformation of the key business processes in PPA in the region. Results. A review and analysis of the most common methods for evaluating the effectiveness of the transformation of key business processes were conducted; the basic advantages and disadvantages of each of the proposed methods in the light of PPA were studied. It was shown that a single business process requires a scorecard that is specific to it, whereas the completeness of its analysis depends on the kind of business process: basic, developmental, managing, or providing. An approach to the formalization of the integrated assessment of the effectiveness of business processes in PPA in the region, based on the theory of fuzzy sets, was formulated. Practical significance. The mathematical formulation of the problem of an integrated assessment of the efficiency of a business process for each of the possible options for its implementation was developed, and an algorithm for assessing the effectiveness of business processes in PPA in the region was generated using the apparatus of fuzzy sets. Value/originality. Implementing the
Energy Technology Data Exchange (ETDEWEB)
Hernández-Salcedo, P.G.; Amézaga-Madrid, P., E-mail: patricia.amezaga@cimav.edu.mx; Monárrez-Cordero, B.E.; Antúnez-Flores, W.; Pizá-Ruiz, P.; Leyva-Porras, C.; Ornelas-Gutiérrez, C.; Miki-Yoshida, M.
2015-09-15
The development and optimization of methodologies to generate magnetite nanoparticles is currently a topic of innovation. For a desired application such as arsenic removal from waste water, the generation of these nanostructures with specific microstructural properties is determinant. Therefore, it is necessary to understand the phenomena during the nanoparticle formation process. Thus, in this work we report the influence of the synthesis parameters of the AACVD technique on the formation of magnetite nanoparticles. The parameters were: (1) synthesis temperature, (2) tubular reactor diameter, (3) concentration of the precursor solution and type of solvent, (4) carrier gas flow, and (5) solvent type in the collection process. The effect of these synthesis parameters on the morphology, size, and microstructure is discussed in detail and related to the mechanism of formation of the particles. Theoretical simulations were performed on two of these parameters (1 and 4). The microstructure and surface morphology of the different nanostructures obtained were characterized by field emission scanning electron and transmission electron microscopy. Subsequently, two materials were selected for further microstructural analysis. Finally, arsenic adsorption was evaluated to determine the removal efficiency of the two materials. A major contribution of this work is the calculation of the number of spherical particles formed from a single drop of precursor solution. This calculation matched the value found experimentally.
Namuangruk, Supawadee; Sirithip, Kanokkorn; Rattanatwan, Rattanawelee; Keawin, Tinnagon; Kungwan, Nawee; Sudyodsuk, Taweesak; Promarak, Vinich; Surakhot, Yaowarat; Jungsuttiwong, Siriporn
2014-06-28
The charge transfer effect of different meso-substituted linkages on porphyrin analogue 1 (A1, B1 and C1) was theoretically investigated using density functional theory (DFT) and time-dependent DFT (TDDFT) calculations. The calculated geometry parameters and natural bond orbital analysis reveal that the twisted conformation between the porphyrin macrocycle and the meso-substituted linkages blocks the conjugation of the backbone, and the frontier molecular orbital plot shows that intramolecular charge transfer in A1, B1 and C1 hardly takes place. In an attempt to improve the photoinduced intramolecular charge transfer ability of the meso-linked zinc porphyrin sensitizer, a strong electron-withdrawing group (CN) was introduced into the anchoring group of analogue 1, forming analogue 2 (A2, B2 and C2). The density difference plots of A2, B2 and C2 show that the charge transfer properties are dramatically improved. The electron injection process was studied using TDDFT; a direct charge-transfer transition takes place in the A2-(TiO2)38 interacting system. Our results strongly indicate that introducing electron-withdrawing groups into the acceptor part of porphyrin dyes can fine-tune the effective conjugation length of the π-spacer and improve intramolecular charge transfer properties, consequently inducing electron injection from the anchoring group of the porphyrin dye to the (TiO2)38 surface, which may improve the conversion efficiency of DSSCs. Our calculated results provide valuable information and a promising outlook for computation-aided sensitizer design with anticipated good properties for further experimental synthesis.
Energy Technology Data Exchange (ETDEWEB)
Sefkow, Adam B.; Bennett, Guy R.
2010-09-01
Under the auspices of the Science of Extreme Environments LDRD program, a <2 year theoretical- and computational-physics study was performed (LDRD Project 130805) by Guy R. Bennett (formerly in Center-01600) and Adam B. Sefkow (Center-01600) to investigate novel target designs by which a short-pulse, PW-class beam could create a brighter Kα x-ray source than simple, direct laser irradiation of a flat foil (Direct Foil Irradiation, DFI). The computational studies - which are still ongoing at this writing - were performed primarily on the RedStorm supercomputer at Sandia National Laboratories' Albuquerque site. The motivation for a higher-efficiency Kα emitter was very clear: as the backlighter flux for any x-ray imaging technique on the Z accelerator increases, the signal-to-noise and signal-to-background ratios improve. This ultimately allows the imaging system to reach its full quantitative potential as a diagnostic. Depending on the particular application/experiment, this would imply, for example, that the system would reach its full design spatial resolution and thus the capability to see features that might otherwise be indiscernible with a traditional DFI-like x-ray source. This LDRD began in FY09 and ended in FY10.
International Nuclear Information System (INIS)
Anderson, D.C.
1994-11-01
Activities associated with oil and gas development under the Maximum Efficiency Rate (MER) from 1975 to 2025 will disturb approximately 3,354 acres. Based on 1976 aerial photographs and using a dot grid methodology, the amount of land disturbed prior to MER is estimated to be 3,603 acres. Disturbances on Naval Petroleum Reserve No. 1 (NPR-1) were mapped using 1988 aerial photography and a geographical information system. A total of 6,079 acres were classified as disturbed as of June 1988. The overall objective of this document is to provide specific information relating to the on-site habitat restoration program at NPRC. The specific objectives, which relate to the terms and conditions that must be met by DOE as a means of protecting the San Joaquin kit fox from incidental take, are to: (1) determine the amount and location of disturbed lands on NPR-1 and the number of acres disturbed as a result of MER activities, (2) develop a long-term (10 year) program to restore equivalent on-site acreage to that lost from prior project-related actions, and (3) examine alternative means to offset kit fox habitat loss.
Maximum Acceleration Recording Circuit
Bozeman, Richard J., Jr.
1995-01-01
Coarsely digitized maximum levels recorded in blown fuses. Circuit feeds power to accelerometer and makes nonvolatile record of maximum level to which output of accelerometer rises during measurement interval. In comparison with inertia-type single-preset-trip-point mechanical maximum-acceleration-recording devices, circuit weighs less, occupies less space, and records accelerations within narrower bands of uncertainty. In comparison with prior electronic data-acquisition systems designed for same purpose, circuit simpler, less bulky, consumes less power, costs less, and avoids storage and analysis of data recorded in magnetic or electronic memory devices. Circuit used, for example, to record accelerations to which commodities subjected during transportation on trucks.
Maximum entropy and Bayesian methods
International Nuclear Information System (INIS)
Smith, C.R.; Erickson, G.J.; Neudorfer, P.O.
1992-01-01
Bayesian probability theory and Maximum Entropy methods are at the core of a new view of scientific inference. These 'new' ideas, along with the revolution in computational methods afforded by modern computers allow astronomers, electrical engineers, image processors of any type, NMR chemists and physicists, and anyone at all who has to deal with incomplete and noisy data, to take advantage of methods that, in the past, have been applied only in some areas of theoretical physics. The title workshops have been the focus of a group of researchers from many different fields, and this diversity is evident in this book. There are tutorial and theoretical papers, and applications in a very wide variety of fields. Almost any instance of dealing with incomplete and noisy data can be usefully treated by these methods, and many areas of theoretical research are being enhanced by the thoughtful application of Bayes' theorem. Contributions contained in this volume present a state-of-the-art overview that will be influential and useful for many years to come
Neutron spectra unfolding with maximum entropy and maximum likelihood
International Nuclear Information System (INIS)
Itoh, Shikoh; Tsunoda, Toshiharu
1989-01-01
A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection and always yields a positive solution over the whole energy range. Moreover, the theory unifies the overdetermined and underdetermined problems. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, is removed by virtue of the principle. An approximate expression for the covariance matrix of the resultant spectra is also presented. An efficient algorithm to solve the nonlinear system that appears in the present study has been established. Results of computer simulation showed the effectiveness of the present theory. (author)
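The positivity property emphasized above also holds for the plain maximum-likelihood side of such unfolding problems: the multiplicative EM (MLEM) update for Poisson counts keeps every bin positive by construction. The sketch below illustrates that update on a hypothetical two-bin response matrix; the entropy prior of the paper is not included, so this is only the likelihood half of the combined method:

```python
def mlem_unfold(response, counts, n_iter=200):
    """Maximum-likelihood (EM) unfolding of Poisson counts.

    response[i][j] = probability that an event in energy bin j produces a
    count in detector channel i. The multiplicative update keeps the
    solution positive in every bin at every iteration.
    """
    n_ch, n_bins = len(response), len(response[0])
    phi = [1.0] * n_bins                      # positive initial guess
    sens = [sum(response[i][j] for i in range(n_ch)) for j in range(n_bins)]
    for _ in range(n_iter):
        pred = [sum(response[i][j] * phi[j] for j in range(n_bins))
                for i in range(n_ch)]
        phi = [phi[j] / sens[j]
               * sum(response[i][j] * counts[i] / pred[i] for i in range(n_ch))
               for j in range(n_bins)]
    return phi

# Two-bin toy problem: with consistent (noise-free) counts, the EM fixed
# point is the exact spectrum.
R = [[0.8, 0.2],
     [0.2, 0.8]]
true_phi = [10.0, 40.0]
counts = [0.8 * 10 + 0.2 * 40, 0.2 * 10 + 0.8 * 40]  # = [16.0, 34.0]
est = mlem_unfold(R, counts)
```

With noisy counts the iteration would converge to the ML estimate rather than the true spectrum, which is where a regularizing prior such as maximum entropy earns its keep.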
Maximum Quantum Entropy Method
Sim, Jae-Hoon; Han, Myung Joon
2018-01-01
Maximum entropy method for analytic continuation is extended by introducing quantum relative entropy. This new method is formulated in terms of matrix-valued functions and is therefore invariant under arbitrary unitary transformations of the input matrix. As a result, the continuation of off-diagonal elements becomes straightforward. Without introducing any further ambiguity, the Bayesian probabilistic interpretation is maintained just as in the conventional maximum entropy method. The applications o...
International Nuclear Information System (INIS)
Biondi, L.
1998-01-01
The charge for a service is a supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While no problem arises with the former, the issue is more complicated for the latter, and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some.
Kirillova, Marina V; Kuznetsov, Maxim L; Reis, Patrícia M; da Silva, José A L; da Silva, João J R Fraústo; Pombeiro, Armando J L
2007-08-29
Vanadium(IV or V) complexes with N,O- or O,O-ligands, i.e., [VO{N(CH2CH2O)3}], Ca[V(HIDPA)2] (synthetic amavadine), Ca[V(HIDA)2], or [Bu4N]2[V(HIDA)2] [HIDPA, HIDA = basic form of 2,2'-(hydroxyimino)dipropionic or -diacetic acid, respectively], [VO(CF3SO3)2], Ba[VO(nta)(H2O)]2 (nta = nitrilotriacetate), [VO(ada)(H2O)] (ada = N-2-acetamidoiminodiacetate), [VO(Hheida)(H2O)] (Hheida = 2-hydroxyethyliminodiacetate), [VO(bicine)] [bicine = basic form of N,N-bis(2-hydroxyethyl)glycine], and [VO(dipic)(OCH2CH3)] (dipic = pyridine-2,6-dicarboxylate), are catalyst precursors for the efficient single-pot conversion of methane into acetic acid, in trifluoroacetic acid (TFA) under moderate conditions, using peroxodisulfate as oxidant. The effects of various factors on the yields and TONs are reported. TFA acts as a carbonylating agent, and CO is an inhibitor for some systems, although for others there is an optimum CO pressure. The most effective catalysts (such as amavadine) bear triethanolaminate or (hydroxyimino)dicarboxylates and lead, in a single batch, to CH3COOH yields > 50% (based on CH4) or remarkably high TONs up to 5.6 × 10³. The catalyst can remain active upon multiple recycling of its solution. Carboxylation proceeds via free radical mechanisms (CH3• can be trapped by CBrCl3), and theoretical calculations disclose a particularly favorable process involving the sequential formation of CH3•, CH3CO•, and CH3COO• which, upon H-abstraction (from TFA or CH4), yields acetic acid. The CH3COO• radical is formed by oxygenation of CH3CO• by a peroxo-V complex via a V(η1-OOC(O)CH3) intermediate. Less favorable processes involve the oxidation of CH3CO• by the protonated (hydroperoxo) form of that peroxo-V complex or by peroxodisulfate. The calculations also indicate that (i) peroxodisulfate behaves as a source of sulfate radicals which are methane H-abstractors, as a peroxidative and oxidizing agent for vanadium, and as an oxidizing and coupling agent for CH3CO•, and that (ii) TFA is
Maximum Work of Free-Piston Stirling Engine Generators
Kojima, Shinji
2017-04-01
Using the method of adjoint equations described in Ref. [1], we have calculated the maximum thermal efficiencies that are theoretically attainable by free-piston Stirling and Carnot engine generators by considering the work loss due to friction and Joule heat. The net work done by the Carnot cycle is negative even when the duration of heat addition is optimized to give the maximum amount of heat addition, which is the same situation for the Brayton cycle described in our previous paper. For the Stirling cycle, the net work done is positive, and the thermal efficiency is greater than that of the Otto cycle described in our previous paper by a factor of about 2.7-1.4 for compression ratios of 5-30. The Stirling cycle is much better than the Otto, Brayton, and Carnot cycles. We have found that the optimized piston trajectories of the isothermal, isobaric, and adiabatic processes are the same when the compression ratio and the maximum volume of the same working fluid of the three processes are the same, which has facilitated the present analysis because the optimized piston trajectories of the Carnot and Stirling cycles are the same as those of the Brayton and Otto cycles, respectively.
Maximum likely scale estimation
DEFF Research Database (Denmark)
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...
Robust Maximum Association Estimators
A. Alfons (Andreas); C. Croux (Christophe); P. Filzmoser (Peter)
2017-01-01
The maximum association between two multivariate variables X and Y is defined as the maximal value that a bivariate association measure between one-dimensional projections α'X and β'Y can attain. Taking the Pearson correlation as projection index results in the first canonical correlation
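With the Pearson correlation as projection index, the maximum association reduces to the first canonical correlation, which can be computed as the largest singular value of the whitened cross-covariance matrix. A minimal, non-robust numpy sketch on simulated data (the data-generating setup is an assumption for illustration, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: X and Y share one latent component z.
n = 2000
z = rng.standard_normal(n)
X = np.column_stack([z + 0.5 * rng.standard_normal(n),
                     rng.standard_normal(n)])
Y = np.column_stack([rng.standard_normal(n),
                     z + 0.5 * rng.standard_normal(n)])

def first_canonical_correlation(X, Y):
    """Largest Pearson correlation attainable between projections a'X and b'Y."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Sxx = Xc.T @ Xc / len(X)
    Syy = Yc.T @ Yc / len(Y)
    Sxy = Xc.T @ Yc / len(X)
    # Whiten both blocks via Cholesky factors, then take the top singular value.
    Wx = np.linalg.inv(np.linalg.cholesky(Sxx))
    Wy = np.linalg.inv(np.linalg.cholesky(Syy))
    M = Wx @ Sxy @ Wy.T
    return np.linalg.svd(M, compute_uv=False)[0]

rho = first_canonical_correlation(X, Y)
print(f"first canonical correlation: {rho:.2f}")
```

Here the population value is 0.8 (corr of z + noise with z + noise), so the estimate should land close to that; the robust estimators of the paper replace the Pearson-based covariances with robust counterparts.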
Maximum speed of dewetting on a fiber
Chan, Tak Shing; Gueudre, Thomas; Snoeijer, Jacobus Hendrikus
2011-01-01
A solid object can be coated by a nonwetting liquid since a receding contact line cannot exceed a critical speed. We theoretically investigate this forced wetting transition for axisymmetric menisci on fibers of varying radii. First, we use a matched asymptotic expansion and derive the maximum speed
International Nuclear Information System (INIS)
Schall, Dominik L.; Wolf, Menas; Mohnen, Alwine
2016-01-01
Increasing energy efficiency is a cornerstone of policy initiatives to tackle climate change and increase corporate sustainability. Convincing people to drive more fuel-efficiently (“eco-driving”) is often an integral part of these approaches, especially in the transport sector. But there is a lack of studies on the long-term persistence and potential interaction of the effects of incentives and training on energy conservation behavior in general and eco-driving behavior in particular. We address this gap with a twelve-month natural field experiment in a logistics company, analyzing the time-dependent and potentially interacting effects of rewards and theoretical training for eco-driving on fuel consumption in a real-world setting. We find an immediate reduction of fuel consumption following the introduction of a non-monetary reward and an attenuation of this effect over time. Theoretical eco-driving training shows no effect, neither short-term nor long-term, highlighting the often neglected necessity to include practical training elements. Contrary to common assumptions, the interaction of incentives and theoretical training does not show an additional reduction effect. Our results demonstrate the difficulty of changing ingrained behavior and habits, and underline the need for a careful selection and combination of interventions. Policy implications for public and private actors are discussed. - Highlights: • Natural field experiment on training and incentives for fuel-efficient driving. • Focus on long-term and interaction effects over twelve months. • Immediate reduction effect of non-monetary reward that attenuates over time. • Theoretical eco-driving training shows no effect, neither short-term nor long-term. • Interaction of incentives and training shows no additional reduction effect.
International Nuclear Information System (INIS)
Ponman, T.J.
1984-01-01
For some years now two different expressions have been in use for maximum entropy image restoration and there has been some controversy over which one is appropriate for a given problem. Here two further entropies are presented and it is argued that there is no single correct algorithm. The properties of the four different methods are compared using simple 1D simulations with a view to showing how they can be used together to gain as much information as possible about the original object. (orig.)
Clark, P.U.; Dyke, A.S.; Shakun, J.D.; Carlson, A.E.; Clark, J.; Wohlfarth, B.; Mitrovica, J.X.; Hostetler, S.W.; McCabe, A.M.
2009-01-01
We used 5704 14C, 10Be, and 3He ages that span the interval from 10,000 to 50,000 years ago (10 to 50 ka) to constrain the timing of the Last Glacial Maximum (LGM) in terms of global ice-sheet and mountain-glacier extent. Growth of the ice sheets to their maximum positions occurred between 33.0 and 26.5 ka in response to climate forcing from decreases in northern summer insolation, tropical Pacific sea surface temperatures, and atmospheric CO2. Nearly all ice sheets were at their LGM positions from 26.5 ka to 19 to 20 ka, corresponding to minima in these forcings. The onset of Northern Hemisphere deglaciation 19 to 20 ka was induced by an increase in northern summer insolation, providing the source for an abrupt rise in sea level. The onset of deglaciation of the West Antarctic Ice Sheet occurred between 14 and 15 ka, consistent with evidence that this was the primary source for an abrupt rise in sea level ~14.5 ka.
Fuel Application Efficiency in Ideal Cycle of Gas Turbine Plant with Isobaric Heat Supply
Directory of Open Access Journals (Sweden)
A. P. Nesenchuk
2013-01-01
Full Text Available. The paper reveals the expediency of using, in the future, fuels with a maximum value of Qнр∑Vi and a minimum theoretical burning temperature, in order to obtain maximum efficiency of the ideal cycle in a GTP with isobaric heat supply.
Probable maximum flood control
International Nuclear Information System (INIS)
DeGabriele, C.E.; Wu, C.L.
1991-11-01
This study proposes preliminary design concepts to protect the waste-handling facilities and all shaft and ramp entries to the underground from the probable maximum flood (PMF) in the current design configuration for the proposed Nevada Nuclear Waste Storage Investigation (NNWSI) repository. Flood protection provisions were furnished by the United States Bureau of Reclamation (USBR) or developed from USBR data. Proposed flood protection provisions include site grading, drainage channels, and diversion dikes. Figures are provided to show these proposed flood protection provisions at each area investigated. These areas are the central surface facilities (including the waste-handling building and waste treatment building), tuff ramp portal, waste ramp portal, men-and-materials shaft, emplacement exhaust shaft, and exploratory shafts facility
Introduction to maximum entropy
International Nuclear Information System (INIS)
Sivia, D.S.
1988-01-01
The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. We review the need for such methods in data analysis and show, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. We conclude with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab
International Nuclear Information System (INIS)
Rust, D.M.
1984-01-01
The successful retrieval and repair of the Solar Maximum Mission (SMM) satellite by Shuttle astronauts in April 1984 permitted continuance of solar flare observations that began in 1980. The SMM carries a soft X ray polychromator, gamma ray, UV and hard X ray imaging spectrometers, a coronagraph/polarimeter and particle counters. The data gathered thus far indicated that electrical potentials of 25 MeV develop in flares within 2 sec of onset. X ray data show that flares are composed of compressed magnetic loops that have come too close together. Other data have been taken on mass ejection, impacts of electron beams and conduction fronts with the chromosphere and changes in the solar radiant flux due to sunspots. 13 references
Introduction to maximum entropy
International Nuclear Information System (INIS)
Sivia, D.S.
1989-01-01
The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. The author reviews the need for such methods in data analysis and shows, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. He concludes with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab
Functional Maximum Autocorrelation Factors
DEFF Research Database (Denmark)
Larsen, Rasmus; Nielsen, Allan Aasbjerg
2005-01-01
Purpose. We aim at data where samples of an underlying function are observed in a spatial or temporal layout. Examples of underlying functions are reflectance spectra and biological shapes. We apply functional models based on smoothing splines and generalize the functional PCA of [ramsay97] to functional maximum autocorrelation factors (MAF) [switzer85, larsen2001d]. We apply the method to biological shapes as well as reflectance spectra. Methods. MAF seeks linear combinations of the original variables that maximize autocorrelation between ... MAF outperforms the functional PCA in concentrating the 'interesting' spectra/shape variation in one end of the eigenvalue spectrum and allows for easier interpretation of effects. Conclusions. Functional MAF analysis is a useful method for extracting low-dimensional models of temporally or spatially ...
Regularized maximum correntropy machine
Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin
2015-01-01
In this paper we investigate the use of the regularized correntropy framework for learning classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with traditional loss functions.
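The MCC idea can be illustrated in a few lines of numpy. This is a generic gradient-ascent sketch of correntropy maximization with an L2 penalty on synthetic noisy-label data, not the authors' alternating-optimization algorithm; the kernel width `sigma`, penalty `lam`, and the data itself are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic binary labels from a linear rule, with 10% of the labels flipped.
n, d = 400, 2
X = rng.standard_normal((n, d))
y_clean = np.sign(X[:, 0] + 0.3 * X[:, 1])
y = y_clean.copy()
flip = rng.random(n) < 0.10
y[flip] *= -1

def fit_mcc(X, y, sigma=1.0, lam=1e-2, lr=0.5, iters=500):
    """Gradient ascent on (1/n) sum exp(-(y - w.x)^2 / (2 sigma^2)) - lam ||w||^2."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        r = y - X @ w                            # residuals
        g = np.exp(-r ** 2 / (2 * sigma ** 2))   # correntropy kernel weights
        grad = (g * r) @ X / (len(X) * sigma ** 2) - 2 * lam * w
        w += lr * grad
    return w

w = fit_mcc(X, y)
acc = np.mean(np.sign(X @ w) == y_clean)
print(f"agreement with the clean labelling rule: {acc:.2f}")
```

The Gaussian kernel weights `g` shrink the influence of samples with large residuals, which is how flipped labels end up down-weighted relative to a squared loss that treats all samples equally.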
Scintillation counter, maximum gamma aspect
International Nuclear Information System (INIS)
Thumim, A.D.
1975-01-01
A scintillation counter, particularly for counting gamma ray photons, includes a massive lead radiation shield surrounding a sample-receiving zone. The shield is disassemblable into a plurality of segments to allow facile installation and removal of a photomultiplier tube assembly, the segments being so constructed as to prevent straight-line access of external radiation through the shield into radiation-responsive areas. Provisions are made for accurately aligning the photomultiplier tube with respect to one or more sample-transmitting bores extending through the shield to the sample-receiving zone. A sample elevator, used in transporting samples into the zone, is designed to provide a maximum gamma-receiving aspect to maximize the gamma detecting efficiency. (U.S.)
A maximum likelihood framework for protein design
Directory of Open Access Journals (Sweden)
Philippe Hervé
2006-06-01
Full Text Available. Background: The aim of protein design is to predict amino-acid sequences compatible with a given target structure. Traditionally envisioned as a purely thermodynamic question, this problem can also be understood in a wider context, where additional constraints are captured by learning the sequence patterns displayed by natural proteins of known conformation. In this latter perspective, however, we still need a theoretical formalization of the question, leading to general and efficient learning methods, and allowing for the selection of fast and accurate objective functions quantifying sequence/structure compatibility. Results: We propose a formulation of the protein design problem in terms of model-based statistical inference. Our framework uses the maximum likelihood principle to optimize the unknown parameters of a statistical potential, which we call an inverse potential to contrast with classical potentials used for structure prediction. We propose an implementation based on Markov chain Monte Carlo, in which the likelihood is maximized by gradient descent and is numerically estimated by thermodynamic integration. The fit of the models is evaluated by cross-validation. We apply this to a simple pairwise contact potential, supplemented with a solvent-accessibility term, and show that the resulting models have a better predictive power than currently available pairwise potentials. Furthermore, the model comparison method presented here allows one to measure the relative contribution of each component of the potential, and to choose the optimal number of accessibility classes, which turns out to be much higher than classically considered. Conclusion: Altogether, this reformulation makes it possible to test a wide diversity of models, using different forms of potentials, or accounting for other factors than just the constraint of thermodynamic stability. Ultimately, such model-based statistical analyses may help to understand the forces
International Nuclear Information System (INIS)
Ryan, J.
1981-01-01
By understanding the sun, astrophysicists hope to expand this knowledge to understanding other stars. To study the sun, NASA launched a satellite on February 14, 1980. The project is named the Solar Maximum Mission (SMM). The satellite conducted detailed observations of the sun in collaboration with other satellites and ground-based optical and radio observations until its failure 10 months into the mission. The main objective of the SMM was to investigate one aspect of solar activity: solar flares. A brief description of the flare mechanism is given. The SMM satellite was valuable in providing information on where and how a solar flare occurs. A sequence of photographs of a solar flare taken from SMM satellite shows how a solar flare develops in a particular layer of the solar atmosphere. Two flares especially suitable for detailed observations by a joint effort occurred on April 30 and May 21 of 1980. These flares and observations of the flares are discussed. Also discussed are significant discoveries made by individual experiments
Moiseyev, V. A.; Nazarov, V. P.; Zhuravlev, V. Y.; Zhuykov, D. A.; Kubrikov, M. V.; Klokotov, Y. N.
2016-12-01
The development of new technological equipment for the implementation of highly effective methods of recovering highly viscous oil from deep reservoirs is an important scientific and technical challenge. Thermal recovery methods are promising approaches to solving the problem. It is necessary to carry out theoretical and experimental research aimed at developing oil-well tubing (OWT) with composite heat-insulating coatings on the basis of basalt and glass fibers. We used the method of finite element analysis in Nastran software, which implements complex scientific and engineering calculations, including the calculation of the stress-strain state of mechanical systems, the solution of heat transfer problems, the study of nonlinear statics, the dynamic transient analysis of frequency characteristics, etc. As a result, we obtained a mathematical model of thermal conductivity that describes the steady-state temperature and its changes in the fibrous, highly porous material, with heat loss by Stefan-Boltzmann radiation. This has been performed for the first time using the method of computer modeling in the Nastran software environment. The results give grounds for further implementation of the real design of the OWT when implementing thermal methods for increasing the rates of oil production and mitigating environmental impacts.
DEFF Research Database (Denmark)
Asadi, Amin; Asadi, Meisam; Rezaniakolaei, Alireza
2018-01-01
The main objective of the present study is to assess the heat transfer efficiency of Al2O3-MWCNT/thermal oil hybrid nanofluid over different temperatures (25–50 °C) and solid concentrations (0.125%–1.5%). To this end, first of all, the stability of the nano-oil has been studied through the Zeta ... The efficiency of the nanofluid has been evaluated based on different figures of merit. It is revealed that using this nanofluid instead of the base fluid can be beneficial in all the studied solid concentrations and temperatures for both the internal laminar and turbulent flow regimes, except at solid concentrations of 1 and 1.5% in internal turbulent flow regimes. The effect of adding nanoparticles on pumping power and convective heat transfer coefficient has also been theoretically investigated.
Energy Technology Data Exchange (ETDEWEB)
Marc Vanderhaeghen
2007-04-01
The theoretical issues in the interpretation of the precision measurements of the nucleon-to-Delta transition by means of electromagnetic probes are highlighted. The results of these measurements are confronted with the state-of-the-art calculations based on chiral effective-field theories (EFT), lattice QCD, large-Nc relations, perturbative QCD, and QCD-inspired models. The link of the nucleon-to-Delta form factors to generalized parton distributions (GPDs) is also discussed.
Energy Technology Data Exchange (ETDEWEB)
Li, Jieqiong [Institute of Environmental and Analytical Sciences, College of Chemistry and Chemical Engineering, Henan University, Kaifeng, Henan 475004 (China); Wang, Li, E-mail: chemwangl@henu.edu.cn [Institute of Environmental and Analytical Sciences, College of Chemistry and Chemical Engineering, Henan University, Kaifeng, Henan 475004 (China); Wang, Xin [Institute of Environmental and Analytical Sciences, College of Chemistry and Chemical Engineering, Henan University, Kaifeng, Henan 475004 (China); He, Chaozheng, E-mail: hecz2013@nynu.edu.cn [College of Physics and Electronic Engineering, Nanyang Normal University, Nanyang 473061 (China); Zhang, Jinglai, E-mail: zhangjinglai@henu.edu.cn [Institute of Environmental and Analytical Sciences, College of Chemistry and Chemical Engineering, Henan University, Kaifeng, Henan 475004 (China)
2015-08-01
The phosphorescent properties of three synthesized and three newly designed platinum(II) complexes are the focus of this work. To reveal their structure–property relationships, a density functional theory/time-dependent density functional theory (DFT/TDDFT) investigation is performed on the geometric and electronic structures, absorption and emission spectra. The electroluminescent (EL) properties are evaluated by the ionization potential (IP), electron affinity (EA), and reorganization energy (λ). Furthermore, the radiative rate constant (k_r) is qualitatively elucidated by various factors including the strength of the SOC interaction between the higher-lying singlet excited states (S_n) and the T_1 state, the oscillator strength (f) of the S_n states that can couple with the T_1 state, and the energy separation between the coupled states. A combined analysis of various elements that could affect the phosphorescent efficiency is beneficial to exploring efficient triplet phosphors in OLEDs. Consequently, complexes Pt-1 and 1 would be more suitable blue-emitting phosphorescent materials with a balance of EL properties and acceptable quantum yields. - Highlights: • The absorption and phosphorescence spectra of Pt(II) complexes are investigated. • Their Φ_em, IP, EA, and reorganization energy are compared. • Three new Pt(II) complexes are designed.
Calvo Hernández, A.; Roco, J. M. M.; Medina, A.
1996-06-01
Using an improved Brayton cycle as a model, a general analysis accounting for the efficiency and net power output of a gas-turbine power plant with multiple reheating and intercooling stages is presented. This analysis provides a general theoretical tool for the selection of the optimal operating conditions of the heat engine in terms of the compressor and turbine isentropic efficiencies and of the heat exchanger efficiency. Explicit results for the efficiency, net power output, optimized pressure ratios, maximum efficiency, maximum power, efficiency at maximum power, and power at maximum efficiency are given. Among others, the familiar results of the Brayton cycle (one compressor and one turbine) and of the corresponding Ericsson cycle (infinite compressors and infinite turbines) are obtained as particular cases.
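The qualitative behaviour described here, with distinct optimal pressure ratios for maximum power and maximum efficiency, already appears in the simplest case of one compressor and one turbine. The numpy sketch below scans a simple, non-reheated, non-recuperated Brayton cycle with assumed isentropic efficiencies and inlet temperatures; it is an illustration of the trade-off, not the authors' multi-stage model:

```python
import numpy as np

GAMMA = 1.4                 # air-standard assumption
T1, T3 = 300.0, 1300.0      # assumed compressor-inlet / turbine-inlet temperatures, K

def brayton(r_p, eta_c=0.85, eta_t=0.90, gamma=GAMMA):
    """Net specific work (per unit cp) and thermal efficiency of a simple
    Brayton cycle with compressor/turbine isentropic efficiencies."""
    a = r_p ** ((gamma - 1.0) / gamma)        # isentropic temperature ratio
    w_c = T1 * (a - 1.0) / eta_c              # compressor work / cp
    w_t = eta_t * T3 * (1.0 - 1.0 / a)        # turbine work / cp
    q_in = T3 - (T1 + w_c)                    # heat added / cp
    w_net = w_t - w_c
    return w_net, (w_net / q_in if q_in > 0 else float("nan"))

# Scan the pressure ratio for maximum net work and maximum efficiency.
ratios = np.linspace(2.0, 40.0, 500)
results = [brayton(r) for r in ratios]
works = np.array([w for w, _ in results])
etas = np.array([e for _, e in results])
r_wmax = ratios[np.argmax(works)]
r_emax = ratios[np.nanargmax(etas)]
print(f"pressure ratio at max work: {r_wmax:.1f}, at max efficiency: {r_emax:.1f}")
```

With these component efficiencies the maximum-work pressure ratio is markedly lower than the maximum-efficiency one, which is the selection problem the paper's general analysis addresses for multi-stage configurations.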
Janjua, Muhammad Ramzan Saeed Ashraf
2012-11-05
This work was inspired by a previous report (Janjua et al. J. Phys. Chem. A 2009, 113, 3576-3587) in which the nonlinear-optical (NLO) response strikingly improved with an increase in the conjugation path of the ligand and the nature of hexamolybdates (polyoxometalates, POMs) was changed into a donor by altering the direction of charge transfer with a second aromatic ring. Herein, the first theoretical framework of POM-based heteroaromatic rings is found to be another class of excellent NLO materials having double heteroaromatic rings. First hyperpolarizabilities of a large number of push-pull-substituted conjugated systems with heteroaromatic rings have been calculated. The β components were computed at the density functional theory (DFT) level (BP86 geometry optimizations and LB94 time-dependent DFT). The largest β values are obtained with a donor (hexamolybdates) on the benzene ring and an acceptor (-NO2) on pyrrole, thiophene, and furan rings. The pyrrole imido-substituted hexamolybdate (system 1c) has a considerably large first hyperpolarizability, 339.00 × 10^-30 esu, and it is larger than that of (arylimido)hexamolybdate, calculated as 0.302 × 10^-30 esu (reference system 1), because of the double aromatic rings in the heteroaromatic imido-substituted hexamolybdates. The heteroaromatic rings act as a conjugation bridge between the electron acceptor (-NO2) and donor (polyanion). The introduction of an electron donor into heteroaromatic rings significantly enhances the first hyperpolarizabilities because the electron-donating ability is substantially enhanced when the electron donor is attached to the heterocyclic aromatic rings. Interposing five-membered auxiliary fragments between strong donor (polyanion) or acceptor (-NO2) groups results in a large computed second-order NLO response. The present investigation provides important insight into the NLO properties of (heteroaromatic) imido-substituted hexamolybdate derivatives because these compounds
Benbouguerra, Khalissa; Chafaa, Salah; Chafai, Nadjib; Mehri, Mouna; Moumeni, Ouahiba; Hellal, Abdelkader
2018-04-01
New α-aminophosphonate (α-APD) and Schiff base (E-NDPIMA) derivatives have been prepared, and their structures were confirmed by IR, UV-Vis, 1H, 13C and 31P NMR spectroscopy. Their inhibitive capacities on XC48 carbon steel corrosion in 0.5 mol L-1 H2SO4 solution were explored by weight loss, Tafel polarization, electrochemical impedance spectroscopy (EIS) and atomic force microscopy (AFM). Experimental results illustrate that the synthesized compounds are effective inhibitors and that the adsorption of inhibitor molecules on the carbon steel surface obeys the Langmuir adsorption isotherm. In addition, quantum chemical calculations performed with the density functional theory (DFT) method have been used to correlate with the inhibition efficiency established experimentally. Also, molecular dynamics simulations have been utilized to simulate the interactions between the inhibitor molecules and the Fe (100) surface in aqueous solution.
Pereyra, Marcelo
2016-01-01
Maximum-a-posteriori (MAP) estimation is the main Bayesian estimation methodology in many areas of data science such as mathematical imaging and machine learning, where high dimensionality is addressed by using models that are log-concave and whose posterior mode can be computed efficiently by using convex optimisation algorithms. However, despite its success and rapid adoption, MAP estimation is not theoretically well understood yet, and the prevalent view is that it is generally not proper ...
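A minimal concrete instance of log-concave MAP estimation: with a Gaussian likelihood and a Laplace prior, the log-posterior is concave and its maximiser is the familiar soft-thresholding rule. The numbers below are illustrative assumptions; a brute-force grid maximisation confirms the closed form:

```python
import numpy as np

def map_denoise(y, sigma=1.0, lam=0.5):
    """MAP estimate of x from y ~ N(x, sigma^2) under the Laplace prior
    p(x) ∝ exp(-lam * |x|): soft-thresholding of the observation."""
    return np.sign(y) * np.maximum(np.abs(y) - lam * sigma ** 2, 0.0)

# Brute-force check: maximise the (concave) log-posterior on a fine grid.
y_obs = 1.3
grid = np.linspace(-5.0, 5.0, 200001)
log_post = -((y_obs - grid) ** 2) / 2.0 - 0.5 * np.abs(grid)
x_grid = grid[np.argmax(log_post)]
print(f"closed form: {map_denoise(y_obs):.4f}, grid search: {x_grid:.4f}")
```

In high-dimensional imaging models the same concavity is what lets the posterior mode be found by convex optimisation instead of a grid search.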
Joos, Georg
1986-01-01
Among the finest, most comprehensive treatments of theoretical physics ever written, this classic volume comprises a superb introduction to the main branches of the discipline and offers solid grounding for further research in a variety of fields. Students will find no better one-volume coverage of so many essential topics; moreover, since its first publication, the book has been substantially revised and updated with additional material on Bessel functions, spherical harmonics, superconductivity, elastomers, and other subjects.The first four chapters review mathematical topics needed by theo
International Nuclear Information System (INIS)
Laval, G.
1988-01-01
The 1988 progress report of the theoretical Physics Center (Ecole Polytechnique, France), is presented. The research activities are carried out in the fields of the supersymmetry theory, the dynamic systems theory, the statistical mechanics, the plasma physics and the random media. Substantial improvements are obtained on dynamical system investigations. In the field theory, the definition of the Gross-Neveu model is achieved. However the construction of the non-abelian gauge theories and the conformal theories are the main research activities. Concerning Astrophysics, a three-dimensional gravitational code is obtained. The activities of each team, and the list of the published papers, congress communications and thesis are given [fr
International Nuclear Information System (INIS)
Anon.
1980-01-01
The nuclear theory program deals with the properties of nuclei and with the reactions and interactions between nuclei and a variety of projectiles. The main areas of concentration are: heavy-ion direct reactions at nonrelativistic energies; nuclear shell theory and nuclear structure; nuclear matter and nuclear forces; intermediate-energy physics and pion-nucleus interactions; and high-energy collisions of heavy ions. Recent progress and plans for future work in these five main areas of concentration and a summary of other theoretical studies currently in progress or recently completed are presented
Directory of Open Access Journals (Sweden)
Manfred PRETIS
2012-03-01
Full Text Available. Early Childhood Intervention (ECI) for vulnerable children between the ages of 0-3 and 6 can be seen as a well-established preventive service in Europe. Even though recent epidemiologic data indicate higher rates of vulnerability during childhood and adolescence, traditionally up to 6% of children are eligible for ECI treatment. Definitions of ECI range from stable or ad hoc trans-disciplinary teams helping the child to specific professional profiles. There is a scientific consensus regarding the effects of ECI on the child’s development and on family dynamics. ECI itself is responsible for a more stable impact on the socio-emotional development of the child and on the parent-child relationship. Specific focus in the research is given to the role of parents as primary caregivers. Based on the importance of enhancing interactions between parents and children, this paper discusses strategies that help increase the efficiency of ECI through parental involvement. Special attention is dedicated to mutual understanding, transparency and the use of a common language such as the ICF.
Energy Technology Data Exchange (ETDEWEB)
Marchal, J [Diamond Light Source Ltd, Harwell Science and Innovation Campus, Didcot, Oxfordshire OX11 0DE (United Kingdom)], E-mail: julien.marchal@diamond.ac.uk
2010-01-15
A cascaded detector model is proposed to describe the charge-sharing effect in single-photon counting segmented silicon detectors. Linear system theory is applied to this cascaded model in order to derive detector performance parameters such as large-area gain, presampling Modulation Transfer Function (MTF), Noise Power Spectrum (NPS) and Detective Quantum Efficiency (DQE) as a function of energy detection threshold. This theory is used to model one-dimensional detectors (i.e. strip detectors), where X-ray-generated charge can be shared between two sampling elements, but the concepts developed in this article can be generalized to two-dimensional arrays of detecting elements (i.e. pixel detectors). The zero-frequency DQE derived from this model is consistent with expressions reported in the literature using a different method. The ability of this model to simulate the effect of charge sharing on image quality in the spatial frequency domain is demonstrated by applying it to a hypothetical one-dimensional single-photon counting detector illuminated with a typical mammography spectrum.
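The threshold dependence described here can be reproduced qualitatively with a toy Monte Carlo rather than the analytical cascaded model: low thresholds double-count shared events, high thresholds lose them. The strip geometry, photon energy, and the linear charge-sharing profile below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(42)

def counts_vs_threshold(n_photons, thresholds, e_photon=20.0, share_width=0.2):
    """Toy Monte Carlo of a 1D photon-counting strip detector.

    Each photon lands at a uniform random position in a strip (pitch = 1).
    Near a strip boundary, charge splits linearly between the two strips
    (an assumed sharing model). An event is counted once per strip whose
    collected charge exceeds the threshold."""
    x = rng.uniform(0.0, 1.0, n_photons)      # impact position within the strip
    d = np.minimum(x, 1.0 - x)                # distance to the nearest boundary
    frac = np.clip(0.5 - d / share_width, 0.0, 0.5)   # charge fraction shared
    q_main, q_neigh = e_photon * (1.0 - frac), e_photon * frac
    counts = [np.sum(q_main > t) + np.sum(q_neigh > t) for t in thresholds]
    return np.array(counts) / n_photons

thr = np.array([5.0, 10.0, 15.0])             # thresholds in the same (keV-like) units
eff = counts_vs_threshold(200_000, thr)
print(dict(zip(thr.tolist(), eff.round(3).tolist())))
```

Setting the threshold near half the photon energy recovers roughly one count per photon, which is the standard operating point this kind of analysis motivates.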
Ban, Xinxin; Sun, Kaiyong; Sun, Yueming; Huang, Bin; Ye, Shanghui; Yang, Min; Jiang, Wei
2015-11-18
Three solution-processable exciplex-type host materials were successfully designed and characterized by equal-molar blending of hole-transporting molecules with a newly synthesized electron-transporting material, which possesses high thermal stability and good film-forming ability through a spin-coating technique. The excited-state dynamics and the structure-property relationships were systematically investigated. By gradually deepening the highest occupied molecular orbital (HOMO) level of the electron-donating components, the triplet energies of the exciplex hosts were increased from 2.64 to 3.10 eV. Low-temperature phosphorescence spectra demonstrated that an excessively high triplet energy of the exciplex would induce a serious energy leakage from the complex state to the constituting molecule. Furthermore, the low-energy electromer state, which only exists under electroexcitation, was found to be another possible channel for energy loss in exciplex-based phosphorescent organic light-emitting diodes (OLEDs). In particular, as quenching of the exciplex state and the triplet exciton were largely eliminated, solution-processed blue phosphorescence OLEDs using the exciplex-type host achieved an extremely low turn-on voltage of 2.7 V and a record-high power efficiency of 22.5 lm W^-1, which were among the highest values in devices with identical structure.
Stöltzner, Michael
Responding to the double-faced influence of string theory on mathematical practice and rigour, the mathematical physicists Arthur Jaffe and Frank Quinn have contemplated the idea that there exists a `theoretical' mathematics (alongside `theoretical' physics) whose basic structures and results still require independent corroboration by mathematical proof. In this paper, I take the Jaffe-Quinn debate mainly as a problem of mathematical ontology and analyse it against the backdrop of two philosophical views that are appreciative of informal mathematical development and conjectural results: Lakatos's methodology of proofs and refutations and John von Neumann's opportunistic reading of Hilbert's axiomatic method. The comparison of both approaches shows that mitigating Lakatos's falsificationism makes his insights about mathematical quasi-ontology more relevant to 20th-century mathematics, in which new structures are introduced by axiomatisation and are not necessarily motivated by informal ancestors. The final section discusses the consequences of string theorists' claim to finality for the theory's mathematical make-up. I argue that ontological reductionism as advocated by particle physicists and the quest for mathematically deeper axioms do not necessarily lead to identical results.
International Nuclear Information System (INIS)
Li, Weifeng; Liu, Zhongchang; Wang, Zhongshu; Dou, Huili
2015-01-01
Highlights: • The specific heat ratio of the mixture increases with increasing Ar. • The thermal efficiency increases first and then decreases with increasing Ar. • Mechanisms of reducing NOx emissions are different for different dilution gases. • A suitable inert gas should be used to meet different requirements. - Abstract: Argon (Ar), nitrogen (N_2) and carbon dioxide (CO_2), present in exhaust gas recirculation (EGR) and air, are common monatomic, diatomic and polyatomic inert gases, respectively. As dilution gases, they are added to the intake charge to reduce nitrogen oxides (NOx) emissions, directly or along with EGR and air. This paper presents the effects of Ar, N_2 and CO_2 on mixture properties, combustion, thermal efficiency and NOx emissions of pilot-ignited natural gas engines. Thermodynamic properties of the air-dilution gas mixture with increasing dilution gases, including density, gas constant, specific heat ratio, specific heat capacity, heat capacity and thermal diffusivity, were analyzed theoretically using thermodynamic relations and ideal gas equations based on experimental results. The thermal and diluent effects of dilution gases on NOx emissions were investigated based on the Arrhenius law and the Zeldovich mechanism, experimentally and theoretically. The experiments were conducted on an electronically controlled, heavy-duty, 6-cylinder, turbocharged, pilot-ignited natural gas engine. The results show that adding different inert gases to the intake charge had different influences on the thermodynamic properties of the air-dilution gas mixture. No great change in combustion phase was found with increasing dilution ratio (DR) of Ar, while the flame development duration increased significantly and CA50 moved away from the combustion top dead center (TDC) with increasing DR for both N_2 and CO_2. Adding Ar was superior to CO_2 and N_2 in maintaining high thermal efficiencies, but adding CO_2 was superior in maintaining
Almutairy, Meznah; Torng, Eric
2017-01-01
One of the most common ways to search a sequence database for sequences that are similar to a query sequence is to use a k-mer index such as BLAST. A big problem with k-mer indexes is the space required to store the lists of all occurrences of all k-mers in the database. One method for reducing the space needed, and also query time, is sampling where only some k-mer occurrences are stored. Most previous work uses hard sampling, in which enough k-mer occurrences are retained so that all similar sequences are guaranteed to be found. In contrast, we study soft sampling, which further reduces the number of stored k-mer occurrences at a cost of decreasing query accuracy. We focus on finding highly similar local alignments (HSLA) over nucleotide sequences, an operation that is fundamental to biological applications such as cDNA sequence mapping. For our comparison, we use the NCBI BLAST tool with the human genome and human ESTs. When identifying HSLAs, we find that soft sampling significantly reduces both index size and query time with relatively small losses in query accuracy. For the human genome and HSLAs of length at least 100 bp, soft sampling reduces index size 4-10 times more than hard sampling and processes queries 2.3-6.8 times faster, while still achieving retention rates of at least 96.6%. When we apply soft sampling to the problem of mapping ESTs against the genome, we map more than 98% of ESTs perfectly while reducing the index size by a factor of 4 and query time by 23.3%. These results demonstrate that soft sampling is a simple but effective strategy for performing efficient searches for HSLAs. We also provide a new model for sampling with BLAST that predicts empirical retention rates with reasonable accuracy by modeling two key problem factors.
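The hard- versus soft-sampling idea can be sketched in a few lines. The toy index below is illustrative only (it is not the NCBI BLAST implementation); the sequence, function names, and the choice of `step` are hypothetical. Hard sampling stores every k-mer occurrence; soft sampling keeps only every `step`-th occurrence, shrinking the index at some cost in retention.

```python
# Illustrative sketch of k-mer occurrence sampling (not the BLAST internals).

def build_index(seq, k, step=1):
    """Map each k-mer to the list of sampled start positions (step=1: keep all)."""
    index = {}
    for i in range(0, len(seq) - k + 1, step):
        index.setdefault(seq[i:i + k], []).append(i)
    return index

def candidate_offsets(index, k, query):
    """Candidate database offsets where the query may align, via shared k-mers."""
    hits = set()
    for j in range(len(query) - k + 1):
        for pos in index.get(query[j:j + k], ()):
            hits.add(pos - j)  # implied start of the alignment in the database
    return hits

db = "ACGTACGTGGCACGTACGT"
full = build_index(db, k=4, step=1)  # stores every occurrence
soft = build_index(db, k=4, step=3)  # soft sampling: roughly one third of them
```

A query sharing k-mers with the database still finds the correct offset in the soft index whenever at least one of its k-mer occurrences survived sampling, which is why retention degrades gracefully rather than abruptly.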
Robustness - theoretical framework
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard; Rizzuto, Enrico; Faber, Michael H.
2010-01-01
More frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure combined with increased requirements to efficiency in design and execution followed by increased risk of human errors has made the need of requirements to robustness of new struct...... of this fact sheet is to describe a theoretical and risk based framework to form the basis for quantification of robustness and for pre-normative guidelines....
Maximum intensity projection MR angiography using shifted image data
International Nuclear Information System (INIS)
Machida, Yoshio; Ichinose, Nobuyasu; Hatanaka, Masahiko; Goro, Takehiko; Kitake, Shinichi; Hatta, Junicchi.
1992-01-01
The quality of MR angiograms has been significantly improved in the past several years. Spatial resolution, however, is not sufficient for clinical use. On the other hand, MR image data can be interpolated at arbitrary positions using the Fourier shift theorem, and the quality of multi-planar reformatted images has been reported to improve remarkably using 'shifted data'. In this paper, we clarify the efficiency of 'shifted data' for maximum intensity projection MR angiography. Our experimental studies and theoretical considerations showed that the quality of MR angiograms is significantly improved using 'shifted data' as follows: 1) remarkable reduction of mosaic artifacts, 2) improved spatial continuity of the blood vessels, and 3) reduced variance of the signal intensity along the blood vessels. In other words, the angiograms look much 'finer' than conventional ones, although the spatial resolution is not improved theoretically. Furthermore, we found that the quality of MR angiograms does not improve significantly with 'shifted data' more than twice as dense as the original data. (author)
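The Fourier shift theorem exploited above can be illustrated on a toy 1-D signal: multiplying the spectrum by a linear phase ramp shifts the sampled data by an arbitrary (even fractional) amount, which is how intermediate sample positions can be "filled in". This is a minimal self-contained sketch with a naive O(n^2) DFT, not the reconstruction code of the paper.

```python
import cmath

def dft(x, sign=-1):
    """Naive discrete Fourier transform (sign=-1: forward, sign=+1: inverse core)."""
    n = len(x)
    return [sum(xk * cmath.exp(sign * 2j * cmath.pi * i * k / n)
                for k, xk in enumerate(x)) for i in range(n)]

def fourier_shift(x, delta):
    """Circularly shift the samples of x by delta positions (delta may be fractional)."""
    n = len(x)
    spec = dft(x, sign=-1)
    # Interpret frequency indices as signed (negative above n//2) before ramping.
    shifted = [s * cmath.exp(-2j * cmath.pi * ((i + n // 2) % n - n // 2) * delta / n)
               for i, s in enumerate(spec)]
    return [v / n for v in dft(shifted, sign=+1)]
```

For an integer shift the result coincides with a plain circular shift; for a fractional shift it yields band-limited interpolation between the original samples.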
International Nuclear Information System (INIS)
Ryu, Han-Youl
2012-01-01
Based on the rate equation model of semiconductor lasers, the radiative efficiency and threshold current density of InGaN-based blue laser diodes (LDs) are theoretically investigated, including the effect of efficiency droop in the InGaN quantum wells. The peak point of the radiative efficiency versus current density relation is used as the parameter of the rate equation analysis. The threshold current density of InGaN blue LDs is found to depend strongly on the maximum radiative efficiency at low current density, implying that improving the maximum efficiency is important to maintain a high radiative efficiency at a large current density and to achieve a low-threshold lasing action under the influence of efficiency droop.
Theoretical Physics 1. Theoretical Mechanics
International Nuclear Information System (INIS)
Dreizler, Reiner M.; Luedde, Cora S.
2010-01-01
After an introduction to basic concepts of mechanics more advanced topics build the major part of this book. Interspersed is a discussion of selected problems of motion. This is followed by a concise treatment of the Lagrangian and the Hamiltonian formulation of mechanics, as well as a brief excursion on chaotic motion. The last chapter deals with applications of the Lagrangian formulation to specific systems (coupled oscillators, rotating coordinate systems, rigid bodies). The level of this textbook is advanced undergraduate. The authors combine teaching experience of more than 40 years in all fields of Theoretical Physics and related mathematical disciplines and thorough knowledge in creating advanced eLearning content. The text is accompanied by an extensive collection of online material, in which the possibilities of the electronic medium are fully exploited, e.g. in the form of applets, 2D- and 3D-animations. (orig.)
Theoretical Physics 1. Theoretical Mechanics
Energy Technology Data Exchange (ETDEWEB)
Dreizler, Reiner M.; Luedde, Cora S. [Frankfurt Univ. (Germany). Inst. fuer Theoretische Physik
2010-07-01
After an introduction to basic concepts of mechanics more advanced topics build the major part of this book. Interspersed is a discussion of selected problems of motion. This is followed by a concise treatment of the Lagrangian and the Hamiltonian formulation of mechanics, as well as a brief excursion on chaotic motion. The last chapter deals with applications of the Lagrangian formulation to specific systems (coupled oscillators, rotating coordinate systems, rigid bodies). The level of this textbook is advanced undergraduate. The authors combine teaching experience of more than 40 years in all fields of Theoretical Physics and related mathematical disciplines and thorough knowledge in creating advanced eLearning content. The text is accompanied by an extensive collection of online material, in which the possibilities of the electronic medium are fully exploited, e.g. in the form of applets, 2D- and 3D-animations. (orig.)
DEFF Research Database (Denmark)
Asadi, Meisam; Asadi, Amin; Aberoumand, Sadegh
2018-01-01
The present work aims to study heat transfer performance and pumping power of MgO-MWCNT/ thermal oil hybrid nanofluid. Using a KD2 Pro thermal analyzer, the thermal conductivity of the samples have been measured. The results showed an increasing trend for the thermal conductivity of the nanofluid...... by increasing the mass concentration and temperature, in which the maximum enhancement of thermal conductivity was approximately 65%. Predicting the thermal conductivity of the nanofluid, a highly accurate correlation in terms of solid concentration and temperature has been proposed. Moreover, the heat transfer...... nanofluid is highly efficient in heat transfer applications as a coolant fluid in both the laminar and turbulent flow regimes, although it causes a certain penalty in the pumping power....
Credal Networks under Maximum Entropy
Lukasiewicz, Thomas
2013-01-01
We apply the principle of maximum entropy to select a unique joint probability distribution from the set of all joint probability distributions specified by a credal network. In detail, we start by showing that the unique joint distribution of a Bayesian tree coincides with the maximum entropy model of its conditional distributions. This result, however, does not hold anymore for general Bayesian networks. We thus present a new kind of maximum entropy model, which is computed sequentially. ...
Maximum margin semi-supervised learning with irrelevant data.
Yang, Haiqin; Huang, Kaizhu; King, Irwin; Lyu, Michael R
2015-10-01
Semi-supervised learning (SSL) is a typical learning paradigm that trains a model from both labeled and unlabeled data. Traditional SSL models usually assume that the unlabeled data are relevant to the labeled data, i.e., that they follow the same distribution as the targeted labeled data. In this paper, we address a different, yet formidable, scenario in semi-supervised classification, where the unlabeled data may contain data irrelevant to the labeled data. To tackle this problem, we develop a maximum margin model, named the tri-class support vector machine (3C-SVM), to utilize the available training data while seeking a hyperplane that separates the targeted data well. Our 3C-SVM exhibits several characteristics and advantages. First, it does not need any prior knowledge or explicit assumption on the data relatedness. On the contrary, it can relieve the effect of irrelevant unlabeled data based on the logistic principle and the maximum entropy principle. That is, 3C-SVM approaches an ideal classifier: it relies heavily on labeled data and is confident on the relevant data lying far away from the decision hyperplane, while maximally ignoring the irrelevant data, which are hardly distinguishable. Second, theoretical analysis is provided to prove under what conditions the irrelevant data can help to seek the hyperplane. Third, 3C-SVM is a generalized model that unifies several popular maximum margin models, including standard SVMs, semi-supervised SVMs (S(3)VMs), and SVMs learned from the universum (U-SVMs), as its special cases. More importantly, we deploy a concave-convex procedure to solve the proposed 3C-SVM, transforming the original mixed integer program to a semi-definite programming relaxation, and finally to a sequence of quadratic programming subproblems, which yields the same worst-case time complexity as that of S(3)VMs. Finally, we demonstrate the effectiveness and efficiency of our proposed 3C-SVM through systematic experimental comparisons. Copyright
Theoretical Mechanics Theoretical Physics 1
Dreizler, Reiner M
2011-01-01
After an introduction to basic concepts of mechanics more advanced topics build the major part of this book. Interspersed is a discussion of selected problems of motion. This is followed by a concise treatment of the Lagrangian and the Hamiltonian formulation of mechanics, as well as a brief excursion on chaotic motion. The last chapter deals with applications of the Lagrangian formulation to specific systems (coupled oscillators, rotating coordinate systems, rigid bodies). The level of this textbook is advanced undergraduate. The authors combine teaching experience of more than 40 years in all fields of Theoretical Physics and related mathematical disciplines and thorough knowledge in creating advanced eLearning content. The text is accompanied by an extensive collection of online material, in which the possibilities of the electronic medium are fully exploited, e.g. in the form of applets, 2D- and 3D-animations. - A collection of 74 problems with detailed step-by-step guidance towards the solutions. - A col...
DEFF Research Database (Denmark)
Asadi, Meisam; Asadi, Amin; Aberoumand, Sadegh
2018-01-01
The present work aims to study heat transfer performance and pumping power of MgO-MWCNT/ thermal oil hybrid nanofluid. Using a KD2 Pro thermal analyzer, the thermal conductivity of the samples have been measured. The results showed an increasing trend for the thermal conductivity of the nanofluid...... efficiency and pumping power in all the studied range of solid concentrations and temperatures have been theoretically investigated, based on the experimental data of dynamic viscosity and thermal conductivity, for both the internal laminar and turbulent flow regimes. It was observed that the studied...... nanofluid is highly efficient in heat transfer applications as a coolant fluid in both the laminar and turbulent flow regimes, although it causes a certain penalty in the pumping power....
A maximum power point tracking for photovoltaic-SPE system using a maximum current controller
Energy Technology Data Exchange (ETDEWEB)
Muhida, Riza [Osaka Univ., Dept. of Physical Science, Toyonaka, Osaka (Japan); Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Park, Minwon; Dakkak, Mohammed; Matsuura, Kenji [Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Tsuyoshi, Akira; Michira, Masakazu [Kobe City College of Technology, Nishi-ku, Kobe (Japan)
2003-02-01
Processes to produce hydrogen from solar photovoltaic (PV)-powered water electrolysis using solid polymer electrolysis (SPE) are reported. An alternative control of maximum power point tracking (MPPT) in the PV-SPE system, based on maximum current searching methods, has been designed and implemented. Based on the voltage-current characteristics and theoretical analysis of the SPE, it can be shown that tracking the maximum output current of the DC-DC converter on the SPE side simultaneously tracks the maximum power point of the photovoltaic panel. This method uses a proportional-integral controller to control the duty factor of the DC-DC converter with a pulse-width modulator (PWM). The MPPT performance and hydrogen production performance of this method have been evaluated and discussed based on the results of the experiment. (Author)
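The search principle behind current-based MPPT can be sketched as a perturb-and-observe loop over the converter duty factor. This is an illustrative toy, not the authors' PI controller: the plant model `output_current` is invented purely so the loop has something to climb, and the step size and iteration count are arbitrary.

```python
# Hedged sketch: hill-climbing the duty factor D to maximize converter
# output current, on a made-up plant whose current peaks at D = 0.5.

def output_current(duty):
    """Toy plant model: output current peaks at an interior duty factor."""
    return max(0.0, 4.0 * duty * (1.0 - duty))  # peak value 1.0 at D = 0.5

def track_max_current(step=0.01, iters=200):
    duty, direction = 0.1, +1
    prev = output_current(duty)
    for _ in range(iters):
        duty = min(0.99, max(0.01, duty + direction * step))
        cur = output_current(duty)
        if cur < prev:          # passed the peak: reverse the perturbation
            direction = -direction
        prev = cur
    return duty
```

In a real system the "plant" is the PV panel plus DC-DC converter, and the perturbation is applied through the PWM duty factor; the loop settles into a small oscillation around the maximum-current point.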
Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood
Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim
2017-04-01
Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. To estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules, when used as optimization criteria, should be able to locate the same unknown optimum; discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients; the log-likelihood estimator is slightly more efficient. A real-world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
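For a Gaussian predictive distribution the CRPS has a well-known closed form, which is what minimum CRPS estimation averages over training cases. The sketch below implements only that scoring function (the regression fitting itself is omitted); it is a minimal illustration, not the estimation code of the study.

```python
import math

def crps_gaussian(mu, sigma, y):
    """Closed-form CRPS of a Gaussian forecast N(mu, sigma^2) for outcome y."""
    z = (y - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # standard normal pdf
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # standard normal cdf
    return sigma * (z * (2.0 * cdf - 1.0) + 2.0 * pdf - 1.0 / math.sqrt(math.pi))
```

Minimum CRPS estimation would choose the regression coefficients (and hence mu and sigma per case) minimizing the mean of this score, whereas maximum likelihood minimizes the mean negative log-density; for a correctly specified Gaussian both target the same optimum.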
Implementation of GAMMON - An efficient load balancing strategy for a local computer system
Baumgartner, Katherine M.; Kling, Ralph M.; Wah, Benjamin W.
1989-01-01
GAMMON (Global Allocation from Maximum to Minimum in cONstant time), an efficient load-balancing algorithm, is described. GAMMON uses the available broadcast capability of multiaccess networks to implement an efficient search technique for finding hosts with maximal and minimal loads. The search technique has an average overhead which is independent of the number of participating stations. The transition from the theoretical concept to a practical, reliable, and efficient implementation is described.
Rational Design of Spur Gears Directed to Increase Efficiency and Decrease Loss by Friction
Gonzalo González Rey; Alejandra García Toll; María Eugenia García Domínguez
2010-01-01
External parallel-axis cylindrical gears are considered very efficient means of transmitting mechanical power, but meeting the maximum-efficiency requirements of current machines and equipment calls for precise procedures for calculating power losses. In this sense, the Technical Report ISO/TR 14179-1:2001 offers formulas with empirical and theoretical bases to evaluate gear efficiency considering gear mesh losses, windage and churning losses, and losses by bearings ...
Parameters and error of a theoretical model
International Nuclear Information System (INIS)
Moeller, P.; Nix, J.R.; Swiatecki, W.
1986-09-01
We propose a definition for the error of a theoretical model of the type whose parameters are determined from adjustment to experimental data. By applying a standard statistical method, the maximum-likelihood method, we derive expressions for both the parameters of the theoretical model and its error. We investigate the derived equations by solving them for simulated experimental and theoretical quantities generated by use of random number generators. 2 refs., 4 tabs
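The core idea can be sketched in a deliberately simplified form: if the deviations d_i between theoretical and experimental values are modeled as zero-mean Gaussian with an unknown model error sigma (measurement errors neglected here, which the paper does not do), maximizing the likelihood over the d_i gives the closed-form estimate sigma^2 = mean(d_i^2). The simulated check mirrors the paper's use of random number generators; all numbers are illustrative.

```python
import math, random

def model_error(theory, experiment):
    """ML estimate of a Gaussian model error sigma from theory-experiment deviations."""
    d = [t - e for t, e in zip(theory, experiment)]
    return math.sqrt(sum(x * x for x in d) / len(d))

# Simulated check: "experimental" values generated from the theoretical
# ones with a known Gaussian scatter of 0.5, which the estimator recovers.
random.seed(0)
theory = [0.1 * i for i in range(2000)]
experiment = [t + random.gauss(0.0, 0.5) for t in theory]
estimate = model_error(theory, experiment)
```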
Particle Swarm Optimization Based of the Maximum Photovoltaic ...
African Journals Online (AJOL)
Photovoltaic electricity is seen as an important source of renewable energy. The photovoltaic array is an unstable source of power since the peak power point depends on the temperature and the irradiation level. Maximum power point tracking is then necessary for maximum efficiency. In this work, a Particle Swarm ...
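A minimal particle swarm sketch illustrates how such a tracker searches the power-voltage curve for its peak. The PV curve, swarm parameters, and bounds below are all made up for demonstration; this is not the optimizer of the cited work.

```python
import random

def pv_power(v):
    """Toy P(V) curve with a single peak at V = 10 (power 50)."""
    return max(0.0, v * (10.0 - 0.5 * v))

def pso_mppt(n=10, iters=60, seed=1):
    rng = random.Random(seed)
    pos = [rng.uniform(0.0, 20.0) for _ in range(n)]   # candidate voltages
    vel = [0.0] * n
    best = list(pos)                                   # personal bests
    gbest = max(pos, key=pv_power)                     # global best
    for _ in range(iters):
        for i in range(n):
            r1, r2 = rng.random(), rng.random()
            vel[i] = (0.6 * vel[i]
                      + 1.5 * r1 * (best[i] - pos[i])   # pull toward personal best
                      + 1.5 * r2 * (gbest - pos[i]))    # pull toward global best
            pos[i] += vel[i]
            if pv_power(pos[i]) > pv_power(best[i]):
                best[i] = pos[i]
            if pv_power(pos[i]) > pv_power(gbest):
                gbest = pos[i]
    return gbest
```

Unlike simple hill climbing, the swarm can cope with multi-peaked P(V) curves (e.g. under partial shading), which is the usual motivation for PSO-based MPPT.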
maximum conversion efficiency of thermionic heat to electricity
African Journals Online (AJOL)
Dushman constant ... Several attempts on the direct conversion of heat to electricity ... The net current density in the system is equal to j_E - j_C, the current that surmounts the potential barrier. j_E and j_C are given by the Richardson-Dushman equation, j = A T^2 exp(-φ/kT).
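The Richardson-Dushman equation mentioned above can be evaluated numerically; this sketch uses the standard constant A and Boltzmann constant in eV/K, while the temperatures and work functions in the example are illustrative only.

```python
import math

A = 1.20173e6        # Richardson-Dushman constant, A m^-2 K^-2
K_EV = 8.617333e-5   # Boltzmann constant in eV/K

def richardson_current(temp_k, phi_ev):
    """Emission current density (A/m^2) at temperature temp_k (K), work function phi_ev (eV)."""
    return A * temp_k * temp_k * math.exp(-phi_ev / (K_EV * temp_k))

def net_current(t_emitter, phi_emitter, t_collector, phi_collector):
    """Net current density j_E - j_C between emitter and collector."""
    return richardson_current(t_emitter, phi_emitter) - richardson_current(t_collector, phi_collector)
```

With equal work functions, the hotter electrode dominates, so the net current flows from emitter to collector as expected.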
Maximum herd efficiency in meat production I. Optima for slaughter ...
African Journals Online (AJOL)
Optimal replacement involves either the minimum or maximum rate that can be achieved, and depends on the relative costs and output involved in keeping different age classes of reproductive animals. Finally, the relationship between replacement rate and herd age structure is explained. The profit ratio in ...
Maximum Entropy in Drug Discovery
Directory of Open Access Journals (Sweden)
Chih-Yuan Tseng
2014-07-01
Full Text Available Drug discovery applies multidisciplinary approaches either experimentally, computationally or both ways to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes’ pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
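The maximum entropy principle the article builds on can be illustrated with Jaynes' classic die example: among all distributions on {1,...,6} with a prescribed mean, the least-biased (maximum entropy) one is exponential in the outcome, p_k proportional to exp(lam*k), with the Lagrange multiplier lam fixed by the mean constraint. The bisection solver below is a generic sketch, not a drug-discovery method.

```python
import math

def maxent_die(target_mean):
    """Maximum entropy distribution on {1,...,6} with the given mean."""
    def mean(lam):
        w = [math.exp(lam * k) for k in range(1, 7)]
        z = sum(w)
        return sum(k * wk for k, wk in zip(range(1, 7), w)) / z
    lo, hi = -10.0, 10.0            # mean(lam) is increasing, so bisect on lam
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * k) for k in range(1, 7)]
    z = sum(w)
    return [wk / z for wk in w]
```

For a mean of 3.5 the constraint is uninformative and the result is uniform; for a mean of 4.5 the probabilities tilt monotonically toward the higher faces, encoding exactly the constraint and nothing more.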
International Nuclear Information System (INIS)
Flamigni, Luca; Koch, Joachim; Günther, Detlef
2012-01-01
Current quantification capabilities of laser ablation inductively-coupled plasma mass spectrometry (LA-ICP-MS) are known to be restricted by elemental fractionation as a result of LA-, transport-, and ICP-induced effects which, particularly, may provoke inaccuracies whenever calibration strategies on the basis of non-matrix matched standard materials are applied. The present study is dealing with the role of ICP in this complex scenario. Therefore, the vaporization process of laser-produced aerosols and subsequent diffusion losses occurring inside ICP sources were investigated using 2-D optical emission spectrometry (OES) and ICP-quadrupole (Q)MS of individual particles. For instance, Na- and Ca-specific OES of aerosols produced by LA of silicate glasses or metals revealed axial shifts in the onset and maximum position of atomic emission which were in the range of a few millimeters. The occurrence of these shifts was found to arise from composition-dependent particle/aerosol penetration depths, i.e. the displacement of axial vaporization starting points controlling the ion extraction efficiency through the ICP-MS vacuum interface due to a delayed, diffusion-driven expansion of oxidic vs. metallic aerosols. Furthermore, ICP-QMS of individual particles resulted in 1/e half-value signal durations of approximately 100 μs, which complies with modeled values if OES maxima are assumed to coincide with positions of instantaneous vaporization and starting points for atomic diffusion. To prove phenomena observed for their consistency, in addition, “ab initio” as well as semi-empirical simulations of particle/aerosol penetration depths followed by diffusion-driven expansion was accomplished indicating differences of up to 15% in the relative ion extraction efficiency depending on whether analytes are supplied as metals or oxides. Implications of these findings on the accuracy achievable by state-of-the-art LA-ICP-MS systems are outlined. - Highlights: ► Specification
Feedback Limits to Maximum Seed Masses of Black Holes
International Nuclear Information System (INIS)
Pacucci, Fabio; Natarajan, Priyamvada; Ferrara, Andrea
2017-01-01
The most massive black holes observed in the universe weigh up to ~10^10 M_⊙, nearly independent of redshift. Reaching these final masses likely required copious accretion and several major mergers. Employing a dynamical approach that rests on the role played by a new, relevant physical scale, the transition radius, we provide a theoretical calculation of the maximum mass achievable by a black hole seed that forms in an isolated halo, one that scarcely merged. Incorporating effects at the transition radius and their impact on the evolution of accretion in isolated halos, we are able to obtain new limits for permitted growth. We find that large black hole seeds (M_• ≳ 10^4 M_⊙) hosted in small isolated halos (M_h ≲ 10^9 M_⊙) accreting with relatively small radiative efficiencies (ε ≲ 0.1) grow optimally in these circumstances. Moreover, we show that the standard M_•-σ relation observed at z ~ 0 cannot be established in isolated halos at high z, but requires the occurrence of mergers. Since the average limiting mass of black holes formed at z ≳ 10 is in the range 10^4-10^6 M_⊙, we expect to observe them in local galaxies as intermediate-mass black holes, when hosted in the rare halos that experienced only minor or no merging events. Such ancient black holes, formed in isolation with subsequent scant growth, could survive, almost unchanged, until the present.
Maximum power operation of interacting molecular motors
DEFF Research Database (Denmark)
Golubeva, Natalia; Imparato, Alberto
2013-01-01
We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.
Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting.
Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L
2016-08-01
This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.
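The conventional MR fingerprinting reconstruction that the paper identifies with the first iteration of its ML algorithm is, at its core, dictionary matching: pick the dictionary entry whose normalized inner product with the measured signal evolution is largest. The sketch below uses toy exponential-decay signals; the dictionary, parameter values, and function name are purely illustrative.

```python
import math

def match_fingerprint(signal, dictionary):
    """Return the parameter key whose dictionary entry best matches the signal.

    dictionary maps parameter values to signal evolutions; the score is the
    normalized inner product (matched filter), as in conventional MRF.
    """
    def score(entry):
        dot = sum(a * b for a, b in zip(signal, entry))
        norm = math.sqrt(sum(b * b for b in entry))
        return abs(dot) / norm
    return max(dictionary, key=lambda p: score(dictionary[p]))

# Toy example: dictionary of decaying exponentials parameterized by decay time T.
times = [0.1 * i for i in range(50)]
dictionary = {T: [math.exp(-t / T) for t in times] for T in (1.0, 2.0, 3.0)}
signal = [math.exp(-t / 2.0) for t in times]
```

By Cauchy-Schwarz the normalized inner product is maximized by the entry proportional to the signal, so the matched filter recovers the generating parameter when it is in the dictionary.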
Maximum stellar iron core mass
Indian Academy of Sciences (India)
Journal of physics, Vol. 60, No. 3, March 2003, pp. 415-422. Maximum stellar iron core mass. F W GIACOBBE, Chicago Research Center/American Air Liquide ... iron core compression due to the weight of non-ferrous matter overlying the iron cores within large .... thermal equilibrium velocities will tend to be non-relativistic.
Maximum entropy beam diagnostic tomography
International Nuclear Information System (INIS)
Mottershead, C.T.
1985-01-01
This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore. 11 refs., 4 figs
Maximum entropy beam diagnostic tomography
International Nuclear Information System (INIS)
Mottershead, C.T.
1985-01-01
This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore
A portable storage maximum thermometer
International Nuclear Information System (INIS)
Fayart, Gerard.
1976-01-01
A clinical thermometer that stores the voltage corresponding to the maximum temperature in an analog memory is described. The end of the measurement is indicated by a lamp switching off. The measurement time is shortened by means of a low-thermal-inertia platinum probe. This portable thermometer is fitted with a cell test and calibration system [fr]
Maximum Interconnectedness and Availability for Directional Airborne Range Extension Networks
2016-08-29
IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS. Maximum Interconnectedness and Availability for Directional Airborne Range Extension Networks. Thomas... I. INTRODUCTION. Tactical military networks both on land and at sea often have restricted transmission... a standard definition in graph theoretic and networking literature that is related to, but different from, the metric we consider.
On Maximum Entropy and Inference
Directory of Open Access Journals (Sweden)
Luigi Gresele
2017-11-01
Full Text Available Maximum entropy is a powerful concept that entails a sharp separation between relevant and irrelevant variables. It is typically invoked in inference, once an assumption is made on what the relevant variables are, in order to estimate a model from data that affords predictions on all other (dependent) variables. Conversely, maximum entropy can be invoked to retrieve the relevant variables (sufficient statistics) directly from the data, once a model is identified by Bayesian model selection. We explore this approach in the case of spin models with interactions of arbitrary order, and we discuss how relevant interactions can be inferred. In this perspective, the dimensionality of the inference problem is set not by the number of parameters in the model, but by the frequency distribution of the data. We illustrate the method by showing its ability to recover the correct model in a few prototype cases and discuss its application to a real dataset.
Maximum Water Hammer Sensitivity Analysis
Jalil Emadi; Abbas Solemani
2011-01-01
Pressure waves and water hammer occur in a pumping system when valves are closed or opened suddenly, or in the case of sudden pump failure. Determining the maximum water hammer is one of the most important technical and economic issues that engineers and designers of pumping stations and conveyance pipelines must address. Hammer software is a recent application used to simulate water hammer. The present study focuses on determining the significance of ...
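A classic first estimate of the surge the abstract refers to is the Joukowsky relation, ΔP = ρ·a·ΔV, for an instantaneous valve closure. The sketch below uses illustrative values, not figures from the study:

```python
# Joukowsky estimate of the maximum water-hammer pressure surge for an
# instantaneous valve closure: dP = rho * a * dV.
# All numbers below are illustrative, not taken from the study above.

def joukowsky_surge(rho, wave_speed, delta_v):
    """Pressure rise in Pa for an instantaneous change in flow velocity."""
    return rho * wave_speed * delta_v

rho = 1000.0   # water density, kg/m^3
a = 1200.0     # pressure-wave speed in the pipe, m/s
dv = 2.0       # flow velocity stopped instantaneously, m/s

dp = joukowsky_surge(rho, a, dv)
print(f"surge = {dp / 1e5:.1f} bar")  # surge = 24.0 bar
```

Slower closures and pipe elasticity reduce this value, which is why full transient simulation (as in the study) is used for design rather than the bare formula.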
Directory of Open Access Journals (Sweden)
Yunfeng Shan
2008-01-01
Full Text Available Genomes and genes diversify during evolution; however, it is unclear to what extent genes still retain the relationship among species. Model species for molecular phylogenetic studies include yeasts and viruses whose genomes have been sequenced, as well as plants that have fossil-supported true phylogenetic trees available. In this study, we generated single-gene trees of seven yeast species as well as single-gene trees of nine baculovirus species using all the orthologous genes among the species compared. Homologous genes among seven known plants were used for validation of the finding. Four algorithms were used: maximum parsimony (MP), minimum evolution (ME), maximum likelihood (ML), and neighbor-joining (NJ). Trees were reconstructed before and after weighting the DNA and protein sequence lengths among genes. Rarely can a single gene generate the "true tree" under all four algorithms. However, the most frequent gene tree, termed the "maximum gene-support tree" (MGS tree, or WMGS tree for the weighted one), in yeasts, baculoviruses, or plants was consistently found to be the "true tree" among the species. The results provide insights into the overall degree of divergence of the orthologous genes of the genomes analyzed and suggest the following: (1) the true tree relationship among the species studied is still maintained by the largest group of orthologous genes; (2) there are usually more orthologous genes with higher similarities between genetically closer species than between genetically more distant ones; and (3) the maximum gene-support tree reflects the phylogenetic relationship among the species compared.
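The "maximum gene-support tree" idea reduces to a vote: each gene, under each algorithm, yields one tree topology, and the MGS tree is the topology produced most often. A minimal sketch with hypothetical topology labels:

```python
from collections import Counter

# Sketch of the MGS-tree vote. Gene names, the three-taxon topologies, and
# the per-algorithm results are hypothetical; real input would be Newick
# trees from MP, ME, ML, and NJ runs on each orthologous gene.

gene_trees = {
    "gene1": ["((A,B),C)", "((A,B),C)", "((A,C),B)", "((A,B),C)"],  # MP, ME, ML, NJ
    "gene2": ["((A,B),C)"] * 4,
    "gene3": ["((B,C),A)", "((A,B),C)", "((A,B),C)", "((A,B),C)"],
}

votes = Counter(t for trees in gene_trees.values() for t in trees)
mgs_tree, support = votes.most_common(1)[0]
print(mgs_tree, support)  # ((A,B),C) 10
```

The weighted variant (WMGS) would simply replace the unit vote with a weight proportional to sequence length.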
LCLS Maximum Credible Beam Power
International Nuclear Information System (INIS)
Clendenin, J.
2005-01-01
The maximum credible beam power is defined as the highest credible average beam power that the accelerator can deliver to the point in question, given the laws of physics, the beam line design, and assuming all protection devices have failed. For a new accelerator project, the official maximum credible beam power is determined by project staff in consultation with the Radiation Physics Department, after examining the arguments and evidence presented by the appropriate accelerator physicist(s) and beam line engineers. The definitive parameter becomes part of the project's safety envelope. This technical note first reviews the studies that were done for the Gun Test Facility (GTF) at SSRL, where a photoinjector similar to the one proposed for the LCLS is being tested. In Section 3 the maximum charge out of the gun for a single rf pulse is calculated. In Section 4, PARMELA simulations are used to track the beam from the gun to the end of the photoinjector. Finally, in Section 5, the transport of the beam through the matching section and its injection into Linac-1 are discussed.
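The quantity being bounded is just charge per pulse times repetition rate times beam energy (expressed in volts). A back-of-envelope sketch with illustrative numbers, not official LCLS parameters:

```python
# Average beam power P = (charge per pulse) x (repetition rate) x (beam
# energy in volts). The values below are illustrative assumptions only.

charge_per_pulse = 1e-9   # C (1 nC)
rep_rate = 120.0          # Hz
beam_energy_V = 14e9      # eV per particle, expressed as volts

avg_power_W = charge_per_pulse * rep_rate * beam_energy_V
print(avg_power_W)        # about 1.7 kW average
```

The credible-beam-power analysis then asks how far above nominal each factor could plausibly go with all protection devices failed.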
Regional and global exergy and energy efficiencies
Energy Technology Data Exchange (ETDEWEB)
Nakicenovic, N; Kurz, R [International Inst. for Applied Systems Analysis, Laxenburg (Austria). Environmentally Compatible Energy Strategies (ECS) Project; Gilli, P V [Graz Univ. of Technology (Austria)
1996-03-01
We present estimates of global energy efficiency by applying second-law (exergy) analysis to regional and global energy balances. We use a uniform analysis of national and regional energy balances and aggregate these balances first for three main economic regions and subsequently into world totals. The procedure involves assessment of energy and exergy efficiencies at each step of energy conversion, from primary exergy to final and useful exergy. Ideally, the analysis should be extended to include actual delivered energy services; unfortunately, data are scarce and only rough estimates can be given for this last stage of energy conversion. The overall result is that the current global primary to useful exergy efficiency is about one-tenth of the theoretical maximum and the service efficiency is even lower. (Author)
DEFF Research Database (Denmark)
Tsilipakos, O.; Pitilakis, A.; Yioultsis, T. V.
2012-01-01
A comprehensive theoretical analysis of end-fire coupling between dielectric-loaded surface plasmon polariton and rib/wire silicon-on-insulator (SOI) waveguides is presented. Simulations are based on the 3-D vector finite element method. The geometrical parameters of the interface are varied in order to identify the ones leading to optimum performance, i.e., maximum coupling efficiency. Fabrication tolerances about the optimum parameter values are also assessed. In addition, the effect of a longitudinal metallic stripe gap on coupling efficiency is quantified, since such gaps have been ...
Generic maximum likely scale selection
DEFF Research Database (Denmark)
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus in this work is on applying this selection principle under a Brownian image model. This image model provides a simple scale-invariant prior for natural images, and we provide illustrative examples of the behavior of our scale estimation on such images. In these illustrative examples, estimation is based ...
Combining Experiments and Simulations Using the Maximum Entropy Principle
DEFF Research Database (Denmark)
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-01-01
Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy applications in this area has grown in recent years. The general procedure is explained in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges.
Extreme Maximum Land Surface Temperatures.
Garratt, J. R.
1992-09-01
There are numerous reports in the literature of observations of land surface temperatures. Some of these, almost all made in situ, reveal maximum values in the 50°-70°C range, with a few, made in desert regions, near 80°C. Consideration of a simplified form of the surface energy balance equation, utilizing likely upper values of absorbed shortwave flux (1000 W m⁻²) and screen air temperature (55°C), suggests that surface temperatures in the vicinity of 90°-100°C may occur for dry, darkish soils of low thermal conductivity (0.1-0.2 W m⁻¹ K⁻¹). Numerical simulations confirm this and suggest that temperature gradients in the first few centimeters of soil may reach 0.5°-1°C mm⁻¹ under these extreme conditions. The study bears upon the intrinsic interest of identifying extreme maximum temperatures and yields interesting information regarding the comfort zone of animals (including man).
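The 90°-100°C figure can be sanity-checked with an even simpler balance than the one in the abstract: if conduction, evaporation, and sensible heat exchange are all neglected, pure radiative equilibrium σT⁴ = Q_abs gives an upper bound. A minimal sketch (emissivity taken as 1, an assumption):

```python
# Upper-bound check on the quoted extreme surface temperatures: radiative
# equilibrium sigma * T^4 = Q_abs, ignoring all non-radiative heat losses.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temp_c(q_absorbed):
    """Blackbody equilibrium temperature in deg C for absorbed flux in W/m^2."""
    return (q_absorbed / SIGMA) ** 0.25 - 273.15

print(round(equilibrium_temp_c(1000.0), 1))
```

With the abstract's upper value of 1000 W m⁻² absorbed flux this lands near 91°C, consistent with the 90°-100°C range obtained from the fuller surface energy balance.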
Direct maximum parsimony phylogeny reconstruction from genotype data
Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell
2007-01-01
Background Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data are more commonly available in the form of ge...
System for memorizing maximum values
Bozeman, Richard J., Jr.
1992-08-01
The invention discloses a system capable of memorizing maximum sensed values. The system includes conditioning circuitry which receives the analog output signal from a sensor transducer. The conditioning circuitry rectifies and filters the analog signal and provides an input signal to a digital driver, which may be either linear or logarithmic. The driver converts the analog signal to discrete digital values, which in turn trigger an output signal on one of a plurality of driver output lines n. The particular output line selected depends on the converted digital value. A microfuse memory device connects across the driver output lines, with n segments. Each segment is associated with one driver output line, and includes a microfuse that is blown when a signal appears on the associated driver output line.
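The behavior of the circuit, stripped of its hardware realization, is a peak-hold: retain the largest value seen so far in a stream of samples. A minimal software analogue:

```python
# Software analogue of the peak-memorizing circuit: keep the largest value
# seen so far from a stream of sensor samples. Class and values are
# illustrative, not part of the patent.

class PeakHold:
    def __init__(self):
        self.peak = float("-inf")

    def update(self, sample):
        """Feed one sample; return the maximum seen so far."""
        if sample > self.peak:
            self.peak = sample
        return self.peak

holder = PeakHold()
for v in [0.2, 1.7, 0.9, 3.4, 2.1]:
    holder.update(v)
print(holder.peak)  # 3.4
```

The microfuse array makes this memory non-volatile: once a higher driver line fires, the blown fuse permanently records that the corresponding level was reached.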
Remarks on the maximum luminosity
Cardoso, Vitor; Ikeda, Taishi; Moore, Christopher J.; Yoo, Chul-Moon
2018-04-01
The quest for fundamental limitations on physical processes is old and venerable. Here, we investigate the maximum possible power, or luminosity, that any event can produce. We show, via full nonlinear simulations of Einstein's equations, that there exist initial conditions which give rise to arbitrarily large luminosities. However, the requirement that there is no past horizon in the spacetime seems to limit the luminosity to below the Planck value, L_P = c⁵/G. Numerical relativity simulations of critical collapse yield the largest luminosities observed to date, ≈ 0.2 L_P. We also present an analytic solution to the Einstein equations which seems to give an unboundedly large luminosity; this will guide future numerical efforts to investigate super-Planckian luminosities.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan; Wang, Yi; Zhao, Shiguang; Gao, Xin
2014-01-01
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
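The quantity the regularizer maximizes can be illustrated with a plain plug-in estimate of mutual information from joint counts of responses and labels (the paper itself optimizes an entropy-based estimate inside a learning objective; this sketch only shows the measured quantity):

```python
from collections import Counter
from math import log2

# Plug-in estimate of I(response; label) from paired observations.
# Illustrative only; not the paper's entropy-estimation objective.

def mutual_information(responses, labels):
    n = len(labels)
    joint = Counter(zip(responses, labels))
    pr = Counter(responses)
    pl = Counter(labels)
    mi = 0.0
    for (r, l), c in joint.items():
        p_rl = c / n
        # p(r,l) * log2( p(r,l) / (p(r) p(l)) )
        mi += p_rl * log2(p_rl * n * n / (pr[r] * pl[l]))
    return mi

# A perfectly informative response carries H(label) = 1 bit here.
print(mutual_information([0, 0, 1, 1], [0, 0, 1, 1]))  # 1.0
```

A response independent of the label gives zero mutual information, which is exactly the situation the regularizer penalizes.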
Information theoretical assessment of visual communication with wavelet coding
Rahman, Zia-ur
1995-06-01
A visual communication channel can be characterized by the efficiency with which it conveys information, and the quality of the images restored from the transmitted data. Efficient data representation requires the use of the constraints of the visual communication channel. Our information-theoretic analysis combines the design of the wavelet compression algorithm with the design of the visual communication channel. Shannon's communication theory, Wiener's restoration filter, and the critical design factors of image gathering and display are combined to provide metrics for measuring the efficiency of data transmission and for quantitatively assessing the visual quality of the restored image. These metrics are: (a) the mutual information η between the radiance field and the restored image, and (b) the efficiency of the channel, which can be roughly measured as the ratio η/H, where H is the average number of bits used to transmit the data. Huck et al. (Journal of Visual Communication and Image Representation, Vol. 4, No. 2, 1993) have shown that channels designed to maximize η also maximize η/H. Our assessment provides a framework for designing channels which provide the highest possible visual quality for a given amount of data under the critical design limitations of the image gathering and display devices. Results show that a trade-off exists between the maximum realizable information of the channel and its efficiency: an increase in one leads to a decrease in the other. The final selection of which of these quantities to maximize is, of course, application dependent.
A comparative study of the maximum power point tracking methods for PV systems
International Nuclear Information System (INIS)
Liu, Yali; Li, Ming; Ji, Xu; Luo, Xi; Wang, Meidi; Zhang, Ying
2014-01-01
Highlights: • An improved maximum power point tracking method for PV systems is proposed. • The theoretical derivation procedure of the proposed method is provided. • Simulation models of MPPT trackers were established in MATLAB/Simulink. • Experiments were conducted to verify the effectiveness of the proposed MPPT method. - Abstract: Maximum power point tracking (MPPT) algorithms play an important role in optimizing the power and efficiency of a photovoltaic (PV) generation system. To address the trade-off in the classical perturb and observe (P&Oa) method between response speed and steady-state tracking accuracy, an improved P&O (P&Ob) method is put forward in this paper using the Aitken interpolation algorithm. To validate the correctness and performance of the proposed method, simulation and experimental studies have been carried out. Simulation models of the classical P&Oa method and the improved P&Ob method were established in MATLAB/Simulink to analyze each technique under varying solar irradiation and temperature. The experimental results show that the tracking efficiency of the P&Ob method averages 93%, compared to 72% for the P&Oa method; this conclusion basically agrees with the simulation study. Finally, we propose the applicable conditions and scope of these MPPT methods in practical applications.
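The classical perturb-and-observe loop that this record's improved method builds on can be sketched in a few lines. The toy power-voltage curve, step size, and starting point below are illustrative assumptions, not the paper's PV model:

```python
# Minimal perturb-and-observe (P&O) MPPT loop on a toy PV curve:
# perturb the operating voltage; if power rose, keep going, otherwise
# reverse direction. The device model here is a hypothetical parabola.

def pv_power(v):
    """Toy concave P-V curve with its maximum power point at v = 17.0 V."""
    return max(0.0, 60.0 - 0.5 * (v - 17.0) ** 2)

def perturb_and_observe(v=10.0, step=0.2, iters=200):
    p_prev = pv_power(v)
    direction = 1.0
    for _ in range(iters):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:            # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

print(round(perturb_and_observe(), 1))  # oscillates near 17.0 V
```

The steady-state oscillation of ±one step around the MPP is exactly the speed/accuracy trade-off the abstract mentions: a larger step tracks faster but oscillates more.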
Photovoltaic System Modeling with Fuzzy Logic Based Maximum Power Point Tracking Algorithm
Directory of Open Access Journals (Sweden)
Hasan Mahamudul
2013-01-01
Full Text Available This paper presents a novel modeling technique for a PV module with a fuzzy-logic-based MPPT algorithm and a boost converter in the Simulink environment. The prime contributions of this work are the simplification of the PV modeling technique and the implementation of a fuzzy-based MPPT system to track maximum power efficiently. The main highlighted points of this paper are the demonstration of precise control of the duty cycle under various atmospheric conditions, the illustration of PV characteristic curves, and an operational analysis of the converter. The proposed system has been applied to three different PV modules: SOLKAR 36 W, BP MSX 60 W, and KC85T 87 W. Finally, the resulting data have been compared with theoretical predictions and company-specified values to ensure the validity of the system.
Comparison of P&O and INC Methods in Maximum Power Point Tracker for PV Systems
Chen, Hesheng; Cui, Yuanhui; Zhao, Yue; Wang, Zhisen
2018-03-01
In the context of renewable energy, the maximum power point tracker (MPPT) is often used to increase solar power efficiency, taking into account the randomness and volatility of solar energy due to changes in temperature and irradiance. Among MPPT techniques, perturb & observe and incremental conductance are widely used in MPPT controllers because of their simplicity and ease of operation. Based on the internal structure of the photovoltaic cell and its output volt-ampere characteristic, this paper establishes the circuit model and a dynamic simulation model in Matlab/Simulink using an S-function. The perturb & observe and incremental conductance MPPT methods were analyzed and compared through theoretical analysis and digital simulation. The simulation results show that the system with the INC MPPT method has better dynamic performance and improves the output power of photovoltaic power generation.
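The incremental-conductance rule compared here exploits the fact that at the maximum power point d(VI)/dV = 0, i.e. dI/dV = -I/V. A minimal sketch on a hypothetical PV curve (not the paper's Simulink model):

```python
# Minimal incremental-conductance (INC) MPPT loop. At the MPP the
# incremental conductance dI/dV equals the negative instantaneous
# conductance -I/V; the comparison steers the operating voltage.
# Curve, step size, and starting point are illustrative assumptions.

def pv_current(v):
    """Toy device: P(V) = 60 - 0.5*(V-17)^2, so I = P/V (MPP at 17 V)."""
    p = max(0.0, 60.0 - 0.5 * (v - 17.0) ** 2)
    return p / v

def inc_mppt(v=10.0, step=0.2, iters=200):
    v_prev, i_prev = v, pv_current(v)
    v += step                      # initial perturbation
    for _ in range(iters):
        i = pv_current(v)
        dv, di = v - v_prev, i - i_prev
        v_prev, i_prev = v, i
        if dv != 0 and di / dv > -i / v:
            v += step              # left of the MPP: raise the voltage
        else:
            v -= step              # at/right of the MPP: lower the voltage
    return v

print(round(inc_mppt(), 1))       # settles near 17 V
```

Because the rule tests a derivative condition rather than a raw power change, INC reacts more cleanly to fast irradiance ramps, which matches the dynamic-performance advantage reported above.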
Preconditioned alternating projection algorithms for maximum a posteriori ECT reconstruction
International Nuclear Information System (INIS)
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-01-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We theoretically prove the convergence of PAPA. In numerical experiments, the performance of our algorithm, with an appropriately selected preconditioning matrix, is compared with that of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms EM-TV in all aspects, including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in convergence speed while providing comparable image quality. (paper)
Feedback Limits to Maximum Seed Masses of Black Holes
Energy Technology Data Exchange (ETDEWEB)
Pacucci, Fabio; Natarajan, Priyamvada [Department of Physics, Yale University, P.O. Box 208121, New Haven, CT 06520 (United States); Ferrara, Andrea [Scuola Normale Superiore, Piazza dei Cavalieri 7, I-56126 Pisa (Italy)
2017-02-01
The most massive black holes observed in the universe weigh up to ∼10^10 M_⊙, nearly independent of redshift. Reaching these final masses likely required copious accretion and several major mergers. Employing a dynamical approach that rests on the role played by a new, relevant physical scale, the transition radius, we provide a theoretical calculation of the maximum mass achievable by a black hole seed that forms in an isolated halo, one that scarcely merged. Incorporating effects at the transition radius and their impact on the evolution of accretion in isolated halos, we are able to obtain new limits for permitted growth. We find that large black hole seeds (M_• ≳ 10^4 M_⊙) hosted in small isolated halos (M_h ≲ 10^9 M_⊙) accreting with relatively small radiative efficiencies (ϵ ≲ 0.1) grow optimally in these circumstances. Moreover, we show that the standard M_•-σ relation observed at z ∼ 0 cannot be established in isolated halos at high z, but requires the occurrence of mergers. Since the average limiting mass of black holes formed at z ≳ 10 is in the range 10^4-10^6 M_⊙, we expect to observe them in local galaxies as intermediate-mass black holes, when hosted in the rare halos that experienced only minor or no merging events. Such ancient black holes, formed in isolation with subsequent scant growth, could survive, almost unchanged, until the present.
Maximum likelihood of phylogenetic networks.
Jin, Guohua; Nakhleh, Luay; Snir, Sagi; Tuller, Tamir
2006-11-01
Horizontal gene transfer (HGT) is believed to be ubiquitous among bacteria, and plays a major role in their genome diversification as well as their ability to develop resistance to antibiotics. In light of its evolutionary significance and implications for human health, developing accurate and efficient methods for detecting and reconstructing HGT is imperative. In this article we provide a new HGT-oriented likelihood framework for many problems that involve phylogeny-based HGT detection and reconstruction. Besides the formulation of various likelihood criteria, we show that most of these problems are NP-hard, and offer heuristics for efficient and accurate reconstruction of HGT under these criteria. We implemented our heuristics and used them to analyze biological as well as synthetic data. In both cases, our criteria and heuristics exhibited very good performance with respect to identifying the correct number of HGT events as well as inferring their correct location on the species tree. Implementations of the criteria as well as the heuristics and hardness proofs are available from the authors upon request. Hardness proofs can also be downloaded at http://www.cs.tau.ac.il/~tamirtul/MLNET/Supp-ML.pdf
Maximum likelihood window for time delay estimation
International Nuclear Information System (INIS)
Lee, Young Sup; Yoon, Dong Jin; Kim, Chi Yup
2004-01-01
Time delay estimation for the detection of leak location in underground pipelines is critically important. Because the exact leak location depends upon the precision of the time delay between sensor signals due to leak noise and the speed of elastic waves, the estimation of time delay has been one of the key issues in leak locating with the time-arrival-difference method. In this study, an optimal maximum likelihood window is considered to obtain a better estimation of the time delay. This method has been proved in experiments, which can provide much clearer and more precise peaks in the cross-correlation functions of leak signals. The leak-location error has been less than 1% of the distance between sensors; for example, the error was not greater than 3 m for 300 m long underground pipelines. Apart from the experiments, an intensive theoretical analysis in terms of signal processing is presented. The improved leak locating with the suggested method is due to the windowing effect in the frequency domain, which offers a weighting of significant frequencies.
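The underlying estimator is the peak of the cross-correlation between the two sensor signals. A bare sketch with synthetic data (the ML window discussed above would additionally weight the spectrum, which is omitted here):

```python
# Time-delay estimation by locating the cross-correlation peak between two
# sensor signals. Signals and delay are synthetic, for illustration only.

def delayed(signal, d):
    """Return signal delayed by d samples, zero-padded at the front."""
    return [0.0] * d + signal[:-d] if d else list(signal)

def xcorr_delay(x, y, max_lag):
    """Lag (in samples) maximizing sum_n x[n] * y[n + lag]."""
    best_lag, best_val = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        val = sum(x[n] * y[n + lag]
                  for n in range(len(x)) if 0 <= n + lag < len(y))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

x = [0, 0, 1, 3, 2, 1, 0, 0, 0, 0, 0, 0]
y = delayed(x, 4)               # second sensor hears the leak 4 samples later
print(xcorr_delay(x, y, 6))     # 4
```

Multiplying the estimated lag by the elastic wave speed then converts the time difference into a distance along the pipe, which is how the leak position is located.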
Maximum Entropy Estimation of Transition Probabilities of Reversible Markov Chains
Directory of Open Access Journals (Sweden)
Erik Van der Straeten
2009-11-01
Full Text Available In this paper, we develop a general theory for the estimation of the transition probabilities of reversible Markov chains using the maximum entropy principle. A broad range of physical models can be studied within this approach. We use one-dimensional classical spin systems to illustrate the theoretical ideas. The examples studied in this paper are: the Ising model, the Potts model and the Blume-Emery-Griffiths model.
The maximum entropy production and maximum Shannon information entropy in enzyme kinetics
Dobovišek, Andrej; Markovič, Rene; Brumen, Milan; Fajmut, Aleš
2018-04-01
We demonstrate that the maximum entropy production principle (MEPP) serves as a physical selection principle for the description of the most probable non-equilibrium steady states in simple enzymatic reactions. A theoretical approach is developed which enables maximization of the density of entropy production with respect to the enzyme rate constants for the enzyme reaction in a steady state. Conservation of mass and of Gibbs free energy are considered as optimization constraints. The optimal enzyme rate constants computed in this way for a steady state also yield the most uniform probability distribution of the enzyme states. This accounts for the maximal Shannon information entropy. By means of stability analysis it is also demonstrated that maximal density of entropy production in the enzyme reaction requires a flexible enzyme structure, which enables rapid transitions between different enzyme states. These results are supported by an example, in which the density of entropy production and the Shannon information entropy are numerically maximized for the enzyme glucose isomerase.
Numerical modelling of high efficiency InAs/GaAs intermediate band solar cell
Imran, Ali; Jiang, Jianliang; Eric, Debora; Yousaf, Muhammad
2018-01-01
Quantum dot (QD) intermediate band solar cells (IBSC) are among the most attractive candidates for the next generation of photovoltaic applications. In this paper, a theoretical model of an InAs/GaAs device is proposed, in which we calculate the effect of variation in the thickness of the intrinsic and IB layers on the efficiency of the solar cell using detailed balance theory. The IB energies have been optimized for different IB layer thicknesses. A maximum efficiency of 46.6% is calculated for the IB material under maximum optical concentration.
Maximum entropy principle for transportation
International Nuclear Information System (INIS)
Bilich, F.; Da Silva, R.
2008-01-01
In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
A Theoretical Study of Microwave Beam Absorption by a Rectenna
Ott, J. H.; Rice, J. S.; Thorn, D. C.
1981-01-01
The theoretical operational parameters for a workable satellite power system were examined. The system requirements for efficient transmission and reception of an environmentally benign microwave beam were determined.
Maximum super angle optimization method for array antenna pattern synthesis
DEFF Research Database (Denmark)
Wu, Ji; Roederer, A. G
1991-01-01
Different optimization criteria related to antenna pattern synthesis are discussed. Based on the maximum criteria and a vector space representation, a simple and efficient optimization method is presented for array and array-fed reflector power pattern synthesis. A sector pattern synthesized by a 2...
A comparison of optimum and maximum reproduction using the rat ...
African Journals Online (AJOL)
of pigs to increase the reproduction rate of sows (te Brake, 1978; Walker et al., 1979; Kemm et al., 1980). However, no experimental evidence exists that this strategy would in fact improve biological efficiency. In this pilot experiment, an attempt was made to compare systems of optimum or maximum reproduction using the rat.
Last Glacial Maximum Salinity Reconstruction
Homola, K.; Spivack, A. J.
2016-12-01
It has previously been demonstrated that salinity can be reconstructed from sediment porewater. The goal of our study is to reconstruct high-precision salinity during the Last Glacial Maximum (LGM). Salinity is usually determined at high precision via conductivity, which requires a larger volume of water than can be extracted from a sediment core, or via chloride titration, which yields lower than ideal precision. It has been demonstrated for water-column samples that high-precision density measurements can be used to determine salinity at the precision of a conductivity measurement using the equation of state of seawater. However, water-column seawater has a relatively constant composition, in contrast to porewater, where variations from standard seawater composition occur. These deviations, which affect the equation of state, must be corrected for through precise measurements of each ion's concentration and knowledge of its apparent partial molar density in seawater. We have developed a density-based method for determining porewater salinity that requires only 5 mL of sample, achieving density precisions of 10⁻⁶ g/mL. We have applied this method to porewater samples extracted from long cores collected along a N-S transect across the western North Atlantic (R/V Knorr cruise KN223). Density was determined to a precision of 2.3×10⁻⁶ g/mL, which translates to a salinity uncertainty of 0.002 g/kg if the effect of differences in composition is well constrained. Concentrations of anions (Cl⁻ and SO₄²⁻) and cations (Na⁺, Mg²⁺, Ca²⁺, and K⁺) were measured. To correct salinities at the precision required to unravel LGM Meridional Overturning Circulation, our ion precisions must be better than 0.1% for SO₄²⁻/Cl⁻ and Mg²⁺/Na⁺, and 0.4% for Ca²⁺/Na⁺ and K⁺/Na⁺. Alkalinity, pH and dissolved inorganic carbon of the porewater were determined to precisions better than 4% when ratioed to Cl⁻, and used to calculate HCO₃⁻ and CO₃²⁻. Apparent partial molar densities in seawater were ...
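The link between the quoted density and salinity precisions is a simple sensitivity propagation: near standard seawater, density rises by very roughly 0.00075 g/mL per 1 g/kg of salinity. That slope is an assumed nominal value here, for illustration only; the study uses the full equation of state:

```python
# Rough sensitivity behind the density-based salinity method:
# dS ~ d(rho) / (d rho / dS). The slope below is an assumed nominal value
# near standard seawater, not a figure from the study.

DRHO_DS = 0.00075  # g/mL per (g/kg), assumed local equation-of-state slope

def salinity_uncertainty(density_uncertainty):
    """Map a density precision (g/mL) to a salinity precision (g/kg)."""
    return density_uncertainty / DRHO_DS

print(round(salinity_uncertainty(2.3e-6), 4))  # 0.0031
```

A 2.3×10⁻⁶ g/mL density precision thus maps to a few thousandths of a g/kg in salinity, the same order as the 0.002 g/kg figure quoted above once composition corrections are applied.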
Maximum Parsimony on Phylogenetic networks
2012-01-01
Background Phylogenetic networks are generalizations of phylogenetic trees that are used to model evolutionary events in various contexts. Several different methods and criteria have been introduced for reconstructing phylogenetic trees. Maximum Parsimony is a character-based approach that infers a phylogenetic tree by minimizing the total number of evolutionary steps required to explain a given set of data assigned on the leaves. Exact solutions for optimizing parsimony scores on phylogenetic trees have been introduced in the past. Results In this paper, we define the parsimony score on networks as the sum of the substitution costs along all the edges of the network, and show that certain well-known algorithms that calculate the optimum parsimony score on trees, such as the Sankoff and Fitch algorithms, extend naturally to networks, barring conflicting assignments at the reticulate vertices. We provide heuristics for finding the optimum parsimony scores on networks. Our algorithms can be applied for any cost matrix that may contain unequal substitution costs of transforming between different characters along different edges of the network. We analyzed experimental data on networks with 10 leaves or fewer and at most 2 reticulations, and found that for almost all networks, the bounds returned by the heuristics matched the exhaustively determined optimum parsimony scores. Conclusion The parsimony score we define here does not directly reflect the cost of the best tree in the network that displays the evolution of the character. However, when searching for the most parsimonious network that describes a collection of characters, it becomes necessary to add additional cost considerations to prefer simpler structures, such as trees over networks. The parsimony score on a network that we describe here takes into account the substitution costs along the additional edges incident on each reticulate vertex, in addition to the substitution costs along the other edges which are
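The tree-based computation that the abstract builds on can be sketched with Fitch's small-parsimony algorithm; the toy tree and character states below are illustrative, not data from the paper.

```python
# Sketch of Fitch's small-parsimony algorithm on a rooted binary tree,
# the tree baseline that the paper extends to networks.
# An internal node is a pair (left, right); a leaf is its character state.

def fitch(node):
    """Return (candidate state set, parsimony score) for the subtree."""
    if isinstance(node, str):              # leaf: observed character state
        return {node}, 0
    (lset, lcost), (rset, rcost) = fitch(node[0]), fitch(node[1])
    inter = lset & rset
    if inter:                              # agreement: no extra substitution
        return inter, lcost + rcost
    return lset | rset, lcost + rcost + 1  # disagreement: one substitution

# Example: the leaf states A, C, A, G require 2 substitutions under Fitch.
tree = (("A", "C"), ("A", "G"))
states, score = fitch(tree)
print(score)  # 2
```

The Sankoff variant generalizes this by carrying a full cost vector per node, which is what allows the unequal substitution costs mentioned in the abstract.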
INTEGRAL ASSESSMENT OF EFFICIENCY OF A FLEET OF AGRICULTURAL MACHINERY AND TRACTORS
Directory of Open Access Journals (Sweden)
V. M. Korotchenya
2015-01-01
Full Text Available An indicator for an integral assessment of the efficiency of a fleet of agricultural machinery and tractors is proposed. It is calculated by multiplying partial efficiencies together: the technical and price efficiency of agricultural production, and the eco-efficiency of the fleet of agricultural machinery and tractors. Using an axiomatic method, the concept of fleet efficiency was studied within the broader category of productivity and at a wider scale, within agriculture as a whole. The most general and evident statements, according to which the efficiency of the machine and tractor fleet is treated as a derived quantity computed from the efficiency of agricultural production, were accepted as axioms. In general, efficiency is the ratio between the productivity of the object in question and that of a theoretical ideal, or of an object with maximum productivity whose efficiency is taken to be unity (100%). Technical efficiency of agricultural production characterizes the ability of an agricultural sector to produce the maximum technically feasible output of agricultural produce from the available resources (land, labor, machines, etc.). Price efficiency evaluates the ability of an agricultural sector to produce output using the optimal mix of resources given their prices. Because the theoretical ideal does not exist, the efficiency of an agricultural sector, e.g. that of Russia and of its fleet of agricultural machinery and tractors, is measured against the agriculture of another country with maximum productivity, whose efficiency is taken to be unity (100%). The calculation can be made on the basis of the Data Envelopment Analysis method. It is possible to calculate the price and economic efficiency of Russian agriculture (and of its fleet of agricultural machinery and tractors) using its domestic prices.
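The Data Envelopment Analysis step mentioned above can be sketched with the standard input-oriented CCR envelopment model; the two-unit data set is made up for illustration and is not from the article.

```python
# Minimal input-oriented CCR DEA model solved as a linear program:
# min theta  s.t.  sum_j lambda_j x_ij <= theta * x_io  (inputs)
#                  sum_j lambda_j y_rj >= y_ro           (outputs), lambda >= 0
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Efficiency of unit o, given inputs X (m x n) and outputs Y (s x n)."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                    # variables: [theta, lambdas]
    A_in = np.hstack([-X[:, [o]], X])              # inputs scaled by theta
    A_out = np.hstack([np.zeros((s, 1)), -Y])      # outputs must cover unit o
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, o]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

X = np.array([[1.0, 2.0]])   # one input, two decision-making units
Y = np.array([[1.0, 1.0]])   # one output
print(ccr_efficiency(X, Y, 0))  # ~1.0: efficient unit
print(ccr_efficiency(X, Y, 1))  # ~0.5: twice the input for the same output
```

Benchmarking one country's agriculture against another, as the article describes, corresponds to evaluating each unit against the frontier spanned by all units.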
Parametric optimization of thermoelectric elements footprint for maximum power generation
DEFF Research Database (Denmark)
Rezania, A.; Rosendahl, Lasse; Yin, Hao
2014-01-01
The development studies in thermoelectric generator (TEG) systems are mostly disconnected from parametric optimization of the module components. In this study, the optimum footprint ratio of n- and p-type thermoelectric (TE) elements is explored to achieve maximum power generation, maximum cost-performance, and variation of efficiency in the uni-couple over a wide range of the heat transfer coefficient on the cold junction. The three-dimensional (3D) governing equations of the thermoelectricity and the heat transfer are solved using the finite element method (FEM) for temperature dependent properties of TE materials. The results, which are in good agreement with the previous computational studies, show that the maximum power generation and the maximum cost-performance in the module occur at An/Ap
Theoretical study of liquid droplet dispersion in a venturi scrubber.
Fathikalajahi, J; Talaie, M R; Taheri, M
1995-03-01
The droplet concentration distribution in an atomizing scrubber was calculated based on droplet eddy diffusion by a three-dimensional dispersion model. This model is also capable of predicting the liquid flowing on the wall. The theoretical distribution of droplet concentration agrees well with experimental data given by Viswanathan et al. for droplet concentration distribution in a venturi-type scrubber. The results obtained by the model show a non-uniform distribution of drops over the cross section of the scrubber, as noted by the experimental data. While the maximum of droplet concentration distribution may depend on many operating parameters of the scrubber, the results of this study show that the highest uniformity of drop distribution will be reached when penetration length is approximately equal to one-fourth of the depth of the scrubber. The results of this study can be applied to evaluate the removal efficiency of a venturi scrubber.
Hydraulic Limits on Maximum Plant Transpiration
Manzoni, S.; Vico, G.; Katul, G. G.; Palmroth, S.; Jackson, R. B.; Porporato, A. M.
2011-12-01
Photosynthesis occurs at the expense of water losses through transpiration. As a consequence of this basic carbon-water interaction at the leaf level, plant growth and ecosystem carbon exchanges are tightly coupled to transpiration. In this contribution, the hydraulic constraints that limit transpiration rates under well-watered conditions are examined across plant functional types and climates. The potential water flow through plants is proportional to both xylem hydraulic conductivity (which depends on plant carbon economy) and the difference in water potential between the soil and the atmosphere (the driving force that pulls water from the soil). Differently from previous works, we study how this potential flux changes with the amplitude of the driving force (i.e., we focus on xylem properties and not on stomatal regulation). Xylem hydraulic conductivity decreases as the driving force increases due to cavitation of the tissues. As a result of this negative feedback, more negative leaf (and xylem) water potentials would provide a stronger driving force for water transport, while at the same time limiting xylem hydraulic conductivity due to cavitation. Here, the leaf water potential value that allows an optimum balance between driving force and xylem conductivity is quantified, thus defining the maximum transpiration rate that can be sustained by the soil-to-leaf hydraulic system. To apply the proposed framework at the global scale, a novel database of xylem conductivity and cavitation vulnerability across plant types and biomes is developed. Conductivity and water potential at 50% cavitation are shown to be complementary (in particular between angiosperms and conifers), suggesting a tradeoff between transport efficiency and hydraulic safety. Plants from warmer and drier biomes tend to achieve larger maximum transpiration than plants growing in environments with lower atmospheric water demand. The predicted maximum transpiration and the corresponding leaf water
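The optimum described above, a supply rate that saturates as cavitation removes conductivity, can be illustrated numerically; the Weibull-type vulnerability curve and all parameter values below are illustrative, not taken from the authors' database.

```python
# Transpiration supply E(psi_leaf) = integral of k(psi) from psi_leaf to
# psi_soil, with xylem conductivity k declining as water potential drops.
import math

K_MAX, PSI_50, C = 5.0, -2.0, 3.0   # illustrative: conductivity scale,
                                    # 50%-loss potential (MPa), curve shape

def k(psi):
    """Weibull-type vulnerability curve: conductivity remaining at psi (MPa)."""
    return K_MAX * math.exp(-math.log(2.0) * (psi / PSI_50) ** C)

def supply(psi_leaf, psi_soil=0.0, n=4000):
    """E = integral of k(psi) between leaf and soil potentials (trapezoid rule)."""
    h = (psi_soil - psi_leaf) / n
    edges = 0.5 * (k(psi_leaf) + k(psi_soil))
    interior = sum(k(psi_leaf + i * h) for i in range(1, n))
    return (edges + interior) * h

# Lowering leaf water potential first raises E (stronger driving force),
# then E saturates as cavitation destroys conductivity: the plateau is the
# maximum transpiration rate the soil-to-leaf pathway can sustain.
for psi in (-1.0, -2.0, -4.0, -8.0):
    print(psi, round(supply(psi), 2))
```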
Two-dimensional maximum entropy image restoration
International Nuclear Information System (INIS)
Brolley, J.E.; Lazarus, R.B.; Suydam, B.R.; Trussell, H.J.
1977-07-01
An optical check problem was constructed to test P LOG P maximum entropy restoration of an extremely distorted image. Useful recovery of the original image was obtained. Comparison with maximum a posteriori restoration is made. 7 figures
Theoretical solid state physics
International Nuclear Information System (INIS)
Anon.
1977-01-01
Research activities at ORNL in theoretical solid state physics are described. Topics covered include: surface studies; particle-solid interactions; electronic and magnetic properties; and lattice dynamics
Maximum Power Tracking by VSAS approach for Wind Turbine, Renewable Energy Sources
Directory of Open Access Journals (Sweden)
Nacer Kouider Msirdi
2015-08-01
Full Text Available This paper gives a review of the most efficient algorithms designed to track the maximum power point (MPP) for catching the maximum wind power with a variable speed wind turbine (VSWT). We then design a new maximum power point tracking (MPPT) algorithm using the Variable Structure Automatic Systems (VSAS) approach. The proposed approach leads to efficient algorithms, as shown in this paper by analysis and simulations.
Maximum entropy deconvolution of low count nuclear medicine images
International Nuclear Information System (INIS)
McGrath, D.M.
1998-12-01
Maximum entropy is applied to the problem of deconvolving nuclear medicine images, with special consideration for very low count data. The physics of the formation of scintigraphic images is described, illustrating the phenomena which degrade planar estimates of the tracer distribution. Various techniques which are used to restore these images are reviewed, outlining the relative merits of each. The development and theoretical justification of maximum entropy as an image processing technique is discussed. Maximum entropy is then applied to the problem of planar deconvolution, highlighting the question of the choice of error parameters for low count data. A novel iterative version of the algorithm is suggested which allows the errors to be estimated from the predicted Poisson mean values. This method is shown to produce the exact results predicted by combining Poisson statistics and a Bayesian interpretation of the maximum entropy approach. A facility for total count preservation has also been incorporated, leading to improved quantification. In order to evaluate this iterative maximum entropy technique, two comparable methods, Wiener filtering and a novel Bayesian maximum likelihood expectation maximisation technique, were implemented. The comparison of results obtained indicated that this maximum entropy approach may produce equivalent or better measures of image quality than the compared methods, depending upon the accuracy of the system model used. The novel Bayesian maximum likelihood expectation maximisation technique was shown to be preferable over many existing maximum a posteriori methods due to its simplicity of implementation. A single parameter is required to define the Bayesian prior, which suppresses noise in the solution and may reduce the processing time substantially. Finally, maximum entropy deconvolution was applied as a pre-processing step in single photon emission computed tomography reconstruction of low count data. Higher contrast results were
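The Poisson-statistics restoration the thesis compares against can be sketched with a one-dimensional maximum-likelihood EM (Richardson-Lucy) deconvolution; the PSF and data below are illustrative, not from the thesis.

```python
# Minimal 1-D ML-EM (Richardson-Lucy) deconvolution for Poisson data:
# estimate <- estimate * correlate(observed / convolve(estimate, psf), psf)
import numpy as np

def mlem_deconvolve(observed, psf, iterations=200):
    psf = psf / psf.sum()
    psf_flip = psf[::-1]                  # correlation = convolution with flip
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate *= np.convolve(ratio, psf_flip, mode="same")
    return estimate

# A point source blurred by a small PSF is progressively sharpened back.
psf = np.array([0.25, 0.5, 0.25])
truth = np.zeros(15)
truth[7] = 100.0
observed = np.convolve(truth, psf, mode="same")
restored = mlem_deconvolve(observed, psf)
print(int(restored.argmax()))  # 7
```

The multiplicative update keeps the estimate non-negative, which is why EM-type schemes suit the very low count regime discussed above.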
Information theoretic description of networks
Wilhelm, Thomas; Hollunder, Jens
2007-11-01
We present a new information theoretic approach for network characterizations. It is developed to describe the general type of networks with n nodes and L directed and weighted links, i.e., it also works for the simpler undirected and unweighted networks. The new information theoretic measures for network characterizations are based on a transmitter-receiver analogy of effluxes and influxes. Based on these measures, we classify networks as either complex or non-complex and as either democracy or dictatorship networks. Directed networks, in particular, are furthermore classified as either information spreading or information collecting networks. The complexity classification is based on the information theoretic network complexity measure medium articulation (MA). It is proven that special networks with a medium number of links (L ~ n^1.5) show the theoretical maximum complexity MA = (log n)^2/2. A network is complex if its MA is larger than the average MA of appropriately randomized networks: MA > MAr. A network is of the democracy type if its redundancy R is larger than that of appropriately randomized networks; otherwise it is a dictatorship network. In democracy networks all nodes are, on average, of similar importance, whereas in dictatorship networks some nodes play distinguished roles in network functioning. In other words, democracy networks are characterized by cycling of information (or mass, or energy), while in dictatorship networks there is a straight through-flow from sources to sinks. The classification of directed networks into information spreading and information collecting networks is based on the conditional entropies of the considered networks (H(A/B) = uncertainty of the sender node if the receiver node is known, H(B/A) = uncertainty of the receiver node if the sender node is known): if H(A/B) > H(B/A), it is an information collecting network, otherwise an information spreading network. Finally, different real networks (directed and undirected, weighted and unweighted) are classified according to our general scheme.
International Nuclear Information System (INIS)
This report is a survey of the studies done in the Theoretical Physics Division of the Nuclear Physics Institute; the subjects studied in theoretical nuclear physics were the few-nucleon problem, nuclear structure, nuclear reactions, weak interactions, intermediate energy and high energy physics. In this last field, the subjects studied were field theory, group theory, symmetry and strong interactions [fr
Receiver function estimated by maximum entropy deconvolution
Institute of Scientific and Technical Information of China (English)
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring receiver functions in the time domain.
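The Toeplitz-solving step described above can be sketched with the classical Levinson-Durbin recursion for the prediction-error filter; the autocorrelation sequence below is illustrative (an AR(1) process), not seismic data.

```python
# Levinson-Durbin recursion: solve the Toeplitz normal equations for the
# linear-prediction coefficients from an autocorrelation sequence r.
def levinson_durbin(r, order):
    """Return (prediction coefficients a, final prediction-error power e)."""
    a = []
    e = r[0]
    for i in range(order):
        # Reflection coefficient; |k| < 1 keeps the recursion stable,
        # which is the stability property the abstract relies on.
        k = (r[i + 1] - sum(a[j] * r[i - j] for j in range(i))) / e
        a = [a[j] - k * a[i - 1 - j] for j in range(i)] + [k]
        e *= (1.0 - k * k)
    return a, e

# AR(1) process with coefficient 0.5: r[m] = 0.5**m.
r = [1.0, 0.5, 0.25, 0.125]
a, e = levinson_durbin(r, 2)
print(a)  # [0.5, 0.0]: the order-2 fit recovers the AR(1) structure
```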
Sogukpinar, Haci; Bozkurt, Ismail
2018-02-01
Aerodynamic performance of the airfoil plays the most important role in obtaining economically maximum efficiency from a wind turbine; therefore, the airfoil should have an ideal aerodynamic shape. In this study, an aerodynamic simulation of the S809 airfoil is conducted and the results are compared with previous NASA experimental results and NREL theoretical data. First, the lift coefficient, lift-to-drag ratio and pressure coefficient around the S809 airfoil are calculated with the SST turbulence model and compared with experimental and other theoretical data to confirm the correctness of the computational approach. The results indicate good correlation with both experimental and theoretical data. The calculations show that the lift-to-drag ratio increases with increasing relative velocity, attains its maximum at an angle of attack of about 6 degrees, and then starts to decrease. The comparison shows that the CFD code used in this calculation can predict the aerodynamic properties of the airfoil.
Optimal design of the gerotor (2-ellipses) for reducing maximum contact stress
Energy Technology Data Exchange (ETDEWEB)
Kwak, Hyo Seo; Li, Sheng Huan [Dept. of Mechanical Convergence Technology, Pusan National University, Busan (Korea, Republic of); Kim, Chul [School of Mechanical Design and Manufacturing, Busan Institute of Science and Technology, Busan (Korea, Republic of)
2016-12-15
The oil pump, which is used to lubricate engines and automatic transmissions, supplies working oil to the rotating elements to prevent wear, and the gerotor pump is used widely in the automobile industry. When wear occurs due to contact between the inner rotor and the outer rotor, the efficiency of the gerotor pump decreases rapidly, and elastic deformation from the contacts also causes vibration and noise. This paper reports the optimal design of a gerotor with a 2-ellipses combined lobe shape that reduces the maximum contact stress. An automatic program was developed in Matlab to calculate the Hertzian contact stress of the gerotor, and the effect of the design parameters on the maximum contact stress was analyzed. In addition, the theoretical analysis for obtaining the contact stress was verified by performing a fluid-structure coupled analysis using the commercial software Ansys, considering both the driving force of the inner rotor and the fluid pressure generated by the working oil.
Machine learning a theoretical approach
Natarajan, Balas K
2014-01-01
This is the first comprehensive introduction to computational learning theory. The author's uniform presentation of fundamental results and their applications offers AI researchers a theoretical perspective on the problems they study. The book presents tools for the analysis of probabilistic models of learning, tools that crisply classify what is and is not efficiently learnable. After a general introduction to Valiant's PAC paradigm and the important notion of the Vapnik-Chervonenkis dimension, the author explores specific topics such as finite automata and neural networks. The presentation
Maximum Power from a Solar Panel
Directory of Open Access Journals (Sweden)
Michael Miller
2010-01-01
Full Text Available Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current at maximum power. These quantities are found by maximizing the equation for power using differentiation. After the maximum values are found for each time of day, the voltage at maximum power, the current at maximum power, and the maximum power itself are each plotted as a function of the time of day.
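The differentiation step described above can be sketched numerically with a single-diode panel model: maximize P(V) = V*I(V) by locating the zero of dP/dV. The parameter values (ISC, I0, VT) are illustrative, not from the article.

```python
# Find a panel's maximum power point by setting dP/dV = 0, using a
# single-diode model I(V) = ISC - I0*(exp(V/VT) - 1).
import math

ISC, I0, VT = 5.0, 1e-9, 1.0   # illustrative short-circuit current,
                               # saturation current, thermal voltage scale

def current(v):
    return ISC - I0 * (math.exp(v / VT) - 1.0)

def power(v):
    return v * current(v)

def dp_dv(v, h=1e-6):
    return (power(v + h) - power(v - h)) / (2 * h)

# P(V) is concave here, so dP/dV decreases monotonically: bisect its zero
# between V = 0 (derivative positive) and the open-circuit voltage.
lo, hi = 0.0, VT * math.log(ISC / I0)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if dp_dv(mid) > 0:
        lo = mid
    else:
        hi = mid
v_mp = 0.5 * (lo + hi)
print(v_mp, power(v_mp))  # roughly 19.3 and 91.8 in these model units
```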
International Nuclear Information System (INIS)
Wang, P.-Y.; Hou, S.-S.
2005-01-01
In this paper, performance analysis and comparison based on the maximum power and maximum power density conditions have been conducted for an Atkinson cycle coupled to variable temperature heat reservoirs. The Atkinson cycle is internally reversible but externally irreversible, since there is external irreversibility of heat transfer during the processes of constant volume heat addition and constant pressure heat rejection. This study is based purely on classical thermodynamic analysis methodology, and all results and conclusions follow from classical thermodynamics. The power density, defined as the ratio of power output to maximum specific volume in the cycle, is taken as the optimization objective because it considers the effects of engine size as related to investment cost. The results show that an engine design based on maximum power density with constant effectiveness of the hot and cold side heat exchangers, or constant inlet temperature ratio of the heat reservoirs, will have smaller size but higher efficiency, compression ratio, expansion ratio and maximum temperature than one based on maximum power. From the viewpoints of engine size and thermal efficiency, an engine design based on maximum power density is better than one based on maximum power conditions. However, due to the higher compression ratio and maximum temperature in the cycle, an engine design based on maximum power density conditions requires tougher materials for engine construction than one based on maximum power conditions
Sundarraj, Pradeepkumar; Taylor, Robert A.; Banerjee, Debosmita; Maity, Dipak; Sinha Roy, Susanta
2017-01-01
Hybrid solar thermoelectric generators (HSTEGs) have garnered significant research attention recently due to their potential ability to cogenerate heat and electricity. In this paper, theoretical and experimental investigations of the electrical and thermal performance of a HSTEG system are reported. In order to validate the theoretical model, a laboratory scale HSTEG system (based on forced convection cooling) is developed. The HSTEG consists of six thermoelectric generator modules, an electrical heater, and a stainless steel cooling block. Our experimental analysis shows that the HSTEG is capable of producing a maximum electrical power output of 4.7 W, an electrical efficiency of 1.2% and thermal efficiency of 61% for an average temperature difference of 92 °C across the TEG modules with a heater power input of 382 W. These experimental results of the HSTEG system are found to be in good agreement with the theoretical prediction. This experimental/theoretical analysis can also serve as a guide for evaluating the performance of the HSTEG system with forced convection cooling.
Robust recognition via information theoretic learning
He, Ran; Yuan, Xiaotong; Wang, Liang
2014-01-01
This Springer Brief represents a comprehensive review of information theoretic methods for robust recognition. A variety of information theoretic methods have been proffered in the past decade, in a large variety of computer vision applications; this work brings them together, and attempts to impart the theory, optimization and usage of information entropy. The authors resort to a new information theoretic concept, correntropy, as a robust measure and apply it to solve robust face recognition and object recognition problems. For computational efficiency, the brief introduces the additive and multip
Maximum power point tracker based on fuzzy logic
International Nuclear Information System (INIS)
Daoud, A.; Midoun, A.
2006-01-01
The solar energy is used as a power source in photovoltaic power systems, and an intelligent power management system is needed to obtain the maximum power from the limited solar panels. With the changing of the sun's illumination, due to variation of the angle of incidence of solar radiation and of the temperature of the panels, a Maximum Power Point Tracker (MPPT) enables optimization of solar power generation. The MPPT is a sub-system designed to extract the maximum power from a power source. In the case of a solar panel power source, the maximum power point varies as a result of changes in its electrical characteristics, which in turn are functions of radiation dose, temperature, ageing and other effects. The MPPT maximizes the power output from the panels for a given set of conditions by detecting the best working point of the power characteristic and then controlling the current through the panels or the voltage across them. Many MPPT methods have been reported in the literature. These MPPT techniques can be classified into three main categories: lookup table methods, hill climbing methods and computational methods. The techniques vary according to their degree of sophistication, processing time and memory requirements. The perturbation and observation algorithm (a hill climbing technique) is commonly used due to its ease of implementation and relative tracking efficiency. However, it has been shown that when the insolation changes rapidly, the perturbation and observation method is slow to track the maximum power point. In recent years, fuzzy controllers have been used for maximum power point tracking. This method only requires the linguistic control rules for the maximum power point; a mathematical model is not required, and therefore this control method is easy to implement in a real control system. In this paper, we present a simple robust MPPT using fuzzy set theory, where the hardware consists of the microchip's microcontroller unit control card and
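The perturbation and observation (hill climbing) baseline mentioned above can be sketched in a few lines; the panel power curve below is a hypothetical stand-in for real measurements, not the authors' fuzzy controller.

```python
# Minimal perturb-and-observe MPPT loop: perturb the operating voltage,
# observe the power; if power dropped, reverse the perturbation direction.
def perturb_and_observe(measure_power, v0=10.0, dv=0.1, steps=500):
    v = v0
    p_prev = measure_power(v0)
    direction = 1.0
    for _ in range(steps):
        v += direction * dv          # perturb
        p = measure_power(v)
        if p < p_prev:               # observe: power fell,
            direction = -direction   # so reverse direction
        p_prev = p
    return v

# Toy power curve with its maximum at V = 17 (hypothetical values).
panel = lambda v: max(0.0, 100.0 - (v - 17.0) ** 2)
v_mpp = perturb_and_observe(panel)
print(v_mpp)  # settles into a small oscillation around 17
```

The fixed step size dv illustrates the trade-off the abstract notes: a larger step tracks faster under changing insolation but oscillates more widely around the MPP.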
Blatt, John M
1979-01-01
A classic work by two leading physicists and scientific educators endures as an uncommonly clear and cogent investigation and correlation of key aspects of theoretical nuclear physics. It is probably the most widely adopted book on the subject. The authors approach the subject as "the theoretical concepts, methods, and considerations which have been devised in order to interpret the experimental material and to advance our ability to predict and control nuclear phenomena." The present volume does not pretend to cover all aspects of theoretical nuclear physics. Its coverage is restricted to
A maximum entropy reconstruction technique for tomographic particle image velocimetry
International Nuclear Information System (INIS)
Bilsky, A V; Lozhkin, V A; Markovich, D M; Tokarev, M P
2013-01-01
This paper studies a novel approach for reducing tomographic PIV computational complexity. The proposed approach is an algebraic reconstruction technique, termed MENT (maximum entropy). This technique computes the three-dimensional light intensity distribution several times faster than SMART, using at least ten times less memory. Additionally, the reconstruction quality remains nearly the same as with SMART. This paper presents a theoretical computational performance comparison for MENT, SMART and MART, followed by validation using synthetic particle images. Both the theoretical assessment and the validation on synthetic images demonstrate a significant reduction in computational time. The data processing accuracy of MENT was compared to that of SMART in a slot jet experiment. A comparison of the average velocity profiles shows a high level of agreement between the results obtained with MENT and those obtained with SMART. (paper)
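The family of multiplicative algebraic reconstruction techniques that MENT and SMART belong to can be illustrated with a toy MART iteration on a tiny linear system; the 2x2 system below is illustrative, not a tomographic setup.

```python
# Toy MART (multiplicative ART) iteration for A x = b with x > 0:
# each row update rescales x by (b_i / (A_i . x)) ** A_ij, which drives
# the reconstruction toward a maximum-entropy-flavored solution.
import numpy as np

def mart(A, b, iterations=200, relax=1.0):
    x = np.ones(A.shape[1])                  # strictly positive start
    for _ in range(iterations):
        for i in range(A.shape[0]):          # one multiplicative update per row
            ratio = b[i] / max(A[i] @ x, 1e-12)
            x *= ratio ** (relax * A[i])
    return x

A = np.array([[1.0, 1.0],
              [1.0, 0.0]])
b = np.array([3.0, 1.0])                     # exact solution x = (1, 2)
x = mart(A, b)
print(np.round(x, 3))  # converges to [1. 2.]
```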
On the maximum of wave surface of sea waves
Energy Technology Data Exchange (ETDEWEB)
Zhang, B
1980-01-01
This article treats the wave surface as a normal stationary random process in order to estimate the maximum of the wave surface in a given time interval by means of theoretical results from probability theory. The results are represented by formulas (13) to (19) in this article. It is proved that, as the time interval approaches infinity, the formulas (3) and (6) for E(eta_max) that were derived in the references (Cartwright, Longuet-Higgins) can also be derived from the asymptotic distribution of the maximum of the wave surface provided in this article. The advantage of the results obtained from this point of view, as compared with the results in the references, is discussed.
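The asymptotic behavior underlying such results can be illustrated numerically: for N samples of a standard normal elevation, the expected maximum grows like sqrt(2 ln N). The iid Monte Carlo below is a simplified stand-in for the stationary-process theory in the article, not a reproduction of its formulas.

```python
# Monte Carlo estimate of E[max] of N standard normal samples versus the
# leading-order extreme-value estimate sqrt(2 ln N).
import math
import random

random.seed(1)
N, trials = 10_000, 200
mc = sum(max(random.gauss(0.0, 1.0) for _ in range(N))
         for _ in range(trials)) / trials
asymptotic = math.sqrt(2.0 * math.log(N))
print(mc, asymptotic)  # roughly 3.9 versus 4.3 at N = 10**4
```

The gap between the two numbers shows why higher-order correction terms, of the kind the article's formulas carry, matter at finite record lengths.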
Optimum detection for extracting maximum information from symmetric qubit sets
International Nuclear Information System (INIS)
Mizuno, Jun; Fujiwara, Mikio; Sasaki, Masahide; Akiba, Makoto; Kawanishi, Tetsuya; Barnett, Stephen M.
2002-01-01
We demonstrate a class of optimum detection strategies for extracting the maximum information from sets of equiprobable real symmetric qubit states of a single photon. These optimum strategies have been predicted by Sasaki et al. [Phys. Rev. A 59, 3325 (1999)]. The peculiar aspect is that the detections with at least three outputs suffice for optimum extraction of information regardless of the number of signal elements. The cases of ternary (or trine), quinary, and septenary polarization signals are studied where a standard von Neumann detection (a projection onto a binary orthogonal basis) fails to access the maximum information. Our experiments demonstrate that it is possible with present technologies to attain about 96% of the theoretical limit
DEFF Research Database (Denmark)
2002-01-01
The proceedings contains 8 papers from the Conference on Theoretical Computer Science. Topics discussed include: query by committee, linear separation and random walks; hardness results for neural network approximation problems; a geometric approach to leveraging weak learners; mind change...
Summary on Theoretical Aspects
Soffer, Jacques
2010-01-01
During the five days of this conference a very dense scientific program has enlightened our research fields, with the presentation of a large number of interesting lectures. I will try to summarize the theoretical aspects of some of these new results.
International Nuclear Information System (INIS)
Anon.
The studies in 1977 are reviewed. In theoretical nuclear physics: nuclear structure, nuclear reactions, intermediate energy physics; in elementary particle physics: field theory, strong interactions dynamics, nucleon-nucleon interactions, new particles, current algebra, symmetries and quarks are studied [fr
International Nuclear Information System (INIS)
Anon.
1980-01-01
Research activities of the theoretical physics division for 1979 are described. Short summaries are given of specific research work in the following fields: nuclear structure, nuclear reactions, intermediate energy physics, elementary particles [fr
African Journals Online (AJOL)
NICO
L-rhamnose and L-fucose: A Theoretical Approach ... L-rhamnose and L-fucose, by means of the Monte Carlo conformational search method. The energy of the conformers ..... which indicates an increased probability for the occurrence of.
Maximum discharge rate of liquid-vapor mixtures from vessels
International Nuclear Information System (INIS)
Moody, F.J.
1975-09-01
A discrepancy exists in theoretical predictions of the two-phase equilibrium discharge rate from pipes attached to vessels. Theory which predicts critical flow data in terms of pipe exit pressure and quality severely overpredicts flow rates in terms of vessel fluid properties. This study shows that the discrepancy is explained by the flow pattern. Due to decompression and flashing as fluid accelerates into the pipe entrance, the maximum discharge rate from a vessel is limited by choking of a homogeneous bubbly mixture. The mixture tends toward a slip flow pattern as it travels through the pipe, finally reaching a different choked condition at the pipe exit
Maximum permissible voltage of YBCO coated conductors
Energy Technology Data Exchange (ETDEWEB)
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine the maximum permissible voltage of three kinds of tapes. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I{sub c} degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • We examine the relationship between the maximum permissible voltage and resistance and temperature. - Abstract: Superconducting fault current limiters (SFCL) can reduce short-circuit currents in electrical power systems. One of the most important tasks in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (I{sub c}) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I{sub c} degradation or burnout occurs. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on the results for these samples, the total length of CC needed in the design of an SFCL can be determined.
On the maximum entropy distributions of inherently positive nuclear data
Energy Technology Data Exchange (ETDEWEB)
Taavitsainen, A., E-mail: aapo.taavitsainen@gmail.com; Vanhanen, R.
2017-05-11
The multivariate log-normal distribution is used by many authors and statistical uncertainty propagation programs for inherently positive quantities. Sometimes it is claimed that the log-normal distribution results from the maximum entropy principle, if only means, covariances and inherent positiveness of quantities are known or assumed to be known. In this article we show that this is not true. Assuming a constant prior distribution, the maximum entropy distribution is in fact a truncated multivariate normal distribution – whenever it exists. However, its practical application to multidimensional cases is hindered by the lack of a method to compute its location and scale parameters from means and covariances. Therefore, despite their theoretical disadvantage, the use of other distributions seems to be a practical necessity. - Highlights: • Statistical uncertainty propagation requires a sampling distribution. • The objective distribution of inherently positive quantities is determined. • The objectivity is based on the maximum entropy principle. • The maximum entropy distribution is the truncated normal distribution. • Applicability of log-normal or normal distribution approximation is limited.
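The abstract's central point, that the maximum entropy distribution for an inherently positive quantity is a truncated normal whose location and scale parameters are hard to recover from the target moments, can be illustrated with a minimal one-dimensional sketch. All parameter values below are hypothetical:

```python
import random

def sample_truncated_normal(mu, sigma, n, seed=0):
    # Rejection sampling: draw from normal(mu, sigma), keep only positive values.
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        x = rng.gauss(mu, sigma)
        if x > 0.0:
            out.append(x)
    return out

# Hypothetical location/scale parameters of the untruncated normal.
samples = sample_truncated_normal(mu=1.0, sigma=2.0, n=10000)
mean = sum(samples) / len(samples)
# Truncation pulls the mean above mu: the sample mean comes out near 2.0,
# not 1.0, so (mu, sigma) cannot be read off the target moments directly.
```

Even in one dimension the truncated mean differs from the location parameter; in the multivariate case no closed-form inversion from means and covariances is available, which is the practical obstacle the abstract describes.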
MAXIMUM POWER POINT TRACKING SYSTEM FOR PHOTOVOLTAIC STATION: A REVIEW
Directory of Open Access Journals (Sweden)
I. Elzein
2015-01-01
Full Text Available In recent years there has been growing attention towards the use of renewable energy sources. Among them, solar energy is one of the most promising green energy resources due to its environmental sustainability and inexhaustibility. However, photovoltaic (PV) systems suffer from high equipment costs and low efficiency. Moreover, the solar cell V-I characteristic is nonlinear and varies with irradiation and temperature. In general, there is a unique point of PV operation, called the Maximum Power Point (MPP), at which the PV system operates with maximum efficiency and produces its maximum output power. The location of the MPP is not known in advance, but it can be located either through calculation models or by search algorithms. MPPT techniques are therefore important for maintaining the PV array's high efficiency. Many different techniques for MPPT are discussed. This review paper will hopefully serve as a convenient tool for future work in PV power conversion.
Revealing the Maximum Strength in Nanotwinned Copper
DEFF Research Database (Denmark)
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...
Modelling maximum canopy conductance and transpiration in ...
African Journals Online (AJOL)
There is much current interest in predicting the maximum amount of water that can be transpired by Eucalyptus trees. It is possible that industrial waste water may be applied as irrigation water to eucalypts and it is important to predict the maximum transpiration rates of these plantations in an attempt to dispose of this ...
Directory of Open Access Journals (Sweden)
Damasen Ikwaba Paul
2015-01-01
Full Text Available This paper presents a theoretical and experimental optical evaluation and comparison of a symmetric Compound Parabolic Concentrator (CPC) and a V-trough collector. For a direct comparison of optical properties, both concentrators were deliberately designed to have the same geometrical concentration ratio (1.96), aperture area, absorber area, and maximum concentrator length. The theoretical optical evaluation of the CPC and V-trough collector was carried out using a ray-trace technique, while the experimental optical efficiency and solar energy flux distributions were analysed using an isolated-cell PV module method. Simulation analysis showed that for the CPC the highest optical efficiency was 95%, achieved in the interval of 0° to ±20°, whereas the highest outdoor experimental optical efficiency was 94% in the same interval of 0° to ±20°. For the V-trough collector, the highest optical efficiency for simulation and outdoor experiments was about 96% and 93%, respectively, both in the interval of 0° to ±5°. Simulation results also showed that the CPC and V-trough exhibit greater variation in illumination intensity distributions over the PV module surface at larger incidence angles than at lower incidence angles. On the other hand, the maximum power output of the cells with concentrators varied depending on the location of the cell in the PV module.
International Nuclear Information System (INIS)
Esfandiar, Habib; Korayem, Moharam Habibnejad
2015-01-01
In this study, the researchers examine the nonlinear dynamic analysis and determine the dynamic load carrying capacity (DLCC) of flexible manipulators. Manipulator modeling is based on Timoshenko beam theory (TBT), considering the effects of shear and rotational inertia. To eliminate the risk of shear locking, a new procedure based on a mixed finite element formulation is presented. In the proposed method, the shear deformation is free from the risk of shear locking and independent of the number of integration points along the element axis. Dynamic modeling of the manipulators is carried out for both small and large deformation models using the extended Hamilton principle. The system equations of motion are obtained using the nonlinear displacement-strain relationship and the second Piola-Kirchhoff stress tensor. In addition, a comprehensive formulation is developed to calculate the DLCC of flexible manipulators along a given path, considering constraints on end-effector accuracy, maximum motor torque and maximum stress in the manipulators. Simulation studies of a two-link flexible manipulator on a fixed base following linear and circular paths are conducted to evaluate the efficiency of the proposed method. Experimental results are also provided to validate the theoretical model. The findings demonstrate the efficiency and appropriate performance of the proposed method.
Energy Technology Data Exchange (ETDEWEB)
Esfandiar, Habib; Korayem, Moharam Habibnejad [Islamic Azad University, Tehran (Iran, Islamic Republic of)]
2015-09-15
In this study, the researchers examine the nonlinear dynamic analysis and determine the dynamic load carrying capacity (DLCC) of flexible manipulators. Manipulator modeling is based on Timoshenko beam theory (TBT), considering the effects of shear and rotational inertia. To eliminate the risk of shear locking, a new procedure based on a mixed finite element formulation is presented. In the proposed method, the shear deformation is free from the risk of shear locking and independent of the number of integration points along the element axis. Dynamic modeling of the manipulators is carried out for both small and large deformation models using the extended Hamilton principle. The system equations of motion are obtained using the nonlinear displacement-strain relationship and the second Piola-Kirchhoff stress tensor. In addition, a comprehensive formulation is developed to calculate the DLCC of flexible manipulators along a given path, considering constraints on end-effector accuracy, maximum motor torque and maximum stress in the manipulators. Simulation studies of a two-link flexible manipulator on a fixed base following linear and circular paths are conducted to evaluate the efficiency of the proposed method. Experimental results are also provided to validate the theoretical model. The findings demonstrate the efficiency and appropriate performance of the proposed method.
MXLKID: a maximum likelihood parameter identifier
International Nuclear Information System (INIS)
Gavel, D.T.
1980-07-01
MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of the system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables
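The maximum likelihood technique this record summarizes can be sketched in a few lines. This is not the MXLKID program itself (which is written in LRLTRAN and not reproduced here); it is a toy grid-search maximization of a Gaussian likelihood for a single hypothetical decay parameter:

```python
import math
import random

def neg_log_likelihood(theta, times, measurements, noise_sd):
    # Gaussian negative log-likelihood for the hypothetical model
    # y(t) = exp(-theta * t) + noise; constant terms are dropped
    # since they do not affect the location of the maximum.
    nll = 0.0
    for t, y in zip(times, measurements):
        r = y - math.exp(-theta * t)
        nll += 0.5 * (r / noise_sd) ** 2
    return nll

# Synthetic noisy measurements from a system with true theta = 0.7.
rng = random.Random(42)
noise_sd = 0.05
times = [0.1 * k for k in range(50)]
measurements = [math.exp(-0.7 * t) + rng.gauss(0.0, noise_sd) for t in times]

# Maximize the likelihood (minimize the NLL) by a simple grid search.
grid = [0.001 * i for i in range(1, 2000)]
theta_hat = min(grid, key=lambda th: neg_log_likelihood(th, times, measurements, noise_sd))
# theta_hat lands close to the true value 0.7.
```

A production identifier such as MXLKID would use a gradient-based maximizer rather than a grid, but the principle is the same: the parameter estimate is the argmax of the likelihood of the observed noisy data.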
Towards A Theoretical Biology: Reminiscences
Indian Academy of Sciences (India)
engaged in since the start of my career at the University of Chicago. Theoretical biology was ... research on theoretical problems in biology. Waddington, an ... aimed at stimulating the development of such a theoretical biology. The role the ...
Maximum neutron flux in thermal reactors
International Nuclear Information System (INIS)
Strugar, P.V.
1968-12-01
The direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core using the condition of maximum neutron flux while complying with thermal limitations. This paper shows that the problem can be solved by applying the variational calculus, i.e. by using the maximum principle of Pontryagin. The mathematical model of the reactor core is based on two-group neutron diffusion theory with some simplifications which make it amenable to the maximum principle. The solution for the optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples
Maximum allowable load on wheeled mobile manipulators
International Nuclear Information System (INIS)
Habibnejad Korayem, M.; Ghariblu, H.
2003-01-01
This paper develops a computational technique for finding the maximum allowable load of a mobile manipulator during a given trajectory. The maximum allowable loads which can be achieved by a mobile manipulator during a given trajectory are limited by a number of factors; the dynamic properties of the mobile base and the mounted manipulator, their actuator limitations, and the additional constraints applied to resolve the redundancy are probably the most important. To resolve the extra D.O.F. introduced by the base mobility, additional constraint functions are proposed directly in the task space of the mobile manipulator. Finally, in two numerical examples involving a two-link planar manipulator mounted on a differentially driven mobile base, application of the method to determining the maximum allowable load is verified. The simulation results demonstrate that the maximum allowable load on a desired trajectory does not have a unique value and depends directly on the additional constraint functions applied to resolve the motion redundancy
Maximum phytoplankton concentrations in the sea
DEFF Research Database (Denmark)
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collect...
Energy Technology Data Exchange (ETDEWEB)
Cohen, Andrew [Boston Univ., MA (United States); Schmaltz, Martin [Boston Univ., MA (United States); Katz, Emmanuel [Boston Univ., MA (United States); Rebbi, Claudio [Boston Univ., MA (United States); Glashow, Sheldon [Boston Univ., MA (United States); Brower, Richard [Boston Univ., MA (United States); Pi, So-Young [Boston Univ., MA (United States)
2016-09-30
interactions between quark and gluon particles, we have no clear idea how to express the proton state in terms of these quarks and gluons. This is because the proton, though a bound state of quarks and gluons, is not a state of a fixed number of particles due to strong interactions. Yet, understanding the proton state is very important in order to theoretically predict the reaction rates observed at the LHC in Geneva, which is a proton-proton collider. Katz has formulated a new approach to QFT, which among other things offers a way to adequately approximate the quantum wave function of a bound state at strong coupling. The approximation scheme is related to the fact that any sensible QFT (including that of the strong interactions) is at short distances approximately self-similar upon rescaling of space and time. It turns out that keeping track of the response upon this rescaling is important in efficiently parameterizing the state. Katz and collaborators have used this observation to approximate the state of the proton in toy versions of the strong force. In the late 60s Sheldon Glashow, Abdus Salam and Steven Weinberg (1979 Nobel Prize awardees) proposed a theory unifying the weak and electromagnetic interactions which assumed the existence of new particles, the W and Z bosons. The W and Z bosons were eventually detected in high-energy collisions in a particle accelerator at CERN, and the recent discovery of the Higgs boson at the Large Hadron Collider (LHC), also at CERN, completed the picture. However, deep theoretical considerations indicate that the theory by Glashow, Weinberg and Salam, often referred to as "the standard model", cannot be the whole story: the existence of new particles and new interactions at yet higher energies is widely anticipated. The experiments at the LHC are looking for these, while theorists, like Brower, Rebbi and collaborators, are investigating models for these new interactions.
Working in a large national collaboration with access to the most
International Nuclear Information System (INIS)
Cohen, Andrew; Schmaltz, Martin; Katz, Emmanuel; Rebbi, Claudio; Glashow, Sheldon; Brower, Richard; Pi, So-Young
2016-01-01
and gluon particles, we have no clear idea how to express the proton state in terms of these quarks and gluons. This is because the proton, though a bound state of quarks and gluons, is not a state of a fixed number of particles due to strong interactions. Yet, understanding the proton state is very important in order to theoretically predict the reaction rates observed at the LHC in Geneva, which is a proton-proton collider. Katz has formulated a new approach to QFT, which among other things offers a way to adequately approximate the quantum wave function of a bound state at strong coupling. The approximation scheme is related to the fact that any sensible QFT (including that of the strong interactions) is at short distances approximately self-similar upon rescaling of space and time. It turns out that keeping track of the response upon this rescaling is important in efficiently parameterizing the state. Katz and collaborators have used this observation to approximate the state of the proton in toy versions of the strong force. In the late 60s Sheldon Glashow, Abdus Salam and Steven Weinberg (1979 Nobel Prize awardees) proposed a theory unifying the weak and electromagnetic interactions which assumed the existence of new particles, the W and Z bosons. The W and Z bosons were eventually detected in high-energy collisions in a particle accelerator at CERN, and the recent discovery of the Higgs boson at the Large Hadron Collider (LHC), also at CERN, completed the picture. However, deep theoretical considerations indicate that the theory by Glashow, Weinberg and Salam, often referred to as 'the standard model', cannot be the whole story: the existence of new particles and new interactions at yet higher energies is widely anticipated. The experiments at the LHC are looking for these, while theorists, like Brower, Rebbi and collaborators, are investigating models for these new interactions.
Working in a large national collaboration with access to the most powerful DOE computers
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors proposed for use in maximum-likelihood sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depend only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which depends on N.
International Nuclear Information System (INIS)
Luo, Xiaoguang; Long, Kailin; Wang, Jun; Qiu, Teng; He, Jizhou; Liu, Nian
2014-01-01
Theoretical thermoelectric nanophysics models of low-dimensional electronic heat engine and refrigerator devices, comprising two-dimensional hot and cold reservoirs and an interconnecting filtered electron transport mechanism, have been established. The models were used to numerically simulate and evaluate the thermoelectric performance and energy conversion efficiencies of these low-dimensional devices, based on three different types of momentum-dependent electron transport filters, referred to herein as k{sub x}, k{sub y}, and k{sub r} filters. Assuming the Fermi-Dirac distribution of electrons, expressions for key thermoelectric performance parameters were derived for the resonant transport processes, in which the transmission of electrons has been approximated by a Lorentzian resonance function. Optimizations were carried out and the corresponding optimized design parameters were determined, including but not limited to the universal theoretical upper bound of the efficiency at maximum power for heat engines, and the maximum coefficient of performance for refrigerators. From the results, it was determined that the k{sub r} filter delivers the best thermoelectric performance, followed by the k{sub x} filter, and then the k{sub y} filter. For refrigerators with any one of the three filters, the optimum range for the full width at half maximum of the transport resonance was found to be of the order of k{sub B}T.
Microprocessor-controlled step-down maximum-power-point tracker for photovoltaic systems
Mazmuder, R. K.; Haidar, S.
1992-12-01
An efficient maximum power point tracker (MPPT) has been developed for use with a photovoltaic (PV) array and a load that requires a lower voltage than the PV array voltage. The MPPT makes the PV array operate at its maximum power point (MPP) under all insolation and temperature conditions, which ensures that the maximum amount of available PV power is delivered to the load. The performance of the MPPT has been studied under different insolation levels.
'Impulsar': Experimental and Theoretical Investigations
International Nuclear Information System (INIS)
Apollonov, V. V.
2008-01-01
The objective of the 'Impulsar' project is to carry out a cycle of experimental, engineering and technological work on the creation of a high-efficiency laser rocket engine. The project includes many organizations of the rocket industry and the Academy of Sciences of Russia. A high repetition rate pulse-periodic CO2 laser system project for launching will be presented. The optical system for 15 MW laser energy delivery and the optical matrix of the laser engine receiver will be discussed as well. The basic characteristics of the laser-based engine will be compared with theoretical predictions, together with important stages of further technology implementation (low frequency resonance). Relying on a wide cooperation of different branches of science and industry, it should be possible to use the accumulated potential for launching nano-vehicles within the upcoming 4-5 years
Theoretical information reuse and integration
Rubin, Stuart
2016-01-01
Information Reuse and Integration addresses the efficient extension and creation of knowledge through the exploitation of Kolmogorov complexity in the extraction and application of domain symmetry. Knowledge which seems to be novel can more often than not be recast as the image of a sequence of transformations which yield symmetric knowledge. When the size of those transformations and/or the length of that sequence of transforms exceeds the size of the image, then that image is said to be novel or random. It may also be that the new knowledge is random in the sense that no sequence of transforms that produces it exists, or at least none is known. The nine chapters comprising this volume incorporate symmetry, reuse, and integration as overt operational procedures or as operations built into the formal representations of data and operators employed. Either way, the aforementioned theoretical underpinnings of information reuse and integration are supported.
The Betz-Joukowsky limit for the maximum power coefficient of wind turbines
DEFF Research Database (Denmark)
Okulov, Valery; van Kuik, G.A.M.
2009-01-01
The article addresses the history of an important scientific result in wind energy. The maximum efficiency of an ideal wind turbine rotor is well known as the 'Betz limit', named after the German scientist who formulated this maximum in 1920. Also Lanchester, a British scientist, is associated...
Direct maximum parsimony phylogeny reconstruction from genotype data.
Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell
2007-12-05
Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data are more commonly available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data. Phylogenetic applications for autosomal data must therefore rely on other methods for first computationally inferring haplotypes from genotypes. In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower bound on the number of mutations that the genetic region has undergone.
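Counting the minimum number of mutations needed to explain observed data on a fixed tree reduces, in the classical haplotype setting, to Fitch's small-parsimony algorithm. The sketch below shows only that haplotype-side subproblem (the paper's direct-from-genotype method is more involved); the tree shape and leaf states are illustrative:

```python
def fitch_score(tree, leaf_states):
    # Small-parsimony (Fitch) count of state changes for one character
    # on a fixed rooted binary tree. Leaves are name strings; internal
    # nodes are (left, right) tuples.
    changes = 0

    def states(node):
        nonlocal changes
        if isinstance(node, str):
            return {leaf_states[node]}      # leaf: singleton state set
        left, right = node
        s1, s2 = states(left), states(right)
        if s1 & s2:
            return s1 & s2                  # children agree on some state
        changes += 1                        # disjoint sets force a mutation
        return s1 | s2

    states(tree)
    return changes

# Illustrative tree ((A,B),(C,D)) with one binary site per leaf:
tree = (("A", "B"), ("C", "D"))
score = fitch_score(tree, {"A": 0, "B": 0, "C": 1, "D": 1})
# → 1: a single mutation on the branch separating {A, B} from {C, D}
```

Summing this score over all sites gives the parsimony length of a candidate tree; a direct genotype method must additionally search over the conflated haplotype pairs, which is what makes the unphased problem harder.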
A performance analysis for MHD power cycles operating at maximum power density
International Nuclear Information System (INIS)
Sahin, Bahri; Kodal, Ali; Yavuz, Hasbi
1996-01-01
An analysis of the thermal efficiency of a magnetohydrodynamic (MHD) power cycle at maximum power density for a constant velocity type MHD generator has been carried out. The irreversibilities at the compressor and the MHD generator are taken into account. The results obtained from power density analysis were compared with those of maximum power analysis. It is shown that by using the power density criteria the MHD cycle efficiency can be increased effectively. (author)
Tamura, Koichiro; Peterson, Daniel; Peterson, Nicholas; Stecher, Glen; Nei, Masatoshi; Kumar, Sudhir
2011-01-01
Comparative analysis of molecular sequence data is essential for reconstructing the evolutionary histories of species and inferring the nature and extent of selective forces shaping the evolution of genes and species. Here, we announce the release of Molecular Evolutionary Genetics Analysis version 5 (MEGA5), user-friendly software for mining online databases, building sequence alignments and phylogenetic trees, and using methods of evolutionary bioinformatics in basic biology, biomedicine, and evolution. The newest addition in MEGA5 is a collection of maximum likelihood (ML) analyses for inferring evolutionary trees, selecting best-fit substitution models (nucleotide or amino acid), inferring ancestral states and sequences (along with probabilities), and estimating evolutionary rates site-by-site. In computer simulation analyses, ML tree inference algorithms in MEGA5 compared favorably with other software packages in terms of computational efficiency and the accuracy of the estimates of phylogenetic trees, substitution parameters, and rate variation among sites. The MEGA user interface has now been enhanced to be activity driven, making it easier to use for both beginners and experienced scientists. This version of MEGA is intended for the Windows platform, and it has been configured for effective use on Mac OS X and Linux desktops. It is available free of charge from http://www.megasoftware.net. PMID:21546353
Exploring the efficiency potential for an active magnetic regenerator
DEFF Research Database (Denmark)
Eriksen, Dan; Engelbrecht, Kurt; Haffenden Bahl, Christian Robert
2016-01-01
A novel rotary state of the art active magnetic regenerator refrigeration prototype was used in an experimental investigation with special focus on efficiency. Based on an applied cooling load, measured shaft power, and pumping power applied to the active magnetic regenerator, a maximum second-la...... and replacing the packed spheres with a theoretical parallel plate regenerator. Furthermore, significant potential efficiency improvements through optimized regenerator geometries are estimated and discussed......., especially for the pressure drop, significant improvements can be made to the machine. However, a large part of the losses may be attributed to regenerator irreversibilities. Considering these unchanged, an estimated upper limit to the second-law efficiency of 30% is given by eliminating parasitic losses...
Aerodynamic Limits on Large Civil Tiltrotor Sizing and Efficiency
Acree, C. W.
2014-01-01
The NASA Large Civil Tiltrotor (2nd generation, or LCTR2) is a useful reference design for technology impact studies. The present paper takes a broad view of technology assessment by examining the extremes of what aerodynamic improvements might hope to accomplish. Performance was analyzed with aerodynamically idealized rotor, wing, and airframe, representing the physical limits of a large tiltrotor. The analysis was repeated with more realistic assumptions, which revealed that increased maximum rotor lift capability is potentially more effective in improving overall vehicle efficiency than higher rotor or wing efficiency. To balance these purely theoretical studies, some practical limitations on airframe layout are also discussed, along with their implications for wing design. Performance of a less efficient but more practical aircraft with non-tilting nacelles is presented.
Overview of Ecological Agriculture with High Efficiency
Huang, Guo-qin; Zhao, Qi-guo; Gong, Shao-lin; Shi, Qing-hua
2012-01-01
Considering the presentation, connotation, characteristics, principles, patterns, and technologies of ecological agriculture with high efficiency, we conduct a comprehensive and systematic analysis and discussion of the theoretical and practical progress of ecological agriculture with high efficiency. (i) Ecological agriculture with high efficiency was first advanced in China in 1991. (ii) Ecological agriculture with high efficiency highlights "high efficiency", "ecology", and "combination". (iii) Ecol...
Compendium of theoretical physics
Wachter, Armin
2006-01-01
Mechanics, Electrodynamics, Quantum Mechanics, and Statistical Mechanics and Thermodynamics comprise the canonical undergraduate curriculum of theoretical physics. In Compendium of Theoretical Physics, Armin Wachter and Henning Hoeber offer a concise, rigorous and structured overview that will be invaluable for students preparing for their qualifying examinations, readers needing a supplement to standard textbooks, and research or industrial physicists seeking a bridge between extensive textbooks and formula books. The authors take an axiomatic-deductive approach to each topic, starting the discussion of each theory with its fundamental equations. By subsequently deriving the various physical relationships and laws in logical rather than chronological order, and by using a consistent presentation and notation throughout, they emphasize the connections between the individual theories. The reader’s understanding is then reinforced with exercises, solutions and topic summaries. Unique Features: Every topic is ...
Concluding theoretical remarks
International Nuclear Information System (INIS)
Ellis, J.
1986-01-01
My task in this talk is to review the happenings of this workshop from a theoretical perspective, and to emphasize lines for possible future research. My remarks are organized into a theoretical overview of the what, why (mainly the hierarchy problem), how (supersymmetry must be broken: softly or spontaneously, and if the latter, by means of a new U tilde(1) gauge group or through the chiral superfields), when (how heavy are supersymmetric partner particles in different types of theories) and where (can one find evidence for) supersymmetry. The last part discusses various ongoing and future searches for photinos γ tilde, gravitinos G tilde, the U vector boson, shiggses H tilde, squarks q tilde and sleptons l tilde, gluinos g tilde, winos W tilde and other gauginos, as well as hunts for indirect effects of supersymmetry, such as for example in baryon decay. Finally there is a little message of encouragement to our experimental colleagues, based on historical precedent. (orig.)
Friedrich, Harald
2017-01-01
This expanded and updated well-established textbook contains an advanced presentation of quantum mechanics adapted to the requirements of modern atomic physics. It includes topics of current interest such as semiclassical theory, chaos, atom optics and Bose-Einstein condensation in atomic gases. In order to facilitate the consolidation of the material covered, various problems are included, together with complete solutions. The emphasis on theory enables the reader to appreciate the fundamental assumptions underlying standard theoretical constructs and to embark on independent research projects. The fourth edition of Theoretical Atomic Physics contains an updated treatment of the sections involving scattering theory and near-threshold phenomena manifest in the behaviour of cold atoms (and molecules). Special attention is given to the quantization of weakly bound states just below the continuum threshold and to low-energy scattering and quantum reflection just above. Particular emphasis is laid on the fundamen...
Novel TPPO Based Maximum Power Point Method for Photovoltaic System
Directory of Open Access Journals (Sweden)
ABBASI, M. A.
2017-08-01
Full Text Available The photovoltaic (PV) system has great potential, and nowadays more PV capacity is installed than that of other renewable energy sources. However, the PV system cannot perform optimally due to its strong dependence on climate conditions. Because of this dependency, the PV system does not always operate at its maximum power point (MPP). Many MPP tracking methods have been proposed for this purpose. One of these is the Perturb and Observe (P&O) method, which is the most popular due to its simplicity, low cost and fast tracking. However, it deviates from the MPP in continuously changing weather conditions, especially under rapidly changing irradiance. A new Maximum Power Point Tracking (MPPT) method, Tetra Point Perturb and Observe (TPPO), has been proposed to improve PV system performance in changing irradiance conditions, and the effects of varying irradiance on the characteristic curves of a PV array module are delineated. The proposed MPPT method has shown better results in increasing the efficiency of a PV system.
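The classical P&O baseline that TPPO improves upon can be sketched as a simple hill climb on the PV power curve. The curve and all constants below are hypothetical, and this is the plain P&O method, not the TPPO variant the paper proposes:

```python
def perturb_and_observe(power, v0, step=0.5, iterations=60):
    # Classical P&O: perturb the operating voltage, observe the power,
    # and keep stepping in the direction that increased it.
    v = v0
    direction = 1.0
    p_prev = power(v)
    for _ in range(iterations):
        v += direction * step
        p = power(v)
        if p < p_prev:
            direction = -direction  # power dropped: reverse the perturbation
        p_prev = p
    return v

# Hypothetical concave PV power curve with its MPP at 17.0 V.
power_curve = lambda v: max(0.0, 100.0 - (v - 17.0) ** 2)
v_final = perturb_and_observe(power_curve, v0=10.0)
# v_final settles within one perturbation step of the 17.0 V maximum,
# illustrating the steady-state dithering around the MPP that P&O exhibits.
```

The steady-state oscillation around the MPP, and the tracker's tendency to be misled when irradiance shifts the whole curve between two observations, are exactly the weaknesses of plain P&O that motivate multi-point variants such as TPPO.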
Maximum power point tracker for photovoltaic power plants
Arcidiacono, V.; Corsi, S.; Lambri, L.
The paper describes two different closed-loop control criteria for the maximum power point tracking of the voltage-current characteristic of a photovoltaic generator. The two criteria are discussed and compared, inter alia, with regard to the setting-up problems that they pose. Although a detailed analysis is not embarked upon, the paper also provides some quantitative information on the energy advantages obtained by using electronic maximum power point tracking systems, as compared with the situation in which the point of operation of the photovoltaic generator is not controlled at all. Lastly, the paper presents two high-efficiency MPPT converters for experimental photovoltaic plants of the stand-alone and the grid-interconnected type.
Silicene: Recent theoretical advances
Lew Yan Voon, L. C.; Zhu, Jiajie; Schwingenschlögl, Udo
2016-01-01
Silicene is a two-dimensional allotrope of silicon with a puckered hexagonal structure closely related to the structure of graphene and that has been predicted to be stable. To date, it has been successfully grown in solution (functionalized) and on substrates. The goal of this review is to provide a summary of recent theoretical advances in the properties of both free-standing silicene as well as in interaction with molecules and substrates, and of proposed device applications.
MARKETING MIX THEORETICAL ASPECTS
Margarita Išoraitė
2016-01-01
The aim of this article is to analyze theoretical aspects of the marketing mix. The article discusses the marketing mix as one of the main sets of elements for setting marketing objectives and budgets. The importance of each element depends not only on the company and its activities, but also on the competition and time. All marketing elements are interrelated and should be seen in the whole of their actions. Some items may have greater importance than others; it depends main...
Theoretical high energy physics
International Nuclear Information System (INIS)
Lee, T.D.
1990-05-01
This report discusses progress on theoretical high energy physics at Columbia University in New York City. Some of the topics covered are: Chern-Simons gauge field theories; dynamical fermion QCD calculations; lattice gauge theory; the standard model of weak and electromagnetic interactions; the boson-fermion model of cuprate superconductors; the S-channel theory of superconductivity; and the axial anomaly and its relation to spin in the parton model
3. Theoretical Physics Division
International Nuclear Information System (INIS)
For the period September 1980 - August 1981, the studies of the Theoretical Physics Division have been compiled under the following headings: in nuclear physics, nuclear structure, nuclear reactions and intermediate energies; in particle physics, NN and N-antiN interactions, dual topological unitarization, the quark model and quantum chromodynamics, classical and quantum field theories, nonlinear integrable equations, and topological preons and Grand Unified Theories. A list of publications, lectures and meetings is included [fr]
Theoretical developments in SUSY
International Nuclear Information System (INIS)
Shifman, M.
2009-01-01
I am proud that I was personally acquainted with Julius Wess. We first met in 1999 when I was working on the Yuri Golfand Memorial Volume (The Many Faces of the Superworld, World Scientific, Singapore, 2000). I invited him to contribute, and he accepted this invitation with enthusiasm. After that, we met many times, mostly at various conferences in Germany and elsewhere. I was lucky to discuss with Julius questions of theoretical physics, and hear his recollections on how supersymmetry was born. In physics Julius was a visionary, who paved the way to generations of followers. In everyday life he was a kind and modest person, always ready to extend a helping hand to people who were in need of his help. I remember him telling me how concerned he was about the fate of theoretical physicists in Eastern Europe after the demise of communism. His ties with Israeli physicists bore a special character. I am honored by the opportunity to contribute an article to the Julius Wess Memorial Volume. I review theoretical developments of the recent years in non-perturbative supersymmetry. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Knissel, Jens; Grossklos, Marc [Institut Wohnen und Umwelt GmbH, Darmstadt (Germany); Werner, Johannes [Ingenieurbuero fuer Energieberatung, Haustechnik und Oekologische Konzepte GbR (eboek), Tuebingen (Germany)
2011-05-15
In energy-efficient buildings with mechanical ventilation and heat recovery, the ventilation heat losses of a building can be increased considerably by additionally opened windows, causing a significant rise in the heating demand. The Drd method (pressure-difference method) assumes that the negative pressure in a building after switching off the supply-air fan depends on whether all windows are closed or at least one window is open. In the research project under consideration, the detection of window opening by the Drd method is to be developed further. The operating conditions of the Drd method are investigated theoretically, and questions of the required building airtightness and plant characteristics are clarified.
Maximum gravitational redshift of white dwarfs
International Nuclear Information System (INIS)
Shapiro, S.L.; Teukolsky, S.A.
1976-01-01
The stability of uniformly rotating, cold white dwarfs is examined in the framework of the Parametrized Post-Newtonian (PPN) formalism of Will and Nordtvedt. The maximum central density and gravitational redshift of a white dwarf are determined as functions of five of the nine PPN parameters (γ, β, ζ₂, ζ₃, and ζ₄), the total angular momentum J, and the composition of the star. General relativity predicts that the maximum redshift is 571 km s⁻¹ for nonrotating carbon and helium dwarfs, but lower for stars composed of heavier nuclei. Uniform rotation can increase the maximum redshift to 647 km s⁻¹ for carbon stars (the neutronization limit) and to 893 km s⁻¹ for helium stars (the uniform rotation limit). The redshift distribution of a larger sample of white dwarfs may help determine the composition of their cores
Maximum entropy analysis of EGRET data
DEFF Research Database (Denmark)
Pohl, M.; Strong, A.W.
1997-01-01
EGRET data are usually analysed on the basis of the maximum-likelihood method \cite{ma96} in a search for point sources in excess of a model for the background radiation (e.g. \cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background like the Galactic Center region. Here we show images of such regions obtained by the quantified maximum-entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.
The Maximum Resource Bin Packing Problem
DEFF Research Database (Denmark)
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used… algorithms, First-Fit-Increasing and First-Fit-Decreasing, for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find…
Shower maximum detector for SDC calorimetry
International Nuclear Information System (INIS)
Ernwein, J.
1994-01-01
A prototype for the SDC end-cap electromagnetic (EM) calorimeter, complete with a pre-shower and a shower maximum detector, was tested in beams of electrons and pions at CERN by an SDC subsystem group. The prototype was manufactured from scintillator tiles and strips read out with 1 mm diameter wavelength-shifting fibers. The design and construction of the shower maximum detector are described, and results of laboratory tests on light yield and performance of the scintillator-fiber system are given. Preliminary results on energy and position measurements with the shower maximum detector in the test beam are shown. (authors). 4 refs., 5 figs
Topics in Bayesian statistics and maximum entropy
International Nuclear Information System (INIS)
Mutihac, R.; Cicuttin, A.; Cerdeira, A.; Stanciulescu, C.
1998-12-01
Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered as the best justification of Bayesian analysis and maximum entropy principle applied in natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)
Density estimation by maximum quantum entropy
International Nuclear Information System (INIS)
Silver, R.N.; Wallstrom, T.; Martz, H.F.
1993-01-01
A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The information divergence enforces global smoothing toward default models, convexity, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing is enforced by constraints on differential operators. The linear response of the estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood (evidence). The method is demonstrated on textbook data sets
Maximum entropy principle and hydrodynamic models in statistical mechanics
International Nuclear Information System (INIS)
Trovato, M.; Reggiani, L.
2012-01-01
This review presents the state of the art of the maximum entropy principle (MEP) in its classical and quantum (QMEP) formulations. Within the classical MEP we overview a general theory able to provide, in a dynamical context, the macroscopic relevant variables for carrier transport in the presence of electric fields of arbitrary strength. For the macroscopic variables, the linearized maximum entropy approach is developed, including full-band effects within a total-energy scheme. Under spatially homogeneous conditions, we construct a closed set of hydrodynamic equations for the small-signal (dynamic) response of the macroscopic variables. The coupling between the driving field and the energy dissipation is analyzed quantitatively by using an arbitrary number of moments of the distribution function. Analogously, the theoretical approach is applied to many one-dimensional n⁺nn⁺ submicron Si structures using different band-structure models, different doping profiles, and different applied biases, and is validated by comparing numerical calculations with ensemble Monte Carlo simulations and with available experimental data. Within the quantum MEP we introduce a quantum entropy functional of the reduced density matrix, and the principle of quantum maximum entropy is then asserted as a fundamental principle of quantum statistical mechanics. Accordingly, we have developed a comprehensive theoretical formalism to construct rigorously a closed quantum hydrodynamic transport model within a Wigner-function approach. The theory is formulated both in thermodynamic equilibrium and nonequilibrium conditions, and the quantum contributions are obtained by only assuming that the Lagrange multipliers can be expanded in powers of ħ², ħ being the reduced Planck constant. In particular, by using an arbitrary number of moments, we prove that (i) on a macroscopic scale all nonlocal effects, compatible with the uncertainty principle, are imputable to high-order spatial derivatives both of the
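The classical MEP described above can be illustrated on a toy discrete system: maximizing Shannon entropy subject to a fixed mean energy yields a Boltzmann-form distribution. A minimal sketch, with hypothetical energy levels and the Lagrange multiplier β found by bisection:

```python
import numpy as np

def max_entropy_distribution(energies, mean_energy, tol=1e-10):
    """Maximum-entropy distribution over discrete energy levels with a
    prescribed mean energy. The constrained maximization has the closed
    form p_i ∝ exp(-beta * E_i); beta is found by bisection."""
    E = np.asarray(energies, dtype=float)

    def mean_at(beta):
        w = np.exp(-beta * (E - E.min()))   # shift exponent for stability
        return float((E * w).sum() / w.sum())

    lo, hi = -50.0, 50.0                    # assumed-wide bracket for beta
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        # the mean energy decreases monotonically with beta
        if mean_at(mid) > mean_energy:
            lo = mid
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    w = np.exp(-beta * (E - E.min()))
    return w / w.sum(), beta
```

For a mean energy below the uniform-distribution average, the solver returns β > 0 and a monotonically decreasing (Boltzmann-like) distribution, the discrete analogue of the classical MEP closure discussed above.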
MPBoot: fast phylogenetic maximum parsimony tree inference and bootstrap approximation.
Hoang, Diep Thi; Vinh, Le Sy; Flouri, Tomáš; Stamatakis, Alexandros; von Haeseler, Arndt; Minh, Bui Quang
2018-02-02
The nonparametric bootstrap is widely used to measure the branch support of phylogenetic trees. However, bootstrapping is computationally expensive and remains a bottleneck in phylogenetic analyses. Recently, an ultrafast bootstrap approximation (UFBoot) approach was proposed for maximum likelihood analyses. However, such an approach is still missing for maximum parsimony. To close this gap we present MPBoot, an adaptation and extension of UFBoot to compute branch supports under the maximum parsimony principle. MPBoot works for both uniform and non-uniform cost matrices. Our analyses on biological DNA and protein data showed that under uniform cost matrices, MPBoot runs on average 4.7 (DNA) to 7 times (protein data) (range: 1.2-20.7) faster than the standard parsimony bootstrap implemented in PAUP*, but 1.6 (DNA) to 4.1 times (protein data) slower than the standard bootstrap with a fast search routine in TNT (fast-TNT). However, for non-uniform cost matrices MPBoot is 5 (DNA) to 13 times (protein data) (range: 0.3-63.9) faster than fast-TNT. We note that MPBoot achieves better scores more frequently than PAUP* and fast-TNT. However, this effect is less pronounced if an intensive but slower search in TNT is invoked. Moreover, experiments on large-scale simulated data show that while both PAUP* and TNT bootstrap estimates are too conservative, MPBoot bootstrap estimates appear more unbiased. MPBoot provides an efficient alternative to the standard maximum parsimony bootstrap procedure. It shows favorable performance in terms of run time, the capability of finding a maximum parsimony tree, and high bootstrap accuracy on simulated as well as empirical data sets. MPBoot is easy-to-use, open-source and available at http://www.cibiv.at/software/mpboot .
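The standard nonparametric bootstrap that MPBoot approximates resamples alignment columns (sites) with replacement. A minimal sketch, where `infer_split` is a hypothetical placeholder standing in for a real maximum-parsimony tree search:

```python
import random

def bootstrap_replicates(alignment, n_reps=100, rng=None):
    """Nonparametric bootstrap of an alignment: each replicate resamples
    the columns (sites) with replacement, keeping the original length.
    `alignment` is a list of equal-length sequences."""
    rng = rng or random.Random(0)
    n_sites = len(alignment[0])
    for _ in range(n_reps):
        cols = [rng.randrange(n_sites) for _ in range(n_sites)]
        yield ["".join(seq[c] for c in cols) for seq in alignment]

def support(best_split, infer_split, alignment, n_reps=100):
    """Bootstrap support: fraction of replicates whose inferred split
    matches the split found on the original data. `infer_split` stands
    in for a real tree-search routine (hypothetical placeholder)."""
    hits = sum(infer_split(rep) == best_split
               for rep in bootstrap_replicates(alignment, n_reps))
    return hits / n_reps
```

The expense criticized in the abstract comes from running the full tree search inside `infer_split` once per replicate; UFBoot-style approximations avoid exactly that repeated search.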
Maximum Likelihood Joint Tracking and Association in Strong Clutter
Directory of Open Access Journals (Sweden)
Leonid I. Perlovsky
2013-01-01
Full Text Available We have developed a maximum likelihood formulation for a joint detection, tracking and association problem. An efficient non-combinatorial algorithm for this problem is developed in case of strong clutter for radar data. By using an iterative procedure of the dynamic logic process “from vague-to-crisp” explained in the paper, the new tracker overcomes the combinatorial complexity of tracking in highly-cluttered scenarios and results in an orders-of-magnitude improvement in signal-to-clutter ratio.
Application of Maximum Entropy Distribution to the Statistical Properties of Wave Groups
Institute of Scientific and Technical Information of China (English)
[Anonymous]
2007-01-01
The new distributions of the statistics of wave groups based on the maximum entropy principle are presented. The maximum entropy distributions appear to be superior to conventional distributions when applied to a limited amount of information. Their application to wave group properties shows the effectiveness of the maximum entropy distribution. An FFT filtering method is employed to obtain the wave envelope quickly and efficiently. Comparisons of both the maximum entropy distribution and the distribution of Longuet-Higgins (1984) with laboratory wind-wave data show that the former gives a better fit.
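The FFT-based envelope extraction mentioned above can be sketched via the analytic signal (a Hilbert transform computed with the FFT). A minimal NumPy version, assuming a narrow-band wave record:

```python
import numpy as np

def wave_envelope(x):
    """Envelope of a narrow-band record via the analytic signal:
    zero the negative frequencies of the FFT, double the positive ones,
    and take the magnitude of the inverse transform."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0          # Nyquist bin kept once
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(X * h))
```

For a modulated carrier whose modulation bandwidth is well below the carrier frequency, this recovers the modulation envelope essentially exactly, which is what makes group statistics cheap to extract from long records.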
Theoretical solid state physics
Haug, Albert
2013-01-01
Theoretical Solid State Physics, Volume 1 focuses on the study of solid state physics. The volume first takes a look at the basic concepts and structures of solid state physics, including potential energies of solids, concept and classification of solids, and crystal structure. The book then explains single-electron approximation wherein the methods for calculating energy bands; electron in the field of crystal atoms; laws of motion of the electrons in solids; and electron statistics are discussed. The text describes general forms of solutions and relationships, including collective electron i
Theoretical astrophysics an introduction
Bartelmann, Matthias
2013-01-01
A concise yet comprehensive introduction to the central theoretical concepts of modern astrophysics, presenting hydrodynamics, radiation, and stellar dynamics all in one textbook. Adopting a modular structure, the author illustrates a small number of fundamental physical methods and principles, which are sufficient to describe and understand a wide range of seemingly very diverse astrophysical phenomena and processes. For example, the formulae that define the macroscopic behavior of stellar systems are all derived in the same way from the microscopic distribution function. This function it
International Nuclear Information System (INIS)
Barrett, R.C.
1979-01-01
Nowadays the 'experimental' charge densities are produced with convincing error estimates thanks to new methods and techniques. In addition, the accuracy of these experiments means that r.m.s. radii are known to within a few hundredths of a fermi. Because of that accuracy the theorists are left far behind. In order to show which theoretical possibilities exist at the moment, we discuss the single-particle shell model and the Hartree-Fock or mean-field approximation. Corrections to the mean-field approximation are described. Finally, some examples and conclusions are presented. (KBE)
Information theoretic preattentive saliency
DEFF Research Database (Denmark)
Loog, Marco
2011-01-01
Employing an information theoretic operational definition of bottom-up attention from the field of computational visual perception, a very general expression for saliency is provided. As opposed to many of the current approaches to determining a saliency map, there is no need for an explicit data… of which features, image information is described. We illustrate our result by determining a few specific saliency maps based on particular choices of features. One of them makes the link with the mapping underlying well-known Harris interest points, which is a result recently obtained in isolation…
Theoretical high energy physics
International Nuclear Information System (INIS)
Lee, T.D.
1991-01-01
This report discusses theoretical research in high energy physics at Columbia University. Some of the research topics discussed are: quantum chromodynamics with dynamical fermions; lattice gauge theory; scattering of neutrinos by photons; atomic physics constraints on the properties of ultralight-ultraweak gauge bosons; black holes; Chern-Simons physics; the S-channel theory of superconductivity; the charged boson system; gluon-gluon interactions; high energy scattering in the presence of instantons; anyon physics; causality constraints on primordial magnetic monopoles; charged black holes with scalar hair; properties of Chern-Simons-Higgs solitons; and the extended inflationary universe
Shivamoggi, Bhimsen K
1998-01-01
"Although there are many texts and monographs on fluid dynamics, I do not know of any which is as comprehensive as the present book. It surveys nearly the entire field of classical fluid dynamics in an advanced, compact, and clear manner, and discusses the various conceptual and analytical models of fluid flow." - Foundations of Physics on the first edition. Theoretical Fluid Dynamics functions equally well as a graduate-level text and a professional reference. Steering a middle course between the empiricism of engineering and the abstractions of pure mathematics, the author focuses
Theoretical Optics An Introduction
Römer, Hartmann
2004-01-01
Starting from basic electrodynamics, this volume provides a solid yet concise introduction to theoretical optics, containing topics such as nonlinear optics, light-matter interaction, and modern topics in quantum optics, including entanglement, cryptography, and quantum computation. The author, with many years of experience in teaching and research, goes well beyond the scope of traditional lectures, enabling readers to keep up with the current state of knowledge. Both content and presentation make it essential reading for graduate and PhD students, as well as a valuable reference for researchers
Preliminary attempt on maximum likelihood tomosynthesis reconstruction of DEI data
International Nuclear Information System (INIS)
Wang Zhentian; Huang Zhifeng; Zhang Li; Kang Kejun; Chen Zhiqiang; Zhu Peiping
2009-01-01
Tomosynthesis is a three-dimensional reconstruction method that can remove the effect of superimposition with limited-angle projections. It is especially promising in mammography, where radiation dose is a concern. In this paper, we propose a maximum likelihood tomosynthesis reconstruction algorithm (ML-TS) for the apparent absorption data of diffraction enhanced imaging (DEI). The motivation of this contribution is to develop a tomosynthesis algorithm for low-dose or noisy circumstances and to bring DEI closer to clinical application. The statistical models of DEI data are analyzed theoretically, and the proposed algorithm is validated with experimental data from the Beijing Synchrotron Radiation Facility (BSRF). The results of ML-TS show better contrast than those of the well-known 'shift-and-add' algorithm and the FBP algorithm. (authors)
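A generic Poisson maximum-likelihood reconstruction of the kind referred to here is the multiplicative MLEM update. The sketch below is a textbook version under that assumption, not the authors' exact ML-TS algorithm:

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Poisson maximum-likelihood EM reconstruction:
    x <- x * A^T(y / Ax) / A^T 1.
    A: system matrix (measurements x pixels), y: measured counts.
    A generic sketch, not the paper's ML-TS."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])        # sensitivity (column sums)
    for _ in range(n_iter):
        proj = A @ x                        # forward projection
        ratio = np.where(proj > 0, y / proj, 0.0)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

The multiplicative form keeps the estimate non-negative automatically, which is one reason ML-type updates behave well on the low-dose, noisy data that motivate ML-TS.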
Nonsymmetric entropy and maximum nonsymmetric entropy principle
International Nuclear Information System (INIS)
Liu Chengshi
2009-01-01
Under the framework of a statistical model, the concept of nonsymmetric entropy, which generalizes Boltzmann's entropy and Shannon's entropy, is defined. The maximum nonsymmetric entropy principle is proved. Some important distribution laws, such as the power law, can be derived from this principle naturally. Notably, nonsymmetric entropy is more convenient than other entropies, such as Tsallis's entropy, in deriving power laws.
Maximum potential preventive effect of hip protectors
van Schoor, N.M.; Smit, J.H.; Bouter, L.M.; Veenings, B.; Asma, G.B.; Lips, P.T.A.M.
2007-01-01
OBJECTIVES: To estimate the maximum potential preventive effect of hip protectors in older persons living in the community or homes for the elderly. DESIGN: Observational cohort study. SETTING: Emergency departments in the Netherlands. PARTICIPANTS: Hip fracture patients aged 70 and older who
Maximum gain of Yagi-Uda arrays
DEFF Research Database (Denmark)
Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.
1971-01-01
Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum… Yagi-Uda arrays with equal and unequal spacing have also been optimised, with experimental verification…
Correlation between maximum dry density and cohesion
African Journals Online (AJOL)
HOD
represents maximum dry density, signifies plastic limit and is liquid limit. Researchers [6, 7] estimate compaction parameters. Aside from the correlation existing between compaction parameters and other physical quantities there are some other correlations that have been investigated by other researchers. The well-known.
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S³ universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN² / (M_pl y_e⁵), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
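The closing relation can be checked at the order-of-magnitude level. The input values below are rough standard estimates assumed for illustration (T_BBN ~ 1 MeV, M_pl ~ 1.22e19 GeV, y_e ~ 2.9e-6):

```python
# Order-of-magnitude check of v_h ~ T_BBN^2 / (M_pl * y_e^5),
# all quantities in GeV. Input values are rough assumptions.
t_bbn = 1e-3          # BBN temperature, ~1 MeV
m_pl = 1.22e19        # Planck mass
y_e = 2.9e-6          # electron Yukawa, ~ sqrt(2) m_e / 246 GeV

v_h = t_bbn**2 / (m_pl * y_e**5)   # comes out at a few hundred GeV
```

With these inputs the estimate lands in the few-hundred-GeV range, consistent with the O(300 GeV) maximum quoted in the abstract.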
The maximum-entropy method in superspace
Czech Academy of Sciences Publication Activity Database
van Smaalen, S.; Palatinus, Lukáš; Schneider, M.
2003-01-01
Roč. 59, - (2003), s. 459-469 ISSN 0108-7673 Grant - others: DFG(DE) XX Institutional research plan: CEZ:AV0Z1010914 Keywords: maximum-entropy method * aperiodic crystals * electron density Subject RIV: BM - Solid Matter Physics; Magnetism Impact factor: 1.558, year: 2003
Achieving maximum sustainable yield in mixed fisheries
Ulrich, Clara; Vermard, Youen; Dolder, Paul J.; Brunel, Thomas; Jardim, Ernesto; Holmes, Steven J.; Kempf, Alexander; Mortensen, Lars O.; Poos, Jan Jaap; Rindorf, Anna
2017-01-01
Achieving single species maximum sustainable yield (MSY) in complex and dynamic fisheries targeting multiple species (mixed fisheries) is challenging because achieving the objective for one species may mean missing the objective for another. The North Sea mixed fisheries are a representative example
5 CFR 534.203 - Maximum stipends.
2010-01-01
... maximum stipend established under this section. (e) A trainee at a non-Federal hospital, clinic, or medical or dental laboratory who is assigned to a Federal hospital, clinic, or medical or dental... Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY UNDER OTHER SYSTEMS Student...
Minimal length, Friedmann equations and maximum density
Energy Technology Data Exchange (ETDEWEB)
Awad, Adel [Center for Theoretical Physics, British University of Egypt,Sherouk City 11837, P.O. Box 43 (Egypt); Department of Physics, Faculty of Science, Ain Shams University,Cairo, 11566 (Egypt); Ali, Ahmed Farag [Centre for Fundamental Physics, Zewail City of Science and Technology,Sheikh Zayed, 12588, Giza (Egypt); Department of Physics, Faculty of Science, Benha University,Benha, 13518 (Egypt)
2014-06-16
Inspired by Jacobson's thermodynamic approach, Cai et al. have shown the emergence of the Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation http://dx.doi.org/10.1103/PhysRevD.75.084003 of the Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure p(ρ,a) leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature k. As an example we study the evolution of the equation of state p = ωρ through its phase-space diagram to show the existence of a maximum energy density which is reachable in a finite time.
Theoretical Study of the Compound Parabolic Trough Solar Collector
Dr. Subhi S. Mahammed; Dr. Hameed J. Khalaf; Tadahmun A. Yassen
2012-01-01
Theoretical design of a compound parabolic trough solar collector (CPC) without tracking is presented in this work. The thermal efficiency is obtained by using a FORTRAN 90 program. The thermal efficiency is between 60% and 67% at a mass flow rate between 0.02 and 0.03 kg/s and a concentration ratio of 3.8, without the need for a tracking system. The total and diffuse radiation is calculated for Tikrit city using theoretical equations. Good agreement is found between the present work and previous work.
Straathof, Adrie J.J.; Bampouli, A.
2017-01-01
Carbohydrates are the prevailing biomass components available for bio-based production. The most direct way to convert carbohydrates into commodity chemicals is by one-step conversion at maximum theoretical yield, such as by anaerobic fermentation without side product formation. Considering these
Working hard and working smart: Motivation and ability during typical and maximum performance
Klehe, U.-C.; Anderson, N.
2007-01-01
The distinction between what people can do (maximum performance) and what they will do (typical performance) has received considerable theoretical but scant empirical attention in industrial-organizational psychology. This study of 138 participants performing an Internet-search task offers an
Maximum-performance fiber-optic irradiation with nonimaging designs.
Fang, Y; Feuermann, D; Gordon, J M
1997-10-01
A range of practical nonimaging designs for optical fiber applications is presented. Rays emerging from a fiber over a restricted angular range (small numerical aperture) are needed to illuminate a small near-field detector at maximum radiative efficiency. These designs range from pure reflector (all-mirror), to pure dielectric (refractive and based on total internal reflection) to lens-mirror combinations. Sample designs are shown for a specific infrared fiber-optic irradiation problem of practical interest. Optical performance is checked with computer three-dimensional ray tracing. Compared with conventional imaging solutions, nonimaging units offer considerable practical advantages in compactness and ease of alignment as well as noticeably superior radiative efficiency.
Directory of Open Access Journals (Sweden)
Arbutina Bojan
2011-01-01
Full Text Available AM CVn-type stars and ultra-compact X-ray binaries are extremely interesting semi-detached close binary systems in which the Roche lobe filling component is a white dwarf transferring mass to another white dwarf, neutron star or black hole. Earlier theoretical considerations show that there is a maximum mass ratio of AM CVn-type binary systems (qmax ≈ 2/3) below which the mass transfer is stable. In this paper we derive a slightly different value for qmax and, more interestingly, by applying the same procedure, we find the maximum expected white dwarf mass in ultra-compact X-ray binaries.
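The classical qmax ≈ 2/3 estimate can be reproduced with a short stability calculation, assuming conservative mass transfer, the Paczyński Roche-lobe approximation R_L/a ∝ (M_donor/M_total)^(1/3), and a white-dwarf mass-radius exponent of −1/3 (the paper's refined value comes from a more detailed treatment than this sketch):

```python
def bisect_root(f, lo, hi, tol=1e-12):
    """Simple bisection, assuming f(lo) and f(hi) bracket a single root."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

ZETA_WD = -1.0 / 3.0  # d(ln R)/d(ln M) for a low-mass white dwarf, R ~ M^(-1/3)

def zeta_roche(q):
    # d(ln R_L)/d(ln M_donor) for conservative transfer with the Paczynski
    # lobe R_L/a ~ (M_donor/M_total)^(1/3): orbital momentum conservation
    # gives d(ln a)/d(ln M_donor) = 2*q - 2, hence 2*q - 2 + 1/3.
    return 2.0 * q - 5.0 / 3.0

# stability boundary: the donor must shrink no slower than its Roche lobe
q_max = bisect_root(lambda q: zeta_roche(q) - ZETA_WD, 0.1, 1.0)
```

Setting the two logarithmic derivatives equal, 2q − 5/3 = −1/3, gives exactly q_max = 2/3, the classical threshold the abstract refines.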
Dark matter: Theoretical perspectives
International Nuclear Information System (INIS)
Turner, M.S.
1993-01-01
The author both reviews and makes the case for the current theoretical prejudice: a flat Universe whose dominant constituent is nonbaryonic dark matter, emphasizing that this is still a prejudice and not yet fact. The theoretical motivation for nonbaryonic dark matter is discussed in the context of current elementary-particle theory, stressing that (i) there are no dark-matter candidates within the "standard model" of particle physics, (ii) there are several compelling candidates within attractive extensions of the standard model of particle physics, and (iii) the motivation for these compelling candidates comes first and foremost from particle physics. The dark-matter problem is now a pressing issue in both cosmology and particle physics, and the detection of particle dark matter would provide evidence for "new physics." The compelling candidates are a very light axion (10^-6 to 10^-4 eV), a light neutrino (20 to 90 eV), and a heavy neutralino (10 GeV to 2 TeV). The production of these particles in the early Universe and the prospects for their detection are also discussed. The author briefly mentions more exotic possibilities for the dark matter, including a nonzero cosmological constant, superheavy magnetic monopoles, and decaying neutrinos. 119 refs
Theoretical physics. Quantum mechanics
International Nuclear Information System (INIS)
Rebhan, Eckhard
2008-01-01
Following the first two comprehensive volumes of the author's Theoretical Physics, covering Mechanics and Electrodynamics, Quantum Mechanics now appears as a slimmer single volume. The illustrative approach via wave mechanics is presented first. The more abstract Hilbert-space formulation is then introduced through postulates, which the preceding wave mechanics makes sufficiently plausible. All concepts of quantum mechanics that conflict with the intuitive understanding formed by macroscopic experience are discussed extensively and made understandable by means of many examples as well as problems, most of them provided with solutions. An extensive chapter is devoted to the interpretation of quantum mechanics. The book arose from courses on theoretical physics that the author held at the Heinrich Heine University in Duesseldorf, and was adapted to the needs of students over numerous iterations. It is designed to remain useful after the course of study as a reference book or for review. All problems are treated so thoroughly and in such detail that each step is separately reproducible. Much care has been taken over motivation and clarity.
Exact parallel maximum clique algorithm for general and protein graphs.
Depolli, Matjaž; Konc, Janez; Rozman, Kati; Trobec, Roman; Janežič, Dušanka
2013-09-23
A new exact parallel maximum clique algorithm MaxCliquePara, which finds the maximum clique (the fully connected subgraph) in undirected general and protein graphs, is presented. First, a new branch-and-bound algorithm for finding a maximum clique on a single computer core, which builds on ideas presented in two published state-of-the-art sequential algorithms, is implemented. The new sequential MaxCliqueSeq algorithm is faster than the reference algorithms on both DIMACS benchmark graphs and protein-derived product graphs used for protein structural comparisons. Next, the MaxCliqueSeq algorithm is parallelized by splitting the branch-and-bound search tree across multiple cores, resulting in the MaxCliquePara algorithm. The ability to exploit all cores efficiently makes the new parallel MaxCliquePara algorithm markedly superior to other tested algorithms. On a 12-core computer, the parallelization provides up to 2 orders of magnitude faster execution on the large DIMACS benchmark graphs and up to an order of magnitude faster execution on protein product graphs. The algorithms are freely accessible at http://commsys.ijs.si/~matjaz/maxclique.
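The branch-and-bound scheme described above can be sketched compactly, using a greedy-colouring upper bound for pruning; this is a generic illustration of the technique, not the MaxCliqueSeq/MaxCliquePara implementation:

```python
def max_clique(adj):
    """Branch-and-bound maximum clique with a greedy-colouring bound
    (a generic sketch of the strategy, not the authors' code).
    adj: dict mapping each vertex to its set of neighbours."""
    best = []

    def colour_bound(cands):
        # Greedily partition cands into independent colour classes; a clique
        # can take at most one vertex per class, so len(classes) is a bound.
        classes = []
        for v in sorted(cands, key=lambda u: -len(adj[u] & cands)):
            for cls in classes:
                if not (adj[v] & cls):
                    cls.add(v)
                    break
            else:
                classes.append({v})
        return len(classes)

    def expand(clique, cands):
        nonlocal best
        if not cands:
            if len(clique) > len(best):
                best = clique[:]
            return
        if len(clique) + colour_bound(cands) <= len(best):
            return  # prune: even the optimistic bound cannot beat the incumbent
        for v in list(cands):
            expand(clique + [v], cands & adj[v])
            cands = cands - {v}
            if len(clique) + len(cands) <= len(best):
                return

    expand([], set(adj))
    return best

# toy graph: the unique maximum clique is {0, 1, 2, 3}
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4), (4, 5), (2, 5)]
adj = {v: set() for v in range(6)}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)
clique = max_clique(adj)
```

Parallel variants like MaxCliquePara distribute subtrees of exactly this search tree across cores, sharing the incumbent `best` so all workers prune against the same bound.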
PTree: pattern-based, stochastic search for maximum parsimony phylogenies
Directory of Open Access Journals (Sweden)
Ivan Gregor
2013-06-01
Full Text Available Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we describe a stochastic search method for a maximum parsimony tree, implemented in a software package we named PTree. Our method is based on a new pattern-based technique that enables us to infer intermediate sequences efficiently where the incorporation of these sequences in the current tree topology yields a phylogenetic tree with a lower cost. Evaluation across multiple datasets showed that our method is comparable to the algorithms implemented in PAUP* or TNT, which are widely used by the bioinformatics community, in terms of topological accuracy and runtime. We show that our method can process large-scale datasets of 1,000–8,000 sequences. We believe that our novel pattern-based method enriches the current set of tools and methods for phylogenetic tree inference. The software is available under: http://algbio.cs.uni-duesseldorf.de/webapps/wa-download/.
DEFF Research Database (Denmark)
Henningsen, Arne; Fabricius, Ole; Olsen, Jakob Vesterlund
2014-01-01
Based on a theoretical microeconomic model, we econometrically estimate investment utilization, adjustment costs, and technical efficiency in Danish pig farms using a large unbalanced panel dataset. As our theoretical model indicates that adjustment costs are caused both by increased inputs...... of investment activities by the maximum likelihood method so that we can estimate the adjustment costs that occur in the year of the investment and the three following years. Our results show that investments are associated with significant adjustment costs, especially in the year in which the investment...
Maximum-likelihood estimation of recent shared ancestry (ERSA).
Huff, Chad D; Witherspoon, David J; Simonson, Tatum S; Xing, Jinchuan; Watkins, W Scott; Zhang, Yuhua; Tuohy, Therese M; Neklason, Deborah W; Burt, Randall W; Guthery, Stephen L; Woodward, Scott R; Jorde, Lynn B
2011-05-01
Accurate estimation of recent shared ancestry is important for genetics, evolution, medicine, conservation biology, and forensics. Established methods estimate kinship accurately for first-degree through third-degree relatives. We demonstrate that chromosomal segments shared by two individuals due to identity by descent (IBD) provide much additional information about shared ancestry. We developed a maximum-likelihood method for the estimation of recent shared ancestry (ERSA) from the number and lengths of IBD segments derived from high-density SNP or whole-genome sequence data. We used ERSA to estimate relationships from SNP genotypes in 169 individuals from three large, well-defined human pedigrees. ERSA is accurate to within one degree of relationship for 97% of first-degree through fifth-degree relatives and 80% of sixth-degree and seventh-degree relatives. We demonstrate that ERSA's statistical power approaches the maximum theoretical limit imposed by the fact that distant relatives frequently share no DNA through a common ancestor. ERSA greatly expands the range of relationships that can be estimated from genetic data and is implemented in a freely available software package.
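The flavour of the likelihood computation can be sketched using only the segment-length part of the model, with the textbook approximation that an IBD segment surviving m meioses has exponentially distributed length with mean 100/m cM; ERSA's full model also treats segment counts as Poisson and corrects for population background, and the segment data below are hypothetical:

```python
import math

def log_likelihood(segment_lengths_cm, meioses):
    """Log-likelihood of observed IBD segment lengths under a simplified,
    textbook-style model: a segment surviving m meioses has exponentially
    distributed length with mean 100/m cM. (A deliberate simplification
    of the full ERSA likelihood.)"""
    rate = meioses / 100.0  # per-cM exponential rate
    return sum(math.log(rate) - rate * x for x in segment_lengths_cm)

# hypothetical shared segments for a pair of individuals, in cM
segments = [42.0, 35.5, 28.0, 51.2]
# pick the number of meioses that best explains the observed lengths
best_m = max(range(2, 13), key=lambda m: log_likelihood(segments, m))
```

The maximizing number of meioses then maps back to a degree of relationship; longer shared segments push the estimate toward closer kinship, which is why segment lengths carry so much more information than genome-wide sharing fractions alone.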
Stimulus-dependent maximum entropy models of neural population codes.
Directory of Open Access Journals (Sweden)
Einat Granot-Atedgi
Full Text Available Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. For large populations, direct sampling of these distributions is impossible, and so we must rely on constructing appropriate models. We show here that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. We introduce the stimulus-dependent maximum entropy (SDME) model, a minimal extension of the canonical linear-nonlinear model of a single neuron to a pairwise-coupled neural population. We find that the SDME model gives a more accurate account of single cell responses and in particular significantly outperforms uncoupled models in reproducing the distributions of population codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like average surprise and information transmission in a neural population.
International Nuclear Information System (INIS)
1991-01-01
The meaning of the term 'maximum concentration at work' in regard to various pollutants is discussed. Specifically, a number of dusts and smokes are dealt with. The valuation criteria for maximum biologically tolerable concentrations of working materials are indicated. The working materials in question are carcinogenic substances or substances liable to cause allergies or to mutate the genome. (VT)
2010-07-27
...-17530; Notice No. 2] RIN 2130-ZA03 Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum... remains at $250. These adjustments are required by the Federal Civil Penalties Inflation Adjustment Act [email protected] . SUPPLEMENTARY INFORMATION: The Federal Civil Penalties Inflation Adjustment Act of 1990...
Maximum entropy reconstruction of spin densities involving non uniform prior
International Nuclear Information System (INIS)
Schweizer, J.; Ressouche, E.; Papoular, R.J.; Zheludev, A.I.
1997-01-01
Diffraction experiments give microscopic information on structures in crystals. A method which uses the concept of maximum entropy (MaxEnt) appears to be a formidable improvement in the treatment of diffraction data. This method is based on a Bayesian approach: among all the maps compatible with the experimental data, it selects the one which has the highest prior (intrinsic) probability. Considering that all the points of the map are equally probable, this probability (flat prior) is expressed via the Boltzmann entropy of the distribution. This method has been used for the reconstruction of charge densities from X-ray data, for maps of nuclear densities from unpolarized neutron data as well as for distributions of spin density. The density maps obtained by this method, as compared to those resulting from the usual inverse Fourier transformation, are tremendously improved. In particular, any substantial deviation from the background is really contained in the data, as it costs entropy compared to a map that would ignore such features. However, in most cases, before the measurements are performed, some knowledge exists about the distribution which is investigated. It can range from the simple information of the type of scattering electrons to an elaborate theoretical model. In these cases, the uniform prior, which considers all the different pixels as equally likely, is too weak a requirement and has to be replaced. In a rigorous Bayesian analysis, Skilling has shown that prior knowledge can be encoded into the maximum entropy formalism through a model m(r), via a new definition for the entropy given in this paper. In the absence of any data, the maximum of the entropy functional is reached for ρ(r) = m(r). Any substantial departure from the model, observed in the final map, is really contained in the data as, with the new definition, it costs entropy. This paper presents illustrations of model testing.
Zipf's law, power laws and maximum entropy
International Nuclear Information System (INIS)
Visser, Matt
2013-01-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified. (paper)
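The single-constraint mechanism described above fits in a few lines: maximizing the Shannon entropy subject only to normalization and a fixed mean logarithm of the observable yields a pure power law (a standard variational sketch, with our choice of multiplier names):

```latex
\max_{\{p_k\}} \; S[p] = -\sum_k p_k \ln p_k
\quad\text{subject to}\quad \sum_k p_k = 1, \qquad \sum_k p_k \ln k = \chi .
```

Stationarity of the Lagrangian with multipliers $\lambda$ and $\alpha$ gives

```latex
\frac{\partial}{\partial p_k}\Big[S[p] - \lambda\Big(\sum_j p_j - 1\Big)
  - \alpha\Big(\sum_j p_j \ln j - \chi\Big)\Big]
  = -\ln p_k - 1 - \lambda - \alpha \ln k = 0
  \;\;\Longrightarrow\;\; p_k = e^{-(1+\lambda)}\,k^{-\alpha} \propto k^{-\alpha},
```

so the exponent $\alpha$ is fixed by the constraint value $\chi$, with no need for the more elaborate cost function of the RGF model.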
Maximum-entropy description of animal movement.
Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M
2015-03-01
We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.
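As an illustration of the simplest nontrivial member of this class, an Ornstein-Uhlenbeck position process can be simulated with Euler-Maruyama and checked against its stationary variance σ²τ/2 (a generic sketch; the parameter names are ours):

```python
import math
import random

def simulate_ou(n, dt, tau, sigma, x0=0.0, seed=1):
    """Euler-Maruyama simulation of Ornstein-Uhlenbeck motion
    dx = -(x/tau) dt + sigma dW, one of the continuous-time processes
    in the maximum-entropy class described above."""
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(n):
        x += -(x / tau) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

path = simulate_ou(n=200_000, dt=0.01, tau=1.0, sigma=1.0)
# the stationary variance of this process is sigma^2 * tau / 2 = 0.5
var = sum(x * x for x in path) / len(path)
```

The fluctuation-dissipation constraint mentioned in the abstract is visible here: the damping 1/τ and the noise amplitude σ jointly fix the stationary variance, so they cannot be chosen independently for a given maximum-entropy state.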
Pareto versus lognormal: a maximum entropy test.
Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano
2011-08-01
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating process even when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with a lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
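A common ingredient of such tail diagnostics is an estimate of the tail exponent from the largest order statistics; below is a minimal sketch of the classical Hill estimator (a standard tool for Pareto-type tails, not the maximum entropy test itself), applied to a synthetic Pareto sample:

```python
import math
import random

def hill_estimator(data, k):
    """Hill estimator of a Pareto tail index from the k largest observations."""
    top = sorted(data, reverse=True)[: k + 1]
    threshold = top[-1]  # the (k+1)-th largest value
    return k / sum(math.log(x / threshold) for x in top[:k])

rng = random.Random(7)
alpha = 2.5
# inverse-CDF sampling from a Pareto(alpha) distribution with x_min = 1
sample = [(1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(50_000)]
alpha_hat = hill_estimator(sample, k=2000)  # should sit near the true alpha
```

On data with a lognormal body, the same estimator drifts with the choice of k, which is precisely the ambiguity that motivates a principled test such as the maximum entropy approach above.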
Maximum likelihood estimation for integrated diffusion processes
DEFF Research Database (Denmark)
Baltazar-Larios, Fernando; Sørensen, Michael
We propose a method for obtaining maximum likelihood estimates of parameters in diffusion models when the data is a discrete time sample of the integral of the process, while no direct observations of the process itself are available. The data are, moreover, assumed to be contaminated...... EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...... by measurement errors. Integrated volatility is an example of this type of observations. Another example is ice-core data on oxygen isotopes used to investigate paleo-temperatures. The data can be viewed as incomplete observations of a model with a tractable likelihood function. Therefore we propose a simulated...
A Maximum Radius for Habitable Planets.
Alibert, Yann
2015-09-01
We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: (1) a surface temperature and pressure compatible with the existence of liquid water, and (2) no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the overall maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced when considering planets with higher Fe/Si ratios, and when taking into account irradiation effects on the structure of the gas envelope.
Maximum parsimony on subsets of taxa.
Fischer, Mareike; Thatte, Bhalchandra D
2009-09-21
In this paper we investigate mathematical questions concerning the reliability (reconstruction accuracy) of Fitch's maximum parsimony algorithm for reconstructing the ancestral state given a phylogenetic tree and a character. In particular, we consider the question whether the maximum parsimony method applied to a subset of taxa can reconstruct the ancestral state of the root more accurately than when applied to all taxa, and we give an example showing that this indeed is possible. A surprising feature of our example is that ignoring a taxon closer to the root improves the reliability of the method. On the other hand, in the case of the two-state symmetric substitution model, we answer affirmatively a conjecture of Li, Steel and Zhang which states that under a molecular clock the probability that the state at a single taxon is a correct guess of the ancestral state is a lower bound on the reconstruction accuracy of Fitch's method applied to all taxa.
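Fitch's bottom-up pass, whose reconstruction accuracy is analysed above, fits in a few lines for a rooted binary tree (a generic sketch over nested-tuple trees, not the paper's code):

```python
def fitch(tree, leaf_states):
    """Fitch's bottom-up pass for small parsimony on a rooted binary tree:
    returns the set of most-parsimonious root states and the minimum
    number of state changes. Trees are nested tuples with string leaves."""
    count = 0

    def up(node):
        nonlocal count
        if isinstance(node, str):
            return {leaf_states[node]}
        left, right = up(node[0]), up(node[1])
        if left & right:
            return left & right  # intersection non-empty: no change forced here
        count += 1  # the child sets are disjoint: one change is forced
        return left | right

    root_set = up(tree)
    return root_set, count

tree = (("t1", "t2"), ("t3", "t4"))
states = {"t1": "A", "t2": "A", "t3": "G", "t4": "A"}
root_set, changes = fitch(tree, states)
```

Here the root set is {"A"} at a cost of one change; restricting the analysis to a subset of taxa amounts to running the same pass on a pruned tree, which is exactly the operation whose effect on accuracy the paper studies.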
Maximum entropy models of ecosystem functioning
International Nuclear Information System (INIS)
Bertram, Jason
2014-01-01
Using organism-level traits to deduce community-level relationships is a fundamental problem in theoretical ecology. This problem parallels the physical one of using particle properties to deduce macroscopic thermodynamic laws, which was successfully achieved with the development of statistical physics. Drawing on this parallel, theoretical ecologists from Lotka onwards have attempted to construct statistical mechanistic theories of ecosystem functioning. Jaynes’ broader interpretation of statistical mechanics, which hinges on the entropy maximisation algorithm (MaxEnt), is of central importance here because the classical foundations of statistical physics do not have clear ecological analogues (e.g. phase space, dynamical invariants). However, models based on the information theoretic interpretation of MaxEnt are difficult to interpret ecologically. Here I give a broad discussion of statistical mechanical models of ecosystem functioning and the application of MaxEnt in these models. Emphasising the sample frequency interpretation of MaxEnt, I show that MaxEnt can be used to construct models of ecosystem functioning which are statistical mechanical in the traditional sense using a savanna plant ecology model as an example
Theoretical physics 3 electrodynamics
Nolting, Wolfgang
2016-01-01
This textbook offers a clear and comprehensive introduction to electrodynamics, one of the core components of undergraduate physics courses. It follows on naturally from the previous volumes in this series. The first part of the book describes the interaction of electric charges and magnetic moments by introducing electro- and magnetostatics. The second part of the book establishes deeper understanding of electrodynamics with the Maxwell equations, quasistationary fields and electromagnetic fields. All sections are accompanied by a detailed introduction to the math needed. Ideally suited to undergraduate students with some grounding in classical and analytical mechanics, the book is enhanced throughout with learning features such as boxed inserts and chapter summaries, with key mathematical derivations highlighted to aid understanding. The text is supported by numerous worked examples and end of chapter problem sets. About the Theoretical Physics series Translated from the renowned and highly successful Germa...
Theoretical physics 5 thermodynamics
Nolting, Wolfgang
2017-01-01
This concise textbook offers a clear and comprehensive introduction to thermodynamics, one of the core components of undergraduate physics courses. It follows on naturally from the previous volumes in this series, defining macroscopic variables, such as internal energy, entropy and pressure, together with thermodynamic principles. The first part of the book introduces the laws of thermodynamics and thermodynamic potentials. More complex themes are covered in the second part of the book, which describes phases and phase transitions in depth. Ideally suited to undergraduate students with some grounding in classical mechanics, the book is enhanced throughout with learning features such as boxed inserts and chapter summaries, with key mathematical derivations highlighted to aid understanding. The text is supported by numerous worked examples and end of chapter problem sets. About the Theoretical Physics series Translated from the renowned and highly successful German editions, the eight volumes of this series cove...
Theoretical Molecular Biophysics
Scherer, Philipp
2010-01-01
"Theoretical Molecular Biophysics" is an advanced study book for students, shortly before or after completing undergraduate studies, in physics, chemistry or biology. It provides the tools for an understanding of elementary processes in biology, such as photosynthesis on a molecular level. A basic knowledge in mechanics, electrostatics, quantum theory and statistical physics is desirable. The reader will be exposed to basic concepts in modern biophysics such as entropic forces, phase separation, potentials of mean force, proton and electron transfer, heterogeneous reactions coherent and incoherent energy transfer as well as molecular motors. Basic concepts such as phase transitions of biopolymers, electrostatics, protonation equilibria, ion transport, radiationless transitions as well as energy- and electron transfer are discussed within the frame of simple models.
Maximum entropy analysis of liquid diffraction data
International Nuclear Information System (INIS)
Root, J.H.; Egelstaff, P.A.; Nickel, B.G.
1986-01-01
A maximum entropy method for reducing truncation effects in the inverse Fourier transform of structure factor, S(q), to pair correlation function, g(r), is described. The advantages and limitations of the method are explored with the PY hard sphere structure factor as model input data. An example using real data on liquid chlorine, is then presented. It is seen that spurious structure is greatly reduced in comparison to traditional Fourier transform methods. (author)
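The truncation artefact being suppressed here is the familiar ringing of a sharply cut-off Fourier transform; a self-contained illustration with a square wave (not liquid-diffraction data) shows that the spurious oscillations do not die away as more terms are kept:

```python
import math

def square_wave_partial_sum(x, n_terms):
    """Partial Fourier series of a unit square wave (odd harmonics only).
    Truncating the series at a finite harmonic introduces ringing, the
    same kind of artefact a sharp q_max cutoff introduces into g(r)."""
    s = 0.0
    for k in range(1, n_terms + 1, 2):
        s += math.sin(k * x) / k
    return 4.0 / math.pi * s

# the overshoot near the jump persists as more terms are added (Gibbs
# phenomenon): the peak stays well above the true value of 1
peak_20 = max(square_wave_partial_sum(i * 1e-3, 20) for i in range(1, 400))
peak_200 = max(square_wave_partial_sum(i * 1e-3, 200) for i in range(1, 400))
```

A MaxEnt reconstruction avoids this because it never inverts the truncated data directly: spurious structure costs entropy, so ripples that are not demanded by the data are suppressed, which is the improvement over direct inverse Fourier transformation described above.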
International Nuclear Information System (INIS)
Anon.
1985-01-01
The theoretical physics program in the Physics Division at ORNL involves research in both nuclear and atomic physics. In nuclear physics there is extensive activity in the fields of direct nuclear reactions with light- and heavy-ion projectiles, the structure of nuclei far from stability and at elevated temperatures, and the microscopic and macroscopic description of heavy-ion dynamics, including the behavior of nuclear molecules and supernuclei. New research efforts in relativistic nuclear collisions and in the study of quark-gluon plasma have continued to grow this year. The atomic theory program deals with a variety of ionization, multiple-vacancy production, and charge-exchange processes. Many of the problems are selected because of their relevance to the magnetic fusion energy program. In addition, there is a joint atomic-nuclear theory effort to study positron production during the collision of two high-Z nuclei, e.g., U+U. A new Distinguished Scientist program, sponsored jointly by the University of Tennessee and ORNL, has been initiated. Among the first appointments is G.F. Bertsch in theoretical physics. As a result of this appointment, Bertsch and an associated group of four theorists split their time between UT and ORNL. In addition, the State of Tennessee has established a significant budget to support the visits of outstanding scientists to the Joint Institute for Heavy Ion Research at ORNL. This budget should permit a significant improvement in the visitor program at ORNL. Finally, the Laboratory awarded a Wigner postdoctoral appointment to a theorist who will work in the theory group of the Physics Division.
A Maximum Resonant Set of Polyomino Graphs
Directory of Open Access Journals (Sweden)
Zhang Heping
2016-05-01
Full Text Available A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square) with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.
Automatic maximum entropy spectral reconstruction in NMR
International Nuclear Information System (INIS)
Mobli, Mehdi; Maciejewski, Mark W.; Gryk, Michael R.; Hoch, Jeffrey C.
2007-01-01
Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system
Maximum neutron flux at thermal nuclear reactors
International Nuclear Information System (INIS)
Strugar, P.
1968-10-01
Since actual research reactors are technically complicated and expensive facilities, it is important to achieve savings through appropriate reactor lattice configurations. A number of papers, and practical examples of reactors with a central reflector, deal with spatial distributions of fuel elements that would result in higher neutron flux. The common disadvantage of all these solutions is that the choice of the best one starts from anticipated spatial distributions of fuel elements; the weakness of such approaches is the lack of defined optimization criteria. The direct approach is defined as follows: determine the spatial distribution of fuel concentration starting from the condition of maximum neutron flux while fulfilling the thermal constraints. The problem of determining the maximum neutron flux thus becomes a variational problem beyond the possibilities of classical variational calculus. This variational problem has been successfully solved by applying the maximum principle of Pontryagin. The optimum distribution of fuel concentration was obtained in explicit analytical form, so the spatial distribution of the neutron flux and the critical dimensions of a quite complex reactor system are calculated in a relatively simple way. In addition to the fact that the results are innovative, this approach is interesting because of the optimization procedure itself. (author)
Direct maximum parsimony phylogeny reconstruction from genotype data
Directory of Open Access Journals (Sweden)
Ravi R
2007-12-01
Full Text Available Abstract Background Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data are more commonly available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data. Phylogenetic applications for autosomal data must therefore rely on other methods for first computationally inferring haplotypes from genotypes. Results In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Conclusion Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower-bound on the number of mutations that the genetic region has undergone.
International Nuclear Information System (INIS)
Gonca, Guven; Sahin, Bahri; Ust, Yasin; Parlak, Adnan
2015-01-01
This paper presents comprehensive performance analyses and comparisons for air-standard irreversible thermodynamic cycle engines (TCE) based on the power output, power density, thermal efficiency, maximum dimensionless power output (MP), maximum dimensionless power density (MPD) and maximum thermal efficiency (MEF) criteria. Internal irreversibility of the cycles, occurring during the irreversible adiabatic processes, is considered by using isentropic efficiencies of the compression and expansion processes. The performances of the cycles are obtained by using engine design parameters such as isentropic temperature ratio of the compression process, pressure ratio, stroke ratio, cut-off ratio, Miller cycle ratio, exhaust temperature ratio, cycle temperature ratio and cycle pressure ratio. The effects of engine design parameters on the maximum and optimal performances are investigated. - Highlights: • Performance analyses are conducted for irreversible thermodynamic cycle engines. • Comprehensive computations are performed. • Maximum and optimum performances of the engines are shown. • The effects of design parameters on performance and power density are examined. • The results obtained may serve as guidelines for engine designers.
Theoretical and Experimental Spectroscopic Analysis of Cyano-Substituted Styrylpyridine Compounds
Directory of Open Access Journals (Sweden)
Jorge Lopez-Cruz
2013-02-01
Full Text Available A combined theoretical and experimental study on the structure, infrared, UV-Vis and 1H NMR data of trans-2-(m-cyanostyryl)pyridine, trans-2-[3-methyl-(m-cyanostyryl)]pyridine and trans-4-(m-cyanostyryl)pyridine is presented. The synthesis was carried out with an efficient Knoevenagel condensation using green chemistry conditions. Theoretical geometry optimizations and their IR spectra were carried out using the Density Functional Theory (DFT) in both gas and solution phases. For theoretical UV-Vis and 1H NMR spectra, the Time-Dependent DFT (TD-DFT) and the Gauge-Including Atomic Orbital (GIAO) methods were used, respectively. The theoretical characterization matched the experimental measurements, showing a good correlation. The effect of the cyano- and methyl-substituents, as well as of the N-atom position in the pyridine ring, on the UV-Vis, IR and NMR spectra was evaluated. The UV-Vis results showed no significant effect due to the electron-withdrawing cyano- and electron-donating methyl-substituents. The N-atom position, however, caused a slight change in the maximum absorption wavelengths. The IR normal modes were assigned for the cyano- and methyl-groups. 1H NMR spectra showed the typical doublet signals due to protons in the trans position of a double bond. The theoretical characterization was visibly useful to accurately assign the signals in the IR and 1H NMR spectra, as well as to identify the most probable conformation that could be present in the formation of the styrylpyridine-like compounds.
Theoretical Approaches to Coping
Directory of Open Access Journals (Sweden)
Sofia Zyga
2013-01-01
Full Text Available Introduction: Dealing with stress requires conscious effort; it cannot be perceived as equal to an individual's spontaneous reactions. The intentional management of stress must not be confused with defense mechanisms. Coping differs from adjustment in that the latter is more general, has a broader meaning and includes diverse ways of facing a difficulty. Aim: An exploration of the definition of the term "coping", the function of the coping process, and its differentiation from other similar meanings, through a literature review. Methodology: Three theoretical approaches to coping are introduced: the psychoanalytic approach; approaching by characteristics; and the Lazarus and Folkman interactive model. Results: The strategic methods of the coping approaches are described, and the article ends with a review of the approaches, including the functioning of the stress-coping process, the classification types of coping strategies in stress-inducing situations, and a criticism of coping approaches. Conclusions: The comparison of coping in different situations is difficult, if not impossible. The coping process is a slow process, so an individual may select one method of coping under one set of circumstances and a different strategy at some other time. Such selection of strategies takes place as the situation changes.
Theoretical disagreement about law
Directory of Open Access Journals (Sweden)
Zdravković Miloš
2014-01-01
Full Text Available As the dominant direction in the study of legal phenomena, legal positivism has been criticized above all by representatives of natural law. Nevertheless, the most complex criticism of legal positivism came from Ronald Dworkin. With the methodological criticism he formed in 'Law's Empire', Dworkin attacked the very foundations of legal positivism and its main methodological assumptions. Quoting the first postulate of positivism, which understands law as a fact, Dworkin claims that, if this comprehension were correct, there could be no dispute among jurists concerning the law, except if some of them made an empirical mistake while establishing facts. Since this is not the case, Dworkin argues that what exists is actually theoretical disagreement, which does not represent disagreement about the law itself, but about its morality. On these grounds, he rejects the idea of law as a fact and claims that law is an interpretive notion, which means that disagreements within jurisprudence are most frequently interpretive disagreements over the criteria of legality, and not empirical disagreements over historical and social facts.
Maximum entropy method in momentum density reconstruction
International Nuclear Information System (INIS)
Dobrzynski, L.; Holas, A.
1997-01-01
The Maximum Entropy Method (MEM) is applied to the reconstruction of 3-dimensional electron momentum density distributions observed through a set of Compton profiles measured along various crystallographic directions. It is shown that the reconstruction of the electron momentum density may be reliably carried out with the aid of a simple iterative algorithm suggested originally by Collins. A number of distributions have been simulated in order to check the performance of MEM. It is shown that MEM can be recommended as a model-free approach. (author). 13 refs, 1 fig
On the maximum drawdown during speculative bubbles
Rotundo, Giulia; Navarra, Mauro
2007-08-01
A taxonomy of large financial crashes proposed in the literature locates the burst of speculative bubbles due to endogenous causes in the framework of extreme stock market crashes, defined as falls of market prices that are outliers with respect to the bulk of the drawdown price movement distribution. This paper goes deeper into the analysis, providing a further characterization of the rising part of such selected bubbles through the examination of drawdown and maximum drawdown movements of index prices. The analysis of drawdown duration is also performed, and it is the core of the risk measure estimated here.
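For concreteness, the drawdown statistic the analysis relies on can be computed in a few lines (a generic sketch, not the authors' code):

```python
# Maximum drawdown: the largest peak-to-trough decline of a price
# series, expressed here as a fraction of the running peak.
def max_drawdown(prices):
    peak = prices[0]
    mdd = 0.0
    for p in prices:
        peak = max(peak, p)                 # running maximum so far
        mdd = max(mdd, (peak - p) / peak)   # deepest relative fall
    return mdd

# Peak 120, later trough 80: drawdown (120 - 80) / 120 = 1/3.
print(max_drawdown([100, 120, 90, 110, 80]))
```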
Multi-Channel Maximum Likelihood Pitch Estimation
DEFF Research Database (Denmark)
Christensen, Mads Græsbøll
2012-01-01
In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...
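The core idea, channels sharing one fundamental while keeping their own amplitudes and phases, can be sketched by scoring each candidate pitch by the signal energy its harmonic subspace captures, summed over channels (a simplified white-noise sketch, not the paper's estimator):

```python
import numpy as np

def estimate_pitch(channels, fs, f0_grid, n_harm=3):
    """Pick the candidate f0 whose harmonic model captures the most
    energy across all channels; each channel gets its own amplitudes
    and phases via a per-channel least-squares fit."""
    best_f0, best_score = None, -np.inf
    for f0 in f0_grid:
        score = 0.0
        for x in channels:
            t = np.arange(len(x)) / fs
            # Harmonic subspace (cos/sin pairs) for this candidate f0.
            basis = np.column_stack(
                [np.cos(2 * np.pi * f0 * k * t) for k in range(1, n_harm + 1)]
                + [np.sin(2 * np.pi * f0 * k * t) for k in range(1, n_harm + 1)])
            coef, *_ = np.linalg.lstsq(basis, x, rcond=None)
            score += float(np.sum((basis @ coef) ** 2))  # captured energy
        if score > best_score:
            best_f0, best_score = f0, score
    return best_f0
```

Under white Gaussian noise, maximizing the captured energy of the harmonic fit coincides with the maximum likelihood criterion, which is why this projection view is a useful mental model of the estimator.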
Conductivity maximum in a charged colloidal suspension
Energy Technology Data Exchange (ETDEWEB)
Bastea, S
2009-01-27
Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.
Dynamical maximum entropy approach to flocking.
Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M
2014-04-01
We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.
Maximum Temperature Detection System for Integrated Circuits
Frankiewicz, Maciej; Kos, Andrzej
2015-03-01
The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path and a digital part designed in VHDL. Analogue parts of the circuit were designed with a full-custom technique. The system is part of a temperature-controlled oscillator circuit - a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated to thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.
Maximum entropy PDF projection: A review
Baggenstoss, Paul M.
2017-06-01
We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
Multiperiod Maximum Loss is time unit invariant.
Kovacevic, Raimund M; Breuer, Thomas
2016-01-01
Time unit invariance is introduced as an additional requirement for multiperiod risk measures: for a constant portfolio under an i.i.d. risk factor process, the multiperiod risk should equal the one period risk of the aggregated loss, for an appropriate choice of parameters and independent of the portfolio and its distribution. Multiperiod Maximum Loss over a sequence of Kullback-Leibler balls is time unit invariant. This is also the case for the entropic risk measure. On the other hand, multiperiod Value at Risk and multiperiod Expected Shortfall are not time unit invariant.
Maximum a posteriori decoder for digital communications
Altes, Richard A. (Inventor)
1997-01-01
A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
Improved Maximum Parsimony Models for Phylogenetic Networks.
Van Iersel, Leo; Jones, Mark; Scornavacca, Celine
2018-05-01
Phylogenetic networks are well suited to represent evolutionary histories comprising reticulate evolution. Several methods aiming at reconstructing explicit phylogenetic networks have been developed in the last two decades. In this article, we propose a new definition of maximum parsimony for phylogenetic networks that makes it possible to model biological scenarios that cannot be modeled by the definitions currently present in the literature (namely, the "hardwired" and "softwired" parsimony). Building on this new definition, we provide several algorithmic results that lay the foundations for new parsimony-based methods for phylogenetic network reconstruction.
Ancestral sequence reconstruction with Maximum Parsimony
Herbst, Lina; Fischer, Mareike
2017-01-01
One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference as well as for ancestral sequence inference is Maximum Parsimony (...
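Maximum Parsimony ancestral state inference on a known tree is classically done with Fitch's algorithm; a minimal sketch for a single character on a rooted binary tree (the nested-tuple tree encoding is an assumed toy representation, not from the paper):

```python
# Fitch's small-parsimony pass: intersect child state sets when
# possible, otherwise take the union (each union costs one mutation).
def fitch(tree):
    """tree: a leaf state like 'A', or a pair (left, right).
    Returns (candidate ancestral state set, parsimony cost)."""
    if isinstance(tree, str):
        return {tree}, 0
    (ls, lc), (rs, rc) = fitch(tree[0]), fitch(tree[1])
    inter = ls & rs
    if inter:
        return inter, lc + rc
    return ls | rs, lc + rc + 1

# ((A,C),(A,A)): left child needs a union {A,C} (cost 1), right child
# is {A} (cost 0), root intersects to {A} with total cost 1.
print(fitch((('A', 'C'), ('A', 'A'))))
```

A full ancestral sequence reconstruction runs this per alignment column, followed by a top-down pass to pick concrete states.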
Probable Maximum Earthquake Magnitudes for the Cascadia Subduction
Rong, Y.; Jackson, D. D.; Magistrale, H.; Goldfinger, C.
2013-12-01
The concept of maximum earthquake magnitude (mx) is widely used in seismic hazard and risk analysis. However, absolute mx lacks a precise definition and cannot be determined from a finite earthquake history. The surprising magnitudes of the 2004 Sumatra and the 2011 Tohoku earthquakes showed that most methods for estimating mx underestimate the true maximum if it exists. Thus, we introduced the alternate concept of mp(T), probable maximum magnitude within a time interval T. The mp(T) can be solved using theoretical magnitude-frequency distributions such as Tapered Gutenberg-Richter (TGR) distribution. The two TGR parameters, β-value (which equals 2/3 b-value in the GR distribution) and corner magnitude (mc), can be obtained by applying maximum likelihood method to earthquake catalogs with additional constraint from tectonic moment rate. Here, we integrate the paleoseismic data in the Cascadia subduction zone to estimate mp. The Cascadia subduction zone has been seismically quiescent since at least 1900. Fortunately, turbidite studies have unearthed a 10,000 year record of great earthquakes along the subduction zone. We thoroughly investigate the earthquake magnitude-frequency distribution of the region by combining instrumental and paleoseismic data, and using the tectonic moment rate information. To use the paleoseismic data, we first estimate event magnitudes, which we achieve by using the time interval between events, rupture extent of the events, and turbidite thickness. We estimate three sets of TGR parameters: for the first two sets, we consider a geographically large Cascadia region that includes the subduction zone, and the Explorer, Juan de Fuca, and Gorda plates; for the third set, we consider a narrow geographic region straddling the subduction zone. In the first set, the β-value is derived using the GCMT catalog. In the second and third sets, the β-value is derived using both the GCMT and paleoseismic data. Next, we calculate the corresponding mc
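The TGR machinery above can be sketched as follows: the survival function is the tapered Gutenberg-Richter form in seismic moment, and `probable_max_magnitude` finds the smallest magnitude unlikely to be exceeded within T years under a Poisson assumption (the threshold, rate, and parameter values are illustrative assumptions, not the paper's Cascadia estimates):

```python
import math

def tgr_survival(m, beta, mc, mt=4.0):
    """Fraction of events with magnitude >= m under a tapered
    Gutenberg-Richter law (mt = catalog threshold, mc = corner
    magnitude), using seismic moments log10 M0 = 1.5 m + 9.05."""
    M, Mt, Mc = (10 ** (1.5 * x + 9.05) for x in (m, mt, mc))
    return (Mt / M) ** beta * math.exp((Mt - M) / Mc)

def probable_max_magnitude(rate, T, beta, mc, p=0.5, mt=4.0):
    """Smallest m such that P(no event >= m within T years) >= p,
    for a Poisson process with total rate `rate` above mt."""
    m = mt
    while math.exp(-rate * T * tgr_survival(m, beta, mc, mt)) < p:
        m += 0.01
    return m
```

Because the taper decays exponentially in moment above mc, mp(T) grows only slowly with T, which is what makes it a better-behaved quantity than an absolute mx.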
[The maximum heart rate in the exercise test: the 220-age formula or Sheffield's table?].
Mesquita, A; Trabulo, M; Mendes, M; Viana, J F; Seabra-Gomes, R
1996-02-01
To determine whether the maximum heart rate in exercise testing of apparently healthy individuals is more properly estimated by the 220-age formula (Astrand) or by the Sheffield table. Retrospective analysis of the clinical history and exercise tests of apparently healthy individuals submitted to cardiac check-up. Sequential sampling of 170 healthy individuals submitted to cardiac check-up between April 1988 and September 1992. Comparison of the maximum heart rate of individuals studied by the Bruce and modified Bruce protocols, in exercise tests interrupted by fatigue, with the values estimated by the two methods: the 220-age formula versus the Sheffield table. The maximum heart rate is similar with both protocols. This parameter in normal individuals is better predicted by the 220-age formula. The theoretical maximum heart rate determined by the 220-age formula should be recommended for healthy individuals, and for this reason the Sheffield table has been excluded from our clinical practice.
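The formula the study favours is trivially expressed in code (the sample ages are illustrative only):

```python
def theoretical_hr_max(age):
    """Astrand's 220-age estimate of maximum heart rate (beats/min);
    the study above found it predicts measured maxima in healthy
    adults better than the tabulated Sheffield values."""
    return 220 - age

for age in (20, 40, 60):
    print(age, theoretical_hr_max(age))
```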
Artificial Neural Network In Maximum Power Point Tracking Algorithm Of Photovoltaic Systems
Directory of Open Access Journals (Sweden)
Modestas Pikutis
2014-05-01
Full Text Available Scientists are looking for ways to improve the efficiency of solar cells all the time. The efficiency of solar cells which are available to the general public is up to 20%. Part of the solar energy is unused and the capacity of a solar power plant is significantly reduced if a slow controller, or one that cannot stay at the maximum power point of the solar modules, is used. Various algorithms of maximum power point tracking have been created, but most are slow or make mistakes. In the literature, artificial neural networks (ANN) are mentioned more and more often for the maximum power point tracking process, in order to improve the performance of the controller. A self-learning artificial neural network and the IncCond algorithm were used for maximum power point tracking in the created solar power plant model. The control algorithm was created. The solar power plant model is implemented in the Matlab/Simulink environment.
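The IncCond rule mentioned above compares the incremental conductance dI/dV with the instantaneous conductance -I/V; at the maximum power point dP/dV = V·dI/dV + I = 0, so the two are equal. A minimal sketch of one tracking step (the step size and the toy module curve in the note below are assumptions, not the paper's model):

```python
def inccond_step(v, i, v_prev, i_prev, v_ref, dv=0.1):
    """One incremental-conductance MPPT update of the voltage
    reference, given the present and previous (V, I) samples."""
    dV, dI = v - v_prev, i - i_prev
    if dV == 0.0:
        if dI > 0.0:
            v_ref += dv          # irradiance rose: move right
        elif dI < 0.0:
            v_ref -= dv
    else:
        g = dI / dV              # incremental conductance
        if g > -i / v:           # dP/dV > 0: left of the MPP, raise V
            v_ref += dv
        elif g < -i / v:         # dP/dV < 0: right of the MPP, lower V
            v_ref -= dv
    return v_ref
```

On a toy module curve I = 5 - 0.05 V², the reference climbs to and then oscillates within one step of the true MPP at V = sqrt(5/0.15) ≈ 5.77 V.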
Objective Bayesianism and the Maximum Entropy Principle
Directory of Open Access Journals (Sweden)
Jon Williamson
2013-09-01
Full Text Available Objective Bayesian epistemology invokes three norms: the strengths of our beliefs should be probabilities; they should be calibrated to our evidence of physical probabilities; and they should otherwise equivocate sufficiently between the basic propositions that we can express. The three norms are sometimes explicated by appealing to the maximum entropy principle, which says that a belief function should be a probability function, from all those that are calibrated to evidence, that has maximum entropy. However, the three norms of objective Bayesianism are usually justified in different ways. In this paper, we show that the three norms can all be subsumed under a single justification in terms of minimising worst-case expected loss. This, in turn, is equivalent to maximising a generalised notion of entropy. We suggest that requiring language invariance, in addition to minimising worst-case expected loss, motivates maximisation of standard entropy as opposed to maximisation of other instances of generalised entropy. Our argument also provides a qualified justification for updating degrees of belief by Bayesian conditionalisation. However, conditional probabilities play a less central part in the objective Bayesian account than they do under the subjective view of Bayesianism, leading to a reduced role for Bayes’ Theorem.
Flow Control in Wells Turbines for Harnessing Maximum Wave Power
Garrido, Aitor J.; Garrido, Izaskun; Otaola, Erlantz; Maseda, Javier
2018-01-01
Oceans, and particularly waves, offer a huge potential for energy harnessing all over the world. Nevertheless, the performance of current energy converters does not yet allow us to use the wave energy efficiently. However, new control techniques can improve the efficiency of energy converters. In this sense, the plant sensors play a key role within the control scheme, as necessary tools for parameter measuring and monitoring that are then used as control input variables to the feedback loop. Therefore, the aim of this work is to manage the rotational speed control loop in order to optimize the output power. With the help of outward looking sensors, a Maximum Power Point Tracking (MPPT) technique is employed to maximize the system efficiency. Then, the control decisions are based on the pressure drop measured by pressure sensors located along the turbine. A complete wave-to-wire model is developed so as to validate the performance of the proposed control method. For this purpose, a novel sensor-based flow controller is implemented based on the different measured signals. Thus, the performance of the proposed controller has been analyzed and compared with a case of uncontrolled plant. The simulations demonstrate that the flow control-based MPPT strategy is able to increase the output power, and they confirm both the viability and goodness. PMID:29439408
Favarel, C.; Champier, D.; Bédécarrats, J. P.; Kousksou, T.; Strub, F.
2012-06-01
According to the International Energy Agency, 1.4 billion people are without electricity in the poorest countries and 2.5 billion people rely on biomass to meet their energy needs for cooking in developing countries. The use of cooking stoves equipped with a small thermoelectric generator to provide electricity for basic needs (LED lighting, cell phone and radio charging) is probably a solution for houses far from the power grid. The cost of connecting every house with a landline is much higher than placing a thermoelectric generator in each house. Thermoelectric generators have very low efficiency, but for isolated houses they might become really competitive. Our laboratory works in collaboration with planète-bois (a non-governmental organization) which has developed energy-efficient multifunction (cooking and hot water) stoves based on traditional stove designs. A prototype of a thermoelectric generator (bismuth telluride) has been designed to convert a small part of the energy heating the sanitary water into electricity. This generator can produce up to 10 watts into an adapted load. Storing this energy in a battery is necessary, as the cooking stove only works a few hours each day. As the working point of the stove varies a lot during use, it is also necessary to regulate the electrical power. A DC/DC converter has been developed with a maximum power point tracker (MPPT) in order to obtain a good efficiency of the electronic part of the thermoelectric generator. The theoretical efficiency of the MPPT converter is discussed. First results obtained with a hot gas generator simulating the exhaust of the combustion chamber of a cooking stove are presented in the paper.
Overall efficiencies for conversion of solar energy to a chemical fuel
Fish, J. D.
A complete and consistent scheme for determining the overall efficiency of a generalized process for the conversion of solar energy into a chemical fuel (e.g. hydrogen) is developed and applied to seven conversion processes: thermal, thermochemical, photovoltaic, photogalvanic, photoelectrolysis, photosynthesis and photochemical conversion. It is demonstrated that the overall efficiency of each of these processes is determined by ten common factors: maximum theoretical efficiency, inherent absorption losses, inherent internal losses, rate limiting effects, reflection losses, transmission losses, coverage losses, system construction requirements, parasitic losses and harvesting and conversion losses. Both state-of-the-art and optimistic values are assigned to each factor for each of the seven conversion processes. State-of-the-art overall efficiencies ranged from 5% for thermal conversion down to essentially zero for thermochemical. Optimistic values in the range of about 10 to 15% are calculated for several of the processes.
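Since the overall efficiency is the product of the ten factors listed, the bookkeeping is a one-liner (the numbers below are hypothetical placeholders for illustration, not the paper's values for any process):

```python
from math import prod

# The ten multiplicative factors, in the order the abstract lists them
# (hypothetical illustrative values only).
factors = {
    "maximum theoretical efficiency":   0.30,
    "inherent absorption losses":       0.90,
    "inherent internal losses":         0.95,
    "rate limiting effects":            0.85,
    "reflection losses":                0.92,
    "transmission losses":              0.95,
    "coverage losses":                  0.90,
    "system construction requirements": 0.95,
    "parasitic losses":                 0.97,
    "harvesting and conversion losses": 0.90,
}
overall = prod(factors.values())
print(f"overall efficiency ~ {overall:.1%}")
```

Because every factor multiplies, even modest per-factor losses compound quickly, which is why state-of-the-art overall efficiencies sit so far below the maximum theoretical values.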
Theoretical Division progress report
International Nuclear Information System (INIS)
Cooper, N.G.
1979-04-01
This report presents highlights of activities in the Theoretical (T) Division from October 1976-January 1979. The report is divided into three parts. Part I presents an overview of the Division: its unique function at the Los Alamos Scientific Laboratory (LASL) and within the scientific community as a whole; the organization of personnel; the main areas of research; and a survey of recent T-Division initiatives. This overview is followed by a survey of the 13 groups within the Division, their main responsibilities, interests, and expertise, consulting activities, and recent scientific accomplishments. The remainder of the report, Parts II and III, is devoted to articles on selected research activities. Recent efforts on topics of immediate interest to energy and weapons programs at LASL and elsewhere are described in Part II, Major National Programs. Separate articles present T-Division contributions to weapons research, reactor safety and reactor physics research, fusion research, laser isotope separation, and other energy research. Each article is a compilation of independent projects within T Division, all related to but addressing different aspects of the major program. Part III is organized by subject discipline, and describes recent scientific advances of fundamental interest. An introduction, defining the scope and general nature of T-Division efforts within a given discipline, is followed by articles on the research topics selected. The reporting is done by the scientists involved in the research, and an attempt is made to communicate to a general audience. Some data are given incidentally; more technical presentations of the research accomplished may be found among the 47 pages of references. 110 figures, 5 tables
TAD- THEORETICAL AERODYNAMICS PROGRAM
Barrowman, J.
1994-01-01
This theoretical aerodynamics program, TAD, was developed to predict the aerodynamic characteristics of vehicles with sounding rocket configurations. These slender, axisymmetric finned vehicle configurations have a wide range of aeronautical applications from rockets to high speed armament. Over a given range of Mach numbers, TAD will compute the normal force coefficient derivative, the center-of-pressure, the roll forcing moment coefficient derivative, the roll damping moment coefficient derivative, and the pitch damping moment coefficient derivative of a sounding rocket configured vehicle. The vehicle may consist of a sharp pointed nose of cone or tangent ogive shape, up to nine other body divisions of conical shoulder, conical boattail, or circular cylinder shape, and fins of trapezoid planform shape with constant cross section and either three or four fins per fin set. The characteristics computed by TAD have been shown to be accurate to within ten percent of experimental data in the supersonic region. The TAD program calculates the characteristics of separate portions of the vehicle, calculates the interference between separate portions of the vehicle, and then combines the results to form a total vehicle solution. Also, TAD can be used to calculate the characteristics of the body or fins separately as an aid in the design process. Input to the TAD program consists of simple descriptions of the body and fin geometries and the Mach range of interest. Output includes the aerodynamic characteristics of the total vehicle, or user-selected portions, at specified points over the Mach range. The TAD program is written in FORTRAN IV for batch execution and has been implemented on an IBM 360 computer with a central memory requirement of approximately 123K 8-bit bytes. The TAD program was originally developed in 1967 and last updated in 1972.
Theoretical molecular biophysics
Scherer, Philipp O J
2017-01-01
This book gives an introduction to molecular biophysics. It starts from material properties at equilibrium related to polymers, dielectrics and membranes. Electronic spectra are developed for the understanding of elementary dynamic processes in photosynthesis, including proton transfer and the dynamics of molecular motors. Since the molecular structures of the functional groups of bio-systems were resolved, it has become feasible to develop a theory based on quantum theory and statistical physics, with emphasis on the specific high complexity of bio-systems. This introduction to molecular aspects of the field focuses on solvable models. Elementary biological processes pose a special challenge: the presence of partial disorder in the structure, which nevertheless does not destroy the basic reproducibility of the processes. Apparently the elementary molecular processes are organized so as to optimize efficiency. Learning from nature by exploring the relation between structure and function may even help to b...
Stochastic efficiency: five case studies
International Nuclear Information System (INIS)
Proesmans, Karel; Broeck, Christian Van den
2015-01-01
Stochastic efficiency is evaluated in five case studies: driven Brownian motion, effusion with a thermo-chemical and thermo-velocity gradient, a quantum dot and a model for information to work conversion. The salient features of stochastic efficiency, including the maximum of the large deviation function at the reversible efficiency, are reproduced. The approach to and extrapolation into the asymptotic time regime are documented. (paper)
A novel maximum power point tracking method for PV systems using fuzzy cognitive networks (FCN)
Energy Technology Data Exchange (ETDEWEB)
Karlis, A.D. [Electrical Machines Laboratory, Department of Electrical & Computer Engineering, Democritus University of Thrace, V. Sofias 12, 67100 Xanthi (Greece); Kottas, T.L.; Boutalis, Y.S. [Automatic Control Systems Laboratory, Department of Electrical & Computer Engineering, Democritus University of Thrace, V. Sofias 12, 67100 Xanthi (Greece)
2007-03-15
Maximum power point trackers (MPPTs) play an important role in photovoltaic (PV) power systems because they maximize the power output from a PV system for a given set of conditions, and therefore maximize array efficiency. This paper presents a novel MPPT method based on fuzzy cognitive networks (FCN). The new method achieves good maximum power operation of any PV array under varying conditions such as changing insolation and temperature. The numerical results show the effectiveness of the proposed algorithm. (author)
Corrosion Inhibition of Copper-nickel Alloy: Experimental and Theoretical Studies
Energy Technology Data Exchange (ETDEWEB)
Khadom, Anees A. [Univ. of Diyala, Baquba (Iraq)]; Yaro, Aprael S. [Univ. of Baghdad, Aljadreaa (Iraq)]; Musa, Ahmed Y.; Mohamad, Abu Bakar; Kadhum, Abdul Amir H. [Universiti Kebangsaan Malaysia, Bangi (Malaysia)
2012-08-15
The corrosion inhibition of copper-nickel alloy by ethylenediamine (EDA) and diethylenetriamine (DETA) in 1.5 M HCl has been investigated by the weight loss technique at different temperatures. The maximum inhibitor efficiency was 75% at 35 °C and 0.2 M EDA concentration, while the lowest value was 4% at 35 °C and 0.01 M DETA concentration. Two mathematical models were used to represent the corrosion rate data: a second-order polynomial model and an exponential model. Nonlinear regression analysis showed that the first model fits better than the second, with a higher correlation coefficient. The reactivity of the studied inhibitors was analyzed through theoretical calculations based on density functional theory (DFT). The results showed that the reactive sites were located on the nitrogen (N1, N2 and N4) atoms.
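The model comparison described in the abstract can be sketched generically: fit a second-order polynomial and an exponential model to corrosion-rate data and compare goodness of fit. The concentration and rate values below are synthetic placeholders, not the measurements from the study:

```python
import numpy as np

def r_squared(y, y_fit):
    """Coefficient of determination for a fitted model."""
    ss_res = np.sum((y - y_fit) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical corrosion-rate measurements vs inhibitor concentration (M)
c = np.array([0.01, 0.05, 0.1, 0.15, 0.2])
rate = np.array([8.1, 5.9, 4.4, 3.6, 3.1])  # arbitrary units

# Model 1: second-order polynomial, rate = a*c**2 + b*c + d
poly = np.polyfit(c, rate, 2)
fit_poly = np.polyval(poly, c)

# Model 2: exponential, rate = A*exp(k*c), fitted linearly in log space
k, ln_a = np.polyfit(c, np.log(rate), 1)
fit_exp = np.exp(ln_a) * np.exp(k * c)

print(r_squared(rate, fit_poly), r_squared(rate, fit_exp))
```

Here the exponential model is linearized in log space for simplicity; a full nonlinear least-squares fit, as used in the study, would weight the residuals differently but follows the same comparison logic.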
Directory of Open Access Journals (Sweden)
Ali Ramazani
2016-01-01
A general synthetic route for the synthesis of 1,2-dihydro-1-aryl-naphtho[1,2-e][1,3]oxazine-3-one derivatives has been developed using perlite-SO3H nanoparticles as an efficient catalyst under both microwave-assisted and thermal solvent-free conditions. The combination of 2-naphthol, an aldehyde and urea enabled the synthesis of 1,2-dihydro-1-aryl-naphtho[1,2-e][1,3]oxazine-3-one derivatives in the presence of perlite-SO3H nanoparticles in good to excellent yields. This method offers several advantages, such as simple work-up, environmentally benign conditions, and short reaction times along with high yields. To explore the recyclability of the catalyst, the perlite-SO3H nanoparticles were used repeatedly for the same reaction under solvent-free conditions and the change in their catalytic activity was studied. It was found that perlite-SO3H nanoparticles could be reused for four cycles with negligible loss of activity. Single-crystal X-ray structure analysis and theoretical studies were also carried out for product 4i. The electronic properties of the compound have been analyzed using DFT calculations (B3LYP/6-311+G*). The FMO analysis suggests that charge transfer takes place within the molecule; the HOMO is localized mainly on the naphthalene and oxazinone rings, whereas the LUMO resides on the naphthalene ring.
Analogue of Pontryagin's maximum principle for multiple integrals minimization problems
Mikhail, Zelikin
2016-01-01
A theorem analogous to Pontryagin's maximum principle is proved for multiple integrals. Unlike the usual maximum principle, the maximum is taken not over all matrices, but only over matrices of rank one. Examples are given.
Lake Basin Fetch and Maximum Length/Width
Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and maximum width are calculated from the lake polygon...
Processing of fibre composites - challenges for maximum materials performance
Energy Technology Data Exchange (ETDEWEB)
Madsen, B.; Lilholt, H.; Kusano, Y.; Faester, S.; Ralph, B. (eds.)
2013-09-01
The Proceedings book contains 9 invited papers and 26 contributed papers. Among the specific topics covered by the papers are (1) experiments and theoretical models for the analysis of resin rheology, resin cure kinetics, together with fibre assembly permeability for efficient mould filling, and optimisation of process conditions, (2) modelling of residual stresses generated during processing for control and minimization of shape distortion, (3) design and characterisation of the fibre/matrix interface, and the related technological techniques for surface treatment, (4) development of techniques and analyses for the characterisation of the process controlled volumetric composition and microstructural parameters, such as length and orientation of fibres, and the related effect on composite properties, (5) compaction behaviour of fibre assemblies, (6) new types of fibres and matrices, such as bio-based and at nano-scale, and their processing and properties, and finally, (7) comparative studies and systems for selection, monitoring and control of processes. (LN)
Maximum Profit Configurations of Commercial Engines
Directory of Open Access Journals (Sweden)
Yiran Chen
2011-06-01
An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by the initial conditions and the inherent characteristics of the two subsystems, while the different modes of transfer affect the model with respect to the specific forms of the price paths and the instantaneous commodity flow, i.e., the optimal configuration.
The worst case complexity of maximum parsimony.
Carmel, Amir; Musa-Lempel, Noa; Tsur, Dekel; Ziv-Ukelson, Michal
2014-11-01
One of the core classical problems in computational biology is that of constructing the most parsimonious phylogenetic tree interpreting an input set of sequences from the genomes of evolutionarily related organisms. We reexamine the classical maximum parsimony (MP) optimization problem for the general (asymmetric) scoring matrix case, where rooted phylogenies are implied, and analyze the worst case bounds of three approaches to MP: The approach of Cavalli-Sforza and Edwards, the approach of Hendy and Penny, and a new agglomerative, "bottom-up" approach we present in this article. We show that the second and third approaches are faster than the first one by a factor of Θ(√n) and Θ(n), respectively, where n is the number of species.
A maximum power point tracking algorithm for buoy-rope-drum wave energy converters
Wang, J. Q.; Zhang, X. C.; Zhou, Y.; Cui, Z. C.; Zhu, L. S.
2016-08-01
The maximum power point tracking control is the key link in improving the energy conversion efficiency of wave energy converters (WEC). This paper presents a novel variable step size Perturb and Observe maximum power point tracking algorithm with a power classification standard for control of a buoy-rope-drum WEC. The algorithm and the simulation model of the buoy-rope-drum WEC are presented in detail, together with simulation results. The results show that the algorithm tracks the maximum power point of the WEC quickly and accurately.
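A variable step size Perturb and Observe tracker of the general kind described can be sketched as follows; the step bounds, the gain, and the toy power curve are illustrative assumptions, not the authors' power-classification thresholds:

```python
def po_step(p_now, p_prev, step_prev, step_min=0.01, step_max=0.5, gain=0.05):
    """One iteration of a variable step size Perturb & Observe tracker.

    Returns the next signed perturbation of the control variable. The
    step magnitude is scaled by the observed power slope |dP/dx|, so the
    tracker takes large steps far from the maximum power point and small
    steps near it.
    """
    dp = p_now - p_prev
    mag = min(max(gain * abs(dp) / abs(step_prev), step_min), step_max)
    direction = 1.0 if step_prev >= 0 else -1.0
    if dp < 0:
        direction = -direction  # power fell: reverse the perturbation
    return direction * mag

# Track the maximum of a toy power curve P(x) = 1 - (x - 2)**2
x, step = 0.0, 0.3
p_prev = 1 - (x - 2) ** 2
for _ in range(100):
    x += step
    p_now = 1 - (x - 2) ** 2
    step = po_step(p_now, p_prev, step)
    p_prev = p_now
print(x)  # settles into a small band around the maximum at x = 2
```

The residual oscillation around the maximum, bounded by the minimum step size, is the usual trade-off of Perturb and Observe schemes; the power classification standard in the paper serves to tune the step magnitude per operating regime.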
DEFF Research Database (Denmark)
Rakhshan, Mohsen; Vafamand, Navid; Khooban, Mohammad Hassan
2018-01-01
This paper introduces a polynomial fuzzy model (PFM)-based maximum power point tracking (MPPT) control approach to increase the performance and efficiency of the solar photovoltaic (PV) electricity generation. The proposed method relies on a polynomial fuzzy modeling, a polynomial parallel......, a direct maximum power (DMP)-based control structure is considered for MPPT. Using the PFM representation, the DMP-based control structure is formulated in terms of SOS conditions. Unlike the conventional approaches, the proposed approach does not require exploring the maximum power operational point...
Modelling maximum likelihood estimation of availability
International Nuclear Information System (INIS)
Waller, R.A.; Tietjen, G.L.; Rock, G.W.
1975-01-01
Suppose the performance of a nuclear-powered electrical generating plant is continuously monitored to record the sequence of failures and repairs during sustained operation. The purpose of this study is to assess one method of estimating the performance of the power plant when the measure of performance is availability. That is, we determine the probability that the plant is operational at time t. To study the availability of a power plant, we first assume statistical models for the variables X and Y, which denote the time-to-failure and the time-to-repair variables, respectively. Once those statistical models are specified, the availability, A(t), can be expressed as a function of some or all of their parameters. Usually those parameters are unknown in practice and so A(t) is unknown. This paper discusses the maximum likelihood estimator of A(t) when the time-to-failure model for X is an exponential density with parameter lambda and the time-to-repair model for Y is an exponential density with parameter theta. Under the assumption of exponential models for X and Y, it follows that the instantaneous availability at time t is A(t) = lambda/(lambda+theta) + theta/(lambda+theta) exp[-((1/lambda)+(1/theta))t] for t > 0. Also, the steady-state availability is A(infinity) = lambda/(lambda+theta). We use the observations from n failure-repair cycles of the power plant, say X1, X2, ..., Xn, Y1, Y2, ..., Yn, to present the maximum likelihood estimators of A(t) and A(infinity). The exact sampling distributions for those estimators and some statistical properties are discussed before a simulation model is used to determine 95% simulation intervals for A(t). The methodology is applied to two examples which approximate the operating history of two nuclear power plants. (author)
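The plug-in estimation described above is straightforward to sketch: since the MLE of the mean of an exponential density is the sample mean, the MLEs of A(t) and A(infinity) follow by substitution into the closed-form expressions quoted in the abstract. The failure and repair data below are simulated, not from a real plant:

```python
import numpy as np

def availability_mle(x, y, t):
    """Plug-in maximum likelihood estimate of availability.

    x are observed times-to-failure and y observed times-to-repair, both
    exponential with means lambda and theta. The MLEs of the means are
    the sample averages; A(t) and A(infinity) are estimated by
    substituting them into the closed-form expressions.
    """
    lam = np.mean(x)    # MLE of mean time-to-failure
    theta = np.mean(y)  # MLE of mean time-to-repair
    a_inf = lam / (lam + theta)  # steady-state availability
    a_t = a_inf + (theta / (lam + theta)) * np.exp(-(1/lam + 1/theta) * t)
    return a_t, a_inf

rng = np.random.default_rng(0)
x = rng.exponential(100.0, size=50)  # hours to failure, true mean 100
y = rng.exponential(5.0, size=50)    # hours to repair, true mean 5
a_t, a_inf = availability_mle(x, y, t=10.0)
print(a_t, a_inf)
```

Note that A(t) decays from 1 at t = 0 toward the steady-state value, matching the quoted formula; the simulation intervals discussed in the paper would be obtained by repeating this estimate over resampled failure-repair cycles.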
Small scale wind energy harvesting with maximum power tracking
Directory of Open Access Journals (Sweden)
Joaquim Azevedo
2015-07-01
It is well known that energy harvesting from wind can be used to power remote monitoring systems. Several studies have used wind energy in small-scale systems, mainly with vertical-axis wind turbines; however, there are very few studies with actual implementations of small wind turbines. This paper compares the performance of horizontal- and vertical-axis wind turbines for energy harvesting in wireless sensor network applications. The problem with the use of wind energy is that most of the time the wind speed is very low, especially in urban areas. Therefore, this work includes a study of the wind speed distribution in an urban environment and proposes a controller to maximize the energy transfer to the storage systems. The generated power is evaluated by simulation and experimentally for different load and wind conditions. The results demonstrate the increase in efficiency of wind generators that use maximum power transfer tracking, even at low wind speeds.
New algorithm using only one variable measurement applied to a maximum power point tracker
Energy Technology Data Exchange (ETDEWEB)
Salas, V.; Olias, E.; Lazaro, A.; Barrado, A. [University Carlos III de Madrid (Spain). Dept. of Electronic Technology
2005-05-01
A novel algorithm for seeking the maximum power point of a photovoltaic (PV) array at any temperature and solar irradiation level, needing only the PV current value, is proposed. Satisfactory theoretical and experimental results are presented; they were obtained with the algorithm running on a 100 W, 24 V PV buck converter prototype using an inexpensive microcontroller. The load of the system was a battery and a resistance. The main advantage of this new maximum power point tracking (MPPT) method, when compared with others, is that it only requires measurement of the photovoltaic current, I_PV. (author)
Measurement of the Barkas effect around the stopping-power maximum for light and heavy targets
International Nuclear Information System (INIS)
Moeller, S.P.; Knudsen, H.; Mikkelsen, U.; Paludan, K.; Morenzoni, E.
1997-01-01
The first direct measurements of antiproton stopping powers around the stopping power maximum are presented. The LEAR antiproton-beam of 5.9 MeV is degraded to 50-700 keV, and the energy-loss is found by measuring the antiproton velocity before and after the target. The antiproton stopping powers of Si and Au are found to be reduced by 30 and 40% near the electronic stopping power maximum as compared to the equivalent proton stopping power. The Barkas effect, that is the stopping power difference between protons and antiprotons, is extracted and compared to theoretical estimates. (orig.)
Hydrodynamic Relaxation of an Electron Plasma to a Near-Maximum Entropy State
International Nuclear Information System (INIS)
Rodgers, D. J.; Servidio, S.; Matthaeus, W. H.; Mitchell, T. B.; Aziz, T.; Montgomery, D. C.
2009-01-01
Dynamical relaxation of a pure electron plasma in a Malmberg-Penning trap is studied, comparing experiments, numerical simulations and statistical theories of weakly dissipative two-dimensional (2D) turbulence. Simulations confirm that the dynamics are approximated well by a 2D hydrodynamic model. Statistical analysis favors a theoretical picture of relaxation to a near-maximum entropy state with constrained energy, circulation, and angular momentum. This provides evidence that 2D electron fluid relaxation in a turbulent regime is governed by principles of maximum entropy.
Maximum Entropy, Word-Frequency, Chinese Characters, and Multiple Meanings
Yan, Xiaoyong; Minnhagen, Petter
2015-01-01
The word-frequency distribution of a text written by an author is well accounted for by a maximum entropy distribution, the RGF (random group formation) prediction. The RGF distribution is completely determined by the a priori values of the total number of words in the text (M), the number of distinct words (N) and the number of repetitions of the most common word (kmax). It is shown here that this maximum entropy prediction also describes a text written in Chinese characters. In particular, it is shown that although the same Chinese text written in words and in Chinese characters yields quite differently shaped distributions, both are nevertheless well predicted by their respective three a priori characteristic values. It is pointed out that this is analogous to the change in the shape of the distribution when translating a given text to another language. Another consequence of the RGF prediction is that taking a part of a long text will change the input parameters (M, N, kmax) and consequently also the shape of the frequency distribution. This is explicitly confirmed for texts written in Chinese characters. Since the RGF prediction has no system-specific information beyond the three a priori values (M, N, kmax), any specific language characteristic has to be sought in systematic deviations between the RGF prediction and the measured frequencies. One such systematic deviation is identified and, through a statistical information theoretical argument and an extended RGF model, it is proposed that this deviation is caused by multiple meanings of Chinese characters. The effect is stronger for Chinese characters than for Chinese words. The relation between Zipf's law, the Simon model for texts and the present results is discussed. PMID:25955175
Radiation pressure acceleration: The factors limiting maximum attainable ion energy
Energy Technology Data Exchange (ETDEWEB)
Bulanov, S. S.; Esarey, E.; Schroeder, C. B. [Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Bulanov, S. V. [KPSI, National Institutes for Quantum and Radiological Science and Technology, Kizugawa, Kyoto 619-0215 (Japan); A. M. Prokhorov Institute of General Physics RAS, Moscow 119991 (Russian Federation); Esirkepov, T. Zh.; Kando, M. [KPSI, National Institutes for Quantum and Radiological Science and Technology, Kizugawa, Kyoto 619-0215 (Japan); Pegoraro, F. [Physics Department, University of Pisa and Istituto Nazionale di Ottica, CNR, Pisa 56127 (Italy); Leemans, W. P. [Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Physics Department, University of California, Berkeley, California 94720 (United States)
2016-05-15
Radiation pressure acceleration (RPA) is a highly efficient mechanism of laser-driven ion acceleration, with near complete transfer of the laser energy to the ions in the relativistic regime. However, there is a fundamental limit on the maximum attainable ion energy, which is determined by the group velocity of the laser. Tightly focused laser pulses have group velocities smaller than the vacuum light speed, and, since they offer the high intensity needed for the RPA regime, it is plausible that group velocity effects would manifest themselves in experiments involving tightly focused pulses and thin foils. However, in this case, finite spot size effects are important, and another limiting factor, the transverse expansion of the target, may dominate over the group velocity effect. As the laser pulse diffracts after passing the focus, the target expands accordingly due to the transverse intensity profile of the laser. Due to this expansion, the areal density of the target decreases, making it transparent to radiation and effectively terminating the acceleration. The off-normal incidence of the laser on the target, due either to the experimental setup or to the deformation of the target, will also impose a limit on the maximum ion energy.
Maximum Correntropy Unscented Kalman Filter for Spacecraft Relative State Estimation
Directory of Open Access Journals (Sweden)
Xi Liu
2016-09-01
A new algorithm called the maximum correntropy unscented Kalman filter (MCUKF) is proposed and applied to relative state estimation in space communication networks. As is well known, the unscented Kalman filter (UKF) provides an efficient tool for solving nonlinear state estimation problems. However, the UKF performs well only under Gaussian noise; its performance may deteriorate substantially in the presence of non-Gaussian noise, especially when the measurements are disturbed by heavy-tailed impulsive noise. By making use of the maximum correntropy criterion (MCC), the proposed algorithm enhances the robustness of the UKF against impulsive noise. In the MCUKF, the unscented transformation (UT) is applied to obtain a predicted state estimate and covariance matrix, and a nonlinear regression method with the MCC cost is then used to reformulate the measurement information. Finally, the UT is applied to the measurement equation to obtain the filter state and covariance matrix. Illustrative examples demonstrate the superior performance of the new algorithm.
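The maximum correntropy criterion itself can be illustrated outside the filtering context. A fixed-point iteration that maximizes the Gaussian-kernel correntropy of the residuals gives a location estimate that downweights impulsive outliers, which is the same mechanism the MCUKF exploits in its regression step. The kernel bandwidth and the data here are illustrative choices, not values from the paper:

```python
import numpy as np

def mcc_mean(z, sigma=1.0, iters=50):
    """Robust location estimate under the maximum correntropy criterion.

    Maximizes sum_i exp(-(z_i - m)**2 / (2*sigma**2)) over m by a
    fixed-point iteration: each sample is reweighted by its Gaussian
    kernel value, so heavy-tailed outliers get exponentially small
    weight.
    """
    m = np.median(z)  # robust initial guess
    for _ in range(iters):
        w = np.exp(-(z - m) ** 2 / (2 * sigma ** 2))
        m = np.sum(w * z) / np.sum(w)
    return m

rng = np.random.default_rng(1)
z = rng.normal(5.0, 0.5, size=200)
z[:20] = 100.0  # impulsive outliers
print(np.mean(z), mcc_mean(z))  # the plain mean is dragged up; MCC stays near 5
```

Under Gaussian noise with a large bandwidth the MCC estimate approaches the ordinary least-squares (mean) solution, which is why the MCUKF retains UKF-like behavior when no impulsive noise is present.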
A maximum power point tracking algorithm for photovoltaic applications
Nelatury, Sudarshan R.; Gray, Robert
2013-05-01
The voltage-current characteristic of a photovoltaic (PV) cell is highly nonlinear, and operating a PV cell for maximum power transfer has long been a challenge. Several techniques have been proposed to estimate and track the maximum power point (MPP) in order to improve the overall efficiency of a PV panel. A strategic use of the mean value theorem permits obtaining an analytical expression for a point that lies in a close neighborhood of the true MPP. To date, however, no exact closed-form solution for the MPP has been published. The problem can be formulated analytically as a constrained optimization, which can be solved using the Lagrange method. This method results in a system of simultaneous nonlinear equations that is difficult to solve directly, but a recursive algorithm yields a reasonably good solution. In graphical terms, if the voltage-current characteristic and the constant-power contours are plotted on the same voltage-current plane, the point of tangency between the device characteristic and the constant-power contours is the sought-for MPP. The MPP changes with the incident irradiation and temperature, so an algorithm that attempts to maintain it should be adaptive, with fast convergence and minimal misadjustment. There are two parts to the implementation. First, one needs to estimate the MPP. The second task is to have a DC-DC converter to match the given load to the MPP thus obtained. The availability of power electronics circuits has made it possible to design efficient converters. In this paper, although we do not show results from a real circuit, we use MATLAB to obtain the MPP and a buck-boost converter to match the load. Under varying conditions of load resistance and irradiance we demonstrate MPP tracking for a commercially available solar panel, the MSX-60. The power electronics circuit is simulated with PSIM software.
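The tangency condition described above, dP/dV = 0 with P = V·I(V), can be solved by a simple recursive (Newton) iteration for a single-diode PV model. The diode parameters below are illustrative placeholders, not the MSX-60 datasheet values:

```python
import math

def mpp_single_diode(i_ph=3.8, i_0=2e-7, n_vt=1.5 * 0.0257 * 36):
    """Locate the maximum power point of a simplified single-diode model,
    I(V) = i_ph - i_0*(exp(V/n_vt) - 1), by Newton iteration on
    f(V) = dP/dV = I(V) + V*dI/dV = 0.

    i_ph: photocurrent (A); i_0: diode saturation current (A);
    n_vt: ideality factor x thermal voltage x cells in series (V).
    """
    def current(v):
        return i_ph - i_0 * (math.exp(v / n_vt) - 1.0)

    def f(v):  # dP/dV
        di_dv = -(i_0 / n_vt) * math.exp(v / n_vt)
        return current(v) + v * di_dv

    def f_prime(v):  # d2P/dV2
        di_dv = -(i_0 / n_vt) * math.exp(v / n_vt)
        return 2.0 * di_dv + v * (di_dv / n_vt)

    v = n_vt * math.log(i_ph / i_0)  # start near the open-circuit voltage
    for _ in range(50):
        v -= f(v) / f_prime(v)
    return v, v * current(v)

v_mpp, p_mpp = mpp_single_diode()
print(v_mpp, p_mpp)
```

Starting from the open-circuit voltage keeps the iteration on the steep side of the power curve, where dP/dV is monotone, so the recursion converges to the tangency point without oscillation.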
Surface physics theoretical models and experimental methods
Mamonova, Marina V; Prudnikova, I A
2016-01-01
The demands of production, such as thin films in microelectronics, rely on consideration of factors influencing the interaction of dissimilar materials that make contact with their surfaces. Bond formation between surface layers of dissimilar condensed solids-termed adhesion-depends on the nature of the contacting bodies. Thus, it is necessary to determine the characteristics of adhesion interaction of different materials from both applied and fundamental perspectives of surface phenomena. Given the difficulty in obtaining reliable experimental values of the adhesion strength of coatings, the theoretical approach to determining adhesion characteristics becomes more important. Surface Physics: Theoretical Models and Experimental Methods presents straightforward and efficient approaches and methods developed by the authors that enable the calculation of surface and adhesion characteristics for a wide range of materials: metals, alloys, semiconductors, and complex compounds. The authors compare results from the ...
Su, Luning; Li, Wei; Wu, Mingxuan; Su, Yun; Guo, Chongling; Ruan, Ningjuan; Yang, Bingxin; Yan, Feng
2017-08-01
Lobster-eye optics is widely applied in space x-ray detection missions and x-ray security checks owing to its wide field of view and low weight. This paper presents a theoretical model for the spatial distribution of focusing efficiency of lobster-eye optics at soft x-ray wavelengths. The calculations reveal a competition between the contributions of the geometrical parameters of the lobster-eye optics and the reflectivity of the iridium film to the focusing efficiency. In addition, maps of focusing efficiency as a function of x-ray wavelength further explain the influence of different geometrical parameters and different soft x-ray wavelengths on the focusing efficiency. These results could help optimize the parameters of lobster-eye optics to achieve maximum focusing efficiency.
Hu, Beibei; Shi, Haifeng; Zhang, Yixin
2018-06-01
We theoretically study the fiber-coupling efficiency of Gaussian-Schell model beams propagating through oceanic turbulence. The expression for the fiber-coupling efficiency is derived from the spatial power spectrum of oceanic turbulence and the cross-spectral density function. Our work shows that salinity fluctuations have a greater impact on the fiber-coupling efficiency than temperature fluctuations do. Selecting a longer wavelength λ in the "ocean window" and a light source of higher spatial coherence improves the fiber-coupling efficiency of the communication link, and the maximum fiber-coupling efficiency can be achieved by choosing the design parameter according to the specific oceanic turbulence condition. Our results can aid the design of optical communication links through oceanic turbulence to fiber sensors.
International Nuclear Information System (INIS)
2010-01-01
After a speech by the CEA's (Commissariat a l'Energie Atomique) general administrator presenting energy efficiency as a first-rank challenge for the planet and for France, this publication proposes several contributions: a discussion of the efficiency of nuclear energy, an economic analysis of the value of R&D on fourth-generation fast reactors, discussions of biofuels and of the relationship between energy efficiency and economic competitiveness, and a discussion of solar photovoltaic efficiency.
Maximum wind energy extraction strategies using power electronic converters
Wang, Quincy Qing
2003-10-01
This thesis focuses on maximum wind energy extraction strategies for achieving the highest energy output of variable speed wind turbine power generation systems. Power electronic converters and controls provide the basic platform to accomplish the research of this thesis in both hardware and software aspects. In order to send wind energy to a utility grid, a variable speed wind turbine requires a power electronic converter to convert a variable voltage variable frequency source into a fixed voltage fixed frequency supply. Generic single-phase and three-phase converter topologies, converter control methods for wind power generation, as well as the developed direct drive generator, are introduced in the thesis for establishing variable-speed wind energy conversion systems. Variable speed wind power generation system modeling and simulation are essential methods both for understanding the system behavior and for developing advanced system control strategies. Wind generation system components, including wind turbine, 1-phase IGBT inverter, 3-phase IGBT inverter, synchronous generator, and rectifier, are modeled in this thesis using MATLAB/SIMULINK. The simulation results have been verified by a commercial simulation software package, PSIM, and confirmed by field test results. Since the dynamic time constants for these individual models are much different, a creative approach has also been developed in this thesis to combine these models for entire wind power generation system simulation. An advanced maximum wind energy extraction strategy relies not only on proper system hardware design, but also on sophisticated software control algorithms. Based on literature review and computer simulation on wind turbine control algorithms, an intelligent maximum wind energy extraction control algorithm is proposed in this thesis. This algorithm has a unique on-line adaptation and optimization capability, which is able to achieve maximum wind energy conversion efficiency through
The power of theoretical knowledge.
Alligood, Martha Raile
2011-10-01
Nursing theoretical knowledge has demonstrated powerful contributions to education, research, administration and professional practice for guiding nursing thought and action. That knowledge has shifted the primary focus of the nurse from nursing functions to the person. Theoretical views of the person raise new questions, create new approaches and instruments for nursing research, and expand nursing scholarship throughout the world.
Maximum mass of magnetic white dwarfs
International Nuclear Information System (INIS)
Paret, Daryel Manreza; Horvath, Jorge Ernesto; Martínez, Aurora Perez
2015-01-01
We revisit the problem of the maximum masses of magnetized white dwarfs (WDs). The impact of a strong magnetic field on the structure equations is addressed. The pressures become anisotropic due to the presence of the magnetic field and split into parallel and perpendicular components. We first construct stable solutions of the Tolman-Oppenheimer-Volkoff equations for parallel pressures and find that physical solutions vanish for the perpendicular pressure when B ≳ 10^13 G. This fact establishes an upper bound for a magnetic field and the stability of the configurations in the (quasi) spherical approximation. Our findings also indicate that it is not possible to obtain stable magnetized WDs with super-Chandrasekhar masses because the values of the magnetic field needed for them are higher than this bound. To proceed into the anisotropic regime, we can apply results for structure equations appropriate for a cylindrical metric with anisotropic pressures that were derived in our previous work. From the solutions of the structure equations in cylindrical symmetry we have confirmed the same bound of B ∼ 10^13 G, since beyond this value no physical solutions are possible. Our tentative conclusion is that massive WDs with masses well beyond the Chandrasekhar limit do not constitute stable solutions and should not exist. (paper)
TRENDS IN ESTIMATED MIXING DEPTH DAILY MAXIMUMS
Energy Technology Data Exchange (ETDEWEB)
Buckley, R; Amy DuPont, A; Robert Kurzeja, R; Matt Parker, M
2007-11-12
Mixing depth is an important quantity in the determination of air pollution concentrations. Fire weather forecasts depend strongly on estimates of the mixing depth as a means of determining the altitude and dilution (ventilation rates) of smoke plumes. The Savannah River United States Forest Service (USFS) routinely conducts prescribed fires at the Savannah River Site (SRS), a heavily wooded Department of Energy (DOE) facility located in southwest South Carolina. For many years, the Savannah River National Laboratory (SRNL) has provided forecasts of weather conditions in support of the fire program, including an estimated mixing depth based on potential temperature and turbulence change with height at a given location. This paper examines trends in the average estimated mixing depth daily maximum at the SRS over an extended period of time (4.75 years), derived from numerical atmospheric simulations using two versions of the Regional Atmospheric Modeling System (RAMS). This allows differences between the model versions to be seen, as well as trends on a multi-year time frame. In addition, comparisons of predicted mixing depth for individual days on which special balloon soundings were released are discussed.
Mammographic image restoration using maximum entropy deconvolution
International Nuclear Information System (INIS)
Jannetta, A; Jackson, J C; Kotre, C J; Birch, I P; Robson, K J; Padgett, R
2004-01-01
An image restoration approach based on a Bayesian maximum entropy method (MEM) has been applied to a radiological image deconvolution problem, that of reduction of geometric blurring in magnification mammography. The aim of the work is to demonstrate an improvement in image spatial resolution in realistic noisy radiological images with no associated penalty in terms of reduction in the signal-to-noise ratio perceived by the observer. Images of the TORMAM mammographic image quality phantom were recorded using the standard magnification settings of 1.8 magnification/fine focus and also at 1.8 magnification/broad focus and 3.0 magnification/fine focus; the latter two arrangements would normally give rise to unacceptable geometric blurring. Measured point-spread functions were used in conjunction with the MEM image processing to de-blur these images. The results are presented as comparative images of phantom test features and as observer scores for the raw and processed images. Visualization of high resolution features and the total image scores for the test phantom were improved by the application of the MEM processing. It is argued that this successful demonstration of image de-blurring in noisy radiological images offers the possibility of weakening the link between focal spot size and geometric blurring in radiology, thus opening up new approaches to system optimization.
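To make the measured-PSF deconvolution setting concrete, here is a minimal sketch. It uses Richardson-Lucy iteration, a different (likelihood-based) scheme than the Bayesian MEM of the paper, on 1-D stand-ins for images; the signal, PSF width, and iteration count are illustrative assumptions.

```python
# Hedged sketch: de-blurring with a known point-spread function via
# Richardson-Lucy iteration (NOT the paper's MEM algorithm). 1-D arrays
# stand in for images; the spike signal and Gaussian PSF are made up.
import numpy as np

def richardson_lucy(blurred, psf, n_iter=50):
    psf = psf / psf.sum()
    psf_flipped = psf[::-1]                       # adjoint of the blur
    estimate = np.full_like(blurred, blurred.mean())
    for _ in range(n_iter):
        reblurred = np.convolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(reblurred, 1e-12)
        estimate *= np.convolve(ratio, psf_flipped, mode="same")
    return estimate

# Blur two nearby spikes with a Gaussian PSF, then sharpen them again.
x = np.zeros(64); x[30] = 1.0; x[36] = 0.6
t = np.arange(-6, 7)
psf = np.exp(-0.5 * (t / 2.0) ** 2)
blurred = np.convolve(x, psf / psf.sum(), mode="same")
restored = richardson_lucy(blurred, psf)
```

The multiplicative update preserves non-negativity and concentrates intensity back toward the original spikes, which is the qualitative effect the paper obtains (by a different route) for blurred phantom features.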
Maximum Margin Clustering of Hyperspectral Data
Niazmardi, S.; Safari, A.; Homayouni, S.
2013-09-01
In recent decades, large margin methods such as Support Vector Machines (SVMs) have been regarded as the state of the art among supervised learning methods for classification of hyperspectral data. However, the results of these algorithms depend strongly on the quality and quantity of the available training data. To tackle the problems associated with training data, researchers have put effort into extending large margin algorithms to unsupervised learning. One recently proposed algorithm is Maximum Margin Clustering (MMC). MMC is an unsupervised SVM-based algorithm that simultaneously estimates both the labels and the hyperplane parameters. Nevertheless, the optimization underlying MMC is a non-convex problem. Most existing MMC methods rely on reformulating and relaxing the non-convex optimization problem as a semi-definite program (SDP), which is computationally very expensive and can only handle small data sets. Moreover, most of these algorithms address only two-class classification, which limits their use for classification of remotely sensed data. In this paper, a new MMC algorithm is used that solves the original non-convex problem using an alternating optimization method. This algorithm is also extended to multi-class classification, and its performance is evaluated. The results show that the proposed algorithm achieves acceptable results for hyperspectral data clustering.
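The alternating-optimization idea, iterate between fitting a separating hyperplane for fixed labels and relabeling points from the hyperplane's decision values, can be sketched on a toy two-class problem. This is an illustrative sketch, not the paper's algorithm: a least-squares hyperplane stands in for the SVM subproblem, and a median threshold enforces class balance to avoid the trivial one-cluster solution.

```python
# Toy sketch of alternating optimization for maximum margin clustering.
# Assumptions: least-squares fit replaces the SVM subproblem; balanced
# relabeling via the median of the decision values; two classes only.
import numpy as np

def decision_values(X, y):
    # Least-squares stand-in for the large-margin subproblem: regress +/-1 targets.
    A = np.hstack([X, np.ones((len(X), 1))])        # append a bias column
    t = np.where(y == 1, 1.0, -1.0)
    w, *_ = np.linalg.lstsq(A, t, rcond=None)
    return A @ w

def mmc_alternating(X, n_iter=20):
    y = (X[:, 0] > np.median(X[:, 0])).astype(int)  # crude initial split
    for _ in range(n_iter):
        scores = decision_values(X, y)
        new_y = (scores > np.median(scores)).astype(int)  # balanced relabeling
        if np.array_equal(new_y, y):
            break                                   # labels stable: converged
        y = new_y
    return y

# Two well-separated blobs: the relabeling should keep them apart.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-3.0, 0.5, (50, 2)), rng.normal(3.0, 0.5, (50, 2))])
labels = mmc_alternating(X)
```

Real MMC formulations optimize the margin jointly over labels and hyperplane under a balance constraint; the loop above only conveys the alternating structure.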
Paving the road to maximum productivity.
Holland, C
1998-01-01
"Job security" is an oxymoron in today's environment of downsizing, mergers, and acquisitions. Workers find themselves living by new rules in the workplace that they may not understand. How do we cope? It is the leader's charge to take advantage of this chaos and create conditions under which his or her people can understand the need for change and come together with a shared purpose to effect that change. The clinical laboratory at Arkansas Children's Hospital has taken advantage of this chaos to downsize and to redesign how the work gets done to pave the road to maximum productivity. After initial hourly cutbacks, the workers accepted the cold, hard fact that they would never get their old world back. They set goals to proactively shape their new world through reorganizing, flexing staff with workload, creating a rapid response laboratory, exploiting information technology, and outsourcing. Today the laboratory is a lean, productive machine that accepts change as a way of life. We have learned to adapt, trust, and support each other as we have journeyed together over the rough roads. We are looking forward to paving a new fork in the road to the future.
Maximum power flux of auroral kilometric radiation
International Nuclear Information System (INIS)
Benson, R.F.; Fainberg, J.
1991-01-01
The maximum auroral kilometric radiation (AKR) power flux observed by distant satellites has been increased by more than a factor of 10 from previously reported values. This increase has been achieved by a new data selection criterion and a new analysis of antenna spin modulated signals received by the radio astronomy instrument on ISEE 3. The method relies on selecting AKR events containing signals in the highest-frequency channel (1980 kHz), followed by a careful analysis that effectively increased the instrumental dynamic range by more than 20 dB by making use of the spacecraft antenna gain diagram during a spacecraft rotation. This analysis has allowed the separation of real signals from those created in the receiver by overloading. Many signals having the appearance of AKR harmonic signals were shown to be of spurious origin. During one event, however, real second harmonic AKR signals were detected even though the spacecraft was at a great distance (17 R_E) from Earth. During another event, when the spacecraft was at the orbital distance of the Moon and on the morning side of Earth, the power flux of fundamental AKR was greater than 3 × 10⁻¹³ W m⁻² Hz⁻¹ at 360 kHz, normalized to a radial distance r of 25 R_E assuming the power falls off as r⁻². A comparison of these intense signal levels with the most intense source region values (obtained by ISIS 1 and Viking) suggests that multiple sources were observed by ISEE 3.
Ancestral Sequence Reconstruction with Maximum Parsimony.
Herbst, Lina; Fischer, Mareike
2017-12-01
One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference and for ancestral sequence inference is Maximum Parsimony (MP). In this manuscript, we focus on this method and on ancestral state inference for fully bifurcating trees. In particular, we investigate a conjecture published by Charleston and Steel in 1995 concerning the number of species which need to have a particular state, say a, at a particular site in order for MP to unambiguously return a as an estimate for the state of the last common ancestor. We prove the conjecture for all even numbers of character states, which is the most relevant case in biology. We also show that the conjecture does not hold in general for odd numbers of character states, but also present some positive results for this case.
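The bottom-up pass of Fitch's algorithm, the classical way MP assigns candidate ancestral states on a bifurcating tree, can be sketched in a few lines. The nested-tuple tree encoding is an illustrative assumption, not the authors' notation.

```python
# Minimal sketch of Fitch's maximum parsimony pass on a rooted binary tree.
# Assumption: trees are nested 2-tuples and leaves carry a single character
# state; the pass returns the candidate state set at each internal node.
def fitch(node):
    """Return the Fitch state set for `node`; leaves are bare states."""
    if not isinstance(node, tuple):          # leaf
        return {node}
    left, right = fitch(node[0]), fitch(node[1])
    common = left & right
    # Intersection if non-empty (no change implied), else union (one change).
    return common if common else left | right

# ((A,A),(A,C)): three of four leaves carry 'A', so MP unambiguously
# infers 'A' at the root, the kind of threshold the conjecture quantifies.
tree = (('A', 'A'), ('A', 'C'))
root_states = fitch(tree)
```

When the root set contains more than one state, the ancestral state is ambiguous under MP; the Charleston-Steel conjecture discussed above concerns how many leaves must share a state before this ambiguity disappears.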
Preliminary application of maximum likelihood method in HL-2A Thomson scattering system
International Nuclear Information System (INIS)
Yao Ke; Huang Yuan; Feng Zhen; Liu Chunhua; Li Enping; Nie Lin
2010-01-01
A maximum likelihood method for processing the data of the HL-2A Thomson scattering system is presented. Using mathematical statistics, this method maximizes the likelihood of the observed data under the theoretical model, so that a more accurate result can be obtained. It has proved applicable in comparison with the ratios method, and some of the drawbacks of the ratios method do not appear in this new one. (authors)
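The idea of fitting all channels jointly, rather than forming a single channel ratio, can be illustrated with a toy maximum likelihood estimate. The per-channel model `channel_fractions` below is an arbitrary stand-in, not the HL-2A spectral response, and the grid search is the simplest possible maximizer.

```python
# Toy illustration: with Poisson-distributed photon counts in a few spectral
# channels, the ML estimate maximizes the joint likelihood of all channels.
# Assumptions: the channel model f(T) is made up; T is in arbitrary units.
import numpy as np

def channel_fractions(T):
    # Hypothetical smooth dependence of per-channel fractions on temperature T.
    w = np.array([1.0, np.exp(-1.0 / T), np.exp(-2.0 / T)])
    return w / w.sum()

def ml_temperature(counts, total, grid):
    # Poisson log-likelihood up to a constant: sum_i n_i*log(mu_i) - mu_i.
    logL = [np.sum(counts * np.log(total * channel_fractions(T))
                   - total * channel_fractions(T)) for T in grid]
    return grid[int(np.argmax(logL))]

rng = np.random.default_rng(1)
true_T, total = 2.0, 20000
counts = rng.poisson(total * channel_fractions(true_T))
grid = np.linspace(0.5, 5.0, 451)
T_hat = ml_temperature(counts, total, grid)
```

Because every channel contributes its own likelihood term, low-count channels are weighted appropriately instead of amplifying noise the way a single ratio can.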
Radiotherapy problem under fuzzy theoretic approach
International Nuclear Information System (INIS)
Ammar, E.E.; Hussein, M.L.
2003-01-01
A fuzzy set theoretic approach is used for a radiotherapy problem. The problem involves two goals: the first is to maximize the fraction of surviving normal cells and the second is to minimize the fraction of surviving tumor cells. The theory of fuzzy sets has been employed to formulate and solve the problem. A linguistic variable approach is used for treating the first goal. The solutions obtained by the modified approach are always efficient and represent a best compromise. A sensitivity analysis of the solutions with respect to the differential weights is given.
49 CFR 230.24 - Maximum allowable stress.
2010-10-01
§ 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...
20 CFR 226.52 - Total annuity subject to maximum.
2010-04-01
Computing Employee, Spouse, and Divorced Spouse Annuities; Railroad Retirement Family Maximum. § 226.52 Total annuity subject to maximum. The total annuity amount which is compared to the maximum monthly amount to...
Nanocrystalline dye-sensitized solar cells having maximum performance
Energy Technology Data Exchange (ETDEWEB)
Kroon, M.; Bakker, N.J.; Smit, H.J.P. [ECN Solar Energy, Petten (Netherlands); Liska, P.; Thampi, K.R.; Wang, P.; Zakeeruddin, S.M.; Graetzel, M. [LPI-ISIC, Ecole Polytechnique Federale de Lausanne EPFL, Station 6, CH-1015 Lausanne (Switzerland); Hinsch, A. [Fraunhofer Institute for Solar Energy Systems ISE, Heidenhofstr.2, D-79110 Freiburg (Germany); Hore, S.; Wuerfel, U.; Sastrawan, R. [Freiburg Materials Research Centre FMF, Stefan-Meier Str. 21, 79104 Freiburg (Germany); Durrant, J.R.; Palomares, E. [Centre for Electronic Materials and Devices, Department of Chemistry, Imperial College London, Exhibition road SW7 2AY (United Kingdom); Pettersson, H.; Gruszecki, T. [IVF Industrial Research and Development Corporation, Argongatan 30, SE-431 53 Moelndal (Sweden); Walter, J.; Skupien, K. [Cracow University of Technology CUTECH, Jana Pawla II 37, 31-864 Cracow (Poland); Tulloch, G.E. [Greatcell Solar SA GSA, Ave Henry-Warnery 4, 1006 Lausanne (Switzerland)
2007-01-15
This paper presents an overview of the research carried out by a European consortium with the aim to develop and test new and improved ways to realise dye-sensitized solar cells (DSC) with enhanced efficiencies and stabilities. Several new areas have been explored in the field of new concepts and materials, fabrication protocols for TiO2 and scatterlayers, metal oxide blocking layers, strategies for co-sensitization and low temperature processes of platinum deposition. Fundamental understanding of the working principles has been gained by means of electrical and optical modelling and advanced characterization techniques. Cost analyses have been made to demonstrate the potential of DSC as a low cost thin film PV technology. The combined efforts have led to maximum non-certified power conversion efficiencies under full sunlight of 11% for areas <0.2 cm² and 10.1% for a cell with an active area of 1.3 cm². Lifetime studies revealed negligible device degradation after 1000 hrs of accelerated tests under thermal stress at 80°C in the dark and visible light soaking at 60°C. An outlook summarizing future directions in the research and large-scale production of DSC is presented.
Mixed integer linear programming for maximum-parsimony phylogeny inference.
Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell
2008-01-01
Reconstruction of phylogenetic trees is a fundamental problem in computational biology. While excellent heuristic methods are available for many variants of this problem, new advances in phylogeny inference will be required if we are to be able to continue to make effective use of the rapidly growing stores of variation data now being gathered. In this paper, we present two integer linear programming (ILP) formulations to find the most parsimonious phylogenetic tree from a set of binary variation data. One method uses a flow-based formulation that can produce exponential numbers of variables and constraints in the worst case. The method has, however, proven extremely efficient in practice on datasets that are well beyond the reach of the available provably efficient methods, solving several large mtDNA and Y-chromosome instances within a few seconds and giving provably optimal results in times competitive with fast heuristics that cannot guarantee optimality. An alternative formulation establishes that the problem can be solved with a polynomial-sized ILP. We further present a web server developed based on the exponential-sized ILP that performs fast maximum parsimony inferences and serves as a front end to a database of precomputed phylogenies spanning the human genome.
Half-width at half-maximum, full-width at half-maximum analysis
Indian Academy of Sciences (India)
addition to the well-defined parameter full-width at half-maximum (FWHM). The distribution of ... optical side-lobes in the diffraction pattern resulting in steep central maxima [6], reduction of effects of ... and broad central peak. The idea of.
Gall, P. D.
1984-01-01
Improving the aerodynamic characteristics of an airplane with respect to maximizing lift and minimizing induced and parasite drag is of primary importance in designing lighter, faster, and more efficient aircraft. Previous research has shown that a properly designed biplane wing system can perform better than an equivalent monoplane system with regard to maximizing the lift-to-drag ratio and efficiency factor. Biplanes offer several potential advantages over equivalent monoplanes, such as a 60-percent reduction in weight, greater structural integrity, and increased roll response. The purpose of this research is to examine, both theoretically and experimentally, the possibility of further improving the aerodynamic characteristics of the biplane configuration by adding winglets. Theoretical predictions were carried out utilizing vortex-lattice theory, which is a numerical method based on potential flow theory. Experimental data were obtained by testing a model in the Pennsylvania State University's subsonic wind tunnel at a Reynolds number of 510,000. The results showed that the addition of winglets improved the performance of the biplane with respect to increasing the lift-curve slope, increasing the maximum lift coefficient, increasing the efficiency factor, and decreasing the induced drag. A listing of the program is included in the Appendix.
Vehicle efficiency and agriculture transport in Ghana
International Nuclear Information System (INIS)
Delaquis, M.R.
1993-01-01
The vehicle operating cost (VOC) associated with the transportation of agricultural commodities in Ghana is studied, using the Kumasi and Ashanti region as a case study. The present state of the agriculture sector is described in terms of three interactive systems: the transport system, the agriculture system, and the flow pattern of vehicles and commodities. A survey is used as an information base to construct a total operating cost (TOC) model based on average actual operating conditions. The TOC model is expanded to include costs under three theoretical operating conditions: enforced loading, maximum vehicle utilization, and increased fuel efficiency. Three options identified as potentially beneficial to the transport industry and the Ghanaian economy are presented and evaluated: using larger vehicles, maximizing vehicle utilization, and increasing fuel economy. The effects of implementation on the parties involved (producers, transport owners and operators, transport organizations and government) are taken into account. It is recommended that the Ghanaian government institute the following programs and policies: enforce registered loading allowance; encourage higher vehicle utilization by controlling the number of vehicles registered and ensuring adequate service; and encourage use of larger vehicles. Using foreign aid to effect fleet and operational changes, rather than focusing on capital-intensive infrastructure improvements, is also recommended as a way to improve transport efficiency. 30 refs., 28 figs., 23 tabs
Energy Technology Data Exchange (ETDEWEB)
Vahala, George M. [College of William and Mary, Williamsburg, VA (United States)
2013-12-31
with the electric field only being about three times higher than in the ideal case. Moreover, the quasi-optical grill has significantly fewer structural elements than the multijunction grill. Nevertheless, there has not been much interest from experimental fusion groups in implementing these structures. Hence we have returned to optimizing the multijunction grill so that the large number of coupling matrix elements can be efficiently evaluated using symmetry arguments. In overdense plasmas, the standard electromagnetic waves cannot propagate into the plasma center, but are reflected at the plasma edge. By optimizing mode conversion processes (in particular, the O-X-B wave propagation of an Ordinary mode converting to an Extraordinary mode which then converts into an electrostatic Bernstein wave) one can excite within the plasma an electrostatic Bernstein wave that does not suffer density cutoffs and is absorbed on the electron cyclotron harmonics. Finally, we have started looking at other mesoscopic lattice algorithms that involve unitary collision and streaming steps. Because these algorithms are unitary, they can be run on quantum computers when they become available, unlike their computational cousin, lattice Boltzmann, which is a purely classical code. These quantum lattice gas algorithms have been tested successfully on exact analytic soliton collision solutions. It is hoped that these calculations can be used to study Bose-Einstein condensed atomic gases and their ground states in an optical lattice.
Examination of Maximum Power Point Tracking on the EV for Installing on Windmill
雪田, 和人; 細江, 忠司; 小田切, 雄也; 後藤, 泰之; 一柳, 勝宏
2006-01-01
This paper proposes operating a wind generator system with higher efficiency by using wind collection equipment and maximum power point tracking. As an example application, its use for regeneration in an electric vehicle is proposed. An efficiency improvement in the electric vehicle can be expected by introducing the proposed system alongside conventional regeneration. A field experiment was carried out in order to measure the effect. Regeneration energy by pro...
Cosmic shear measurement with maximum likelihood and maximum a posteriori inference
Hall, Alex; Taylor, Andy
2017-06-01
We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.
Efficient convolutional sparse coding
Wohlberg, Brendt
2017-06-20
Computationally efficient algorithms may be applied for fast dictionary learning solving the convolutional sparse coding problem in the Fourier domain. More specifically, efficient convolutional sparse coding may be derived within an alternating direction method of multipliers (ADMM) framework that utilizes fast Fourier transforms (FFT) to solve the main linear system in the frequency domain. Such algorithms may enable a significant reduction in computational cost over conventional approaches by implementing a linear solver for the most critical and computationally expensive component of the conventional iterative algorithm. The theoretical computational cost of the algorithm may be reduced from O(M³N) to O(MN log N), where N is the dimensionality of the data and M is the number of elements in the dictionary. This significant improvement in efficiency may greatly increase the range of problems that can practically be addressed via convolutional sparse representations.
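The frequency-domain linear solve at the heart of this approach can be sketched for the simplest case of a single dictionary filter under circular convolution; the variable names and the quadratic x-update below are illustrative, not taken from any particular library.

```python
# Sketch of the FFT-domain solve for the ADMM x-update in convolutional
# sparse coding (single filter, circular convolution):
#     argmin_x 0.5*||d * x - b||^2 + 0.5*rho*||x - v||^2
# The system diagonalizes under the DFT, so each iteration costs O(N log N).
import numpy as np

def x_update_fft(d, b, v, rho):
    D = np.fft.fft(d)
    rhs = np.conj(D) * np.fft.fft(b) + rho * np.fft.fft(v)
    X = rhs / (np.abs(D) ** 2 + rho)      # per-frequency scalar division
    return np.fft.ifft(X).real

N = 64
rng = np.random.default_rng(0)
d = np.zeros(N); d[:5] = rng.normal(size=5)   # short filter, zero-padded
b = rng.normal(size=N)
v = rng.normal(size=N)                        # z - u term from ADMM
x = x_update_fft(d, b, v, rho=1.0)
```

In the multi-filter case the per-frequency division becomes a small linear solve coupling the M filter coefficients, which is where the O(M³N) vs O(MN log N) distinction in the abstract arises.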
Accelerated maximum likelihood parameter estimation for stochastic biochemical systems
Directory of Open Access Journals (Sweden)
Daigle Bernie J
2012-05-01
Background: A prerequisite for the mechanistic simulation of a biochemical system is detailed knowledge of its kinetic parameters. Despite recent experimental advances, the estimation of unknown parameter values from observed data is still a bottleneck for obtaining accurate simulation results. Many methods exist for parameter estimation in deterministic biochemical systems; methods for discrete stochastic systems are less well developed. Given the probabilistic nature of stochastic biochemical models, a natural approach is to choose parameter values that maximize the probability of the observed data with respect to the unknown parameters, i.e., the maximum likelihood parameter estimates (MLEs). MLE computation for all but the simplest models requires the simulation of many system trajectories that are consistent with experimental data. For models with unknown parameters, this presents a computational challenge, as the generation of consistent trajectories can be an extremely rare occurrence. Results: We have developed Monte Carlo Expectation-Maximization with Modified Cross-Entropy Method (MCEM2): an accelerated method for calculating MLEs that combines advances in rare event simulation with a computationally efficient version of the Monte Carlo expectation-maximization (MCEM) algorithm. Our method requires no prior knowledge regarding parameter values, and it automatically provides a multivariate parameter uncertainty estimate. We applied the method to five stochastic systems of increasing complexity, progressing from an analytically tractable pure-birth model to a computationally demanding model of yeast polarization. Our results demonstrate that MCEM2 substantially accelerates MLE computation on all tested models when compared to a stand-alone version of MCEM. Additionally, we show how our method identifies parameter values for certain classes of models more accurately than two recently proposed computationally efficient methods.
International Nuclear Information System (INIS)
Gholampour, Maysam; Ameri, Mehran
2016-01-01
Highlights: • A Photovoltaic/Thermal flat transpired collector was theoretically and experimentally studied. • Performance of PV/Thermal flat transpired plate was evaluated using equivalent thermal, first, and second law efficiencies. • According to the actual exergy gain, a critical radiation level was defined and its effect was investigated. • As an appropriate tool, equivalent thermal efficiency was used to find optimum suction velocity and PV coverage percent. - Abstract: PV/Thermal flat transpired plate is a kind of air-based hybrid Photovoltaic/Thermal (PV/T) system concurrently producing both thermal and electrical energy. In order to develop a predictive model, validate, and investigate the PV/Thermal flat transpired plate capabilities, a prototype was fabricated and tested under outdoor conditions at Shahid Bahonar University of Kerman in Kerman, Iran. In order to develop a mathematical model, correlations for Nusselt numbers for PV panel and transpired plate were derived using CFD technique. Good agreement was obtained between measured and simulated values, with the maximum relative root mean square percent deviation (RMSE) being 9.13% and minimum correlation coefficient (R-squared) 0.92. Based on the critical radiation level defined in terms of the actual exergy gain, it was found that with proper fan and MPPT devices, there is no concern about the critical radiation level. To provide a guideline for designers, using equivalent thermal efficiency as an appropriate tool, optimum values for suction velocity and PV coverage percent under different conditions were obtained.
International Nuclear Information System (INIS)
Chintala, Venkateswarlu; Subramanian, K.A.
2014-01-01
This work is aimed at studying the maximum available work and irreversibility (mixing, combustion, unburned fuel, and friction) of a dual-fuel diesel engine (H₂ (hydrogen)-diesel) using exergy analysis. The maximum available work increased with H₂ addition due to reduced combustion irreversibility because of less entropy generation. The irreversibility of unburned fuel with H₂ also decreased due to engine combustion at high temperature, whereas H₂ had no effect on mixing and friction irreversibility. The maximum available work of the diesel engine at rated load increased from 29% in conventional base mode (without H₂) to 31.7% in dual-fuel mode (18% H₂ energy share), whereas total irreversibility of the engine decreased drastically from 41.2% to 39.3%. The energy efficiency of the engine with H₂ increased about 10% with a 36% reduction in CO₂ emission. The developed methodology could also be applied to find the effect and scope of different technologies, including exhaust gas recirculation and turbocharging, on the maximum available work and energy efficiency of diesel engines. - Highlights: • Energy efficiency of diesel engine increases with hydrogen under dual-fuel mode. • Maximum available work of the engine increases significantly with hydrogen. • Combustion and unburned fuel irreversibility decrease with hydrogen. • No significant effect of hydrogen on mixing and friction irreversibility. • Reduction in CO₂ emission along with HC, CO and smoke emissions
Dermoune, Azzouz; Simon, Eric Pierre
2017-12-01
This paper is a theoretical analysis of the maximum likelihood (ML) channel estimator for orthogonal frequency-division multiplexing (OFDM) systems in the presence of unknown interference. The following theoretical results are presented. Firstly, the uniqueness of the ML solution for practical applications, i.e., when thermal noise is present, is analytically demonstrated when the number of transmitted OFDM symbols is strictly greater than one. The ML solution is then derived from the iterative conditional ML (CML) algorithm. Secondly, it is shown that the channel estimate can be described as an algebraic function whose inputs are the initial value and the means and variances of the received samples. Thirdly, it is theoretically demonstrated that the channel estimator is not biased. The second and the third results are obtained by employing oblique projection theory. Furthermore, these results are confirmed by numerical results.
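For orientation, the basic setting can be sketched in the simplest case: with known pilots and Gaussian noise only (no structured interference, unlike the paper), the per-subcarrier ML channel estimate reduces to averaging least-squares estimates over the transmitted OFDM symbols. Sizes, modulation, and noise level below are illustrative assumptions.

```python
# Hedged sketch: ML channel estimation per subcarrier under AWGN with known
# pilots, as a baseline for the interference-aware estimator in the paper.
# All dimensions and values are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_sym, n_sub = 8, 64                            # OFDM symbols x subcarriers
H = rng.normal(size=n_sub) + 1j * rng.normal(size=n_sub)   # true channel
X = np.exp(1j * (np.pi / 2) * rng.integers(0, 4, (n_sym, n_sub)))  # QPSK pilots
noise = 0.05 * (rng.normal(size=(n_sym, n_sub))
                + 1j * rng.normal(size=(n_sym, n_sub)))
Y = X * H + noise                               # received frequency-domain samples

# Under AWGN the ML estimate is the average of per-symbol LS estimates Y/X.
H_hat = np.mean(Y / X, axis=0)
```

Averaging over several symbols is what makes the estimate well behaved, which is consistent with the paper's uniqueness condition requiring strictly more than one transmitted OFDM symbol.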
Potential role of motion for enhancing maximum output energy of triboelectric nanogenerator
Byun, Kyung-Eun; Lee, Min-Hyun; Cho, Yeonchoo; Nam, Seung-Geol; Shin, Hyeon-Jin; Park, Seongjun
2017-07-01
Although the triboelectric nanogenerator (TENG) has been explored as one of the possible candidates for the auxiliary power source of portable and wearable devices, the output energy of a TENG is still insufficient to charge such devices with daily motion. Moreover, the fundamental aspects of the maximum possible energy of a TENG related to human motion are not understood systematically. Here, we confirmed the possibility of charging commercialized portable and wearable devices such as smart phones and smart watches by utilizing the mechanical energy generated by human motion. We confirmed by theoretical derivation that the maximum possible energy is related to specific form factors of a TENG. Furthermore, we experimentally demonstrated the effect of human motion in terms of kinetic energy and impulse by varying velocity and elasticity, and clarified how to improve the maximum possible energy of a TENG. This study gives insight into the design of a TENG to obtain a large amount of energy in a limited space.
Theoretical chemistry advances and perspectives
Eyring, Henry
1980-01-01
Theoretical Chemistry: Advances and Perspectives, Volume 5 covers articles concerning all aspects of theoretical chemistry. The book discusses the mean spherical approximation for simple electrolyte solutions; the representation of lattice sums as Mellin-transformed products of theta functions; and the evaluation of two-dimensional lattice sums by number theoretic means. The text also describes an application of contour integration; a lattice model of a quantum fluid; as well as the computational aspects of chemical equilibrium in complex systems. Chemists and physicists will find the book useful.
Theoretical Study of the Compound Parabolic Trough Solar Collector
Directory of Open Access Journals (Sweden)
Dr. Subhi S. Mahammed
2012-06-01
Theoretical design of a compound parabolic trough solar collector (CPC) without tracking is presented in this work. The thermal efficiency is obtained using a FORTRAN 90 program. The thermal efficiency is between 60% and 67% at a mass flow rate between 0.02 and 0.03 kg/s and a concentration ratio of 3.8, with no need for a tracking system. The total and diffuse radiation is calculated for Tikrit city using theoretical equations. Good agreement is found between the present work and previous work.
High-Efficiency Quantum Interrogation Measurements via the Quantum Zeno Effect
International Nuclear Information System (INIS)
Kwiat, P. G.; White, A. G.; Mitchell, J. R.; Nairz, O.; Weihs, G.; Weinfurter, H.; Zeilinger, A.
1999-01-01
The phenomenon of quantum interrogation allows one to optically detect the presence of an absorbing object, without the measuring light interacting with it. In an application of the quantum Zeno effect, the object inhibits the otherwise coherent evolution of the light, such that the probability that an interrogating photon is absorbed can in principle be arbitrarily small. We have implemented this technique, achieving efficiencies of up to 73%, consequently exceeding the 50% theoretical maximum of the original "interaction-free" measurement proposal. We have also predicted and experimentally verified a previously unsuspected dependence on loss. (c) 1999 The American Physical Society
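As an illustrative numeric sketch (not from the paper), consider the ideal N-cycle Zeno interrogation scheme: each cycle rotates the photon polarization by π/(2N), and an opaque object projects out the rotated component. The photon then survives all N cycles with probability cos^(2N)(π/(2N)), which exceeds the 50% bound of the original interaction-free scheme for N ≥ 4 and tends to 1 as N grows:

```python
import math

def zeno_survival(n_cycles):
    """Probability that the interrogating photon is NOT absorbed in an
    ideal N-cycle quantum Zeno interrogation of an opaque object: each
    cycle rotates the polarization by pi/(2N), and the small rotated
    component is projected out (absorbed) by the object."""
    theta = math.pi / (2 * n_cycles)
    return math.cos(theta) ** (2 * n_cycles)

# Efficiency grows with the number of cycles and tends to 1.
for n in (2, 4, 10, 100):
    print(n, zeno_survival(n))
```

For large N the survival probability behaves like 1 − π²/(4N), which is why arbitrarily high interrogation efficiency is possible in principle.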
Maximum power point tracking for photovoltaic solar pump based on ANFIS tuning system
Directory of Open Access Journals (Sweden)
S. Shabaan
2018-05-01
Full Text Available Solar photovoltaic (PV) systems are a clean and naturally replenished energy source. PV panels have a unique point which represents the maximum available power, and this point depends on environmental conditions such as temperature and irradiance. A maximum power point tracking (MPPT) scheme is therefore necessary for maximum efficiency. In this paper, a study of MPPT for a PV water pumping system based on an adaptive neuro-fuzzy inference system (ANFIS) is discussed. A comparison between the performance of the system with and without MPPT is carried out under varying irradiation and temperature conditions. The ANFIS-based controller shows a fast response with high efficiency at all irradiance and temperature levels, making it a powerful technique for non-linear systems such as PV modules. Keywords: MPPT, ANFIS, Boost converter, PMDC pump
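The ANFIS controller itself is too involved for a short sketch, but the MPPT task it performs can be illustrated with the classic perturb-and-observe baseline on a toy PV curve. Everything below (the curve shape, `i_sc`, `v_oc`, the roll-off constant `a`, the step size) is hypothetical, for illustration only:

```python
import math

def pv_power(v, i_sc=8.0, v_oc=36.0, a=2.0):
    """Toy PV panel power curve P(V) (hypothetical parameters):
    the current falls off exponentially near the open-circuit voltage."""
    if v <= 0.0 or v >= v_oc:
        return 0.0
    current = i_sc * (1.0 - math.exp((v - v_oc) / a))
    return v * current

def perturb_and_observe(p_of_v, v=18.0, step=0.2, iters=300):
    """Classic P&O MPPT: keep stepping the operating voltage in the same
    direction while power increases, reverse when power drops."""
    direction = 1.0
    p = p_of_v(v)
    for _ in range(iters):
        v_next = v + direction * step
        p_next = p_of_v(v_next)
        if p_next < p:
            direction = -direction  # overshot the peak: reverse
        v, p = v_next, p_next
    return v, p

v_mpp, p_mpp = perturb_and_observe(pv_power)
```

The loop ends up oscillating around the maximum power point with an amplitude set by `step`; reducing that steady-state oscillation under changing irradiance is exactly what more adaptive controllers such as the ANFIS one aim at.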
A maximum power point tracking scheme for a 1 kW stand-alone ...
African Journals Online (AJOL)
A maximum power point tracking scheme for a 1 kW stand-alone solar energy based power supply. ... Nigerian Journal of Technology ... A method for efficiently maximizing the output power of a solar panel supplying a load or battery bus under ...
Maximum likelihood estimation for Cox's regression model under nested case-control sampling
DEFF Research Database (Denmark)
Scheike, Thomas; Juul, Anders
2004-01-01
Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazard...
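For context (not the estimator of the paper above): the standard partial likelihood for nested case-control data compares each case's covariate with those of its sampled risk set. A minimal one-covariate sketch with made-up data, maximizing the log partial likelihood by grid search over the log hazard ratio:

```python
import math

# Hypothetical nested case-control data: for each failure time, the
# case's covariate value and the covariates of its sampled controls.
sampled_sets = [
    (1.5, [0.2, 0.1]),
    (1.0, [0.5, -0.3]),
    (0.8, [1.2, 0.4]),
]

def log_partial_likelihood(beta):
    """Sum over cases of beta*x_case - log(sum over the sampled risk
    set, case included, of exp(beta*x))."""
    total = 0.0
    for x_case, x_controls in sampled_sets:
        denom = sum(math.exp(beta * x) for x in [x_case] + x_controls)
        total += beta * x_case - math.log(denom)
    return total

# Crude grid-search MLE for beta on [-3, 3].
betas = [i / 100 for i in range(-300, 301)]
beta_hat = max(betas, key=log_partial_likelihood)
```

With these toy numbers the cases mostly carry larger covariates than their matched controls, so the fitted log hazard ratio comes out positive.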
Maximum entropy production rate in quantum thermodynamics
Energy Technology Data Exchange (ETDEWEB)
Beretta, Gian Paolo, E-mail: beretta@ing.unibs.i [Universita di Brescia, via Branze 38, 25123 Brescia (Italy)
2010-06-01
In the framework of the recent quest for well-behaved nonlinear extensions of the traditional Schroedinger-von Neumann unitary dynamics that could provide fundamental explanations of recent experimental evidence of loss of quantum coherence at the microscopic level, a recent paper [Gheorghiu-Svirschevski 2001 Phys. Rev. A 63 054102] re-proposes the nonlinear equation of motion proposed by the present author [see Beretta G P 1987 Found. Phys. 17 365 and references therein] for quantum (thermo)dynamics of a single isolated indivisible constituent system, such as a single particle, qubit, qudit, spin or atomic system, or a Bose-Einstein or Fermi-Dirac field. As already proved, such nonlinear dynamics entails a fundamental unifying microscopic proof and extension of Onsager's reciprocity and Callen's fluctuation-dissipation relations to all nonequilibrium states, close to and far from thermodynamic equilibrium. In this paper we propose a brief but self-contained review of the main results already proved, including the explicit geometrical construction of the equation of motion from the steepest-entropy-ascent ansatz and its exact mathematical and conceptual equivalence with the maximal-entropy-generation variational-principle formulation presented in Gheorghiu-Svirschevski S 2001 Phys. Rev. A 63 022105. Moreover, we show how it can be extended to the case of a composite system to obtain the general form of the equation of motion, consistent with the demanding requirements of strong separability and of compatibility with general thermodynamics principles. The irreversible term in the equation of motion describes the spontaneous attraction of the state operator in the direction of steepest entropy ascent, thus implementing the maximum entropy production principle in quantum theory. The time rate at which the path of steepest entropy ascent is followed has so far been left unspecified. As a step towards the identification of such a rate, here we propose a possible
Determination of the maximum-depth to potential field sources by a maximum structural index method
Fedi, M.; Florio, G.
2013-01-01
A simple and fast determination of the limiting depth to the sources may represent a significant help in data interpretation. To this end we explore the possibility of determining those source parameters shared by all the classes of models fitting the data. One approach is to determine the maximum depth-to-source compatible with the measured data, by using for example the well-known Bott-Smith rules. These rules involve only the knowledge of the field and its horizontal gradient maxima, and are independent of the density contrast. Thanks to the direct relationship between structural index and depth to sources, we work out a simple and fast strategy to obtain the maximum depth using semi-automated methods, such as Euler deconvolution or the depth-from-extreme-points (DEXP) method. The proposed method consists of estimating the maximum depth as the one obtained for the highest allowable value of the structural index (Nmax). Nmax may be easily determined, since it depends only on the dimensionality of the problem (2D/3D) and on the nature of the analyzed field (e.g., gravity field or magnetic field). We tested our approach on synthetic models against the results obtained by the classical Bott-Smith formulas, and the results are in fact very similar, confirming the validity of this method. However, while the Bott-Smith formulas are restricted to the gravity field only, our method is also applicable to the magnetic field and to any derivative of the gravity and magnetic fields. Our method yields a useful criterion to assess the source model based on the (∂f/∂x)max/fmax ratio. The usefulness of the method in real cases is demonstrated for a salt wall in the Mississippi basin, where the estimation of the maximum depth agrees with the seismic information.
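Not the authors' implementation, but the core machinery they build on can be sketched. Euler deconvolution recovers source position and depth from a field profile and an assumed structural index N via Euler's homogeneity equation, (x − x0)·∂f/∂x + (z − z0)·∂f/∂z = −N·f; the maximum-depth strategy then evaluates this at the largest admissible N. A noise-free synthetic 1D example with a point-like (N = 2) source at made-up coordinates:

```python
x0_true, z0_true = 2.0, 3.0  # hypothetical source position and depth
xs = [-10.0 + 0.25 * k for k in range(81)]  # observation profile at z = 0

def field_terms(x):
    """Field of a unit point source, and its derivatives, at z = 0."""
    r2 = (x - x0_true) ** 2 + z0_true ** 2
    f = z0_true / r2 ** 1.5
    fx = -3.0 * z0_true * (x - x0_true) / r2 ** 2.5
    fz = -1.0 / r2 ** 1.5 + 3.0 * z0_true ** 2 / r2 ** 2.5
    return f, fx, fz

# Euler's homogeneity equation at z = 0, rearranged as a linear
# least-squares system in the unknowns (x0, z0):
#     x0*fx + z0*fz = x*fx + N*f
N = 2  # structural index of a point source
s11 = s12 = s22 = t1 = t2 = 0.0
for x in xs:
    f, fx, fz = field_terms(x)
    b = x * fx + N * f
    s11 += fx * fx; s12 += fx * fz; s22 += fz * fz
    t1 += fx * b;   t2 += fz * b
det = s11 * s22 - s12 * s12          # solve the 2x2 normal equations
x0_est = (s22 * t1 - s12 * t2) / det
z0_est = (s11 * t2 - s12 * t1) / det
```

Assuming a larger structural index in the same system yields a larger depth estimate, which is the relationship the maximum-structural-index strategy exploits.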
Reconstructing phylogenetic networks using maximum parsimony.
Nakhleh, Luay; Jin, Guohua; Zhao, Fengmei; Mellor-Crummey, John
2005-01-01
Phylogenies - the evolutionary histories of groups of organisms - are one of the most widely used tools throughout the life sciences, as well as objects of research within systematics, evolutionary biology, epidemiology, etc. Almost every tool devised to date to reconstruct phylogenies produces trees; yet it is widely understood and accepted that trees oversimplify the evolutionary histories of many groups of organisms, most prominently bacteria (because of horizontal gene transfer) and plants (because of hybrid speciation). Various methods and criteria have been introduced for phylogenetic tree reconstruction. Parsimony is one of the most widely used and studied criteria, and various accurate and efficient heuristics for reconstructing trees based on parsimony have been devised. Jotun Hein suggested a straightforward extension of the parsimony criterion to phylogenetic networks. In this paper we formalize this concept, and provide the first experimental study of the quality of parsimony as a criterion for constructing and evaluating phylogenetic networks. Our results show that, when extended to phylogenetic networks, the parsimony criterion produces promising results. In a great majority of the cases in our experiments, the parsimony criterion accurately predicts the numbers and placements of non-tree events.
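For context (the network extension studied in the paper is more involved), the tree case that Hein's criterion generalizes is small-parsimony scoring, computable with Fitch's algorithm. A minimal sketch for one character on a rooted binary tree, with a made-up topology:

```python
def fitch_score(children, leaf_state, root):
    """Fitch's small-parsimony algorithm: minimum number of state
    changes needed to explain the observed leaf states on a rooted
    binary tree, for a single character."""
    changes = 0

    def state_set(node):
        nonlocal changes
        if node in leaf_state:
            return {leaf_state[node]}
        left, right = children[node]
        a, b = state_set(left), state_set(right)
        if a & b:
            return a & b   # intersection non-empty: no change forced here
        changes += 1       # disjoint sets: one substitution is needed
        return a | b

    state_set(root)
    return changes

# Toy 4-leaf tree ((A,B),(C,D)):
tree = {"root": ("u", "v"), "u": ("A", "B"), "v": ("C", "D")}
score = fitch_score(tree, {"A": "G", "B": "G", "C": "T", "D": "T"}, "root")
```

Here the two G leaves and the two T leaves each sit under one internal node, so a single substitution on the root edge explains the data and `score` is 1.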
Directory of Open Access Journals (Sweden)
Y. Labbi
2015-08-01
Full Text Available Photovoltaic electricity is seen as an important source of renewable energy. The photovoltaic array is an unstable source of power, since the peak power point depends on the temperature and the irradiation level. Maximum peak power point tracking is therefore necessary for maximum efficiency. In this work, Particle Swarm Optimization (PSO) is proposed as a maximum power point tracker for a photovoltaic panel, used to generate the optimal MPP such that the solar panel's maximum power is produced under different operating conditions. A photovoltaic system including a solar panel and a PSO MPP tracker is modelled and simulated; the simulations show the effectiveness of PSO in drawing maximum energy from the panel and its fast response to changes in working conditions.
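As an illustrative sketch (not the authors' code), a bare-bones PSO can track the maximum of a toy PV power curve. The curve shape and all tuning constants (`w`, `c1`, `c2`, swarm size) below are hypothetical:

```python
import math
import random

def pv_power(v, i_sc=8.0, v_oc=36.0, a=2.0):
    """Toy PV power curve P(V) with an exponential current roll-off
    near the open-circuit voltage (hypothetical parameters)."""
    if v <= 0.0 or v >= v_oc:
        return 0.0
    return v * i_sc * (1.0 - math.exp((v - v_oc) / a))

def pso_maximize(f, lo, hi, n=15, iters=60, seed=1):
    """Minimal particle swarm optimizer for a 1-D objective."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5            # inertia and acceleration weights
    xs = [rng.uniform(lo, hi) for _ in range(n)]
    vel = [0.0] * n
    pbest = xs[:]
    pbest_f = [f(x) for x in xs]
    g = max(range(n), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g], pbest_f[g]
    for _ in range(iters):
        for i in range(n):
            r1, r2 = rng.random(), rng.random()
            vel[i] = (w * vel[i] + c1 * r1 * (pbest[i] - xs[i])
                      + c2 * r2 * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vel[i]))
            fx = f(xs[i])
            if fx > pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i], fx
                if fx > gbest_f:
                    gbest, gbest_f = xs[i], fx
    return gbest, gbest_f

v_mpp, p_mpp = pso_maximize(pv_power, 0.0, 36.0)
```

Unlike hill-climbing trackers, the swarm samples the whole voltage range, which is why PSO-based MPPT is often favoured for curves with multiple local peaks (e.g. under partial shading).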
Theoretical tools for B physics
International Nuclear Information System (INIS)
Mannel, T.
2006-01-01
In this talk I try to give an overview of the theoretical tools used to compute observables in B physics. The main focus is on developments in the 1/m expansion in semileptonic and nonleptonic decays. (author)
Theoretical approaches to elections defining
Natalya V. Lebedeva
2011-01-01
Theoretical approaches to defining elections develop the nature, essence and content of elections, and help to determine their place and role as one of the major institutions of national law in a democratic system.
Theoretical approaches to elections defining
Directory of Open Access Journals (Sweden)
Natalya V. Lebedeva
2011-01-01
Full Text Available Theoretical approaches to defining elections develop the nature, essence and content of elections, and help to determine their place and role as one of the major institutions of national law in a democratic system.
Theoretical Linguistics And Multilingualism Research
African Journals Online (AJOL)
KATEVG
This paper tries to construct a bridge between the concerns of theoretical ... released the legendary song with the singular bridge over forty years ago): ... Another set of cases concerns the frozen forms pass and fail, which occur without any.
Theoretical Principles of Distance Education.
Keegan, Desmond, Ed.
This book contains the following papers examining the didactic, academic, analytic, philosophical, and technological underpinnings of distance education: "Introduction"; "Quality and Access in Distance Education: Theoretical Considerations" (D. Randy Garrison); "Theory of Transactional Distance" (Michael G. Moore);…
Comparison between theoretical predictions and tracking
Energy Technology Data Exchange (ETDEWEB)
Ruggiero, A.G.
1985-01-01
The beam-beam interaction in a proton-antiproton collider has been an outstanding issue for a long time. Several theoretical predictions have been made in the past, ranging from the appearance of single beam-beam driven resonances to the onset of stochasticity and Arnold diffusion and the presence of chaotic trajectories. All these effects would place a limit on the maximum strength of the beam-beam interaction, the so-called beam-beam tune-shift, and speculative values have been offered ranging from as low as 0.0005 to as large as a fraction of a unit. The lower limit could arise in a more complicated situation where the external focussing forces which keep the two beams in the same storage ring are also modulated in time. These theoretical predictions have been compared with extensive computer tracking in which the motion of the particles is followed turn after turn over very long periods of time. Though it is indeed possible to observe the formation of several resonances, the onset of connected stochasticity seems to occur at too large a beam-beam tune-shift to be of any practical relevance. Moreover, no Arnold diffusion has been observed to have any practical significance. Chaotic trajectories have been found to embed the phase space in disconnected regions of appreciable extension. They increase considerably in number when time modulation of the external focussing forces is added. 15 refs., 18 figs.
Comparison between theoretical predictions and tracking
International Nuclear Information System (INIS)
Ruggiero, A.G.
1985-01-01
The beam-beam interaction in a proton-antiproton collider has been an outstanding issue for a long time. Several theoretical predictions have been made in the past, ranging from the appearance of single beam-beam driven resonances to the onset of stochasticity and Arnold diffusion and the presence of chaotic trajectories. All these effects would place a limit on the maximum strength of the beam-beam interaction, the so-called beam-beam tune-shift, and speculative values have been offered ranging from as low as 0.0005 to as large as a fraction of a unit. The lower limit could arise in a more complicated situation where the external focussing forces which keep the two beams in the same storage ring are also modulated in time. These theoretical predictions have been compared with extensive computer tracking in which the motion of the particles is followed turn after turn over very long periods of time. Though it is indeed possible to observe the formation of several resonances, the onset of connected stochasticity seems to occur at too large a beam-beam tune-shift to be of any practical relevance. Moreover, no Arnold diffusion has been observed to have any practical significance. Chaotic trajectories have been found to embed the phase space in disconnected regions of appreciable extension. They increase considerably in number when time modulation of the external focussing forces is added. 15 refs., 18 figs.
3D Navier-Stokes simulations of a rotor designed for maximum aerodynamic efficiency
DEFF Research Database (Denmark)
Johansen, Jeppe; Madsen Aagaard, Helge; Gaunaa, Mac
2007-01-01
... a constant load was assumed. The rotor design was obtained using an Actuator Disc model and was subsequently verified using both a free wake Lifting Line method and a full 3D Navier-Stokes solver. Excellent agreement was obtained using the three models. The global mechanical power coefficient, CP, reached a value of slightly above 0.51, while the global thrust coefficient, CT, was 0.87. The local power coefficient, Cp, increased to slightly above the Betz limit on the inner part of the rotor, and the local thrust coefficient, Ct, increased to a value above 1.1. This agrees well with the theory of de...
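The Betz limit referenced above follows from one-dimensional actuator-disc (momentum) theory: the power coefficient is CP(a) = 4a(1 − a)² in the axial induction factor a, maximized at a = 1/3 with CP = 16/27 ≈ 0.593. A quick numeric check of that textbook result (illustrative only, not the rotor model used in the paper):

```python
def power_coefficient(a):
    """Actuator-disc power coefficient as a function of the axial
    induction factor a (1-D momentum theory): CP = 4*a*(1-a)^2."""
    return 4.0 * a * (1.0 - a) ** 2

# Scan the physically meaningful range 0 < a < 0.5.
grid = [k / 10000 for k in range(1, 5000)]
a_best = max(grid, key=power_coefficient)
cp_betz = power_coefficient(a_best)  # close to 16/27
```

This is a limit on the *global* CP of an ideal non-rotating actuator disc; a *local* Cp slightly above it on the inner part of the rotor, as reported above, is not a contradiction of the global bound.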
Energy Technology Data Exchange (ETDEWEB)
Fischle, G.; Stoll, U.; Hinrichs, W.
2002-05-01
Sensotronic Brake Control (SBC) celebrated its world premiere when it was introduced into standard production along with the new SL in October 2001. This innovative brake system is also fitted as standard in the new E-Class. The design of the system components is identical to those used in the SL-Class. The software control parameters have been adapted to the conditions in the new saloon. (orig.)
National Security Strategy and the Munitions' Paradox: Self-Sufficiency or Maximum Efficiency
National Research Council Canada - National Science Library
McChesney, Michael
1998-01-01
... that the United States military strategy may not be credible to likely regional aggressors. Conversely, DoD acquisition leadership believes industry consolidation should continue and the munitions base should be expanded to include US allies...