WorldWideScience

Sample records for maximum efficiency implications

  1. Efficient heuristics for maximum common substructure search.

    Science.gov (United States)

    Englert, Péter; Kovács, Péter

    2015-05-26

    Maximum common substructure search is a computationally hard optimization problem with diverse applications in the field of cheminformatics, including similarity search, lead optimization, molecule alignment, and clustering. Most of these applications have strict constraints on running time, so heuristic methods are often preferred. However, the development of an algorithm that is both fast enough and accurate enough for most practical purposes is still a challenge. Moreover, in some applications, the quality of a common substructure depends not only on its size but also on various topological features of the one-to-one atom correspondence it defines. Two state-of-the-art heuristic algorithms for finding maximum common substructures have been implemented at ChemAxon Ltd., and effective heuristics have been developed to improve both their efficiency and the relevance of the atom mappings they provide. The implementations have been thoroughly evaluated and compared with existing solutions (KCOMBU and Indigo). The heuristics have been found to greatly improve the performance and applicability of the algorithms. The purpose of this paper is to introduce the applied methods and present the experimental results.

  2. Efficiency of autonomous soft nanomachines at maximum power.

    Science.gov (United States)

    Seifert, Udo

    2011-01-14

    We consider nanosized artificial or biological machines working in steady state enforced by imposing nonequilibrium concentrations of solutes or by applying external forces, torques, or electric fields. For unicyclic and strongly coupled multicyclic machines, efficiency at maximum power is not bounded by the linear response value 1/2. For strong driving, it can even approach the thermodynamic limit 1. Quite generally, such machines fall into three different classes characterized, respectively, as "strong and efficient," "strong and inefficient," and "balanced." For weakly coupled multicyclic machines, efficiency at maximum power has lost any universality even in the linear response regime.

  3. Size dependence of efficiency at maximum power of heat engine

    KAUST Repository

    Izumida, Y.; Ito, N.

    2013-01-01

    We perform a molecular dynamics computer simulation of a heat engine model to study how the engine size difference affects its performance. Upon tactically increasing the size of the model anisotropically, we determine that there exists an optimum size at which the model attains the maximum power for the shortest working period. This optimum size lies between the ballistic and the diffusive heat transport regions. We also study the size dependence of the efficiency at the maximum power. Interestingly, we find that the efficiency at the maximum power around the optimum size attains a value that has been proposed as a universal upper bound, and it even begins to exceed the bound as the size further increases. We explain this behavior of the efficiency at maximum power by using a linear response theory for the heat engine operating under a finite working period, which naturally extends the low-dissipation Carnot cycle model [M. Esposito, R. Kawai, K. Lindenberg, C. Van den Broeck, Phys. Rev. Lett. 105, 150603 (2010)]. The theory also shows that the efficiency at the maximum power under an extreme condition may reach the Carnot efficiency in principle. © EDP Sciences / Società Italiana di Fisica / Springer-Verlag 2013.
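
    For context, the bound referred to here can be stated explicitly; the expressions below are the standard low-dissipation results from the Esposito et al. paper cited in the abstract, supplied for reference rather than quoted from the record. For a low-dissipation Carnot engine, the efficiency at maximum power η* satisfies

    $$ \frac{\eta_C}{2} \;\le\; \eta^{*} \;\le\; \frac{\eta_C}{2-\eta_C}, \qquad \eta_C = 1 - \frac{T_c}{T_h}, $$

    where η_C is the Carnot efficiency and the right-hand limit is the proposed universal upper bound that the simulated engine approaches near its optimum size.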

  4. Size dependence of efficiency at maximum power of heat engine

    KAUST Repository

    Izumida, Y.

    2013-10-01

    We perform a molecular dynamics computer simulation of a heat engine model to study how the engine size difference affects its performance. Upon tactically increasing the size of the model anisotropically, we determine that there exists an optimum size at which the model attains the maximum power for the shortest working period. This optimum size lies between the ballistic and the diffusive heat transport regions. We also study the size dependence of the efficiency at the maximum power. Interestingly, we find that the efficiency at the maximum power around the optimum size attains a value that has been proposed as a universal upper bound, and it even begins to exceed the bound as the size further increases. We explain this behavior of the efficiency at maximum power by using a linear response theory for the heat engine operating under a finite working period, which naturally extends the low-dissipation Carnot cycle model [M. Esposito, R. Kawai, K. Lindenberg, C. Van den Broeck, Phys. Rev. Lett. 105, 150603 (2010)]. The theory also shows that the efficiency at the maximum power under an extreme condition may reach the Carnot efficiency in principle. © EDP Sciences / Società Italiana di Fisica / Springer-Verlag 2013.

  5. Design of a wind turbine rotor for maximum aerodynamic efficiency

    DEFF Research Database (Denmark)

    Johansen, Jeppe; Aagaard Madsen, Helge; Gaunaa, Mac

    2009-01-01

    The design of a three-bladed wind turbine rotor is described, where the main focus has been the highest possible mechanical power coefficient, CP, at a single operational condition. Structural, as well as off-design, issues are not considered, leading to a purely theoretical design for investigating...... maximum aerodynamic efficiency. The rotor is designed assuming constant induction for most of the blade span, but near the tip region, a constant load is assumed instead. The rotor design is obtained using an actuator disc model, and is subsequently verified using both a free-wake lifting line method...

  6. An Efficient Algorithm for the Maximum Distance Problem

    Directory of Open Access Journals (Sweden)

    Gabrielle Assunta Grün

    2001-12-01

    Full Text Available Efficient algorithms for temporal reasoning are essential in knowledge-based systems. This is central in many areas of Artificial Intelligence including scheduling, planning, plan recognition, and natural language understanding. As such, scalability is a crucial consideration in temporal reasoning. While reasoning in the interval algebra is NP-complete, reasoning in the less expressive point algebra is tractable. In this paper, we explore an extension to the work of Gerevini and Schubert which is based on the point algebra. In their seminal framework, temporal relations are expressed as a directed acyclic graph partitioned into chains and supported by a metagraph data structure, where time points or events are represented by vertices, and directed edges are labelled with < or ≤. They are interested in fast algorithms for determining the strongest relation between two events. They begin by developing fast algorithms for the case where all points lie on a chain. In this paper, we are interested in a generalization of this, namely we consider the problem of finding the maximum "distance" between two vertices in a chain; this problem arises in real world applications such as in process control and crew scheduling. We describe an O(n)-time preprocessing algorithm for the maximum distance problem on chains. It allows queries for the maximum number of < edges between two vertices to be answered in O(1) time. This matches the performance of the algorithm of Gerevini and Schubert for determining the strongest relation holding between two vertices in a chain.
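
    A minimal sketch of the kind of chain query the abstract describes: with a prefix count over the '<' labels, preprocessing takes O(n) time and each maximum-distance query between two vertices on the chain is answered in O(1). This is an illustrative reconstruction, not the authors' metagraph implementation; the names are hypothetical.

    ```python
    def preprocess(edge_labels):
        """edge_labels[i] is '<' or '<=' for the edge between chain vertices i and i+1.
        Returns prefix[i] = number of '<' edges among the first i edges (O(n) time)."""
        prefix = [0]
        for label in edge_labels:
            prefix.append(prefix[-1] + (1 if label == '<' else 0))
        return prefix

    def max_distance(prefix, u, v):
        """Maximum number of '<' edges on the chain path between vertices u and v, in O(1)."""
        if u > v:
            u, v = v, u
        return prefix[v] - prefix[u]

    # Example chain: v0 < v1 <= v2 < v3
    prefix = preprocess(['<', '<=', '<'])
    print(max_distance(prefix, 0, 3))  # -> 2
    ```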

  7. Emf, maximum power and efficiency of fuel cells

    International Nuclear Information System (INIS)

    Gaggioli, R.A.; Dunbar, W.R.

    1990-01-01

    This paper discusses the ideal voltage of steady-flow fuel cells usually expressed by Emf = -ΔG/nF, where ΔG is the Gibbs free energy of reaction for the oxidation of the fuel at the supposed temperature of operation of the cell. Furthermore, the ideal power of the cell is expressed as the product of the fuel flow rate with this emf, and the efficiency of a real fuel cell, sometimes called the Gibbs efficiency, is defined as the ratio of the actual power output to this ideal power. Such viewpoints are flawed in several respects. While it is true that if a cell operates isothermally the maximum conceivable work output is equal to the difference between the Gibbs free energy of the incoming reactants and that of the leaving products, nevertheless, even if the cell operates isothermally, the use of the conventional ΔG of reaction assumes that the products of reaction leave separately from one another (and from any unused fuel), and when ΔS of reaction is positive it assumes that a free heat source exists at the operating temperature, whereas if ΔS is negative it neglects the potential power which theoretically could be obtained from the heat released during oxidation. Moreover, the usual cell does not operate isothermally but (virtually) adiabatically.

  8. Maximum herd efficiency in meat production II. The influence of ...

    African Journals Online (AJOL)

    surface in terms of plots of total efficiency against percentages of mature body .... Dickerson (1978) shows that, for cattle and sheep, the energy .... protein metabolism. ... metric slope b is a scale-free parameter is convenient and .... Simulation.

  9. Maximum herd efficiency in meat production I. Optima for slaughter ...

    African Journals Online (AJOL)

    Profit rate for a meat production enterprise can be decomposedinto the unit price for meat and herd ... supply and demand, whereas breeding improvement is gen- ... Herd efficiency is total live mass for slaughter divided by costs .... tenance and above-maintenance components by Dickerson, and ..... Growth and productivity.

  10. Continuity and boundary conditions in thermodynamics: From Carnot's efficiency to efficiencies at maximum power

    Science.gov (United States)

    Ouerdane, H.; Apertet, Y.; Goupil, C.; Lecoeur, Ph.

    2015-07-01

    Classical equilibrium thermodynamics is a theory of principles, which was built from empirical knowledge and debates on the nature and the use of heat as a means to produce motive power. By the beginning of the 20th century, the principles of thermodynamics were summarized into the so-called four laws, which were, as it turns out, definitive negative answers to the doomed quests for perpetual motion machines. As a matter of fact, one result of Sadi Carnot's work was precisely that the heat-to-work conversion process is fundamentally limited; as such, it is considered as a first version of the second law of thermodynamics. Although it was derived from Carnot's unrealistic model, the upper bound on the thermodynamic conversion efficiency, known as the Carnot efficiency, became a paradigm as the next target after the failure of the perpetual motion ideal. In the 1950s, Jacques Yvon published a conference paper containing the necessary ingredients for a new class of models, and even a formula, not so different from that of Carnot's efficiency, which later would become the new efficiency reference. Yvon's first analysis of a model of engine producing power, connected to heat source and sink through heat exchangers, went fairly unnoticed for twenty years, until Frank Curzon and Boye Ahlborn published their pedagogical paper about the effect of finite heat transfer on output power limitation and their derivation of the efficiency at maximum power, now mostly known as the Curzon-Ahlborn (CA) efficiency. The notion of finite rate explicitly introduced time in thermodynamics, and its significance cannot be overlooked as shown by the wealth of works devoted to what is now known as finite-time thermodynamics since the end of the 1970s. The favorable comparison of the CA efficiency to actual values led many to consider it as a universal upper bound for real heat engines, but things are not so straightforward that a simple formula may account for a variety of situations.
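
    For reference, the two efficiencies contrasted in this record are, in standard notation (supplied here, not quoted from the abstract),

    $$ \eta_C = 1 - \frac{T_c}{T_h}, \qquad \eta_{CA} = 1 - \sqrt{\frac{T_c}{T_h}}, $$

    where T_h and T_c are the hot- and cold-reservoir temperatures, η_C is the Carnot efficiency, and η_CA is the Curzon-Ahlborn efficiency at maximum power.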

  11. Efficient algorithms for maximum likelihood decoding in the surface code

    Science.gov (United States)

    Bravyi, Sergey; Suchara, Martin; Vargo, Alexander

    2014-09-01

    We describe two implementations of the optimal error correction algorithm known as the maximum likelihood decoder (MLD) for the two-dimensional surface code with a noiseless syndrome extraction. First, we show how to implement MLD exactly in time O(n²), where n is the number of code qubits. Our implementation uses a reduction from MLD to simulation of matchgate quantum circuits. This reduction however requires a special noise model with independent bit-flip and phase-flip errors. Secondly, we show how to implement MLD approximately for more general noise models using matrix product states (MPS). Our implementation has running time O(nχ³), where χ is a parameter that controls the approximation precision. The key step of our algorithm, borrowed from the density matrix renormalization-group method, is a subroutine for contracting a tensor network on the two-dimensional grid. The subroutine uses MPS with a bond dimension χ to approximate the sequence of tensors arising in the course of contraction. We benchmark the MPS-based decoder against the standard minimum weight matching decoder observing a significant reduction of the logical error probability for χ ≥ 4.

  12. Combustion phasing for maximum efficiency for conventional and high efficiency engines

    International Nuclear Information System (INIS)

    Caton, Jerald A.

    2014-01-01

    Highlights: • Combustion phasing for max efficiency is a function of engine parameters. • Combustion phasing is most affected by heat transfer, compression ratio, burn duration. • Combustion phasing is less affected by speed, load, equivalence ratio and EGR. • Combustion phasing for a high efficiency engine was more advanced. • Exergy destruction during combustion as functions of combustion phasing is reported. - Abstract: The importance of the phasing of the combustion event for internal-combustion engines is well appreciated, but quantitative details are sparse. The objective of the current work was to examine the optimum combustion phasing (based on maximum bmep) as functions of engine design and operating variables. A thermodynamic, engine cycle simulation was used to complete this assessment. As metrics for the combustion phasing, both the crank angle for 50% fuel mass burned (CA50) and the crank angle for peak pressure (CApp) are reported as functions of the engine variables. In contrast to common statements in the literature, the optimum CA50 and CApp vary depending on the design and operating variables. Optimum, as used in this paper, refers to the combustion timing that provides the maximum bmep and brake thermal efficiency (MBT timing). For this work, the variables with the greatest influence on the optimum CA50 and CApp were the heat transfer level, the burn duration and the compression ratio. Other variables such as equivalence ratio, EGR level, engine speed and engine load had a much smaller impact on the optimum CA50 and CApp. For the conventional engine, for the conditions examined, the optimum CA50 varied between about 5 and 11°aTDC, and the optimum CApp varied between about 9 and 16°aTDC. For a high efficiency engine (high dilution, high compression ratio), the optimum CA50 was 2.5°aTDC, and the optimum CApp was 7.8°aTDC. These more advanced values for the optimum CA50 and CApp for the high efficiency engine were

  13. Design and optimization of automotive thermoelectric generators for maximum fuel efficiency improvement

    International Nuclear Information System (INIS)

    Kempf, Nicholas; Zhang, Yanliang

    2016-01-01

    Highlights: • A three-dimensional automotive thermoelectric generator (TEG) model is developed. • Heat exchanger design and TEG configuration are optimized for maximum fuel efficiency increase. • Heat exchanger conductivity has a strong influence on maximum fuel efficiency increase. • TEG aspect ratio and fin height increase with heat exchanger thermal conductivity. • A 2.5% fuel efficiency increase is attainable with nanostructured half-Heusler modules. - Abstract: Automotive fuel efficiency can be increased by thermoelectric power generation using exhaust waste heat. A high-temperature thermoelectric generator (TEG) that converts engine exhaust waste heat into electricity is simulated based on a light-duty passenger vehicle with a 4-cylinder gasoline engine. Strategies to optimize TEG configuration and heat exchanger design for maximum fuel efficiency improvement are provided. Through comparison of stainless steel and silicon carbide heat exchangers, it is found that both the optimal TEG design and the maximum fuel efficiency increase are highly dependent on the thermal conductivity of the heat exchanger material. A significantly higher fuel efficiency increase can be obtained using silicon carbide heat exchangers with taller fins and a longer TEG along the exhaust flow direction when compared to stainless steel heat exchangers. Accounting for major parasitic losses, a maximum fuel efficiency increase of 2.5% is achievable using newly developed nanostructured bulk half-Heusler thermoelectric modules.

  14. Design of Asymmetrical Relay Resonators for Maximum Efficiency of Wireless Power Transfer

    Directory of Open Access Journals (Sweden)

    Bo-Hee Choi

    2016-01-01

    Full Text Available This paper presents a new design method of asymmetrical relay resonators for maximum wireless power transfer. A new design method for relay resonators is demanded because maximum power transfer efficiency (PTE) is not obtained at the resonant frequency of the unit resonator. The maximum PTE for relay resonators is obtained at different resonances of the unit resonator. The optimum design of the asymmetrical relay is conducted by both the optimum placement and the optimum capacitance of the resonators. The optimum placement is found by scanning the positions of the relays, and the optimum capacitance can be found by using a genetic algorithm (GA). The PTEs are enhanced when the capacitances are optimally designed by the GA according to the positions of the relays, and maximum efficiency is then obtained at the optimum placement of the relays. The capacitance of the second resonator to the nth resonator and the load resistance should be determined for maximum efficiency, while the capacitance of the first resonator and the source resistance are obtained for the impedance matching. The simulated and measured results are in good agreement.

  15. Parametric characteristics of a solar thermophotovoltaic system at the maximum efficiency

    International Nuclear Information System (INIS)

    Liao, Tianjun; Chen, Xiaohang; Yang, Zhimin; Lin, Bihong; Chen, Jincan

    2016-01-01

    Graphical abstract: A model of the far-field TPVC driven by solar energy, which consists of an optical concentrator, an absorber, an emitter, and a PV cell and is simply referred to as the far-field STPVS. - Highlights: • A model of the far-field solar thermophotovoltaic system (STPVS) is established. • External and internal irreversible losses are considered. • The maximum efficiency of the STPVS is calculated. • Optimal values of key parameters at the maximum efficiency are determined. • Effects of the concentrator factor on the performance of the system are discussed. - Abstract: A model of the solar thermophotovoltaic system (STPVS) consisting of an optical concentrator, a thermal absorber, an emitter, and a photovoltaic (PV) cell is proposed, where the far-field thermal emission between the emitter and the PV cell, the radiation losses from the absorber and emitter to the environment, the reflected loss from the absorber, and the finite-rate heat exchange between the PV cell and the environment are taken into account. Analytical expressions for the power output and overall efficiency of the STPVS are derived. By solving thermal equilibrium equations, the operating temperatures of the emitter and PV cell are determined and the maximum efficiency of the system is calculated numerically for given values of the output voltage of the PV cell and the ratio of the front surface area of the absorber to that of the emitter. For different bandgaps, the maximum efficiencies of the system are calculated and the corresponding optimum values of several operating parameters are obtained. The effects of the concentrator factor on the optimum performance of the system are also discussed.

  16. Energy-Efficient Algorithm for Sensor Networks with Non-Uniform Maximum Transmission Range

    Directory of Open Access Journals (Sweden)

    Yimin Yu

    2011-06-01

    Full Text Available In wireless sensor networks (WSNs), the energy hole problem is a key factor affecting the network lifetime. In a circular multi-hop sensor network (modeled as concentric coronas), the optimal transmission ranges of all coronas can effectively improve network lifetime. In this paper, we investigate WSNs with non-uniform maximum transmission ranges, where sensor nodes deployed in different regions may differ in their maximum transmission range. Then, we propose an Energy-efficient algorithm for Non-uniform Maximum Transmission range (ENMT), which can search for approximately optimal transmission ranges of all coronas in order to prolong network lifetime. Furthermore, the simulation results indicate that ENMT performs better than other algorithms.

  17. Efficiency improvement of the maximum power point tracking for PV systems using support vector machine technique

    International Nuclear Information System (INIS)

    Kareim, Ameer A; Mansor, Muhamad Bin

    2013-01-01

    The aim of this paper is to improve the efficiency of maximum power point tracking (MPPT) for PV systems. A Support Vector Machine (SVM) was proposed to implement the MPPT controller. The theoretical, the perturbation and observation (P and O), and incremental conductance (IC) algorithms were used for comparison with the proposed SVM algorithm. MATLAB models for the PV module and for the theoretical, SVM, P and O, and IC algorithms are implemented. The improved MPPT uses the SVM method to predict the optimum voltage of the PV system in order to track the maximum power point (MPP). The SVM technique used two inputs, the solar radiation and the ambient temperature of the modeled PV module. The results show that the proposed SVM technique has a lower Root Mean Square Error (RMSE) and higher efficiency than the P and O and IC methods.
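
    A minimal sketch of the regression step this abstract describes, assuming scikit-learn's SVR and a purely illustrative training set mapping (irradiance, temperature) to the optimum PV voltage; the paper's actual kernel, data and MATLAB implementation are not reproduced here.

    ```python
    import numpy as np
    from sklearn.svm import SVR

    # Hypothetical training data: columns are [irradiance W/m^2, ambient temperature degC];
    # targets are the voltages at the maximum power point (all values are illustrative only).
    X = np.array([[200.0, 25.0], [400.0, 25.0], [600.0, 30.0], [800.0, 35.0], [1000.0, 40.0]])
    v_mpp = np.array([33.1, 33.6, 33.2, 32.5, 31.8])

    model = SVR(kernel="rbf", C=100.0, epsilon=0.05)
    model.fit(X, v_mpp)

    # Predict the operating voltage to command for the present conditions.
    print(model.predict(np.array([[750.0, 33.0]])))
    ```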

  18. Toward Improved Rotor-Only Axial Fans—Part II: Design Optimization for Maximum Efficiency

    DEFF Research Database (Denmark)

    Sørensen, Dan Nørtoft; Thompson, M. C.; Sørensen, Jens Nørkær

    2000-01-01

    Numerical design optimization of the aerodynamic performance of axial fans is carried out, maximizing the efficiency in a design interval of flow rates. Tip radius, number of blades, and angular velocity of the rotor are fixed, whereas the hub radius and spanwise distributions of chord length......, stagger angle, and camber angle are varied to find the optimum rotor geometry. Constraints ensure a pressure rise above a specified target and an angle of attack on the blades below stall. The optimization scheme is used to investigate the dependence of maximum efficiency on the width of the design interval...

  19. Performance characteristics and parametric choices of a solar thermophotovoltaic cell at the maximum efficiency

    International Nuclear Information System (INIS)

    Dong, Qingchun; Liao, Tianjun; Yang, Zhimin; Chen, Xiaohang; Chen, Jincan

    2017-01-01

    Graphical abstract: The overall model of the solar thermophotovoltaic cell (STPVC) composed of an optical lens, an absorber, an emitter, and a photovoltaic (PV) cell with an integrated back-side reflector is updated to include various irreversible losses. - Highlights: • A new model of the irreversible solar thermophotovoltaic system is proposed. • The material and structure parameters of the system are considered. • The performance characteristics at the maximum efficiency are revealed. • The optimal values of key parameters are determined. • The system can obtain a large efficiency under a relatively low concentration ratio. - Abstract: The overall model of the solar thermophotovoltaic cell (STPVC) composed of an optical lens, an absorber, an emitter, and a photovoltaic (PV) cell with an integrated back-side reflector is updated to include various irreversible losses. The power output and efficiency of the cell are analytically derived. The performance characteristics of the STPVC at the maximum efficiency are revealed. The optimum values of several important parameters, such as the voltage output of the PV cell, the area ratio of the absorber to the emitter, and the band-gap of the semiconductor material, are determined. It is found that under the condition of a relatively low concentration ratio, the optimally designed STPVC can obtain a relatively large efficiency.

  20. Novel high efficient speed sensorless controller for maximum power extraction from wind energy conversion systems

    International Nuclear Information System (INIS)

    Fathabadi, Hassan

    2016-01-01

    Highlights: • Novel sensorless MPPT technique without drawbacks of other sensor/sensorless methods. • Tracking the actual MPP of WECSs, not tracking the MPP of their wind turbines. • Actually extracting the highest output power from WECSs. • Novel MPPT technique with an MPPT efficiency of more than 98.5% for WECSs. • Novel MPPT technique with a short convergence time for WECSs. - Abstract: In this study, a novel, highly accurate sensorless maximum power point tracking (MPPT) method is proposed. The technique tracks the actual maximum power point of a wind energy conversion system (WECS) at which maximum output power is extracted from the system, not the maximum power point of its wind turbine at which maximum mechanical power is obtained from the turbine, so it actually extracts the highest output power from the system. The technique uses only the input voltage and current of the converter used in the system, and it neither needs any speed sensors (anemometer and tachometer) nor has the drawbacks of other sensor/sensorless-based MPPT methods. The technique has been implemented as an MPPT controller by constructing a WECS. Theoretical results, the technique performance, and its advantages are validated by presenting real experimental results. The real static-dynamic response of the MPPT controller is obtained experimentally, verifying that the proposed MPPT technique accurately extracts the highest instantaneous power from wind energy conversion systems, with an MPPT efficiency of more than 98.5% and a short convergence time of only 25 s for the constructed system, which has a total inertia of 3.93 kg m² and a friction coefficient of 0.014 N m s.

  1. Relations between the efficiency, power and dissipation for linear irreversible heat engine at maximum trade-off figure of merit

    Science.gov (United States)

    Iyyappan, I.; Ponmurugan, M.

    2018-03-01

    A trade-off figure of merit (Ω̇) criterion accounts for the best compromise between the useful input energy and the lost input energy of heat devices. When the heat engine is working at the maximum Ω̇ criterion, its efficiency increases significantly from the efficiency at maximum power. We derive the general relations between the power, efficiency at the maximum Ω̇ criterion and minimum dissipation for the linear irreversible heat engine. The efficiency at the maximum Ω̇ criterion has the lower bound ...
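
    For context, the Ω̇ criterion is commonly defined in the finite-time-thermodynamics literature (this definition is supplied here for orientation and is not quoted from the record) as

    $$ \dot{\Omega} = 2P - \eta_{\max}\,\dot{Q}_h = \left(2\eta - \eta_{\max}\right)\dot{Q}_h, $$

    where P is the power output, Q̇_h the heat input rate, η = P/Q̇_h the efficiency, and η_max the maximum attainable efficiency (the Carnot efficiency for a heat engine).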

  2. Maximum Efficiency per Torque Control of Permanent-Magnet Synchronous Machines

    Directory of Open Access Journals (Sweden)

    Qingbo Guo

    2016-12-01

    Full Text Available High-efficiency permanent-magnet synchronous machine (PMSM) drive systems need not only optimally designed motors but also efficiency-oriented control strategies. However, the existing control strategies only focus on partial loss optimization. This paper proposes a novel analytic loss model of the PMSM in either sine-wave pulse-width modulation (SPWM) or space vector pulse width modulation (SVPWM) which can take into account both the fundamental loss and harmonic loss. The fundamental loss is divided into fundamental copper loss and fundamental iron loss, which is estimated by the average flux density in the stator tooth and yoke. In addition, the harmonic loss is obtained from the Bertotti iron loss formula using the harmonic voltages of the three-phase inverter in either SPWM or SVPWM, which are calculated by double Fourier integral analysis. Based on the analytic loss model, this paper proposes a maximum efficiency per torque (MEPT) control strategy which can minimize the electromagnetic loss of the PMSM in the whole operation range. As the loss model of the PMSM is too complicated to obtain an analytical solution for the optimal loss, a golden section method is applied to find the optimal operation point accurately, which can make the PMSM work at maximum efficiency. The optimized results between SPWM and SVPWM show that the MEPT in SVPWM has a better effect on the optimization performance. Both the theory analysis and experiment results show that the MEPT control can significantly improve the efficiency performance of the PMSM in each operation condition with satisfactory dynamic performance.
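
    A minimal sketch of the golden-section step mentioned above: a generic one-dimensional golden-section minimizer that could be applied to a loss-versus-control-variable curve. The toy loss function and all parameter names are illustrative; the paper's PMSM loss model is not reproduced.

    ```python
    import math

    def golden_section_min(f, a, b, tol=1e-6):
        """Minimize a unimodal function f on [a, b] by golden-section search."""
        invphi = (math.sqrt(5.0) - 1.0) / 2.0          # 1/phi, about 0.618
        c, d = b - invphi * (b - a), a + invphi * (b - a)
        while abs(b - a) > tol:
            if f(c) < f(d):
                b, d = d, c                            # minimum lies in [a, d]
                c = b - invphi * (b - a)
            else:
                a, c = c, d                            # minimum lies in [c, b]
                d = a + invphi * (b - a)
        return 0.5 * (a + b)

    # Illustrative use: find the control variable minimizing a toy loss curve.
    loss = lambda x: (x - 0.4) ** 2 + 0.1 * math.sin(5.0 * x)
    print(golden_section_min(loss, 0.0, 1.0))
    ```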

  3. Maximum mutual information vector quantization of log-likelihood ratios for memory efficient HARQ implementations

    DEFF Research Database (Denmark)

    Danieli, Matteo; Forchhammer, Søren; Andersen, Jakob Dahl

    2010-01-01

    Modern mobile telecommunication systems, such as 3GPP LTE, make use of Hybrid Automatic Repeat reQuest (HARQ) for efficient and reliable communication between base stations and mobile terminals. To this purpose, marginal posterior probabilities of the received bits are stored in the form of log-likelihood ratios (LLRs)... The analysis leads to using maximum mutual information (MMI) as the optimality criterion and, in turn, Kullback-Leibler (KL) divergence as the distortion measure. Simulations run based on an LTE-like system have proven that VQ can be implemented in a computationally simple way at low rates of 2-3 bits per LLR value...

  4. Maximum Neutral Buoyancy Depth of Juvenile Chinook Salmon: Implications for Survival during Hydroturbine Passage

    Energy Technology Data Exchange (ETDEWEB)

    Pflugrath, Brett D.; Brown, Richard S.; Carlson, Thomas J.

    2012-03-01

    This study investigated the maximum depth at which juvenile Chinook salmon Oncorhynchus tshawytscha can acclimate by attaining neutral buoyancy. Depth of neutral buoyancy is dependent upon the volume of gas within the swim bladder, which greatly influences the occurrence of injuries to fish passing through hydroturbines. We used two methods to obtain maximum swim bladder volumes that were transformed into depth estimations: the increased excess mass test (IEMT) and the swim bladder rupture test (SBRT). In the IEMT, weights were surgically added to the fish's exterior, requiring the fish to increase swim bladder volume in order to remain neutrally buoyant. SBRT entailed removing and artificially increasing swim bladder volume through decompression. From these tests, we estimate the maximum acclimation depth for juvenile Chinook salmon is a median of 6.7 m (range = 4.6-11.6 m). These findings have important implications for survival estimates, studies using tags, hydropower operations, and survival of juvenile salmon that pass through large Kaplan turbines typical of those found within the Columbia and Snake River hydropower system.

  5. Efficiency of Photovoltaic Maximum Power Point Tracking Controller Based on a Fuzzy Logic

    Directory of Open Access Journals (Sweden)

    Ammar Al-Gizi

    2017-07-01

    Full Text Available This paper examines the efficiency of a fuzzy logic control (FLC)-based maximum power point tracking (MPPT) of a photovoltaic (PV) system under variable climate conditions and connected load requirements. The PV system, including a BP SX150S PV module, a buck-boost DC-DC converter, the MPPT, and a resistive load, is modeled and simulated using the Matlab/Simulink package. In order to compare the performance of the FLC-based MPPT controller with the conventional perturb and observe (P&O) method at different irradiation (G), temperature (T) and connected load (RL) variations, the rising time (tr), recovering time, total average power and MPPT efficiency are calculated. The simulation results show that the FLC-based MPPT method can quickly track the maximum power point (MPP) of the PV module at the transient state and effectively eliminates the power oscillation around the MPP of the PV module at steady state, hence more average power can be extracted, in comparison with the conventional P&O method.
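
    For reference, the conventional P&O baseline against which the fuzzy controller is compared can be summarized in a few lines; this is a generic textbook sketch, not the paper's Simulink implementation, and the step size is arbitrary.

    ```python
    def perturb_and_observe(v, p, v_prev, p_prev, step=0.5):
        """One iteration of perturb-and-observe MPPT.
        v, p           : present PV voltage and power measurements
        v_prev, p_prev : measurements from the previous iteration
        Returns the next voltage reference."""
        dp, dv = p - p_prev, v - v_prev
        if dp == 0:
            return v                    # already at (or oscillating around) the MPP
        if (dp > 0) == (dv > 0):
            return v + step             # last perturbation increased power: keep direction
        return v - step                 # otherwise reverse the perturbation direction
    ```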

  6. Efficient Photovoltaic System Maximum Power Point Tracking Using a New Technique

    Directory of Open Access Journals (Sweden)

    Mehdi Seyedmahmoudian

    2016-03-01

    Full Text Available Partial shading is an unavoidable condition which significantly reduces the efficiency and stability of a photovoltaic (PV) system. When partial shading occurs, the system has multiple-peak output power characteristics. In order to track the global maximum power point (GMPP) within an appropriate period, a reliable technique is required. Conventional techniques such as hill climbing and perturbation and observation (P&O) are inadequate in tracking the GMPP under this condition, resulting in a dramatic reduction in the efficiency of the PV system. Recent artificial intelligence methods have been proposed; however, they have a higher computational cost, slower processing time and increased oscillations, which result in further instability at the output of the PV system. This paper proposes a fast and efficient technique based on Radial Movement Optimization (RMO) for detecting the GMPP under partial shading conditions. The paper begins with a brief description of the behavior of PV systems under partial shading conditions followed by the introduction of the new RMO-based technique for GMPP tracking. Finally, results are presented to demonstrate the performance of the proposed technique under different partial shading conditions. The results are compared with those of the PSO method, one of the most widely used methods in the literature. Four factors, namely convergence speed, efficiency (power loss reduction), stability (oscillation reduction) and computational cost, are considered in the comparison with the PSO technique.

  7. Nonequilibrium thermodynamics and maximum entropy production in the Earth system: applications and implications.

    Science.gov (United States)

    Kleidon, Axel

    2009-06-01

    The Earth system is maintained in a unique state far from thermodynamic equilibrium, as, for instance, reflected in the high concentration of reactive oxygen in the atmosphere. The myriad of processes that transform energy, that result in the motion of mass in the atmosphere, in oceans, and on land, processes that drive the global water, carbon, and other biogeochemical cycles, all have in common that they are irreversible in their nature. Entropy production is a general consequence of these processes and measures their degree of irreversibility. The proposed principle of maximum entropy production (MEP) states that systems are driven to steady states in which they produce entropy at the maximum possible rate given the prevailing constraints. In this review, the basics of nonequilibrium thermodynamics are described, as well as how these apply to Earth system processes. Applications of the MEP principle are discussed, ranging from the strength of the atmospheric circulation, the hydrological cycle, and biogeochemical cycles to the role that life plays in these processes. Nonequilibrium thermodynamics and the MEP principle have potentially wide-ranging implications for our understanding of Earth system functioning, how it has evolved in the past, and why it is habitable. Entropy production allows us to quantify an objective direction of Earth system change (closer to vs further away from thermodynamic equilibrium, or, equivalently, towards a state of MEP). When a maximum in entropy production is reached, MEP implies that the Earth system reacts to perturbations primarily with negative feedbacks. In conclusion, this nonequilibrium thermodynamic view of the Earth system shows great promise to establish a holistic description of the Earth as one system. This perspective is likely to allow us to better understand and predict its function as one entity, how it has evolved in the past, and how it is modified by human activities in the future.

  8. Efficient method for computing the maximum-likelihood quantum state from measurements with additive Gaussian noise.

    Science.gov (United States)

    Smolin, John A; Gambetta, Jay M; Smith, Graeme

    2012-02-17

    We provide an efficient method for computing the maximum-likelihood mixed quantum state (with density matrix ρ) given a set of measurement outcomes in a complete orthonormal operator basis subject to Gaussian noise. Our method works by first changing basis yielding a candidate density matrix μ which may have nonphysical (negative) eigenvalues, and then finding the nearest physical state under the 2-norm. Our algorithm takes at worst O(d⁴) for the basis change plus O(d³) for finding ρ, where d is the dimension of the quantum state. In the special case where the measurement basis is strings of Pauli operators, the basis change takes only O(d³) as well. The workhorse of the algorithm is a new linear-time method for finding the closest probability distribution (in Euclidean distance) to a set of real numbers summing to one.
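
    A minimal sketch of the final subproblem described here: the Euclidean projection of a real vector (for example, the eigenvalues of the candidate matrix μ) onto the probability simplex. This is the standard sorting-based projection; the paper's own linear-time variant differs in detail but solves the same problem.

    ```python
    import numpy as np

    def project_to_simplex(v):
        """Return the point of the probability simplex (nonnegative entries summing to 1)
        closest to v in Euclidean distance."""
        v = np.asarray(v, dtype=float)
        u = np.sort(v)[::-1]                                    # sort descending
        css = np.cumsum(u)
        k = np.arange(1, v.size + 1)
        rho = np.nonzero(u + (1.0 - css) / k > 0)[0][-1]        # largest feasible index
        theta = (1.0 - css[rho]) / (rho + 1)
        return np.maximum(v + theta, 0.0)

    # Example: clip a set of "eigenvalues" containing a negative entry back onto the simplex.
    print(project_to_simplex([0.7, 0.5, -0.2]))   # -> [0.6, 0.4, 0.], which sums to 1
    ```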

  9. Simulation of maximum light use efficiency for some typical vegetation types in China

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Maximum light use efficiency (εmax) is a key parameter for the estimation of net primary productivity (NPP) derived from remote sensing data. There is still considerable disagreement about its value for each vegetation type. The εmax for some typical vegetation types in China is simulated using a modified least squares function based on NOAA/AVHRR remote sensing data and field-observed NPP data. The vegetation classification accuracy is introduced to the process. A sensitivity analysis of εmax to vegetation classification accuracy is also conducted. The results show that the simulated values of εmax are greater than the value used in the CASA model, and less than the values simulated with the BIOME-BGC model. This is consistent with some other studies. The relative error of εmax resulting from classification accuracy ranges from −5.5% to 8.0%. This indicates that the simulated values of εmax are reliable and stable.

  10. An efficient genetic algorithm for maximum coverage deployment in wireless sensor networks.

    Science.gov (United States)

    Yoon, Yourim; Kim, Yong-Hyuk

    2013-10-01

    Sensor networks have many applications, such as battlefield surveillance, environmental monitoring, and industrial diagnostics. Coverage is one of the most important performance metrics for sensor networks since it reflects how well a sensor field is monitored. In this paper, we introduce the maximum coverage deployment problem in wireless sensor networks and analyze the properties of the problem and its solution space. Random deployment is the simplest way to deploy sensor nodes but may cause unbalanced deployment and therefore, we need a more intelligent way for sensor deployment. We found that, from a mathematical point of view, the phenotype space of the problem is a quotient space of the genotype space. Based on this property, we propose an efficient genetic algorithm using a novel normalization method. A Monte Carlo method is adopted to design an efficient evaluation function, and its computation time is decreased without loss of solution quality using a method that starts from a small number of random samples and gradually increases the number for subsequent generations. The proposed genetic algorithms could be further improved by combining them with a well-designed local search. The performance of the proposed genetic algorithm is shown by a comparative experimental study. When compared with random deployment and existing methods, our genetic algorithm was not only about twice as fast, but also showed a significant improvement in solution quality.
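
    A minimal sketch of the Monte Carlo evaluation step: the covered fraction of a square field is estimated by sampling random points and testing whether each lies within the sensing radius of at least one node. All names and parameters are illustrative, not taken from the paper.

    ```python
    import random

    def coverage_fraction(nodes, radius, field=100.0, samples=10000):
        """Estimate the fraction of a field x field square covered by sensing disks of the
        given radius centred on the nodes (a list of (x, y) tuples)."""
        r2 = radius * radius
        hits = 0
        for _ in range(samples):
            px, py = random.uniform(0.0, field), random.uniform(0.0, field)
            if any((px - x) ** 2 + (py - y) ** 2 <= r2 for x, y in nodes):
                hits += 1
        return hits / samples

    # Example: fitness of one candidate deployment inside the genetic algorithm.
    print(coverage_fraction([(25, 25), (75, 75), (25, 75), (75, 25)], radius=30.0))
    ```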

  11. The maximum reservoir capacity of soils for persistent organic pollutants: implications for global cycling

    International Nuclear Information System (INIS)

    Dalla Valle, M.; Jurado, E.; Dachs, J.; Sweetman, A.J.; Jones, K.C.

    2005-01-01

    The concept of maximum reservoir capacity (MRC), the ratio of the capacities of the surface soil and of the atmospheric mixed layer (AML) to hold a chemical under equilibrium conditions, is applied to selected persistent organic pollutants (POPs) in the surface 'skin' (1 mm) of soils. MRC is calculated as a function of soil organic matter (SOM) content and temperature-dependent KOA and mapped globally for selected PCB congeners (PCB-28; -153; -180) and HCB, to identify regions with a higher tendency to retain POPs. It is shown to vary over many orders of magnitude, between compounds, locations and time (seasonally/diurnally). The MRC approach emphasises the very large capacity of soils as a storage compartment for POPs. The theoretical MRC concept is compared to reality and its implications for the global cycling of POPs are discussed. Sharp gradients in soil MRC can exist in mountainous areas and between the land and ocean. Exchange between oceans and land masses via the atmosphere is likely to be an important driver of the global cycling of these compounds, and net ocean-land transfers could occur in some areas. - Major global terrestrial sinks/stores for POPs are identified and the significance of gradients between them discussed.

  12. Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors

    Science.gov (United States)

    Langbein, John

    2017-08-01

    Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α, with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi: 10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
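
    The data-filter construction alluded to here is typically built from the fractional-integration impulse response (Kasdin's recursion, stated from the general power-law-noise literature rather than from this abstract): a power-law process with spectral index α is obtained by filtering white noise with coefficients

    $$ h_0 = 1, \qquad h_k = h_{k-1}\,\frac{k - 1 + \alpha/2}{k}, \quad k = 1, 2, \ldots $$

    Adding an independent white-noise term to the filtered sequence then gives the combined noise model directly, without requiring the data covariance matrix to be Toeplitz.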

  13. Petroleum production at Maximum Efficient Rate Naval Petroleum Reserve No. 1 (Elk Hills), Kern County, California

    International Nuclear Information System (INIS)

    1993-07-01

    This document provides an analysis of the potential impacts associated with the proposed action, which is continued operation of Naval Petroleum Reserve No. 1 (NPR-1) at the Maximum Efficient Rate (MER) as authorized by Public Law 94-258, the Naval Petroleum Reserves Production Act of 1976 (Act). The document also provides a similar analysis of alternatives to the proposed action, which also involve continued operations, but under lower development scenarios and lower rates of production. NPR-1 is a large oil and gas field jointly owned and operated by the federal government and Chevron U.S.A. Inc. (CUSA) pursuant to a Unit Plan Contract that became effective in 1944; the government's interest is approximately 78% and CUSA's interest is approximately 22%. The government's interest is under the jurisdiction of the United States Department of Energy (DOE). The facility is approximately 17,409 acres (74 square miles), and it is located in Kern County, California, about 25 miles southwest of Bakersfield and 100 miles north of Los Angeles in the south central portion of the state. The environmental analysis presented herein is a supplement to the NPR-1 Final Environmental Impact Statement that was issued by DOE in 1979 (1979 EIS). As such, this document is a Supplemental Environmental Impact Statement (SEIS)

  14. The tolerance efficiency of Panicum maximum and Helianthus annuus in TNT-contaminated soil and nZVI-contaminated soil.

    Science.gov (United States)

    Jiamjitrpanich, Waraporn; Parkpian, Preeda; Polprasert, Chongrak; Laurent, François; Kosanlavit, Rachain

    2012-01-01

    This study was designed to compare two initiation methods for phytoremediation, germination and transplantation. The study also determined the tolerance efficiency of Panicum maximum (Purple guinea grass) and Helianthus annuus (Sunflower) in TNT-contaminated soil and nZVI-contaminated soil. It was found that transplantation of Panicum maximum and Helianthus annuus was more suitable than germination as the initiation method for the nano-phytoremediation potting test. The study also showed that Panicum maximum was more tolerant than Helianthus annuus in TNT- and nZVI-contaminated soil. Therefore, Panicum maximum in the transplantation method should be selected as a hyperaccumulator plant for nano-phytoremediation potting tests. The maximum tolerance dosage of Panicum maximum in the transplantation method was 320 mg/kg for TNT-contaminated soil and 1000 mg/kg for nZVI-contaminated soil.

  15. Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates

    International Nuclear Information System (INIS)

    Laurence, T.; Chromy, B.

    2010-01-01

    Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the use of the Levenberg-Marquardt algorithm commonly used for nonlinear least squares minimization for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the non-linear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, is simple to implement, quick and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Non-linear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, it is not easy to satisfy this criterion in practice - which requires a large number of events. It has been well known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper providing extensive characterization of these biases in exponential fitting is given. The more appropriate measure based on the maximum likelihood estimator (MLE).
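
    The quantity minimized in place of the least-squares sum is the Poisson negative log-likelihood, which in its standard form (given here for context, not quoted from the record) reads

    $$ -2\ln L = 2\sum_i \left[ m_i(\theta) - n_i + n_i \ln\frac{n_i}{m_i(\theta)} \right], $$

    where n_i is the observed count in bin i, m_i(θ) is the model prediction, and bins with n_i = 0 contribute 2 m_i(θ).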

  16. Hierarchical Load Tracking Control of a Grid-connected Solid Oxide Fuel Cell for Maximum Electrical Efficiency Operation

    DEFF Research Database (Denmark)

    Li, Yonghui; Wu, Qiuwei; Zhu, Haiyu

    2015-01-01

    Based on the benchmark solid oxide fuel cell (SOFC) dynamic model for power system studies and the analysis of the SOFC operating conditions, the nonlinear programming (NLP) optimization method was used to determine the maximum electrical efficiency of the grid-connected SOFC subject...... With the optimal operating conditions of the SOFC for the maximum efficiency operation obtained at different active power output levels, a hierarchical load tracking control scheme for the grid-connected SOFC was proposed to realize the maximum electrical efficiency operation with the stack temperature bounded. The hierarchical control scheme consists of a fast active power control and a slower stack temperature control. The active power control was developed by using a decentralized control method. The efficiency of the proposed hierarchical control scheme was demonstrated by case studies using the benchmark SOFC dynamic model...

  17. An efficient implementation of maximum likelihood identification of LTI state-space models by local gradient search

    NARCIS (Netherlands)

    Bergboer, N.H.; Verdult, V.; Verhaegen, M.H.G.

    2002-01-01

    We present a numerically efficient implementation of the nonlinear least squares and maximum likelihood identification of multivariable linear time-invariant (LTI) state-space models. This implementation is based on a local parameterization of the system and a gradient search in the resulting parameter space.

  18. Theoretical assessment of the maximum power point tracking efficiency of photovoltaic facilities with different converter topologies

    Energy Technology Data Exchange (ETDEWEB)

    Enrique, J.M.; Duran, E.; Andujar, J.M. [Departamento de Ingenieria Electronica, de Sistemas Informaticos y Automatica, Universidad de Huelva (Spain); Sidrach-de-Cardona, M. [Departamento de Fisica Aplicada, II, Universidad de Malaga (Spain)

    2007-01-15

    The operating point of a photovoltaic generator that is connected to a load is determined by the intersection point of its characteristic curves. In general, this point is not the same as the generator's maximum power point. This difference results in losses in system performance. DC/DC converters together with maximum power point tracking systems (MPPT) are used to avoid these losses. Different algorithms have been proposed for maximum power point tracking. Nevertheless, the choice of the configuration of the right converter has not been studied so widely, although this choice, as demonstrated in this work, has an important influence on the optimum performance of the photovoltaic system. In this article, we conduct a study of the three basic topologies of DC/DC converters with resistive load connected to photovoltaic modules. This article demonstrates that there is a limitation in the system's performance according to the type of converter used. Two fundamental conclusions are derived from this study: (1) the buck-boost DC/DC converter topology is the only one which allows tracking of the PV module's maximum power point regardless of temperature, irradiance and connected load and (2) the connection of a buck-boost DC/DC converter in a photovoltaic facility to the panel output could be a good practice to improve performance. (author)
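
    The conclusion about the buck-boost topology follows from the ideal steady-state conversion ratios. For an ideal buck-boost converter in continuous conduction mode with duty cycle D feeding a resistive load R_L, the resistance seen at the PV terminals is

    $$ R_{in} = R_L \left(\frac{1-D}{D}\right)^{2}, \qquad 0 < D < 1, $$

    which can be swept from 0 to infinity as D varies, whereas the ideal buck (R_in = R_L/D²) and boost (R_in = R_L(1-D)²) converters can only present resistances above or below R_L, respectively.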

  19. Multiple regression models for the prediction of the maximum obtainable thermal efficiency of organic Rankine cycles

    DEFF Research Database (Denmark)

    Larsen, Ulrik; Pierobon, Leonardo; Wronski, Jorrit

    2014-01-01

    Much attention is focused on increasing the energy efficiency to decrease fuel costs and CO2 emissions throughout industrial sectors. The ORC (organic Rankine cycle) is a relatively simple but efficient process that can be used for this purpose by converting low and medium temperature waste heat ...

  20. Modern and last glacial maximum snowline in Peru and Bolivia: implications for regional climatic change

    Directory of Open Access Journals (Sweden)

    1995-01-01

    Full Text Available MODERN AND LAST GLACIAL MAXIMUM SNOWLINES IN PERU AND BOLIVIA: IMPLICATIONS FOR REGIONAL CLIMATIC CHANGE. In the central Andes (5°-23°S), the modern snowline and the snowline of the last glacial maximum (LGM) were mapped using remote sensing techniques and a geographic information system. The general configuration of the LGM snowline is similar to the modern one: it rises from east to west, following the decrease in precipitation. The LGM snowline depression in the region departs considerably from the 1,000 m often reported for low-latitude areas. A model describing snowline lowering (Kuhn, 1989) was used to determine the changes in temperature and precipitation responsible for the LGM snowline depression. The snowline lowering of 800-1,200 m in the western cordillera during the LGM is partly explained by an increase in precipitation. On the flanks of the eastern cordillera, the snowline depression of more than 1,200 m best reveals the cooling experienced by the region during the LGM; it corresponds to a decrease of about 5 to 7.5 °C.

  1. Efficiency of European emissions markets: Lessons and implications

    International Nuclear Information System (INIS)

    Krishnamurti, Chandrasekhar; Hoque, Ariful

    2011-01-01

    While prior studies have shown that emission rights and futures contracts on emission rights are efficiently priced, there are no studies on the efficiency of the options market. Therefore, this study fills the gap. We examine empirical evidence regarding the efficiency of the options market for emissions rights in Europe. We employ the put-call parity approach to test the efficiency of options on emission rights traded in the European market. This implies that firms can trade options on emission rights in addition to other existing strategies in order to manage their greenhouse gas emissions. - Highlights: → Efficiency of the European options market for emissions. → Design implications for the development of emissions trading schemes in other countries. → Governance issues pertaining to emissions trading.

  2. Search for the maximum efficiency of a ribbed-surfaces device, providing a tight seal

    International Nuclear Information System (INIS)

    Boutin, Jeanne.

    1977-04-01

    The purpose of this experiment was to determine the geometrical characteristics of ribbed surfaces used on devices in translation or slow rotation that must form an acceptable seal between slightly viscous fluids. The pressure loss coefficient lambda is studied systematically as a function of the different parameters defining the shape of the ribs and their relative position on the two opposing surfaces. It is shown that passages with two ribbed surfaces give considerably better results than those with only one, the maximum value of lambda, equal to 0.5, being obtained with the ratios pitch/clearance = 5 and groove depth/clearance = 1.2, and with the teeth of the two opposing ribbed surfaces facing each other. With certain shapes, a staggered arrangement of the ribs can also lead to a maximum of lambda, which nevertheless remains below 0.5. [fr]

  3. An Efficient UD-Based Algorithm for the Computation of Maximum Likelihood Sensitivity of Continuous-Discrete Systems

    DEFF Research Database (Denmark)

    Boiroux, Dimitri; Juhl, Rune; Madsen, Henrik

    2016-01-01

    This paper addresses maximum likelihood parameter estimation of continuous-time nonlinear systems with discrete-time measurements. We derive an efficient algorithm for the computation of the log-likelihood function and its gradient, which can be used in gradient-based optimization algorithms. This algorithm uses UD decomposition of symmetric matrices and the array algorithm for covariance update and gradient computation. We test our algorithm on the Lotka-Volterra equations. Compared to maximum likelihood estimation based on finite-difference gradient computation, we get a significant speedup.

  4. Tracking the maximum efficiency point for the FC system based on extremum seeking scheme to control the air flow

    International Nuclear Information System (INIS)

    Bizon, Nicu

    2014-01-01

    Highlights: • The Maximum Efficiency Point (MEP) is tracked based on the air flow rate. • The proposed Extremum Seeking (ES) control assures high performance. • About 10 kW/s search speed and 99.99% stationary accuracy can be obtained. • The energy efficiency increases by 3–12%, according to the power losses. • The control strategy is robust, based on the proposed self-optimizing ES scheme. - Abstract: An advanced control of the air compressor for the Proton Exchange Membrane Fuel Cell (PEMFC) system is proposed in this paper based on an Extremum Seeking (ES) control scheme. The FC net power depends mainly on the air and hydrogen flow rates and pressures, and on heat and water management. This paper proposes to compute the optimal value of the air flow rate with the advanced ES control scheme in order to maximize the FC net power. In this way, the Maximum Efficiency Point (MEP) is tracked in real time, with about 10 kW/s search speed and a stationary accuracy of 0.99. Thus, the energy efficiency will be close to the maximum value that can be obtained for a given PEMFC stack and compressor group under dynamic load. It is shown that MEP tracking allows the FC net power to be increased by 3–12%, depending on the percentage of the FC power supplied to the compressor and the level of the load power. Simulations show that the performance levels mentioned above are achieved.
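
    The abstract does not give the controller equations; the following is a minimal sketch of a generic sinusoidal-dither extremum-seeking loop of the kind described, not the paper's actual scheme. The net-power map fc_net_power, the dither amplitude, the integrator gain and the washout frequency are illustrative assumptions (Python).

        import math

        def fc_net_power(air_flow):
            # Hypothetical stand-in for the PEMFC net-power map (stack power minus
            # compressor power) with a single maximum; the real map comes from the plant.
            return 5.0 - (air_flow - 55.0) ** 2 / 300.0

        def extremum_seeking(u0=40.0, t_end=20.0, dt=1e-3,
                             a=0.5, k=150.0, f_dither=50.0, f_washout=5.0):
            # Classic sinusoidal-dither extremum seeking on the air-flow set-point.
            omega = 2.0 * math.pi * f_dither
            u, p_lp = u0, fc_net_power(u0)
            for i in range(int(t_end / dt)):
                t = i * dt
                p = fc_net_power(u + a * math.sin(omega * t))         # measured net power
                p_lp += dt * 2.0 * math.pi * f_washout * (p - p_lp)   # low-pass (DC) estimate
                grad = (p - p_lp) * math.sin(omega * t)               # demodulated gradient proxy
                u += k * grad * dt                                    # integrate toward the MEP
            return u

        print("air-flow set-point converged near", round(extremum_seeking(), 1))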

  5. Hierarchical Load Tracking Control of a Grid-Connected Solid Oxide Fuel Cell for Maximum Electrical Efficiency Operation

    Directory of Open Access Journals (Sweden)

    Yonghui Li

    2015-03-01

    Full Text Available Based on the benchmark solid oxide fuel cell (SOFC) dynamic model for power system studies and the analysis of the SOFC operating conditions, the nonlinear programming (NLP) optimization method was used to determine the maximum electrical efficiency of the grid-connected SOFC subject to the constraints of fuel utilization factor, stack temperature and output active power. The optimal operating conditions of the grid-connected SOFC were obtained by solving the NLP problem considering the power consumed by the air compressor. With the optimal operating conditions of the SOFC for maximum-efficiency operation obtained at different active power output levels, a hierarchical load tracking control scheme for the grid-connected SOFC was proposed to realize maximum electrical efficiency operation with the stack temperature bounded. The hierarchical control scheme consists of a fast active power control and a slower stack temperature control. The active power control was developed by using a decentralized control method. The effectiveness of the proposed hierarchical control scheme was demonstrated by case studies using the benchmark SOFC dynamic model.

  6. Process configuration of Liquid-nitrogen Energy Storage System (LESS) for maximum turnaround efficiency

    Science.gov (United States)

    Dutta, Rohan; Ghosh, Parthasarathi; Chowdhury, Kanchan

    2017-12-01

    Diverse power generation sectors require energy storage due to the penetration of variable renewable energy sources and the use of CO2 capture plants with fossil fuel based power plants. Cryogenic energy storage, being a large-scale, decoupled system capable of producing power in the range of MWs, is one of the options. The drawback of these systems is low turnaround efficiency, because the liquefaction processes are highly energy intensive. In this paper, opportunities for improving the turnaround efficiency of such a plant based on liquid nitrogen were identified and some of them were addressed. A method using multiple stages of reheat and expansion was proposed, which improved the turnaround efficiency from 22% to 47% using four such stages in the cycle. The novelty here is the application of reheating in a cryogenic system and the utilization of waste heat for that purpose. Based on the study, process conditions for a laboratory-scale setup were determined and are presented here.
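
    The paper's process flowsheet is not reproduced in the abstract; the sketch below only illustrates, with ideal-gas arithmetic, why splitting the re-expansion into several stages with reheat raises the recoverable work and hence the turnaround efficiency. All numbers (pressure ratio, reheat temperature, assumed liquefaction energy) are placeholders, not values from the paper.

        # Illustrative staged re-expansion with reheat for a liquid-nitrogen store.
        GAMMA = 1.4             # ideal-gas nitrogen
        CP = 1040.0             # J/(kg K)
        T_REHEAT = 300.0        # K, gas reheated (e.g. by waste heat) before each turbine
        P_RATIO_TOTAL = 100.0   # overall expansion pressure ratio (assumed)
        W_LIQUEFACTION = 1.5e6  # J/kg spent earlier to liquefy (assumed round number)

        def expansion_work_per_kg(n_stages):
            # Ideal isentropic expansion split into n stages, each reheated to T_REHEAT.
            r_stage = P_RATIO_TOTAL ** (1.0 / n_stages)
            w_stage = CP * T_REHEAT * (1.0 - r_stage ** (-(GAMMA - 1.0) / GAMMA))
            return n_stages * w_stage

        for n in (1, 2, 4):
            eff = expansion_work_per_kg(n) / W_LIQUEFACTION
            print(f"{n} stage(s): turnaround efficiency ~ {eff:.0%}")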

  7. INVESTIGATION OF VEHICLE WHEEL ROLLING WITH MAXIMUM EFFICIENCY IN THE BRAKE MODE

    Directory of Open Access Journals (Sweden)

    D. Leontev

    2011-01-01

    Full Text Available Modern vehicles are equipped with various systems for the automatic control of braking effort, whose design parameters as a rule cannot be calculated in a closed, rational form. To increase the working efficiency of such systems it is necessary to have data on the impact of various operational factors on the processes occurring during braking of the controlled object (the vehicle wheel). Availability of data on the impact of operational factors makes it possible to reduce the geometrical parameters of the adjustment devices (modulators) and to maintain their efficient operation under the various driving conditions of the vehicle.

  8. Maximum efficiency of wind turbine rotors using Joukowsky and Betz approaches

    DEFF Research Database (Denmark)

    Okulov, Valery; Sørensen, Jens Nørkær

    2010-01-01

    On the basis of the concepts outlined by Joukowsky nearly a century ago, an analytical aerodynamic optimization model is developed for rotors with a finite number of blades and constant circulation distribution. In the paper, we show the basics of the new model and compare its efficiency…

  9. Making Conditional Cash Transfer Programs More Efficient: Designing for Maximum Effect of the Conditionality

    OpenAIRE

    de Janvry, Alain; Sadoulet, Elisabeth

    2006-01-01

    Conditional cash transfer programs are now used extensively to encourage poor parents to increase investments in their children's human capital. These programs can be large and expensive, motivating a quest for greater efficiency through increased impact of the programs' imposed conditions on human capital formation. This requires designing the programs' targeting and calibration rules spe...

  10. Modeling and operation optimization of a proton exchange membrane fuel cell system for maximum efficiency

    International Nuclear Information System (INIS)

    Han, In-Su; Park, Sang-Kyun; Chung, Chang-Bock

    2016-01-01

    Highlights: • A proton exchange membrane fuel cell system is operationally optimized. • A constrained optimization problem is formulated to maximize fuel cell efficiency. • Empirical and semi-empirical models for most system components are developed. • Sensitivity analysis is performed to elucidate the effects of major operating variables. • The optimization results are verified by comparison with actual operation data. - Abstract: This paper presents an operation optimization method and demonstrates its application to a proton exchange membrane fuel cell system. A constrained optimization problem was formulated to maximize the efficiency of a fuel cell system by incorporating practical models derived from actual operations of the system. Empirical and semi-empirical models for most of the system components were developed based on artificial neural networks and semi-empirical equations. Prior to system optimizations, the developed models were validated by comparing simulation results with the measured ones. Moreover, sensitivity analyses were performed to elucidate the effects of major operating variables on the system efficiency under practical operating constraints. Then, the optimal operating conditions were sought at various system power loads. The optimization results revealed that the efficiency gaps between the worst and best operation conditions of the system could reach 1.2–5.5% depending on the power output range. To verify the optimization results, the optimal operating conditions were applied to the fuel cell system, and the measured results were compared with the expected optimal values. The discrepancies between the measured and expected values were found to be trivial, indicating that the proposed operation optimization method was quite successful for a substantial increase in the efficiency of the fuel cell system.
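
    The paper's plant models are neural-network and semi-empirical fits that the abstract does not reproduce; the snippet below only illustrates the general shape of such a constrained operating-point optimization. The efficiency surrogate, its arguments (air stoichiometry, cathode pressure), the bounds and the power-balance constraint are invented placeholders, not the authors' models.

        import numpy as np
        from scipy.optimize import minimize

        P_LOAD_KW = 30.0  # requested net power level (placeholder)

        def system_efficiency(x):
            # Toy surrogate for net system efficiency vs. air stoichiometry and
            # cathode pressure (bar); stands in for the paper's ANN/semi-empirical models.
            stoich, p_ca = x
            stack = 0.55 - 0.02 * (stoich - 2.2) ** 2 - 0.01 * (p_ca - 1.6) ** 2
            parasitic = 0.015 * stoich * p_ca          # compressor penalty grows with both
            return stack - parasitic

        def net_power(x):
            # Toy net-power model tying the operating point to the load demand.
            stoich, p_ca = x
            return P_LOAD_KW * (0.95 + 0.08 * np.tanh(stoich - 2.0) - 0.02 * (p_ca - 1.5))

        res = minimize(
            lambda x: -system_efficiency(x),           # maximize efficiency
            x0=np.array([3.0, 1.2]),
            bounds=[(1.5, 3.5), (1.1, 2.5)],           # operating-constraint box
            constraints=[{"type": "ineq", "fun": lambda x: net_power(x) - P_LOAD_KW}],
            method="SLSQP",
        )
        print("optimal (stoichiometry, cathode pressure):", res.x,
              "efficiency:", round(-res.fun, 3))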

  11. Maximum Efficiency of Thermoelectric Heat Conversion in High-Temperature Power Devices

    Directory of Open Access Journals (Sweden)

    V. I. Khvesyuk

    2016-01-01

    Full Text Available Current trends in aircraft engineering are driven by the development of fifth-generation vehicles, whose features motivate the use of new high-performance onboard power-supply systems. The operating temperature of the outer engine walls is 800–1000 K, corresponding to a radiative heat flux of 10 kW/m². The thermal energy, including the radiation of the engine wall, may potentially be converted into electricity. The main objective of this paper is to analyze whether highly efficient thermoelectric conversion of this heat into electricity is possible. The paper considers working processes, choice of materials, and optimization of the thermoelectric conversion, and presents an analysis of the operating conditions of a thermoelectric generator (TEG) used in advanced high-temperature power devices. A high-temperature heat source is a favorable factor for thermoelectric conversion. It is shown that, for existing thermoelectric materials, the theoretical conversion efficiency can reach 15–20% at temperatures up to 1500 K and available values of the Ioffe parameter ZT = 2–3 (Z is the figure of merit, T the temperature). To ensure the required temperature regime and high conversion efficiency simultaneously, the TEG power, the temperatures of the hot and cold surfaces, and the heat-transfer coefficient of the cooling system must be matched. The paper also discusses a concept of a radiation absorber on the TEG hot surface. The analysis demonstrates a number of possibilities for highly efficient conversion using TEGs in high-temperature power devices. This work was supported by the Ministry of Education and Science of the Russian Federation, project No. 1145 (programme “Organization of Research Engineering Activities”).
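
    The abstract quotes the 15–20% figure without the underlying relation. The standard textbook expression for the maximum efficiency of a thermoelectric generator with figure of merit ZT (evaluated at the mean temperature) is eta_max = (1 − Tc/Th) · (sqrt(1+ZT) − 1) / (sqrt(1+ZT) + Tc/Th); the temperatures used below are illustrative assumptions, not values taken from the paper.

        import math

        def teg_max_efficiency(t_hot, t_cold, zt):
            # Standard maximum-efficiency expression for a thermoelectric generator.
            carnot = 1.0 - t_cold / t_hot
            m = math.sqrt(1.0 + zt)
            return carnot * (m - 1.0) / (m + t_cold / t_hot)

        # Illustrative operating points (assumed, not from the paper).
        for t_hot, t_cold, zt in [(1000.0, 500.0, 2.0), (1500.0, 700.0, 2.0), (1500.0, 700.0, 3.0)]:
            print(f"Th={t_hot:.0f} K, Tc={t_cold:.0f} K, ZT={zt}: "
                  f"eta_max = {teg_max_efficiency(t_hot, t_cold, zt):.1%}")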

  12. Chronological trends in maximum and minimum water flows of the Teesta River, Bangladesh, and its implications

    Directory of Open Access Journals (Sweden)

    Md. Sanaul H. Mondal

    2017-03-01

    Full Text Available Bangladesh shares a common border with India in the west, north and east and with Myanmar in the southeast. These borders cut across 57 rivers that discharge through Bangladesh into the Bay of Bengal in the south. The upstream courses of these rivers traverse India, China, Nepal and Bhutan. Transboundary flows are an important source of water resources in Bangladesh. Among the 57 transboundary rivers, the Teesta is the fourth major river in Bangladesh after the Ganges, the Brahmaputra and the Meghna, and it occupies about 2,071 km² within Bangladesh. The Teesta River floodplain in Bangladesh accounts for 14% of the total cropped area and 9.15 million people of the country. The objective of this study was to investigate trends in both maximum and minimum water flow at the Kaunia and Dalia stations on the Teesta River and the coping strategies developed by communities to adjust to uncertain flood situations. The flow characteristics of the Teesta were analysed by calculating monthly maximum and minimum water levels and discharges from 1985 to 2006. The discharge of the Teesta over the last 22 years has been decreasing. Extreme low-flow conditions were likely to occur more frequently after the implementation of the Gozoldoba Barrage by India. A very sharp decrease in peak flows was also observed, albeit with unexpectedly high discharges in 1988, 1989, 1991, 1997, 1999 and 2004, some of them between April and October. The onrush of water causes frequent flash floods, whereas the decreasing flow leaves the areas dependent on the Teesta vulnerable to droughts. Both these extreme situations have had a negative impact on the lives and livelihoods of people dependent on the Teesta. Over the years, people have developed several risk mitigation strategies to adjust to both natural and anthropogenic flood situations. This article proposed the concept of ‘MAXIN’ (maximum and minimum flows) for river water justice for riparian land.

  13. Quantum Coherent Three-Terminal Thermoelectrics: Maximum Efficiency at Given Power Output

    Directory of Open Access Journals (Sweden)

    Robert S. Whitney

    2016-05-01

    Full Text Available This work considers the nonlinear scattering theory for three-terminal thermoelectric devices used for power generation or refrigeration. Such systems are quantum phase-coherent versions of a thermocouple, and the theory applies to systems in which interactions can be treated at a mean-field level. It considers an arbitrary three-terminal system in any external magnetic field, including systems with broken time-reversal symmetry, such as chiral thermoelectrics, as well as systems in which the magnetic field plays no role. It is shown that the upper bound on efficiency at given power output is of quantum origin and is stricter than Carnot’s bound. The bound is exactly the same as previously found for two-terminal devices and can be achieved by three-terminal systems with or without broken time-reversal symmetry, i.e., chiral and non-chiral thermoelectrics.

  14. Electron spin resonance and its implication on the maximum nuclear polarization of deuterated solid target materials

    International Nuclear Information System (INIS)

    Heckmann, J.; Meyer, W.; Radtke, E.; Reicherz, G.; Goertz, S.

    2006-01-01

    ESR spectroscopy is an important tool in polarized solid target material research, since it allows us to study the paramagnetic centers, which are used for the dynamic nuclear polarization (DNP). The polarization behavior of the different target materials is strongly affected by the properties of these centers, which are added to the diamagnetic materials by chemical doping or irradiation. In particular, the ESR linewidth of the paramagnetic centers is a very important parameter, especially concerning the deuterated target materials. In this paper, the results of the first precise ESR measurements of the deuterated target materials at a DNP-relevant magnetic field of 2.5 T are presented. Moreover, these results allowed us to experimentally study the correlation between ESR linewidth and maximum deuteron polarization, as given by the spin-temperature theory

  15. Optimizing WiMAX: Mitigating Co-Channel Interference for Maximum Spectral Efficiency

    International Nuclear Information System (INIS)

    Ansari, A.Q.; Memon, A.L.; Qureshi, I.A.

    2016-01-01

    The efficient use of the radio spectrum is one of the most important issues in wireless networks, because spectrum is limited and the wireless environment is subject to channel interference. To cope with this and make better use of the radio spectrum, wireless networks use the frequency reuse technique, which allows the same frequency band to be used in different cells of the same network, subject to the inter-cell distance and the resulting interference level. The WiMAX (Worldwide Interoperability for Microwave Access) PHY profile is designed to use an FRF (Frequency Reuse Factor) of one. An FRF of one improves spectral efficiency but results in CCI (Co-Channel Interference) at cell boundaries. The interference level must therefore be measured so that averaging/minimization techniques can keep it below an acceptable threshold in the wireless environment. In this paper, we analyze how effectively the impact of CCI can be mitigated by using the different subcarrier permutation types defined in the IEEE 802.16 standard. A simulation-based analysis is presented of the impact on CCI of using the same or different permutation bases in adjacent cells of a WiMAX network under varying load conditions. We further study the effect of the permutation base in an environment where frequency reuse is used in conjunction with cell sectoring for better utilization of the radio spectrum. (author)

  16. Efficient Maximum Likelihood Estimation for Pedigree Data with the Sum-Product Algorithm.

    Science.gov (United States)

    Engelhardt, Alexander; Rieger, Anna; Tresch, Achim; Mansmann, Ulrich

    2016-01-01

    We analyze data sets consisting of pedigrees with age at onset of colorectal cancer (CRC) as phenotype. The occurrence of familial clusters of CRC suggests the existence of a latent, inheritable risk factor. We aimed to compute the probability of a family possessing this risk factor as well as the hazard rate increase for these risk factor carriers. Due to the inheritability of this risk factor, the estimation necessitates a costly marginalization of the likelihood. We propose an improved EM algorithm by applying factor graphs and the sum-product algorithm in the E-step. This reduces the computational complexity from exponential to linear in the number of family members. Our algorithm is as precise as a direct likelihood maximization in a simulation study and a real family study on CRC risk. For 250 simulated families of size 19 and 21, the runtime of our algorithm is faster by a factor of 4 and 29, respectively. On the largest family (23 members) in the real data, our algorithm is 6 times faster. We introduce a flexible and runtime-efficient tool for statistical inference in biomedical event data with latent variables that opens the door for advanced analyses of pedigree data. © 2017 S. Karger AG, Basel.
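
    The pedigree factor graphs in the paper are problem-specific; the toy sketch below only illustrates the complexity argument behind the E-step, marginalizing a chain of binary latent variables by brute force (exponential in N) and by sum-product message passing (linear in N). The factors are arbitrary stand-ins, not the CRC risk model.

        import itertools
        import numpy as np

        rng = np.random.default_rng(0)
        N = 10                                   # number of latent binary variables
        unary = rng.random((N, 2))               # arbitrary unary factors
        pair = rng.random((N - 1, 2, 2))         # arbitrary pairwise (chain) factors

        def partition_brute_force():
            # Sum over all 2**N configurations - exponential in N.
            total = 0.0
            for cfg in itertools.product((0, 1), repeat=N):
                w = np.prod([unary[i, cfg[i]] for i in range(N)])
                w *= np.prod([pair[i, cfg[i], cfg[i + 1]] for i in range(N - 1)])
                total += w
            return total

        def partition_sum_product():
            # Forward message passing along the chain - linear in N.
            msg = unary[0].copy()
            for i in range(N - 1):
                msg = (msg[:, None] * pair[i]).sum(axis=0) * unary[i + 1]
            return msg.sum()

        print(partition_brute_force(), partition_sum_product())   # identical up to rounding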

  17. Arctic Dinoflagellate Migration Marks the Oligocene Glacial Maximum: Implications for the Rupelian-Chattian Boundary

    Science.gov (United States)

    van Simaeys, S.; Brinkhuis, H.; Pross, J.; Williams, G. L.; Zachos, J. C.

    2004-12-01

    Various geochemical and biotic climate proxies, and notably deep-sea benthic foraminiferal δ18O records, indicate that the Eocene 'greenhouse' state of the Earth gradually evolved towards an earliest Oligocene 'icehouse' state, eventually triggering the abrupt appearance of large continental ice sheets on Antarctica at ~33.3 Ma (Oi-1 event). This, however, was only the first of two major glacial events in the Oligocene. Benthic foraminiferal δ18O records show a second positive excursion in the mid Oligocene, consistent with a significant ice-sheet expansion and/or cooling at 27.1 Ma (Oi-2b) coincident with magnetosubchron C9n. Here, we report on a mid Oligocene, globally synchronous, Arctic dinoflagellate migration event, calibrated against the upper half of C9n. A sudden appearance, and abundance increases, of the Arctic taxon Svalbardella at lower-middle latitudes coincides with the so-called Oi-2b benthic δ18O event, dated at ~27.1 Ma. This phenomenon is taken to indicate significant high-latitude surface water cooling, concomitant Antarctic ice-sheet growth, and sea level lowering. The duration of the Svalbardella migrations, and of the episode of profound cooling, is estimated at ~500 ka, and is here termed the Oligocene Glacial Maximum (OGM). Our records suggest a close link between the OGM, the sea-level fall, and the classic Rupelian-Chattian boundary, magnetostratigraphically dating this boundary at ~27.1 Ma.

  18. Chewing efficiency and maximum bite force with different attachment systems of implant overdentures: a crossover study.

    Science.gov (United States)

    Elsyad, Moustafa Abdou; Khairallah, Ahmed Samir

    2017-06-01

    This crossover study aimed to evaluate and compare chewing efficiency and maximum bite force (MBF) with resilient telescopic and bar attachment systems for implant overdentures in patients with atrophied mandibles. Ten participants with severely resorbed mandibles and persistent denture problems received new maxillary and mandibular conventional dentures (control, CD). After 3 months of adaptation, two implants were inserted in the canine region of the mandible. In a quasi-random method, overdentures were connected to the implants with either a bar (BOD) or a resilient telescopic (TOD) attachment system. Chewing efficiency in terms of unmixed fraction (UF) was measured using chewing gum (after 5, 10, 20, 30 and 50 strokes), and MBF was measured using a bite force transducer. Measurements were performed 3 months after using each of the following prostheses: CD, BOD and TOD. Chewing efficiency and MBF increased significantly with BOD and TOD compared to CD. As the number of chewing cycles increased, the UF decreased. TOD recorded significantly higher chewing efficiency and MBF than BOD. Resilient telescopic attachments are associated with increased chewing efficiency and MBF compared with bar attachments when used to retain overdentures on implants in patients with atrophied mandibles. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  19. The Red Sea during the Last Glacial Maximum: implications for sea level reconstructions

    Science.gov (United States)

    Gildor, H.; Biton, E.; Peltier, W. R.

    2006-12-01

    The Red Sea (RS) is a semi-enclosed basin connected to the Indian Ocean via a narrow and shallow strait and surrounded by arid areas; it exhibits high sensitivity to atmospheric changes and sea level reduction. We have used the MIT GCM to investigate the changes in the hydrography and circulation of the RS in response to reduced sea level, variability in the Indian monsoons, and changes in atmospheric temperature and humidity that occurred during the Last Glacial Maximum (LGM). The model results show high sensitivity to sea level reduction, especially in the salinity field (salinity increasing with the reduction in sea level), together with a mild atmospheric impact. Sea level reduction decreases the stratification, increases subsurface temperatures, and alters the circulation pattern at the Strait of Bab el Mandab, which experiences a transition from submaximal flow to maximal flow. The reduction in sea level at the LGM alters the location of deep water formation, which shifts to an open-sea convective site in the northern part of the RS, compared to the present-day situation in which deep water is formed from the Gulf of Suez outflow. Our main result, based both on the GCM and on a simple hydraulic control model which takes into account mixing processes at the Strait of Bab El Mandeb, is that sea level was reduced by only ~100 m in the Bab El Mandeb region during the LGM, i.e. the water depth at the Hanish sill (the shallowest part of the Strait of Bab el Mandab) was around 34 m. This result agrees with the recent reconstruction of the LGM low stand of the sea in this region based upon the ICE-5G (VM2) model of Peltier (2004).

  20. Chemistry of the Marlboro Clay in Virginia and Implications for the Paleocene-Eocene Thermal Maximum

    Science.gov (United States)

    Zimmer, M.; Cai, Y.; Corley, A.; Liang, J. A.; Powars, D.; Goldstein, S. L.; Kent, D. V.; Broecker, W. S.

    2017-12-01

    The Paleocene-Eocene Thermal Maximum (PETM) was a global hyperthermal (~5 °C warming) event marked by a rapid carbon isotope excursion (CIE) of >1‰ in the marine carbonate record (e.g. Zeebe et al. Nature Geoscience 2009). Possible explanations for the CIE include intrusion of a sill complex into organic carbonate (Aarnes et al. J. Geol. Soc. 2015), dissolution of methane hydrates (Thomas et al. Geology 2002), and a comet impact event (Schaller et al. Science 2016). Here we present new data across the PETM from the Virginia DEQ-USGS Surprise Hill (SH) core, Northumberland Co., VA. We analyzed the Marlboro Clay, a thick, kaolinite-rich clay unit that marks the initiation of the PETM in the mid-Atlantic Coastal Plain of North America, as well as units above and below it. Bulk sediment records a δ13C excursion of approximately -5‰ across the CIE, while benthic foraminifera (Cibicidoides spp.) record a synchronous excursion of approximately -4.5‰. These results are consistent with other records from the New Jersey Coastal Plain (Makarova et al. Paleoceanography 2017). The excursion coincides with an increase in magnetic susceptibility, a decrease in bulk CaCO3 content, and a ~2.5‰ decrease of δ18O in both the bulk sediment and the benthic foraminifera of the SH core. Pb isotope analyses of the sediment fractions indicate a unique provenance make-up for the Marlboro Clay. The results of the study thus indicate that the PETM Marlboro Clay was not generated simply by intensified weathering of the same source area as the underlying Aquia Formation and overlying Nanjemoy Formation. Any hypothesis that aims to explain the mechanism that triggered the PETM must also account for the observed distinct provenance make-up of the Marlboro Clay.

  1. Carbonic Anhydrase: An Efficient Enzyme with Possible Global Implications

    Directory of Open Access Journals (Sweden)

    Christopher D. Boone

    2013-01-01

    Full Text Available As the global atmospheric emissions of carbon dioxide (CO2) and other greenhouse gases continue to grow to record-setting levels, so do the demands for an efficient and inexpensive carbon sequestration system. Concurrently, the first-world dependence on crude oil and natural gas provokes concerns about long-term availability and emphasizes the need for alternative fuel sources. At the forefront of both of these research areas is a family of enzymes known as the carbonic anhydrases (CAs), which reversibly catalyze the hydration of CO2 into bicarbonate. CAs are among the fastest enzymes known, with a maximum catalytic efficiency approaching the diffusion limit of 10⁸ M⁻¹s⁻¹. As such, CAs are being utilized in various industrial and research settings to help lower CO2 atmospheric emissions and promote biofuel production. This review will highlight some of the recent accomplishments in these areas along with a discussion of their current limitations.

  2. Hydraulic limits on maximum plant transpiration and the emergence of the safety-efficiency trade-off.

    Science.gov (United States)

    Manzoni, Stefano; Vico, Giulia; Katul, Gabriel; Palmroth, Sari; Jackson, Robert B; Porporato, Amilcare

    2013-04-01

    Soil and plant hydraulics constrain ecosystem productivity by setting physical limits to water transport and hence carbon uptake by leaves. While more negative xylem water potentials provide a larger driving force for water transport, they also cause cavitation that limits hydraulic conductivity. An optimum balance between driving force and cavitation occurs at intermediate water potentials, thus defining the maximum transpiration rate the xylem can sustain (denoted as Emax). The presence of this maximum raises the question as to whether plants regulate transpiration through stomata to function near Emax. To address this question, we calculated Emax across plant functional types and climates using a hydraulic model and a global database of plant hydraulic traits. The predicted Emax compared well with measured peak transpiration across plant sizes and growth conditions (R = 0.86), consistent with a safety-efficiency trade-off in plant xylem. Stomatal conductance allows maximum transpiration rates despite partial cavitation in the xylem, thereby suggesting coordination between stomatal regulation and xylem hydraulic characteristics. © 2013 The Authors. New Phytologist © 2013 New Phytologist Trust.
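
    The database and hydraulic model behind the paper's Emax are not given in the abstract; the sketch below only illustrates one common way such a maximum arises, namely as the saturation value of a hydraulic supply function built from a vulnerability curve. The exponential-sigmoidal curve, its parameters and the soil water potential are invented placeholders.

        import numpy as np

        K_MAX = 5.0       # mmol m-2 s-1 MPa-1, maximum leaf-specific conductance (assumed)
        B, C = 2.0, 3.0   # vulnerability-curve shape parameters (assumed)
        PSI_SOIL = -0.5   # MPa, soil water potential (assumed)

        def conductance(psi):
            # Hydraulic conductance declining as water potential becomes more negative.
            return K_MAX * np.exp(-(np.abs(psi) / B) ** C)

        def supply(psi_leaf, n=2000):
            # Transpiration the soil-xylem path can supply at a given leaf water potential:
            # E(psi_leaf) = integral of k(psi) d(psi) from psi_leaf up to psi_soil.
            psi = np.linspace(psi_leaf, PSI_SOIL, n)
            k = conductance(psi)
            return float(np.sum(0.5 * (k[1:] + k[:-1]) * np.diff(psi)))

        e_max = supply(-10.0)                         # the supply curve saturates: this is E_max
        psi_grid = np.linspace(-10.0, PSI_SOIL, 400)
        e_grid = np.array([supply(p) for p in psi_grid])
        psi_95 = psi_grid[e_grid >= 0.95 * e_max].max()   # least-negative psi giving 95% of E_max
        print(f"E_max ~ {e_max:.2f} mmol m-2 s-1, ~95% of it reached by psi_leaf ~ {psi_95:.2f} MPa")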

  3. Spatio-Temporal Convergence of Maximum Daily Light-Use Efficiency Based on Radiation Absorption by Canopy Chlorophyll

    Science.gov (United States)

    Zhang, Yao; Xiao, Xiangming; Wolf, Sebastian; Wu, Jin; Wu, Xiaocui; Gioli, Beniamino; Wohlfahrt, Georg; Cescatti, Alessandro; van der Tol, Christiaan; Zhou, Sha; Gough, Christopher M.; Gentine, Pierre; Zhang, Yongguang; Steinbrecher, Rainer; Ardö, Jonas

    2018-04-01

    Light-use efficiency (LUE), which quantifies the plants' efficiency in utilizing solar radiation for photosynthetic carbon fixation, is an important factor for gross primary production estimation. Here we use satellite-based solar-induced chlorophyll fluorescence as a proxy for photosynthetically active radiation absorbed by chlorophyll (APARchl) and derive an estimation of the fraction of APARchl (fPARchl) from four remotely sensed vegetation indicators. By comparing maximum LUE estimated at different scales from 127 eddy flux sites, we found that the maximum daily LUE based on PAR absorption by canopy chlorophyll (ɛmaxchl), unlike other expressions of LUE, tends to converge across biome types. The photosynthetic seasonality in tropical forests can also be tracked by the change of fPARchl, suggesting the corresponding ɛmaxchl to have less seasonal variation. This spatio-temporal convergence of LUE derived from fPARchl can be used to build simple but robust gross primary production models and to better constrain process-based models.

  4. An overview of the report: Correlation between carcinogenic potency and the maximum tolerated dose: Implications for risk assessment

    International Nuclear Information System (INIS)

    Krewski, D.; Gaylor, D.W.; Soms, A.P.; Szyszkowicz, M.

    1993-01-01

    Current practice in carcinogen bioassay calls for exposure of experimental animals at doses up to and including the maximum tolerated dose (MTD). Such studies have been used to compute measures of carcinogenic potency such as the TD50 as well as unit risk factors such as q1 for predicting low-dose risks. Recent studies have indicated that these measures of carcinogenic potency are highly correlated with the MTD. Carcinogenic potency has also been shown to be correlated with indicators of mutagenicity and toxicity. Correlation of the MTDs for rats and mice implies a corresponding correlation in TD50 values for these two species. The implications of these results for cancer risk assessment are examined in light of the large variation in potency among chemicals known to induce tumors in rodents. 119 refs., 2 figs., 4 tabs

  5. Achieving energy efficiency in restructured markets, implications for IRP

    International Nuclear Information System (INIS)

    Giraldo, J.M.M.

    1997-01-01

    The shift from the vertically integrated model to the new unbundled models of organization, aimed at a 'new competitive' Electricity Supply Industry, brings new subjects into IRP (Integrated Resource Planning) studies. Centralized decision-making on new capacity additions is being replaced by market mechanisms, changing the implementation of IRP with regard to both the construction of new capacity and DSM. The implication for DSM is that, without a central planning process, it will not be possible to carry out IRP. Some countries are implementing policies intended to create a more favourable attitude towards energy efficiency in products and services. Conventional Demand-Side Management programmes are being redesigned as added-value services to conserve or create market share through Market Transformation mechanisms, such as Technology Procurement. (author)

  6. Efficient reliability analysis of structures with the rotational quasi-symmetric point- and the maximum entropy methods

    Science.gov (United States)

    Xu, Jun; Dang, Chao; Kong, Fan

    2017-10-01

    This paper presents a new method for efficient structural reliability analysis. In this method, a rotational quasi-symmetric point method (RQ-SPM) is proposed for evaluating the fractional moments of the performance function. Then, the derivation of the performance function's probability density function (PDF) is carried out based on the maximum entropy method in which constraints are specified in terms of fractional moments. In this regard, the probability of failure can be obtained by a simple integral over the performance function's PDF. Six examples, including a finite element-based reliability analysis and a dynamic system with strong nonlinearity, are used to illustrate the efficacy of the proposed method. All the computed results are compared with those by Monte Carlo simulation (MCS). It is found that the proposed method can provide very accurate results with low computational effort.

  7. Maximum Exergetic Efficiency Operation of a Solar Powered H2O-LiBr Absorption Cooling System

    Directory of Open Access Journals (Sweden)

    Camelia Stanciu

    2017-12-01

    Full Text Available A solar driven cooling system consisting of a single-effect H2O-LiBr absorption cooling module (ACS), a parabolic trough collector (PTC), and a storage tank (ST) module is analyzed during one full day of operation. Pressurized water is used to transfer heat from the PTC to the ST and to feed the ACS desorber. The system is constrained to operate at the maximum ACS exergetic efficiency, under a time-dependent cooling load computed for 15 July for a one-storey house located near Bucharest, Romania. To set up the solar assembly, two commercial PTCs were selected, namely the PT1-IST and the PTC 1800 Solitem, and a single-unit ST was initially considered. The mathematical model, relying on the energy balance equations, was coded in the Engineering Equation Solver (EES) environment. The solar data were obtained from the Meteonorm database. The numerical simulations proved that the system cannot cover the imposed cooling load all day long, due to the large variation of the water temperature inside the ST. By splitting the ST into two units, the results revealed that the PT1-IST collector only drives the ACS between 9 am and 4:30 pm, while the PTC 1800 covers the entire cooling period (9 am–6 pm) for optimum ST capacities of 90 kg/90 kg and 90 kg/140 kg, respectively.

  8. Metabolic expenditures of lunge feeding rorquals across scale: implications for the evolution of filter feeding and the limits to maximum body size.

    Directory of Open Access Journals (Sweden)

    Jean Potvin

    Full Text Available Bulk-filter feeding is an energetically efficient strategy for resource acquisition and assimilation, and facilitates the maintenance of extreme body size as exemplified by baleen whales (Mysticeti) and multiple lineages of bony and cartilaginous fishes. Among mysticetes, rorqual whales (Balaenopteridae) exhibit an intermittent ram filter feeding mode, lunge feeding, which requires the abandonment of body-streamlining in favor of a high-drag, mouth-open configuration aimed at engulfing a very large amount of prey-laden water. Particularly while lunge feeding on krill (the most widespread prey preference among rorquals), the effort required during engulfment involves short bouts of high-intensity muscle activity that demand high metabolic output. We used computational modeling together with morphological and kinematic data on humpback (Megaptera novaeangliae), fin (Balaenoptera physalus), blue (Balaenoptera musculus) and minke (Balaenoptera acutorostrata) whales to estimate engulfment power output in comparison with standard metrics of metabolic rate. The simulations reveal that engulfment metabolism increases across the full body size range of the larger rorqual species to nearly 50 times the basal metabolic rate of terrestrial mammals of the same body mass. Moreover, they suggest that the metabolism of the largest body sizes runs with significant oxygen deficits during mouth opening, namely, 20% over maximum VO2 at the size of the largest blue whales, thus requiring significant contributions from anaerobic catabolism during a lunge and significant recovery after a lunge. Our analyses show that engulfment metabolism is also significantly lower for smaller adults, typically one-tenth to one-half of VO2max. These results not only point to a physiological limit on maximum body size in this lineage, but also have major implications for the ontogeny of extant rorquals as well as the evolutionary pathways used by ancestral toothed whales to transition from hunting…

  9. Spatial-temporal changes of maximum and minimum temperatures in the Wei River Basin, China: Changing patterns, causes and implications

    Science.gov (United States)

    Liu, Saiyan; Huang, Shengzhi; Xie, Yangyang; Huang, Qiang; Leng, Guoyong; Hou, Beibei; Zhang, Ying; Wei, Xiu

    2018-05-01

    Due to the important role of temperature in the global climate system and energy cycles, it is important to investigate the spatial-temporal change patterns, causes and implications of annual maximum (Tmax) and minimum (Tmin) temperatures. In this study, the Cloud model was adopted to fully and accurately analyze the changing patterns of annual Tmax and Tmin from 1958 to 2008 by quantifying their mean, uniformity, and stability in the Wei River Basin (WRB), a typical arid and semi-arid region in China. Additionally, cross wavelet analysis was applied to explore the correlations among annual Tmax and Tmin and the yearly sunspot number, Arctic Oscillation, Pacific Decadal Oscillation, and soil moisture, with the aim of determining possible causes of annual Tmax and Tmin variations. Furthermore, temperature-related impacts on vegetation cover and precipitation extremes were also examined. Results indicated that: (1) the WRB is characterized by increasing trends in annual Tmax and Tmin, with a more evident increasing trend in annual Tmin, which has a higher dispersion degree and is less uniform and stable than annual Tmax; (2) the asymmetric variations of Tmax and Tmin can be generally explained by the stronger effects of solar activity (primarily), large-scale atmospheric circulation patterns, and soil moisture on annual Tmin than on annual Tmax; and (3) increasing annual Tmax and Tmin have exerted strong influences on local precipitation extremes, in terms of their duration, intensity, and frequency in the WRB. This study presents new analyses of Tmax and Tmin in the WRB, and the findings may help guide regional agricultural production and water resources management.
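
    The Cloud-model and cross-wavelet analyses used in the paper go beyond a short snippet; as a minimal illustration of the kind of trend check underlying statement (1), the sketch below fits a least-squares slope and a simple Mann-Kendall statistic to a synthetic annual Tmin series. The data are randomly generated placeholders, not the Wei River Basin records.

        import numpy as np

        rng = np.random.default_rng(42)
        years = np.arange(1958, 2009)
        tmin = 2.0 + 0.03 * (years - years[0]) + rng.normal(0, 0.6, years.size)  # synthetic degC

        # Least-squares linear trend (degC per decade)
        slope = np.polyfit(years, tmin, 1)[0]
        print(f"linear trend: {10 * slope:.2f} degC per decade")

        # Mann-Kendall S statistic and its normal approximation (no tie correction)
        n = len(tmin)
        s = sum(np.sign(tmin[j] - tmin[i]) for i in range(n) for j in range(i + 1, n))
        var_s = n * (n - 1) * (2 * n + 5) / 18.0
        z = (s - np.sign(s)) / np.sqrt(var_s)
        print(f"Mann-Kendall S = {s:.0f}, Z = {z:.2f}  (|Z| > 1.96 -> significant at 5%)")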

  10. Classic maximum entropy recovery of the average joint distribution of apparent FRET efficiency and fluorescence photons for single-molecule burst measurements.

    Science.gov (United States)

    DeVore, Matthew S; Gull, Stephen F; Johnson, Carey K

    2012-04-05

    We describe a method for analysis of single-molecule Förster resonance energy transfer (FRET) burst measurements using classic maximum entropy. Classic maximum entropy determines the Bayesian inference for the joint probability describing the total fluorescence photons and the apparent FRET efficiency. The method was tested with simulated data and then with DNA labeled with fluorescent dyes. The most probable joint distribution can be marginalized to obtain both the overall distribution of fluorescence photons and the apparent FRET efficiency distribution. This method proves to be ideal for determining the distance distribution of FRET-labeled biomolecules, and it successfully predicts the shape of the recovered distributions.

  11. Universal Expression of Efficiency at Maximum Power: A Quantum-Mechanical Brayton Engine Working with a Single Particle Confined in a Power-Law Trap

    International Nuclear Information System (INIS)

    Ye Zhuo-Lin; Li Wei-Sheng; Lai Yi-Ming; He Ji-Zhou; Wang Jian-Hui

    2015-01-01

    We propose a quantum-mechanical Brayton engine model that works between two superposed states, employing a single particle confined in an arbitrary power-law trap as the working substance. Applying the superposition principle, we obtain the explicit expressions of the power and efficiency, and find that the efficiency at maximum power is bounded from above by the function: η_+ = θ/(θ + 1), with θ being a potential-dependent exponent. (paper)

  12. Highly efficient maximum power point tracking using DC-DC coupled inductor single-ended primary inductance converter for photovoltaic power systems

    Science.gov (United States)

    Quamruzzaman, M.; Mohammad, Nur; Matin, M. A.; Alam, M. R.

    2016-10-01

    Solar photovoltaics (PVs) have nonlinear voltage-current characteristics, with a distinct maximum power point (MPP) depending on factors such as solar irradiance and operating temperature. To extract maximum power from the PV array at any environmental condition, DC-DC converters are usually used as MPP trackers. This paper presents the performance analysis of a coupled inductor single-ended primary inductance converter for maximum power point tracking (MPPT) in a PV system. A detailed model of the system has been designed and developed in MATLAB/Simulink. The performance evaluation has been conducted on the basis of stability, current ripple reduction and efficiency at different operating conditions. Simulation results show considerable ripple reduction in the input and output currents of the converter. Both the MPPT and converter efficiencies are significantly improved. The obtained simulation results validate the effectiveness and suitability of the converter model in MPPT and show reasonable agreement with the theoretical analysis.
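
    The abstract does not state which tracking law drives the SEPIC duty cycle; the snippet below is a generic perturb-and-observe MPPT loop of the kind commonly paired with such converters, sketched with an assumed toy panel curve and step size, not the authors' MATLAB/Simulink model.

        def pv_power(v, g=1000.0):
            # Toy PV curve: power vs. operating voltage for irradiance g (W/m^2);
            # placeholder for measured panel current x voltage.
            v_oc, i_sc = 36.0, 8.0 * g / 1000.0
            i = max(i_sc * (1.0 - (v / v_oc) ** 9), 0.0)   # crude single-diode-like shape
            return v * i

        def perturb_and_observe(v0=20.0, dv=0.2, steps=200):
            v, p_prev, direction = v0, pv_power(v0), +1
            for _ in range(steps):
                v += direction * dv                 # perturb the operating voltage
                p = pv_power(v)
                if p < p_prev:                      # observe: power fell, reverse direction
                    direction = -direction
                p_prev = p
            return v, p_prev

        v_mpp, p_mpp = perturb_and_observe()
        print(f"settled near V = {v_mpp:.1f} V, P = {p_mpp:.1f} W")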

  13. Implications of energy efficiency measures in wheat production

    DEFF Research Database (Denmark)

    Meyer-Aurich, Andreas; Ziegler, T.; Scholz, L.

    The economic and environmental effects of energy-saving measures were analyzed for a typical wheat production system in Germany. The introduction of precision farming, reduced nitrogen fertilization and improved crop-drying technologies proved to be efficient measures for enhancing energy efficiency...

  14. Efficiency improvement opportunities in TVs: Implications for market transformation programs

    International Nuclear Information System (INIS)

    Park, Won Young; Phadke, Amol; Shah, Nihar; Letschert, Virginie

    2013-01-01

    Televisions (TVs) account for a significant portion of residential electricity consumption and global TV shipments are expected to continue to increase. We assess the market trends in the energy efficiency of TVs that are likely to occur without any additional policy intervention and estimate that TV efficiency will likely improve by over 60% by 2015 with savings potential of 45 terawatt-hours [TW h] per year in 2015, compared to today’s technology. We discuss various energy-efficiency improvement options and evaluate the cost effectiveness of three of them. At least one of these options improves efficiency by at least 20% cost effectively beyond ongoing market trends. We provide insights for policies and programs that can be used to accelerate the adoption of efficient technologies to further capture global energy savings potential from TVs which we estimate to be up to 23 TW h per year in 2015. - Highlights: • We analyze the impact of the recent TV market transition on TV energy consumption. • We review TV technology options that could be realized in the near future. • We assess the cost-effectiveness of selected energy-efficiency improvement options. • We estimate global electricity savings potential in selected scenarios. • We discuss possible directions of market transformation programs

  15. Potentials and policy implications of energy and material efficiency improvement

    Energy Technology Data Exchange (ETDEWEB)

    Worrell, Ernst; Levine, Mark; Price, Lynn; Martin, Nathan; van den Broek, Richard; Block, Kornelis

    1997-01-01

    There is a growing awareness of the serious problems associated with the provision of sufficient energy to meet human needs and to fuel economic growth world-wide. This has pointed to the need for energy and material efficiency, which would reduce air, water and thermal pollution, as well as waste production. Increasing energy and material efficiency also have the benefits of increased employment, improved balance of imports and exports, increased security of energy supply, and adopting environmentally advantageous energy supply. A large potential exists for energy savings through energy and material efficiency improvements. Technologies are not now, nor will they be, in the foreseeable future, the limiting factors with regard to continuing energy efficiency improvements. There are serious barriers to energy efficiency improvement, including unwillingness to invest, lack of available and accessible information, economic disincentives and organizational barriers. A wide range of policy instruments, as well as innovative approaches have been tried in some countries in order to achieve the desired energy efficiency approaches. These include: regulation and guidelines; economic instruments and incentives; voluntary agreements and actions, information, education and training; and research, development and demonstration. An area that requires particular attention is that of improved international co-operation to develop policy instruments and technologies to meet the needs of developing countries. Material efficiency has not received the attention that it deserves. Consequently, there is a dearth of data on the qualities and quantities for final consumption, thus, making it difficult to formulate policies. Available data, however, suggest that there is a large potential for improved use of many materials in industrialized countries.

  16. Effort and the Cycle : Cyclical Implications of Efficiency Wages

    NARCIS (Netherlands)

    Uhlig, H.F.H.V.S.; Xu, Y.

    1996-01-01

    A number of authors have proposed theories of efficiency wages to explain the behaviour of aggregate labor markets. According to these theories, firms do not adjust wages downwards despite available unemployed job seekers, because lower wages would induce hired workers to shirk more often, which in

  17. Quantitative limitations to photosynthesis in K deficient sunflower and their implications on water-use efficiency.

    Science.gov (United States)

    Jákli, Bálint; Tavakol, Ershad; Tränkner, Merle; Senbayram, Mehmet; Dittert, Klaus

    2017-02-01

    Potassium (K) is crucial for crop growth and is strongly related to stress tolerance and water-use efficiency (WUE). A major physiological effect of K deficiency is the inhibition of net CO2 assimilation (AN) during photosynthesis. Whether this reduction originates from limitations either to photochemical energy conversion or biochemical CO2 fixation, or from a limitation to CO2 diffusion through stomata and the leaf mesophyll, is debated. In this study, limitations to photosynthetic carbon gain of sunflower (Helianthus annuus L.) under K deficiency and PEG-induced water deficit were quantified and their implications for plant- and leaf-scale WUE (WUEP, WUEL) were evaluated. Results show that neither maximum quantum use efficiency (Fv/Fm) nor in-vivo RubisCo activity was directly affected by K deficiency and that the observed impairment of AN was primarily due to decreased CO2 mesophyll conductance (gm). K deficiency additionally impaired leaf area development which, together with reduced AN, resulted in inhibition of plant growth and a reduction of WUEP. In contrast, WUEL was not affected by K supply, which indicated no inhibition of stomatal control. PEG stress further impeded AN through stomatal closure and resulted in enhanced WUEL and high oxidative stress. It can be concluded from this study that reduction of gm is a major response of leaves to K deficiency, possibly due to changes in leaf anatomy, which negatively affects AN and contributes to the typical symptoms such as oxidative stress, growth inhibition and reduced WUEP. Copyright © 2016 Elsevier GmbH. All rights reserved.

  18. Full on-chip and area-efficient CMOS LDO with zero to maximum load stability using adaptive frequency compensation

    Energy Technology Data Exchange (ETDEWEB)

    Ma Haifeng; Zhou Feng, E-mail: fengzhou@fudan.edu.c [State Key Laboratory of ASIC and System, Fudan University, Shanghai 201203 (China)

    2010-01-15

    A full on-chip and area-efficient low-dropout linear regulator (LDO) is presented. By using the proposed adaptive frequency compensation (AFC) technique, full on-chip integration is achieved without compromising the LDO's stability in the full output current range. Meanwhile, the use of a compact pass transistor (the compact pass transistor serves as the gain fast roll-off output stage in the AFC technique) has enabled the LDO to be very area-efficient. The proposed LDO is implemented in standard 0.35 μm CMOS technology and occupies an active area as small as 220 x 320 μm², which is a reduction to 58% compared to state-of-the-art designs using technologies with the same feature size. Measurement results show that the LDO can deliver 0-60 mA output current with 54 μA quiescent current consumption and the regulated output voltage is 1.8 V with an input voltage range from 2 to 3.3 V. (semiconductor integrated circuits)

  19. Full on-chip and area-efficient CMOS LDO with zero to maximum load stability using adaptive frequency compensation

    International Nuclear Information System (INIS)

    Ma Haifeng; Zhou Feng

    2010-01-01

    A full on-chip and area-efficient low-dropout linear regulator (LDO) is presented. By using the proposed adaptive frequency compensation (AFC) technique, full on-chip integration is achieved without compromising the LDO's stability in the full output current range. Meanwhile, the use of a compact pass transistor (the compact pass transistor serves as the gain fast roll-off output stage in the AFC technique) has enabled the LDO to be very area-efficient. The proposed LDO is implemented in standard 0.35 μm CMOS technology and occupies an active area as small as 220 x 320 μm², which is a reduction to 58% compared to state-of-the-art designs using technologies with the same feature size. Measurement results show that the LDO can deliver 0-60 mA output current with 54 μA quiescent current consumption and the regulated output voltage is 1.8 V with an input voltage range from 2 to 3.3 V. (semiconductor integrated circuits)

  20. Taxing Stock Options: Efficiency, Fairness and Revenue Implications

    Directory of Open Access Journals (Sweden)

    Jack M. Mintz

    2015-10-01

    Full Text Available The federal Liberals and the NDP are right about this much: There is a more sensible way to tax the stock options that are granted as compensation by corporations than the approach the federal government takes now. But both parties are wrong about how much revenue an appropriate change in current tax policy will add to the treasury. Far from the half-billion dollars or more that both parties claim they will raise in federal tax revenue by changing the taxation of stock options, the appropriate reform will virtually raise no revenue. It could actually result in marginally lower tax revenue. As it stands, stock options are treated differently than salary and other forms of cash compensation when it comes to taxing an employee or director, in that they are subject to only half taxation, similar to capital gains. They are also treated differently than cash compensation for the corporation granting the options, in that they cannot be deducted from corporate income tax. The federal NDP and Liberals have both accepted the growing criticism, which only intensified in the aftermath of the 2008 financial crisis, that the lower tax rate is an unfair tax break for those employees who receive stock options. Both parties have proposed to change that, leaving an exemption for startup companies only, with the NDP proposing full personal taxation for all stock options except for start-up companies and the Liberals proposing it for options-based compensation exceeding $100,000. Treating stock options the same as cash compensation would indeed be more tax efficient, reducing the distortionary effect that can influence company compensation packages to give more weight to stock options and less to cash than they might otherwise. But the only way to ensure that efficiency is by treating both the personal tax side of the benefit, and the corporate tax side of the benefit, in the same way as other employee compensation. That is, applying full taxation to the recipient

  1. Measurement of the Maximum Frequency of Electroglottographic Fluctuations in the Expiration Phase of Volitional Cough as a Functional Test for Cough Efficiency.

    Science.gov (United States)

    Iwahashi, Toshihiko; Ogawa, Makoto; Hosokawa, Kiyohito; Kato, Chieri; Inohara, Hidenori

    2017-10-01

    The hypotheses of the present study were that the maximum frequency of fluctuation of electroglottographic (EGG) signals in the expiration phase of volitional cough (VC) reflects cough efficiency, and that this EGG parameter is affected by impaired laryngeal closure, expiratory effort strength, and gender. Twenty normal healthy adults and 20 patients diagnosed with unilateral vocal fold paralysis (UVFP) participated; each participant was fitted with EGG electrodes on the neck, had a transnasal laryngo-fiberscope inserted, and was asked to perform weak/strong VC tasks while EGG signals and high-speed digital images (HSDIs) of the larynx were recorded. The maximum frequency was calculated in the EGG fluctuation region coinciding with vigorous vocal fold vibration in the laryngeal HSDIs. In addition, each participant underwent spirometry for measurement of three aerodynamic parameters, including peak expiratory air flow (PEAF), during the weak/strong VC tasks. Significant differences were found in both maximum EGG frequency and PEAF between the healthy and UVFP groups and between the weak and strong VC tasks. Among the three cough aerodynamic parameters, PEAF showed the highest positive correlation with the maximum EGG frequency. The correlation coefficients between the maximum EGG frequency and simultaneously recorded PEAF were 0.574 for the whole group, and 0.782/0.717/0.823/0.688 for the male/female/male-healthy/male-UVFP subgroups, respectively. Consequently, the maximum EGG frequency measured in the expiration phase of VC was shown to reflect the velocity of expiratory airflow to some extent and is suggested to be affected by vocal fold physical properties, glottal closure condition, and expiratory function.
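
    The paper's signal-processing pipeline is not specified in the abstract; as a rough illustration of extracting a maximum fluctuation frequency from an expiration-phase EGG trace, the sketch below estimates a dominant frequency in short sliding frames of a synthetic chirp and reports the largest value. The sampling rate, frame length and the synthetic signal are assumptions.

        import numpy as np

        FS = 10_000.0                      # Hz, assumed EGG sampling rate
        rng = np.random.default_rng(1)
        t = np.arange(0.0, 0.3, 1.0 / FS)  # 300 ms expiration-phase window (assumed)
        # Synthetic EGG: vibration frequency rising from 120 Hz to 250 Hz during the cough
        f_inst = 120.0 + (250.0 - 120.0) * t / t[-1]
        egg = np.sin(2 * np.pi * np.cumsum(f_inst) / FS) + 0.05 * rng.normal(size=t.size)

        frame = int(0.03 * FS)             # 30 ms analysis frames
        peaks = []
        for start in range(0, egg.size - frame, frame // 2):
            seg = egg[start:start + frame] * np.hanning(frame)
            spec = np.abs(np.fft.rfft(seg))
            freqs = np.fft.rfftfreq(frame, d=1.0 / FS)
            peaks.append(freqs[np.argmax(spec[1:]) + 1])   # skip the DC bin
        print(f"maximum fluctuation frequency across frames ~ {max(peaks):.0f} Hz")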

  2. Sediment source, turbidity maximum, and implications for mud exchange between channel and mangroves in an Amazonian estuary

    Science.gov (United States)

    Asp, Nils Edvin; Gomes, Vando José Costa; Ogston, Andrea; Borges, José Carlos Corrêa; Nittrouer, Charles Albert

    2016-02-01

    The tide-dominated eastern sector of the Brazilian Amazonian coast includes large mangrove areas and several estuaries, including the estuary associated with the Urumajó River. There, the dynamics of suspended sediments and delivery mechanisms for mud to the tidal flats and mangroves are complex and were investigated in this study. Four longitudinal measuring campaigns were carried out, encompassing spring/neap tides and dry/rainy seasons. During spring tides, water levels were measured simultaneously at 5 points along the estuary. Currents, salinity, and suspended sediment concentrations (SSCs) were measured over the tidal cycle in a cross section at the middle sector of the estuary. Results show a marked turbidity maximum zone (TMZ) during the rainy season, with a 4-km upstream displacement from neap to spring tide. During dry season, the TMZ was conspicuous only during neap tide and dislocated about 5 km upstream and was substantially less apparent in comparison to that observed during rainy season. The results show that mud is being concentrated in the channel associated with the TMZ especially during the rainy season. At this time, a substantial amount of the mud is washed out from mangroves to the estuarine channel and hydrodynamic/salinity conditions for TMZ formation are optimal. As expected, transport to the mangrove flats is most effective during spring tide and substantially reduced at neap tide, when mangroves are not being flooded. During the dry season, mud is resuspended from the bed in the TMZ sector and is a source of sediment delivered to the tidal flats and mangroves. The seasonal variation of the sediments on the seabed is in agreement with the variation of suspended sediments as well.

  3. Experimental Determination of Operating and Maximum Power Transfer Efficiencies at Resonant Frequency in a Wireless Power Transfer System using PP Network Topology with Top Coupling

    Science.gov (United States)

    Ramachandran, Hema; Pillai, K. P. P.; Bindu, G. R.

    2017-08-01

    A two-port network model for a wireless power transfer system taking into account the distributed capacitances using PP network topology with top coupling is developed in this work. The operating and maximum power transfer efficiencies are determined analytically in terms of S-parameters. The system performance predicted by the model is verified experimentally with a high-power household lighting load (230 V, 100 W) tested at two forced resonant frequencies, namely 600 kHz and 1.2 MHz. The experimental results are in close agreement with the proposed model.
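
    The paper's analytical expressions are specific to the PP topology; purely as a generic illustration, the sketch below evaluates two textbook two-port quantities sometimes used in this context: an operating efficiency defined as |S21|^2/(1 - |S11|^2) and the maximum available gain of an unconditionally stable two-port. The S-parameter values are hypothetical and not taken from the paper.

      # Textbook two-port efficiency measures from S-parameters (illustrative
      # stand-ins; the paper derives its own PP-topology expressions).
      import numpy as np

      def operating_efficiency(s11, s21):
          """Power delivered to the load over power accepted at the input port."""
          return abs(s21) ** 2 / (1.0 - abs(s11) ** 2)

      def maximum_efficiency(s11, s12, s21, s22):
          """Maximum available gain of an unconditionally stable two-port."""
          delta = s11 * s22 - s12 * s21
          k = (1 - abs(s11) ** 2 - abs(s22) ** 2 + abs(delta) ** 2) / (2 * abs(s12 * s21))
          return abs(s21) / abs(s12) * (k - np.sqrt(k ** 2 - 1))

      s11, s22 = 0.60, 0.60        # hypothetical measurements at resonance
      s21, s12 = 0.30, 0.30
      print(operating_efficiency(s11, s21), maximum_efficiency(s11, s12, s21, s22))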

  4. Efeito do hexazinone isolado e em mistura na eficiência fotossintética de Panicum maximum Effect of hexazinone applied alone and in combination on the photosynthetic efficiency of Panicum maximum

    Directory of Open Access Journals (Sweden)

    M. Girotto

    2012-06-01

    Full Text Available This study aimed to evaluate the speed and intensity of action of hexazinone applied alone and in mixture with other photosystem II inhibitors, through the photosynthetic efficiency of Panicum maximum in post-emergence. The assay comprised the following treatments: hexazinone (250 g ha-1), tebuthiuron (1.0 kg ha-1), hexazinone + tebuthiuron (125 g ha-1 + 0.5 kg ha-1), diuron (2,400 g ha-1), hexazinone + diuron (125 + 1,200 g ha-1), metribuzin (1,440 g ha-1), hexazinone + metribuzin (125 + 720 g ha-1), and an untreated control. The experiment was set up in a completely randomized design with four replications. After application of the treatments, the plants were transferred to a greenhouse under controlled temperature and humidity conditions, where they remained for the experimental period, during which the following evaluations were performed: electron transport rate and visual assessment of intoxication. Fluorometer readings were taken 1, 2, 6, 24, 48, 72, 120, and 168 hours after application, and visual evaluations were made three and seven days after application. The results showed differences among treatments, notably for diuron, which reduced electron transport more slowly than the other herbicides and, in mixture with hexazinone, showed a synergistic effect. The fluorometer revealed early intoxication in P. maximum plants after the application of photosystem II-inhibiting herbicides alone and in mixture.

  5. A Flexible Maximum Power Point Tracking Control Strategy Considering Both Conversion Efficiency and Power Fluctuation for Large-inertia Wind Turbines

    Directory of Open Access Journals (Sweden)

    Hongmin Meng

    2017-07-01

    Full Text Available In wind turbine control, maximum power point tracking (MPPT) control is the main control mode for partial-load regimes. Improving energy conversion efficiency and smoothing output power are both important control objectives in the partial-load regime. However, on the one hand, low power fluctuation tends to come at the cost of conversion efficiency; on the other hand, enhancing efficiency may increase output power fluctuation. Thus, the two objectives are contradictory and difficult to balance. This paper proposes a flexible MPPT control framework to improve the performance of both conversion efficiency and power smoothing by adaptively compensating the torque reference value. The compensation was determined by a proposed model predictive control (MPC) method with dynamic weights in the cost function, which improved control performance. The computational burden of the MPC solver was reduced by transforming the cost function representation. Theoretical analysis demonstrated good stability and robustness. Simulation results showed that the proposed method not only kept efficiency at a high level but also reduced power fluctuations as much as possible. Therefore, the proposed method could improve wind farm profits and power grid reliability.

  6. Improving efficiency of two-type maximum power point tracking methods of tip-speed ratio and optimum torque in wind turbine system using a quantum neural network

    International Nuclear Information System (INIS)

    Ganjefar, Soheil; Ghassemi, Ali Akbar; Ahmadi, Mohamad Mehdi

    2014-01-01

    In this paper, a quantum neural network (QNN) is used as the controller in adaptive control structures to improve the efficiency of maximum power point tracking (MPPT) methods in the wind turbine system. For this purpose, direct and indirect adaptive control structures equipped with a QNN are used in the tip-speed ratio (TSR) and optimum torque (OT) MPPT methods. The proposed control schemes are evaluated on a battery-charging windmill system equipped with a permanent magnet synchronous generator (PMSG) under random wind speed, to demonstrate their superior effectiveness compared with a PID controller and a conventional neural network controller (CNNC). - Highlights: • Using a new control method to harvest maximum power from a wind energy system. • Using an adaptive control scheme based on a quantum neural network (QNN). • Improvement of the MPPT-TSR method by a direct adaptive control scheme based on the QNN. • Improvement of the MPPT-OT method by an indirect adaptive control scheme based on the QNN. • Using a windmill system based on a PMSG to evaluate the proposed control schemes
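
    The QNN-based adaptive controllers themselves are beyond a short sketch, but the two reference laws they track (TSR and OT MPPT) can be written compactly. The turbine parameters below are illustrative assumptions, not values from the paper.

      # Reference laws underlying the two MPPT methods named in the abstract:
      # TSR gives a rotor-speed reference from measured wind speed, OT gives a
      # torque reference from measured rotor speed. Parameters are illustrative.
      import math

      rho = 1.225          # air density, kg/m^3
      R = 2.0              # rotor radius, m (hypothetical small turbine)
      cp_max = 0.45        # maximum power coefficient (assumed)
      lambda_opt = 7.0     # optimum tip-speed ratio (assumed)

      def tsr_speed_reference(wind_speed):
          """TSR method: keep lambda = omega*R/v at its optimum value."""
          return lambda_opt * wind_speed / R          # rad/s

      def ot_torque_reference(rotor_speed):
          """OT method: T_ref = k_opt * omega^2 at the maximum power point."""
          k_opt = 0.5 * rho * math.pi * R ** 5 * cp_max / lambda_opt ** 3
          return k_opt * rotor_speed ** 2             # N*m

      print(tsr_speed_reference(8.0), ot_torque_reference(28.0))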

  7. Mental representation for action in the elderly: implications for movement efficiency and injury risk.

    Science.gov (United States)

    Gabbard, Carl

    2015-04-01

    Recent research findings indicate that with older adulthood, there are functional decrements in spatial cognition and, more specifically, in the ability to mentally represent and effectively plan motor actions. A typical finding is a significant over- or underestimation of one's actual physical abilities during movement planning, an error that has implications for movement efficiency and physical safety. A practical, daily life example is estimation of reachability, a situation that for the elderly may be linked with fall incidence. A strategy used to mentally represent action is the use of motor imagery, an ability that also declines with advancing age. This brief review highlights research findings on mental representation and motor imagery in the elderly and addresses the implications for improving movement efficiency and lowering the risk of movement-related injury. © The Author(s) 2013.

  8. Chemical and mechanical efficiencies of molecular motors and implications for motor mechanisms

    International Nuclear Information System (INIS)

    Wang Hongyun

    2005-01-01

    Molecular motors operate in an environment dominated by viscous friction and thermal fluctuations. The chemical reaction in a motor may produce an active force at the reaction site to directly move the motor forward. Alternatively a molecular motor may generate a unidirectional motion by rectifying thermal fluctuations using free energy barriers established in the chemical reaction. The reaction cycle has many occupancy states, each having a different effect on the motor motion. The average effect of the chemical reaction on the motor motion can be characterized by the motor potential profile. The biggest advantage of studying the motor potential profile is that it can be reconstructed from the time series of motor positions measured in single-molecule experiments. In this paper, we use the motor potential profile to express the Stokes efficiency as the product of the chemical efficiency and the mechanical efficiency. We show that both the chemical and mechanical efficiencies are bounded by 100% and, thus, are properly defined efficiencies. We discuss implications of high efficiencies for motor mechanisms: a mechanical efficiency close to 100% implies that the motor potential profile is close to a constant slope; a chemical efficiency close to 100% implies that (i) the chemical transitions are not slower than the mechanical motion and (ii) the equilibrium constant of each chemical transition is close to one

  9. Methodological differences behind energy statistics for steel production – Implications when monitoring energy efficiency

    International Nuclear Information System (INIS)

    Morfeldt, Johannes; Silveira, Semida

    2014-01-01

    Energy efficiency indicators used for evaluating industrial activities at the national level are often based on statistics reported in international databases. In the case of the Swedish iron and steel sector, energy consumption statistics published by Odyssee, Eurostat, the IEA (International Energy Agency), and the United Nations differ, resulting in diverging energy efficiency indicators. For certain years, the specific energy consumption for steel is twice as high if based on Odyssee statistics instead of statistics from the IEA. The analysis revealed that the assumptions behind the allocation of coal and coke used in blast furnaces as energy consumption or energy transformation are the major cause for these differences. Furthermore, the differences are also related to errors in the statistical data resulting from two different surveys that support the data. The allocation of coal and coke has implications when promoting resource as well as energy efficiency at the systems level. Eurostat's definition of energy consumption is more robust compared to the definitions proposed by other organisations. Nevertheless, additional data and improved energy efficiency indicators are needed to fully monitor the iron and steel sector's energy system and promote improvements towards a greener economy at large. - Highlights: • Energy statistics for the iron and steel sector diverge in international databases. • Varying methods have implications when monitoring energy and resource efficiency. • Allocation of blast furnaces as transformation activities is behind the differences. • Different statistical surveys and human error also contribute to diverging results

  10. Productive efficiency of public and private solid waste logistics and its implications for waste management policy

    Directory of Open Access Journals (Sweden)

    Daisuke Ichinose

    2013-03-01

    Full Text Available This paper measures the productive efficiency of municipal solid waste (MSW logistics by applying data envelopment analysis (DEA to cross-sectional data of prefectures in Japan. Either through public operations or by outsourcing to private waste collection operators, prefectural governments possess the fundamental authority over waste processing operations in Japan. Therefore, we estimate a multi-input multi-output production efficiency at the prefectural level via DEA, employing several different model settings. Our data classify the MSW into household solid waste (HSW and business solid waste (BSW collected by both private and public operators as separate outputs, while the numbers of trucks and workers used by private and public operators are used as inputs. The results consistently show that geographical characteristics, such as the number of inhabited remote islands, are relatively more dominant factors for determining inefficiency. While the implication that a minimum efficient scale is not achieved in these small islands is in line with the literature suggesting that waste logistics has increasing returns at the municipal level, our results indicate that waste collection efficiency in Japan is well described by CRS technology at the prefectural level. The results also show that prefectures with higher private-sector participation, measured in terms of HSW collection, are more efficient, whereas a higher private–labor ratio negatively affects efficiency. We also provide evidence that prefectures with inefficient MSW logistics have a higher tendency of suffering from the illegal dumping of industrial waste.
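
    As a hedged illustration of the method, the sketch below solves the textbook input-oriented, constant-returns-to-scale (CCR) DEA envelopment program with scipy. The three-prefecture input/output matrix is made up; the paper's own model settings (outputs split by HSW/BSW and by public/private operators) would simply change the columns.

      # Input-oriented CCR DEA: for each decision-making unit (DMU), minimise
      # theta subject to an input/output envelopment of the observed data.
      import numpy as np
      from scipy.optimize import linprog

      # rows = DMUs (e.g. prefectures); columns = inputs / outputs (made up)
      X = np.array([[10.0, 20.0], [ 8.0, 15.0], [12.0, 30.0]])     # trucks, workers
      Y = np.array([[100.0, 40.0], [90.0, 50.0], [120.0, 45.0]])   # HSW, BSW collected

      def ccr_efficiency(X, Y, j0):
          n, m = X.shape          # n DMUs, m inputs
          s = Y.shape[1]          # s outputs
          c = np.r_[1.0, np.zeros(n)]                    # minimise theta
          # inputs:  sum_j lambda_j * x_ij - theta * x_i,j0 <= 0
          A_in = np.c_[-X[j0].reshape(m, 1), X.T]
          # outputs: -sum_j lambda_j * y_rj <= -y_r,j0
          A_out = np.c_[np.zeros((s, 1)), -Y.T]
          A_ub = np.r_[A_in, A_out]
          b_ub = np.r_[np.zeros(m), -Y[j0]]
          bounds = [(0, None)] * (n + 1)                 # theta and lambdas >= 0
          res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
          return res.x[0]

      for j in range(X.shape[0]):
          print(f"DMU {j}: efficiency = {ccr_efficiency(X, Y, j):.3f}")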

  11. Airborne Hyperspectral Evaluation of Maximum Gross Photosynthesis, Gravimetric Water Content, and CO2 Uptake Efficiency of the Mer Bleue Ombrotrophic Peatland

    Directory of Open Access Journals (Sweden)

    J. Pablo Arroyo-Mora

    2018-04-01

    Full Text Available Peatlands cover a large area in Canada and globally (12% and 3% of the landmass, respectively). These ecosystems play an important role in climate regulation through the sequestration of carbon dioxide from, and the release of methane to, the atmosphere. Monitoring approaches, required to understand the response of peatlands to climate change at large spatial scales, are challenged by their unique vegetation characteristics, intrinsic hydrological complexity, and rapid changes over short periods of time (e.g., seasonality). In this study, we demonstrate the use of multitemporal, high spatial resolution (1 m2) hyperspectral airborne imagery (Compact Airborne Spectrographic Imager (CASI) and Shortwave Airborne Spectrographic Imager (SASI) sensors) for assessing maximum instantaneous gross photosynthesis (PGmax) in hummocks, and gravimetric water content (GWC) and carbon uptake efficiency in hollows, at the Mer Bleue ombrotrophic bog. We applied empirical models (i.e., in situ data and spectral indices) and we derived spatial and temporal trends for the aforementioned variables. Our findings revealed the distribution of hummocks (51.2%), hollows (12.7%), and tree cover (33.6%), which is the first high spatial resolution map of this nature at Mer Bleue. For hummocks, we found growing season PGmax values between 8 μmol m−2 s−1 and 12 μmol m−2 s−1 were predominant (86.3% of the total area). For hollows, our results revealed, for the first time, the spatial heterogeneity and seasonal trends for gravimetric water content and carbon uptake efficiency for the whole bog.

  12. Petroleum production at Maximum Efficient Rate Naval Petroleum Reserve No. 1 (Elk Hills), Kern County, California. Final Supplemental Environmental Impact Statement

    Energy Technology Data Exchange (ETDEWEB)

    1993-07-01

    This document provides an analysis of the potential impacts associated with the proposed action, which is continued operation of Naval Petroleum Reserve No. 1 (NPR-1) at the Maximum Efficient Rate (MER) as authorized by Public Law 94-258, the Naval Petroleum Reserves Production Act of 1976 (Act). The document also provides a similar analysis of alternatives to the proposed action, which also involve continued operations, but under lower development scenarios and lower rates of production. NPR-1 is a large oil and gas field jointly owned and operated by the federal government and Chevron U.S.A. Inc. (CUSA) pursuant to a Unit Plan Contract that became effective in 1944; the government's interest is approximately 78% and CUSA's interest is approximately 22%. The government's interest is under the jurisdiction of the United States Department of Energy (DOE). The facility is approximately 17,409 acres (74 square miles), and it is located in Kern County, California, about 25 miles southwest of Bakersfield and 100 miles north of Los Angeles in the south central portion of the state. The environmental analysis presented herein is a supplement to the NPR-1 Final Environmental Impact Statement that was issued by DOE in 1979 (1979 EIS). As such, this document is a Supplemental Environmental Impact Statement (SEIS).

  13. Irrigation efficiency and water-policy implications for river basin resilience

    Science.gov (United States)

    Scott, C. A.; Vicuña, S.; Blanco-Gutiérrez, I.; Meza, F.; Varela-Ortega, C.

    2014-04-01

    Rising demand for food, fiber, and biofuels drives expanding irrigation withdrawals from surface water and groundwater. Irrigation efficiency and water savings have become watchwords in response to climate-induced hydrological variability, increasing freshwater demand for other uses including ecosystem water needs, and low economic productivity of irrigation compared to most other uses. We identify three classes of unintended consequences, presented here as paradoxes. Ever-tighter cycling of water has been shown to increase resource use, an example of the efficiency paradox. In the absence of effective policy to constrain irrigated-area expansion using "saved water", efficiency can aggravate scarcity, deteriorate resource quality, and impair river basin resilience through loss of flexibility and redundancy. Water scarcity and salinity effects in the lower reaches of basins (symptomatic of the scale paradox) may partly be offset over the short-term through groundwater pumping or increasing surface water storage capacity. However, declining ecological flows and increasing salinity have important implications for riparian and estuarine ecosystems and for non-irrigation human uses of water including urban supply and energy generation, examples of the sectoral paradox. This paper briefly considers three regional contexts with broadly similar climatic and water-resource conditions - central Chile, southwestern US, and south-central Spain - where irrigation efficiency directly influences basin resilience. The comparison leads to more generic insights on water policy in relation to irrigation efficiency and emerging or overdue needs for environmental protection.

  14. Irrigation efficiency and water-policy implications for river-basin resilience

    Science.gov (United States)

    Scott, C. A.; Vicuña, S.; Blanco-Gutiérrez, I.; Meza, F.; Varela-Ortega, C.

    2013-07-01

    Rising demand for food, fiber, and biofuels drives expanding irrigation withdrawals from surface- and groundwater. Irrigation efficiency and water savings have become watchwords in response to climate-induced hydrological variability, increasing freshwater demand for other uses including ecosystem water needs, and low economic productivity of irrigation compared to most other uses. We identify three classes of unintended consequences, presented here as paradoxes. Ever-tighter cycling of water has been shown to increase resource use, an example of the efficiency paradox. In the absence of effective policy to constrain irrigated-area expansion using "saved water", efficiency can aggravate scarcity, deteriorate resource quality, and impair river-basin resilience through loss of flexibility and redundancy. Water scarcity and salinity effects in the lower reaches of basins (symptomatic of the scale paradox) may partly be offset over the short-term through groundwater pumping or increasing surface water storage capacity. However, declining ecological flows and increasing salinity have important implications for riparian and estuarine ecosystems and for non-irrigation human uses of water including urban supply and energy generation, examples of the sectoral paradox. This paper briefly examines policy frameworks in three regional contexts with broadly similar climatic and water-resource conditions - central Chile, southwestern US, and south-central Spain - where irrigation efficiency directly influences basin resilience. The comparison leads to more generic insights on water policy in relation to irrigation efficiency and emerging or overdue needs for environmental protection.

  15. Regional Inversion of the Maximum Carboxylation Rate (Vcmax) through the Sunlit Light Use Efficiency Estimated Using the Corrected Photochemical Reflectance Ratio Derived from MODIS Data

    Science.gov (United States)

    Zheng, T.; Chen, J. M.

    2016-12-01

    The maximum carboxylation rate (Vcmax), despite its importance in terrestrial carbon cycle modelling, remains challenging to obtain for large scales. In this study, an attempt has been made to invert the Vcmax using the gross primary productivity from sunlit leaves (GPPsun), with the physiological basis that the photosynthesis rate for leaves exposed to high solar radiation is mainly determined by the Vcmax. Since the GPPsun can be calculated through the sunlit light use efficiency (ɛsun), the main focus becomes the acquisition of ɛsun. Previous studies using site-level reflectance observations have shown the ability of the photochemical reflectance ratio (PRR, defined as the ratio between the reflectance from an effective band centered around 531 nm and a reference band) to track the variation of ɛsun for an evergreen coniferous stand and a deciduous broadleaf stand separately, and the potential of an NDVI-corrected PRR (NPRR, defined as the product of NDVI and PRR) to produce a general expression describing the NPRR-ɛsun relationship across different plant functional types. In this study, a significant correlation (R2 = 0.67, p < 0.001) between the MODIS-derived NPRR and the site-level ɛsun calculated using flux data for four Canadian flux sites was found for the year 2010. For validation purposes, the ɛsun values in 2009 for the same sites were calculated using the MODIS NPRR and the expression from 2010. The MODIS-derived ɛsun matches well with the flux-calculated ɛsun (R2 = 0.57, p < 0.001). The same expression was then applied over a 217 km × 193 km area in Saskatchewan, Canada to obtain ɛsun and thus GPPsun for the region during the growing season in 2008 (day 150 to day 260). The Vcmax for the region was inverted using the GPPsun and the result was validated at three flux sites inside the area. The results show that the approach is able to obtain good estimates of Vcmax, with R2 = 0.68 and RMSE = 8.8 μmol m-2 s-1.
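
    A minimal sketch of the index-and-regression chain described above (PRR, NDVI, their product NPRR, and a linear calibration of flux-derived ɛsun against NPRR) is given below; the reflectance and flux values are invented placeholders, and the subsequent GPPsun and Vcmax inversion steps require a photosynthesis model not shown here.

      # Spectral indices and a linear NPRR -> eps_sun calibration (toy data).
      import numpy as np

      def prr(rho_531, rho_ref):
          return rho_531 / rho_ref

      def ndvi(rho_nir, rho_red):
          return (rho_nir - rho_red) / (rho_nir + rho_red)

      # hypothetical MODIS reflectances and flux-tower eps_sun values
      rho_531 = np.array([0.045, 0.050, 0.052, 0.048])
      rho_ref = np.array([0.060, 0.058, 0.061, 0.059])
      rho_nir = np.array([0.35, 0.40, 0.42, 0.38])
      rho_red = np.array([0.05, 0.04, 0.045, 0.05])
      eps_sun = np.array([0.80, 0.95, 1.00, 0.85])   # flux-derived, placeholder units

      nprr = ndvi(rho_nir, rho_red) * prr(rho_531, rho_ref)
      slope, intercept = np.polyfit(nprr, eps_sun, 1)   # calibration year
      eps_sun_pred = slope * nprr + intercept           # apply to another year/region
      print(slope, intercept, eps_sun_pred)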

  16. On the economic analysis of problems in energy efficiency: Market barriers, market failures, and policy implications

    International Nuclear Information System (INIS)

    Sanstad, A.H.; Koomey, J.G.; Levine, M.D.

    1993-01-01

    In his recent paper in The Energy Journal, Ronald Sutherland argues that several so-called "market barriers" to energy efficiency frequently cited in the literature are not market failures in the conventional sense and are thus irrelevant for energy policy. We argue that Sutherland has inadequately analyzed the idea of market barrier and misrepresented the policy implications of microeconomics. We find that economic theory, correctly interpreted, does not provide for the categorical dismissal of market barriers. We explore important methodological issues underlying the debate over market barriers, and discuss the importance of reconciling the findings of non-economic social sciences with the economic analysis of energy demand and consumer decision-making. We also scrutinize Sutherland's attempt to apply finance theory to rationalize high implicit discount rates observed in energy-related choices, and find this use of finance theory to be inappropriate.

  17. On the economic analysis of problems in energy efficiency: Market barriers, market failures, and policy implications

    Energy Technology Data Exchange (ETDEWEB)

    Sanstad, A.H.; Koomey, J.G.; Levine, M.D.

    1993-01-01

    In his recent paper in The Energy Journal, Ronald Sutherland argues that several so-called "market barriers" to energy efficiency frequently cited in the literature are not market failures in the conventional sense and are thus irrelevant for energy policy. We argue that Sutherland has inadequately analyzed the idea of market barrier and misrepresented the policy implications of microeconomics. We find that economic theory, correctly interpreted, does not provide for the categorical dismissal of market barriers. We explore important methodological issues underlying the debate over market barriers, and discuss the importance of reconciling the findings of non-economic social sciences with the economic analysis of energy demand and consumer decision-making. We also scrutinize Sutherland's attempt to apply finance theory to rationalize high implicit discount rates observed in energy-related choices, and find this use of finance theory to be inappropriate.

  19. Secondary poisoning of cadmium, copper and mercury: implications for the Maximum Permissible Concentrations and Negligible Concentrations in water, sediment and soil

    NARCIS (Netherlands)

    Smit CE; van Wezel AP; Jager T; Traas TP; CSR

    2000-01-01

    The significance of secondary poisoning for the Maximum Permissible Concentrations (MPCs) and Negligible Concentrations (NCs) of cadmium, copper, and mercury in water, sediment, and soil has been evaluated. Field data on the accumulation of these elements by fish, mussels and

  20. Secondary poisoning of cadmium, copper and mercury: implications for the Maximum Permissible Concentrations and Negligible Concentrations in water, sediment and soil

    NARCIS (Netherlands)

    Smit CE; Wezel AP van; Jager T; Traas TP; CSR

    2000-01-01

    The impact of secondary poisoning on the Maximum Permissible Concentrations (MPCs) and Negligible Concentrations (NCs) of cadmium, copper and mercury in water, sediment and soil have been evaluated. Field data on accumulation of these elements by fish, mussels and earthworms were used to derive

  1. Towards a sustainable architecture: Adequate to the environment and of maximum energy efficiency; Hacia una arquitectura sustentable: adecuada al ambiente y de maxima eficiencia energetica

    Energy Technology Data Exchange (ETDEWEB)

    Morillon Galvez, David [Comision Nacional para el Ahorro de Energia, Mexico, D. F. (Mexico)

    1999-07-01

    An analysis is presented of the elements and factors that building architecture must incorporate in order to be sustainable: a design suited to the environment, energy saving and efficient energy use, the use of alternative energies, and self-supply. In addition, a methodology for the natural air conditioning of buildings (bioclimatic architecture) is proposed, along with ideas for energy saving and efficient energy use, with the objective of contributing to the appropriate use of building components (walls, roofs, floors, etc.) so that, in interacting with the environment, the building takes advantage of it without deteriorating it, achieving energy-efficient designs.

  2. Markets for energy efficiency: Exploring the implications of an EU-wide 'Tradable White Certificate' scheme

    International Nuclear Information System (INIS)

    Mundaca, Luis

    2008-01-01

    Recent developments in European energy policy reveal an increasing interest in implementing the so-called 'Tradable White Certificate' (TWC) schemes to improve energy efficiency. Based on three evaluation criteria (cost-effectiveness, environmental effectiveness and distributional equity) this paper analyses the implications of implementing a European-wide TWC scheme targeting the household and commercial sectors. Using a bottom-up model, quantitative results show significant cost-effective potentials for improvements (ca. 1400 TWh in cumulative energy savings by 2020), with the household sector, gas and space heating representing most of the TWC supply in terms of eligible sector, fuel and energy service demand, respectively. If a single market price of negative externalities is considered, a societal cost-effective potential of energy savings above 30% (compared to the baseline) is observed. In environmental terms, the resulting greenhouse gas emission reductions are around 200 Mt CO2-eq by 2010, representing nearly 60% of the EU Kyoto target. From the qualitative perspective, several embedded ancillary benefits are identified (e.g. employment generation, improved comfort level, reduced 'fuel poverty', security of energy supply). Whereas an EU-wide TWC increases liquidity and reduces the risks of market power, autarky compliance strategies may be expected in order to capture co-benefits nationally. Cross subsidies could occur due to investment recovery mechanisms and there is a risk that effects may be regressive for low-income households. Assumptions undertaken by the modelling approach strongly indicate that high effectiveness of other policy instruments is needed for an EU-wide TWC scheme to be cost-effective.

  3. Resource limits and conversion efficiency with implications for climate change and California's energy supply

    Science.gov (United States)

    Croft, Gregory Donald

    There are two commonly-used approaches to modeling the future supply of mineral resources. One is to estimate reserves and compare the result to extraction rates, and the other is to project from historical time series of extraction rates. Perceptions of abundant oil supplies in the Middle East and abundant coal supplies in the United States are based on the former approach. In both of these cases, an approach based on historical production series results in a much smaller resource estimate than aggregate reserve numbers. This difference is not systematic; natural gas production in the United States shows a strong increasing trend even though modest reserve estimates have resulted in three decades of worry about the gas supply. The implication of a future decline in Middle East oil production is that the market for transportation fuels is facing major changes, and that alternative fuels should be analyzed in this light. Because the U.S. holds very large coal reserves, synthesizing liquid hydrocarbons from coal has been suggested as an alternative fuel supply. To assess the potential of this process, one has to look at both the resource base and the net efficiency. The three states with the largest coal production declines in the 1996 to 2006 period are among the top 5 coal reserve holders, suggesting that gross coal reserves are a poor indicator of future production. Of the three categories of coal reserves reported by the U.S. Energy Information Administration, reserves at existing mines is the narrowest category and is approximately the equivalent of proved developed oil reserves. By this measure, Wyoming has the largest coal reserves in the U.S., and it accounted for all of U.S. coal production growth over the 1996 to 2006 time period. In Chapter 2, multi-cycle Hubbert curve analysis of historical data of coal production from 1850 to 2007 demonstrates that U.S. anthracite and bituminous coal are past their production peak. This result contradicts estimates based
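
    As a hedged illustration of the projection method mentioned above, the sketch below fits a single Hubbert (logistic-derivative) curve to a synthetic production series; the dissertation's multi-cycle analysis would sum several such curves, and all numbers here are invented.

      # Fit a single Hubbert curve to an annual production series (synthetic).
      import numpy as np
      from scipy.optimize import curve_fit

      def hubbert(t, urr, b, t_peak):
          """Production rate implied by a logistic cumulative-production curve."""
          e = np.exp(-b * (t - t_peak))
          return urr * b * e / (1.0 + e) ** 2

      years = np.arange(1900, 2001)
      true = hubbert(years, urr=60.0, b=0.08, t_peak=1970)        # placeholder units
      prod = true + np.random.default_rng(0).normal(0, 0.02, years.size)

      popt, _ = curve_fit(hubbert, years, prod, p0=(50.0, 0.05, 1960))
      urr, b, t_peak = popt
      print(f"URR ~ {urr:.1f}, peak year ~ {t_peak:.0f}, peak rate ~ {urr * b / 4:.2f}")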

  4. A cosmogenic 10Be chronology for the local last glacial maximum and termination in the Cordillera Oriental, southern Peruvian Andes: Implications for the tropical role in global climate

    Science.gov (United States)

    Bromley, Gordon R. M.; Schaefer, Joerg M.; Hall, Brenda L.; Rademaker, Kurt M.; Putnam, Aaron E.; Todd, Claire E.; Hegland, Matthew; Winckler, Gisela; Jackson, Margaret S.; Strand, Peter D.

    2016-09-01

    Resolving patterns of tropical climate variability during and since the last glacial maximum (LGM) is fundamental to assessing the role of the tropics in global change, both on ice-age and sub-millennial timescales. Here, we present a10Be moraine chronology from the Cordillera Carabaya (14.3°S), a sub-range of the Cordillera Oriental in southern Peru, covering the LGM and the first half of the last glacial termination. Additionally, we recalculate existing 10Be ages using a new tropical high-altitude production rate in order to put our record into broader spatial context. Our results indicate that glaciers deposited a series of moraines during marine isotope stage 2, broadly synchronous with global glacier maxima, but that maximum glacier extent may have occurred prior to stage 2. Thereafter, atmospheric warming drove widespread deglaciation of the Cordillera Carabaya. A subsequent glacier resurgence culminated at ∼16,100 yrs, followed by a second period of glacier recession. Together, the observed deglaciation corresponds to Heinrich Stadial 1 (HS1: ∼18,000-14,600 yrs), during which pluvial lakes on the adjacent Peruvian-Bolivian altiplano rose to their highest levels of the late Pleistocene as a consequence of southward displacement of the inter-tropical convergence zone and intensification of the South American summer monsoon. Deglaciation in the Cordillera Carabaya also coincided with the retreat of higher-latitude mountain glaciers in the Southern Hemisphere. Our findings suggest that HS1 was characterised by atmospheric warming and indicate that deglaciation of the southern Peruvian Andes was driven by rising temperatures, despite increased precipitation. Recalculated 10Be data from other tropical Andean sites support this model. Finally, we suggest that the broadly uniform response during the LGM and termination of the glaciers examined here involved equatorial Pacific sea-surface temperature anomalies and propose a framework for testing the viability

  5. Maximum power point tracking

    International Nuclear Information System (INIS)

    Enslin, J.H.R.

    1990-01-01

    A well engineered renewable remote energy system, utilizing the principle of Maximum Power Point Tracking, can be more cost effective, has a higher reliability and can improve the quality of life in remote areas. This paper reports that a highly efficient power electronic converter for converting the output voltage of a solar panel or wind generator to the required DC battery bus voltage has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximization of the output current in a battery charging regulator, using an optimized hill-climbing, inexpensive microprocessor-based algorithm. Through practical field measurements it is shown that a minimum input source saving of 15% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost of these systems is achievable for relatively small Remote Area Power Supply systems. The advantages are even greater for larger temperature variations and higher-rated systems. Other advantages include optimal sizing and system monitoring and control
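
    A minimal sketch of the hill-climbing (perturb-and-observe) current-maximisation loop described above is shown below; the measurement function is a hypothetical stand-in for real converter sensing, and the step size and limits are illustrative.

      # Hill-climbing MPPT: perturb the duty cycle, keep the direction that
      # increases the battery charging current, reverse otherwise.
      def measure_charge_current(duty):
          # placeholder PV-like curve with a maximum near duty = 0.62
          return max(0.0, 10.0 - 55.0 * (duty - 0.62) ** 2)

      def hill_climb_mppt(duty=0.5, step=0.01, iterations=50):
          last_current = measure_charge_current(duty)
          direction = +1
          for _ in range(iterations):
              duty = min(max(duty + direction * step, 0.0), 1.0)
              current = measure_charge_current(duty)
              if current < last_current:      # wrong way: reverse the perturbation
                  direction = -direction
              last_current = current
          return duty, last_current

      print(hill_climb_mppt())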

  6. A comparison of climate simulations for the last glacial maximum with three different versions of the ECHAM model and implications for summer-green tree refugia

    Directory of Open Access Journals (Sweden)

    K. Arpe

    2011-02-01

    Full Text Available Model simulations of the last glacial maximum (21 ± 2 ka with the ECHAM3 T42 atmosphere-only, ECHAM5-MPIOM T31 atmosphere-ocean coupled and ECHAM5 T106 atmosphere-only models are compared. The topography, land-sea mask and glacier distribution for the ECHAM5 simulations were taken from the Paleoclimate Modelling Intercomparison Project Phase II (PMIP2 data set while for ECHAM3 they were taken from PMIP1. The ECHAM5-MPIOM T31 model produced its own sea surface temperatures (SST while the ECHAM5 T106 simulations were forced at the boundaries by this coupled model SSTs corrected from their present-day biases and the ECHAM3 T42 model was forced with prescribed SSTs provided by Climate/Long-Range Investigation, Mapping, and Prediction project (CLIMAP.

    The SSTs in the ECHAM5-MPIOM simulation for the last glacial maximum (LGM were much warmer in the northern Atlantic than those suggested by CLIMAP or Overview of Glacial Atlantic Ocean Mapping (GLAMAP while the SSTs were cooler everywhere else. This had a clear effect on the temperatures over Europe, warmer for winters in western Europe and cooler for eastern Europe than the simulation with CLIMAP SSTs.

    Considerable differences in the general circulation patterns were found in the different simulations. A ridge over western Europe for the present climate during winter in the 500 hPa height field remains in both ECHAM5 simulations for the LGM, more so in the T106 version, while the ECHAM3 CLIMAP-SST simulation provided a trough which is consistent with cooler temperatures over western Europe. The zonal wind between 30° W and 10° E shows a southward shift of the polar and subtropical jets in the simulations for the LGM, least obvious in the ECHAM5 T31 one, and an extremely strong polar jet for the ECHAM3 CLIMAP-SST run. The latter can probably be assigned to the much stronger north-south gradient in the CLIMAP SSTs. The southward shift of the polar jet during the LGM is supported by

  7. Measuring the Efficiency of Education and Technology via DEA approach: Implications on National Development

    Directory of Open Access Journals (Sweden)

    Huan Xu

    2017-11-01

    Full Text Available The aim of this paper is to provide a new approach for assessing the input-output efficiency of education and technology for national science and education departments. We used the Data Envelopment Analysis (DEA) method to analyze the efficiency of activities in the education and technology sector, and classified input variables and output variables accordingly. Using panel data for the education and technology sector of 53 countries, we found that the countries with significant progress in educational efficiency and technological efficiency were mainly concentrated in East Asia, especially Japan, Korea, and Taiwan, and in some other developing countries. We further evaluate the effect of educational and technological efficiencies on national competitiveness, balanced development of the country, national energy efficiency, exports, and employment. We found that the efficiency of science and technology has an effect on the balanced development of the country, but that of education has played a counter-productive role. Educational efficiency plays a large role and is closely related to the country's educational development. In addition, using panel data analysis, we showed that educational and technological efficiency contributed to development to different degrees from 2000 to 2014, depending mainly on economic development and the push of education and technology policy. The proposed approach provides decision-making support for education and technology policy formulation, especially the selection of appropriate strategies for resource allocation and process evaluation.

  8. Improving primary health care facility performance in Ghana: efficiency analysis and fiscal space implications.

    Science.gov (United States)

    Novignon, Jacob; Nonvignon, Justice

    2017-06-12

    Health centers in Ghana play an important role in health care delivery, especially in deprived communities. They usually serve as the first line of service and meet basic health care needs. Unfortunately, these facilities are faced with inadequate resources. While health policy makers seek to increase resources committed to primary healthcare, it is important to understand the nature of inefficiencies that exist in these facilities. Therefore, the objectives of this study are threefold: (i) to estimate efficiency among primary health facilities (health centers), (ii) to examine the potential fiscal space from improved efficiency, and (iii) to investigate the efficiency disparities between public and private facilities. Data were from the 2015 Access Bottlenecks, Cost and Equity (ABCE) project conducted by the Institute for Health Metrics and Evaluation. Stochastic Frontier Analysis (SFA) was used to estimate the efficiency of health facilities. Efficiency scores were then used to compute potential savings from improved efficiency. Outpatient visits were used as the output, while the numbers of personnel and hospital beds and expenditure on other capital items and administration were used as inputs. Disparities in efficiency between public and private facilities were estimated using the Nopo matching decomposition procedure. The average efficiency score across all health centers included in the sample was estimated to be 0.51. Also, average efficiency was estimated to be about 0.65 and 0.50 for private and public facilities, respectively. Significant disparities in efficiency were identified across the various administrative regions. With regard to potential fiscal space, we found that, on average, facilities could save about GH₵11,450.70 (US$7633.80) if efficiency were improved. We also found that fiscal space from efficiency gains varies across rural/urban as well as private/public facilities, if best practices are followed. The matching decomposition showed an efficiency gap of 0.29 between private
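
    The frontier estimation itself is an econometric exercise, but the fiscal-space arithmetic reported above is simple; the sketch below applies it to made-up facility records, computing the potential saving as the share of spending not converted into output.

      # Potential saving per facility = (1 - efficiency) * expenditure.
      # Efficiency scores and expenditures below are invented placeholders.
      facilities = [
          {"name": "HC-A", "efficiency": 0.42, "expenditure_ghs": 21000.0},
          {"name": "HC-B", "efficiency": 0.65, "expenditure_ghs": 18500.0},
          {"name": "HC-C", "efficiency": 0.51, "expenditure_ghs": 25000.0},
      ]

      savings = [(f["name"], (1 - f["efficiency"]) * f["expenditure_ghs"]) for f in facilities]
      for name, s in savings:
          print(f"{name}: potential saving GHS {s:,.2f}")
      print("average:", sum(s for _, s in savings) / len(savings))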

  9. Stumbling at the First Step: Efficiency Implications of Poor Performance in the Foundational First Five Years

    Science.gov (United States)

    Crouch, Luis; Merseth, Katherine A.

    2017-01-01

    This paper highlights patterns in school enrollment indicators that affect the efficiency and effectiveness of education systems in a set of low-income countries: those that have expanded access quickly in the last decade or two, but have not yet absorbed that expansion efficiently. Although the patterns in these indicators are observable in the…

  10. Risk Adjusted Production Efficiency of Maize Farmers in Ethiopia: Implication for Improved Maize Varieties Adoption

    Directory of Open Access Journals (Sweden)

    Sisay Diriba Lemessa

    2017-09-01

    Full Text Available This study analyzes the technical efficiency and production risk of 862 maize farmers in major maize producing regions of Ethiopia. It employs the stochastic frontier approach (SFA) to estimate the level of technical efficiency of smallholder farmers. The stochastic frontier approach (SFA) uses flexible risk properties to account for production risk. Thus, maize production variability is assessed from two perspectives: production risk and technical efficiency. The study also attempts to determine the socio-economic and farm characteristics that influence technical efficiency of maize production in the study area. The findings of the study showed the existence of both production risk and technical inefficiency in the maize production process. Input variables (amounts per hectare), such as fertilizer and labor, positively influence maize output. The findings also show that farms in the study area exhibit decreasing returns to scale. Fertilizer and ox plough days reduce output risk, while labor and improved seed increase output risk. The mean technical efficiency for maize farms is 48 percent. This study concludes that production risk and technical inefficiency prevent the maize farmers from realizing their frontier output. The factors that most improve the efficiency of the maize farmers in the study area include frequency of extension contact, access to credit, and use of intercropping. It was also realized that altitude and terracing in maize farms had an influence on farmer efficiency.

  11. AUDI Rebel - Aesthetic efficiency : A sporty exterior design that achieves maximum of aesthetic by using minimum amount of elements. An aesthetic efficient design that will appeal to Generation Z - the rebels with a cause.

    OpenAIRE

    Dragu, Sebastian - Mihai

    2014-01-01

    Nowadays, the daily use of the Internet and smart technology is not just about being faster and more efficient in communication. It has become a way of living that has changed the way people think, read, play, shop, spend free time, meet people, etc. Having many choices and greater access to a large online information pool, people have become diligent researchers, always considering what makes a good investment. Since there are many different products offering more or less the same functional benefits, a de...

  12. A Computational Model of Pattern Separation Efficiency in the Dentate Gyrus with Implications in Schizophrenia

    Directory of Open Access Journals (Sweden)

    Faramarz eFaghihi

    2015-03-01

    Full Text Available Information processing in the hippocampus begins by transferring spiking activity of the Entorhinal Cortex (EC) into the Dentate Gyrus (DG). Activity pattern in the EC is separated by the DG such that it plays an important role in hippocampal functions including memory. The structural and physiological parameters of these neural networks enable the hippocampus to be efficient in encoding a large number of inputs that animals receive and process in their life time. The neural encoding capacity of the DG depends on its single neurons encoding and pattern separation efficiency. In this study, encoding by the DG is modelled such that single neurons and pattern separation efficiency are measured using simulations of different parameter values. For this purpose, a probabilistic model of single neurons efficiency is presented to study the role of structural and physiological parameters. The known numbers of neurons in the EC and the DG are used to construct a neural network based on the electrophysiological features of neurons in the DG. Separated inputs as activated neurons in the EC with different firing probabilities are presented into the DG. For different connectivity rates between the EC and DG, pattern separation efficiency of the DG is measured. The results show that in the absence of feedback inhibition on the DG neurons, the DG demonstrates low separation efficiency and high firing frequency. Feedback inhibition can increase separation efficiency while resulting in very low single neuron's encoding efficiency in the DG and very low firing frequency of neurons in the DG (sparse spiking). This work presents a mechanistic explanation for experimental observations in the hippocampus, in combination with theoretical measures. Moreover, the model predicts a critical role for impaired inhibitory neurons in schizophrenia where deficiency in pattern separation of the DG has been observed.

  13. A computational model of pattern separation efficiency in the dentate gyrus with implications in schizophrenia

    Science.gov (United States)

    Faghihi, Faramarz; Moustafa, Ahmed A.

    2015-01-01

    Information processing in the hippocampus begins by transferring spiking activity of the entorhinal cortex (EC) into the dentate gyrus (DG). Activity pattern in the EC is separated by the DG such that it plays an important role in hippocampal functions including memory. The structural and physiological parameters of these neural networks enable the hippocampus to be efficient in encoding a large number of inputs that animals receive and process in their life time. The neural encoding capacity of the DG depends on its single neurons encoding and pattern separation efficiency. In this study, encoding by the DG is modeled such that single neurons and pattern separation efficiency are measured using simulations of different parameter values. For this purpose, a probabilistic model of single neurons efficiency is presented to study the role of structural and physiological parameters. The known numbers of neurons in the EC and the DG are used to construct a neural network based on the electrophysiological features of granule cells of the DG. Separated inputs as activated neurons in the EC with different firing probabilities are presented into the DG. For different connectivity rates between the EC and DG, pattern separation efficiency of the DG is measured. The results show that in the absence of feedback inhibition on the DG neurons, the DG demonstrates low separation efficiency and high firing frequency. Feedback inhibition can increase separation efficiency while resulting in very low single neuron’s encoding efficiency in the DG and very low firing frequency of neurons in the DG (sparse spiking). This work presents a mechanistic explanation for experimental observations in the hippocampus, in combination with theoretical measures. Moreover, the model predicts a critical role for impaired inhibitory neurons in schizophrenia where deficiency in pattern separation of the DG has been observed. PMID:25859189
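
    As a much-simplified stand-in for the probabilistic EC-to-DG model described above, the sketch below projects two overlapping binary EC patterns through sparse random connectivity and approximates feedback inhibition with a k-winners-take-all rule, then compares input and output overlap; all sizes, rates, and the inhibition rule are illustrative assumptions, not the paper's model.

      # Toy EC -> DG pattern separation with k-winners-take-all "inhibition".
      import numpy as np

      rng = np.random.default_rng(1)
      n_ec, n_dg = 200, 1000
      connectivity = 0.05                       # EC -> DG connection probability
      W = (rng.random((n_dg, n_ec)) < connectivity).astype(float)

      def ec_pattern(active=40):
          p = np.zeros(n_ec, bool)
          p[rng.choice(n_ec, active, replace=False)] = True
          return p

      def dg_response(ec, k_winners=50):
          drive = W @ ec.astype(float)          # summed excitatory input per granule cell
          out = np.zeros(n_dg, bool)
          out[np.argsort(drive)[-k_winners:]] = True   # keep only the k most-driven cells
          return out

      def overlap(a, b):
          return (a & b).sum() / max((a | b).sum(), 1)

      p1 = ec_pattern()
      p2 = p1.copy()                            # make a partially overlapping pattern
      p2[rng.choice(np.flatnonzero(p1), 10, replace=False)] = False
      p2[rng.choice(np.flatnonzero(~p1), 10, replace=False)] = True

      print("input overlap :", overlap(p1, p2))
      print("output overlap:", overlap(dg_response(p1), dg_response(p2)))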

  14. EFFICIENT QUANTITATIVE RISK ASSESSMENT OF JUMP PROCESSES: IMPLICATIONS FOR FOOD SAFETY

    OpenAIRE

    Nganje, William E.

    1999-01-01

    This paper develops a dynamic framework for efficient quantitative risk assessment from the simplest general risk, combining three parameters (contamination, exposure, and dose response) in a Kataoka safety-first model and a Poisson probability representing the uncertainty effect or jump processes associated with food safety. Analysis indicates that incorporating jump processes in food safety risk assessment provides more efficient cost/risk tradeoffs. Nevertheless, increased margin of safety...

  15. Efficiency improvement opportunities for personal computer monitors. Implications for market transformation programs

    Energy Technology Data Exchange (ETDEWEB)

    Park, Won Young; Phadke, Amol; Shah, Nihar [Environmental Energy Technologies Division, Lawrence Berkeley National Laboratory, Berkeley, CA (United States)

    2013-08-15

    Displays account for a significant portion of electricity consumed in personal computer (PC) use, and global PC monitor shipments are expected to continue to increase. We assess the market trends in the energy efficiency of PC monitors that are likely to occur without any additional policy intervention and estimate that PC monitor efficiency will likely improve by over 40 % by 2015 with saving potential of 4.5 TWh per year in 2015, compared to today's technology. We discuss various energy-efficiency improvement options and evaluate the cost-effectiveness of three of them, at least one of which improves efficiency by at least 20 % cost effectively beyond the ongoing market trends. We assess the potential for further improving efficiency taking into account the recent development of universal serial bus-powered liquid crystal display monitors and find that the current technology available and deployed in them has the potential to deeply and cost effectively reduce energy consumption by as much as 50 %. We provide insights for policies and programs that can be used to accelerate the adoption of efficient technologies to further capture global energy saving potential from PC monitors which we estimate to be 9.2 TWh per year in 2015.

  16. Main determinants of efficiency and implications on banking concentration in the European Union

    Directory of Open Access Journals (Sweden)

    Rafael Bautista Mesa

    2014-01-01

    Full Text Available This study aims to measure the main determinants influencing bank efficiency. We suggest that the bank efficiency ratio, obtained from the income statement, is positively related to the size of a bank in terms of total assets. However, we believe that such a relationship cannot be maintained for banks over a certain size. Using regression analysis, we analyze the link between bank efficiency and bank size, using a sample of 3952 banks in the European Union. Our results show that the efficiency ratio stops improving for banks with total assets over $25 billion. Previous literature, using different analysis techniques, does not reach an agreement on this point. Furthermore, our study identifies additional variables that negatively affect bank efficiency, such as competition and lending diversification, or affect it positively, such as the wholesale funding ratio and income diversification. Our findings imply the need for different bank policies depending on total assets, in order to limit the size and activities of banks.

  17. Efficiency Improvement Opportunities for Personal Computer Monitors. Implications for Market Transformation Programs

    Energy Technology Data Exchange (ETDEWEB)

    Park, Won Young [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Phadke, Amol [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Shah, Nihar [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2012-06-29

    Displays account for a significant portion of electricity consumed in personal computer (PC) use, and global PC monitor shipments are expected to continue to increase. We assess the market trends in the energy efficiency of PC monitors that are likely to occur without any additional policy intervention and estimate that display efficiency will likely improve by over 40% by 2015 compared to today’s technology. We evaluate the cost effectiveness of a key technology which further improves efficiency beyond this level by at least 20% and find that its adoption is cost effective. We assess the potential for further improving efficiency taking into account the recent development of universal serial bus (USB) powered liquid crystal display (LCD) monitors and find that the current technology available and deployed in USB powered monitors has the potential to deeply reduce energy consumption by as much as 50%. We provide insights for policies and programs that can be used to accelerate the adoption of efficient technologies to capture global energy saving potential from PC monitors which we estimate to be 9.2 terawatt-hours [TWh] per year in 2015.

  18. Metropolitan density, energy efficiency and carbon emissions: Multi-attribute tradeoffs and their policy implications

    International Nuclear Information System (INIS)

    Clark, Thomas A.

    2013-01-01

    Of all the potential benefits of urban containment, compaction, and densification, just two are the central focus here: attainment of greater energy efficiency and reduction in carbon emissions. In cities these are largely associated with the transport and building sectors. This paper probes the form-efficiency relation in the transport sector across 57 census-defined urbanized areas in the United States in 2000. Thirty-six of the forty largest are included. Increase in core area population density is correlated with modest gain in energy efficiency in the urban transport sector and modest decrease in its carbon emissions. Densification's lagged effects related to travel rationalization and growth in transit receptivity may increase overall metro transport energy efficiency beyond the degree revealed here. These impacts are associated with two off-setting negative externalities: (1) diminished housing affordability, and (2) increased roadway congestion. Each may moderate over time. Such effects are non-additive, owing to a difference of metrics. Elevated CAFE standards provoking new transport technologies may reduce total energy consumption and associated emissions ceteris paribus, lessening densification's marginal efficiency payoff while magnifying the significance of densification's opportunity costs. Categories of policy interventions to promote metro-scale energy efficiencies and emissions reductions, with and without urban densification, conclude the paper. - Highlight: ► Transport VMT and Btu per capita are considered across 57 U.S. metro areas in 2000. ► Per capita VMT, Btu and vehicle emissions are inverse to metro core area population density. ► Interior road congestion and housing costs rise with core but not peripheral densification. ► Spatial non-density and aspatial transport approaches constitute alternate policy levers.

  19. Patchy zooplankton grazing and high energy conversion efficiency: ecological implications of sandeel behavior and strategy

    DEFF Research Database (Denmark)

    Deurs, Mikael van; Christensen, Asbjørn; Rindorf, Anna

    2013-01-01

    of prey. Here we studied zooplankton consumption and energy conversion efficiency of lesser sandeel (Ammodytes marinus) in the central North Sea, using stomach data, length and weight-at-age data, bioenergetics, and hydrodynamic modeling. The results suggested: (i) Lesser sandeel in the Dogger area depend … sandeel densities and growth rates per area than larger habitats…

  20. Efficient degradation of gluten by a prolyl endoprotease in a gastrointestinal model: Implications for coeliac disease

    NARCIS (Netherlands)

    Mitea, C.; Havenaar, R.; Wouter Drijfhout, J.; Edens, L.; Dekking, L.; Koning, F.; Dekking, E.H.A.

    2008-01-01

    Background: Coeliac disease is caused by an immune response to gluten. As gluten proteins are proline rich they are resistant to enzymatic digestion in the gastrointestinal tract, a property that probably contributes to the immunogenic nature of gluten. Aims: This study determined the efficiency of

  1. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.
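
    For readers unfamiliar with the parsimony criterion itself, the following is a minimal sketch of how the maximum parsimony score of a single character is evaluated on a fixed rooted binary tree using Fitch's algorithm. It is only an illustration of the MP scoring criterion, not the Steiner-tree approximation described in this record; the tree, node names, and character states are hypothetical.

```python
def fitch_parsimony(tree, leaf_states, root="root"):
    """Fitch small-parsimony score of one character on a rooted binary tree.

    tree        : dict mapping each internal node to its (left, right) children;
                  leaves do not appear as keys.
    leaf_states : dict mapping each leaf to its observed character state.
    """
    changes = 0

    def state_set(node):
        nonlocal changes
        if node not in tree:                 # leaf
            return {leaf_states[node]}
        left, right = tree[node]
        s_left, s_right = state_set(left), state_set(right)
        if s_left & s_right:                 # non-empty intersection: no substitution
            return s_left & s_right
        changes += 1                         # disjoint sets: count one substitution
        return s_left | s_right

    state_set(root)
    return changes


# Hypothetical 4-leaf tree ((a,b),(c,d)) with states A, A, C, A -> score 1
tree = {"root": ("n1", "n2"), "n1": ("a", "b"), "n2": ("c", "d")}
print(fitch_parsimony(tree, {"a": "A", "b": "A", "c": "C", "d": "A"}))  # 1
```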

  2. Maximum permissible dose

    International Nuclear Information System (INIS)

    Anon.

    1979-01-01

    This chapter presents a historic overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed

  3. Highly efficient forward osmosis based on porous membranes--applications and implications.

    Science.gov (United States)

    Qi, Saren; Li, Ye; Zhao, Yang; Li, Weiyi; Tang, Chuyang Y

    2015-04-07

    For the first time, forward osmosis (FO) was performed using a porous membrane with an ultrafiltration (UF)-like rejection layer, and its feasibility for high performance FO filtration was demonstrated. Compared to traditional FO membranes with dense rejection layers, the UF-like FO membrane was 2 orders of magnitude more permeable. This gave rise to respectable FO water flux even at ultralow osmotic driving force, for example, 7.6 L/m²·h at an osmotic pressure of merely 0.11 bar (achieved by using a 0.1% poly(sodium 4-styrene-sulfonate) draw solution). The membrane was applied to oil/water separation, and a highly stable FO water flux was achieved. The adoption of porous FO membranes opens a door to many new opportunities, with potential applications ranging from wastewater treatment and valuable product recovery to biomedical applications. The potential applications and implications of porous FO membranes are addressed in this paper.
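
    As a quick back-of-the-envelope check, the intrinsic water permeability implied by the figures quoted above can be recovered directly from the reported flux and osmotic driving force; the calculation below uses only numbers stated in the record, and the comparison is simply the record's own "two orders of magnitude" claim.

```python
# Water permeability implied by the reported forward-osmosis flux.
flux_lmh = 7.6        # L m^-2 h^-1, reported FO water flux
delta_pi_bar = 0.11   # bar, osmotic pressure of the 0.1% PSS draw solution

permeability = flux_lmh / delta_pi_bar
print(f"A = {permeability:.0f} L m^-2 h^-1 bar^-1")
# ~69, i.e. roughly two orders of magnitude above a dense-rejection-layer FO
# membrane, consistent with the permeability comparison made in the record.
```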

  4. Energy use and implications for efficiency strategies in global fluid-milk processing industry

    International Nuclear Information System (INIS)

    Xu Tengfang; Flapper, Joris

    2009-01-01

    The fluid-milk processing industry around the world processes approximately 60% of total raw milk production to create diverse fresh fluid-milk products. This paper reviews energy usage in existing global fluid-milk markets to identify baseline information that allows comparisons of energy performance of individual plants and systems. In this paper, we analyzed energy data compiled through extensive literature reviews on fluid-milk processing across a number of countries and regions. The study has found that the average final energy intensity of individual plants exhibited large variations, ranging from 0.2 to 12.6 MJ per kg fluid-milk product across various plants in different countries and regions. In addition, it is observed that while the majority of larger plants tended to exhibit higher energy efficiency, some exceptions existed for smaller plants with higher efficiency. These significant differences have indicated large potential energy-savings opportunities in the sector across many countries. Furthermore, this paper illustrates a positive correlation between implementing energy-monitoring programs and curbing the increasing trend in energy demand per equivalent fluid-milk product over time in the fluid-milk sector, and suggests that developing an energy-benchmarking framework, along with promulgating new policy options, should be pursued for improving energy efficiency in the global fluid-milk processing industry.

  5. Influence of methane emissions and vehicle efficiency on the climate implications of heavy-duty natural gas trucks.

    Science.gov (United States)

    Camuzeaux, Jonathan R; Alvarez, Ramón A; Brooks, Susanne A; Browne, Joshua B; Sterner, Thomas

    2015-06-02

    While natural gas produces lower carbon dioxide emissions than diesel during combustion, if enough methane is emitted across the fuel cycle, then switching a heavy-duty truck fleet from diesel to natural gas can produce net climate damages (more radiative forcing) for decades. Using the Technology Warming Potential methodology, we assess the climate implications of a diesel to natural gas switch in heavy-duty trucks. We consider spark ignition (SI) and high-pressure direct injection (HPDI) natural gas engines and compressed and liquefied natural gas. Given uncertainty surrounding several key assumptions and the potential for technology to evolve, results are evaluated for a range of inputs for well-to-pump natural gas loss rates, vehicle efficiency, and pump-to-wheels (in-use) methane emissions. Using reference case assumptions reflecting currently available data, we find that converting heavy-duty truck fleets leads to damages to the climate for several decades: around 70-90 years for the SI cases, and 50 years for the more efficient HPDI. Our range of results indicates that these fuel switches have the potential to produce climate benefits on all time frames, but combinations of significant well-to-wheels methane emissions reductions and natural gas vehicle efficiency improvements would be required.

  6. Efficiency potentials of heat pumps with combined heat and power. For maximum reduction of CO2 emissions and for electricity generation from fossil fuels with CO2 reduction in Switzerland

    International Nuclear Information System (INIS)

    Rognon, F.

    2005-06-01

    This comprehensive report for the Swiss Federal Office of Energy (SFOE) takes a look at how the efficiency potential of heat pumps together with combined heat and power systems can help provide a maximum reduction of CO2 emissions and provide electricity generation from fossil fuel in Switzerland together with reductions in CO2 emissions. In Switzerland, approximately 80% of the low-temperature heat required for space-heating and for the heating-up of hot water is produced by burning combustibles. Around a million gas and oil boilers were in use in Switzerland in 2000, and these accounted for approximately half the country's 41.1 million tonnes of CO2 emissions. The authors state that there is a more efficient solution with lower CO2 emissions: the heat pump. With the enormous potential of our environment it would be possible to replace half the total number of boilers in use today with heat pumps. This would be equivalent to 90 PJ p.a. of useful heat, or 500,000 systems. The power source for heat pumps should come from the substitution of electric heating systems (electric resistor-based systems) and from the replacement of boilers. This should be done by using combined heat and power systems with full heat utilisation. This means, according to the authors, that the entire required power source can be provided without the need to construct new electricity production plants. The paper examines and discusses the theoretical, technical, market and realisable potentials.

  7. Asymmetric learning by doing and dynamically efficient policy: implications for domestic and international emissions permit trading of allocating permits usefully

    International Nuclear Information System (INIS)

    Read, Peter

    2000-01-01

    Learning by doing leads to cost reductions as suppliers move down the 'experience curve'. This results in a beneficial supply side inter-temporal externality that, for dynamic efficiency, requires a higher incentive for abatement innovations than the penalty on emissions. This effect can be achieved by a dedicated emissions tax or by a proportionate abatement obligation or by allocating permits usefully. The latter arrangement is compatible with the effective cap on emissions that is secured by an emissions trading scheme. Each of the three possibilities results in a reduced loss of international competitiveness in policy-committed regions, in less 'leakage', and in more technology transfer. Implications for trading in emissions permits and in project-related credits are discussed. (Author)

  8. Charge transfer complex states in diketopyrrolopyrrole polymers and fullerene blends: Implications for organic solar cell efficiency

    Science.gov (United States)

    Moghe, D.; Yu, P.; Kanimozhi, C.; Patil, S.; Guha, S.

    2011-12-01

    The spectral photocurrent characteristics of two donor-acceptor diketopyrrolopyrrole (DPP)-based copolymers (PDPP-BBT and TDPP-BBT) blended with a fullerene derivative [6,6]-phenyl C61-butyric acid methyl ester (PCBM) were studied using Fourier-transform photocurrent spectroscopy (FTPS) and monochromatic photocurrent (PC) method. PDPP-BBT:PCBM shows the onset of the lowest charge transfer complex (CTC) state at 1.42 eV, whereas TDPP-BBT:PCBM shows no evidence of the formation of a midgap CTC state. The FTPS and PC spectra of P3HT:PCBM are also compared. The larger singlet state energy difference of TDPP-BBT and PCBM compared to PDPP-BBT/P3HT and PCBM obliterates the formation of a midgap CTC state resulting in an enhanced photovoltaic efficiency over PDPP-BBT:PCBM.

  9. Charge transfer complex in diketopyrrolopyrrole polymers and fullerene blends: Implication for organic solar cell efficiency

    Science.gov (United States)

    Moghe, D.; Yu, P.; Kanimozhi, C.; Patil, S.; Guha, S.

    2012-02-01

    Copolymers based on diketopyrrolopyrrole (DPP) have recently gained potential in organic photovoltaics. When blended with another acceptor such as PCBM, intermolecular charge transfer occurs which may result in the formation of charge transfer (CT) states. We present here the spectral photocurrent characteristics of two donor-acceptor DPP based copolymers, PDPP-BBT and TDPP-BBT, blended with PCBM to identify the CT states. The spectral photocurrent measured using Fourier-transform photocurrent spectroscopy (FTPS) and monochromatic photocurrent (PC) methods are compared with P3HT:PCBM, where the CT state is well known. PDPP-BBT:PCBM shows a stable CT state while TDPP-BBT does not. Our analysis shows that the larger singlet state energy difference between TDPP-BBT and PCBM along with the lower optical gap of TDPP-BBT obliterates the formation of a midgap CT state resulting in an enhanced photovoltaic efficiency over PDPP-BBT:PCBM.

  10. Tactical and operational decisions for operating room planning: efficiency and welfare implications.

    Science.gov (United States)

    Testi, Angela; Tànfani, Elena

    2009-12-01

    In this paper, we evaluate the welfare implications of a 0-1 linear programming model for solving the Operating Room (OR) planning problem, taking a patient perspective. In particular, given a General Surgery Department made up of different surgical sub-specialties sharing a given number of OR block times, the model determines, during a given planning period, the allocation of those blocks to surgical sub-specialties, i.e. the so-called Master Surgical Schedule Problem (MSSP), together with the subsets of elective patients to be operated on in each block time, i.e. the so-called Surgical Case Assignment Problem (SCAP). The innovation of the model is two-fold. The first is that OR allocation is "optimal" if the available OR blocks are scheduled simultaneously to the proper sub-specialty, at the proper time, for the proper patient. The second is defining what "proper" means and including it in the objective function. In our approach, what is important is not the number of patients who can be treated in a given period but how much welfare loss, due to clinical deterioration or other negative consequences related to excessive waiting, can be prevented. In other words, we assume a societal perspective in that we focus on "outcome" (improving health or preventing it from worsening) rather than on "output" (delivered procedures). The model can be used both to develop weekly OR planning with given resources (operational decision), and to perform "what if" scenario analysis regarding how to increase the amount of OR time available for the entire department (tactical decision). The model performance is verified by applying it to a real scenario, the elective admissions of the General Surgery Department of the San Martino University Hospital in Genova (Italy). Despite the complexity of this NP-hard combinatorial optimization problem, computational results indicate that the model can solve all test problems within 600 s, with an average optimality tolerance of less than 0.01%.
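
    To make the structure of such a 0-1 programme concrete, the toy sketch below (written with the PuLP modeller) jointly assigns OR blocks to sub-specialties and patients to blocks while maximizing the welfare loss prevented. All block names, sub-specialties, patients, weights, and capacities are hypothetical, and the actual MSSP/SCAP model in the record is considerably richer.

```python
from pulp import LpBinary, LpMaximize, LpProblem, LpVariable, lpSum

blocks = ["MonAM", "MonPM", "TueAM"]
specialties = ["colorectal", "vascular"]
patients = {  # patient -> (sub-specialty, welfare loss prevented if operated on)
    "p1": ("colorectal", 9.0), "p2": ("colorectal", 4.0),
    "p3": ("vascular", 7.0), "p4": ("vascular", 2.0),
}
capacity = 2  # patients per block (hypothetical)

prob = LpProblem("MSSP_SCAP_sketch", LpMaximize)
y = {(s, b): LpVariable(f"y_{s}_{b}", cat=LpBinary) for s in specialties for b in blocks}
x = {(p, b): LpVariable(f"x_{p}_{b}", cat=LpBinary) for p in patients for b in blocks}

# Objective: total welfare loss prevented by the patients actually scheduled.
prob += lpSum(patients[p][1] * x[p, b] for p in patients for b in blocks)

for b in blocks:
    prob += lpSum(y[s, b] for s in specialties) <= 1      # one sub-specialty per block
    prob += lpSum(x[p, b] for p in patients) <= capacity  # block capacity
for p, (s, _) in patients.items():
    prob += lpSum(x[p, b] for b in blocks) <= 1           # each patient operated on at most once
    for b in blocks:
        prob += x[p, b] <= y[s, b]                        # only in blocks owned by the patient's sub-specialty

prob.solve()
print([(p, b) for (p, b), var in x.items() if var.value() == 1])
```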

  11. Cratering efficiency on coarse-grain targets: Implications for the dynamical evolution of asteroid 25143 Itokawa

    Science.gov (United States)

    Tatsumi, Eri; Sugita, Seiji

    2018-01-01

    Remote sensing observations made by the spacecraft Hayabusa provided the first direct evidence of a rubble-pile asteroid: 25143 Itokawa. Itokawa was found to have a surface structure very different from other explored asteroids; covered with coarse pebbles and boulders ranging at least from cm to meter size. The cumulative size distribution of small circular depressions on Itokawa, most of which may be of impact origin, has a significantly shallower slope than that on the Moon; small craters are highly depleted on Itokawa compared to the Moon. This deficiency of small circular depressions and other features, such as clustered fragments and pits on boulders, suggest that the boulders on Itokawa might behave like armor, preventing crater formation: the 'armoring effect'. This might contribute to the low number density of small crater candidates. In this study, the cratering efficiency reduction due to coarse-grained targets was investigated based on impact experiments at velocities ranging from ∼ 70 m/s to ∼ 6 km/s using two vertical gas gun ranges. We propose a scaling law extended for cratering on coarse-grained targets (i.e., target grain size ≳ projectile size). We have found that the crater efficiency reduction is caused by energy dissipation at the collision site where momentum is transferred from the impactor to the first-contact target grain, and that the armoring effect can be classified into three regimes: (1) gravity scaled regime, (2) reduced size crater regime, or (3) no apparent crater regime, depending on the ratio of the impactor size to the target grain size and the ratio of the impactor kinetic energy to the disruption energy of a target grain. We found that the shallow slope of the circular depressions on Itokawa cannot be accounted for by this new scaling law, suggesting that obliteration processes, such as regolith convection and migration, play a greater role in the depletion of circular depressions on Itokawa. Based on the new extended

  12. Purchasing-power-parity (PPP) approach to energy-efficiency measurement: implications for energy and environmental policy

    International Nuclear Information System (INIS)

    Birol, Fatih; Okogu, B.E.

    1997-01-01

    The weaknesses of the traditional measure of national output are well known and, in recent years, efforts to find more appropriate alternatives have intensified. One such methodology is the PPP approach, which may capture the real value of the GDP. In general, this approach raises the incomes of developing countries by a substantial amount, and this has serious implications for energy indicators on which policies are usually based. A further problem is that non-commercial energy is usually left out of energy-intensity calculations. We analyze the issue of energy-efficiency and carry out calculations based on three approaches: the traditional approach, the PPP-based income approach and an approach which includes non-commercial energy. The results confirm the limitations of using the PPP approach, as it results in a spuriously high energy-efficiency level, suggesting high technological sophistication for developing countries. The inclusion of non-commercial energy gives a more complete picture. The main conclusion is that applying the PPP method in energy-intensity calculations may be misleading. (Author)

  13. Global and regional phosphorus budgets in agricultural systems and their implications for phosphorus-use efficiency

    Directory of Open Access Journals (Sweden)

    F. Lun

    2018-01-01

    Full Text Available The application of phosphorus (P) fertilizer to agricultural soils increased by 3.2 % annually from 2002 to 2010. We quantified in detail the P inputs and outputs of cropland and pasture and the P fluxes through human and livestock consumers of agricultural products on global, regional, and national scales from 2002 to 2010. Globally, half of the total P inputs into agricultural systems accumulated in agricultural soils during this period, with the rest lost to bodies of water through complex flows. Global P accumulation in agricultural soil increased from 2002 to 2010 despite decreases in 2008 and 2009, and the P accumulation occurred primarily in cropland. Despite the global increase in soil P, 32 % of the world's cropland and 43 % of the pasture had soil P deficits. Increasing soil P deficits were found for African cropland vs. increasing P accumulation in eastern Asia. European and North American pasture had a soil P deficit because the continuous removal of biomass P by grazing exceeded P inputs. International trade played a significant role in P redistribution among countries through the flows of P in fertilizer and food among countries. Based on country-scale budgets and trends we propose policy options to potentially mitigate regional P imbalances in agricultural soils, particularly by optimizing the use of phosphate fertilizer and the recycling of waste P. The trend of the increasing consumption of livestock products will require more P inputs to the agricultural system, implying a low P-use efficiency and aggravating P-stock scarcity in the future. The global and regional phosphorus budgets and their PUEs in agricultural systems are publicly available at https://doi.pangaea.de/10.1594/PANGAEA.875296.

  14. Low transient storage and uptake efficiencies in seven agricultural streams: implications for nutrient demand.

    Science.gov (United States)

    Sheibley, Richard W; Duff, John H; Tesoriero, Anthony J

    2014-11-01

    We used mass load budgets, transient storage modeling, and nutrient spiraling metrics to characterize nitrate (NO3-), ammonium (NH4+), and inorganic phosphorus (SRP) demand in seven agricultural streams across the United States and to identify in-stream services that may control these conditions. Retention of one or all nutrients was observed in all but one stream, but demand for all nutrients was low relative to the mass in transport. Transient storage metrics correlated with NO3- retention but not with NH4+ or SRP retention, suggesting that in-stream services associated with transient storage and stream water residence time could influence reach-scale NO3- demand. However, because the fraction of median reach-scale travel time due to transient storage was ≤1.2% across the sites, only a relatively small demand for NO3- could be generated by transient storage. In contrast, net uptake of nutrients from the water column calculated from nutrient spiraling metrics was not significant at any site because uptake lengths calculated from background nutrient concentrations were statistically insignificant and therefore much longer than the study reaches. These results suggest that low transient storage coupled with high surface water NO3- inputs has resulted in uptake efficiencies that are not sufficient to offset groundwater inputs of N. Nutrient retention has been linked to physical and hydrogeologic elements that drive flow through transient storage areas where residence time and biotic contact are maximized; however, our findings indicate that similar mechanisms are unable to generate a significant nutrient demand in these streams relative to the loads. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
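
    For reference, the nutrient spiraling metrics mentioned above (uptake length, uptake velocity, and areal uptake) are linked by the standard spiraling relations; the short sketch below evaluates them from a reach-scale longitudinal uptake rate. All numbers in the example are hypothetical.

```python
def spiraling_metrics(k_w, Q, w, C):
    """Standard nutrient spiraling relations.

    k_w : longitudinal uptake rate (1/m), from the downstream decline in concentration
    Q   : discharge (m^3/s)
    w   : wetted channel width (m)
    C   : ambient nutrient concentration (mg/m^3)
    """
    S_w = 1.0 / k_w      # uptake length (m): mean distance travelled before uptake
    V_f = Q / (w * S_w)  # uptake velocity (m/s)
    U = V_f * C          # areal uptake rate (mg m^-2 s^-1)
    return S_w, V_f, U


# Hypothetical reach with weak uptake (long uptake length), as in the streams above.
print(spiraling_metrics(k_w=1e-4, Q=0.5, w=4.0, C=2000.0))
```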

  15. Global and regional phosphorus budgets in agricultural systems and their implications for phosphorus-use efficiency

    Science.gov (United States)

    Lun, Fei; Liu, Junguo; Ciais, Philippe; Nesme, Thomas; Chang, Jinfeng; Wang, Rong; Goll, Daniel; Sardans, Jordi; Peñuelas, Josep; Obersteiner, Michael

    2018-01-01

    The application of phosphorus (P) fertilizer to agricultural soils increased by 3.2 % annually from 2002 to 2010. We quantified in detail the P inputs and outputs of cropland and pasture and the P fluxes through human and livestock consumers of agricultural products on global, regional, and national scales from 2002 to 2010. Globally, half of the total P inputs into agricultural systems accumulated in agricultural soils during this period, with the rest lost to bodies of water through complex flows. Global P accumulation in agricultural soil increased from 2002 to 2010 despite decreases in 2008 and 2009, and the P accumulation occurred primarily in cropland. Despite the global increase in soil P, 32 % of the world's cropland and 43 % of the pasture had soil P deficits. Increasing soil P deficits were found for African cropland vs. increasing P accumulation in eastern Asia. European and North American pasture had a soil P deficit because the continuous removal of biomass P by grazing exceeded P inputs. International trade played a significant role in P redistribution among countries through the flows of P in fertilizer and food among countries. Based on country-scale budgets and trends we propose policy options to potentially mitigate regional P imbalances in agricultural soils, particularly by optimizing the use of phosphate fertilizer and the recycling of waste P. The trend of the increasing consumption of livestock products will require more P inputs to the agricultural system, implying a low P-use efficiency and aggravating P-stock scarcity in the future. The global and regional phosphorus budgets and their PUEs in agricultural systems are publicly available at https://doi.pangaea.de/10.1594/PANGAEA.875296.

  16. Habitat reclamation plan to mitigate for the loss of habitat due to oil and gas production activities under maximum efficient rate, Naval Petroleum Reserve No. 1, Kern County, California

    International Nuclear Information System (INIS)

    Anderson, D.C.

    1994-11-01

    Activities associated with oil and gas development under the Maximum Efficiency Rate (MER) from 1975 to 2025 will disturb approximately 3,354 acres. Based on 1976 aerial photographs and using a dot grid methodology, the amount of land disturbed prior to MER is estimated to be 3,603 acres. Disturbances on Naval Petroleum Reserve No. 1 (NPR-1) were mapped using 1988 aerial photography and a geographical information system. A total of 6,079 acres were classified as disturbed as of June, 1988. The overall objective of this document is to provide specific information relating to the on-site habitat restoration program at NPRC. The specific objectives, which relate to the terms and conditions that must be met by DOE as a means of protecting the San Joaquin kit fox from incidental take, are to: (1) determine the amount and location of disturbed lands on NPR-1 and the number of acres disturbed as a result of MER activities, (2) develop a long-term (10-year) program to restore on-site acreage equivalent to that lost from prior project-related actions, and (3) examine alternative means to offset kit fox habitat loss.

  17. Maximum Acceleration Recording Circuit

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1995-01-01

    Coarsely digitized maximum levels recorded in blown fuses. Circuit feeds power to accelerometer and makes nonvolatile record of maximum level to which output of accelerometer rises during measurement interval. In comparison with inertia-type single-preset-trip-point mechanical maximum-acceleration-recording devices, circuit weighs less, occupies less space, and records accelerations within narrower bands of uncertainty. In comparison with prior electronic data-acquisition systems designed for same purpose, circuit simpler, less bulky, consumes less power, costs less, and eliminates need for retrieval and analysis of data recorded in magnetic or electronic memory devices. Circuit used, for example, to record accelerations to which commodities subjected during transportation on trucks.

  18. Neutron spectra unfolding with maximum entropy and maximum likelihood

    International Nuclear Information System (INIS)

    Itoh, Shikoh; Tsunoda, Toshiharu

    1989-01-01

    A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection, and always yields a positive solution over the whole energy range. Moreover, the theory unifies the overdetermined and underdetermined problems. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, is eliminated by virtue of the principle. An approximate expression of the covariance matrix for the resultant spectra is also presented. An efficient algorithm to solve the nonlinear system that arises in the present study has been established. Results of computer simulation showed the effectiveness of the present theory. (author)

  19. Maximum Quantum Entropy Method

    OpenAIRE

    Sim, Jae-Hoon; Han, Myung Joon

    2018-01-01

    Maximum entropy method for analytic continuation is extended by introducing quantum relative entropy. This new method is formulated in terms of matrix-valued functions and therefore invariant under arbitrary unitary transformation of input matrix. As a result, the continuation of off-diagonal elements becomes straightforward. Without introducing any further ambiguity, the Bayesian probabilistic interpretation is maintained just as in the conventional maximum entropy method. The applications o...

  20. Maximum power demand cost

    International Nuclear Information System (INIS)

    Biondi, L.

    1998-01-01

    The charging for a service is a supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While no problem arises about the former, the issue is more complicated for the latter and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some.

  1. Implications of the energy efficiency in the attenuation of environmental impacts and the conservation of the energy: The case of the Thermal Power stations to Gas in Colombia

    International Nuclear Information System (INIS)

    Amell A, A.; Cadavid, F.J.

    1999-01-01

    This paper presents a comparative analysis of the implications for Colombia, from the standpoint of energy-resource conservation and environmental impact, of carrying out natural gas thermal power projects with high- and low-efficiency technologies.

  2. Maximum likely scale estimation

    DEFF Research Database (Denmark)

    Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo

    2005-01-01

    A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...

  3. Robust Maximum Association Estimators

    NARCIS (Netherlands)

    A. Alfons (Andreas); C. Croux (Christophe); P. Filzmoser (Peter)

    2017-01-01

    The maximum association between two multivariate variables X and Y is defined as the maximal value that a bivariate association measure between one-dimensional projections αX and αY can attain. Taking the Pearson correlation as projection index results in the first canonical correlation
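
    The record is truncated here, but the quantity it refers to, the first canonical correlation (the largest Pearson correlation attainable between one-dimensional projections of X and Y), can be computed in a few lines; a minimal sketch, assuming full-column-rank data matrices, is shown below with hypothetical data.

```python
import numpy as np

def first_canonical_correlation(X, Y):
    """Largest correlation attainable between projections a'X and b'Y."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    qx, _ = np.linalg.qr(Xc)  # orthonormal basis for the column space of X
    qy, _ = np.linalg.qr(Yc)
    # The singular values of qx' qy are the canonical correlations.
    return float(np.linalg.svd(qx.T @ qy, compute_uv=False)[0])


rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
Y = X @ rng.normal(size=(3, 2)) + 0.5 * rng.normal(size=(200, 2))
print(first_canonical_correlation(X, Y))  # close to 1 for strongly related blocks
```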

  4. Diagnostic efficiency of abattoir meat inspection service in Ethiopia to detect carcasses infected with Mycobacterium bovis: implications for public health.

    Science.gov (United States)

    Biffa, Demelash; Bogale, Asseged; Skjerve, Eystein

    2010-08-06

    Bovine Tuberculosis (BTB) is a widespread and endemic disease of cattle in Ethiopia posing a significant threat to public health. Regular surveillance by skin test, bacteriology and molecular methods is not feasible due to lack of resource. Thus, routine abattoir (RA) inspection will continue to play a key role for national surveillance. We evaluated efficiency of RA inspection for diagnosis of Mycobacterium bovis infection and discussed its public health implications in light of a high risk of human exposure. The study was conducted in five abattoirs: Addis Ababa, Adama, Hawassa, Yabello and Melge-Wondo abattoirs. The efficiency of routine abattoir (RA) inspection was validated in comparison to detailed abattoir (DA) inspection, followed by culture and microscopy (CM) and region of difference (RD) deletion analysis. Diagnostic accuracies (with corresponding measures of statistical uncertainty) were determined by computing test property statistics (sensitivity and specificity) and likelihood estimations using web-based SISA diagnostic statistics software. Post-test probability of detecting TB infected carcasses was estimated using nomograms. Agreement between RA and DA inspections was measured using kappa statistics. The study was conducted and reported in accordance with standards for reporting of diagnostic accuracy (STARD) requirements. Both routine and detailed meat inspection protocols were performed on a subpopulation of 3322 cattle selected randomly from among 78,269 cattle slaughtered during the study period. Three hundred thirty seven carcasses identified through detailed meat inspection protocols were subjected to culture and microscopy; of the 337, a subset of 105 specimens for culture and microscopy were subjected to further molecular testing. There was a substantial agreement between RA and DA inspections in Addis Ababa (Kappa = 0.7) and Melge-Wondo abattoirs (Kappa = 0.67). In Adama, Hawassa and Yabello abattoirs, the agreement was however poor (Kappa

  5. Political economy constraints on carbon pricing policies: What are the implications for economic efficiency, environmental efficacy, and climate policy design?

    International Nuclear Information System (INIS)

    Jenkins, Jesse D.

    2014-01-01

    Economists traditionally view a Pigouvian fee on carbon dioxide and other greenhouse gas emissions, either via carbon taxes or emissions caps and permit trading (“cap-and-trade”), as the economically optimal or “first-best” policy to address climate change-related externalities. Yet several political economy factors can severely constrain the implementation of these carbon pricing policies, including opposition of industrial sectors with a concentration of assets that would lose considerable value under such policies; the collective action nature of climate mitigation efforts; principal agent failures; and a low willingness-to-pay for climate mitigation by citizens. Real-world implementations of carbon pricing policies can thus fall short of the economically optimal outcomes envisioned in theory. Consistent with the general theory of the second-best, the presence of binding political economy constraints opens a significant “opportunity space” for the design of creative climate policy instruments with superior political feasibility, economic efficiency, and environmental efficacy relative to the constrained implementation of carbon pricing policies. This paper presents theoretical political economy frameworks relevant to climate policy design and provides corroborating evidence from the United States context. It concludes with a series of implications for climate policy making and argues for the creative pursuit of a mix of second-best policy instruments. - Highlights: • Political economy constraints can bind carbon pricing policies. • These constraints can prevent implementation of theoretically optimal carbon prices. • U.S. household willingness-to-pay for climate policy likely falls in the range of $80–$200 per year. • U.S. carbon prices may be politically constrained to as low as $2–$8 per ton of CO2. • An opportunity space exists for improvements in climate policy design and outcomes

  6. Maximum entropy methods

    International Nuclear Information System (INIS)

    Ponman, T.J.

    1984-01-01

    For some years now two different expressions have been in use for maximum entropy image restoration and there has been some controversy over which one is appropriate for a given problem. Here two further entropies are presented and it is argued that there is no single correct algorithm. The properties of the four different methods are compared using simple 1D simulations with a view to showing how they can be used together to gain as much information as possible about the original object. (orig.)

  7. The last glacial maximum

    Science.gov (United States)

    Clark, P.U.; Dyke, A.S.; Shakun, J.D.; Carlson, A.E.; Clark, J.; Wohlfarth, B.; Mitrovica, J.X.; Hostetler, S.W.; McCabe, A.M.

    2009-01-01

    We used 5704 ¹⁴C, ¹⁰Be, and ³He ages that span the interval from 10,000 to 50,000 years ago (10 to 50 ka) to constrain the timing of the Last Glacial Maximum (LGM) in terms of global ice-sheet and mountain-glacier extent. Growth of the ice sheets to their maximum positions occurred between 33.0 and 26.5 ka in response to climate forcing from decreases in northern summer insolation, tropical Pacific sea surface temperatures, and atmospheric CO2. Nearly all ice sheets were at their LGM positions from 26.5 ka to 19 to 20 ka, corresponding to minima in these forcings. The onset of Northern Hemisphere deglaciation 19 to 20 ka was induced by an increase in northern summer insolation, providing the source for an abrupt rise in sea level. The onset of deglaciation of the West Antarctic Ice Sheet occurred between 14 and 15 ka, consistent with evidence that this was the primary source for an abrupt rise in sea level ~14.5 ka.

  8. Maximum Entropy Fundamentals

    Directory of Open Access Journals (Sweden)

    F. Topsøe

    2001-09-01

    Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow-up on here. It leads to the consideration of a certain game, the Code Length Game and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature with hundreds of applications pertaining to several different fields and will also here serve as important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
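
    As an illustration of the Mean Energy Model mentioned above, the sketch below computes the maximum-entropy distribution over a finite set of states subject to a single mean-"energy" constraint. The solution takes the familiar Gibbs form p_i ∝ exp(-βE_i), with β chosen so that the mean matches; the state energies and the target mean in the example are hypothetical.

```python
import numpy as np
from scipy.optimize import brentq

def maxent_mean_energy(energies, mean_energy):
    """Maximum-entropy distribution subject to a prescribed mean 'energy'.

    The target mean must lie strictly between min(energies) and max(energies).
    """
    E = np.asarray(energies, dtype=float)

    def mean_at(beta):
        w = np.exp(-beta * (E - E.min()))  # shift exponent for numerical stability
        p = w / w.sum()
        return p @ E

    beta = brentq(lambda b: mean_at(b) - mean_energy, -50.0, 50.0)
    w = np.exp(-beta * (E - E.min()))
    return w / w.sum()


p = maxent_mean_energy([0.0, 1.0, 2.0, 3.0], mean_energy=1.0)
print(p, p @ np.array([0.0, 1.0, 2.0, 3.0]))  # Gibbs-like weights with mean 1.0
```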

  9. Probable maximum flood control

    International Nuclear Information System (INIS)

    DeGabriele, C.E.; Wu, C.L.

    1991-11-01

    This study proposes preliminary design concepts to protect the waste-handling facilities and all shaft and ramp entries to the underground from the probable maximum flood (PMF) in the current design configuration for the proposed Nevada Nuclear Waste Storage Investigation (NNWSI) repository. Flood protection provisions were furnished by the United States Bureau of Reclamation (USBR) or developed from USBR data. Proposed flood protection provisions include site grading, drainage channels, and diversion dikes. Figures are provided to show these proposed flood protection provisions at each area investigated. These areas are the central surface facilities (including the waste-handling building and waste treatment building), tuff ramp portal, waste ramp portal, men-and-materials shaft, emplacement exhaust shaft, and exploratory shafts facility

  10. Introduction to maximum entropy

    International Nuclear Information System (INIS)

    Sivia, D.S.

    1988-01-01

    The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. We review the need for such methods in data analysis and show, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. We conclude with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab

  11. Solar maximum observatory

    International Nuclear Information System (INIS)

    Rust, D.M.

    1984-01-01

    The successful retrieval and repair of the Solar Maximum Mission (SMM) satellite by Shuttle astronauts in April 1984 permitted continuance of solar flare observations that began in 1980. The SMM carries a soft X ray polychromator, gamma ray, UV and hard X ray imaging spectrometers, a coronagraph/polarimeter and particle counters. The data gathered thus far indicated that electrical potentials of 25 MeV develop in flares within 2 sec of onset. X ray data show that flares are composed of compressed magnetic loops that have come too close together. Other data have been taken on mass ejection, impacts of electron beams and conduction fronts with the chromosphere and changes in the solar radiant flux due to sunspots. 13 references

  12. Introduction to maximum entropy

    International Nuclear Information System (INIS)

    Sivia, D.S.

    1989-01-01

    The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. The author reviews the need for such methods in data analysis and shows, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. He concludes with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab

  13. Functional Maximum Autocorrelation Factors

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg

    2005-01-01

    Purpose: We aim at data where samples of an underlying function are observed in a spatial or temporal layout. Examples of underlying functions are reflectance spectra and biological shapes. We apply functional models based on smoothing splines and generalize the functional PCA (Ramsay, 1997) to functional maximum autocorrelation factors (MAF) (Switzer, 1985; Larsen, 2001). We apply the method to biological shapes as well as reflectance spectra. Methods: MAF seeks linear combinations of the original variables that maximize autocorrelation between … Functional MAF outperforms the functional PCA in concentrating the 'interesting' spectra/shape variation in one end of the eigenvalue spectrum and allows for easier interpretation of effects. Conclusions: Functional MAF analysis is a useful method for extracting low-dimensional models of temporally or spatially …
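
    A compact way to compute ordinary (non-functional) maximum autocorrelation factors for temporally or spatially ordered multivariate samples, the construction this record generalizes with smoothing splines, is to solve a generalized eigenproblem between the covariance of the data and the covariance of its first differences. The sketch below is a minimal illustration with hypothetical data, not the functional MAF method of the record.

```python
import numpy as np
from scipy.linalg import eigh

def maf(X):
    """Maximum autocorrelation factors for rows ordered in time or space.

    Returns the factor scores and the autocorrelation of each factor,
    ordered from most to least autocorrelated.
    """
    Xc = X - X.mean(axis=0)
    S = np.cov(Xc, rowvar=False)   # covariance of the data
    D = np.diff(Xc, axis=0)        # first differences along the ordering
    Sd = np.cov(D, rowvar=False)   # covariance of the differences
    # Minimizing Var(difference)/Var(factor) maximizes autocorrelation:
    # solve Sd v = lambda S v; small lambda means a highly autocorrelated factor.
    eigvals, eigvecs = eigh(Sd, S)
    return Xc @ eigvecs, 1.0 - 0.5 * eigvals


rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 300)
X = np.column_stack([np.sin(t), np.cos(t), rng.normal(size=t.size)])
scores, autocorr = maf(X)
print(autocorr)  # smooth components near 1, the pure-noise component near 0
```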

  14. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin

    2015-01-01

    In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.
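
    The core idea, replacing a squared-error style loss with a Gaussian-kernel correntropy objective so that grossly mislabeled samples receive vanishing weight, can be sketched in a few lines. The code below is a toy gradient-ascent illustration for a linear predictor under an L2 penalty, not the authors' alternating optimization for class-label predictors; the kernel width, regularization weight, learning rate, and data are hypothetical.

```python
import numpy as np

def mcc_linear_predictor(X, y, sigma=2.0, lam=1e-3, lr=0.1, iters=2000):
    """Maximize (1/n) * sum_i exp(-(y_i - x_i.w)^2 / (2 sigma^2)) - lam * ||w||^2."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        r = y - X @ w                               # residuals
        k = np.exp(-r**2 / (2.0 * sigma**2))        # per-sample correntropy weight
        grad = (X.T @ (k * r)) / (n * sigma**2) - 2.0 * lam * w
        w += lr * grad                              # gradient ascent
    return w


rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = X @ np.array([1.0, -2.0])
y[:10] += 15.0                                      # a few grossly corrupted labels
print(mcc_linear_predictor(X, y))                   # close to [1, -2] despite outliers
```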

  16. Scintillation counter, maximum gamma aspect

    International Nuclear Information System (INIS)

    Thumim, A.D.

    1975-01-01

    A scintillation counter, particularly for counting gamma ray photons, includes a massive lead radiation shield surrounding a sample-receiving zone. The shield is disassembleable into a plurality of segments to allow facile installation and removal of a photomultiplier tube assembly, the segments being so constructed as to prevent straight-line access of external radiation through the shield into radiation-responsive areas. Provisions are made for accurately aligning the photomultiplier tube with respect to one or more sample-transmitting bores extending through the shield to the sample receiving zone. A sample elevator, used in transporting samples into the zone, is designed to provide a maximum gamma-receiving aspect to maximize the gamma detecting efficiency. (U.S.)

  17. Solar maximum mission

    International Nuclear Information System (INIS)

    Ryan, J.

    1981-01-01

    By understanding the sun, astrophysicists hope to expand this knowledge to understanding other stars. To study the sun, NASA launched a satellite on February 14, 1980. The project is named the Solar Maximum Mission (SMM). The satellite conducted detailed observations of the sun in collaboration with other satellites and ground-based optical and radio observations until its failure 10 months into the mission. The main objective of the SMM was to investigate one aspect of solar activity: solar flares. A brief description of the flare mechanism is given. The SMM satellite was valuable in providing information on where and how a solar flare occurs. A sequence of photographs of a solar flare taken from SMM satellite shows how a solar flare develops in a particular layer of the solar atmosphere. Two flares especially suitable for detailed observations by a joint effort occurred on April 30 and May 21 of 1980. These flares and observations of the flares are discussed. Also discussed are significant discoveries made by individual experiments

  18. Maximum likelihood of phylogenetic networks.

    Science.gov (United States)

    Jin, Guohua; Nakhleh, Luay; Snir, Sagi; Tuller, Tamir

    2006-11-01

    Horizontal gene transfer (HGT) is believed to be ubiquitous among bacteria, and plays a major role in their genome diversification as well as their ability to develop resistance to antibiotics. In light of its evolutionary significance and implications for human health, developing accurate and efficient methods for detecting and reconstructing HGT is imperative. In this article we provide a new HGT-oriented likelihood framework for many problems that involve phylogeny-based HGT detection and reconstruction. Besides the formulation of various likelihood criteria, we show that most of these problems are NP-hard, and offer heuristics for efficient and accurate reconstruction of HGT under these criteria. We implemented our heuristics and used them to analyze biological as well as synthetic data. In both cases, our criteria and heuristics exhibited very good performance with respect to identifying the correct number of HGT events as well as inferring their correct location on the species tree. Implementation of the criteria as well as heuristics and hardness proofs are available from the authors upon request. Hardness proofs can also be downloaded at http://www.cs.tau.ac.il/~tamirtul/MLNET/Supp-ML.pdf

  19. Question structure impacts efficiency and performance in an interactive guessing game: implications for strategy engagement and executive functioning.

    Science.gov (United States)

    Longenecker, Julia; Liu, Kristy; Chen, Eric Y H

    2012-12-30

    In an interactive guessing game, controls had higher performance and efficiency than patients with schizophrenia in correct trials. Patients' difficulties generating efficient questions suggest an increased taxation of working memory and an inability to engage an appropriate strategy, leading to impulsive behavior and reduced success. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  20. Climatic implications of glacial evolution in the Tröllaskagi peninsula (northern Iceland) since the Little Ice Age maximum. The cases of the Gljúfurárjökull and Tungnahryggsjökull glaciers

    Science.gov (United States)

    Fernández-Fernández, José M.; Andrés, Nuria; Brynjólfsson, Skafti; Sæmundsson, Þorsteinn; Palacios, David

    2017-04-01

    The Tröllaskagi peninsula is located in northern Iceland, between meridians 19°30'W and 18°10'W, jutting out into the North Atlantic to latitude 66°12'N and joining the central highlands to the south. About 150 glaciers located on the Tröllaskagi peninsula reached their Holocene maximum extent during the Little Ice Age (LIA) maximum at the end of the 19th century. The sudden warming at the turn of the 20th century triggered a continuous retreat from the LIA maximum positions, interrupted by a reversal trend during the mid-seventies and eighties in response to a brief period of climate cooling. The aim of this paper is to analyze the relationships between glacial and climatic evolution since the LIA maximum. For this reason, we selected three small debris-free glaciers: Gljúfurárjökull, and western and eastern Tungnahryggsjökull, at the headwalls of Skíðadalur and Kolbeinsdalur, as their absence of debris cover makes them sensitive to climatic fluctuations. To achieve this purpose, we used ArcGIS to map the glacier extent at the LIA maximum and at several subsequent dates, based on four georeferenced aerial photos (1946, 1985, 1994 and 2000) and a 2005 SPOT satellite image. Then, the Equilibrium-Line Altitude (ELA) was calculated by applying the Accumulation Area Ratio (AAR) and Area Altitude Balance Ratio (AABR) approaches. Climatological data series from the nearby weather stations were used in order to analyze climate development and to estimate precipitation at the ELA with different numerical models. Our results show considerable changes in the three debris-free glaciers and demonstrate their sensitivity to climatic fluctuations. As a result of the abrupt climatic transition of the 20th century, the following warm 25-year period and the warming that started in the late eighties, the three glaciers retreated by ca. 990-1330 m from the LIA maximum to 2005, supported by a 40-metre ELA rise and a reduction of their area and volume of 25% and 33% on average
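
    For context, the Accumulation Area Ratio (AAR) approach mentioned above fixes the fraction of the glacier surface assumed to lie in the accumulation zone and reads the ELA off the glacier's hypsometry. The sketch below is a minimal illustration of that step; the AAR value and the elevations are hypothetical, not those used in the study.

```python
import numpy as np

def ela_from_aar(cell_elevations_m, aar=0.6):
    """Equilibrium-Line Altitude by the AAR method.

    The ELA is the elevation above which the chosen fraction (AAR) of the
    glacier area lies; cell_elevations_m are elevations of equal-area DEM cells.
    """
    return float(np.quantile(np.asarray(cell_elevations_m), 1.0 - aar))


# Hypothetical hypsometry: the ELA such that 60% of the cells lie above it.
print(ela_from_aar(np.linspace(900.0, 1500.0, 601), aar=0.6))  # ~1140 m
```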

  1. Sharp Reduction in Maximum LEU Fuel Temperatures during Loss of Coolant Accidents in a PBMR DPP-400 core by means of Optimised Placement of Neutron Poisons: Implications for Pu fuel-cycles

    International Nuclear Information System (INIS)

    Serfontein, Dawid E.

    2013-01-01

    The optimisation of the power profiles by means of placing an optimised distribution of neutron poison concentrations in the central reflector resulted in a large reduction in the maximum DLOFC temperature, which may produce far reaching safety and licensing benefits. Unfortunately this came at the expense of losing the ability to execute effective load following. The neutron poisons also caused a large reduction of 22% in the average burn-up of the fuel. Further optimisation is required to counter this reduction in burn-up

  2. How much carbon offsetting and where? Implications of efficiency, effectiveness, and ethicality considerations for public opinion formation

    International Nuclear Information System (INIS)

    Anderson, Brilé; Bernauer, Thomas

    2016-01-01

    A fundamental policy design choice in government-led climate change mitigation is: what role should flexibility mechanisms like carbon offsetting play in reducing greenhouse gas (GHG) emissions. Since public opinion affects the policy choices of government, we investigate how arguments regarding carbon offsetting's economic efficiency, effectiveness, and ethicality, which have been key points in the public debate, impact the public's preferences. We fielded an online framing experiment in the United States (N=995) to empirically identify how arguments for and against carbon offsetting influence public preferences for the inclusion of offsetting in national GHG mitigation policy. We find that the public's support for international offsetting increases and support for reductions at their source (i.e. within firms' own operations) diminishes when considerations of economic efficiency gains are at the forefront. Support for offsetting declines when individuals are confronted with arguments concerning its effectiveness and ethicality, which suggests that future policies will require clear standards of additionality in order to address these concerns. Moreover, we find that how carbon offsetting is framed matters even amongst climate skeptics and support could potentially be enhanced via improved communication on efficiency gains. - Highlights: •We use a framing survey experiment to study public opinion on carbon offsetting. •Efficiency gains increase public support for international carbon offsetting. •Concerns about effectiveness/additionality and ethicality reduce support. •More information on efficiency gains and strengthening additionality could help increase support.

  3. The relationship between house size and life cycle energy demand: Implications for energy efficiency regulations for buildings

    International Nuclear Information System (INIS)

    Stephan, André; Crawford, Robert H.

    2016-01-01

    House size has significantly increased over the recent decades in many countries. Larger houses often have a higher life cycle energy demand due to their increased use of materials and larger area to heat, cool and light. Yet, most energy efficiency regulations for buildings fail to adequately include requirements for addressing the energy demand associated with house size. This study quantifies the effect of house size on life cycle energy demand in order to inform future regulations. It uses a parametric model of a typical detached house in Melbourne, Australia and varies its floor area from 100 to 392 m² for four different household sizes. Both initial and recurrent embodied energy requirements are quantified using input-output-based hybrid analysis and operational energy is calculated in primary energy terms over 50 years. Results show that the life cycle energy demand increases at a slower rate compared to house size. Expressing energy efficiency per m² therefore favours large houses while these require more energy. Also, embodied energy represents 26–50% across all variations. Building energy efficiency regulations should incorporate embodied energy, correct energy intensity thresholds for house size and use multiple functional units to measure efficiency. These measures may help achieve greater net energy reductions. - Highlights: • The life cycle energy demand (LCE) is calculated for 90 house sizes and 4 household sizes. • The LCE is sublinearly correlated with house size. • Larger houses appear to be more energy efficient per m² while they use more energy overall. • Embodied energy (EE) represents up to 52% of the LCE over 50 years. • Building energy efficiency regulations need to consider house size and EE.

  4. Long-Term Urban Growth and Land Use Efficiency in Southern Europe: Implications for Sustainable Land Management

    Directory of Open Access Journals (Sweden)

    Marco Zitti

    2015-03-01

    Full Text Available The present study illustrates a multidimensional analysis of an indicator of urban land use efficiency (per-capita built-up area, LUE) in mainland Attica, a Mediterranean urban region, along different expansion waves (1960–2010): compaction and densification in the 1960s, dispersed growth along the coasts and on Athens’ fringe in the 1970s, fringe consolidation in the 1980s, moderate re-polarization and discontinuous expansion in the 1990s and sprawl in remote areas in the 2000s. The non-linear trend in LUE (a continuous increase up to the 1980s and a moderate decrease in 1990 and 2000 preceding the rise observed over the last decade) reflects Athens’ expansion waves. A total of 23 indicators were collected by decade for each municipality of the study area with the aim of identifying the drivers of land use efficiency. In 1960, municipalities with low efficiency in the use of land were concentrated on both coastal areas and Athens’ fringe, while in 2010, the lowest efficiency rate was observed in the most remote, rural areas. Typical urban functions (e.g., mixed land uses, multiple-use buildings, vertical profile) are the variables most associated with high efficiency in the use of land. Policies for sustainable land management should consider local and regional factors shaping land use efficiency, promoting self-contained expansion and more tightly protecting rural and remote land from dispersed urbanization. LUE is a promising indicator reflecting the increased complexity of growth patterns and may anticipate future urban trends.

  5. Economic efficiency and cost implications of habitat conservation: An example in the context of the Edwards Aquifer region

    Science.gov (United States)

    Gillig, Dhazn; McCarl, Bruce A.; Jones, Lonnie L.; Boadu, Frederick

    2004-04-01

    Groundwater management in the Edwards Aquifer in Texas is in the process of moving away from a traditional right of capture economic regime toward a more environmentally sensitive scheme designed to preserve endangered species habitats. This study explores economic and environmental implications of proposed groundwater management and water development strategies under a proposed regional Habitat Conservation Plan. Results show that enhancing the habitat by augmenting water flow costs $109-1427 per acre-foot and that regional water development would be accelerated by the more extreme possibilities under the Habitat Conservation Plan. The findings also indicate that a water market would improve regional welfare and lower water development but worsen environmental attributes.

  6. Consumer preferences and willingness to pay for compact fluorescent lighting: Policy implications for energy efficiency promotion in Saint Lucia

    International Nuclear Information System (INIS)

    Reynolds, Travis; Kolodinsky, Jane; Murray, Byron

    2012-01-01

    This article examines consumer willingness to pay for energy-saving compact fluorescent light bulbs using the results of a stated preferences study conducted in the Caribbean island nation of Saint Lucia. Geographic location, low income status, and age are found to affect willingness-to-pay for compact fluorescent lighting, while higher income status and other demographic variables appear to have minimal or no significant impacts. Energy efficiency knowledge is associated with increased willingness-to-pay for energy-efficient bulbs and with increased use of compact fluorescent lighting. Contrary to theoretical expectations, past purchase of compact fluorescent bulbs is found to have no impact on self-reported willingness to pay. We hypothesize that this null result is due to the recent emergence of low-cost, low-quality compact fluorescent bulbs in the Saint Lucian lighting market, which may be negatively influencing consumers' preferences and expectations regarding energy-efficient lighting. Findings support the argument that government-sponsored education and subsidy programs will likely result in increased use of energy-saving technologies in Saint Lucia. But such behavioral changes may not be sustained in the long run unless low quality bulbs – the “lemons” of the compact fluorescent lighting market – can be clearly identified by consumers. - Highlights: ▶ We model how knowledge, attitudes, and past purchase affect CFL adoption. ▶ Saint Lucian consumers have some knowledge of and favorable attitudes toward CFLs. ▶ Energy efficiency knowledge increases stated willingness-to-pay (WTP) for CFLs. ▶ Past purchase does not increase WTP; low-quality ‘lemons’ may influence consumers. ▶ Policy can lower consumer risks in lighting markets where low quality bulbs exist.

  7. Release of Corrosive Species above the Grate in a Waste Boiler and the Implication for Improved Electrical Efficiency

    DEFF Research Database (Denmark)

    Bøjer, Martin; Jensen, Peter Arendt; Dam-Johansen, Kim

    2010-01-01

    A relatively low electrical efficiency of 20−25% is obtained in typical west European waste boilers. Ash species released from the grate combustion zone form boiler deposits with high concentrations of Cl, Na, K, Zn, Pb, and S that cause corrosion of superheater tubes at high temperature… The superheater steam temperature has to be limited to around 425 °C, and thereby, the electrical efficiency remains low compared to wood or coal-fired boilers. If a separate part of the flue gas from the grate has a low content of corrosive species, it may be used to superheat steam to a higher temperature…, and thereby, the electrical efficiency of the plant can be increased. In this study, the local temperature, the gas concentrations of CO, CO2, and O2, and the release of the volatile elements Cl, S, Na, K, Pb, Zn, Cu, and Sn were measured above the grate in a waste boiler to investigate if a selected fraction…

  8. Effect of pre-heating on the chemical oxidation efficiency: implications for the PAH availability measurement in contaminated soils.

    Science.gov (United States)

    Biache, Coralie; Lorgeoux, Catherine; Andriatsihoarana, Sitraka; Colombano, Stéfan; Faure, Pierre

    2015-04-09

    Three chemical oxidation treatments (KMnO4, H2O2 and Fenton-like) were applied on three PAH-contaminated soils presenting different properties to determine the potential use of these treatments to evaluate the available PAH fraction. In order to increase the available fraction, a pre-heating (100 °C under N2 for one week) was also applied on the samples prior oxidant addition. PAH and extractable organic matter contents were determined before and after treatment applications. KMnO4 was efficient to degrade PAHs in all the soil samples and the pre-heating slightly improved its efficiency. H2O2 and Fenton-like treatments presented low efficiency to degrade PAH in the soil presenting poor PAH availability, however, the PAH degradation rates were improved with the pre-heating. Consequently H2O2-based treatments (including Fenton-like) are highly sensitive to contaminant availability and seem to be valid methods to estimate the available PAH fraction in contaminated soils. Copyright © 2014 Elsevier B.V. All rights reserved.

  9. Policy implications of the purchasing intentions towards energy-efficient appliances among China’s urban residents: Do subsidies work?

    International Nuclear Information System (INIS)

    Wang, Zhaohua; Wang, Xiaomeng; Guo, Dongxue

    2017-01-01

    Incentive policies are widely used by many countries and regions to sway purchase, retail stocking, and production decisions toward energy-efficient products, so the effectiveness of such subsidies has been of much concern to scholars. This research focused on whether or not subsidy policies have guided people's intentions and behaviours. We investigated 436 urban residents from 22 provinces in China, covering the seven major geographic regions, and made an empirical analysis of the factors influencing Chinese urban residents’ purchasing intentions towards energy-efficient appliances based on a structural equation model. On the theoretical side, we extended the theory of planned behaviour. Our results show that the variable “POLICY” is insignificant, which indicates that the policy environment and media propaganda in China do not have a significant effect on Chinese residents’ willingness to pay for energy-efficient appliances. By contrast, the residents’ environmental awareness, past purchasing experiences, social relationships, age, and level of education all exert a significant influence on Chinese residents’ purchasing intentions. Finally, based on the above research results, corresponding policy suggestions, mainly concerning the timing, target and method of subsidies, are offered for policy makers. - Highlights: • We researched people’s behaviour combined with a policy implementation background. • We found that the subsidy policy didn’t change people’s purchase intentions. • Past purchasing experiences significantly influence consumers’ purchase intentions. • We proposed policy advice about the time, types and methods of incentive policies.

  10. High-efficiency high-energy Ka source for the critically-required maximum illumination of x-ray optics on Z using Z-petawatt-driven laser-breakout-afterburner accelerated ultrarelativistic electrons LDRD .

    Energy Technology Data Exchange (ETDEWEB)

    Sefkow, Adam B.; Bennett, Guy R.

    2010-09-01

    Under the auspices of the Science of Extreme Environments LDRD program, a <2 year theoretical- and computational-physics study was performed (LDRD Project 130805) by Guy R Bennett (formerly in Center-01600) and Adam B. Sefkow (Center-01600): To investigate novel target designs by which a short-pulse, PW-class beam could create a brighter Kα x-ray source than by simple, direct-laser-irradiation of a flat foil; Direct-Foil-Irradiation (DFI). The computational studies - which are still ongoing at this writing - were performed primarily on the RedStorm supercomputer at Sandia National Laboratories' Albuquerque site. The motivation for a higher efficiency Kα emitter was very clear: as the backlighter flux for any x-ray imaging technique on the Z accelerator increases, the signal-to-noise and signal-to-background ratios improve. This ultimately allows the imaging system to reach its full quantitative potential as a diagnostic. Depending on the particular application/experiment this would imply, for example, that the system would have reached its full design spatial resolution and thus the capability to see features that might otherwise be indiscernible with a traditional DFI-like x-ray source. This LDRD began FY09 and ended FY10.

  11. Credal Networks under Maximum Entropy

    OpenAIRE

    Lukasiewicz, Thomas

    2013-01-01

    We apply the principle of maximum entropy to select a unique joint probability distribution from the set of all joint probability distributions specified by a credal network. In detail, we start by showing that the unique joint distribution of a Bayesian tree coincides with the maximum entropy model of its conditional distributions. This result, however, does not hold anymore for general Bayesian networks. We thus present a new kind of maximum entropy models, which are computed sequentially. ...

  12. Impacts of multiple global environmental changes on African crop yield and water use efficiency: Implications to food and water security

    Science.gov (United States)

    Pan, S.; Yang, J.; Zhang, J.; Xu, R.; Dangal, S. R. S.; Zhang, B.; Tian, H.

    2016-12-01

    Africa is one of the most vulnerable regions in the world to climate change and climate variability. Much concern has been raised about the impacts of climate and other environmental factors on water resources and food security through the climate-water-food nexus. Understanding the responses of crop yield and water use efficiency to environmental changes is particularly important because Africa is well known for widespread poverty, slow economic growth and agricultural systems particularly sensitive to frequent and persistent droughts. However, the lack of integrated understanding has limited our ability to quantify and predict the potential of Africa's agricultural sustainability and freshwater supply, and to better manage the system for meeting an increasing food demand in a way that is socially and environmentally or ecologically sustainable. By using the Dynamic Land Ecosystem Model (DLEM-AG2) driven by spatially-explicit information on land use, climate and other environmental changes, we have assessed the spatial and temporal patterns of crop yield, evapotranspiration (ET) and water use efficiency across the whole of Africa in the past 35 years (1980-2015) and the rest of the 21st century (2016-2099). Our preliminary results indicate that African crop yield in the past three decades shows an increasing trend primarily due to cropland expansion (about 50%), elevated atmospheric CO2 concentration, and nitrogen deposition. However, crop yield shows substantial spatial and temporal variation due to inter-annual and inter-decadal climate variability and spatial heterogeneity of environmental drivers. Climate extremes, especially droughts and heat waves, have largely reduced crop yield in the most vulnerable regions. Our results indicate that N fertilizer could be a major driver to improve food security in Africa. Future climate warming could reduce crop yield and shift cropland distribution. Our study further suggests that improving water use efficiency through land…

  13. Power loss and right ventricular efficiency in patients after tetralogy of Fallot repair with pulmonary insufficiency: clinical implications.

    Science.gov (United States)

    Fogel, Mark A; Sundareswaran, Kartik S; de Zelicourt, Diane; Dasi, Lakshmi P; Pawlowski, Tom; Rome, Jack; Yoganathan, Ajit P

    2012-06-01

    To quantify right ventricular output power and efficiency and correlate these to ventricular function in patients with repaired tetralogy of Fallot. This might aid in determining the optimal timing for pulmonary valve replacement. We reviewed the cardiac catheterization and magnetic resonance imaging data of 13 patients with tetralogy of Fallot (age, 22 ± 17 years). Using pressure and flow measurements in the main pulmonary artery, cardiac output and regurgitation fraction, right ventricular (RV) power output, loss, and efficiency were calculated. The RV function was evaluated using cardiac magnetic resonance imaging. The RV systolic power was 1.08 ± 0.62 W, with 20.3% ± 8.6% power loss owing to 41% ± 14% pulmonary regurgitation (efficiency, 79.7% ± 8.6%; 0.84 ± 0.73 W), resulting in a net cardiac output of 4.24 ± 1.82 L/min. Power loss correlated significantly with the indexed RV end-diastolic and end-systolic volume (R = 0.78, P = .002 and R = 0.69, P = .009, respectively). The normalized RV power output had a significant negative correlation with RV end-diastolic and end-systolic volumes (R = -0.87, P = .002 and R = -0.68, P = .023, respectively). A rapid decrease occurred in the RV power capacity with an increasing RV volume, with the curve flattening out at an indexed RV end-diastolic and end-systolic volume threshold of 139 mL/m² and 75 mL/m², respectively. Significant power loss is present in patients with repaired tetralogy of Fallot and pulmonary regurgitation. A rapid decrease in efficiency occurs with increasing RV volume, suggesting that pulmonary valve replacement should be done before the critical value of 139 mL/m² and 75 mL/m² for the RV end-diastolic and end-systolic volume, respectively, to preserve RV function. Copyright © 2012 The American Association for Thoracic Surgery. Published by Mosby, Inc. All rights reserved.
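
    The kind of haemodynamic arithmetic involved (pressure times flow for power, forward fraction for efficiency) can be sketched crudely as below. The study integrates measured pressure and flow waveforms; here mean values and the regurgitant fraction stand in for that, and all input numbers are invented, so the result is only indicative.

    ```python
    # Crude sketch of RV hydraulic power and efficiency; not the study's waveform-based method.
    MMHG_TO_PA = 133.322
    LMIN_TO_M3S = 1.0 / 60000.0

    def rv_power_and_efficiency(mean_pa_pressure_mmhg, total_rv_output_l_min,
                                regurgitation_fraction):
        """Hydraulic power delivered by the RV and the share retained as net forward output."""
        total_power_w = (mean_pa_pressure_mmhg * MMHG_TO_PA) * \
                        (total_rv_output_l_min * LMIN_TO_M3S)
        net_output_l_min = total_rv_output_l_min * (1.0 - regurgitation_fraction)
        # Equating the power-loss fraction with the regurgitant fraction is a simplification.
        efficiency = 1.0 - regurgitation_fraction
        return total_power_w, efficiency, net_output_l_min

    power, eff, net = rv_power_and_efficiency(25.0, 7.2, 0.41)  # invented inputs
    print(f"RV power ~{power:.2f} W, efficiency ~{eff:.0%}, net output ~{net:.1f} L/min")
    ```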

  14. Swimming strategy and body plan of the world’s largest fish: implications for foraging efficiency and thermoregulation

    Directory of Open Access Journals (Sweden)

    Mark eMeekan

    2015-09-01

    Full Text Available The largest animals in the oceans eat prey that are orders of magnitude smaller than themselves, implying strong selection for cost-effective foraging to meet their energy demands. Whale sharks (Rhincodon typus) may be especially challenged by warm seas that elevate their metabolism and contain sparse prey resources. Using a combination of biologging and satellite tagging, we show that whale sharks use four strategies to save energy and improve foraging efficiency: (1) fixed, low power swimming, (2) constant low speed swimming, (3) gliding and (4) asymmetrical diving. These strategies increase foraging efficiency by 22–32% relative to swimming horizontally and resolve the energy-budget paradox of whale sharks. However, sharks in the open ocean must access food resources that reside in relatively cold waters (up to 20 °C cooler than the surface) at depths of 250-500 m during the daytime, where long, slow gliding descents, continuous ram ventilation of the gills and filter-feeding could rapidly cool the circulating blood and body tissues. We suggest that whale sharks may overcome this problem through their large size and a specialized body plan that isolates highly vascularized red muscle on the dorsal surface, allowing heat to be retained near the centre of the body within a massive core of white muscle. This could allow a warm-adapted species to maintain enhanced function of organs and sensory systems while exploiting food resources in deep, cool water.

  15. Benchmarking the cost efficiency of community care in Australian child and adolescent mental health services: implications for future benchmarking.

    Science.gov (United States)

    Furber, Gareth; Brann, Peter; Skene, Clive; Allison, Stephen

    2011-06-01

    The purpose of this study was to benchmark the cost efficiency of community care across six child and adolescent mental health services (CAMHS) drawn from different Australian states. Organizational, contact and outcome data from the National Mental Health Benchmarking Project (NMHBP) data-sets were used to calculate cost per "treatment hour" and cost per episode for the six participating organizations. We also explored the relationship between intake severity as measured by the Health of the Nations Outcome Scales for Children and Adolescents (HoNOSCA) and cost per episode. The average cost per treatment hour was $223, with cost differences across the six services ranging from a mean of $156 to $273 per treatment hour. The average cost per episode was $3349 (median $1577) and there were significant differences in the CAMHS organizational medians ranging from $388 to $7076 per episode. HoNOSCA scores explained at best 6% of the cost variance per episode. These large cost differences indicate that community CAMHS have the potential to make substantial gains in cost efficiency through collaborative benchmarking. Benchmarking forums need considerable financial and business expertise for detailed comparison of business models for service provision.
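
    The benchmarking quantities themselves are straightforward ratios of cost to activity; a minimal sketch with fabricated service-level data (the figures below are not those of the study):

    ```python
    import statistics

    # Fabricated example data: service -> (total community-care cost, treatment hours, episodes)
    services = {
        "CAMHS A": (1_200_000, 5_400, 380),
        "CAMHS B": (950_000, 6_100, 510),
        "CAMHS C": (1_600_000, 5_900, 300),
    }

    cost_per_hour = {s: cost / hours for s, (cost, hours, _) in services.items()}
    cost_per_episode = {s: cost / eps for s, (cost, _, eps) in services.items()}

    print("cost per treatment hour:", {s: round(v) for s, v in cost_per_hour.items()})
    print("median cost per episode:", round(statistics.median(cost_per_episode.values())))
    ```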

  16. Interdependence in decision-making by medical consultants: implications for improving the efficiency of inpatient physician services.

    Science.gov (United States)

    Wilk, Adam S; Chen, Lena M

    2017-12-01

    Hospital administrators are seeking to improve efficiency in medical consultation services, yet whether consultants make decisions to provide more or less care is unknown. We examined how medical consultants account for prior consultants' care when determining whether to provide intensive consulting care or sign off in the treatment of complex surgical inpatients. We applied three distinct theoretical frameworks in the interpretation of our results. We performed a retrospective cohort study of consultants' care intensity, measured alternately using a dummy variable for providing two or more days consulting (versus one) and a continuous measure of total days consulting, with 100% Medicare claims data from 2007-2010. Our analytic samples included consults for beneficiaries undergoing coronary artery bypass grafting (n = 61,785) or colectomy (n = 33,460) in general acute care hospitals. We compared the care intensity of consultants who observed different patterns of consulting care before their initial consults using ordinary least squares regression models at the patient-physician dyad level, controlling for patient comorbidity and many other patient- and physician-level factors as well as hospital region and year fixed effects. Consultants were less likely to provide intensive consulting care with each additional prior consultant on the case (1.2-1.7 percent) or if a prior consultant rendered intensive consulting care (20.6-21.5 percent) but more likely when prior consults were more concentrated across consultants (2.9-3.1 percent). Effects on consultants' total days consulting were similar. On average, consultants appeared to calibrate their care intensity for individual patients to maximize their value to all patients. Interventions for improving consulting care efficiency should seek to facilitate (not constrain) consultants' decision-making processes.

  17. Assessment of systems for paying health care providers in Vietnam: implications for equity, efficiency and expanding effective health coverage.

    Science.gov (United States)

    Phuong, Nguyen Khanh; Oanh, Tran Thi Mai; Phuong, Hoang Thi; Tien, Tran Van; Cashin, Cheryl

    2015-01-01

    Provider payment arrangements are currently a core concern for Vietnam's health sector and a key lever for expanding effective coverage and improving the efficiency and equity of the health system. This study describes how different provider payment systems are designed and implemented in practice across a sample of provinces and districts in Vietnam. Key informant interviews were conducted with over 100 health policy-makers, purchasers and providers using a structured interview guide. The results of the different payment methods were scored by respondents and assessed against a set of health system performance criteria. Overall, the public health insurance agency, Vietnam Social Security (VSS), is focused on managing expenditures through a complicated set of reimbursement policies and caps, but the incentives for providers are unclear and do not consistently support Vietnam's health system objectives. The results of this study are being used by the Ministry of Health and VSS to reform the provider payment systems to be more consistent with international definitions and good practices and to better support Vietnam's health system objectives.

  18. Mesophyll conductance in Zea mays responds transiently to CO2 availability: implications for transpiration efficiency in C4 crops.

    Science.gov (United States)

    Kolbe, Allison R; Cousins, Asaph B

    2018-03-01

    Mesophyll conductance (gm) describes the movement of CO2 from the intercellular air spaces below the stomata to the site of initial carboxylation in the mesophyll. In contrast with C3-gm, little is currently known about the intraspecific variation in C4-gm or its responsiveness to environmental stimuli. To address these questions, gm was measured on five maize (Zea mays) lines in response to CO2, employing three different estimates of gm. Each of the methods indicated a significant response of gm to CO2. Estimates of gm were similar between methods at ambient and higher CO2, but diverged significantly at low partial pressures of CO2. These differences are probably driven by incomplete chemical and isotopic equilibrium between CO2 and bicarbonate under these conditions. Carbonic anhydrase and phosphoenolpyruvate carboxylase in vitro activity varied significantly despite similar values of gm and leaf anatomical traits. These results provide strong support for a CO2 response of gm in Z. mays, and indicate that gm in maize is probably driven by anatomical constraints rather than by biochemical limitations. The CO2 response of gm indicates a potential role for facilitated diffusion in C4-gm. These results also suggest that water-use efficiency could be enhanced in C4 species by targeting gm. © 2017 The Authors. New Phytologist © 2017 New Phytologist Trust.

  19. How often should we monitor for reliable detection of atrial fibrillation recurrence? Efficiency considerations and implications for study design.

    Directory of Open Access Journals (Sweden)

    Efstratios I Charitos

    Full Text Available Although atrial fibrillation (AF) recurrence is unpredictable in terms of onset and duration, current intermittent rhythm monitoring (IRM) diagnostic modalities are short-termed and discontinuous. The aim of the present study was to investigate the IRM frequency required to reliably detect recurrence of various AF recurrence patterns. The rhythm histories of 647 patients (mean AF burden: 12 ± 22% of monitored time; 687 patient-years) with implantable continuous monitoring devices were reconstructed and analyzed. With the use of computationally intensive simulation, we evaluated the necessary IRM frequency to reliably detect AF recurrence of various AF phenotypes using IRM of various durations. The IRM frequency required for reliable AF detection depends on the amount and temporal aggregation of the AF recurrence (p < 0.0001). Detection of AF recurrence with >95% sensitivity required higher IRM frequencies (>12 24-hour, >6 7-day, >4 14-day or >3 30-day IRM per year; p < 0.0001) than currently recommended. Lower IRM frequencies will under-detect AF recurrence and introduce significant bias in the evaluation of therapeutic interventions. More frequent but shorter IRMs (24-hour) are significantly more time effective (sensitivity per monitored time) than a smaller number of longer IRM durations (p < 0.0001). Reliable AF recurrence detection requires higher IRM frequencies than currently recommended. Current IRM frequency recommendations will fail to diagnose a significant proportion of patients. Shorter duration but more frequent IRM strategies are significantly more efficient than longer IRM durations. Unique identifier: NCT00806689.
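
    The study's core computation, re-sampling continuous rhythm histories with hypothetical intermittent monitoring schedules, can be sketched as a Monte-Carlo experiment. Everything below (the episode model, target burden and monitoring schedule) is an invented stand-in for the reconstructed device data, not the authors' simulation.

    ```python
    import random

    DAYS = 365.0

    def simulate_af_year(burden=0.12, mean_episode_days=2.0):
        """Return (start_day, end_day) AF episodes with roughly the target burden (episodes may overlap)."""
        episodes, total = [], 0.0
        while total < burden * DAYS:
            duration = random.expovariate(1.0 / mean_episode_days)
            start = random.uniform(0.0, DAYS - duration) if duration < DAYS else 0.0
            episodes.append((start, start + duration))
            total += duration
        return episodes

    def detected(episodes, n_monitorings, window_days):
        """True if any randomly scheduled monitoring window overlaps an AF episode."""
        for _ in range(n_monitorings):
            w0 = random.uniform(0.0, DAYS - window_days)
            w1 = w0 + window_days
            if any(s < w1 and e > w0 for s, e in episodes):
                return True
        return False

    def sensitivity(n_monitorings, window_days, trials=2000):
        hits = sum(detected(simulate_af_year(), n_monitorings, window_days) for _ in range(trials))
        return hits / trials

    for n, w in [(4, 1.0), (12, 1.0), (2, 7.0), (6, 7.0)]:
        print(f"{n:>2} x {w:.0f}-day monitorings/year -> sensitivity ~{sensitivity(n, w):.2f}")
    ```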

  20. Experimental and theoretical investigations about the vaporization of laser-produced aerosols and individual particles inside inductively-coupled plasmas — Implications for the extraction efficiency of ions prior to mass spectrometry

    International Nuclear Information System (INIS)

    Flamigni, Luca; Koch, Joachim; Günther, Detlef

    2012-01-01

    Current quantification capabilities of laser ablation inductively-coupled plasma mass spectrometry (LA-ICP-MS) are known to be restricted by elemental fractionation as a result of LA-, transport-, and ICP-induced effects which, particularly, may provoke inaccuracies whenever calibration strategies on the basis of non-matrix matched standard materials are applied. The present study deals with the role of the ICP in this complex scenario. Therefore, the vaporization process of laser-produced aerosols and subsequent diffusion losses occurring inside ICP sources were investigated using 2-D optical emission spectrometry (OES) and ICP-quadrupole (Q)MS of individual particles. For instance, Na- and Ca-specific OES of aerosols produced by LA of silicate glasses or metals revealed axial shifts in the onset and maximum position of atomic emission which were in the range of a few millimeters. The occurrence of these shifts was found to arise from composition-dependent particle/aerosol penetration depths, i.e. the displacement of axial vaporization starting points controlling the ion extraction efficiency through the ICP-MS vacuum interface due to a delayed, diffusion-driven expansion of oxidic vs. metallic aerosols. Furthermore, ICP-QMS of individual particles resulted in 1/e half-value signal durations of approximately 100 μs, which complies with modeled values if OES maxima are assumed to coincide with positions of instantaneous vaporization and starting points for atomic diffusion. To check the observed phenomena for consistency, in addition, “ab initio” as well as semi-empirical simulations of particle/aerosol penetration depths followed by diffusion-driven expansion were accomplished, indicating differences of up to 15% in the relative ion extraction efficiency depending on whether analytes are supplied as metals or oxides. Implications of these findings on the accuracy achievable by state-of-the-art LA-ICP-MS systems are outlined. - Highlights: ► Specification

  1. Location of core diagnostic information across various sequences in brain MRI and implications for efficiency of MRI scanner utilization.

    Science.gov (United States)

    Sharma, Aseem; Chatterjee, Arindam; Goyal, Manu; Parsons, Matthew S; Bartel, Seth

    2015-04-01

    Targeting redundancy within MRI can improve its cost-effective utilization. We sought to quantify potential redundancy in our brain MRI protocols. In this retrospective review, we aggregated 207 consecutive adults who underwent brain MRI and reviewed their medical records to document clinical indication, core diagnostic information provided by MRI, and its clinical impact. Contributory imaging abnormalities constituted positive core diagnostic information whereas absence of imaging abnormalities constituted negative core diagnostic information. The senior author selected core sequences deemed sufficient for extraction of core diagnostic information. For validating core sequences selection, four readers assessed the relative ease of extracting core diagnostic information from the core sequences. Potential redundancy was calculated by comparing the average number of core sequences to the average number of sequences obtained. Scanning had been performed using 9.4±2.8 sequences over 37.3±12.3 minutes. Core diagnostic information was deemed extractable from 2.1±1.1 core sequences, with an assumed scanning time of 8.6±4.8 minutes, reflecting a potential redundancy of 74.5%±19.1%. Potential redundancy was least in scans obtained for treatment planning (14.9%±25.7%) and highest in scans obtained for follow-up of benign diseases (81.4%±12.6%). In 97.4% of cases, all four readers considered core diagnostic information to be either easily extractable from core sequences or the ease to be equivalent to that from the entire study. With only one MRI lacking clinical impact (0.48%), overutilization did not seem to contribute to potential redundancy. High potential redundancy that can be targeted for more efficient scanner utilization exists in brain MRI protocols.

  2. CO2 and its correlation with CO at a rural site near Beijing: implications for combustion efficiency in China

    Directory of Open Access Journals (Sweden)

    H. Ma

    2010-09-01

    Full Text Available Although China has surpassed the United States as the world's largest carbon dioxide emitter, in situ measurements of atmospheric CO2 have been sparse in China. This paper analyzes hourly CO2 and its correlation with CO at Miyun, a rural site near Beijing, over a period of 51 months (Dec 2004 through Feb 2009). The CO2-CO correlation analysis evaluated separately for each hour of the day provides useful information with statistical significance even in the growing season. We found that the intercept, representing the initial condition imposed by the global distribution of CO2 with influence of photosynthesis and respiration, exhibits diurnal cycles differing by season. The background CO2 (CO2,b derived from Miyun observations is comparable to CO2 observed at a Mongolian background station to the northwest. Annual growth of overall mean CO2 at Miyun is estimated at 2.7 ppm yr−1 while that of CO2,b is only 1.7 ppm yr−1, similar to the mean growth rate at northern mid-latitude background stations. This suggests a relatively faster increase in the regional CO2 sources in China than the global average, consistent with bottom-up studies of CO2 emissions. For air masses with trajectories through the northern China boundary layer, mean winter CO2/CO correlation slopes (dCO2/dCO increased by 2.8 ± 0.9 ppmv/ppmv or 11% from 2005–2006 to 2007–2008, with CO2 increasing by 1.8 ppmv. The increase in dCO2/dCO indicates improvement in overall combustion efficiency over northern China after winter 2007, attributed to pollution reduction measures associated with the 2008 Beijing Olympics. The observed CO2/CO ratio at Miyun is 25% higher than the bottom-up CO2/CO emission ratio, suggesting a contribution of respired CO2 from urban residents as well as agricultural soils and livestock in the observations and uncertainty in the emission estimates.
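
    The hour-by-hour analysis amounts to regressing CO2 on CO separately for each hour of day; the slope plays the role of dCO2/dCO and the intercept that of the background term. A sketch with synthetic data (the relation and noise level are invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic hourly observations: hour of day, CO (ppb), CO2 (ppm); invented relation.
    n = 5000
    hours = rng.integers(0, 24, size=n)
    co = rng.uniform(100, 1200, size=n)                     # ppb
    co2 = 385 + 0.03 * co + rng.normal(0, 2, size=n)        # ppm

    for h in (2, 14):                                       # e.g. a night hour and an afternoon hour
        mask = hours == h
        slope, intercept = np.polyfit(co[mask], co2[mask], 1)
        # slope is ppm per ppb; multiply by 1000 to express it as ppmv/ppmv.
        print(f"hour {h:02d}: dCO2/dCO = {slope * 1000:.1f} ppmv/ppmv, intercept = {intercept:.1f} ppm")
    ```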

  3. Particle Swarm Optimization Based of the Maximum Photovoltaic ...

    African Journals Online (AJOL)

    Photovoltaic electricity is seen as an important source of renewable energy. The photovoltaic array is an unstable source of power since the peak power point depends on the temperature and the irradiation level. A maximum peak power point tracking is then necessary for maximum efficiency. In this work, a Particle Swarm ...
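
    This record is truncated before the method details, so the following is only a generic sketch of how particle swarm optimization can locate the maximum power point of a simplified single-diode PV characteristic; the module parameters and PSO coefficients are assumptions, not values from the article.

    ```python
    import math
    import random

    # Simplified single-diode PV model (series/shunt resistances neglected; assumed parameters).
    ISC, I0, N_VT, VOC = 8.0, 1e-9, 1.8, 39.0   # A, A, V (lumped n*Vt for the string), V

    def pv_power(v):
        """Electrical power of the module at terminal voltage v."""
        if v < 0.0 or v > VOC:
            return 0.0
        i = ISC - I0 * (math.exp(v / N_VT) - 1.0)
        return max(i, 0.0) * v

    def pso_mppt(n_particles=10, iters=40, w=0.6, c1=1.5, c2=1.5):
        """Particle swarm search over voltage for the maximum power point."""
        x = [random.uniform(0.0, VOC) for _ in range(n_particles)]   # candidate voltages
        v = [0.0] * n_particles                                      # particle velocities
        pbest, pbest_val = list(x), [pv_power(p) for p in x]
        g = max(range(n_particles), key=lambda k: pbest_val[k])
        gbest, gbest_val = pbest[g], pbest_val[g]
        for _ in range(iters):
            for k in range(n_particles):
                r1, r2 = random.random(), random.random()
                v[k] = w * v[k] + c1 * r1 * (pbest[k] - x[k]) + c2 * r2 * (gbest - x[k])
                x[k] = min(max(x[k] + v[k], 0.0), VOC)
                p = pv_power(x[k])
                if p > pbest_val[k]:
                    pbest[k], pbest_val[k] = x[k], p
                    if p > gbest_val:
                        gbest, gbest_val = x[k], p
        return gbest, gbest_val

    v_mpp, p_mpp = pso_mppt()
    print(f"estimated maximum power point: {v_mpp:.1f} V, {p_mpp:.0f} W")
    ```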

  4. Unstructured meshing and parameter estimation for urban dam-break flood modeling: building treatments and implications for accuracy and efficiency

    Science.gov (United States)

    Schubert, J. E.; Sanders, B. F.

    2011-12-01

    Urban landscapes are at the forefront of current research efforts in the field of flood inundation modeling for two major reasons. First, urban areas hold relatively large economic and social importance and as such it is imperative to avoid or minimize future damages. Secondly, urban flooding is becoming more frequent as a consequence of continued development of impervious surfaces, population growth in cities, climate change magnifying rainfall intensity, sea level rise threatening coastal communities, and decaying flood defense infrastructure. In reality urban landscapes are particularly challenging to model because they include a multitude of geometrically complex features. Advances in remote sensing technologies and geographical information systems (GIS) have promulgated fine resolution data layers that offer a site characterization suitable for urban inundation modeling including a description of preferential flow paths, drainage networks and surface dependent resistances to overland flow. Recent research has focused on two-dimensional modeling of overland flow including within-curb flows and over-curb flows across developed parcels. Studies have focused on mesh design and parameterization, and sub-grid models that promise improved performance relative to accuracy and/or computational efficiency. This presentation addresses how fine-resolution data, available in Los Angeles County, are used to parameterize, initialize and execute flood inundation models for the 1963 Baldwin Hills dam break. Several commonly used model parameterization strategies including building-resistance, building-block and building hole are compared with a novel sub-grid strategy based on building-porosity. Performance of the models is assessed based on the accuracy of depth and velocity predictions, execution time, and the time and expertise required for model set-up. The objective of this study is to assess field-scale applicability, and to obtain a better understanding of advantages

  5. Maximum conversion efficiency of thermionic heat to electricity

    African Journals Online (AJOL)

    DJFLEX

    Several attempts on the direct conversion of heat to electricity … The net current density in the system is equal to jE − jC, which gets over the potential barrier. jE and jC are given by the Richardson–Dushman equation, j = A T² exp(−φ / kT), where A is the Richardson–Dushman constant, T the electrode temperature and φ its work function.
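
    A minimal numerical illustration of the reconstructed relation; the emitter and collector temperatures and the work function below are assumed values, not figures from the record.

    ```python
    import math

    A_RD = 1.20173e6      # Richardson-Dushman constant, A m^-2 K^-2
    K_B = 8.617333e-5     # Boltzmann constant, eV K^-1

    def rd_current_density(temperature_k, work_function_ev):
        """Richardson-Dushman emission current density j = A T^2 exp(-phi / kT)."""
        return A_RD * temperature_k**2 * math.exp(-work_function_ev / (K_B * temperature_k))

    j_e = rd_current_density(1800.0, 2.6)   # emitter (assumed T and work function)
    j_c = rd_current_density(900.0, 2.6)    # collector back-emission (assumed)
    print(f"jE = {j_e:.2e} A/m2, jC = {j_c:.2e} A/m2, net j = {j_e - j_c:.2e} A/m2")
    ```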

  6. Maximum herd efficiency in meat production I. Optima for slaughter ...

    African Journals Online (AJOL)

    Optimal replacement involves either the minimum or maximum rate that can be achieved, and depends on the relative costs and output involved in the keeping of different age classes of reproduction animals. Finally, the relationship between replacement rate and herd age structure is explained.

  7. Maximum Entropy in Drug Discovery

    Directory of Open Access Journals (Sweden)

    Chih-Yuan Tseng

    2014-07-01

    Full Text Available Drug discovery applies multidisciplinary approaches, either experimentally, computationally or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes’ pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
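
    As a concrete illustration of the principle itself (not of any specific drug-discovery pipeline), the discrete maximum entropy distribution subject to a mean constraint has the exponential form p_i ∝ exp(−λ x_i); the sketch below solves for λ numerically. The states and target mean are arbitrary illustrative choices.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    # Maximum entropy distribution over discrete states x, constrained to a target mean.
    x = np.arange(1, 7)          # six states (illustrative)
    target_mean = 4.5            # required average (illustrative)

    def maxent_probs(lam):
        z = -lam * x
        z -= z.max()             # numerical stabilisation; normalisation is unaffected
        p = np.exp(z)
        return p / p.sum()

    lam = brentq(lambda l: maxent_probs(l) @ x - target_mean, -5.0, 5.0)
    p = maxent_probs(lam)
    print("lambda =", round(lam, 4))
    print("maxent probabilities:", np.round(p, 4), "mean =", round(float(p @ x), 3))
    ```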

  8. Maximum entropy decomposition of quadrupole mass spectra

    International Nuclear Information System (INIS)

    Toussaint, U. von; Dose, V.; Golan, A.

    2004-01-01

    We present an information-theoretic method called generalized maximum entropy (GME) for decomposing mass spectra of gas mixtures from noisy measurements. In this GME approach to the noisy, underdetermined inverse problem, the joint entropies of concentration, cracking, and noise probabilities are maximized subject to the measured data. This provides a robust estimation for the unknown cracking patterns and the concentrations of the contributing molecules. The method is applied to mass spectroscopic data of hydrocarbons, and the estimates are compared with those received from a Bayesian approach. We show that the GME method is efficient and is computationally fast
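
    The GME machinery itself is involved, but the underlying inverse problem, recovering mixture concentrations from a library of cracking patterns, can be sketched with a simpler non-negative least-squares stand-in. To be clear, scipy's nnls below is not the GME estimator described in the record; it is a substitute that solves the same kind of noisy decomposition, and the pattern matrix is invented.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Columns: normalized cracking patterns of candidate molecules over m/z channels (invented).
    patterns = np.array([
        # CH4   C2H6  C3H8
        [0.05, 0.00, 0.00],
        [0.85, 0.10, 0.02],
        [0.10, 0.60, 0.08],
        [0.00, 0.30, 0.50],
        [0.00, 0.00, 0.40],
    ])

    true_conc = np.array([0.3, 0.5, 0.2])
    spectrum = patterns @ true_conc + np.random.default_rng(1).normal(0, 0.01, 5)  # noisy measurement

    est_conc, residual = nnls(patterns, spectrum)
    print("estimated concentrations:", np.round(est_conc, 3), "residual norm:", round(residual, 4))
    ```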

  9. Maximum power operation of interacting molecular motors

    DEFF Research Database (Denmark)

    Golubeva, Natalia; Imparato, Alberto

    2013-01-01

    We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.

  10. Maximum stellar iron core mass

    Indian Academy of Sciences (India)

    Journal of physics, Vol. 60, No. 3, March 2003, pp. 415–422. Maximum stellar iron core mass. F W Giacobbe, Chicago Research Center/American Air Liquide. … iron core compression due to the weight of non-ferrous matter overlying the iron cores within large … thermal equilibrium velocities will tend to be non-relativistic.

  11. Maximum entropy beam diagnostic tomography

    International Nuclear Information System (INIS)

    Mottershead, C.T.

    1985-01-01

    This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore. 11 refs., 4 figs

  13. A portable storage maximum thermometer

    International Nuclear Information System (INIS)

    Fayart, Gerard.

    1976-01-01

    A clinical thermometer storing the voltage corresponding to the maximum temperature in an analog memory is described. The end of the measurement is indicated by a lamp switching off. The measurement time is shortened by means of a low thermal inertia platinum probe. This portable thermometer is fitted with a cell test and calibration system. [fr]

  14. On Maximum Entropy and Inference

    Directory of Open Access Journals (Sweden)

    Luigi Gresele

    2017-11-01

    Full Text Available Maximum entropy is a powerful concept that entails a sharp separation between relevant and irrelevant variables. It is typically invoked in inference, once an assumption is made on what the relevant variables are, in order to estimate a model from data, that affords predictions on all other (dependent) variables. Conversely, maximum entropy can be invoked to retrieve the relevant variables (sufficient statistics) directly from the data, once a model is identified by Bayesian model selection. We explore this approach in the case of spin models with interactions of arbitrary order, and we discuss how relevant interactions can be inferred. In this perspective, the dimensionality of the inference problem is not set by the number of parameters in the model, but by the frequency distribution of the data. We illustrate the method showing its ability to recover the correct model in a few prototype cases and discuss its application on a real dataset.

  15. Maximum Water Hammer Sensitivity Analysis

    OpenAIRE

    Jalil Emadi; Abbas Solemani

    2011-01-01

    Pressure waves and water hammer occur in a pumping system when valves are closed or opened suddenly or in the case of sudden failure of pumps. Determination of the maximum water hammer is considered one of the most important technical and economic items that engineers and designers of pumping stations and conveyance pipelines should take care of. Hammer Software is a recent application used to simulate water hammer. The present study focuses on determining the significance of …

  16. Maximum Gene-Support Tree

    Directory of Open Access Journals (Sweden)

    Yunfeng Shan

    2008-01-01

    Full Text Available Genomes and genes diversify during evolution; however, it is unclear to what extent genes still retain the relationship among species. Model species for molecular phylogenetic studies include yeasts and viruses whose genomes were sequenced as well as plants that have the fossil-supported true phylogenetic trees available. In this study, we generated single gene trees of seven yeast species as well as single gene trees of nine baculovirus species using all the orthologous genes among the species compared. Homologous genes among seven known plants were used for validation of the finding. Four algorithms—maximum parsimony (MP), minimum evolution (ME), maximum likelihood (ML), and neighbor-joining (NJ)—were used. Trees were reconstructed before and after weighting the DNA and protein sequence lengths among genes. Rarely can a single gene always generate the “true tree” by all four algorithms. However, the most frequent gene tree, termed “maximum gene-support tree” (MGS tree, or WMGS tree for the weighted one), in yeasts, baculoviruses, or plants was consistently found to be the “true tree” among the species. The results provide insights into the overall degree of divergence of orthologous genes of the genomes analyzed and suggest the following: (1) the true tree relationship among the species studied is still maintained by the largest group of orthologous genes; (2) there are usually more orthologous genes with higher similarities between genetically closer species than between genetically more distant ones; and (3) the maximum gene-support tree reflects the phylogenetic relationship among species in comparison.
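
    Operationally, once each single-gene tree has been reduced to a canonical topology label (for example, a sorted Newick string produced by a phylogenetics library), the maximum gene-support tree is simply the most frequent label. A minimal sketch with hypothetical topology strings:

    ```python
    from collections import Counter

    # Hypothetical canonical topology strings of single-gene trees for the same species set.
    gene_tree_topologies = [
        "((A,B),(C,D));", "((A,B),(C,D));", "((A,C),(B,D));",
        "((A,B),(C,D));", "((A,D),(B,C));", "((A,B),(C,D));",
    ]

    counts = Counter(gene_tree_topologies)
    mgs_tree, support = counts.most_common(1)[0]
    print(f"maximum gene-support tree: {mgs_tree} "
          f"(supported by {support}/{len(gene_tree_topologies)} genes)")
    ```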

  17. LCLS Maximum Credible Beam Power

    International Nuclear Information System (INIS)

    Clendenin, J.

    2005-01-01

    The maximum credible beam power is defined as the highest credible average beam power that the accelerator can deliver to the point in question, given the laws of physics, the beam line design, and assuming all protection devices have failed. For a new accelerator project, the official maximum credible beam power is determined by project staff in consultation with the Radiation Physics Department, after examining the arguments and evidence presented by the appropriate accelerator physicist(s) and beam line engineers. The definitive parameter becomes part of the project's safety envelope. This technical note will first review the studies that were done for the Gun Test Facility (GTF) at SSRL, where a photoinjector similar to the one proposed for the LCLS is being tested. In Section 3 the maximum charge out of the gun for a single rf pulse is calculated. In Section 4, PARMELA simulations are used to track the beam from the gun to the end of the photoinjector. Finally in Section 5 the beam through the matching section and injected into Linac-1 is discussed

  18. Juggling Efficiency

    DEFF Research Database (Denmark)

    Andersen, Rikke Sand; Vedsted, Peter

    2015-01-01

    …on institutional logics, we illustrate how a logic of efficiency organises and gives shape to healthcare seeking practices as they manifest in local clinical settings. Overall, patient concerns are reconfigured to fit the local clinical setting, and healthcare professionals and patients are required to juggle efficiency in order to deal with uncertainties and meet more complex or unpredictable needs. Lastly, building on the empirical case of cancer diagnostics, we discuss the implications of the pervasiveness of the logic of efficiency in the clinical setting and argue that provision of medical care in today's primary care settings requires careful balancing of increasing demands of efficiency, greater complexity of biomedical knowledge and consideration for individual patient needs.

  19. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    Science.gov (United States)

    Hall, Alex; Taylor, Andy

    2017-06-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.

  20. Generic maximum likely scale selection

    DEFF Research Database (Denmark)

    Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo

    2007-01-01

    The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus in this work is on applying this selection principle under a Brownian image model. This image model provides a simple scale invariant prior for natural images and we provide illustrative examples of the behavior of our scale estimation on such images. In these illustrative examples, estimation is based…

  1. Extreme Maximum Land Surface Temperatures.

    Science.gov (United States)

    Garratt, J. R.

    1992-09-01

    There are numerous reports in the literature of observations of land surface temperatures. Some of these, almost all made in situ, reveal maximum values in the 50°-70°C range, with a few, made in desert regions, near 80°C. Consideration of a simplified form of the surface energy balance equation, utilizing likely upper values of absorbed shortwave flux (1000 W m−2) and screen air temperature (55°C), suggests that surface temperatures in the vicinity of 90°-100°C may occur for dry, darkish soils of low thermal conductivity (0.1-0.2 W m−1 K−1). Numerical simulations confirm this and suggest that temperature gradients in the first few centimeters of soil may reach 0.5°-1°C mm−1 under these extreme conditions. The study bears upon the intrinsic interest of identifying extreme maximum temperatures and yields interesting information regarding the comfort zone of animals (including man).
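
    The back-of-envelope balance invoked here can be reproduced by equating absorbed shortwave radiation with emitted longwave radiation plus a turbulent heat loss and solving for the surface temperature. The emissivity and transfer coefficient below are assumptions, and the ground heat flux is neglected, so the result only illustrates the order of magnitude of the argument.

    ```python
    from scipy.optimize import brentq

    SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4

    def surface_temperature(absorbed_sw=1000.0, air_temp_k=328.0,   # 55 degC screen air
                            emissivity=0.95, h_transfer=3.0):       # assumed calm conditions
        """Solve eps*sigma*Ts^4 + h*(Ts - Ta) = absorbed shortwave for Ts (ground heat flux neglected)."""
        balance = lambda ts: emissivity * SIGMA * ts**4 + h_transfer * (ts - air_temp_k) - absorbed_sw
        return brentq(balance, air_temp_k, air_temp_k + 150.0)

    ts = surface_temperature()
    print(f"surface temperature ~{ts - 273.15:.0f} degC")
    ```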

  2. Direct maximum parsimony phylogeny reconstruction from genotype data

    OpenAIRE

    Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell

    2007-01-01

    Abstract Background Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data more commonly is available in the form of ge...

  3. System for memorizing maximum values

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1992-08-01

    The invention discloses a system capable of memorizing maximum sensed values. The system includes conditioning circuitry which receives the analog output signal from a sensor transducer. The conditioning circuitry rectifies and filters the analog signal and provides an input signal to a digital driver, which may be either linear or logarithmic. The driver converts the analog signal to discrete digital values, which in turn trigger an output signal on one of a plurality of driver output lines n. The particular output line selected depends on the converted digital value. A microfuse memory device connects across the driver output lines, with n segments. Each segment is associated with one driver output line, and includes a microfuse that is blown when a signal appears on the associated driver output line.

  4. Remarks on the maximum luminosity

    Science.gov (United States)

    Cardoso, Vitor; Ikeda, Taishi; Moore, Christopher J.; Yoo, Chul-Moon

    2018-04-01

    The quest for fundamental limitations on physical processes is old and venerable. Here, we investigate the maximum possible power, or luminosity, that any event can produce. We show, via full nonlinear simulations of Einstein's equations, that there exist initial conditions which give rise to arbitrarily large luminosities. However, the requirement that there is no past horizon in the spacetime seems to limit the luminosity to below the Planck value, LP=c5/G . Numerical relativity simulations of critical collapse yield the largest luminosities observed to date, ≈ 0.2 LP . We also present an analytic solution to the Einstein equations which seems to give an unboundedly large luminosity; this will guide future numerical efforts to investigate super-Planckian luminosities.
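
    For reference, the quoted bound is just a combination of fundamental constants; computing it:

    ```python
    c = 2.99792458e8       # speed of light, m/s
    G = 6.67430e-11        # gravitational constant, m^3 kg^-1 s^-2

    L_planck = c**5 / G
    print(f"L_P = c^5/G ~ {L_planck:.2e} W")   # roughly 3.6e52 W
    ```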

  5. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-09-07

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descend method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
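
    The quantity being maximized can be estimated directly from data by discretizing the classifier response and forming a joint histogram with the labels. The sketch below shows only that histogram-based mutual information estimate, not the full regularized training procedure described in the record; the data are synthetic and labels are assumed to be coded 0..K-1.

    ```python
    import numpy as np

    def mutual_information(responses, labels, n_bins=10):
        """Histogram-based MI (in nats) between a continuous response and integer labels 0..K-1."""
        edges = np.histogram_bin_edges(responses, bins=n_bins)
        r_bins = np.digitize(responses, edges[1:-1])          # bin indices 0..n_bins-1
        joint = np.zeros((n_bins, len(np.unique(labels))))
        for rb, y in zip(r_bins, labels):
            joint[rb, y] += 1
        joint /= joint.sum()
        pr = joint.sum(axis=1, keepdims=True)                 # marginal over responses
        py = joint.sum(axis=0, keepdims=True)                 # marginal over labels
        nz = joint > 0
        return float(np.sum(joint[nz] * np.log(joint[nz] / (pr @ py)[nz])))

    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, 1000)
    scores = y + rng.normal(0, 0.7, 1000)                     # synthetic, informative responses
    print("estimated MI:", round(mutual_information(scores, y), 3), "nats")
    ```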

  7. Maximum entropy and Bayesian methods

    International Nuclear Information System (INIS)

    Smith, C.R.; Erickson, G.J.; Neudorfer, P.O.

    1992-01-01

    Bayesian probability theory and Maximum Entropy methods are at the core of a new view of scientific inference. These 'new' ideas, along with the revolution in computational methods afforded by modern computers allow astronomers, electrical engineers, image processors of any type, NMR chemists and physicists, and anyone at all who has to deal with incomplete and noisy data, to take advantage of methods that, in the past, have been applied only in some areas of theoretical physics. The title workshops have been the focus of a group of researchers from many different fields, and this diversity is evident in this book. There are tutorial and theoretical papers, and applications in a very wide variety of fields. Almost any instance of dealing with incomplete and noisy data can be usefully treated by these methods, and many areas of theoretical research are being enhanced by the thoughtful application of Bayes' theorem. Contributions contained in this volume present a state-of-the-art overview that will be influential and useful for many years to come

  8. Maximum Likelihood, Consistency and Data Envelopment Analysis: A Statistical Foundation

    OpenAIRE

    Rajiv D. Banker

    1993-01-01

    This paper provides a formal statistical basis for the efficiency evaluation techniques of data envelopment analysis (DEA). DEA estimators of the best practice monotone increasing and concave production function are shown to be also maximum likelihood estimators if the deviation of actual output from the efficient output is regarded as a stochastic variable with a monotone decreasing probability density function. While the best practice frontier estimator is biased below the theoretical front...

  9. Maximum entropy principle for transportation

    International Nuclear Information System (INIS)

    Bilich, F.; Da Silva, R.

    2008-01-01

    In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
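
    For contrast with the dependence (constraint-free) formulation described in this record, the standard constrained maximum entropy trip-distribution model, the doubly constrained gravity model, can be sketched in a few lines: trips T_ij ∝ exp(−β c_ij) are balanced iteratively until row and column sums match the observed origin and destination totals. The cost matrix, totals and β below are invented, and this is not the paper's dependence-coefficient model.

    ```python
    import numpy as np

    def maxent_trip_distribution(origins, destinations, cost, beta=0.1, iters=100):
        """Doubly constrained gravity model: T_ij = A_i O_i B_j D_j exp(-beta * c_ij)."""
        t = np.exp(-beta * cost)
        for _ in range(iters):
            t *= (origins / t.sum(axis=1))[:, None]        # match row (origin) totals
            t *= (destinations / t.sum(axis=0))[None, :]   # match column (destination) totals
        return t

    origins = np.array([400.0, 300.0, 300.0])       # trips produced at each origin (invented)
    destinations = np.array([500.0, 350.0, 150.0])  # trips attracted to each destination (invented)
    cost = np.array([[5.0, 10.0, 20.0],
                     [10.0, 5.0, 15.0],
                     [20.0, 15.0, 5.0]])            # travel costs (invented)

    trips = maxent_trip_distribution(origins, destinations, cost)
    print(np.round(trips, 1))
    print("row sums:", np.round(trips.sum(axis=1), 1), "col sums:", np.round(trips.sum(axis=0), 1))
    ```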

  10. Maximum spectral demands in the near-fault region

    Science.gov (United States)

    Huang, Yin-Nan; Whittaker, Andrew S.; Luco, Nicolas

    2008-01-01

    The Next Generation Attenuation (NGA) relationships for shallow crustal earthquakes in the western United States predict a rotated geometric mean of horizontal spectral demand, termed GMRotI50, and not maximum spectral demand. Differences between strike-normal, strike-parallel, geometric-mean, and maximum spectral demands in the near-fault region are investigated using 147 pairs of records selected from the NGA strong motion database. The selected records are for earthquakes with moment magnitude greater than 6.5 and for closest site-to-fault distance less than 15 km. Ratios of maximum spectral demand to NGA-predicted GMRotI50 for each pair of ground motions are presented. The ratio shows a clear dependence on period and the Somerville directivity parameters. Maximum demands can substantially exceed NGA-predicted GMRotI50 demands in the near-fault region, which has significant implications for seismic design, seismic performance assessment, and the next-generation seismic design maps. Strike-normal spectral demands are a significantly unconservative surrogate for maximum spectral demands for closest distance greater than 3 to 5 km. Scale factors that transform NGA-predicted GMRotI50 to a maximum spectral demand in the near-fault region are proposed.
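
    The ratio studied here can be illustrated by rotating a pair of orthogonal oscillator responses over all horizontal orientations and comparing the peak of the rotated response with a geometric-mean measure. The sketch below uses a crude explicit integrator and the geometric mean of the two as-recorded components rather than the full GMRotI50 definition; the integrator, damping value and synthetic record pair are all illustrative assumptions.

    ```python
    import numpy as np

    def sdof_response(acc, dt, period, damping=0.05):
        """Relative-displacement history of a damped SDOF oscillator under ground
        acceleration `acc` (semi-implicit Euler; a production code would use Newmark-beta)."""
        w = 2.0 * np.pi / period
        u = np.zeros(len(acc))
        v = 0.0
        for i in range(1, len(acc)):
            a = -acc[i - 1] - 2.0 * damping * w * v - w**2 * u[i - 1]
            v += a * dt
            u[i] = u[i - 1] + v * dt
        return u

    def max_to_geomean_ratio(acc_x, acc_y, dt, period, n_angles=180):
        """Peak response over all horizontal orientations (a RotD100-style maximum)
        divided by the geometric mean of the two recorded components."""
        ux = sdof_response(acc_x, dt, period)
        uy = sdof_response(acc_y, dt, period)
        angles = np.linspace(0.0, np.pi, n_angles, endpoint=False)
        rotated = np.outer(np.cos(angles), ux) + np.outer(np.sin(angles), uy)
        sa_max = np.abs(rotated).max()
        sa_gm = np.sqrt(np.abs(ux).max() * np.abs(uy).max())
        return sa_max / sa_gm

    dt = 0.01
    t = np.arange(0.0, 20.0, dt)
    acc_x = np.sin(2 * np.pi * 1.0 * t) * np.exp(-0.1 * t)        # toy ground motions
    acc_y = 0.4 * np.sin(2 * np.pi * 1.3 * t) * np.exp(-0.1 * t)
    print(max_to_geomean_ratio(acc_x, acc_y, dt, period=1.0))
    ```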

  11. Fuel demand and fuel efficiency in the US commercial-airline industry and the trucking industry: an analysis of trends and implications. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1982-03-31

    A study of trends in fuel use and efficiency in the US commercial airlines industry is extended back to 1967 in order to compare the relative contributions of the factors influencing efficiency during a period of stable fuel prices (1967 to 1972) versus a period of fuel price growth (1973 to 1980). A similar analysis disaggregates the components of truck efficiency and evaluates their relative impact on fuel consumption in the trucking industry. (LEW)

  12. Maximum super angle optimization method for array antenna pattern synthesis

    DEFF Research Database (Denmark)

    Wu, Ji; Roederer, A. G

    1991-01-01

    Different optimization criteria related to antenna pattern synthesis are discussed. Based on the maximum criteria and vector space representation, a simple and efficient optimization method is presented for array and array fed reflector power pattern synthesis. A sector pattern synthesized by a 2...

  13. A comparison of optimum and maximum reproduction using the rat ...

    African Journals Online (AJOL)

    of pigs to increase the reproduction rate of sows (te Brake, 1978; Walker et al., 1979; Kemm et al., 1980). However, no experimental evidence exists that this strategy would in fact improve biological efficiency. In this pilot experiment, an attempt was made to compare systems of optimum or maximum reproduction using the rat.

  14. Last Glacial Maximum Salinity Reconstruction

    Science.gov (United States)

    Homola, K.; Spivack, A. J.

    2016-12-01

    It has been previously demonstrated that salinity can be reconstructed from sediment porewater. The goal of our study is to reconstruct high precision salinity during the Last Glacial Maximum (LGM). Salinity is usually determined at high precision via conductivity, which requires a larger volume of water than can be extracted from a sediment core, or via chloride titration, which yields lower than ideal precision. It has been demonstrated for water column samples that high precision density measurements can be used to determine salinity at the precision of a conductivity measurement using the equation of state of seawater. However, water column seawater has a relatively constant composition, in contrast to porewater, where variations from standard seawater composition occur. These deviations, which affect the equation of state, must be corrected for through precise measurements of each ion's concentration and knowledge of apparent partial molar density in seawater. We have developed a density-based method for determining porewater salinity that requires only 5 mL of sample, achieving density precisions of 10^-6 g/mL. We have applied this method to porewater samples extracted from long cores collected along a N-S transect across the western North Atlantic (R/V Knorr cruise KN223). Density was determined to a precision of 2.3×10^-6 g/mL, which translates to salinity uncertainty of 0.002 g/kg if the effect of differences in composition is well constrained. Concentrations of anions (Cl-, and SO4-2) and cations (Na+, Mg+, Ca+2, and K+) were measured. To correct salinities at the precision required to unravel LGM Meridional Overturning Circulation, our ion precisions must be better than 0.1% for SO4-/Cl- and Mg+/Na+, and 0.4% for Ca+/Na+, and K+/Na+. Alkalinity, pH and Dissolved Inorganic Carbon of the porewater were determined to precisions better than 4% when ratioed to Cl-, and used to calculate HCO3-, and CO3-2. Apparent partial molar densities in seawater were
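
    A quick back-of-the-envelope check of the precision argument converts the reported density precision into a salinity precision using an approximate haline contraction coefficient; the coefficient below is a rough textbook value, not the full equation-of-state treatment with ion-specific corrections used in the study.

    ```python
    # Order-of-magnitude check: density uncertainty -> salinity uncertainty.
    drho_dS = 0.76                      # kg m^-3 per (g/kg), approximate haline contraction near 20 degC
    density_precision = 2.3e-6          # g/mL, as reported in the abstract
    density_precision_si = density_precision * 1000.0          # convert to kg m^-3
    salinity_precision = density_precision_si / drho_dS
    print(f"{salinity_precision:.4f} g/kg")   # ~0.003 g/kg, same order as the reported 0.002 g/kg
    ```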

  15. Maximum Parsimony on Phylogenetic networks

    Science.gov (United States)

    2012-01-01

    Background Phylogenetic networks are generalizations of phylogenetic trees, that are used to model evolutionary events in various contexts. Several different methods and criteria have been introduced for reconstructing phylogenetic trees. Maximum Parsimony is a character-based approach that infers a phylogenetic tree by minimizing the total number of evolutionary steps required to explain a given set of data assigned on the leaves. Exact solutions for optimizing parsimony scores on phylogenetic trees have been introduced in the past. Results In this paper, we define the parsimony score on networks as the sum of the substitution costs along all the edges of the network; and show that certain well-known algorithms that calculate the optimum parsimony score on trees, such as Sankoff and Fitch algorithms extend naturally for networks, barring conflicting assignments at the reticulate vertices. We provide heuristics for finding the optimum parsimony scores on networks. Our algorithms can be applied for any cost matrix that may contain unequal substitution costs of transforming between different characters along different edges of the network. We analyzed this for experimental data on 10 leaves or fewer with at most 2 reticulations and found that for almost all networks, the bounds returned by the heuristics matched with the exhaustively determined optimum parsimony scores. Conclusion The parsimony score we define here does not directly reflect the cost of the best tree in the network that displays the evolution of the character. However, when searching for the most parsimonious network that describes a collection of characters, it becomes necessary to add additional cost considerations to prefer simpler structures, such as trees over networks. The parsimony score on a network that we describe here takes into account the substitution costs along the additional edges incident on each reticulate vertex, in addition to the substitution costs along the other edges which are
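
    The tree case that the network algorithms generalize is the classical Fitch small-parsimony recursion, sketched below for a single character on a toy rooted binary tree; the tree, the character states and the equal-cost assumption are illustrative only (the paper's algorithms also handle unequal substitution costs and reticulate vertices).

    ```python
    def fitch(tree, leaf_states, root="root"):
        """Fitch small parsimony on a rooted binary tree: returns the candidate
        state set at the root and the minimum number of substitutions."""
        mutations = 0
        def visit(node):
            nonlocal mutations
            if node in leaf_states:                # leaf: its observed state
                return {leaf_states[node]}
            left, right = (visit(child) for child in tree[node])
            if left & right:                       # children agree: no extra substitution
                return left & right
            mutations += 1                         # children disagree: one substitution
            return left | right
        return visit(root), mutations

    tree = {"root": ("n1", "D"), "n1": ("A", "n2"), "n2": ("B", "C")}
    states = {"A": "T", "B": "T", "C": "G", "D": "G"}
    root_set, score = fitch(tree, states)
    print(root_set, score)    # two substitutions are required for this character
    ```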

  16. Parametric optimization of thermoelectric elements footprint for maximum power generation

    DEFF Research Database (Denmark)

    Rezania, A.; Rosendahl, Lasse; Yin, Hao

    2014-01-01

    The development studies in thermoelectric generator (TEG) systems are mostly disconnected from parametric optimization of the module components. In this study, the optimum footprint ratio of n- and p-type thermoelectric (TE) elements is explored to achieve maximum power generation, maximum cost......-performance, and variation of efficiency in the uni-couple over a wide range of the heat transfer coefficient on the cold junction. The three-dimensional (3D) governing equations of the thermoelectricity and the heat transfer are solved using the finite element method (FEM) for temperature dependent properties of TE...... materials. The results, which are in good agreement with the previous computational studies, show that the maximum power generation and the maximum cost-performance in the module occur at An/Ap

  17. What controls the maximum magnitude of injection-induced earthquakes?

    Science.gov (United States)

    Eaton, D. W. S.

    2017-12-01

    Three different approaches for estimation of maximum magnitude are considered here, along with their implications for managing risk. The first approach is based on a deterministic limit for seismic moment proposed by McGarr (1976), which was originally designed for application to mining-induced seismicity. This approach has since been reformulated for earthquakes induced by fluid injection (McGarr, 2014). In essence, this method assumes that the upper limit for seismic moment release is constrained by the pressure-induced stress change. A deterministic limit is given by the product of shear modulus and the net injected fluid volume. This method is based on the assumptions that the medium is fully saturated and in a state of incipient failure. An alternative geometrical approach was proposed by Shapiro et al. (2011), who postulated that the rupture area for an induced earthquake falls entirely within the stimulated volume. This assumption reduces the maximum-magnitude problem to one of estimating the largest potential slip surface area within a given stimulated volume. Finally, van der Elst et al. (2016) proposed that the maximum observed magnitude, statistically speaking, is the expected maximum value for a finite sample drawn from an unbounded Gutenberg-Richter distribution. These three models imply different approaches for risk management. The deterministic method proposed by McGarr (2014) implies that a ceiling on the maximum magnitude can be imposed by limiting the net injected volume, whereas the approach developed by Shapiro et al. (2011) implies that the time-dependent maximum magnitude is governed by the spatial size of the microseismic event cloud. Finally, the sample-size hypothesis of Van der Elst et al. (2016) implies that the best available estimate of the maximum magnitude is based upon observed seismicity rate. The latter two approaches suggest that real-time monitoring is essential for effective management of risk. A reliable estimate of maximum
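
    The deterministic bound attributed to McGarr (2014) reduces to a one-line calculation, shown below for an assumed crustal shear modulus and an arbitrary injected volume (both numbers are purely illustrative).

    ```python
    import math

    def mcgarr_max_magnitude(delta_v_m3, shear_modulus_pa=3.0e10):
        """Upper bound on seismic moment from net injected volume (McGarr, 2014):
        M0_max = G * dV, converted to moment magnitude with the standard
        Hanks-Kanamori relation. The 30 GPa shear modulus is a typical crustal
        value used only for illustration."""
        m0 = shear_modulus_pa * delta_v_m3                # N·m
        mw = (2.0 / 3.0) * (math.log10(m0) - 9.1)
        return m0, mw

    # e.g. 100,000 m^3 of net injected fluid
    m0, mw = mcgarr_max_magnitude(1.0e5)
    print(f"M0_max = {m0:.2e} N·m  ->  Mw_max ≈ {mw:.1f}")   # about Mw 4.3
    ```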

  18. Hydraulic Limits on Maximum Plant Transpiration

    Science.gov (United States)

    Manzoni, S.; Vico, G.; Katul, G. G.; Palmroth, S.; Jackson, R. B.; Porporato, A. M.

    2011-12-01

    Photosynthesis occurs at the expense of water losses through transpiration. As a consequence of this basic carbon-water interaction at the leaf level, plant growth and ecosystem carbon exchanges are tightly coupled to transpiration. In this contribution, the hydraulic constraints that limit transpiration rates under well-watered conditions are examined across plant functional types and climates. The potential water flow through plants is proportional to both xylem hydraulic conductivity (which depends on plant carbon economy) and the difference in water potential between the soil and the atmosphere (the driving force that pulls water from the soil). Differently from previous works, we study how this potential flux changes with the amplitude of the driving force (i.e., we focus on xylem properties and not on stomatal regulation). Xylem hydraulic conductivity decreases as the driving force increases due to cavitation of the tissues. As a result of this negative feedback, more negative leaf (and xylem) water potentials would provide a stronger driving force for water transport, while at the same time limiting xylem hydraulic conductivity due to cavitation. Here, the leaf water potential value that allows an optimum balance between driving force and xylem conductivity is quantified, thus defining the maximum transpiration rate that can be sustained by the soil-to-leaf hydraulic system. To apply the proposed framework at the global scale, a novel database of xylem conductivity and cavitation vulnerability across plant types and biomes is developed. Conductivity and water potential at 50% cavitation are shown to be complementary (in particular between angiosperms and conifers), suggesting a tradeoff between transport efficiency and hydraulic safety. Plants from warmer and drier biomes tend to achieve larger maximum transpiration than plants growing in environments with lower atmospheric water demand. The predicted maximum transpiration and the corresponding leaf water
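
    One simple way to capture the trade-off described here is to combine a Weibull-type vulnerability curve with the soil-to-leaf water potential difference and scan for the leaf water potential that maximizes the supply rate. The functional form and every parameter value below are illustrative assumptions, not values from the database developed in the study.

    ```python
    import numpy as np

    def transpiration_supply(psi_leaf, psi_soil=-0.5, k_max=10.0, b=2.5, c=3.0):
        """Trade-off between driving force and cavitation-limited conductivity:
        k(psi) is a Weibull vulnerability curve and E = k(psi_leaf) * (psi_soil - psi_leaf).
        Potentials are in MPa (negative); all parameters are illustrative."""
        k = k_max * np.exp(-np.power(-psi_leaf / b, c))
        return k * (psi_soil - psi_leaf)

    psis = np.linspace(-6.0, -0.51, 400)          # candidate leaf water potentials (MPa)
    E = transpiration_supply(psis)
    i = np.argmax(E)
    print(f"max transpiration ≈ {E[i]:.2f} (arbitrary units) at psi_leaf ≈ {psis[i]:.2f} MPa")
    ```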

  19. Two-dimensional maximum entropy image restoration

    International Nuclear Information System (INIS)

    Brolley, J.E.; Lazarus, R.B.; Suydam, B.R.; Trussell, H.J.

    1977-07-01

    An optical check problem was constructed to test P LOG P maximum entropy restoration of an extremely distorted image. Useful recovery of the original image was obtained. Comparison with maximum a posteriori restoration is made. 7 figures

  20. Maximum Power Tracking by VSAS approach for Wind Turbine, Renewable Energy Sources

    Directory of Open Access Journals (Sweden)

    Nacer Kouider Msirdi

    2015-08-01

    Full Text Available This paper gives a review of the most efficient algorithms designed to track the maximum power point (MPP) for catching the maximum wind power by a variable speed wind turbine (VSWT). We then design a new maximum power point tracking (MPPT) algorithm using the Variable Structure Automatic Systems (VSAS) approach. The proposed approach leads to efficient algorithms, as shown in this paper by the analysis and simulations.

  1. Receiver function estimated by maximum entropy deconvolution

    Institute of Scientific and Technical Information of China (English)

    吴庆举; 田小波; 张乃铃; 李卫平; 曾融生

    2003-01-01

    Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule to determine the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method to measure the receiver function in the time domain.
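
    The Toeplitz/Levinson step mentioned here is the standard Levinson-Durbin recursion; a minimal version is sketched below with a toy AR(1) autocorrelation sequence (the data and model order are illustrative). Note how each reflection coefficient has magnitude below one, which is the stability property the abstract refers to.

    ```python
    import numpy as np

    def levinson_durbin(r, order):
        """Solve the Toeplitz normal equations for a prediction-error filter
        from the autocorrelation sequence r[0..order]."""
        a = np.zeros(order + 1)
        a[0] = 1.0
        err = r[0]
        for m in range(1, order + 1):
            acc = r[m] + np.dot(a[1:m], r[m - 1:0:-1])
            k = -acc / err                      # reflection coefficient, |k| < 1
            a_prev = a.copy()
            for i in range(1, m):
                a[i] = a_prev[i] + k * a_prev[m - i]
            a[m] = k
            err *= (1.0 - k * k)                # prediction-error power shrinks
        return a, err

    r = np.array([1.0, 0.9, 0.81, 0.729])       # autocorrelation of an AR(1) process
    a, err = levinson_durbin(r, order=3)
    print(a, err)                               # ~[1, -0.9, 0, 0]: the AR(1) model is recovered
    ```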

  2. Maximum Power from a Solar Panel

    Directory of Open Access Journals (Sweden)

    Michael Miller

    2010-01-01

    Full Text Available Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current of maximum power. These quantities are determined by finding the maximum value for the equation for power using differentiation. After the maximum values are found for each time of day, each individual quantity, voltage of maximum power, current of maximum power, and maximum power is plotted as a function of the time of day.
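
    The differentiation step described here can be reproduced with a toy single-diode model: set dP/dV = 0 and solve for the voltage of maximum power. The diode parameters, the finite-difference derivative and the root bracket below are all illustrative assumptions, not the panel model used in the project.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    def pv_current(v, i_ph=5.0, i_0=1e-9, n=1.3, t=298.15):
        """Single-diode model I(V) = I_ph - I_0*(exp(V/(n*Vt)) - 1); parameters are illustrative."""
        vt = 1.380649e-23 * t / 1.602176634e-19     # thermal voltage kT/q
        return i_ph - i_0 * (np.exp(v / (n * vt)) - 1.0)

    def dp_dv(v):
        """Numerical derivative of P = V*I(V); the maximum power point satisfies dP/dV = 0."""
        h = 1e-6
        return ((v + h) * pv_current(v + h) - (v - h) * pv_current(v - h)) / (2 * h)

    v_mp = brentq(dp_dv, 0.1, 0.8)                  # bracket chosen for these parameters
    i_mp = pv_current(v_mp)
    print(f"V_mp ≈ {v_mp:.3f} V, I_mp ≈ {i_mp:.3f} A, P_max ≈ {v_mp * i_mp:.3f} W")
    ```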

  3. Performance analysis and comparison of an Atkinson cycle coupled to variable temperature heat reservoirs under maximum power and maximum power density conditions

    International Nuclear Information System (INIS)

    Wang, P.-Y.; Hou, S.-S.

    2005-01-01

    In this paper, performance analysis and comparison based on the maximum power and maximum power density conditions have been conducted for an Atkinson cycle coupled to variable temperature heat reservoirs. The Atkinson cycle is internally reversible but externally irreversible, since there is external irreversibility of heat transfer during the processes of constant volume heat addition and constant pressure heat rejection. This study is based purely on classical thermodynamic analysis methodology. It should be especially emphasized that all the results and conclusions are based on classical thermodynamics. The power density, defined as the ratio of power output to maximum specific volume in the cycle, is taken as the optimization objective because it considers the effects of engine size as related to investment cost. The results show that an engine design based on maximum power density with constant effectiveness of the hot and cold side heat exchangers or constant inlet temperature ratio of the heat reservoirs will have smaller size but higher efficiency, compression ratio, expansion ratio and maximum temperature than one based on maximum power. From the view points of engine size and thermal efficiency, an engine design based on maximum power density is better than one based on maximum power conditions. However, due to the higher compression ratio and maximum temperature in the cycle, an engine design based on maximum power density conditions requires tougher materials for engine construction than one based on maximum power conditions

  4. Maximum power point tracker based on fuzzy logic

    International Nuclear Information System (INIS)

    Daoud, A.; Midoun, A.

    2006-01-01

    Solar energy is used as the power source in photovoltaic power systems, and an intelligent power management system is important to obtain the maximum power from the limited solar panels. As the sun's illumination changes due to variation of the angle of incidence of solar radiation and of the temperature of the panels, a Maximum Power Point Tracker (MPPT) enables optimization of solar power generation. The MPPT is a sub-system designed to extract the maximum power from a power source. In the case of a solar panel power source, the maximum power point varies as a result of changes in its electrical characteristics, which in turn are functions of radiation dose, temperature, ageing and other effects. The MPPT maximizes the power output from the panels for a given set of conditions by detecting the best working point of the power characteristic and then controlling the current through the panels or the voltage across them. Many MPPT methods have been reported in the literature. These MPPT techniques can be classified into three main categories: lookup table methods, hill climbing methods and computational methods. The techniques vary according to the degree of sophistication, processing time and memory requirements. The perturbation and observation algorithm (hill climbing technique) is commonly used due to its ease of implementation and relative tracking efficiency. However, it has been shown that when the insolation changes rapidly, the perturbation and observation method is slow to track the maximum power point. In recent years, fuzzy controllers have been used for maximum power point tracking. This method only requires linguistic control rules for the maximum power point; the mathematical model is not required, and therefore this control method is easy to implement in a real control system. In this paper, we present a simple, robust MPPT using fuzzy set theory where the hardware consists of the Microchip microcontroller unit control card and
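
    For reference, the perturb-and-observe (hill-climbing) baseline that the fuzzy controller is compared against fits in a few lines; the toy single-diode panel model, the step size and the iteration count below are illustrative assumptions, and a fuzzy controller would replace the fixed step with rule-based step sizing.

    ```python
    import numpy as np

    def simulated_panel(v, i_ph=5.0, i_0=1e-9, n_vt=0.034):
        """Toy single-diode panel current, used only to exercise the tracker."""
        return max(i_ph - i_0 * (np.exp(v / n_vt) - 1.0), 0.0)

    def perturb_and_observe(panel, v_start=0.3, step=0.005, iterations=300):
        """Perturb-and-observe (hill-climbing) MPPT: keep perturbing the operating
        voltage in the direction that increased power, reverse when power drops."""
        v, p_prev, direction = v_start, 0.0, +1
        for _ in range(iterations):
            p = v * panel(v)
            if p < p_prev:
                direction = -direction
            p_prev = p
            v += direction * step
        return v, p_prev

    v_mp, p_mp = perturb_and_observe(simulated_panel)
    print(f"tracked V ≈ {v_mp:.3f} V, P ≈ {p_mp:.3f} W")
    ```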

  5. OZONE PRODUCTION EFFICIENCY AND NOX DEPLETION IN AN URBAN PLUME: INTERPRETATION OF FIELD OBSERVATIONS AND IMPLICATIONS FOR EVALUATING O3-NOX-VOC SENSITIVITY

    Science.gov (United States)

    Ozone production efficiency (OPE) can be defined as the number of ozone (O3) molecules photochemically produced by a molecule of NOx (NO + NO2) before it is lost from the NOx - O3 cycle. Here, we consider observational and modeling techniques to evaluate various operational defi...

  6. Trade-offs in parasitism efficiency and brood size mediate parasitoid coexistence, with implications for biological control of the invasive emerald ash borer

    Science.gov (United States)

    Parasitoids often are selected for use as biological control agents because of their high host specificity, yet such host specificity can result in strong interspecific competition. However, few studies have examined if and how various extrinsic factors (such as parasitism efficiency) influence the ...

  7. Resource-use efficiencies of three indigenous tree species planted in resource islands created by shrubs: implications for reforestation of subtropical degraded shrublands

    Science.gov (United States)

    Nan Liu; Qinfeng Guo

    2012-01-01

    Shrub resource islands are characterized by resource-accumulating shrubby areas surrounded by relatively barren soils. This research aims to determine the resource-use efficiency of native tree species planted on shrub resource islands, and to determine how the planted trees may influence the resource islands in degraded shrublands in South China. Shrub (Rhodomyrtus...

  8. Maximum permissible voltage of YBCO coated conductors

    Energy Technology Data Exchange (ETDEWEB)

    Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)

    2014-06-15

    Highlights: • We examine the maximum permissible voltage of three kinds of tapes. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I{sub c} degradation occurs under repetitive quenching where tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage and resistance, temperature. - Abstract: A superconducting fault current limiter (SFCL) could reduce short circuit currents in an electrical power system. One of the most important things in developing an SFCL is to find out the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (I{sub c}) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I{sub c} degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results for these samples, the whole length of CCs used in the design of an SFCL can be determined.

  9. A maximum likelihood framework for protein design

    Directory of Open Access Journals (Sweden)

    Philippe Hervé

    2006-06-01

    Full Text Available Abstract Background The aim of protein design is to predict amino-acid sequences compatible with a given target structure. Traditionally envisioned as a purely thermodynamic question, this problem can also be understood in a wider context, where additional constraints are captured by learning the sequence patterns displayed by natural proteins of known conformation. In this latter perspective, however, we still need a theoretical formalization of the question, leading to general and efficient learning methods, and allowing for the selection of fast and accurate objective functions quantifying sequence/structure compatibility. Results We propose a formulation of the protein design problem in terms of model-based statistical inference. Our framework uses the maximum likelihood principle to optimize the unknown parameters of a statistical potential, which we call an inverse potential to contrast with classical potentials used for structure prediction. We propose an implementation based on Markov chain Monte Carlo, in which the likelihood is maximized by gradient descent and is numerically estimated by thermodynamic integration. The fit of the models is evaluated by cross-validation. We apply this to a simple pairwise contact potential, supplemented with a solvent-accessibility term, and show that the resulting models have a better predictive power than currently available pairwise potentials. Furthermore, the model comparison method presented here allows one to measure the relative contribution of each component of the potential, and to choose the optimal number of accessibility classes, which turns out to be much higher than classically considered. Conclusion Altogether, this reformulation makes it possible to test a wide diversity of models, using different forms of potentials, or accounting for other factors than just the constraint of thermodynamic stability. Ultimately, such model-based statistical analyses may help to understand the forces

  10. MAXIMUM POWER POINT TRACKING SYSTEM FOR PHOTOVOLTAIC STATION: A REVIEW

    Directory of Open Access Journals (Sweden)

    I. Elzein

    2015-01-01

    Full Text Available In recent years there has been growing attention towards the use of renewable energy sources. Among them, solar energy is one of the most promising green energy resources due to its environmental sustainability and inexhaustibility. However, photovoltaic systems (PhV) suffer from high equipment cost and low efficiency. Moreover, the solar cell V-I characteristic is nonlinear and varies with irradiation and temperature. In general, there is a unique point of PhV operation, called the Maximum Power Point (MPP), at which the PV system operates with maximum efficiency and produces its maximum output power. The location of the MPP is not known in advance, but it can be located either through calculation models or by search algorithms. Therefore MPPT techniques are important to maintain the PV array’s high efficiency. Many different techniques for MPPT are discussed. This review paper hopefully will serve as a convenient tool for future work in PhV power conversion.

  11. Roles of nitric oxide, nitrite and myoglobin on myocardial efficiency in trout (Oncorthynchus mykiss) and goldfish (Carassius auratus): implications for hypoxia tolerance

    DEFF Research Database (Denmark)

    Pedersen, Claus Lunde; Faggiano, Serena; Helbo, Signe

    2010-01-01

    The roles of nitric oxide synthase activity (NOS), nitrite and myoglobin (Mb) in the regulation of myocardial function during hypoxia were examined in trout and goldfish, a hypoxia-intolerant and hypoxia-tolerant species, respectively. We measured the effect of NOS inhibition, adrenaline and nitrite...... in both trout and goldfish myocardium, with trout showing a significant increase in the O2 utilization efficiency, i.e. the ratio of twitch force to O2 consumption, suggesting an increased anaerobic metabolism. NOS inhibition enhanced myocardial O2 consumption and decreased efficiency, indicating...... that mitochondrial respiration is under a tone of NOS-produced NO. When trout myocardial twitch force and O2 consumption are enhanced by adrenaline, this NO tone disappears. Consistent with its conversion to NO, nitrite reduced O2 consumption and increased myocardial efficiency in trout but not in goldfish...

  12. Revealing the Maximum Strength in Nanotwinned Copper

    DEFF Research Database (Denmark)

    Lu, L.; Chen, X.; Huang, Xiaoxu

    2009-01-01

    boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...

  13. Modelling maximum canopy conductance and transpiration in ...

    African Journals Online (AJOL)

    There is much current interest in predicting the maximum amount of water that can be transpired by Eucalyptus trees. It is possible that industrial waste water may be applied as irrigation water to eucalypts and it is important to predict the maximum transpiration rates of these plantations in an attempt to dispose of this ...

  14. MXLKID: a maximum likelihood parameter identifier

    International Nuclear Information System (INIS)

    Gavel, D.T.

    1980-07-01

    MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables
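
    The underlying idea, maximizing a likelihood function over the unknown parameters of a dynamic system given noisy measurements, can be sketched for a toy first-order system; the model, the Gaussian noise assumption and the use of scipy below are illustrative and are not the MXLKID/LRLTRAN implementation.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def negative_log_likelihood(theta, t, y):
        """Gaussian negative log-likelihood for a toy first-order system
        y(t) = x0 * exp(-k*t) + noise, with theta = (x0, k, sigma)."""
        x0, k, sigma = theta
        residuals = y - x0 * np.exp(-k * t)
        return len(y) * np.log(sigma) + 0.5 * np.sum(residuals**2) / sigma**2

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 5.0, 50)
    y = 2.0 * np.exp(-0.7 * t) + rng.normal(0.0, 0.05, t.size)   # synthetic noisy data

    result = minimize(negative_log_likelihood, x0=[1.0, 1.0, 0.1], args=(t, y),
                      bounds=[(1e-3, None), (1e-3, None), (1e-3, None)])
    print(result.x)   # maximum likelihood estimates of (x0, k, sigma), close to (2.0, 0.7, 0.05)
    ```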

  15. Maximum neutron flux in thermal reactors

    International Nuclear Information System (INIS)

    Strugar, P.V.

    1968-12-01

    The direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core using the condition of maximum neutron flux while complying with thermal limitations. This paper shows that the problem can be solved by applying the variational calculus, i.e. by using the maximum principle of Pontryagin. The mathematical model of the reactor core is based on two-group neutron diffusion theory with some simplifications that make it suitable for treatment by the maximum principle. The solution for the optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples

  16. Maximum allowable load on wheeled mobile manipulators

    International Nuclear Information System (INIS)

    Habibnejad Korayem, M.; Ghariblu, H.

    2003-01-01

    This paper develops a computational technique for finding the maximum allowable load of a mobile manipulator during a given trajectory. The maximum allowable loads which can be achieved by a mobile manipulator during a given trajectory are limited by a number of factors; the dynamic properties of the mobile base and the mounted manipulator, their actuator limitations and the additional constraints applied to resolving the redundancy are probably the most important factors. To resolve the extra D.O.F. introduced by the base mobility, additional constraint functions are proposed directly in the task space of the mobile manipulator. Finally, in two numerical examples involving a two-link planar manipulator mounted on a differentially driven mobile base, application of the method to determining the maximum allowable load is verified. The simulation results demonstrate that the maximum allowable load on a desired trajectory does not have a unique value and depends directly on the additional constraint functions applied to resolve the motion redundancy

  17. Maximum phytoplankton concentrations in the sea

    DEFF Research Database (Denmark)

    Jackson, G.A.; Kiørboe, Thomas

    2008-01-01

    A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collect...

  18. Maximum-Likelihood Detection Of Noncoherent CPM

    Science.gov (United States)

    Divsalar, Dariush; Simon, Marvin K.

    1993-01-01

    Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depends only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.

  19. Microprocessor-controlled step-down maximum-power-point tracker for photovoltaic systems

    Science.gov (United States)

    Mazmuder, R. K.; Haidar, S.

    1992-12-01

    An efficient maximum power point tracker (MPPT) has been developed and can be used with a photovoltaic (PV) array and a load that requires a lower operating voltage than the PV array voltage. The MPPT makes the PV array operate at the maximum power point (MPP) under all insolation and temperature conditions, which ensures that the maximum amount of available PV power is delivered to the load. The performance of the MPPT has been studied under different insolation levels.

  20. Maximum Work of Free-Piston Stirling Engine Generators

    Science.gov (United States)

    Kojima, Shinji

    2017-04-01

    Using the method of adjoint equations described in Ref. [1], we have calculated the maximum thermal efficiencies that are theoretically attainable by free-piston Stirling and Carnot engine generators by considering the work loss due to friction and Joule heat. The net work done by the Carnot cycle is negative even when the duration of heat addition is optimized to give the maximum amount of heat addition, which is the same situation for the Brayton cycle described in our previous paper. For the Stirling cycle, the net work done is positive, and the thermal efficiency is greater than that of the Otto cycle described in our previous paper by a factor of about 2.7-1.4 for compression ratios of 5-30. The Stirling cycle is much better than the Otto, Brayton, and Carnot cycles. We have found that the optimized piston trajectories of the isothermal, isobaric, and adiabatic processes are the same when the compression ratio and the maximum volume of the same working fluid of the three processes are the same, which has facilitated the present analysis because the optimized piston trajectories of the Carnot and Stirling cycles are the same as those of the Brayton and Otto cycles, respectively.

  1. The Betz-Joukowsky limit for the maximum power coefficient of wind turbines

    DEFF Research Database (Denmark)

    Okulov, Valery; van Kuik, G.A.M.

    2009-01-01

    The article addresses the history of an important scientific result in wind energy. The maximum efficiency of an ideal wind turbine rotor is well known as the ‘Betz limit’, named after the German scientist who formulated this maximum in 1920. Also Lanchester, a British scientist, is associated...
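
    The limit itself follows from maximizing the actuator-disc power coefficient Cp(a) = 4a(1 - a)^2 over the axial induction factor a; the short numerical check below simply scans a grid (nothing here is taken from the article apart from the well-known formula).

    ```python
    import numpy as np

    # Actuator-disc power coefficient Cp(a) = 4 a (1 - a)^2; its maximum is the
    # Betz-Joukowsky limit 16/27 ≈ 0.593, attained at a = 1/3.
    a = np.linspace(0.0, 0.5, 100001)
    cp = 4.0 * a * (1.0 - a) ** 2
    i = np.argmax(cp)
    print(f"a* ≈ {a[i]:.4f}, Cp_max ≈ {cp[i]:.4f}  (16/27 = {16/27:.4f})")
    ```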

  2. Measurement of the cross section of the reaction 7Be(p, γ)8B at low energies and implications for the problem of solar neutrinos

    International Nuclear Information System (INIS)

    Hammache, Fairouz

    1999-01-01

    The 8B produced inside the sun through the reaction 7Be(p,γ)8B is the main, and even unique, source of the high energy neutrinos detected in most solar neutrino detection experiments, except Gallex and Sage. These experiments have all measured a neutrino flux lower than the one predicted by solar models. Several explanations have been proposed to explain this deficit, but all require a precise knowledge of the cross section of the reaction 7Be(p,γ)8B, because the 8B neutrino flux is directly proportional to the rate of this reaction. The direct measurement of this cross section at the solar energy is impossible because of its low value (about 1 femto-barn). In order to get round this problem, the cross sections are measured at higher energy and extrapolated to the solar energy using a theoretical energy dependence. The 6 previous experimental determinations of the cross section were split into two distinct groups with differences of about 30%, which leads to an uncertainty of the same order on the high energy neutrino flux. The re-measurement of the cross section of this reaction with a better precision is thus of prime importance. A direct measurement of the cross section in the energy range between 0.35 and 1.4 MeV (cm) was performed first. These experiments permitted the precise measurement of each parameter involved in the determination of the cross section. Then, measurements of the cross section were carried out with the PAPAP accelerator at 185.8, 134.7 and 111.7 keV, the lowest centre-of-mass energies ever reached. The results are in excellent agreement with those obtained at higher energies. The value obtained by extrapolation of these data for the astrophysical factor S17(0) is 19.2 ± 1.3 eV b, which leads to a significant reduction of the uncertainty on the high energy 8B neutrino flux. (J.S.)

  3. Direct maximum parsimony phylogeny reconstruction from genotype data.

    Science.gov (United States)

    Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell

    2007-12-05

    Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data more commonly is available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data. Hence phylogenetic applications for autosomal data must therefore rely on other methods for first computationally inferring haplotypes from genotypes. In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower-bound on the number of mutations that the genetic region has undergone.

  4. A performance analysis for MHD power cycles operating at maximum power density

    International Nuclear Information System (INIS)

    Sahin, Bahri; Kodal, Ali; Yavuz, Hasbi

    1996-01-01

    An analysis of the thermal efficiency of a magnetohydrodynamic (MHD) power cycle at maximum power density for a constant velocity type MHD generator has been carried out. The irreversibilities at the compressor and the MHD generator are taken into account. The results obtained from power density analysis were compared with those of maximum power analysis. It is shown that by using the power density criteria the MHD cycle efficiency can be increased effectively. (author)

  5. MEGA5: Molecular Evolutionary Genetics Analysis Using Maximum Likelihood, Evolutionary Distance, and Maximum Parsimony Methods

    Science.gov (United States)

    Tamura, Koichiro; Peterson, Daniel; Peterson, Nicholas; Stecher, Glen; Nei, Masatoshi; Kumar, Sudhir

    2011-01-01

    Comparative analysis of molecular sequence data is essential for reconstructing the evolutionary histories of species and inferring the nature and extent of selective forces shaping the evolution of genes and species. Here, we announce the release of Molecular Evolutionary Genetics Analysis version 5 (MEGA5), which is a user-friendly software for mining online databases, building sequence alignments and phylogenetic trees, and using methods of evolutionary bioinformatics in basic biology, biomedicine, and evolution. The newest addition in MEGA5 is a collection of maximum likelihood (ML) analyses for inferring evolutionary trees, selecting best-fit substitution models (nucleotide or amino acid), inferring ancestral states and sequences (along with probabilities), and estimating evolutionary rates site-by-site. In computer simulation analyses, ML tree inference algorithms in MEGA5 compared favorably with other software packages in terms of computational efficiency and the accuracy of the estimates of phylogenetic trees, substitution parameters, and rate variation among sites. The MEGA user interface has now been enhanced to be activity driven to make it easier for the use of both beginners and experienced scientists. This version of MEGA is intended for the Windows platform, and it has been configured for effective use on Mac OS X and Linux desktops. It is available free of charge from http://www.megasoftware.net. PMID:21546353

  6. Cooperative binding of anti-tetanus toxin monoclonal antibodies: Implications for designing an efficient biclonal preparation to prevent tetanus toxin intoxication.

    Science.gov (United States)

    Lukic, Ivana; Filipovic, Ana; Inic-Kanada, Aleksandra; Marinkovic, Emilija; Miljkovic, Radmila; Stojanovic, Marijana

    2018-05-15

    Oligoclonal combinations of several monoclonal antibodies (MAbs) are being considered for the treatment of various infectious pathologies. These combinations are less sensitive to antigen structural changes than individual MAbs; at the same time, their characteristics can be more efficiently controlled than those of polyclonal antibodies. The main goal of this study was to evaluate the binding characteristics of six biclonal equimolar preparations (BEP) of tetanus toxin (TeNT)-specific MAbs and to investigate how the MAb combination influences the BEPs' protective capacity. We show that a combination of TeNT-specific MAbs, which not only bind TeNT but also exert positive cooperative effects, results in a BEP with superior binding characteristics and protective capacity, when compared with the individual component MAbs. Furthermore, we show that a MAb with only partial protective capacity but positive effects on the binding of the other BEP component can be used as a valuable constituent of the BEP. Copyright © 2018 Elsevier Ltd. All rights reserved.

  7. Kinking and Torsion Can Significantly Improve the Efficiency of Valveless Pumping in Periodically Compressed Tubular Conduits. Implications for Understanding of the Form-Function Relationship of Embryonic Heart Tubes

    Directory of Open Access Journals (Sweden)

    Florian Hiermeier

    2017-11-01

    Full Text Available Valveless pumping phenomena (peristalsis, Liebau-effect) can generate unidirectional fluid flow in periodically compressed tubular conduits. Early embryonic hearts are tubular conduits acting as valveless pumps. It is unclear whether such hearts work as peristaltic or Liebau-effect pumps. During the initial phase of its pumping activity, the originally straight embryonic heart is subjected to deforming forces that produce bending, twisting, kinking, and coiling. This deformation process is called cardiac looping. Its function is traditionally seen as generating a configuration needed for establishment of correct alignments of pulmonary and systemic flow pathways in the mature heart of lung-breathing vertebrates. This idea conflicts with the fact that cardiac looping occurs in all vertebrates, including gill-breathing fishes. We speculate that looping morphogenesis may improve the efficiency of valveless pumping. To test the physical plausibility of this hypothesis, we analyzed the pumping performance of a Liebau-effect pump in straight and looped (kinked) configurations. Compared to the straight configuration, the looped configuration significantly improved the pumping performance of our pump. This shows that looping can improve the efficiency of valveless pumping driven by the Liebau-effect. Further studies are needed to clarify whether this finding may have implications for understanding of the form-function relationship of embryonic hearts.

  8. Aerodynamic Limits on Large Civil Tiltrotor Sizing and Efficiency

    Science.gov (United States)

    Acree, C W.

    2014-01-01

    The NASA Large Civil Tiltrotor (2nd generation, or LCTR2) is a useful reference design for technology impact studies. The present paper takes a broad view of technology assessment by examining the extremes of what aerodynamic improvements might hope to accomplish. Performance was analyzed with aerodynamically idealized rotor, wing, and airframe, representing the physical limits of a large tiltrotor. The analysis was repeated with more realistic assumptions, which revealed that increased maximum rotor lift capability is potentially more effective in improving overall vehicle efficiency than higher rotor or wing efficiency. To balance these purely theoretical studies, some practical limitations on airframe layout are also discussed, along with their implications for wing design. Performance of a less efficient but more practical aircraft with non-tilting nacelles is presented.

  9. Effective Responder Communication Improves Efficiency and Psychological Outcomes in a Mass Decontamination Field Experiment: Implications for Public Behaviour in the Event of a Chemical Incident

    Science.gov (United States)

    Carter, Holly; Drury, John; Amlôt, Richard; Rubin, G. James; Williams, Richard

    2014-01-01

    The risk of incidents involving mass decontamination in response to a chemical, biological, radiological, or nuclear release has increased in recent years, due to technological advances, and the willingness of terrorists to use unconventional weapons. Planning for such incidents has focused on the technical issues involved, rather than on psychosocial concerns. This paper presents a novel experimental study, examining the effect of three different responder communication strategies on public experiences and behaviour during a mass decontamination field experiment. Specifically, the research examined the impact of social identity processes on the relationship between effective responder communication, and relevant outcome variables (e.g. public compliance, public anxiety, and co-operative public behaviour). All participants (n = 111) were asked to visualise that they had been involved in an incident involving mass decontamination, before undergoing the decontamination process, and receiving one of three different communication strategies: 1) ‘Theory-based communication’: Health-focused explanations about decontamination, and sufficient practical information; 2) ‘Standard practice communication’: No health-focused explanations about decontamination, sufficient practical information; 3) ‘Brief communication’: No health-focused explanations about decontamination, insufficient practical information. Four types of data were collected: timings of the decontamination process; observational data; and quantitative and qualitative self-report data. The communication strategy which resulted in the most efficient progression of participants through the decontamination process, as well as the fewest observations of non-compliance and confusion, was that which included both health-focused explanations about decontamination and sufficient practical information. Further, this strategy resulted in increased perceptions of responder legitimacy and increased

  10. Implications of Future Water Use Efficiency for Ecohydrological Responses to Climate Change and Spatial Heterogeneity of Atmospheric CO2 in China

    Directory of Open Access Journals (Sweden)

    Zhen Zhang

    2013-01-01

    Full Text Available As atmospheric carbon dioxide (CO2) increases substantially, the spatial distribution of atmospheric CO2 should be considered when estimating the effects of CO2 on the carbon and water cycle coupling of terrestrial ecosystems. To evaluate this effect on future ecohydrological processes, the spatial-temporal patterns of CO2 were established over 1951 - 2099 according to the IPCC emission scenarios SRES A2 and SRES B1. Thereafter, water use efficiency (WUE, i.e., Net Primary Production/Evapotranspiration) was used as an indicator to quantify the effects of climate change and uneven CO2 fertilization in China. We carried out several simulation experiments to estimate WUE under different future scenarios using a land process model (Integrated Biosphere Simulator, IBIS). Results indicated that the geographical distributions of averaged WUE differ considerably under a heterogeneous atmospheric CO2 condition. Under the SRES A2 scenario, WUE decreased slightly, by about 5%, in most areas of southeastern and northwestern China during the 2050s, while decreasing by approximately 15% in southeastern China during the 2090s. During the 2050s under the SRES B1 scenario, the rate of change of WUE was similar to that under the SRES A2 scenario, but WUE showed a more moderate decreasing trend than under the SRES A2 scenario. Overall, the ecosystems in mid- and low-latitude areas had a weakened capacity to resist extreme climate events such as drought. Conversely, the vegetation in boreal forests had an enhanced buffering capability to tolerate drought events.

  11. Effective responder communication improves efficiency and psychological outcomes in a mass decontamination field experiment: implications for public behaviour in the event of a chemical incident.

    Science.gov (United States)

    Carter, Holly; Drury, John; Amlôt, Richard; Rubin, G James; Williams, Richard

    2014-01-01

    The risk of incidents involving mass decontamination in response to a chemical, biological, radiological, or nuclear release has increased in recent years, due to technological advances, and the willingness of terrorists to use unconventional weapons. Planning for such incidents has focused on the technical issues involved, rather than on psychosocial concerns. This paper presents a novel experimental study, examining the effect of three different responder communication strategies on public experiences and behaviour during a mass decontamination field experiment. Specifically, the research examined the impact of social identity processes on the relationship between effective responder communication, and relevant outcome variables (e.g. public compliance, public anxiety, and co-operative public behaviour). All participants (n = 111) were asked to visualise that they had been involved in an incident involving mass decontamination, before undergoing the decontamination process, and receiving one of three different communication strategies: 1) 'Theory-based communication': Health-focused explanations about decontamination, and sufficient practical information; 2) 'Standard practice communication': No health-focused explanations about decontamination, sufficient practical information; 3) 'Brief communication': No health-focused explanations about decontamination, insufficient practical information. Four types of data were collected: timings of the decontamination process; observational data; and quantitative and qualitative self-report data. The communication strategy which resulted in the most efficient progression of participants through the decontamination process, as well as the fewest observations of non-compliance and confusion, was that which included both health-focused explanations about decontamination and sufficient practical information. Further, this strategy resulted in increased perceptions of responder legitimacy and increased identification with

  12. Multiple policies to enhance prescribing efficiency for established medicines in Europe with a particular focus on demand-side measures: findings and future implications

    Directory of Open Access Journals (Sweden)

    Brian eGodman

    2014-06-01

    Full Text Available Introduction: The appreciable growth in pharmaceutical expenditure has resulted in multiple initiatives across Europe to lower generic prices and enhance their utilisation. However, considerable variation in their use and prices remains. Objective: Assess the influence of multiple supply- and demand-side initiatives across Europe for established medicines to enhance prescribing efficiency before a decision to prescribe a particular medicine. Subsequently utilise the findings to suggest potential future initiatives that countries could consider. Method: Analysis of different methodologies involving cross-national and single-country retrospective observational studies on reimbursed use and expenditure of PPIs, statins and renin-angiotensin inhibitor drugs among European countries. Results: The nature and intensity of the various initiatives appreciably influenced prescribing behaviour and expenditure; e.g. multiple measures resulted in reimbursed expenditure for PPIs in Scotland in 2010 being 56% below 2001 levels despite a 3-fold increase in utilisation, and in the Netherlands, PPI expenditure fell by 58% in 2010 vs. 2000 despite a 3-fold increase in utilisation. A similar picture was seen with prescribing restrictions, i.e. (i) more aggressive follow-up of prescribing restrictions for patented statins and ARBs resulted in a greater reduction in the utilisation of patented statins in Austria vs. Norway and lower utilisation of patented ARBs vs. generic ACEIs in Croatia than in Austria. However, restrictions on esomeprazole had limited impact in Norway, where the first prescription or recommendation is made in hospital, where restrictions do not apply. Similar findings were seen when generic losartan became available in Western Europe. Conclusions: Multiple demand-side measures are needed to influence prescribing patterns. When combined with supply-side measures, activities can realise appreciable savings. Health authorities cannot rely on a ‘spill over’ effect between classes to affect

  13. Effective responder communication improves efficiency and psychological outcomes in a mass decontamination field experiment: implications for public behaviour in the event of a chemical incident.

    Directory of Open Access Journals (Sweden)

    Holly Carter

    Full Text Available The risk of incidents involving mass decontamination in response to a chemical, biological, radiological, or nuclear release has increased in recent years, due to technological advances, and the willingness of terrorists to use unconventional weapons. Planning for such incidents has focused on the technical issues involved, rather than on psychosocial concerns. This paper presents a novel experimental study, examining the effect of three different responder communication strategies on public experiences and behaviour during a mass decontamination field experiment. Specifically, the research examined the impact of social identity processes on the relationship between effective responder communication, and relevant outcome variables (e.g. public compliance, public anxiety, and co-operative public behaviour). All participants (n = 111) were asked to visualise that they had been involved in an incident involving mass decontamination, before undergoing the decontamination process, and receiving one of three different communication strategies: 1) 'Theory-based communication': Health-focused explanations about decontamination, and sufficient practical information; 2) 'Standard practice communication': No health-focused explanations about decontamination, sufficient practical information; 3) 'Brief communication': No health-focused explanations about decontamination, insufficient practical information. Four types of data were collected: timings of the decontamination process; observational data; and quantitative and qualitative self-report data. The communication strategy which resulted in the most efficient progression of participants through the decontamination process, as well as the fewest observations of non-compliance and confusion, was that which included both health-focused explanations about decontamination and sufficient practical information. Further, this strategy resulted in increased perceptions of responder legitimacy and increased

  14. Novel TPPO Based Maximum Power Point Method for Photovoltaic System

    Directory of Open Access Journals (Sweden)

    ABBASI, M. A.

    2017-08-01

    Full Text Available Photovoltaic (PV) systems have great potential and are nowadays installed more than other renewable energy sources. However, a PV system cannot perform optimally due to its strong dependence on climate conditions. Because of this dependency, the PV system does not always operate at its maximum power point (MPP). Many MPP tracking methods have been proposed for this purpose. One of these is the Perturb and Observe method (P&O), which is the most popular due to its simplicity, low cost and fast tracking. However, it deviates from the MPP in continuously changing weather conditions, especially under rapidly changing irradiance. A new Maximum Power Point Tracking (MPPT) method, Tetra Point Perturb and Observe (TPPO), has been proposed to improve PV system performance in changing irradiance conditions, and the effects of varying irradiance on the characteristic curves of a PV array module are delineated. The proposed MPPT method has shown better results in increasing the efficiency of a PV system.
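
    For orientation, a minimal sketch of the classic P&O loop that TPPO refines is shown below; the function names and the toy pv_power() curve are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the classic Perturb and Observe (P&O) loop.
def perturb_and_observe(pv_power, v_init=30.0, dv=0.5, steps=200):
    """Track the maximum power point by perturbing the operating voltage
    and reversing the perturbation direction whenever power drops."""
    v = v_init
    p_prev = pv_power(v)
    direction = 1.0
    for _ in range(steps):
        v += direction * dv          # perturb the operating voltage
        p = pv_power(v)              # observe the resulting power
        if p < p_prev:               # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v                         # voltage near the maximum power point


if __name__ == "__main__":
    # Toy PV power curve with a single maximum around 35 V (illustrative only).
    pv = lambda v: max(0.0, 100.0 - 0.4 * (v - 35.0) ** 2)
    print(round(perturb_and_observe(pv), 1))   # within one step of the MPP at 35 V
```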

  15. Maximum power point tracker for photovoltaic power plants

    Science.gov (United States)

    Arcidiacono, V.; Corsi, S.; Lambri, L.

    The paper describes two different closed-loop control criteria for the maximum power point tracking of the voltage-current characteristic of a photovoltaic generator. The two criteria are discussed and compared, inter alia, with regard to the setting-up problems that they pose. Although a detailed analysis is not embarked upon, the paper also provides some quantitative information on the energy advantages obtained by using electronic maximum power point tracking systems, as compared with the situation in which the point of operation of the photovoltaic generator is not controlled at all. Lastly, the paper presents two high-efficiency MPPT converters for experimental photovoltaic plants of the stand-alone and the grid-interconnected type.

  16. Maximum gravitational redshift of white dwarfs

    International Nuclear Information System (INIS)

    Shapiro, S.L.; Teukolsky, S.A.

    1976-01-01

    The stability of uniformly rotating, cold white dwarfs is examined in the framework of the Parametrized Post-Newtonian (PPN) formalism of Will and Nordtvedt. The maximum central density and gravitational redshift of a white dwarf are determined as functions of five of the nine PPN parameters (γ, β, ζ₂, ζ₃, and ζ₄), the total angular momentum J, and the composition of the star. General relativity predicts that the maximum redshift is 571 km s⁻¹ for nonrotating carbon and helium dwarfs, but is lower for stars composed of heavier nuclei. Uniform rotation can increase the maximum redshift to 647 km s⁻¹ for carbon stars (the neutronization limit) and to 893 km s⁻¹ for helium stars (the uniform rotation limit). The redshift distribution of a larger sample of white dwarfs may help determine the composition of their cores.
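
    For reference (standard weak-field relation, not quoted from the record), the redshift values above are velocity equivalents v = cz of the gravitational redshift:

```latex
% Weak-field gravitational redshift of light escaping from a star of mass M
% and radius R, quoted as a velocity v = cz, as is conventional for white dwarfs.
z \simeq \frac{GM}{Rc^{2}}, \qquad v = cz \simeq \frac{GM}{Rc}.
```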

  17. Maximum a posteriori probability estimates in infinite-dimensional Bayesian inverse problems

    International Nuclear Information System (INIS)

    Helin, T; Burger, M

    2015-01-01

    A demanding challenge in Bayesian inversion is to efficiently characterize the posterior distribution. This task is problematic especially in high-dimensional non-Gaussian problems, where the structure of the posterior can be very chaotic and difficult to analyse. Current inverse problem literature often approaches the problem by considering suitable point estimators for the task. Typically the choice is made between the maximum a posteriori (MAP) and the conditional mean (CM) estimate. The benefits of either choice are not well understood from the perspective of infinite-dimensional theory. Most importantly, there exists no general scheme regarding how to connect the topological description of a MAP estimate to a variational problem. The recent results by Dashti and others (Dashti et al 2013 Inverse Problems 29 095017) resolve this issue for nonlinear inverse problems in the Gaussian framework. In this work we improve the current understanding by introducing a novel concept called the weak MAP (wMAP) estimate. We show that any MAP estimate in the sense of Dashti et al (2013 Inverse Problems 29 095017) is a wMAP estimate and, moreover, how the wMAP estimate connects to a variational formulation in general infinite-dimensional non-Gaussian problems. The variational formulation makes it possible to study many properties of the infinite-dimensional MAP estimate that were earlier impossible to study. In a recent work by the authors (Burger and Lucka 2014 Maximum a posteriori estimates in linear inverse problems with log-concave priors are proper Bayes estimators, preprint) the MAP estimator was studied in the context of the Bayes cost method. Using Bregman distances, proper convex Bayes cost functions were introduced for which the MAP estimator is the Bayes estimator. Here, we generalize these results to the infinite-dimensional setting. Moreover, we discuss the implications of our results for some examples of prior models such as the Besov prior and hierarchical prior. (paper)
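
    As a hedged orientation aid (our notation, not the paper's), in the Gaussian-prior setting of Dashti et al. the MAP estimate can be characterized as the minimizer of an Onsager-Machlup/Tikhonov-type functional:

```latex
% Hypothetical notation: forward map G, data y, noise covariance \Gamma, and
% the Cameron-Martin space E of the Gaussian prior. In the setting of
% Dashti et al. (2013), MAP estimates minimise this Onsager-Machlup functional.
I(u) = \tfrac{1}{2}\bigl\|\Gamma^{-1/2}\bigl(y - G(u)\bigr)\bigr\|^{2}
       + \tfrac{1}{2}\,\|u\|_{E}^{2},
\qquad
u_{\mathrm{MAP}} \in \operatorname*{arg\,min}_{u \in E} I(u).
```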

  18. Maximum entropy analysis of EGRET data

    DEFF Research Database (Denmark)

    Pohl, M.; Strong, A.W.

    1997-01-01

    EGRET data are usually analysed on the basis of the Maximum-Likelihood method \cite{ma96} in a search for point sources in excess of a model for the background radiation (e.g. \cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background like the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.

  19. The Maximum Resource Bin Packing Problem

    DEFF Research Database (Denmark)

    Boyar, J.; Epstein, L.; Favrholdt, L.M.

    2006-01-01

    Usually, for bin packing problems, we try to minimize the number of bins used or in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used...... algorithms, First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find...

  20. Shower maximum detector for SDC calorimetry

    International Nuclear Information System (INIS)

    Ernwein, J.

    1994-01-01

    A prototype for the SDC end-cap (EM) calorimeter complete with a pre-shower and a shower maximum detector was tested in beams of electrons and π's at CERN by an SDC subsystem group. The prototype was manufactured from scintillator tiles and strips read out with 1 mm diameter wavelength-shifting fibers. The design and construction of the shower maximum detector are described, and results of laboratory tests on light yield and performance of the scintillator-fiber system are given. Preliminary results on energy and position measurements with the shower max detector in the test beam are shown. (authors). 4 refs., 5 figs

  1. Topics in Bayesian statistics and maximum entropy

    International Nuclear Information System (INIS)

    Mutihac, R.; Cicuttin, A.; Cerdeira, A.; Stanciulescu, C.

    1998-12-01

    Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered as the best justification of Bayesian analysis and maximum entropy principle applied in natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)

  2. Density estimation by maximum quantum entropy

    International Nuclear Information System (INIS)

    Silver, R.N.; Wallstrom, T.; Martz, H.F.

    1993-01-01

    A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The information divergence enforces global smoothing toward default models, convexity, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing is enforced by constraints on differential operators. The linear response of the estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood (evidence). The method is demonstrated on textbook data sets

  3. MPBoot: fast phylogenetic maximum parsimony tree inference and bootstrap approximation.

    Science.gov (United States)

    Hoang, Diep Thi; Vinh, Le Sy; Flouri, Tomáš; Stamatakis, Alexandros; von Haeseler, Arndt; Minh, Bui Quang

    2018-02-02

    The nonparametric bootstrap is widely used to measure the branch support of phylogenetic trees. However, bootstrapping is computationally expensive and remains a bottleneck in phylogenetic analyses. Recently, an ultrafast bootstrap approximation (UFBoot) approach was proposed for maximum likelihood analyses. However, such an approach is still missing for maximum parsimony. To close this gap we present MPBoot, an adaptation and extension of UFBoot to compute branch supports under the maximum parsimony principle. MPBoot works for both uniform and non-uniform cost matrices. Our analyses on biological DNA and protein data showed that under uniform cost matrices, MPBoot runs on average 4.7 (DNA) to 7 times (protein data) (range: 1.2-20.7) faster than the standard parsimony bootstrap implemented in PAUP*; but 1.6 (DNA) to 4.1 times (protein data) slower than the standard bootstrap with a fast search routine in TNT (fast-TNT). However, for non-uniform cost matrices MPBoot is 5 (DNA) to 13 times (protein data) (range: 0.3-63.9) faster than fast-TNT. We note that MPBoot achieves better scores more frequently than PAUP* and fast-TNT. However, this effect is less pronounced if an intensive but slower search in TNT is invoked. Moreover, experiments on large-scale simulated data show that while both PAUP* and TNT bootstrap estimates are too conservative, MPBoot bootstrap estimates appear more unbiased. MPBoot provides an efficient alternative to the standard maximum parsimony bootstrap procedure. It shows favorable performance in terms of run time, the capability of finding a maximum parsimony tree, and high bootstrap accuracy on simulated as well as empirical data sets. MPBoot is easy-to-use, open-source and available at http://www.cibiv.at/software/mpboot .

  4. Maximum Likelihood Joint Tracking and Association in Strong Clutter

    Directory of Open Access Journals (Sweden)

    Leonid I. Perlovsky

    2013-01-01

    Full Text Available We have developed a maximum likelihood formulation for a joint detection, tracking and association problem. An efficient non-combinatorial algorithm for this problem is developed in case of strong clutter for radar data. By using an iterative procedure of the dynamic logic process “from vague-to-crisp” explained in the paper, the new tracker overcomes the combinatorial complexity of tracking in highly-cluttered scenarios and results in an orders-of-magnitude improvement in signal-to-clutter ratio.

  5. Implicative Algebras

    African Journals Online (AJOL)

    Tadesse

    In this paper we introduce the concept of implicative algebras, which is an equivalent definition of the lattice implication algebra of Xu (1993), and further we prove that it is a regular Autometrized Algebra. Further we remark that the binary operation → on a lattice implication algebra can never be associative. Key words: Implicative ...

  6. Application of Maximum Entropy Distribution to the Statistical Properties of Wave Groups

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    New distributions for the statistics of wave groups, based on the maximum entropy principle, are presented. The maximum entropy distributions appear to be superior to conventional distributions when applied to a limited amount of information. Their application to wave group properties shows the effectiveness of the maximum entropy distribution. An FFT filtering method is employed to obtain the wave envelope quickly and efficiently. Comparisons of both the maximum entropy distribution and the distribution of Longuet-Higgins (1984) with laboratory wind-wave data show that the former gives a better fit.
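
    A minimal sketch of an FFT-based envelope extraction of the kind referred to above (our illustration, not the authors' code; the toy wave record is an assumption):

```python
# Envelope of a real record via the FFT-based analytic signal: suppress
# negative frequencies, double positive frequencies, take the magnitude.
import numpy as np

def fft_envelope(x):
    """Return the envelope |analytic signal| of a real, zero-mean record x."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0                      # keep the DC bin
    if n % 2 == 0:
        h[n // 2] = 1.0             # keep the Nyquist bin
        h[1:n // 2] = 2.0           # double positive frequencies
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(X * h)   # negative frequencies suppressed
    return np.abs(analytic)

if __name__ == "__main__":
    t = np.linspace(0.0, 60.0, 3000)
    record = (1.0 + 0.5 * np.sin(0.2 * t)) * np.sin(2.0 * np.pi * t)  # grouped waves
    env = fft_envelope(record - record.mean())
    print(round(float(env.max()), 2))   # roughly the largest group amplitude (about 1.5)
```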

  7. Nonsymmetric entropy and maximum nonsymmetric entropy principle

    International Nuclear Information System (INIS)

    Liu Chengshi

    2009-01-01

    Within the framework of a statistical model, the concept of nonsymmetric entropy, which generalizes the concepts of Boltzmann's entropy and Shannon's entropy, is defined. A maximum nonsymmetric entropy principle is proved. Some important distribution laws, such as the power law, can be derived from this principle naturally. In particular, nonsymmetric entropy is more convenient than other entropies, such as Tsallis entropy, in deriving power laws.

  8. Maximum speed of dewetting on a fiber

    NARCIS (Netherlands)

    Chan, Tak Shing; Gueudre, Thomas; Snoeijer, Jacobus Hendrikus

    2011-01-01

    A solid object can be coated by a nonwetting liquid since a receding contact line cannot exceed a critical speed. We theoretically investigate this forced wetting transition for axisymmetric menisci on fibers of varying radii. First, we use a matched asymptotic expansion and derive the maximum speed

  9. Maximum potential preventive effect of hip protectors

    NARCIS (Netherlands)

    van Schoor, N.M.; Smit, J.H.; Bouter, L.M.; Veenings, B.; Asma, G.B.; Lips, P.T.A.M.

    2007-01-01

    OBJECTIVES: To estimate the maximum potential preventive effect of hip protectors in older persons living in the community or homes for the elderly. DESIGN: Observational cohort study. SETTING: Emergency departments in the Netherlands. PARTICIPANTS: Hip fracture patients aged 70 and older who

  10. Maximum gain of Yagi-Uda arrays

    DEFF Research Database (Denmark)

    Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.

    1971-01-01

    Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum. Yagi–Uda arrays with equal and unequal spacing have also been optimised with experimental verification.

  11. correlation between maximum dry density and cohesion

    African Journals Online (AJOL)

    HOD

    represents maximum dry density, signifies plastic limit and is liquid limit. Researchers [6, 7] estimate compaction parameters. Aside from the correlation existing between compaction parameters and other physical quantities there are some other correlations that have been investigated by other researchers. The well-known.

  12. Weak scale from the maximum entropy principle

    Science.gov (United States)

    Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu

    2015-03-01

    The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S³ universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ∼ T_BBN² / (M_pl y_e⁵), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.

  13. The maximum-entropy method in superspace

    Czech Academy of Sciences Publication Activity Database

    van Smaalen, S.; Palatinus, Lukáš; Schneider, M.

    2003-01-01

    Roč. 59, - (2003), s. 459-469 ISSN 0108-7673 Grant - others:DFG(DE) XX Institutional research plan: CEZ:AV0Z1010914 Keywords : maximum-entropy method, * aperiodic crystals * electron density Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 1.558, year: 2003

  14. Achieving maximum sustainable yield in mixed fisheries

    NARCIS (Netherlands)

    Ulrich, Clara; Vermard, Youen; Dolder, Paul J.; Brunel, Thomas; Jardim, Ernesto; Holmes, Steven J.; Kempf, Alexander; Mortensen, Lars O.; Poos, Jan Jaap; Rindorf, Anna

    2017-01-01

    Achieving single species maximum sustainable yield (MSY) in complex and dynamic fisheries targeting multiple species (mixed fisheries) is challenging because achieving the objective for one species may mean missing the objective for another. The North Sea mixed fisheries are a representative example

  15. 5 CFR 534.203 - Maximum stipends.

    Science.gov (United States)

    2010-01-01

    ... maximum stipend established under this section. (e) A trainee at a non-Federal hospital, clinic, or medical or dental laboratory who is assigned to a Federal hospital, clinic, or medical or dental... Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY UNDER OTHER SYSTEMS Student...

  16. Minimal length, Friedmann equations and maximum density

    Energy Technology Data Exchange (ETDEWEB)

    Awad, Adel [Center for Theoretical Physics, British University of Egypt,Sherouk City 11837, P.O. Box 43 (Egypt); Department of Physics, Faculty of Science, Ain Shams University,Cairo, 11566 (Egypt); Ali, Ahmed Farag [Centre for Fundamental Physics, Zewail City of Science and Technology,Sheikh Zayed, 12588, Giza (Egypt); Department of Physics, Faculty of Science, Benha University,Benha, 13518 (Egypt)

    2014-06-16

    Inspired by Jacobson’s thermodynamic approach, Cai et al. have shown the emergence of the Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation (http://dx.doi.org/10.1103/PhysRevD.75.084003) of the Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure p(ρ,a) leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature k. As an example we study the evolution of the equation of state p=ωρ through its phase-space diagram to show the existence of a maximum energy which is reachable in a finite time.
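
    For orientation, the standard (unmodified) Friedmann equation that the entropy-area-law generalization deforms is recalled below; this is textbook material, not quoted from the paper:

```latex
% Standard Friedmann equation for an FRW universe with scale factor a(t),
% Hubble rate H = \dot a / a, energy density \rho and spatial curvature k;
% the paper derives a GUP-corrected generalisation of this relation from
% the first law of thermodynamics applied to the apparent horizon.
H^{2} = \left(\frac{\dot a}{a}\right)^{2} = \frac{8\pi G}{3}\,\rho - \frac{k}{a^{2}}.
```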

  17. Maximum-performance fiber-optic irradiation with nonimaging designs.

    Science.gov (United States)

    Fang, Y; Feuermann, D; Gordon, J M

    1997-10-01

    A range of practical nonimaging designs for optical fiber applications is presented. Rays emerging from a fiber over a restricted angular range (small numerical aperture) are needed to illuminate a small near-field detector at maximum radiative efficiency. These designs range from pure reflector (all-mirror), to pure dielectric (refractive and based on total internal reflection) to lens-mirror combinations. Sample designs are shown for a specific infrared fiber-optic irradiation problem of practical interest. Optical performance is checked with computer three-dimensional ray tracing. Compared with conventional imaging solutions, nonimaging units offer considerable practical advantages in compactness and ease of alignment as well as noticeably superior radiative efficiency.

  18. Mothers' Maximum Drinks Ever Consumed in 24 Hours Predicts Mental Health Problems in Adolescent Offspring

    Science.gov (United States)

    Malone, Stephen M.; McGue, Matt; Iacono, William G.

    2010-01-01

    Background: The maximum number of alcoholic drinks consumed in a single 24-hr period is an alcoholism-related phenotype with both face and empirical validity. It has been associated with severity of withdrawal symptoms and sensitivity to alcohol, genes implicated in alcohol metabolism, and amplitude of a measure of brain activity associated with…

  19. Exact parallel maximum clique algorithm for general and protein graphs.

    Science.gov (United States)

    Depolli, Matjaž; Konc, Janez; Rozman, Kati; Trobec, Roman; Janežič, Dušanka

    2013-09-23

    A new exact parallel maximum clique algorithm, MaxCliquePara, which finds the maximum clique (the fully connected subgraph) in undirected general and protein graphs, is presented. First, a new branch-and-bound algorithm for finding a maximum clique on a single computer core, which builds on ideas presented in two published state-of-the-art sequential algorithms, is implemented. The new sequential MaxCliqueSeq algorithm is faster than the reference algorithms both on DIMACS benchmark graphs and on protein-derived product graphs used for protein structural comparisons. Next, the MaxCliqueSeq algorithm is parallelized by splitting the branch-and-bound search tree across multiple cores, resulting in the MaxCliquePara algorithm. The ability to exploit all cores efficiently makes the new parallel MaxCliquePara algorithm markedly superior to other tested algorithms. On a 12-core computer, the parallelization provides up to 2 orders of magnitude faster execution on the large DIMACS benchmark graphs and up to an order of magnitude faster execution on protein product graphs. The algorithms are freely accessible at http://commsys.ijs.si/~matjaz/maxclique.
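
    A toy branch-and-bound maximum clique search in the spirit of such sequential algorithms is sketched below; this is a hypothetical illustration, not the MaxCliqueSeq/MaxCliquePara code, which relies on much stronger colouring-based bounds.

```python
# Toy branch-and-bound maximum clique search on an undirected graph.
def max_clique(adj):
    """adj: dict mapping each vertex to the set of its neighbours."""
    best = []

    def expand(clique, candidates):
        nonlocal best
        if len(clique) > len(best):
            best = list(clique)                       # record the incumbent
        candidates = set(candidates)
        while candidates:
            if len(clique) + len(candidates) <= len(best):
                return                                # bound: branch cannot improve
            v = candidates.pop()
            expand(clique + [v], candidates & adj[v]) # grow the clique with v
            # cliques without v are handled by the remaining iterations

    expand([], set(adj))
    return best


if __name__ == "__main__":
    g = {1: {2, 3, 4}, 2: {1, 3}, 3: {1, 2, 4}, 4: {1, 3}, 5: set()}
    print(max_clique(g))   # a clique of size 3, e.g. [1, 2, 3] (order may vary)
```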

  20. PTree: pattern-based, stochastic search for maximum parsimony phylogenies

    Directory of Open Access Journals (Sweden)

    Ivan Gregor

    2013-06-01

    Full Text Available Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we describe a stochastic search method for a maximum parsimony tree, implemented in a software package we named PTree. Our method is based on a new pattern-based technique that enables us to infer intermediate sequences efficiently where the incorporation of these sequences in the current tree topology yields a phylogenetic tree with a lower cost. Evaluation across multiple datasets showed that our method is comparable to the algorithms implemented in PAUP* or TNT, which are widely used by the bioinformatics community, in terms of topological accuracy and runtime. We show that our method can process large-scale datasets of 1,000–8,000 sequences. We believe that our novel pattern-based method enriches the current set of tools and methods for phylogenetic tree inference. The software is available under: http://algbio.cs.uni-duesseldorf.de/webapps/wa-download/.

  1. PTree: pattern-based, stochastic search for maximum parsimony phylogenies.

    Science.gov (United States)

    Gregor, Ivan; Steinbrück, Lars; McHardy, Alice C

    2013-01-01

    Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we describe a stochastic search method for a maximum parsimony tree, implemented in a software package we named PTree. Our method is based on a new pattern-based technique that enables us to infer intermediate sequences efficiently where the incorporation of these sequences in the current tree topology yields a phylogenetic tree with a lower cost. Evaluation across multiple datasets showed that our method is comparable to the algorithms implemented in PAUP* or TNT, which are widely used by the bioinformatics community, in terms of topological accuracy and runtime. We show that our method can process large-scale datasets of 1,000-8,000 sequences. We believe that our novel pattern-based method enriches the current set of tools and methods for phylogenetic tree inference. The software is available under: http://algbio.cs.uni-duesseldorf.de/webapps/wa-download/.

  2. Noise and physical limits to maximum resolution of PET images

    Energy Technology Data Exchange (ETDEWEB)

    Herraiz, J.L.; Espana, S. [Dpto. Fisica Atomica, Molecular y Nuclear, Facultad de Ciencias Fisicas, Universidad Complutense de Madrid, Avda. Complutense s/n, E-28040 Madrid (Spain); Vicente, E.; Vaquero, J.J.; Desco, M. [Unidad de Medicina y Cirugia Experimental, Hospital GU 'Gregorio Maranon', E-28007 Madrid (Spain); Udias, J.M. [Dpto. Fisica Atomica, Molecular y Nuclear, Facultad de Ciencias Fisicas, Universidad Complutense de Madrid, Avda. Complutense s/n, E-28040 Madrid (Spain)], E-mail: jose@nuc2.fis.ucm.es

    2007-10-01

    In this work we show that there is a limit for the maximum resolution achievable with a high resolution PET scanner, as well as for the best signal-to-noise ratio, which are ultimately related to the physical effects involved in the emission and detection of the radiation and thus they cannot be overcome with any particular reconstruction method. These effects prevent the spatial high frequency components of the imaged structures to be recorded by the scanner. Therefore, the information encoded in these high frequencies cannot be recovered by any reconstruction technique. Within this framework, we have determined the maximum resolution achievable for a given acquisition as a function of data statistics and scanner parameters, like the size of the crystals or the inter-crystal scatter. In particular, the noise level in the data as a limitation factor to yield high-resolution images in tomographs with small crystal sizes is outlined. These results have implications regarding how to decide the optimal number of voxels of the reconstructed image or how to design better PET scanners.

  3. Noise and physical limits to maximum resolution of PET images

    International Nuclear Information System (INIS)

    Herraiz, J.L.; Espana, S.; Vicente, E.; Vaquero, J.J.; Desco, M.; Udias, J.M.

    2007-01-01

    In this work we show that there is a limit for the maximum resolution achievable with a high resolution PET scanner, as well as for the best signal-to-noise ratio, which are ultimately related to the physical effects involved in the emission and detection of the radiation and thus they cannot be overcome with any particular reconstruction method. These effects prevent the spatial high frequency components of the imaged structures to be recorded by the scanner. Therefore, the information encoded in these high frequencies cannot be recovered by any reconstruction technique. Within this framework, we have determined the maximum resolution achievable for a given acquisition as a function of data statistics and scanner parameters, like the size of the crystals or the inter-crystal scatter. In particular, the noise level in the data as a limitation factor to yield high-resolution images in tomographs with small crystal sizes is outlined. These results have implications regarding how to decide the optimal number of voxels of the reconstructed image or how to design better PET scanners

  4. Maximum concentrations at work and maximum biologically tolerable concentration for working materials 1991

    International Nuclear Information System (INIS)

    1991-01-01

    The meaning of the term 'maximum concentration at work' in regard to various pollutants is discussed. Specifically, a number of dusts and smokes are dealt with. The evaluation criteria for maximum biologically tolerable concentrations for working materials are indicated. The working materials in question are carcinogenic substances or substances liable to cause allergies or to mutate the genome. (VT) [de]

  5. 75 FR 43840 - Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum Civil Monetary Penalties for...

    Science.gov (United States)

    2010-07-27

    ...-17530; Notice No. 2] RIN 2130-ZA03 Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum... remains at $250. These adjustments are required by the Federal Civil Penalties Inflation Adjustment Act [email protected] . SUPPLEMENTARY INFORMATION: The Federal Civil Penalties Inflation Adjustment Act of 1990...

  6. Zipf's law, power laws and maximum entropy

    International Nuclear Information System (INIS)

    Visser, Matt

    2013-01-01

    Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified. (paper)
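
    As a hedged illustration of the simpler route described above (standard maximum-entropy calculus, not the RGF model itself): maximizing the Shannon entropy subject to a fixed mean of ln k yields a power law directly.

```latex
% Maximise the Shannon entropy S = -\sum_k p_k \ln p_k subject to
% normalisation \sum_k p_k = 1 and a fixed mean logarithm \sum_k p_k \ln k.
% Stationarity of the Lagrangian gives a pure power law:
-\ln p_k - 1 - \lambda - \alpha \ln k = 0
\quad\Longrightarrow\quad
p_k = \frac{k^{-\alpha}}{\sum_{j} j^{-\alpha}} \;\propto\; k^{-\alpha}.
```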

  7. Maximum-entropy description of animal movement.

    Science.gov (United States)

    Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M

    2015-03-01

    We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.

  8. Pareto versus lognormal: a maximum entropy test.

    Science.gov (United States)

    Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano

    2011-08-01

    It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.

  9. Maximum likelihood estimation for integrated diffusion processes

    DEFF Research Database (Denmark)

    Baltazar-Larios, Fernando; Sørensen, Michael

    We propose a method for obtaining maximum likelihood estimates of parameters in diffusion models when the data is a discrete time sample of the integral of the process, while no direct observations of the process itself are available. The data are, moreover, assumed to be contaminated by measurement errors. Integrated volatility is an example of this type of observations. Another example is ice-core data on oxygen isotopes used to investigate paleo-temperatures. The data can be viewed as incomplete observations of a model with a tractable likelihood function. Therefore we propose a simulated EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works well.

  10. A Maximum Radius for Habitable Planets.

    Science.gov (United States)

    Alibert, Yann

    2015-09-01

    We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: (1) a surface temperature and pressure compatible with the existence of liquid water, and (2) no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 M_Earth), the overall maximum radius that a planet can have varies between 1.8 and 2.3 R_Earth. This radius is reduced when considering planets with higher Fe/Si ratios, and when taking into account irradiation effects on the structure of the gas envelope.

  11. Maximum parsimony on subsets of taxa.

    Science.gov (United States)

    Fischer, Mareike; Thatte, Bhalchandra D

    2009-09-21

    In this paper we investigate mathematical questions concerning the reliability (reconstruction accuracy) of Fitch's maximum parsimony algorithm for reconstructing the ancestral state given a phylogenetic tree and a character. In particular, we consider the question whether the maximum parsimony method applied to a subset of taxa can reconstruct the ancestral state of the root more accurately than when applied to all taxa, and we give an example showing that this indeed is possible. A surprising feature of our example is that ignoring a taxon closer to the root improves the reliability of the method. On the other hand, in the case of the two-state symmetric substitution model, we answer affirmatively a conjecture of Li, Steel and Zhang which states that under a molecular clock the probability that the state at a single taxon is a correct guess of the ancestral state is a lower bound on the reconstruction accuracy of Fitch's method applied to all taxa.
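
    A minimal sketch of Fitch's bottom-up pass on a rooted binary tree, the algorithm whose reconstruction accuracy is studied above; the nested-tuple tree encoding is an assumption for illustration.

```python
# Fitch's maximum parsimony pass for a single character on a rooted binary tree.
def fitch(tree):
    """Return (state set at the root, parsimony score) for one character."""
    if isinstance(tree, str):            # leaf: observed state, zero changes
        return {tree}, 0
    left, right = tree
    left_set, left_cost = fitch(left)
    right_set, right_cost = fitch(right)
    common = left_set & right_set
    if common:                           # non-empty intersection: no extra change
        return common, left_cost + right_cost
    return left_set | right_set, left_cost + right_cost + 1


if __name__ == "__main__":
    tree = (("A", "A"), ("C", ("A", "C")))
    print(fitch(tree))                   # ({'A', 'C'}, 2): two state changes needed
```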

  12. Higher renewable energy integration into the existing energy system of Finland – Is there any maximum limit?

    International Nuclear Information System (INIS)

    Zakeri, Behnam; Syri, Sanna; Rinne, Samuli

    2015-01-01

    Finland is to increase the share of RES (renewable energy sources) up to 38% of final energy consumption by 2020. While benefiting from local biomass resources, the Finnish energy system is deemed able to achieve this goal, and increasing the share of other, intermittent renewables is under development, namely wind power and solar energy. Yet the maximum flexibility of the existing energy system in integrating renewable energy has not been investigated, which is an important step before undertaking new renewable energy obligations. This study aims at filling this gap through hourly analysis and comprehensive modeling of the energy system, including electricity, heat, and transportation, by employing the EnergyPLAN tool. Focusing on technical and economic implications, we assess the maximum potential of different RESs separately (including bioenergy, hydropower, wind power, solar heating and PV, and heat pumps), as well as an optimal mix of different technologies. Furthermore, we propose a new index for assessing the maximum flexibility of energy systems in absorbing variable renewable energy. The results demonstrate that wind energy can be harvested at maximum levels of 18–19% of annual power demand (approx. 16 TWh/a) without major enhancements in the flexibility of the energy infrastructure. With today's energy demand, the maximum feasible renewable energy share for Finland is around 44–50%, achieved by an optimal mix of different technologies, which promises a 35% reduction in carbon emissions from the 2012 level. Moreover, the Finnish energy system is flexible enough to raise the share of renewables in gross electricity consumption up to 69–72% at maximum. Higher shares of RES call for lower energy consumption (energy efficiency) and more flexibility in balancing energy supply and consumption (e.g. by energy storage). - Highlights: • By hourly analysis, we model the whole energy system of Finland. • With the existing energy infrastructure, RES (renewable energy sources) in primary energy cannot go beyond 50%.

  13. Maximum entropy analysis of liquid diffraction data

    International Nuclear Information System (INIS)

    Root, J.H.; Egelstaff, P.A.; Nickel, B.G.

    1986-01-01

    A maximum entropy method for reducing truncation effects in the inverse Fourier transform of structure factor, S(q), to pair correlation function, g(r), is described. The advantages and limitations of the method are explored with the PY hard sphere structure factor as model input data. An example using real data on liquid chlorine, is then presented. It is seen that spurious structure is greatly reduced in comparison to traditional Fourier transform methods. (author)
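
    For orientation (standard liquid-state relation, not quoted from the paper), the truncated inverse transform in question is:

```latex
% Standard inversion of the structure factor S(q) to the pair correlation
% function g(r) for a liquid of number density \rho; experimentally the
% integral is cut off at the largest measured q, which is the truncation
% effect the maximum entropy method is designed to mitigate.
g(r) = 1 + \frac{1}{2\pi^{2}\rho\, r}\int_{0}^{q_{\max}} q\,[S(q)-1]\,\sin(qr)\,\mathrm{d}q .
```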

  14. A Maximum Resonant Set of Polyomino Graphs

    Directory of Open Access Journals (Sweden)

    Zhang Heping

    2016-05-01

    Full Text Available A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.

  15. Automatic maximum entropy spectral reconstruction in NMR

    International Nuclear Information System (INIS)

    Mobli, Mehdi; Maciejewski, Mark W.; Gryk, Michael R.; Hoch, Jeffrey C.

    2007-01-01

    Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system

  16. maximum neutron flux at thermal nuclear reactors

    International Nuclear Information System (INIS)

    Strugar, P.

    1968-10-01

    Since actual research reactors are technically complicated and expensive facilities, it is important to achieve savings through appropriate reactor lattice configurations. There are a number of papers, and practical examples of reactors with a central reflector, dealing with spatial distributions of fuel elements that would result in higher neutron flux. A common disadvantage of all these solutions is that the choice of the best one starts from anticipated spatial distributions of fuel elements; the weakness of these approaches is the lack of defined optimization criteria. The direct approach is defined as follows: determine the spatial distribution of fuel concentration starting from the condition of maximum neutron flux while fulfilling the thermal constraints. Thus the problem of determining the maximum neutron flux becomes a variational problem which is beyond the possibilities of classical variational calculus. This variational problem has been successfully solved by applying the maximum principle of Pontrjagin. The optimum distribution of fuel concentration was obtained in explicit analytical form. Thus, the spatial distribution of the neutron flux and the critical dimensions of a quite complex reactor system are calculated in a relatively simple way. In addition to the fact that the results are innovative, this approach is interesting because of the optimization procedure itself [sr]

  17. Direct maximum parsimony phylogeny reconstruction from genotype data

    Directory of Open Access Journals (Sweden)

    Ravi R

    2007-12-01

    Full Text Available Abstract Background Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data are more commonly available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data. Phylogenetic applications for autosomal data must therefore rely on other methods for first computationally inferring haplotypes from genotypes. Results In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Conclusion Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower bound on the number of mutations that the genetic region has undergone.

  18. Comprehensive performance analyses and optimization of the irreversible thermodynamic cycle engines (TCE) under maximum power (MP) and maximum power density (MPD) conditions

    International Nuclear Information System (INIS)

    Gonca, Guven; Sahin, Bahri; Ust, Yasin; Parlak, Adnan

    2015-01-01

    This paper presents comprehensive performance analyses and comparisons for air-standard irreversible thermodynamic cycle engines (TCE) based on the power output, power density, thermal efficiency, maximum dimensionless power output (MP), maximum dimensionless power density (MPD) and maximum thermal efficiency (MEF) criteria. The internal irreversibility of the cycles, occurring during the irreversible adiabatic processes, is accounted for by using isentropic efficiencies of the compression and expansion processes. The performances of the cycles are obtained by using engine design parameters such as the isentropic temperature ratio of the compression process, pressure ratio, stroke ratio, cut-off ratio, Miller cycle ratio, exhaust temperature ratio, cycle temperature ratio and cycle pressure ratio. The effects of engine design parameters on the maximum and optimal performances are investigated. - Highlights: • Performance analyses are conducted for irreversible thermodynamic cycle engines. • Comprehensive computations are performed. • Maximum and optimum performances of the engines are shown. • The effects of design parameters on performance and power density are examined. • The results obtained may serve as guidelines for engine designers
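
    As a hedged reminder (standard definitions from the finite-time thermodynamics literature; the symbols are ours, not the paper's), the two competing objectives are:

```latex
% Power density as commonly used in maximum power density (MPD) analyses:
% the net power P of the cycle divided by its maximum specific volume v_max,
% so that engine size is penalised alongside output.
P_{d} = \frac{P}{v_{\max}},
\qquad
\text{MP criterion: } \max P,
\qquad
\text{MPD criterion: } \max P_{d}.
```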

  19. Maximum entropy method in momentum density reconstruction

    International Nuclear Information System (INIS)

    Dobrzynski, L.; Holas, A.

    1997-01-01

    The Maximum Entropy Method (MEM) is applied to the reconstruction of 3-dimensional electron momentum density distributions observed through a set of Compton profiles measured along various crystallographic directions. It is shown that the reconstruction of the electron momentum density may be reliably carried out with the aid of a simple iterative algorithm suggested originally by Collins. A number of distributions have been simulated in order to check the performance of MEM. It is shown that MEM can be recommended as a model-free approach. (author). 13 refs, 1 fig

  20. On the maximum drawdown during speculative bubbles

    Science.gov (United States)

    Rotundo, Giulia; Navarra, Mauro

    2007-08-01

    A taxonomy of large financial crashes proposed in the literature locates the burst of speculative bubbles due to endogenous causes in the framework of extreme stock market crashes, defined as falls of market prices that are outliers with respect to the bulk of the drawdown distribution of price movements. This paper goes deeper into the analysis, providing a further characterization of the rising part of such selected bubbles through the examination of the drawdown and maximum drawdown movements of index prices. The analysis of drawdown duration is also performed, and it is the core of the risk measure estimated here.
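
    A minimal sketch (ours, not the authors') of how the maximum drawdown of a price series is computed, as the largest peak-to-trough decline relative to the running peak:

```python
# Maximum drawdown: largest fractional fall from a running peak to a later price.
def max_drawdown(prices):
    peak = prices[0]
    worst = 0.0
    for p in prices:
        peak = max(peak, p)                  # running maximum so far
        worst = max(worst, (peak - p) / peak)
    return worst


if __name__ == "__main__":
    index = [100, 120, 90, 95, 130, 80, 140]
    print(max_drawdown(index))               # 0.3846... (the 130 -> 80 decline)
```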

  1. Multi-Channel Maximum Likelihood Pitch Estimation

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2012-01-01

    In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence

  2. Conductivity maximum in a charged colloidal suspension

    Energy Technology Data Exchange (ETDEWEB)

    Bastea, S

    2009-01-27

    Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.

  3. Dynamical maximum entropy approach to flocking.

    Science.gov (United States)

    Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M

    2014-04-01

    We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.

  4. Maximum Temperature Detection System for Integrated Circuits

    Science.gov (United States)

    Frankiewicz, Maciej; Kos, Andrzej

    2015-03-01

    The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path and a digital part designed in VHDL. The analogue parts of the circuit were designed with a full-custom technique. The system is a part of a temperature-controlled oscillator circuit - a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated to thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.

  5. Maximum entropy PDF projection: A review

    Science.gov (United States)

    Baggenstoss, Paul M.

    2017-06-01

    We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.

  6. Multiperiod Maximum Loss is time unit invariant.

    Science.gov (United States)

    Kovacevic, Raimund M; Breuer, Thomas

    2016-01-01

    Time unit invariance is introduced as an additional requirement for multiperiod risk measures: for a constant portfolio under an i.i.d. risk factor process, the multiperiod risk should equal the one period risk of the aggregated loss, for an appropriate choice of parameters and independent of the portfolio and its distribution. Multiperiod Maximum Loss over a sequence of Kullback-Leibler balls is time unit invariant. This is also the case for the entropic risk measure. On the other hand, multiperiod Value at Risk and multiperiod Expected Shortfall are not time unit invariant.

  7. Maximum a posteriori decoder for digital communications

    Science.gov (United States)

    Altes, Richard A. (Inventor)

    1997-01-01

    A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.

  8. Improved Maximum Parsimony Models for Phylogenetic Networks.

    Science.gov (United States)

    Van Iersel, Leo; Jones, Mark; Scornavacca, Celine

    2018-05-01

    Phylogenetic networks are well suited to represent evolutionary histories comprising reticulate evolution. Several methods aiming at reconstructing explicit phylogenetic networks have been developed in the last two decades. In this article, we propose a new definition of maximum parsimony for phylogenetic networks that permits to model biological scenarios that cannot be modeled by the definitions currently present in the literature (namely, the "hardwired" and "softwired" parsimony). Building on this new definition, we provide several algorithmic results that lay the foundations for new parsimony-based methods for phylogenetic network reconstruction.

  9. Ancestral sequence reconstruction with Maximum Parsimony

    OpenAIRE

    Herbst, Lina; Fischer, Mareike

    2017-01-01

    One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference as well as for ancestral sequence inference is Maximum Parsimony (...

  10. Artificial Neural Network In Maximum Power Point Tracking Algorithm Of Photovoltaic Systems

    Directory of Open Access Journals (Sweden)

    Modestas Pikutis

    2014-05-01

    Full Text Available Scientists are constantly looking for ways to improve the efficiency of solar cells. The efficiency of solar cells available to the general public is up to 20%. Part of the solar energy remains unused and the capacity of a solar power plant is significantly reduced if a slow controller, or a controller that cannot stay at the maximum power point of the solar modules, is used. Various maximum power point tracking algorithms have been created, but most of them are slow or make mistakes. In the literature, artificial neural networks (ANN) are mentioned more and more often for the maximum power point tracking process, in order to improve the performance of the controller. A self-learning artificial neural network and the IncCond algorithm were used for maximum power point tracking in the created solar power plant model. The control algorithm was created. The solar power plant model is implemented in the Matlab/Simulink environment.
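
    A hedged sketch of the incremental conductance (IncCond) rule mentioned above: at the maximum power point dP/dV = 0, equivalently dI/dV = -I/V. The step size and the toy PV curve below are assumptions for illustration, not the authors' Matlab/Simulink model.

```python
# Incremental conductance (IncCond) step: compare dI/dV against -I/V.
def inccond_step(v, i, v_prev, i_prev, dv_step=0.5):
    """Return the next reference voltage according to the IncCond criterion."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0:                              # voltage unchanged: react to current
        if di == 0:
            return v                         # at the MPP, hold
        return v + dv_step if di > 0 else v - dv_step
    if di / dv == -i / v:
        return v                             # dP/dV = 0: at the MPP
    if di / dv > -i / v:
        return v + dv_step                   # left of the MPP: increase voltage
    return v - dv_step                       # right of the MPP: decrease voltage


if __name__ == "__main__":
    # Toy PV curve I(V) = 8 - 0.006 V^2 (illustrative only); its MPP is near 21 V.
    curve = lambda v: 8.0 - 0.006 * v ** 2
    v_prev, i_prev, v = 20.0, curve(20.0), 20.5
    for _ in range(100):
        i = curve(v)
        v_next = inccond_step(v, i, v_prev, i_prev)
        v_prev, i_prev, v = v, i, v_next
    print(round(v, 1))                       # settles within one step of the toy MPP
```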

  11. Objective Bayesianism and the Maximum Entropy Principle

    Directory of Open Access Journals (Sweden)

    Jon Williamson

    2013-09-01

    Full Text Available Objective Bayesian epistemology invokes three norms: the strengths of our beliefs should be probabilities; they should be calibrated to our evidence of physical probabilities; and they should otherwise equivocate sufficiently between the basic propositions that we can express. The three norms are sometimes explicated by appealing to the maximum entropy principle, which says that a belief function should be a probability function, from all those that are calibrated to evidence, that has maximum entropy. However, the three norms of objective Bayesianism are usually justified in different ways. In this paper, we show that the three norms can all be subsumed under a single justification in terms of minimising worst-case expected loss. This, in turn, is equivalent to maximising a generalised notion of entropy. We suggest that requiring language invariance, in addition to minimising worst-case expected loss, motivates maximisation of standard entropy as opposed to maximisation of other instances of generalised entropy. Our argument also provides a qualified justification for updating degrees of belief by Bayesian conditionalisation. However, conditional probabilities play a less central part in the objective Bayesian account than they do under the subjective view of Bayesianism, leading to a reduced role for Bayes’ Theorem.

  12. Flow Control in Wells Turbines for Harnessing Maximum Wave Power

    Science.gov (United States)

    Garrido, Aitor J.; Garrido, Izaskun; Otaola, Erlantz; Maseda, Javier

    2018-01-01

    Oceans, and particularly waves, offer a huge potential for energy harnessing all over the world. Nevertheless, the performance of current energy converters does not yet allow us to use the wave energy efficiently. However, new control techniques can improve the efficiency of energy converters. In this sense, the plant sensors play a key role within the control scheme, as necessary tools for parameter measuring and monitoring that are then used as control input variables to the feedback loop. Therefore, the aim of this work is to manage the rotational speed control loop in order to optimize the output power. With the help of outward looking sensors, a Maximum Power Point Tracking (MPPT) technique is employed to maximize the system efficiency. Then, the control decisions are based on the pressure drop measured by pressure sensors located along the turbine. A complete wave-to-wire model is developed so as to validate the performance of the proposed control method. For this purpose, a novel sensor-based flow controller is implemented based on the different measured signals. Thus, the performance of the proposed controller has been analyzed and compared with a case of uncontrolled plant. The simulations demonstrate that the flow control-based MPPT strategy is able to increase the output power, and they confirm both the viability and goodness. PMID:29439408

  13. Flow Control in Wells Turbines for Harnessing Maximum Wave Power.

    Science.gov (United States)

    Lekube, Jon; Garrido, Aitor J; Garrido, Izaskun; Otaola, Erlantz; Maseda, Javier

    2018-02-10

    Oceans, and particularly waves, offer a huge potential for energy harnessing all over the world. Nevertheless, the performance of current energy converters does not yet allow us to use the wave energy efficiently. However, new control techniques can improve the efficiency of energy converters. In this sense, the plant sensors play a key role within the control scheme, as necessary tools for parameter measuring and monitoring that are then used as control input variables to the feedback loop. Therefore, the aim of this work is to manage the rotational speed control loop in order to optimize the output power. With the help of outward looking sensors, a Maximum Power Point Tracking (MPPT) technique is employed to maximize the system efficiency. Then, the control decisions are based on the pressure drop measured by pressure sensors located along the turbine. A complete wave-to-wire model is developed so as to validate the performance of the proposed control method. For this purpose, a novel sensor-based flow controller is implemented based on the different measured signals. Thus, the performance of the proposed controller has been analyzed and compared with a case of uncontrolled plant. The simulations demonstrate that the flow control-based MPPT strategy is able to increase the output power, and they confirm both the viability and goodness.

  14. Stochastic efficiency: five case studies

    International Nuclear Information System (INIS)

    Proesmans, Karel; Broeck, Christian Van den

    2015-01-01

    Stochastic efficiency is evaluated in five case studies: driven Brownian motion, effusion with a thermo-chemical and thermo-velocity gradient, a quantum dot and a model for information to work conversion. The salient features of stochastic efficiency, including the maximum of the large deviation function at the reversible efficiency, are reproduced. The approach to and extrapolation into the asymptotic time regime are documented. (paper)

  15. A novel maximum power point tracking method for PV systems using fuzzy cognitive networks (FCN)

    Energy Technology Data Exchange (ETDEWEB)

    Karlis, A.D. [Electrical Machines Laboratory, Department of Electrical & Computer Engineering, Democritus University of Thrace, V. Sofias 12, 67100 Xanthi (Greece); Kottas, T.L.; Boutalis, Y.S. [Automatic Control Systems Laboratory, Department of Electrical & Computer Engineering, Democritus University of Thrace, V. Sofias 12, 67100 Xanthi (Greece)

    2007-03-15

    Maximum power point trackers (MPPTs) play an important role in photovoltaic (PV) power systems because they maximize the power output from a PV system for a given set of conditions, and therefore maximize the array efficiency. This paper presents a novel MPPT method based on fuzzy cognitive networks (FCN). The new method gives a good maximum power operation of any PV array under different conditions such as changing insolation and temperature. The numerical results show the effectiveness of the proposed algorithm. (author)

  16. Analogue of Pontryagin's maximum principle for multiple integrals minimization problems

    OpenAIRE

    Mikhail, Zelikin

    2016-01-01

    A theorem analogous to Pontryagin's maximum principle for multiple integrals is proved. Unlike the usual maximum principle, the maximum is taken not over all matrices, but only over matrices of rank one. Examples are given.

  17. Lake Basin Fetch and Maximum Length/Width

    Data.gov (United States)

    Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...

  18. Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting.

    Science.gov (United States)

    Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L

    2016-08-01

    This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.

  19. Maximum Profit Configurations of Commercial Engines

    Directory of Open Access Journals (Sweden)

    Yiran Chen

    2011-06-01

    Full Text Available An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by the initial conditions and the inherent characteristics of two subsystems; while the different ways of transfer affect the model with respect to the specific forms of the paths of prices and the instantaneous commodity flow, i.e., the optimal configuration.

  20. The worst case complexity of maximum parsimony.

    Science.gov (United States)

    Carmel, Amir; Musa-Lempel, Noa; Tsur, Dekel; Ziv-Ukelson, Michal

    2014-11-01

    One of the core classical problems in computational biology is that of constructing the most parsimonious phylogenetic tree interpreting an input set of sequences from the genomes of evolutionarily related organisms. We reexamine the classical maximum parsimony (MP) optimization problem for the general (asymmetric) scoring matrix case, where rooted phylogenies are implied, and analyze the worst case bounds of three approaches to MP: The approach of Cavalli-Sforza and Edwards, the approach of Hendy and Penny, and a new agglomerative, "bottom-up" approach we present in this article. We show that the second and third approaches are faster than the first one by a factor of Θ(√n) and Θ(n), respectively, where n is the number of species.

  1. A maximum power point tracking algorithm for buoy-rope-drum wave energy converters

    Science.gov (United States)

    Wang, J. Q.; Zhang, X. C.; Zhou, Y.; Cui, Z. C.; Zhu, L. S.

    2016-08-01

    Maximum power point tracking control is key to improving the energy conversion efficiency of wave energy converters (WEC). This paper presents a novel variable step size Perturb and Observe maximum power point tracking algorithm with a power classification standard for control of a buoy-rope-drum WEC. The algorithm and simulation model of the buoy-rope-drum WEC are presented in detail, as well as simulation experiment results. The results show that the algorithm tracks the maximum power point of the WEC quickly and accurately.
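
    The record describes a variable step size Perturb and Observe (P&O) tracker; the power classification standard it uses is not spelled out in the abstract, so the sketch below falls back on a common stand-in in which the perturbation step is scaled by the observed power change. The function name, gain and step limits are assumptions.

    ```python
    def po_variable_step(v, p, v_prev, p_prev, v_ref, k=0.05, step_min=0.1, step_max=2.0):
        """One iteration of a variable-step Perturb & Observe tracker (illustrative)."""
        dp, dv = p - p_prev, v - v_prev
        step = min(max(k * abs(dp), step_min), step_max)   # bigger steps far from the MPP
        if dp == 0:
            return v_ref                                   # no power change: hold
        if (dp > 0) == (dv > 0):
            return v_ref + step                            # last move helped: keep direction
        return v_ref - step                                # power fell: reverse direction
    ```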

  2. Maximum Power Point Tracking Control of Photovoltaic Systems: A Polynomial Fuzzy Model-Based Approach

    DEFF Research Database (Denmark)

    Rakhshan, Mohsen; Vafamand, Navid; Khooban, Mohammad Hassan

    2018-01-01

    This paper introduces a polynomial fuzzy model (PFM)-based maximum power point tracking (MPPT) control approach to increase the performance and efficiency of the solar photovoltaic (PV) electricity generation. The proposed method relies on a polynomial fuzzy modeling, a polynomial parallel......, a direct maximum power (DMP)-based control structure is considered for MPPT. Using the PFM representation, the DMP-based control structure is formulated in terms of SOS conditions. Unlike the conventional approaches, the proposed approach does not require exploring the maximum power operational point...

  3. Modelling maximum likelihood estimation of availability

    International Nuclear Information System (INIS)

    Waller, R.A.; Tietjen, G.L.; Rock, G.W.

    1975-01-01

    Suppose the performance of a nuclear powered electrical generating power plant is continuously monitored to record the sequence of failures and repairs during sustained operation. The purpose of this study is to assess one method of estimating the performance of the power plant when the measure of performance is availability. That is, we determine the probability that the plant is operational at time t. To study the availability of a power plant, we first assume statistical models for the variables, X and Y, which denote the time-to-failure and the time-to-repair variables, respectively. Once those statistical models are specified, the availability, A(t), can be expressed as a function of some or all of their parameters. Usually those parameters are unknown in practice and so A(t) is unknown. This paper discusses the maximum likelihood estimator of A(t) when the time-to-failure model for X is an exponential density with parameter, lambda, and the time-to-repair model for Y is an exponential density with parameter, theta. Under the assumption of exponential models for X and Y, it follows that the instantaneous availability at time t is A(t)=lambda/(lambda+theta)+theta/(lambda+theta)exp[-[(1/lambda)+(1/theta)]t] with t>0. Also, the steady-state availability is A(infinity)=lambda/(lambda+theta). We use the observations from n failure-repair cycles of the power plant, say X1, X2, ..., Xn, Y1, Y2, ..., Yn, to present the maximum likelihood estimators of A(t) and A(infinity). The exact sampling distributions for those estimators and some statistical properties are discussed before a simulation model is used to determine 95% simulation intervals for A(t). The methodology is applied to two examples which approximate the operating history of two nuclear power plants. (author)
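
    Since the exponential MLEs here are simply the sample means (reading lambda and theta as the mean time-to-failure and mean time-to-repair, which is consistent with the quoted closed form), the plug-in estimate of A(t) is straightforward to compute. The sketch below uses hypothetical cycle data; it does not reproduce the paper's simulation intervals.

    ```python
    import numpy as np

    def availability_mle(x, y, t):
        """Plug-in ML estimate of A(t) and A(infinity) from failure/repair cycle data."""
        lam, theta = np.mean(x), np.mean(y)     # exponential MLEs: sample means (MTTF, MTTR)
        a_inf = lam / (lam + theta)             # steady-state availability
        a_t = a_inf + (theta / (lam + theta)) * np.exp(-(1.0 / lam + 1.0 / theta) * t)
        return a_t, a_inf

    # Hypothetical cycle data (hours): times to failure x_i and times to repair y_i
    x = [120.0, 95.0, 210.0, 160.0]
    y = [8.0, 12.0, 6.0, 10.0]
    print(availability_mle(x, y, t=24.0))
    ```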

  4. Maximum margin semi-supervised learning with irrelevant data.

    Science.gov (United States)

    Yang, Haiqin; Huang, Kaizhu; King, Irwin; Lyu, Michael R

    2015-10-01

    Semi-supervised learning (SSL) is a typical learning paradigm that trains a model from both labeled and unlabeled data. Traditional SSL models usually assume that the unlabeled data are relevant to the labeled data, i.e., that they follow the same distribution as the targeted labeled data. In this paper, we address a different, yet formidable scenario in semi-supervised classification, where the unlabeled data may contain data irrelevant to the labeled data. To tackle this problem, we develop a maximum margin model, named tri-class support vector machine (3C-SVM), to utilize the available training data, while seeking a hyperplane for separating the targeted data well. Our 3C-SVM exhibits several characteristics and advantages. First, it does not need any prior knowledge or explicit assumption on the data relatedness. On the contrary, it can relieve the effect of irrelevant unlabeled data based on the logistic principle and the maximum entropy principle. That is, 3C-SVM approaches an ideal classifier. This classifier relies heavily on labeled data and is confident on the relevant data lying far away from the decision hyperplane, while maximally ignoring the irrelevant data, which are hardly distinguished. Second, a theoretical analysis is provided to prove under what conditions the irrelevant data can help to seek the hyperplane. Third, 3C-SVM is a generalized model that unifies several popular maximum margin models, including standard SVMs, Semi-supervised SVMs (S(3)VMs), and SVMs learned from the universum (U-SVMs), as its special cases. More importantly, we deploy a concave-convex procedure to solve the proposed 3C-SVM, transforming the original mixed integer program to a semi-definite programming relaxation, and finally to a sequence of quadratic programming subproblems, which yields the same worst case time complexity as that of S(3)VMs. Finally, we demonstrate the effectiveness and efficiency of our proposed 3C-SVM through systematic experimental comparisons.

  5. Small scale wind energy harvesting with maximum power tracking

    Directory of Open Access Journals (Sweden)

    Joaquim Azevedo

    2015-07-01

    Full Text Available It is well-known that energy harvesting from wind can be used to power remote monitoring systems. There are several studies that use wind energy in small-scale systems, mainly with vertical axis wind turbines. However, there are very few studies with actual implementations of small wind turbines. This paper compares the performance of horizontal and vertical axis wind turbines for energy harvesting in wireless sensor network applications. The problem with the use of wind energy is that most of the time the wind speed is very low, especially in urban areas. Therefore, this work includes a study on the wind speed distribution in an urban environment and proposes a controller to maximize the energy transfer to the storage systems. The generated power is evaluated by simulation and experimentally for different load and wind conditions. The results demonstrate the increase in efficiency of wind generators that use maximum power transfer tracking, even at low wind speeds.

  6. Radiation pressure acceleration: The factors limiting maximum attainable ion energy

    Energy Technology Data Exchange (ETDEWEB)

    Bulanov, S. S.; Esarey, E.; Schroeder, C. B. [Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Bulanov, S. V. [KPSI, National Institutes for Quantum and Radiological Science and Technology, Kizugawa, Kyoto 619-0215 (Japan); A. M. Prokhorov Institute of General Physics RAS, Moscow 119991 (Russian Federation); Esirkepov, T. Zh.; Kando, M. [KPSI, National Institutes for Quantum and Radiological Science and Technology, Kizugawa, Kyoto 619-0215 (Japan); Pegoraro, F. [Physics Department, University of Pisa and Istituto Nazionale di Ottica, CNR, Pisa 56127 (Italy); Leemans, W. P. [Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Physics Department, University of California, Berkeley, California 94720 (United States)

    2016-05-15

    Radiation pressure acceleration (RPA) is a highly efficient mechanism of laser-driven ion acceleration, with near complete transfer of the laser energy to the ions in the relativistic regime. However, there is a fundamental limit on the maximum attainable ion energy, which is determined by the group velocity of the laser. The tightly focused laser pulses have group velocities smaller than the vacuum light speed, and, since they offer the high intensity needed for the RPA regime, it is plausible that group velocity effects would manifest themselves in the experiments involving tightly focused pulses and thin foils. However, in this case, finite spot size effects are important, and another limiting factor, the transverse expansion of the target, may dominate over the group velocity effect. As the laser pulse diffracts after passing the focus, the target expands accordingly due to the transverse intensity profile of the laser. Due to this expansion, the areal density of the target decreases, making it transparent for radiation and effectively terminating the acceleration. The off-normal incidence of the laser on the target, due either to the experimental setup, or to the deformation of the target, will also lead to establishing a limit on maximum ion energy.

  7. Maximum Correntropy Unscented Kalman Filter for Spacecraft Relative State Estimation

    Directory of Open Access Journals (Sweden)

    Xi Liu

    2016-09-01

    Full Text Available A new algorithm called maximum correntropy unscented Kalman filter (MCUKF) is proposed and applied to relative state estimation in space communication networks. As is well known, the unscented Kalman filter (UKF) provides an efficient tool to solve the non-linear state estimation problem. However, the UKF usually performs well only under Gaussian noise. Its performance may deteriorate substantially in the presence of non-Gaussian noise, especially when the measurements are disturbed by heavy-tailed impulsive noise. By making use of the maximum correntropy criterion (MCC), the proposed algorithm can enhance the robustness of the UKF against impulsive noise. In the MCUKF, the unscented transformation (UT) is applied to obtain a predicted state estimate and covariance matrix, and a nonlinear regression method with the MCC cost is then used to reformulate the measurement information. Finally, the UT is applied to the measurement equation to obtain the filter state and covariance matrix. Illustrative examples demonstrate the superior performance of the new algorithm.
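
    The maximum correntropy criterion (MCC) replaces the quadratic cost with a Gaussian kernel, so samples far from the current estimate receive exponentially small weight. The toy fixed-point iteration below, for a location parameter, is meant only to show that down-weighting effect; it is not the MCUKF, and the kernel bandwidth and data are assumptions.

    ```python
    import numpy as np

    def mcc_location(samples, sigma=1.0, iters=50):
        """Fixed-point maximum-correntropy estimate of a location parameter (toy example)."""
        m = np.median(samples)                                    # robust starting point
        for _ in range(iters):
            w = np.exp(-(samples - m) ** 2 / (2.0 * sigma ** 2))  # Gaussian kernel weights
            m = float(np.sum(w * samples) / np.sum(w))
        return m

    rng = np.random.default_rng(1)
    data = np.concatenate([rng.normal(0.0, 0.5, 200), [25.0, -30.0, 40.0]])  # impulsive outliers
    print(np.mean(data), mcc_location(data))   # the MCC estimate is barely moved by the outliers
    ```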

  8. A maximum power point tracking algorithm for photovoltaic applications

    Science.gov (United States)

    Nelatury, Sudarshan R.; Gray, Robert

    2013-05-01

    The voltage and current characteristic of a photovoltaic (PV) cell is highly nonlinear, and operating a PV cell for maximum power transfer has been a challenge for a long time. Several techniques have been proposed to estimate and track the maximum power point (MPP) in order to improve the overall efficiency of a PV panel. A strategic use of the mean value theorem permits obtaining an analytical expression for a point that lies in a close neighborhood of the true MPP. However, no exact closed-form solution for the MPP has been published to date. This problem can be formulated analytically as a constrained optimization, which can be solved using the Lagrange method. This method results in a system of simultaneous nonlinear equations. Solving them directly is quite difficult; however, a recursive algorithm yields a reasonably good solution. In graphical terms, if the voltage-current characteristic and the constant power contours are plotted on the same voltage-current plane, the point of tangency between the device characteristic and the constant power contours is the sought-for MPP. It is subject to change with the incident irradiation and temperature, and hence the algorithm that attempts to maintain the MPP should be adaptive in nature, with fast convergence and minimal misadjustment. There are two parts to its implementation. First, one needs to estimate the MPP. The second task is to have a DC-DC converter to match the given load to the MPP thus obtained. The availability of power electronics circuits has made it possible to design efficient converters. In this paper, although we do not show results from a real circuit, we use MATLAB to obtain the MPP and a buck-boost converter to match the load. Under varying conditions of load resistance and irradiance, we demonstrate MPP tracking in the case of a commercially available solar panel, the MSX-60. The power electronics circuit is simulated with the PSIM software.
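
    The tangency condition described above amounts to dP/dV = 0, i.e. I + V·dI/dV = 0. A quick numerical illustration of locating that point for a toy exponential I-V curve is sketched below; the curve shape and the roughly MSX-60-like parameters are assumptions for the sketch, not the paper's model or its Lagrange-based recursion.

    ```python
    import numpy as np

    def current(v, isc=3.8, voc=21.1, a=1.6):
        """Toy exponential I-V curve (parameters loosely MSX-60-like, assumed)."""
        return isc * (1.0 - np.exp((v - voc) / a))

    v = np.linspace(0.0, 21.0, 10001)
    p = v * current(v)
    k = int(np.argmax(p))                  # dP/dV = 0: tangency with a constant-power contour
    print(f"V_mpp ~ {v[k]:.2f} V, P_mpp ~ {p[k]:.1f} W")
    ```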

  9. A maximum power point tracking for photovoltaic-SPE system using a maximum current controller

    Energy Technology Data Exchange (ETDEWEB)

    Muhida, Riza [Osaka Univ., Dept. of Physical Science, Toyonaka, Osaka (Japan); Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Park, Minwon; Dakkak, Mohammed; Matsuura, Kenji [Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Tsuyoshi, Akira; Michira, Masakazu [Kobe City College of Technology, Nishi-ku, Kobe (Japan)

    2003-02-01

    Processes to produce hydrogen from solar photovoltaic (PV)-powered water electrolysis using solid polymer electrolysis (SPE) are reported. An alternative maximum power point tracking (MPPT) control for the PV-SPE system, based on a maximum current searching method, has been designed and implemented. Based on the voltage-current characteristics and a theoretical analysis of the SPE, it can be shown that tracking the maximum current output of the DC-DC converter on the SPE side will simultaneously track the maximum power point of the photovoltaic panel. This method uses a proportional-integral controller to control the duty factor of the DC-DC converter with a pulse-width modulator (PWM). The MPPT performance and hydrogen production performance of this method have been evaluated and discussed based on the results of the experiment. (Author)

  10. Energy efficiency

    International Nuclear Information System (INIS)

    2010-01-01

    After a speech by the CEA's (Commissariat a l'Energie Atomique) general administrator about energy efficiency as a first-rank challenge for the planet and for France, this publication proposes several contributions: a discussion of the efficiency of nuclear energy, an economic analysis of R and D's value in the field of fourth generation fast reactors, discussions about biofuels and the relationship between energy efficiency and economic competitiveness, and a discussion about solar photovoltaic efficiency

  11. Maximum wind energy extraction strategies using power electronic converters

    Science.gov (United States)

    Wang, Quincy Qing

    2003-10-01

    This thesis focuses on maximum wind energy extraction strategies for achieving the highest energy output of variable speed wind turbine power generation systems. Power electronic converters and controls provide the basic platform to accomplish the research of this thesis in both hardware and software aspects. In order to send wind energy to a utility grid, a variable speed wind turbine requires a power electronic converter to convert a variable voltage variable frequency source into a fixed voltage fixed frequency supply. Generic single-phase and three-phase converter topologies, converter control methods for wind power generation, as well as the developed direct drive generator, are introduced in the thesis for establishing variable-speed wind energy conversion systems. Variable speed wind power generation system modeling and simulation are essential methods both for understanding the system behavior and for developing advanced system control strategies. Wind generation system components, including wind turbine, 1-phase IGBT inverter, 3-phase IGBT inverter, synchronous generator, and rectifier, are modeled in this thesis using MATLAB/SIMULINK. The simulation results have been verified by a commercial simulation software package, PSIM, and confirmed by field test results. Since the dynamic time constants for these individual models are much different, a creative approach has also been developed in this thesis to combine these models for entire wind power generation system simulation. An advanced maximum wind energy extraction strategy relies not only on proper system hardware design, but also on sophisticated software control algorithms. Based on literature review and computer simulation on wind turbine control algorithms, an intelligent maximum wind energy extraction control algorithm is proposed in this thesis. This algorithm has a unique on-line adaptation and optimization capability, which is able to achieve maximum wind energy conversion efficiency through

  12. Maximum mass of magnetic white dwarfs

    International Nuclear Information System (INIS)

    Paret, Daryel Manreza; Horvath, Jorge Ernesto; Martínez, Aurora Perez

    2015-01-01

    We revisit the problem of the maximum masses of magnetized white dwarfs (WDs). The impact of a strong magnetic field on the structure equations is addressed. The pressures become anisotropic due to the presence of the magnetic field and split into parallel and perpendicular components. We first construct stable solutions of the Tolman-Oppenheimer-Volkoff equations for parallel pressures and find that physical solutions vanish for the perpendicular pressure when B ≳ 10^13 G. This fact establishes an upper bound for a magnetic field and the stability of the configurations in the (quasi) spherical approximation. Our findings also indicate that it is not possible to obtain stable magnetized WDs with super-Chandrasekhar masses because the values of the magnetic field needed for them are higher than this bound. To proceed into the anisotropic regime, we can apply results for structure equations appropriate for a cylindrical metric with anisotropic pressures that were derived in our previous work. From the solutions of the structure equations in cylindrical symmetry we have confirmed the same bound for B ∼ 10^13 G, since beyond this value no physical solutions are possible. Our tentative conclusion is that massive WDs with masses well beyond the Chandrasekhar limit do not constitute stable solutions and should not exist. (paper)

  13. TRENDS IN ESTIMATED MIXING DEPTH DAILY MAXIMUMS

    Energy Technology Data Exchange (ETDEWEB)

    Buckley, R; Amy DuPont, A; Robert Kurzeja, R; Matt Parker, M

    2007-11-12

    Mixing depth is an important quantity in the determination of air pollution concentrations. Fire weather forecasts depend strongly on estimates of the mixing depth as a means of determining the altitude and dilution (ventilation rates) of smoke plumes. The Savannah River United States Forest Service (USFS) routinely conducts prescribed fires at the Savannah River Site (SRS), a heavily wooded Department of Energy (DOE) facility located in southwest South Carolina. For many years, the Savannah River National Laboratory (SRNL) has provided forecasts of weather conditions in support of the fire program, including an estimated mixing depth using potential temperature and turbulence change with height at a given location. This paper examines trends in the average estimated mixing depth daily maximum at the SRS over an extended period of time (4.75 years) derived from numerical atmospheric simulations using two versions of the Regional Atmospheric Modeling System (RAMS). This allows for differences to be seen between the model versions, as well as trends on a multi-year time frame. In addition, comparisons of predicted mixing depth for individual days in which special balloon soundings were released are also discussed.

  14. Mammographic image restoration using maximum entropy deconvolution

    International Nuclear Information System (INIS)

    Jannetta, A; Jackson, J C; Kotre, C J; Birch, I P; Robson, K J; Padgett, R

    2004-01-01

    An image restoration approach based on a Bayesian maximum entropy method (MEM) has been applied to a radiological image deconvolution problem, that of reduction of geometric blurring in magnification mammography. The aim of the work is to demonstrate an improvement in image spatial resolution in realistic noisy radiological images with no associated penalty in terms of reduction in the signal-to-noise ratio perceived by the observer. Images of the TORMAM mammographic image quality phantom were recorded using the standard magnification settings of 1.8 magnification/fine focus and also at 1.8 magnification/broad focus and 3.0 magnification/fine focus; the latter two arrangements would normally give rise to unacceptable geometric blurring. Measured point-spread functions were used in conjunction with the MEM image processing to de-blur these images. The results are presented as comparative images of phantom test features and as observer scores for the raw and processed images. Visualization of high resolution features and the total image scores for the test phantom were improved by the application of the MEM processing. It is argued that this successful demonstration of image de-blurring in noisy radiological images offers the possibility of weakening the link between focal spot size and geometric blurring in radiology, thus opening up new approaches to system optimization

  15. Maximum Margin Clustering of Hyperspectral Data

    Science.gov (United States)

    Niazmardi, S.; Safari, A.; Homayouni, S.

    2013-09-01

    In recent decades, large margin methods such as Support Vector Machines (SVMs) have been considered the state of the art among supervised learning methods for the classification of hyperspectral data. However, the results of these algorithms mainly depend on the quality and quantity of available training data. To tackle the problems associated with the training data, researchers have put effort into extending the capability of large margin algorithms to unsupervised learning. One of the recently proposed algorithms is Maximum Margin Clustering (MMC). The MMC is an unsupervised SVM algorithm that simultaneously estimates both the labels and the hyperplane parameters. Nevertheless, the optimization of the MMC algorithm is a non-convex problem. Most of the existing MMC methods rely on reformulating and relaxing the non-convex optimization problem as semi-definite programs (SDP), which are computationally very expensive and can only handle small data sets. Moreover, most of these algorithms are limited to two-class classification, which cannot be used for classification of remotely sensed data. In this paper, a new MMC algorithm is used that solves the original non-convex problem using an alternating optimization method. This algorithm is also extended for multi-class classification and its performance is evaluated. The results show that the proposed algorithm performs acceptably for hyperspectral data clustering.

  16. Paving the road to maximum productivity.

    Science.gov (United States)

    Holland, C

    1998-01-01

    "Job security" is an oxymoron in today's environment of downsizing, mergers, and acquisitions. Workers find themselves living by new rules in the workplace that they may not understand. How do we cope? It is the leader's charge to take advantage of this chaos and create conditions under which his or her people can understand the need for change and come together with a shared purpose to effect that change. The clinical laboratory at Arkansas Children's Hospital has taken advantage of this chaos to down-size and to redesign how the work gets done to pave the road to maximum productivity. After initial hourly cutbacks, the workers accepted the cold, hard fact that they would never get their old world back. They set goals to proactively shape their new world through reorganizing, flexing staff with workload, creating a rapid response laboratory, exploiting information technology, and outsourcing. Today the laboratory is a lean, productive machine that accepts change as a way of life. We have learned to adapt, trust, and support each other as we have journeyed together over the rough roads. We are looking forward to paving a new fork in the road to the future.

  17. Maximum power flux of auroral kilometric radiation

    International Nuclear Information System (INIS)

    Benson, R.F.; Fainberg, J.

    1991-01-01

    The maximum auroral kilometric radiation (AKR) power flux observed by distant satellites has been increased by more than a factor of 10 from previously reported values. This increase has been achieved by a new data selection criterion and a new analysis of antenna spin modulated signals received by the radio astronomy instrument on ISEE 3. The method relies on selecting AKR events containing signals in the highest-frequency channel (1980 kHz), followed by a careful analysis that effectively increased the instrumental dynamic range by more than 20 dB by making use of the spacecraft antenna gain diagram during a spacecraft rotation. This analysis has allowed the separation of real signals from those created in the receiver by overloading. Many signals having the appearance of AKR harmonic signals were shown to be of spurious origin. During one event, however, real second harmonic AKR signals were detected even though the spacecraft was at a great distance (17 R_E) from Earth. During another event, when the spacecraft was at the orbital distance of the Moon and on the morning side of Earth, the power flux of fundamental AKR was greater than 3 x 10^-13 W m^-2 Hz^-1 at 360 kHz, normalized to a radial distance r of 25 R_E assuming the power falls off as r^-2. A comparison of these intense signal levels with the most intense source region values (obtained by ISIS 1 and Viking) suggests that multiple sources were observed by ISEE 3

  18. Maximum likelihood window for time delay estimation

    International Nuclear Information System (INIS)

    Lee, Young Sup; Yoon, Dong Jin; Kim, Chi Yup

    2004-01-01

    Time delay estimation for the detection of leak location in underground pipelines is critically important. Because the exact leak location depends upon the precision of the time delay between sensor signals due to leak noise and the speed of elastic waves, the research on the estimation of time delay has been one of the key issues in leak locating with the time arrival difference method. In this study, an optimal Maximum Likelihood window is considered to obtain a better estimation of the time delay. This method has been validated in experiments, where it provides much clearer and more precise peaks in cross-correlation functions of leak signals. The leak location error has been less than 1 % of the distance between sensors, for example the error was not greater than 3 m for 300 m long underground pipelines. Apart from the experiment, an intensive theoretical analysis in terms of signal processing has been described. The improved leak locating with the suggested method is due to the windowing effect in the frequency domain, which applies a weighting to the significant frequencies.
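
    The baseline delay estimate that this method refines is the lag of the cross-correlation maximum between the two sensor signals, which together with the wave speed gives the leak position on the pipe. The sketch below shows only that baseline step; the maximum likelihood windowing proposed in the paper is not reproduced, and the signal names, sampling rate and geometry are illustrative assumptions.

    ```python
    import numpy as np

    def delay_and_location(s1, s2, fs, pipe_length, wave_speed):
        """Estimate the inter-sensor delay and the leak distance from sensor 1."""
        xcorr = np.correlate(s2 - np.mean(s2), s1 - np.mean(s1), mode="full")
        lag = int(np.argmax(xcorr)) - (len(s1) - 1)   # lag > 0: s2 arrives later than s1
        tau = lag / fs                                # estimated time delay (s)
        d1 = 0.5 * (pipe_length - wave_speed * tau)   # leak distance from sensor 1 (m)
        return tau, d1

    # Synthetic check: a noise burst reaching sensor 2 five samples later than sensor 1
    rng = np.random.default_rng(0)
    burst = rng.normal(size=2000)
    s1 = np.roll(burst, 100)
    s2 = np.roll(burst, 105)
    print(delay_and_location(s1, s2, fs=1000.0, pipe_length=300.0, wave_speed=1200.0))
    ```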

  19. Ancestral Sequence Reconstruction with Maximum Parsimony.

    Science.gov (United States)

    Herbst, Lina; Fischer, Mareike

    2017-12-01

    One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference and for ancestral sequence inference is Maximum Parsimony (MP). In this manuscript, we focus on this method and on ancestral state inference for fully bifurcating trees. In particular, we investigate a conjecture published by Charleston and Steel in 1995 concerning the number of species which need to have a particular state, say a, at a particular site in order for MP to unambiguously return a as an estimate for the state of the last common ancestor. We prove the conjecture for all even numbers of character states, which is the most relevant case in biology. We also show that the conjecture does not hold in general for odd numbers of character states, but also present some positive results for this case.
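
    For a single site on a fully bifurcating tree, the standard bottom-up (Fitch) pass computes the MP state set at each internal node: the intersection of the child sets when it is non-empty, otherwise their union with one substitution charged. When the root set is a singleton, MP unambiguously returns that state, which is the situation the conjecture above concerns. The sketch below uses an illustrative nested-tuple tree encoding, not the paper's notation.

    ```python
    def fitch(node):
        """Bottom-up Fitch pass: return (MP state set, substitution count) for a subtree."""
        if isinstance(node, str):          # leaf: its observed character state, e.g. "a"
            return {node}, 0
        left, right = node
        s1, c1 = fitch(left)
        s2, c2 = fitch(right)
        if s1 & s2:
            return s1 & s2, c1 + c2        # intersection non-empty: no extra substitution
        return s1 | s2, c1 + c2 + 1        # otherwise take the union and charge one change

    # Example: ((a,a),(a,c)) -> root set {'a'}, so MP unambiguously estimates state 'a'
    print(fitch((("a", "a"), ("a", "c"))))
    ```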

  20. 49 CFR 230.24 - Maximum allowable stress.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 4 2010-10-01 2010-10-01 false Maximum allowable stress. 230.24 Section 230.24... Allowable Stress § 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...

  1. 20 CFR 226.52 - Total annuity subject to maximum.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Total annuity subject to maximum. 226.52... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Railroad Retirement Family Maximum § 226.52 Total annuity subject to maximum. The total annuity amount which is compared to the maximum monthly amount to...

  2. Nanocrystalline dye-sensitized solar cells having maximum performance

    Energy Technology Data Exchange (ETDEWEB)

    Kroon, M.; Bakker, N.J.; Smit, H.J.P. [ECN Solar Energy, Petten (Netherlands); Liska, P.; Thampi, K.R.; Wang, P.; Zakeeruddin, S.M.; Graetzel, M. [LPI-ISIC, Ecole Polytechnique Federale de Lausanne EPFL, Station 6, CH-1015 Lausanne (Switzerland); Hinsch, A. [Fraunhofer Institute for Solar Energy Systems ISE, Heidenhofstr.2, D-79110 Freiburg (Germany); Hore, S.; Wuerfel, U.; Sastrawan, R. [Freiburg Materials Research Centre FMF, Stefan-Meier Str. 21, 79104 Freiburg (Germany); Durrant, J.R.; Palomares, E. [Centre for Electronic Materials and Devices, Department of Chemistry, Imperial College London, Exhibition road SW7 2AY (United Kingdom); Pettersson, H.; Gruszecki, T. [IVF Industrial Research and Development Corporation, Argongatan 30, SE-431 53 Moelndal (Sweden); Walter, J.; Skupien, K. [Cracow University of Technology CUTECH, Jana Pawla II 37, 31-864 Cracow (Poland); Tulloch, G.E. [Greatcell Solar SA GSA, Ave Henry-Warnery 4, 1006 Lausanne (Switzerland)

    2007-01-15

    This paper presents an overview of the research carried out by a European consortium with the aim to develop and test new and improved ways to realise dye-sensitized solar cells (DSC) with enhanced efficiencies and stabilities. Several new areas have been explored in the field of new concepts and materials, fabrication protocols for TiO2 and scatterlayers, metal oxide blocking layers, strategies for co-sensitization and low temperature processes of platinum deposition. Fundamental understanding of the working principles has been gained by means of electrical and optical modelling and advanced characterization techniques. Cost analyses have been made to demonstrate the potential of DSC as a low cost thin film PV technology. The combined efforts have led to maximum non-certified power conversion efficiencies under full sunlight of 11% for areas <0.2 cm² and 10.1% for a cell with an active area of 1.3 cm². Lifetime studies revealed negligible device degradation after 1000 hrs of accelerated tests under thermal stress at 80 °C in the dark and visible light soaking at 60 °C. An outlook summarizing future directions in the research and large-scale production of DSC is presented.

  3. Mixed integer linear programming for maximum-parsimony phylogeny inference.

    Science.gov (United States)

    Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell

    2008-01-01

    Reconstruction of phylogenetic trees is a fundamental problem in computational biology. While excellent heuristic methods are available for many variants of this problem, new advances in phylogeny inference will be required if we are to be able to continue to make effective use of the rapidly growing stores of variation data now being gathered. In this paper, we present two integer linear programming (ILP) formulations to find the most parsimonious phylogenetic tree from a set of binary variation data. One method uses a flow-based formulation that can produce exponential numbers of variables and constraints in the worst case. The method has, however, proven extremely efficient in practice on datasets that are well beyond the reach of the available provably efficient methods, solving several large mtDNA and Y-chromosome instances within a few seconds and giving provably optimal results in times competitive with fast heuristics that cannot guarantee optimality. An alternative formulation establishes that the problem can be solved with a polynomial-sized ILP. We further present a web server developed based on the exponential-sized ILP that performs fast maximum parsimony inferences and serves as a front end to a database of precomputed phylogenies spanning the human genome.

  4. Half-width at half-maximum, full-width at half-maximum analysis

    Indian Academy of Sciences (India)

    addition to the well-defined parameter full-width at half-maximum (FWHM). The distribution of ... optical side-lobes in the diffraction pattern resulting in steep central maxima [6], reduction of effects of ... and broad central peak. The idea of.

  5. Examination of Maximum Power Point Tracking on the EV for Installing on Windmill

    OpenAIRE

    雪田, 和人; 細江, 忠司; 小田切, 雄也; 後藤, 泰之; 一柳, 勝宏

    2006-01-01

    This paper proposes operating a wind generator system with higher efficiency by using wind collection equipment and Maximum Power Point Tracking. As an example application, its use for the regeneration of an electric vehicle was proposed. An efficiency improvement of the electric vehicle can be expected by introducing the proposed system in addition to the conventional regeneration. A field experiment was carried out in order to measure the effect. Regeneration energy by pro...

  6. Accelerated maximum likelihood parameter estimation for stochastic biochemical systems

    Directory of Open Access Journals (Sweden)

    Daigle Bernie J

    2012-05-01

    Full Text Available Abstract Background A prerequisite for the mechanistic simulation of a biochemical system is detailed knowledge of its kinetic parameters. Despite recent experimental advances, the estimation of unknown parameter values from observed data is still a bottleneck for obtaining accurate simulation results. Many methods exist for parameter estimation in deterministic biochemical systems; methods for discrete stochastic systems are less well developed. Given the probabilistic nature of stochastic biochemical models, a natural approach is to choose parameter values that maximize the probability of the observed data with respect to the unknown parameters, a.k.a. the maximum likelihood parameter estimates (MLEs). MLE computation for all but the simplest models requires the simulation of many system trajectories that are consistent with experimental data. For models with unknown parameters, this presents a computational challenge, as the generation of consistent trajectories can be an extremely rare occurrence. Results We have developed Monte Carlo Expectation-Maximization with Modified Cross-Entropy Method (MCEM2): an accelerated method for calculating MLEs that combines advances in rare event simulation with a computationally efficient version of the Monte Carlo expectation-maximization (MCEM) algorithm. Our method requires no prior knowledge regarding parameter values, and it automatically provides a multivariate parameter uncertainty estimate. We applied the method to five stochastic systems of increasing complexity, progressing from an analytically tractable pure-birth model to a computationally demanding model of yeast-polarization. Our results demonstrate that MCEM2 substantially accelerates MLE computation on all tested models when compared to a stand-alone version of MCEM. Additionally, we show how our method identifies parameter values for certain classes of models more accurately than two recently proposed computationally efficient methods

  7. Assessment of maximum available work of a hydrogen fueled compression ignition engine using exergy analysis

    International Nuclear Information System (INIS)

    Chintala, Venkateswarlu; Subramanian, K.A.

    2014-01-01

    This work is aimed at the study of maximum available work and irreversibility (mixing, combustion, unburned, and friction) of a dual-fuel diesel engine (H2 (hydrogen)–diesel) using exergy analysis. The maximum available work increased with H2 addition due to reduction in irreversibility of combustion because of less entropy generation. The irreversibility of unburned fuel with the H2 fuel also decreased due to the engine combustion with high temperature whereas there is no effect of H2 on mixing and friction irreversibility. The maximum available work of the diesel engine at rated load increased from 29% with conventional base mode (without H2) to 31.7% with dual-fuel mode (18% H2 energy share) whereas total irreversibility of the engine decreased drastically from 41.2% to 39.3%. The energy efficiency of the engine with H2 increased about 10% with 36% reduction in CO2 emission. The developed methodology could also be applicable to find the effect and scope of different technologies including exhaust gas recirculation and turbo charging on maximum available work and energy efficiency of diesel engines. - Highlights: • Energy efficiency of diesel engine increases with hydrogen under dual-fuel mode. • Maximum available work of the engine increases significantly with hydrogen. • Combustion and unburned fuel irreversibility decrease with hydrogen. • No significant effect of hydrogen on mixing and friction irreversibility. • Reduction in CO2 emission along with HC, CO and smoke emissions

  8. Efficiency Evaluation of Energy Systems

    CERN Document Server

    Kanoğlu, Mehmet; Dinçer, İbrahim

    2012-01-01

    Efficiency is one of the most frequently used terms in thermodynamics, and it indicates how well an energy conversion or process is accomplished. Efficiency is also one of the most frequently misused terms in thermodynamics and is often a source of misunderstanding. This is because efficiency is often used without being properly defined first. This book intends to provide a comprehensive evaluation of various efficiencies used for energy transfer and conversion systems including steady-flow energy devices (turbines, compressors, pumps, nozzles, heat exchangers, etc.), various power plants, cogeneration plants, and refrigeration systems. The book will cover first-law (energy based) and second-law (exergy based) efficiencies and provide a comprehensive understanding of their implications. It will help minimize the widespread misuse of efficiencies among students and researchers in energy field by using an intuitive and unified approach for defining efficiencies. The book will be particularly useful for a clear ...

  9. The maximum contraceptive prevalence 'demand curve': guiding discussions on programmatic investments.

    Science.gov (United States)

    Weinberger, Michelle; Sonneveldt, Emily; Stover, John

    2017-12-22

    Most frameworks for family planning include both access and demand interventions. Understanding how these two are linked and when each should be prioritized is difficult. The maximum contraceptive prevalence 'demand curve' was created based on a relationship between the modern contraceptive prevalence rate (mCPR) and mean ideal number of children to allow for a quantitative assessment of the balance between access and demand interventions. The curve represents the maximum mCPR that is likely to be seen given fertility intentions and related norms and constructs that influence contraceptive use. The gap between a country's mCPR and this maximum is referred to as the 'potential use gap.' This concept can be used by countries to prioritize access investments where the gap is large, and discuss implications for future contraceptive use where the gap is small. It is also used within the FP Goals model to ensure mCPR growth from access interventions does not exceed available demand.

  10. Maximum power point tracking for photovoltaic solar pump based on ANFIS tuning system

    Directory of Open Access Journals (Sweden)

    S. Shabaan

    2018-05-01

    Full Text Available Solar photovoltaic (PV) systems are a clean and naturally replenished energy source. PV panels have a unique point which represents the maximum available power, and this point depends on environmental conditions such as temperature and irradiance. Maximum power point tracking (MPPT) is therefore necessary for maximum efficiency. In this paper, a study of MPPT for a PV water pumping system based on an adaptive neuro-fuzzy inference system (ANFIS) is discussed. A comparison between the performance of the system with and without MPPT is carried out under varying irradiation and temperature conditions. The ANFIS-based controller shows a fast response with high efficiency at all irradiance and temperature levels, making it a powerful technique for non-linear systems such as PV modules. Keywords: MPPT, ANFIS, Boost converter, PMDC pump

  11. A maximum power point tracking scheme for a 1kw stand-alone ...

    African Journals Online (AJOL)

    A maximum power point tracking scheme for a 1kw stand-alone solar energy based power supply. ... Nigerian Journal of Technology ... A method for efficiently maximizing the output power of a solar panel supplying a load or battery bus under ...

  12. Maximum likelihood estimation for Cox's regression model under nested case-control sampling

    DEFF Research Database (Denmark)

    Scheike, Thomas; Juul, Anders

    2004-01-01

    Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazard...

  13. The maximum theoretical performance of unconcentrated solar photovoltaic and thermoelectric generator systems

    DEFF Research Database (Denmark)

    Bjørk, Rasmus; Nielsen, Kaspar Kirstein

    2017-01-01

    The maximum efficiency for photovoltaic (PV) and thermoelectric generator (TEG) systems without concentration is investigated. Both a combined system where the TEG is mounted directly on the back of the PV and a tandem system where the incoming sunlight is split, and the short wavelength radiation...

  14. Maximum entropy production rate in quantum thermodynamics

    Energy Technology Data Exchange (ETDEWEB)

    Beretta, Gian Paolo, E-mail: beretta@ing.unibs.i [Universita di Brescia, via Branze 38, 25123 Brescia (Italy)

    2010-06-01

    In the framework of the recent quest for well-behaved nonlinear extensions of the traditional Schroedinger-von Neumann unitary dynamics that could provide fundamental explanations of recent experimental evidence of loss of quantum coherence at the microscopic level, a recent paper [Gheorghiu-Svirschevski 2001 Phys. Rev. A 63 054102] reproposes the nonlinear equation of motion proposed by the present author [see Beretta G P 1987 Found. Phys. 17 365 and references therein] for quantum (thermo)dynamics of a single isolated indivisible constituent system, such as a single particle, qubit, qudit, spin or atomic system, or a Bose-Einstein or Fermi-Dirac field. As already proved, such nonlinear dynamics entails a fundamental unifying microscopic proof and extension of Onsager's reciprocity and Callen's fluctuation-dissipation relations to all nonequilibrium states, close and far from thermodynamic equilibrium. In this paper we propose a brief but self-contained review of the main results already proved, including the explicit geometrical construction of the equation of motion from the steepest-entropy-ascent ansatz and its exact mathematical and conceptual equivalence with the maximal-entropy-generation variational-principle formulation presented in Gheorghiu-Svirschevski S 2001 Phys. Rev. A 63 022105. Moreover, we show how it can be extended to the case of a composite system to obtain the general form of the equation of motion, consistent with the demanding requirements of strong separability and of compatibility with general thermodynamics principles. The irreversible term in the equation of motion describes the spontaneous attraction of the state operator in the direction of steepest entropy ascent, thus implementing the maximum entropy production principle in quantum theory. The time rate at which the path of steepest entropy ascent is followed has so far been left unspecified. As a step towards the identification of such rate, here we propose a possible

  15. Determination of the maximum-depth to potential field sources by a maximum structural index method

    Science.gov (United States)

    Fedi, M.; Florio, G.

    2013-01-01

    A simple and fast determination of the limiting depth to the sources may represent a significant help to the data interpretation. To this end we explore the possibility of determining those source parameters shared by all the classes of models fitting the data. One approach is to determine the maximum depth-to-source compatible with the measured data, by using for example the well-known Bott-Smith rules. These rules involve only the knowledge of the field and its horizontal gradient maxima, and are independent from the density contrast. Thanks to the direct relationship between structural index and depth to sources we work out a simple and fast strategy to obtain the maximum depth by using the semi-automated methods, such as Euler deconvolution or depth-from-extreme-points method (DEXP). The proposed method consists in estimating the maximum depth as the one obtained for the highest allowable value of the structural index (Nmax). Nmax may be easily determined, since it depends only on the dimensionality of the problem (2D/3D) and on the nature of the analyzed field (e.g., gravity field or magnetic field). We tested our approach on synthetic models against the results obtained by the classical Bott-Smith formulas and the results are in fact very similar, confirming the validity of this method. However, while Bott-Smith formulas are restricted to the gravity field only, our method is applicable also to the magnetic field and to any derivative of the gravity and magnetic field. Our method yields a useful criterion to assess the source model based on the (∂f/∂x)max/fmax ratio. The usefulness of the method in real cases is demonstrated for a salt wall in the Mississippi basin, where the estimation of the maximum depth agrees with the seismic information.

  16. Reconstructing phylogenetic networks using maximum parsimony.

    Science.gov (United States)

    Nakhleh, Luay; Jin, Guohua; Zhao, Fengmei; Mellor-Crummey, John

    2005-01-01

    Phylogenies - the evolutionary histories of groups of organisms - are one of the most widely used tools throughout the life sciences, as well as objects of research within systematics, evolutionary biology, epidemiology, etc. Almost every tool devised to date to reconstruct phylogenies produces trees; yet it is widely understood and accepted that trees oversimplify the evolutionary histories of many groups of organisms, most prominently bacteria (because of horizontal gene transfer) and plants (because of hybrid speciation). Various methods and criteria have been introduced for phylogenetic tree reconstruction. Parsimony is one of the most widely used and studied criteria, and various accurate and efficient heuristics for reconstructing trees based on parsimony have been devised. Jotun Hein suggested a straightforward extension of the parsimony criterion to phylogenetic networks. In this paper we formalize this concept, and provide the first experimental study of the quality of parsimony as a criterion for constructing and evaluating phylogenetic networks. Our results show that, when extended to phylogenetic networks, the parsimony criterion produces promising results. In a great majority of the cases in our experiments, the parsimony criterion accurately predicts the numbers and placements of non-tree events.

  17. PARTICLE SWARM OPTIMIZATION BASED MAXIMUM PHOTOVOLTAIC POWER TRACKING UNDER DIFFERENT CONDITIONS

    Directory of Open Access Journals (Sweden)

    Y. Labbi

    2015-08-01

    Full Text Available Photovoltaic electricity is seen as an important source of renewable energy. The photovoltaic array is an unstable source of power since the peak power point depends on the temperature and the irradiation level. Maximum power point tracking is therefore necessary for maximum efficiency. In this work, a Particle Swarm Optimization (PSO) algorithm is proposed as a maximum power point tracker for a photovoltaic panel; it is used to locate the optimal MPP so that the maximum solar panel power is generated under different operating conditions. A photovoltaic system including a solar panel and the PSO MPP tracker has been modelled and simulated, and the simulation has shown the effectiveness of PSO in drawing maximum energy and in responding quickly to changes in working conditions.
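
    A minimal PSO-based tracker of the general kind described can be sketched as follows (Python). The single-peak P-V curve, the swarm size and the inertia/acceleration coefficients are all assumptions for illustration and are not taken from the paper.

      import random

      def pv_power(v, v_oc=21.0, v_mpp=16.9, p_mpp=100.0):
          """Toy single-peak P-V curve (not a real panel model): parabola with
          its maximum p_mpp at v_mpp, clipped to zero outside [0, v_oc]."""
          if v <= 0.0 or v >= v_oc:
              return 0.0
          return max(0.0, p_mpp * (1.0 - ((v - v_mpp) / (v_oc - v_mpp)) ** 2))

      def pso_mppt(n_particles=5, iters=40, w=0.6, c1=1.5, c2=1.5, v_oc=21.0):
          """Standard PSO over the operating voltage; fitness is panel power."""
          pos = [random.uniform(0.0, v_oc) for _ in range(n_particles)]
          vel = [0.0] * n_particles
          pbest = pos[:]                               # personal best positions
          pbest_val = [pv_power(p) for p in pos]
          g = max(range(n_particles), key=lambda i: pbest_val[i])
          gbest, gbest_val = pbest[g], pbest_val[g]    # global best

          for _ in range(iters):
              for i in range(n_particles):
                  r1, r2 = random.random(), random.random()
                  vel[i] = (w * vel[i] + c1 * r1 * (pbest[i] - pos[i])
                            + c2 * r2 * (gbest - pos[i]))
                  pos[i] = min(max(pos[i] + vel[i], 0.0), v_oc)
                  p = pv_power(pos[i])
                  if p > pbest_val[i]:
                      pbest[i], pbest_val[i] = pos[i], p
                      if p > gbest_val:
                          gbest, gbest_val = pos[i], p
          return gbest, gbest_val

      v, p = pso_mppt()
      print(f"tracked MPP at {v:.2f} V, {p:.1f} W")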

  18. 3D Navier-Stokes simulations of a rotor designed for maximum aerodynamic efficiency

    DEFF Research Database (Denmark)

    Johansen, Jeppe; Madsen Aagaard, Helge; Gaunaa, Mac

    2007-01-01

    a constant load was assumed. The rotor design was obtained using an Actuator Disc model and was subsequently verified using both a free wake Lifting Line method and a full 3D Navier-Stokes solver. Excellent agreement was obtained using the three models. Global mechanical power coefficient, CP, reached...... a value of slightly above 0.51, while global thrust coefficient, CT, was 0.87. The local power coefficient, Cp, increased to slightly above the Betz limit on the inner part of the rotor as well as the local thrust coefficient, Ct, increased to a value above 1.1. This agrees well with the theory of de...

  19. Sensotronic brake control. Braking with maximum efficiency; Die Sensotronic Brake Control. Bremsen auf hoechstem Niveau

    Energy Technology Data Exchange (ETDEWEB)

    Fischle, G.; Stoll, U.; Hinrichs, W.

    2002-05-01

    Sensotronic Brake Control (SBC) celebrated its world premiere when it was introduced into standard production along with the new SL in October 2001. This innovative brake system is also fitted as standard in the new E-Class. The design of the system components is identical to those used in the SL-Class. The software control parameters have been adapted to the conditions in the new saloon. (orig.)

  20. Determination of the Maximum Aerodynamic Efficiency of Wind Turbine Rotors with Winglets

    International Nuclear Information System (INIS)

    Gaunaa, Mac; Johansen, Jeppe

    2007-01-01

    The present work contains theoretical considerations and computational results on the nature of using winglets on wind turbines. The theoretical results presented show that the power augmentation obtainable with winglets is due to a reduction of tip-effects, and is not, as believed up to now, caused by the downwind vorticity shift due to downwind winglets. The numerical work includes optimization of the power coefficient for a given tip speed ratio and geometry of the span using a newly developed free wake lifting line code, which also takes into account viscous effects and self-induced forces. Validation of the new code with CFD results for a rotor without winglets showed very good agreement. Results from the new code with winglets indicate that downwind winglets are superior to upwind ones with respect to optimization of Cp, and that the increase in power production is less than what may be obtained by a simple extension of the wing in the radial direction. The computations also show that shorter downwind winglets (>2%) come close to the increase in Cp obtained by a radial extension of the wing. Lastly, the results from the code are used to design a rotor with a 2% downwind winglet, which is computed using the Navier-Stokes solver EllipSys3D. These computations show that further work is needed to validate the FWLL code for cases where the rotor is equipped with winglets.

  1. Determination of the Maximum Aerodynamic Efficiency of Wind Turbine Rotors with Winglets

    Energy Technology Data Exchange (ETDEWEB)

    Gaunaa, Mac; Johansen, Jeppe [Senior Scientists, Risoe National Laboratory, Roskilde, DK-4000 (Denmark)

    2007-07-15

    The present work contains theoretical considerations and computational results on the nature of using winglets on wind turbines. The theoretical results presented show that the power augmentation obtainable with winglets is due to a reduction of tip-effects, and is not, as believed up to now, caused by the downwind vorticity shift due to downwind winglets. The numerical work includes optimization of the power coefficient for a given tip speed ratio and geometry of the span using a newly developed free wake lifting line code, which also takes into account viscous effects and self-induced forces. Validation of the new code with CFD results for a rotor without winglets showed very good agreement. Results from the new code with winglets indicate that downwind winglets are superior to upwind ones with respect to optimization of Cp, and that the increase in power production is less than what may be obtained by a simple extension of the wing in the radial direction. The computations also show that shorter downwind winglets (>2%) come close to the increase in Cp obtained by a radial extension of the wing. Lastly, the results from the code are used to design a rotor with a 2% downwind winglet, which is computed using the Navier-Stokes solver EllipSys3D. These computations show that further work is needed to validate the FWLL code for cases where the rotor is equipped with winglets.

  2. National Security Strategy and the Munitions' Paradox: Self-Sufficiency or Maximum Efficiency

    National Research Council Canada - National Science Library

    McChesney, Michael

    1998-01-01

    ... that the United States military strategy may not be credible to likely regional aggressors. Conversely, DoD acquisition leadership believes industry consolidation should continue and the munitions base should be expanded to include US allies...

  3. Finding optimum airfoil shape to get maximum aerodynamic efficiency for a wind turbine

    Science.gov (United States)

    Sogukpinar, Haci; Bozkurt, Ismail

    2017-02-01

    In this study, the aerodynamic performance of the S-series wind turbine airfoil S 825 is investigated to find the optimum angle of attack. Aerodynamic performance calculations are carried out with a Computational Fluid Dynamics (CFD) method based on a finite-volume approximation of the Reynolds-Averaged Navier-Stokes (RANS) equations. The lift and pressure coefficients and the lift-to-drag ratio of airfoil S 825 are analyzed with the SST turbulence model, and the obtained results are cross-checked against wind tunnel data to verify the precision of the CFD approximation. The comparison indicates that the SST turbulence model used in this study can predict the aerodynamic properties of the wind blade.

  4. Weighted Maximum-Clique Transversal Sets of Graphs

    OpenAIRE

    Chuan-Min Lee

    2011-01-01

    A maximum-clique transversal set of a graph G is a subset of vertices intersecting all maximum cliques of G. The maximum-clique transversal set problem is to find a maximum-clique transversal set of G of minimum cardinality. Motivated by the placement of transmitters for cellular telephones, Chang, Kloks, and Lee introduced the concept of maximum-clique transversal sets on graphs in 2001. In this paper, we study the weighted version of the maximum-clique transversal set problem for split grap...
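
    As a small illustration of the (unweighted) concept, the brute-force sketch below (Python) enumerates the maximum cliques with networkx and searches for a smallest vertex set hitting all of them; it is only practical for tiny graphs and does not reproduce the specialized algorithms for split graphs studied in the paper. The example graph is made up.

      from itertools import combinations
      import networkx as nx

      def min_max_clique_transversal(G):
          """Smallest vertex set meeting every maximum clique of G (brute force)."""
          cliques = list(nx.find_cliques(G))                    # maximal cliques
          k = max(len(c) for c in cliques)
          max_cliques = [set(c) for c in cliques if len(c) == k]
          for size in range(1, G.number_of_nodes() + 1):
              for cand in combinations(G.nodes, size):
                  s = set(cand)
                  if all(s & c for c in max_cliques):           # hits every maximum clique
                      return s
          return set(G.nodes)

      # Two triangles sharing vertex 3: both are maximum cliques, so {3} hits them all.
      G = nx.Graph([(1, 2), (2, 3), (1, 3), (3, 4), (4, 5), (5, 3)])
      print(min_max_clique_transversal(G))                      # -> {3}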

  5. Pattern formation, logistics, and maximum path probability

    Science.gov (United States)

    Kirkaldy, J. S.

    1985-05-01

    The concept of pattern formation, which to current researchers is a synonym for self-organization, carries the connotation of deductive logic together with the process of spontaneous inference. Defining a pattern as an equivalence relation on a set of thermodynamic objects, we establish that a large class of irreversible pattern-forming systems, evolving along idealized quasisteady paths, approaches the stable steady state as a mapping upon the formal deductive imperatives of a propositional function calculus. In the preamble the classical reversible thermodynamics of composite systems is analyzed as an externally manipulated system of space partitioning and classification based on ideal enclosures and diaphragms. The diaphragms have discrete classification capabilities which are designated in relation to conserved quantities by descriptors such as impervious, diathermal, and adiabatic. Differentiability in the continuum thermodynamic calculus is invoked as equivalent to analyticity and consistency in the underlying class or sentential calculus. The seat of inference, however, rests with the thermodynamicist. In the transition to an irreversible pattern-forming system the defined nature of the composite reservoirs remains, but a given diaphragm is replaced by a pattern-forming system which by its nature is a spontaneously evolving volume partitioner and classifier of invariants. The seat of volition or inference for the classification system is thus transferred from the experimenter or theoretician to the diaphragm, and with it the full deductive facility. The equivalence relations or partitions associated with the emerging patterns may thus be associated with theorems of the natural pattern-forming calculus. The entropy function, together with its derivatives, is the vehicle which relates the logistics of reservoirs and diaphragms to the analog logistics of the continuum. Maximum path probability or second-order differentiability of the entropy in isolation are

  6. Maximum power point tracking algorithm based on sliding mode and fuzzy logic for photovoltaic sources under variable environmental conditions

    Science.gov (United States)

    Atik, L.; Petit, P.; Sawicki, J. P.; Ternifi, Z. T.; Bachir, G.; Della, M.; Aillerie, M.

    2017-02-01

    Solar panels have a nonlinear voltage-current characteristic, with a distinct maximum power point (MPP), which depends on environmental factors such as temperature and irradiation. In order to continuously harvest maximum power from the solar panels, they have to operate at their MPP despite the inevitable changes in the environment. Various methods for maximum power point tracking (MPPT) were developed and finally implemented in solar power electronic controllers to increase the efficiency of the electricity production originating from renewables. In this paper we compare, using the Matlab tool Simulink, two different MPP tracking methods, fuzzy logic control (FL) and sliding mode control (SMC), considering their efficiency in solar energy production.

  7. Accurate modeling and maximum power point detection of ...

    African Journals Online (AJOL)

    Accurate modeling and maximum power point detection of photovoltaic ... Determination of MPP enables the PV system to deliver maximum available power. ..... adaptive artificial neural network: Proposition for a new sizing procedure.

  8. Maximum power per VA control of vector controlled interior ...

    Indian Academy of Sciences (India)

    Thakur Sumeet Singh

    2018-04-11

    Apr 11, 2018 ... Department of Electrical Engineering, Indian Institute of Technology Delhi, New ... The MPVA operation allows maximum-utilization of the drive-system. ... Permanent magnet motor; unity power factor; maximum VA utilization; ...

  9. Electron density distribution in Si and Ge using multipole, maximum ...

    Indian Academy of Sciences (India)

    Si and Ge has been studied using multipole, maximum entropy method (MEM) and ... and electron density distribution using the currently available versatile ..... data should be subjected to maximum possible utility for the characterization of.

  10. Energy efficiency in pumps

    Energy Technology Data Exchange (ETDEWEB)

    Kaya, Durmus; Yagmur, E. Alptekin [TUBITAK-MRC, P.O. Box 21, 41470 Gebze, Kocaeli (Turkey); Yigit, K. Suleyman; Eren, A. Salih; Celik, Cenk [Engineering Faculty, Kocaeli University, Kocaeli (Turkey); Kilic, Fatma Canka [Department of Air Conditioning and Refrigeration, Kocaeli University, Kullar, Kocaeli (Turkey)

    2008-06-15

    In this paper, 'energy efficiency' studies performed on the pumps of a large industrial facility are reported. For this purpose, the flow rate, pressure and temperature have been measured for each pump in different operating conditions and at maximum load. In addition, the electrical power drawn by the electric motor has been measured. The efficiencies of the existing pumps and electric motors have been calculated using the measured data. Potential energy saving opportunities have been studied by taking into account the results of the calculations for each pump and electric motor. In conclusion, improvements should be made to each system. The required investment costs for these improvements have been determined, and simple payback periods have been calculated. The main energy saving opportunities result from: replacement of the existing low-efficiency pumps, maintenance of the pumps whose efficiencies start to decline at a certain range, replacement of high-power electric motors with electric motors of suitable power, usage of high-efficiency electric motors, and elimination of cavitation problems. (author)
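
    The kind of calculation described, comparing the hydraulic power delivered with the electrical power drawn and converting the resulting savings into a simple payback period, can be sketched as follows (Python); all numbers are hypothetical, not measurements from the facility.

      RHO = 1000.0   # water density, kg/m^3
      G = 9.81       # gravitational acceleration, m/s^2

      def pump_efficiency(flow_m3_s, head_m, electrical_kw):
          """Overall efficiency = hydraulic power delivered / electrical power drawn."""
          hydraulic_kw = RHO * G * flow_m3_s * head_m / 1000.0
          return hydraulic_kw / electrical_kw

      def simple_payback_years(investment, annual_saving):
          """Simple payback period = required investment / yearly energy-cost saving."""
          return investment / annual_saving

      # Hypothetical measurement at maximum load
      eta = pump_efficiency(flow_m3_s=0.05, head_m=40.0, electrical_kw=35.0)
      print(f"pump + motor efficiency ~ {eta:.0%}")            # ~56%
      print(f"payback ~ {simple_payback_years(12000, 4800):.1f} years")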

  11. Energy efficiency in pumps

    International Nuclear Information System (INIS)

    Kaya, Durmus; Yagmur, E. Alptekin; Yigit, K. Suleyman; Kilic, Fatma Canka; Eren, A. Salih; Celik, Cenk

    2008-01-01

    In this paper, 'energy efficiency' studies performed on the pumps of a large industrial facility are reported. For this purpose, the flow rate, pressure and temperature have been measured for each pump in different operating conditions and at maximum load. In addition, the electrical power drawn by the electric motor has been measured. The efficiencies of the existing pumps and electric motors have been calculated using the measured data. Potential energy saving opportunities have been studied by taking into account the results of the calculations for each pump and electric motor. In conclusion, improvements should be made to each system. The required investment costs for these improvements have been determined, and simple payback periods have been calculated. The main energy saving opportunities result from: replacement of the existing low-efficiency pumps, maintenance of the pumps whose efficiencies start to decline at a certain range, replacement of high-power electric motors with electric motors of suitable power, usage of high-efficiency electric motors, and elimination of cavitation problems

  12. Depth of maximum of air-shower profiles at the Pierre Auger Observatory. II. Composition implications

    Czech Academy of Sciences Publication Activity Database

    Aab, A.; Abreu, P.; Aglietta, M.; Boháčová, Martina; Chudoba, Jiří; Ebr, Jan; Mandát, Dušan; Nečesal, Petr; Palatka, Miroslav; Pech, Miroslav; Prouza, Michael; Řídký, Jan; Schovánek, Petr; Trávníček, Petr; Vícha, Jakub

    2014-01-01

    Roč. 90, č. 12 (2014), "122006-1"-"122006-12" ISSN 1550-7998 R&D Projects: GA MŠk(CZ) LG13007; GA MŠk(CZ) 7AMB14AR005; GA ČR(CZ) GA14-17501S Institutional support: RVO:68378271 Keywords: Pierre Auger Observatory * air-shower * fluorescence telescopes Subject RIV: BF - Elementary Particles and High Energy Physics Impact factor: 4.643, year: 2014

  13. 40 CFR 141.13 - Maximum contaminant levels for turbidity.

    Science.gov (United States)

    2010-07-01

    ... turbidity. 141.13 Section 141.13 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER... Maximum contaminant levels for turbidity. The maximum contaminant levels for turbidity are applicable to... part. The maximum contaminant levels for turbidity in drinking water, measured at a representative...

  14. Maximum Power Training and Plyometrics for Cross-Country Running.

    Science.gov (United States)

    Ebben, William P.

    2001-01-01

    Provides a rationale for maximum power training and plyometrics as conditioning strategies for cross-country runners, examining: an evaluation of training methods (strength training and maximum power training and plyometrics); biomechanic and velocity specificity (role in preventing injury); and practical application of maximum power training and…

  15. 13 CFR 107.840 - Maximum term of Financing.

    Science.gov (United States)

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Maximum term of Financing. 107.840... COMPANIES Financing of Small Businesses by Licensees Structuring Licensee's Financing of An Eligible Small Business: Terms and Conditions of Financing § 107.840 Maximum term of Financing. The maximum term of any...

  16. 7 CFR 3565.210 - Maximum interest rate.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Maximum interest rate. 3565.210 Section 3565.210... AGRICULTURE GUARANTEED RURAL RENTAL HOUSING PROGRAM Loan Requirements § 3565.210 Maximum interest rate. The interest rate for a guaranteed loan must not exceed the maximum allowable rate specified by the Agency in...

  17. Characterizing graphs of maximum matching width at most 2

    DEFF Research Database (Denmark)

    Jeong, Jisu; Ok, Seongmin; Suh, Geewon

    2017-01-01

    The maximum matching width is a width-parameter that is defined on a branch-decomposition over the vertex set of a graph. The size of a maximum matching in the bipartite graph is used as a cut-function. In this paper, we characterize the graphs of maximum matching width at most 2 using the minor o...

  18. On the Five-Moment Hamburger Maximum Entropy Reconstruction

    Science.gov (United States)

    Summy, D. P.; Pullin, D. I.

    2018-05-01

    We consider the Maximum Entropy Reconstruction (MER) as a solution to the five-moment truncated Hamburger moment problem in one dimension. In the case of five monomial moment constraints, the probability density function (PDF) of the MER takes the form of the exponential of a quartic polynomial. This implies a possible bimodal structure in regions of moment space. An analytical model is developed for the MER PDF applicable near a known singular line in a centered, two-component, third- and fourth-order moment (μ_3, μ_4) space, consistent with the general problem of five moments. The model consists of the superposition of a perturbed, centered Gaussian PDF and a small-amplitude packet of PDF-density, called the outlying moment packet (OMP), sitting far from the mean. Asymptotic solutions are obtained which predict the shape of the perturbed Gaussian and both the amplitude and position on the real line of the OMP. The asymptotic solutions show that the presence of the OMP gives rise to an MER solution that is singular along a line in (μ_3, μ_4) space emanating from, but not including, the point representing a standard normal distribution, or thermodynamic equilibrium. We use this analysis of the OMP to develop a numerical regularization of the MER, creating a procedure we call the Hybrid MER (HMER). Compared with the MER, the HMER is a significant improvement in terms of robustness and efficiency while preserving accuracy in its prediction of other important distribution features, such as higher order moments.

  19. Maximum intensity projection MR angiography using shifted image data

    International Nuclear Information System (INIS)

    Machida, Yoshio; Ichinose, Nobuyasu; Hatanaka, Masahiko; Goro, Takehiko; Kitake, Shinichi; Hatta, Junicchi.

    1992-01-01

    The quality of MR angiograms has been significantly improved in the past several years. Spatial resolution, however, is not sufficient for clinical use. On the other hand, MR image data can be resampled at arbitrary positions using the Fourier shift theorem, and the quality of multi-planar reformatted images has been reported to improve remarkably using 'shifted data'. In this paper, we have clarified the efficiency of 'shifted data' for maximum intensity projection MR angiography. Our experimental studies and theoretical considerations showed that the quality of MR angiograms is significantly improved using 'shifted data' as follows: 1) remarkable reduction of mosaic artifacts, 2) improvement of spatial continuity of the blood vessels, and 3) reduction of variance of the signal intensity along the blood vessels. In other words, the angiograms look much 'finer' than conventional ones, although the spatial resolution is not improved theoretically. Furthermore, we found that the quality of MR angiograms does not improve significantly when the 'shifted data' are more than twice as dense as the original ones. (author)

  20. Maximum likelihood pedigree reconstruction using integer linear programming.

    Science.gov (United States)

    Cussens, James; Bartlett, Mark; Jones, Elinor M; Sheehan, Nuala A

    2013-01-01

    Large population biobanks of unrelated individuals have been highly successful in detecting common genetic variants affecting diseases of public health concern. However, they lack the statistical power to detect more modest gene-gene and gene-environment interaction effects or the effects of rare variants for which related individuals are ideally required. In reality, most large population studies will undoubtedly contain sets of undeclared relatives, or pedigrees. Although a crude measure of relatedness might sometimes suffice, having a good estimate of the true pedigree would be much more informative if this could be obtained efficiently. Relatives are more likely to share longer haplotypes around disease susceptibility loci and are hence biologically more informative for rare variants than unrelated cases and controls. Distant relatives are arguably more useful for detecting variants with small effects because they are less likely to share masking environmental effects. Moreover, the identification of relatives enables appropriate adjustments of statistical analyses that typically assume unrelatedness. We propose to exploit an integer linear programming optimisation approach to pedigree learning, which is adapted to find valid pedigrees by imposing appropriate constraints. Our method is not restricted to small pedigrees and is guaranteed to return a maximum likelihood pedigree. With additional constraints, we can also search for multiple high-probability pedigrees and thus account for the inherent uncertainty in any particular pedigree reconstruction. The true pedigree is found very quickly by comparison with other methods when all individuals are observed. Extensions to more complex problems seem feasible. © 2012 Wiley Periodicals, Inc.

  1. Feedback Limits to Maximum Seed Masses of Black Holes

    International Nuclear Information System (INIS)

    Pacucci, Fabio; Natarajan, Priyamvada; Ferrara, Andrea

    2017-01-01

    The most massive black holes observed in the universe weigh up to ∼10^10 M_⊙, nearly independent of redshift. Reaching these final masses likely required copious accretion and several major mergers. Employing a dynamical approach that rests on the role played by a new, relevant physical scale—the transition radius—we provide a theoretical calculation of the maximum mass achievable by a black hole seed that forms in an isolated halo, one that scarcely merged. Incorporating effects at the transition radius and their impact on the evolution of accretion in isolated halos, we are able to obtain new limits for permitted growth. We find that large black hole seeds (M_• ≳ 10^4 M_⊙) hosted in small isolated halos (M_h ≲ 10^9 M_⊙) accreting with relatively small radiative efficiencies (ϵ ≲ 0.1) grow optimally in these circumstances. Moreover, we show that the standard M_•–σ relation observed at z ∼ 0 cannot be established in isolated halos at high-z, but requires the occurrence of mergers. Since the average limiting mass of black holes formed at z ≳ 10 is in the range 10^4–10^6 M_⊙, we expect to observe them in local galaxies as intermediate-mass black holes, when hosted in the rare halos that experienced only minor or no merging events. Such ancient black holes, formed in isolation with subsequent scant growth, could survive, almost unchanged, until present.

  2. Energetic aspects of skeletal muscle contraction: implications of fiber types.

    Science.gov (United States)

    Rall, J A

    1985-01-01

    In this chapter fundamental energetic properties of skeletal muscles as elucidated from isolated muscle preparations are described. Implications of these intrinsic properties for the energetic characterization of different fiber types and for the understanding of locomotion have been considered. Emphasis was placed on the myriad of physical and chemical techniques that can be employed to understand muscle energetics and on the interrelationship of results from different techniques. The anaerobic initial processes which liberate energy during contraction and relaxation are discussed in detail. The high-energy phosphate (~P) utilized during contraction and relaxation can be distributed between actomyosin ATPase or cross-bridge cycling (70%) and the Ca2+ ATPase of the sarcoplasmic reticulum (30%). Muscle shortening increases the rate of ~P hydrolysis, and stretching a muscle during contraction suppresses the rate of ~P hydrolysis. The economy of an isometric contraction is defined as the ratio of isometric mechanical response to energetic cost and is shown to be a fundamental intrinsic parameter describing muscle energetics. Economy of contraction varies across the animal kingdom by over three orders of magnitude and is different in different mammalian fiber types. In mammalian skeletal muscles differences in economy of contraction can be attributed mainly to differences in the specific actomyosin and Ca2+ ATPase of muscles. Furthermore, there is an inverse relationship between economy of contraction and maximum velocity of muscle shortening (Vmax) and maximum power output. This is a fundamental relationship. Muscles cannot be economical at developing and maintaining force and also exhibit rapid shortening. Interestingly, there appears to be a subtle system of unknown nature that modulates the Vmax and economy of contraction. Efficiency of a work-producing contraction is defined and contrasted to the economy of contraction

  3. Estimation of Maximum Allowable PV Connection to LV Residential Power Networks

    DEFF Research Database (Denmark)

    Demirok, Erhan; Sera, Dezso; Teodorescu, Remus

    2011-01-01

    Maximum photovoltaic (PV) hosting capacity of low voltage (LV) power networks is mainly restricted by either thermal limits of network components or grid voltage quality resulting from high penetration of distributed PV systems. This maximum hosting capacity may be lower than the available solar...... potential of the geographic area due to power network limitations even though all rooftops are fully occupied with PV modules. Therefore, it becomes more of an issue to know what exactly limits higher PV penetration level and which solutions should be engaged efficiently such as over sizing distribution

  4. A polynomial time algorithm for solving the maximum flow problem in directed networks

    International Nuclear Information System (INIS)

    Tlas, M.

    2015-01-01

    An efficient polynomial time algorithm for solving maximum flow problems has been proposed in this paper. The algorithm is based on the binary representation of capacities; it solves the maximum flow problem as a sequence of O(m) shortest path problems on residual networks with n nodes and m arcs. It runs in O(m^2 r) time, where r is the smallest integer greater than or equal to log B, and B is the largest arc capacity of the network. A numerical example has been illustrated using this proposed algorithm. (author)
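
    The abstract does not give the algorithm itself, but the binary-representation idea can be illustrated with a standard capacity-scaling max-flow sketch (Python): augment only along paths whose residual capacities are at least a threshold delta, halving delta down to 1. This is a generic sketch, not the paper's exact procedure or complexity analysis.

      from collections import deque

      def max_flow_capacity_scaling(n, edges, s, t):
          """Capacity-scaling max flow on a directed graph with nodes 0..n-1."""
          cap = [[0] * n for _ in range(n)]
          for u, v, c in edges:
              cap[u][v] += c

          def bfs_parents(delta):
              """BFS over residual arcs with capacity >= delta; returns parents or None."""
              parent = [-1] * n
              parent[s] = s
              q = deque([s])
              while q:
                  u = q.popleft()
                  for v in range(n):
                      if parent[v] == -1 and cap[u][v] >= delta:
                          parent[v] = u
                          if v == t:
                              return parent
                          q.append(v)
              return None

          flow, delta = 0, 1
          while delta * 2 <= max((c for _, _, c in edges), default=0):
              delta *= 2                               # start at the highest "bit" of the capacities
          while delta >= 1:
              parent = bfs_parents(delta)
              if parent is None:
                  delta //= 2                          # move to the next lower bit
                  continue
              bottleneck, v = float("inf"), t          # find the bottleneck along s -> t
              while v != s:
                  bottleneck = min(bottleneck, cap[parent[v]][v])
                  v = parent[v]
              v = t                                    # push the bottleneck and update residuals
              while v != s:
                  cap[parent[v]][v] -= bottleneck
                  cap[v][parent[v]] += bottleneck
                  v = parent[v]
              flow += bottleneck
          return flow

      edges = [(0, 1, 10), (0, 2, 5), (1, 2, 15), (1, 3, 5), (2, 3, 10)]
      print(max_flow_capacity_scaling(4, edges, 0, 3))   # -> 15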

  5. 40 CFR 1042.140 - Maximum engine power, displacement, power density, and maximum in-use engine speed.

    Science.gov (United States)

    2010-07-01

    ... cylinders having an internal diameter of 13.0 cm and a 15.5 cm stroke length, the rounded displacement would... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Maximum engine power, displacement... Maximum engine power, displacement, power density, and maximum in-use engine speed. This section describes...

  6. Energy efficiency standards and innovation

    Science.gov (United States)

    Morrison, Geoff

    2015-01-01

    Van Buskirk et al (2014 Environ. Res. Lett. 9 114010) demonstrate that the purchase price, lifecycle cost and price of improving efficiency (i.e. the incremental price of efficiency gain) decline at an accelerated rate following the adoption of the first energy efficiency standards for five consumer products. The authors show these trends using an experience curve framework (i.e. price/cost versus cumulative production). While the paper does not draw a causal link between standards and declining prices, they provide suggestive evidence using markets in the US and Europe. Below, I discuss the potential implications of the work.

  7. The maximum entropy production and maximum Shannon information entropy in enzyme kinetics

    Science.gov (United States)

    Dobovišek, Andrej; Markovič, Rene; Brumen, Milan; Fajmut, Aleš

    2018-04-01

    We demonstrate that the maximum entropy production principle (MEPP) serves as a physical selection principle for the description of the most probable non-equilibrium steady states in simple enzymatic reactions. A theoretical approach is developed, which enables maximization of the density of entropy production with respect to the enzyme rate constants for the enzyme reaction in a steady state. Mass and Gibbs free energy conservations are considered as optimization constraints. In such a way computed optimal enzyme rate constants in a steady state yield also the most uniform probability distribution of the enzyme states. This accounts for the maximal Shannon information entropy. By means of the stability analysis it is also demonstrated that maximal density of entropy production in that enzyme reaction requires flexible enzyme structure, which enables rapid transitions between different enzyme states. These results are supported by an example, in which density of entropy production and Shannon information entropy are numerically maximized for the enzyme Glucose Isomerase.

  8. Solar Maximum Mission Experiment - Ultraviolet Spectroscopy and Polarimetry on the Solar Maximum Mission

    Science.gov (United States)

    Tandberg-Hanssen, E.; Cheng, C. C.; Woodgate, B. E.; Brandt, J. C.; Chapman, R. D.; Athay, R. G.; Beckers, J. M.; Bruner, E. C.; Gurman, J. B.; Hyder, C. L.

    1981-01-01

    The Ultraviolet Spectrometer and Polarimeter on the Solar Maximum Mission spacecraft is described. It is pointed out that the instrument, which operates in the wavelength range 1150-3600 A, has a spatial resolution of 2-3 arcsec and a spectral resolution of 0.02 A FWHM in second order. A Gregorian telescope, with a focal length of 1.8 m, feeds a 1 m Ebert-Fastie spectrometer. A polarimeter comprising rotating MgF2 waveplates can be inserted behind the spectrometer entrance slit; it permits all four Stokes parameters to be determined. Among the observing modes are rasters, spectral scans, velocity measurements, and polarimetry. Examples of initial observations made since launch are presented.

  9. High performance monolithic power management system with dynamic maximum power point tracking for microbial fuel cells.

    Science.gov (United States)

    Erbay, Celal; Carreon-Bautista, Salvador; Sanchez-Sinencio, Edgar; Han, Arum

    2014-12-02

    The microbial fuel cell (MFC), which can directly generate electricity from organic waste or biomass, is a promising renewable and clean technology. However, the low power and low voltage output of MFCs typically do not allow directly operating most electrical applications, whether it is supplementing electricity to wastewater treatment plants or powering autonomous wireless sensor networks. Power management systems (PMSs) can overcome this limitation by boosting the MFC output voltage and managing the power for maximum efficiency. We present a monolithic low-power-consuming PMS integrated circuit (IC) chip capable of dynamic maximum power point tracking (MPPT) to maximize the extracted power from MFCs, regardless of the power and voltage fluctuations from MFCs over time. The proposed PMS continuously detects the maximum power point (MPP) of the MFC and matches the load impedance of the PMS for maximum efficiency. The system also operates autonomously by directly drawing power from the MFC itself without any external power. The overall system efficiency, defined as the ratio between the output energy stored in the supercapacitor of the PMS and the input energy from the MFC, was 30%. As a demonstration, the PMS connected to a 240 mL two-chamber MFC (generating 0.4 V and 512 μW at MPP) successfully powered a wireless temperature sensor that requires a voltage of 2.5 V and consumes power of 85 mW each time it transmits the sensor data, and successfully transmitted a sensor reading every 7.5 min. The PMS also efficiently managed the power output of a lower-power producing MFC, demonstrating that the PMS works efficiently at various MFC power output levels.

  10. Batch efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Schwickerath, Ulrich; Silva, Ricardo; Uria, Christian, E-mail: Ulrich.Schwickerath@cern.c, E-mail: Ricardo.Silva@cern.c [CERN IT, 1211 Geneve 23 (Switzerland)

    2010-04-01

    A frequent source of concern for resource providers is the efficient use of computing resources in their centers. This has a direct impact on requests for new resources. There are two different but strongly correlated aspects to be considered: while users are mostly interested in a good turn-around time for their jobs, resource providers are mostly interested in a high and efficient usage of their available resources. Both things, the box usage and the efficiency of individual user jobs, need to be closely monitored so that the sources of the inefficiencies can be identified. At CERN, the Lemon monitoring system is used for both purposes. Examples of such sources are poorly written user code, inefficient access to mass storage systems, and dedication of resources to specific user groups. As a first step for improvements CERN has launched a project to develop a scheduler add-on that allows careful overloading of worker nodes that run idle jobs.

  11. Understanding the Role of Reservoir Size on Probable Maximum Precipitation

    Science.gov (United States)

    Woldemichael, A. T.; Hossain, F.

    2011-12-01

    This study addresses the question 'Does surface area of an artificial reservoir matter in the estimation of probable maximum precipitation (PMP) for an impounded basin?' The motivation of the study was based on the notion that the stationarity assumption that is implicit in the PMP for dam design can be undermined in the post-dam era due to an enhancement of extreme precipitation patterns by an artificial reservoir. In addition, the study lays the foundation for use of regional atmospheric models as one way to perform life cycle assessment for planned or existing dams to formulate best management practices. The American River Watershed (ARW) with the Folsom dam at the confluence of the American River was selected as the study region and the Dec-Jan 1996-97 storm event was selected for the study period. The numerical atmospheric model used for the study was the Regional Atmospheric Modeling System (RAMS). First, the numerical modeling system, RAMS, was calibrated and validated with selected station and spatially interpolated precipitation data. Best combinations of parameterization schemes in RAMS were accordingly selected. Second, to mimic the standard method of PMP estimation by moisture maximization technique, relative humidity terms in the model were raised to 100% from ground up to the 500mb level. The obtained model-based maximum 72-hr precipitation values were named extreme precipitation (EP) as a distinction from the PMPs obtained by the standard methods. Third, six hypothetical reservoir size scenarios ranging from no-dam (all-dry) to the reservoir submerging half of the basin were established to test the influence of reservoir size variation on EP. For the case of the ARW, our study clearly demonstrated that the assumption of stationarity that is implicit in the traditional estimation of PMP can be rendered invalid to a large part due to the very presence of the artificial reservoir. Cloud tracking procedures performed on the basin also give indication of the

  12. Maximum Power Point Tracking (MPPT) in a Wind Power Plant System Using a Buck-Boost Converter

    Directory of Open Access Journals (Sweden)

    Muhamad Otong

    2017-05-01

    Full Text Available In this paper, the implementation of the Maximum Power Point Tracking (MPPT) technique is developed using a buck-boost converter. A perturb and observe (P&O) MPPT algorithm is used to search for the maximum power from the wind power plant for charging the battery. The model used in this study is a Variable Speed Wind Turbine (VSWT) with a Permanent Magnet Synchronous Generator (PMSG). Analysis, design, and modeling of the wind energy conversion system have been done using MATLAB/Simulink. The simulation results show that the proposed MPPT produces a higher output power than the system without MPPT. The average efficiency that can be achieved by the proposed system in transferring the maximum power into the battery is 90.56%.
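
    A minimal perturb and observe loop of the kind referred to can be sketched as follows (Python). The unimodal power-versus-duty-cycle curve, the step size and the iteration count are illustrative assumptions, not the VSWT/PMSG model of the paper.

      def wind_power(duty):
          """Toy unimodal power-vs-duty-cycle curve (placeholder for the rectified
          generator output seen through the buck-boost converter)."""
          return max(0.0, 1.0 - (duty - 0.55) ** 2 / 0.09)   # peak of 1.0 at duty = 0.55

      def perturb_and_observe(duty=0.3, step=0.01, iters=100):
          """Classic P&O: keep stepping in the direction that increased power."""
          last_p = wind_power(duty)
          direction = +1
          for _ in range(iters):
              duty = min(max(duty + direction * step, 0.0), 1.0)
              p = wind_power(duty)
              if p < last_p:            # power dropped -> reverse the perturbation
                  direction = -direction
              last_p = p
          return duty, last_p

      d, p = perturb_and_observe()
      print(f"settled near duty = {d:.2f}, normalized power = {p:.2f}")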

  13. Overview of Maximum Power Point Tracking Techniques for Photovoltaic Energy Production Systems

    DEFF Research Database (Denmark)

    Koutroulis, Eftichios; Blaabjerg, Frede

    2015-01-01

    A substantial growth of the installed photovoltaic systems capacity has occurred around the world during the last decade, thus enhancing the availability of electric energy in an environmentally friendly way. The maximum power point tracking technique enables maximization of the energy production...... of photovoltaic sources during stochastically varying solar irradiation and ambient temperature conditions. Thus, the overall efficiency of the photovoltaic energy production system is increased. Numerous techniques have been presented during the last decade for implementing the maximum power point tracking...... process in a photovoltaic system. This article provides an overview of the operating principles of these techniques, which are suited for either uniform or non-uniform solar irradiation conditions. The operational characteristics and implementation requirements of these maximum power point tracking...

  14. Development of an Intelligent Maximum Power Point Tracker Using an Advanced PV System Test Platform

    DEFF Research Database (Denmark)

    Spataru, Sergiu; Amoiridis, Anastasios; Beres, Remus Narcis

    2013-01-01

    The performance of photovoltaic systems is often reduced by the presence of partial shadows. The system efficiency and availability can be improved by a maximum power point tracking algorithm that is able to detect partial shadow conditions and to optimize the power output. This work proposes...... an intelligent maximum power point tracking method that monitors the maximum power point voltage and triggers a current-voltage sweep only when a partial shadow is detected, therefore minimizing power loss due to repeated current-voltage sweeps. The proposed system is validated on an advanced, flexible...... photovoltaic inverter system test platform that is able to reproduce realistic partial shadow conditions, both in simulation and on hardware test system....

  15. Benefits of the maximum tolerated dose (MTD) and maximum tolerated concentration (MTC) concept in aquatic toxicology

    International Nuclear Information System (INIS)

    Hutchinson, Thomas H.; Boegi, Christian; Winter, Matthew J.; Owens, J. Willie

    2009-01-01

    There is increasing recognition of the need to identify specific sublethal effects of chemicals, such as reproductive toxicity, and specific modes of actions of the chemicals, such as interference with the endocrine system. To achieve these aims requires criteria which provide a basis to interpret study findings so as to separate these specific toxicities and modes of action from not only acute lethality per se but also from severe inanition and malaise that non-specifically compromise reproductive capacity and the response of endocrine endpoints. Mammalian toxicologists have recognized that very high dose levels are sometimes required to elicit both specific adverse effects and present the potential of non-specific 'systemic toxicity'. Mammalian toxicologists have developed the concept of a maximum tolerated dose (MTD) beyond which a specific toxicity or action cannot be attributed to a test substance due to the compromised state of the organism. Ecotoxicologists are now confronted by a similar challenge and must develop an analogous concept of a MTD and the respective criteria. As examples of this conundrum, we note recent developments in efforts to validate protocols for fish reproductive toxicity and endocrine screens (e.g. some chemicals originally selected as 'negatives' elicited decreases in fecundity or changes in endpoints intended to be biomarkers for endocrine modes of action). Unless analogous criteria can be developed, the potentially confounding effects of systemic toxicity may then undermine the reliable assessment of specific reproductive effects or biomarkers such as vitellogenin or spiggin. The same issue confronts other areas of aquatic toxicology (e.g., genotoxicity) and the use of aquatic animals for preclinical assessments of drugs (e.g., use of zebrafish for drug safety assessment). We propose that there are benefits to adopting the concept of an MTD for toxicology and pharmacology studies using fish and other aquatic organisms and the

  16. Microprocessor Controlled Maximum Power Point Tracker for Photovoltaic Application

    International Nuclear Information System (INIS)

    Jiya, J. D.; Tahirou, G.

    2002-01-01

    This paper presents a microprocessor-controlled maximum power point tracker for a photovoltaic module. Input current and voltage are measured and multiplied within the microprocessor, which contains an algorithm to seek the maximum power point. The duty cycle of the DC-DC converter at which the maximum power occurs is obtained, noted and adjusted. The microprocessor constantly seeks to improve the obtained power by varying the duty cycle.

  17. Performance Analysis of a Maximum Power Point Tracking Technique using Silver Mean Method

    Directory of Open Access Journals (Sweden)

    Shobha Rani Depuru

    2018-01-01

    Full Text Available This paper presents a simple and particularly efficacious Maximum Power Point Tracking (MPPT) algorithm based on the Silver Mean Method (SMM). This method operates by choosing a search interval from the P-V characteristics of the given solar array and converges to the MPP of the Solar Photo-Voltaic (SPV) system by shrinking its interval. After achieving the maximum power, the algorithm stops shrinking and maintains a constant voltage until the next interval is decided. The tracking capability, efficiency and performance of the proposed algorithm are validated by simulation and experimental results with a 100 W solar panel under variable temperature and irradiance conditions. The results obtained confirm that, even without any perturbation and observation process, the proposed method still outperforms the traditional perturb and observe (P&O) method by demonstrating far better steady-state output, more accuracy and higher efficiency.
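
    The abstract describes an interval-shrinking search; a generic section-search sketch along those lines is given below (Python), with the two probe voltages placed using the silver mean as an assumed reading of the method. The P-V curve and tolerance are also assumptions, so this illustrates the shrinking-interval idea rather than the authors' exact SMM.

      SILVER = 1.0 + 2.0 ** 0.5          # the silver mean, ~2.414

      def pv_power(v, v_mpp=17.2, p_mpp=100.0, v_oc=21.0):
          """Toy unimodal P-V curve standing in for the measured panel power."""
          if v <= 0.0 or v >= v_oc:
              return 0.0
          return max(0.0, p_mpp * (1.0 - ((v - v_mpp) / (v_oc - v_mpp)) ** 2))

      def section_search_mpp(lo=0.0, hi=21.0, tol=0.05):
          """Shrink [lo, hi] around the MPP using two interior probe voltages."""
          while hi - lo > tol:
              x1 = lo + (hi - lo) / SILVER    # ~41% of the way across the bracket
              x2 = hi - (hi - lo) / SILVER    # ~59% of the way across
              if pv_power(x1) < pv_power(x2):
                  lo = x1                     # for a unimodal curve the MPP is right of x1
              else:
                  hi = x2                     # otherwise it is left of x2
          return 0.5 * (lo + hi)

      print(f"estimated MPP voltage ~ {section_search_mpp():.2f} V")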

  18. The Data-Constrained Generalized Maximum Entropy Estimator of the GLM: Asymptotic Theory and Inference

    Directory of Open Access Journals (Sweden)

    Nicholas Scott Cardell

    2013-05-01

    Full Text Available Maximum entropy methods of parameter estimation are appealing because they impose no additional structure on the data, other than that explicitly assumed by the analyst. In this paper we prove that the data constrained GME estimator of the general linear model is consistent and asymptotically normal. The approach we take in establishing the asymptotic properties concomitantly identifies a new computationally efficient method for calculating GME estimates. Formulae are developed to compute asymptotic variances and to perform Wald, likelihood ratio, and Lagrangian multiplier statistical tests on model parameters. Monte Carlo simulations are provided to assess the performance of the GME estimator in both large and small sample situations. Furthermore, we extend our results to maximum cross-entropy estimators and indicate a variant of the GME estimator that is unbiased. Finally, we discuss the relationship of GME estimators to Bayesian estimators, pointing out the conditions under which an unbiased GME estimator would be efficient.

  19. Realworld maximum power point tracking simulation of PV system based on Fuzzy Logic control

    Science.gov (United States)

    Othman, Ahmed M.; El-arini, Mahdi M. M.; Ghitas, Ahmed; Fathy, Ahmed

    2012-12-01

    In recent years, solar energy has become one of the most important alternative sources of electric energy, so it is important to improve the efficiency and reliability of photovoltaic (PV) systems. Maximum power point tracking (MPPT) plays an important role in photovoltaic power systems because it maximizes the power output from a PV system for a given set of conditions, and therefore maximizes the array efficiency. This paper presents a maximum power point tracker (MPPT) using Fuzzy Logic theory for a PV system. The work is focused on the well-known Perturb and Observe (P&O) algorithm, which is compared to a designed fuzzy logic controller (FLC). The simulation work deals with an MPPT controller and a DC/DC Ćuk converter feeding a load. The results showed that the proposed Fuzzy Logic MPPT in the PV system is valid.

  20. Realworld maximum power point tracking simulation of PV system based on Fuzzy Logic control

    Directory of Open Access Journals (Sweden)

    Ahmed M. Othman

    2012-12-01

    Full Text Available In recent years, solar energy has become one of the most important alternative sources of electric energy, so it is important to improve the efficiency and reliability of photovoltaic (PV) systems. Maximum power point tracking (MPPT) plays an important role in photovoltaic power systems because it maximizes the power output from a PV system for a given set of conditions, and therefore maximizes the array efficiency. This paper presents a maximum power point tracker (MPPT) using Fuzzy Logic theory for a PV system. The work is focused on the well-known Perturb and Observe (P&O) algorithm, which is compared to a designed fuzzy logic controller (FLC). The simulation work deals with an MPPT controller and a DC/DC Ćuk converter feeding a load. The results showed that the proposed Fuzzy Logic MPPT in the PV system is valid.

  1. Preconditioned alternating projection algorithms for maximum a posteriori ECT reconstruction

    International Nuclear Information System (INIS)

    Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng

    2012-01-01

    We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with the total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove theoretically convergence of the PAPA. In numerical experiments, performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner outperforms significantly the EM-TV in all aspects including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in the convergence speed while providing comparable image quality. (paper)

  2. Feedback Limits to Maximum Seed Masses of Black Holes

    Energy Technology Data Exchange (ETDEWEB)

    Pacucci, Fabio; Natarajan, Priyamvada [Department of Physics, Yale University, P.O. Box 208121, New Haven, CT 06520 (United States); Ferrara, Andrea [Scuola Normale Superiore, Piazza dei Cavalieri 7, I-56126 Pisa (Italy)

    2017-02-01

    The most massive black holes observed in the universe weigh up to ∼10^10 M_⊙, nearly independent of redshift. Reaching these final masses likely required copious accretion and several major mergers. Employing a dynamical approach that rests on the role played by a new, relevant physical scale—the transition radius—we provide a theoretical calculation of the maximum mass achievable by a black hole seed that forms in an isolated halo, one that scarcely merged. Incorporating effects at the transition radius and their impact on the evolution of accretion in isolated halos, we are able to obtain new limits for permitted growth. We find that large black hole seeds (M_• ≳ 10^4 M_⊙) hosted in small isolated halos (M_h ≲ 10^9 M_⊙) accreting with relatively small radiative efficiencies (ϵ ≲ 0.1) grow optimally in these circumstances. Moreover, we show that the standard M_•–σ relation observed at z ∼ 0 cannot be established in isolated halos at high-z, but requires the occurrence of mergers. Since the average limiting mass of black holes formed at z ≳ 10 is in the range 10^4–10^6 M_⊙, we expect to observe them in local galaxies as intermediate-mass black holes, when hosted in the rare halos that experienced only minor or no merging events. Such ancient black holes, formed in isolation with subsequent scant growth, could survive, almost unchanged, until present.

  3. Interpretation of the depths of maximum of extensive air showers measured by the Pierre Auger Observatory

    Energy Technology Data Exchange (ETDEWEB)

    Abreu, Pedro; et al.

    2013-02-01

    To interpret the mean depth of cosmic ray air shower maximum and its dispersion, we parametrize those two observables as functions of the first two moments of the ln A distribution. We examine the goodness of this simple method through simulations of test mass distributions. The application of the parameterization to Pierre Auger Observatory data allows one to study the energy dependence of the mean ln A and of its variance under the assumption of selected hadronic interaction models. We discuss possible implications of these dependences in terms of interaction models and astrophysical cosmic ray sources.

  4. Towards a frequency-dependent discrete maximum principle for the implicit Monte Carlo equations

    Energy Technology Data Exchange (ETDEWEB)

    Wollaber, Allan B [Los Alamos National Laboratory; Larsen, Edward W [Los Alamos National Laboratory; Densmore, Jeffery D [Los Alamos National Laboratory

    2010-12-15

    It has long been known that temperature solutions of the Implicit Monte Carlo (IMC) equations can exceed the external boundary temperatures, a so-called violation of the 'maximum principle.' Previous attempts at prescribing a maximum value of the time-step size Δt that is sufficient to eliminate these violations have recommended a Δt that is typically too small to be used in practice and that appeared to be much too conservative when compared to numerical solutions of the IMC equations for practical problems. In this paper, we derive a new estimator for the maximum time-step size that includes the spatial-grid size Δx. This explicitly demonstrates that the effect of coarsening Δx is to reduce the limitation on Δt, which helps explain the overly conservative nature of the earlier, grid-independent results. We demonstrate that our new time-step restriction is a much more accurate means of predicting violations of the maximum principle. We discuss how the implications of the new, grid-dependent time-step restriction can impact IMC solution algorithms.

  5. Towards a frequency-dependent discrete maximum principle for the implicit Monte Carlo equations

    International Nuclear Information System (INIS)

    Wollaber, Allan B.; Larsen, Edward W.; Densmore, Jeffery D.

    2011-01-01

    It has long been known that temperature solutions of the Implicit Monte Carlo (IMC) equations can exceed the external boundary temperatures, a so-called violation of the 'maximum principle'. Previous attempts at prescribing a maximum value of the time-step size Δt that is sufficient to eliminate these violations have recommended a Δt that is typically too small to be used in practice and that appeared to be much too conservative when compared to numerical solutions of the IMC equations for practical problems. In this paper, we derive a new estimator for the maximum time-step size that includes the spatial-grid size Δx. This explicitly demonstrates that the effect of coarsening Δx is to reduce the limitation on Δt, which helps explain the overly conservative nature of the earlier, grid-independent results. We demonstrate that our new time-step restriction is a much more accurate means of predicting violations of the maximum principle. We discuss how the implications of the new, grid-dependent time-step restriction can impact IMC solution algorithms. (author)

  6. Concentration and Health Implication of Heavy Metals in Drinking ...

    African Journals Online (AJOL)

    Concentration and Health Implication of Heavy Metals in Drinking Water from Urban ... water is not mentioned by WHO, but all the samples analyzed were found to ... Key words: Drinking water quality, Heavy metals, Maximum admissible limit, ...

  7. Efficiency gains, bounds, and risk in finance

    NARCIS (Netherlands)

    Sarisoy, Cisil

    2015-01-01

    This thesis consists of three chapters. The first chapter analyzes efficiency gains in the estimation of expected returns based on asset pricing models and examines the economic implications of such gains in portfolio allocation exercises. The second chapter provides nonparametric efficiency bounds

  8. Maximum Power Point Tracking of Photovoltaic System for Traffic Light Application

    OpenAIRE

    Muhida, Riza; Mohamad, Nor Hilmi; Legowo, Ari; Irawan, Rudi; Astuti, Winda

    2013-01-01

    Photovoltaic traffic light system is a significant application of renewable energy source. The development of the system is an alternative effort of local authority to reduce expenditure for paying fees to power supplier which the power comes from conventional energy source. Since photovoltaic (PV) modules still have relatively low conversion efficiency, an alternative control of maximum power point tracking (MPPT) method is applied to the traffic light system. MPPT is intended to catch up th...

  9. Reconstruction of the electron momentum density distribution by the maximum entropy method

    International Nuclear Information System (INIS)

    Dobrzynski, L.

    1996-01-01

    The application of the Maximum Entropy Algorithm to the analysis of the Compton profiles is discussed. It is shown that the reconstruction of electron momentum density may be reliably carried out. However, there are a number of technical problems which have to be overcome in order to produce trustworthy results. In particular one needs the experimental Compton profiles measured for many directions, and to have efficient computational resources. The use of various cross-checks is recommended. (orig.)

  10. Optimizing the top profile of a nanowire for maximum forward emission

    Institute of Scientific and Technical Information of China (English)

    Wang Dong-Lin; Yu Zhong-Yuan; Liu Yu-Min; Guo Xiao-Tao; Cao Gui; Feng Hao

    2011-01-01

    The optimal top structure of a nanowire quantum emitter single photon source is significant in improving performance.Based on the axial symmetry of a cylindrical nanowire,this paper optimizes the top profile of a nanowire for the maximum forward emission by combining the geometry projection method and the finite element method.The results indicate that the nanowire with a cambered top has the stronger emission in the forward direction,which is helpful to improve the photon collection efficiency.

  11. A novel high efficiency solar photovoltalic pump

    NARCIS (Netherlands)

    Diepens, J.F.L.; Smulders, P.T.; Vries, de D.A.

    1993-01-01

    The daily average overall efficiency of a solar pump system is not only influenced by the maximum efficiency of the components of the system, but just as much by the correct matching of the components to the local irradiation pattern. Normally centrifugal pumps are used in solar pump systems. The

  12. Intelligent Maximum Power Point Tracking Using Fuzzy Logic for Solar Photovoltaic Systems Under Non-Uniform Irradiation Conditions

    OpenAIRE

    P. Selvam; S. Senthil Kumar

    2016-01-01

    Maximum Power Point Tracking (MPPT) has played a vital role to enhance the efficiency of solar photovoltaic (PV) power generation under varying atmospheric temperature and solar irradiation. However, it is hard to track the maximum power point using conventional linear controllers due to the natural inheritance of nonlinear I-V and P-V characteristics of solar PV systems. Fuzzy Logic Controller (FLC) is suitable for nonlinear system control applications and eliminating oscillations, circuit c...

  13. Maximum Power Point Tracking of Photovoltaic System for Traffic Light Application

    Directory of Open Access Journals (Sweden)

    Riza Muhida

    2013-07-01

    Full Text Available A photovoltaic traffic light system is a significant application of a renewable energy source. The development of the system is an alternative effort by local authorities to reduce the expenditure of paying fees to a power supplier whose power comes from conventional energy sources. Since photovoltaic (PV) modules still have relatively low conversion efficiency, an alternative control with a maximum power point tracking (MPPT) method is applied to the traffic light system. MPPT is intended to capture the maximum power during the daytime in order to charge the battery at the maximum rate, so that the power from the battery can be used at night or on cloudy days. The MPPT is actually a DC-DC converter that can step the voltage up or down in order to achieve the maximum power using Pulse Width Modulation (PWM) control. From the experiment, we obtained an operating voltage of 16.454 V using MPPT; this value has an error of 2.6% compared with the maximum power point voltage of the PV module, which is 16.9 V. Based on this result it can be said that this MPPT control works successfully to deliver the power from the PV module to the battery maximally.
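
    As a quick check of the figures quoted above, the 2.6% error can be reproduced from the two voltages with a few lines of Python; the variable names below are illustrative only.

```python
# Relative error between the MPPT operating voltage and the PV module's
# maximum-power-point voltage, as quoted in the abstract above.
v_mppt = 16.454   # operating voltage found by the MPPT controller [V]
v_mpp = 16.9      # maximum power point voltage of the PV module [V]

relative_error = abs(v_mpp - v_mppt) / v_mpp * 100.0
print(f"relative error = {relative_error:.1f} %")   # ~2.6 %
```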

  14. MODEL PREDICTIVE CONTROL FOR PHOTOVOLTAIC STATION MAXIMUM POWER POINT TRACKING SYSTEM

    Directory of Open Access Journals (Sweden)

    I. Elzein

    2015-01-01

    Full Text Available The purpose of this paper is to present an alternative maximum power point tracking, MPPT, algorithm for a photovoltaic module, PVM, to produce the maximum power, Pmax, using the optimal duty ratio, D, for different types of converters and load matching. We present a state-based approach to the design of the maximum power point tracker for a stand-alone photovoltaic power generation system. The system under consideration consists of a solar array with nonlinear time-varying characteristics and a step-up converter with an appropriate filter. The proposed algorithm has the advantages of maximizing the efficiency of the power utilization, can be integrated with other MPPT algorithms without affecting the PVM performance, is excellent for real-time applications and is a robust analytical method, different from the traditional MPPT algorithms which are more based on trial and error, or comparisons between present and past states. The procedure to calculate the optimal duty ratio for buck, boost and buck-boost converters, to transfer the maximum power from a PVM to a load, is presented in the paper. Additionally, the existence and uniqueness of the optimal internal impedance, to transfer the maximum power from a photovoltaic module using load matching, is proved.
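
    The abstract does not reproduce the load-matching algebra, but for ideal converters in continuous conduction mode the textbook input-resistance relations give the optimal duty ratio directly. The sketch below applies those standard relations (not necessarily the paper's exact procedure) to match an assumed PV maximum-power-point resistance to a load resistance.

```python
import math

def optimal_duty_ratio(r_mpp, r_load, topology):
    """Duty ratio at which an ideal CCM converter presents r_mpp to the PV module.

    Textbook input-resistance relations for ideal converters:
      buck:       R_in = R_load / D^2            (needs r_load <= r_mpp)
      boost:      R_in = R_load * (1 - D)^2      (needs r_mpp <= r_load)
      buck-boost: R_in = R_load * ((1 - D)/D)^2  (any ratio)
    """
    if topology == "buck":
        return math.sqrt(r_load / r_mpp)
    if topology == "boost":
        return 1.0 - math.sqrt(r_mpp / r_load)
    if topology == "buck-boost":
        return 1.0 / (1.0 + math.sqrt(r_mpp / r_load))
    raise ValueError("unknown topology")

# Hypothetical example: a module whose MPP looks like 8 ohm feeding a 20 ohm load.
for topo in ("boost", "buck-boost"):
    print(topo, round(optimal_duty_ratio(8.0, 20.0, topo), 3))
```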

  15. A novel algorithm for single-axis maximum power generation sun trackers

    International Nuclear Information System (INIS)

    Lee, Kung-Yen; Chung, Chi-Yao; Huang, Bin-Juine; Kuo, Ting-Jung; Yang, Huang-Wei; Cheng, Hung-Yen; Hsu, Po-Chien; Li, Kang

    2017-01-01

    Highlights: • A novel algorithm for a single-axis sun tracker is developed to increase the efficiency. • Photovoltaic module is rotated to find the optimal angle for generating the maximum power. • Electric energy increases up to 8.3%, compared with that of the tracker with three fixed angles. • The rotation range is optimized to reduce energy consumption from the rotation operations. - Abstract: The purpose of this study is to develop a novel algorithm for a single-axis maximum power generation sun tracker in order to identify the optimal stopping angle for generating the maximum amount of daily electric energy. First, the photovoltaic modules of the single-axis maximum power generation sun tracker are automatically rotated from 50° east to 50° west. During the rotation, the instantaneous power generated at different angles is recorded and compared, meaning that the optimal angle for generating the maximum power can be determined. Once the rotation (detection) is completed, the photovoltaic modules are then rotated to the resulting angle for generating the maximum power. The photovoltaic module is rotated once per hour in an attempt to detect the maximum irradiation and overcome the impact of environmental effects such as shading from cloud cover, other photovoltaic modules and surrounding buildings. Furthermore, the detection range is halved so as to reduce the energy consumption from the rotation operations and to improve the reliability of the sun tracker. The results indicate that electric energy production is increased by 3.4% in spring and autumn, 5.4% in summer, and 8.3% in winter, compared with that of the same sun tracker with three fixed angles of 50° east in the morning, 0° at noon and 50° west in the afternoon.
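
    The scan-and-hold logic described above (sweep the panel through its angular range, record the instantaneous power, then park at the best angle) can be sketched as follows; the power-measurement callback and the angular step are placeholders, not details taken from the paper.

```python
def find_best_angle(measure_power, start_deg=-50, stop_deg=50, step_deg=5):
    """Sweep the panel from start to stop, return the angle giving maximum power.

    measure_power(angle) is assumed to rotate the panel and return the
    instantaneous electrical power in watts.
    """
    best_angle, best_power = start_deg, float("-inf")
    angle = start_deg
    while angle <= stop_deg:
        power = measure_power(angle)
        if power > best_power:
            best_angle, best_power = angle, power
        angle += step_deg
    return best_angle, best_power

# After each hourly sweep the tracker would park the modules at best_angle;
# halving (start_deg, stop_deg) around the previous optimum mimics the reduced
# detection range mentioned in the abstract.
```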

  16. 49 CFR 195.406 - Maximum operating pressure.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 3 2010-10-01 2010-10-01 false Maximum operating pressure. 195.406 Section 195.406 Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND HAZARDOUS... HAZARDOUS LIQUIDS BY PIPELINE Operation and Maintenance § 195.406 Maximum operating pressure. (a) Except for...

  17. 78 FR 49370 - Inflation Adjustment of Maximum Forfeiture Penalties

    Science.gov (United States)

    2013-08-14

    ... ``civil monetary penalties provided by law'' at least once every four years. DATES: Effective September 13... increases the maximum civil monetary forfeiture penalties available to the Commission under its rules... maximum civil penalties established in that section to account for inflation since the last adjustment to...

  18. 22 CFR 201.67 - Maximum freight charges.

    Science.gov (United States)

    2010-04-01

    ..., commodity rate classification, quantity, vessel flag category (U.S.-or foreign-flag), choice of ports, and... the United States. (2) Maximum charter rates. (i) USAID will not finance ocean freight under any... owner(s). (4) Maximum liner rates. USAID will not finance ocean freight for a cargo liner shipment at a...

  19. Maximum penetration level of distributed generation without violating voltage limits

    NARCIS (Netherlands)

    Morren, J.; Haan, de S.W.H.

    2009-01-01

    Connection of Distributed Generation (DG) units to a distribution network will result in a local voltage increase. As there will be a maximum on the allowable voltage increase, this will limit the maximum allowable penetration level of DG. By reactive power compensation (by the DG unit itself) a

  20. Maximum-entropy clustering algorithm and its global convergence analysis

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Constructing a batch of differentiable entropy functions to uniformly approximate an objective function by means of the maximum-entropy principle, a new clustering algorithm, called the maximum-entropy clustering algorithm, is proposed based on optimization theory. This algorithm is a soft generalization of the hard C-means algorithm and possesses global convergence. Its relations with other clustering algorithms are discussed.
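
    The abstract gives no algorithmic detail, but a common realization of maximum-entropy clustering uses softmax (Gibbs) memberships whose temperature controls the smoothness and which harden to C-means as the temperature goes to zero. The NumPy sketch below illustrates that idea and is not taken from the cited paper.

```python
import numpy as np

def max_entropy_clustering(x, k, temperature=0.5, n_iter=100, seed=0):
    """Soft clustering with entropy-regularized (softmax) memberships.

    x: (n, d) data matrix; k: number of clusters.
    As temperature -> 0 the memberships harden and the update
    approaches the classical hard C-means algorithm.
    """
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(n_iter):
        d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)  # (n, k)
        logits = -d2 / temperature
        logits -= logits.max(axis=1, keepdims=True)        # numerical stability
        u = np.exp(logits)
        u /= u.sum(axis=1, keepdims=True)                   # soft memberships
        centers = (u.T @ x) / u.sum(axis=0)[:, None]        # weighted means
    return centers, u

pts = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5.0])
centers, memberships = max_entropy_clustering(pts, k=2)
print(centers)
```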

  1. Application of maximum entropy to neutron tunneling spectroscopy

    International Nuclear Information System (INIS)

    Mukhopadhyay, R.; Silver, R.N.

    1990-01-01

    We demonstrate the maximum entropy method for the deconvolution of high resolution tunneling data acquired with a quasielastic spectrometer. Given a precise characterization of the instrument resolution function, a maximum entropy analysis of lutidine data obtained with the IRIS spectrometer at ISIS results in an effective factor of three improvement in resolution. 7 refs., 4 figs

  2. The regulation of starch accumulation in Panicum maximum Jacq ...

    African Journals Online (AJOL)

    ... decrease the starch level. These observations are discussed in relation to the photosynthetic characteristics of P. maximum. Keywords: accumulation; botany; carbon assimilation; co2 fixation; growth conditions; mesophyll; metabolites; nitrogen; nitrogen levels; nitrogen supply; panicum maximum; plant physiology; starch; ...

  3. 32 CFR 842.35 - Depreciation and maximum allowances.

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Depreciation and maximum allowances. 842.35... LITIGATION ADMINISTRATIVE CLAIMS Personnel Claims (31 U.S.C. 3701, 3721) § 842.35 Depreciation and maximum allowances. The military services have jointly established the “Allowance List-Depreciation Guide” to...

  4. The maximum significant wave height in the Southern North Sea

    NARCIS (Netherlands)

    Bouws, E.; Tolman, H.L.; Holthuijsen, L.H.; Eldeberky, Y.; Booij, N.; Ferier, P.

    1995-01-01

    The maximum possible wave conditions along the Dutch coast, which seem to be dominated by the limited water depth, have been estimated in the present study with numerical simulations. Discussions with meteorologists suggest that the maximum possible sustained wind speed in North Sea conditions is

  5. PTree: pattern-based, stochastic search for maximum parsimony phylogenies

    OpenAIRE

    Gregor, Ivan; Steinbr?ck, Lars; McHardy, Alice C.

    2013-01-01

    Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we ...

  6. 5 CFR 838.711 - Maximum former spouse survivor annuity.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Maximum former spouse survivor annuity... Orders Awarding Former Spouse Survivor Annuities Limitations on Survivor Annuities § 838.711 Maximum former spouse survivor annuity. (a) Under CSRS, payments under a court order may not exceed the amount...

  7. Maximum physical capacity testing in cancer patients undergoing chemotherapy

    DEFF Research Database (Denmark)

    Knutsen, L.; Quist, M; Midtgaard, J

    2006-01-01

    BACKGROUND: Over the past few years there has been a growing interest in the field of physical exercise in rehabilitation of cancer patients, leading to requirements for objective maximum physical capacity measurement (maximum oxygen uptake (VO(2max)) and one-repetition maximum (1RM)) to determin...... early in the treatment process. However, the patients were self-referred and thus highly motivated and as such are not necessarily representative of the whole population of cancer patients treated with chemotherapy....... in performing maximum physical capacity tests as these motivated them through self-perceived competitiveness and set a standard that served to encourage peak performance. CONCLUSION: The positive attitudes in this sample towards maximum physical capacity open the possibility of introducing physical testing...

  8. Maximum Principles for Discrete and Semidiscrete Reaction-Diffusion Equation

    Directory of Open Access Journals (Sweden)

    Petr Stehlík

    2015-01-01

    Full Text Available We study reaction-diffusion equations with a general reaction function f on one-dimensional lattices with continuous or discrete time, u_x′ (or Δ_t u_x) = k(u_{x−1} − 2u_x + u_{x+1}) + f(u_x), x ∈ Z. We prove weak and strong maximum and minimum principles for corresponding initial-boundary value problems. Whereas the maximum principles in the semidiscrete case (continuous time) exhibit similar features to those of the fully continuous reaction-diffusion model, in the discrete case the weak maximum principle holds for a smaller class of functions and the strong maximum principle is valid in a weaker sense. We describe in detail how the validity of maximum principles depends on the nonlinearity and the time step. We illustrate our results on the Nagumo equation with the bistable nonlinearity.
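
    As a concrete illustration of the discrete setting above, the sketch below integrates the lattice equation u_x′ = k(u_{x−1} − 2u_x + u_{x+1}) + f(u_x) with the bistable Nagumo nonlinearity f(u) = u(1 − u)(u − a) by explicit Euler steps and checks that the solution stays within the initial bounds; the step sizes are illustrative and chosen small enough for the weak maximum principle to hold.

```python
import numpy as np

def nagumo_step(u, k=1.0, a=0.3, dt=0.01):
    """One explicit Euler step of the discrete Nagumo lattice equation
    u_x' = k*(u_{x-1} - 2*u_x + u_{x+1}) + u_x*(1 - u_x)*(u_x - a),
    with the end nodes evolving by reaction only (illustrative boundary choice)."""
    lap = np.zeros_like(u)
    lap[1:-1] = u[:-2] - 2.0 * u[1:-1] + u[2:]
    f = u * (1.0 - u) * (u - a)
    return u + dt * (k * lap + f)

u = np.random.default_rng(0).random(200)   # initial data in [0, 1]
for _ in range(5000):
    u = nagumo_step(u)
print(u.min(), u.max())   # with this small dt the solution stays inside [0, 1]
```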

  9. Energy efficiency

    International Nuclear Information System (INIS)

    Marvillet, Ch.; Tochon, P.; Mercier, P.

    2004-01-01

    World energy demand is constantly rising. This is a legitimate trend, insofar as access to energy enables enhanced quality of life and sanitation levels for populations. On the other hand, such increased consumption generates effects that may be catastrophic for the future of the planet (climate change, environmental imbalance), should this growth conform to the patterns followed, up to recent times, by most industrialized countries. Reduction of greenhouse gas emissions, development of new energy sources and energy efficiency are seen as the major challenges to be taken up for the world of tomorrow. In France, the National Energy Debate indeed emphasized, in 2003, the requirement to control both demand for, and offer of, energy, through a strategic orientation law for energy. The French position corresponds to a slightly singular situation - and a privileged one, compared to other countries - owing to massive use of nuclear power for electricity generation. This option allows France to be responsible for a mere 2% of worldwide greenhouse gas emissions. Real advances can nonetheless still be achieved as regards improved energy efficiency, particularly in the transportation and residential-tertiary sectors, following the lead, in this respect, shown by industry. These two sectors indeed account for over half of the country CO 2 emissions (26% and 25% respectively). With respect to transportation, the work carried out by CEA on the hydrogen pathway, energy converters, and electricity storage has been covered by the preceding chapters. As regards housing, a topic addressed by one of the papers in this chapter, investigations at CEA concern integration of the various devices enabling value-added use of renewable energies. At the same time, the organization is carrying through its activity in the extensive area of heat exchangers, allowing industry to benefit from improved understanding in the modeling of flows. An activity evidenced by advances in energy efficiency for

  10. Efficient Inorganic Perovskite Light-Emitting Diodes with Polyethylene Glycol Passivated Ultrathin CsPbBr3 Films.

    Science.gov (United States)

    Song, Li; Guo, Xiaoyang; Hu, Yongsheng; Lv, Ying; Lin, Jie; Liu, Zheqin; Fan, Yi; Liu, Xingyuan

    2017-09-07

    Efficient inorganic perovskite light-emitting diodes (PeLEDs) with an ultrathin perovskite emission layer (∼30 nm) were realized by doping Lewis base polyethylene glycol (PEG) into CsPbBr3 films. PEG in the perovskite films not only physically fills the crystal boundaries but also interacts with the perovskite crystals to passivate the crystal grains, reduce nonradiative recombination, and ensure efficient luminance and high efficiency. As a result, promoted brightness, current efficiency (CE), and external quantum efficiency (EQE) were achieved. The nonradiative decay rate of the PEG:CsPbBr3 composite film is 1 order of magnitude less than that of the neat CsPbBr3 film. After further optimization of the molar ratio between CsBr and PbBr2, a peak CE of 19 cd/A, a maximum EQE of 5.34%, and a maximum brightness of 36600 cd/m2 were achieved, demonstrating the interaction between PEG and the precursors. The results are expected to offer some helpful implications in optimizing the polymer-assisted PeLEDs with ultrathin emission layers, which might have potential application in see-through displays.

  11. Offsetting efficiency

    International Nuclear Information System (INIS)

    Katz, M.

    1995-01-01

    Whichever way the local distribution company (LDC) tries to convert residential customers to gas or expand their use of it, the process itself has become essential for the natural gas industry. The amount of gas used by each residential customer has been decreasing for 25 years -- since the energy crisis of the early 1970s. It's a direct result of better-insulated homes and more-efficient gas appliances, and that trend is continuing. So, LDCs have a choice of either finding new users and uses for gas, or recognizing that their throughput per customer is going to continue declining. The paper discusses strategies that several gas utilities are using to increase the number of gas appliances in customers' homes. These and other strategies keep the gas industry optimistic about the future of the residential market: A.G.A. has projected that by 2010 demand will expand, from 1994's 5.1 quadrillion Btu (quads) to 5.7 quads, even with continued improvements in appliance efficiency. That estimate, however, will depend on the industry's utilities and whether they keep converting, proselytizing, persuading and influencing customers to use more natural gas

  12. 78 FR 9845 - Minimum and Ordinary Maximum and Aggravated Maximum Civil Monetary Penalties for a Violation of...

    Science.gov (United States)

    2013-02-12

    ... maximum penalty amount of $75,000 for each violation, except that if the violation results in death... the maximum civil penalty for a violation is $175,000 if the violation results in death, serious... Penalties for a Violation of the Hazardous Materials Transportation Laws or Regulations, Orders, Special...

  13. The power and robustness of maximum LOD score statistics.

    Science.gov (United States)

    Yoo, Y J; Mendell, N R

    2008-07-01

    The maximum LOD score statistic is extremely powerful for gene mapping when calculated using the correct genetic parameter value. When the mode of genetic transmission is unknown, the maximum of the LOD scores obtained using several genetic parameter values is reported. This latter statistic requires a higher critical value than the maximum LOD score statistic calculated from a single genetic parameter value. In this paper, we compare the power of maximum LOD scores based on three fixed sets of genetic parameter values with the power of the LOD score obtained after maximizing over the entire range of genetic parameter values. We simulate family data under nine generating models. For generating models with non-zero phenocopy rates, LOD scores maximized over the entire range of genetic parameters yielded greater power than maximum LOD scores for fixed sets of parameter values with zero phenocopy rates. No maximum LOD score was consistently more powerful than the others for generating models with a zero phenocopy rate. The power loss of the LOD score maximized over the entire range of genetic parameters, relative to the maximum LOD score calculated using the correct genetic parameter value, appeared to be robust to the generating models.
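
    As background, a LOD score in a simple, fully informative linkage setting compares the likelihood at a candidate recombination fraction θ with the likelihood at θ = 0.5, and the maximum LOD score is the largest value over a set of θ values. The sketch below (with made-up recombinant counts) shows only that computation, not the simulation design of the paper.

```python
import numpy as np

def lod(theta, recombinants, nonrecombinants):
    """LOD score for observed recombinant / non-recombinant counts."""
    l_theta = recombinants * np.log10(theta) + nonrecombinants * np.log10(1.0 - theta)
    l_null = (recombinants + nonrecombinants) * np.log10(0.5)
    return l_theta - l_null

thetas = np.linspace(0.01, 0.5, 50)
scores = lod(thetas, recombinants=3, nonrecombinants=17)   # hypothetical counts
print(f"maximum LOD = {scores.max():.2f} at theta = {thetas[scores.argmax()]:.2f}")
```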

  14. Current control of PMSM based on maximum torque control reference frame

    Science.gov (United States)

    Ohnuma, Takumi

    2017-07-01

    This study presents a new method of current control for PMSMs (Permanent Magnet Synchronous Motors) based on a maximum torque control reference frame, which is suitable for high-performance control of PMSMs. As environmental and energy issues become increasingly serious, PMSMs, a class of AC motors, are becoming popular because of their high efficiency and high torque density in various applications, such as electric vehicles, trains, industrial machines, and home appliances. To use PMSMs efficiently, proper current control is necessary. In general, a rotational coordinate system synchronized with the rotor is used for the current control of PMSMs. In the rotating reference frame, the current control is easier because the currents in that frame can be expressed as direct currents in the controller. On the other hand, the torque characteristics of PMSMs are non-linear and complex, even though the PMSMs themselves are efficient and have high torque density. Therefore, a complicated control system is required to capture the relation between the torque and the current, even when the rotating reference frame is adopted. The maximum torque control reference frame provides a simpler way to control the currents efficiently while taking the torque characteristics of the PMSMs into consideration.
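
    The paper's maximum torque control reference frame is not spelled out in the abstract. As related background, the standard maximum-torque-per-ampere (MTPA) condition for an interior PMSM with L_d < L_q fixes the d-axis current that maximizes torque for a given q-axis current; the sketch below implements that textbook relation with made-up motor parameters and is not the method of the paper.

```python
import math

def mtpa_id(iq, psi_m, l_d, l_q):
    """d-axis current reference satisfying the standard MTPA condition
    for an interior PMSM with L_d < L_q (reluctance torque exploited):
        i_d = psi_m / (2*(L_q - L_d)) - sqrt((psi_m / (2*(L_q - L_d)))**2 + i_q**2)
    """
    a = psi_m / (2.0 * (l_q - l_d))
    return a - math.sqrt(a * a + iq * iq)

# Illustrative (made-up) parameters: 0.1 Wb magnet flux, L_d = 1 mH, L_q = 2 mH.
for iq in (5.0, 10.0, 20.0):
    print(iq, round(mtpa_id(iq, 0.1, 1e-3, 2e-3), 2))
```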

  15. Parameters determining maximum wind velocity in a tropical cyclone

    International Nuclear Information System (INIS)

    Choudhury, A.M.

    1984-09-01

    The spiral structure of a tropical cyclone was earlier explained by a tangential velocity distribution which varies inversely as the distance from the cyclone centre outside the circle of maximum wind speed. The case has been extended in the present paper by adding a radial velocity. It has been found that a suitable combination of radial and tangential velocities can account for the spiral structure of a cyclone. This enables parametrization of the cyclone. Finally a formula has been derived relating maximum velocity in a tropical cyclone with angular momentum, radius of maximum wind speed and the spiral angle. The shapes of the spirals have been computed for various spiral angles. (author)

  16. Cocktail mismatch effects in 4πβ liquid scintillation spectrometry: implications based on the systematics of 3H detection efficiency and quench indicating parameter variations with total cocktail mass (volume) and H2O fraction

    International Nuclear Information System (INIS)

    Colle, R.

    1997-01-01

    Detection efficiency changes for 3H by 4πβ liquid scintillation (LS) spectrometry cannot be adequately monitored by quench indicating parameters when the quench changes are the result of multiple causal factors (e.g. simultaneously varying cocktail sizes and composition). In consequence, some kinds of cocktail mismatches (between LS counting sources) introduce errors that result from efficiency changes that cannot be fully accounted for by quench monitoring compensations. These cocktail mismatch effects are examined for comparative 3H measurements and for 3H-standard efficiency tracing methods for the assay of other β-emitting radionuclides. Inherent errors can occur in both types of radionuclide assays, as demonstrated with realistic case examples, unless cocktails are very closely matched. The magnitude of the cocktail mismatch effect (and attendant errors) can range from being virtually negligible (particularly for high-energy β-emitting nuclides and for slight single-variable cocktail composition mismatches) to being very significant for high-precision metrology and standardizations (particularly with easily quenched, low-energy β emitters and for mismatches due to both varying cocktail constituents and concentrations). The findings presented here support the need to understand fully the quenching systematics of a given LS system (combination of cocktails and spectrometer) and the need for very careful control of cocktail preparations. (author)

  17. Efficient, Differentially Private Point Estimators

    OpenAIRE

    Smith, Adam

    2008-01-01

    Differential privacy is a recent notion of privacy for statistical databases that provides rigorous, meaningful confidentiality guarantees, even in the presence of an attacker with access to arbitrary side information. We show that for a large class of parametric probability models, one can construct a differentially private estimator whose distribution converges to that of the maximum likelihood estimator. In particular, it is efficient and asymptotically unbiased. This result provides (furt...
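
    The construction in the paper is more elaborate (its estimator converges in distribution to the maximum likelihood estimator), but the simplest differentially private point estimator, the Laplace mechanism applied to a bounded sample mean, conveys the basic idea. The sketch below assumes data clipped to [0, 1] and is not the estimator from the paper.

```python
import numpy as np

def dp_mean(x, epsilon, lower=0.0, upper=1.0, seed=None):
    """epsilon-differentially private mean of values clipped to [lower, upper].

    The global sensitivity of the clipped mean is (upper - lower) / n,
    so Laplace noise with scale sensitivity / epsilon suffices.
    """
    rng = np.random.default_rng(seed)
    x = np.clip(np.asarray(x, dtype=float), lower, upper)
    sensitivity = (upper - lower) / len(x)
    return x.mean() + rng.laplace(scale=sensitivity / epsilon)

data = np.random.default_rng(0).uniform(size=1000)
print(dp_mean(data, epsilon=0.5))   # close to 0.5 for this synthetic data
```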

  18. Maximum standard metabolic rate corresponds with the salinity of maximum growth in hatchlings of the estuarine northern diamondback terrapin (Malaclemys terrapin terrapin): Implications for habitat conservation

    Science.gov (United States)

    Rowe, Christopher L.

    2018-01-01

    I evaluated standard metabolic rates (SMR) of hatchling northern diamondback terrapins (Malaclemys terrapin terrapin) across a range of salinities (salinity = 1.5, 4, 8, 12, and 16 psu) that they may encounter in brackish habitats such as those in the Maryland portion of the Chesapeake Bay, U.S.A. Consumption of O2 and production of CO2 by resting, unfed animals served as estimates of SMR. A peak in SMR occurred at 8 psu which corresponds closely with the salinity at which hatchling growth was previously shown to be maximized (salinity ∼ 9 psu). It appears that SMR is influenced by growth, perhaps reflecting investments in catabolic pathways that fuel anabolism. This ecophysiological information can inform environmental conservation and management activities by identifying portions of the estuary that are bioenergetically optimal for growth of hatchling terrapins. I suggest that conservation and restoration efforts to protect terrapin populations in oligo-to mesohaline habitats should prioritize protection or creation of habitats in regions where average salinity is near 8 psu and energetic investments in growth appear to be maximized.

  19. CO2 maximum in the oxygen minimum zone (OMZ

    Directory of Open Access Journals (Sweden)

    V. Garçon

    2011-02-01

    Full Text Available Oxygen minimum zones (OMZs), known as suboxic layers which are mainly localized in the Eastern Boundary Upwelling Systems, have been expanding since the 20th "high CO2" century, probably due to global warming. OMZs are also known to significantly contribute to the oceanic production of N2O, a greenhouse gas (GHG) more efficient than CO2. However, the contribution of the OMZs on the oceanic sources and sinks budget of CO2, the main GHG, still remains to be established. We present here the dissolved inorganic carbon (DIC) structure, associated locally with the Chilean OMZ and globally with the main most intense OMZs (O2−1 in the open ocean. To achieve this, we examine simultaneous DIC and O2 data collected off Chile during 4 cruises (2000–2002) and a monthly monitoring (2000–2001) in one of the shallowest OMZs, along with international DIC and O2 databases and climatology for other OMZs. High DIC concentrations (>2225 μmol kg−1, up to 2350 μmol kg−1) have been reported over the whole OMZ thickness, allowing the definition for all studied OMZs a Carbon Maximum Zone (CMZ). Locally off Chile, the shallow cores of the OMZ and CMZ are spatially and temporally collocated at 21° S, 30° S and 36° S despite different cross-shore, long-shore and seasonal configurations. Globally, the mean state of the main OMZs also corresponds to the largest carbon reserves of the ocean in subsurface waters. The CMZs-OMZs could then induce a positive feedback for the atmosphere during upwelling activity, as potential direct local sources of CO2. The CMZ paradoxically presents a slight "carbon deficit" in its core (~10%), meaning a DIC increase from the oxygenated ocean to the OMZ lower than the corresponding O2 decrease (assuming classical C/O molar ratios). This "carbon deficit" would be related to regional thermal mechanisms affecting faster O2 than DIC (due to the carbonate buffer effect) and occurring upstream in warm waters (e.g., in the Equatorial Divergence

  20. CO2 maximum in the oxygen minimum zone (OMZ)

    Science.gov (United States)

    Paulmier, A.; Ruiz-Pino, D.; Garçon, V.

    2011-02-01

    Oxygen minimum zones (OMZs), known as suboxic layers which are mainly localized in the Eastern Boundary Upwelling Systems, have been expanding since the 20th "high CO2" century, probably due to global warming. OMZs are also known to significantly contribute to the oceanic production of N2O, a greenhouse gas (GHG) more efficient than CO2. However, the contribution of the OMZs on the oceanic sources and sinks budget of CO2, the main GHG, still remains to be established. We present here the dissolved inorganic carbon (DIC) structure, associated locally with the Chilean OMZ and globally with the main most intense OMZs in the open ocean. To achieve this, we examine simultaneous DIC and O2 data collected off Chile during 4 cruises (2000-2002) and a monthly monitoring (2000-2001) in one of the shallowest OMZs, along with international DIC and O2 databases and climatology for other OMZs. High DIC concentrations (>2225 μmol kg-1, up to 2350 μmol kg-1) have been reported over the whole OMZ thickness, allowing the definition for all studied OMZs a Carbon Maximum Zone (CMZ). Locally off Chile, the shallow cores of the OMZ and CMZ are spatially and temporally collocated at 21° S, 30° S and 36° S despite different cross-shore, long-shore and seasonal configurations. Globally, the mean state of the main OMZs also corresponds to the largest carbon reserves of the ocean in subsurface waters. The CMZs-OMZs could then induce a positive feedback for the atmosphere during upwelling activity, as potential direct local sources of CO2. The CMZ paradoxically presents a slight "carbon deficit" in its core (~10%), meaning a DIC increase from the oxygenated ocean to the OMZ lower than the corresponding O2 decrease (assuming classical C/O molar ratios). This "carbon deficit" would be related to regional thermal mechanisms affecting faster O2 than DIC (due to the carbonate buffer effect) and occurring upstream in warm waters (e.g., in the Equatorial Divergence), where the CMZ-OMZ core originates. The "carbon deficit" in the CMZ core would be mainly compensated locally at the

  1. Environmental Monitoring, Water Quality - Total Maximum Daily Load (TMDL)

    Data.gov (United States)

    NSGIC Education | GIS Inventory — The Clean Water Act Section 303(d) establishes the Total Maximum Daily Load (TMDL) program. The purpose of the TMDL program is to identify sources of pollution and...

  2. Probabilistic maximum-value wind prediction for offshore environments

    DEFF Research Database (Denmark)

    Staid, Andrea; Pinson, Pierre; Guikema, Seth D.

    2015-01-01

    statistical models to predict the full distribution of the maximum-value wind speeds in a 3 h interval. We take a detailed look at the performance of linear models, generalized additive models and multivariate adaptive regression splines models using meteorological covariates such as gust speed, wind speed......, convective available potential energy, Charnock, mean sea-level pressure and temperature, as given by the European Center for Medium-Range Weather Forecasts forecasts. The models are trained to predict the mean value of maximum wind speed, and the residuals from training the models are used to develop...... the full probabilistic distribution of maximum wind speed. Knowledge of the maximum wind speed for an offshore location within a given period can inform decision-making regarding turbine operations, planned maintenance operations and power grid scheduling in order to improve safety and reliability...

  3. Combining Experiments and Simulations Using the Maximum Entropy Principle

    DEFF Research Database (Denmark)

    Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten

    2014-01-01

    are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy...... in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results....... Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges....

  4. Ethylene Production Maximum Achievable Control Technology (MACT) Compliance Manual

    Science.gov (United States)

    This July 2006 document is intended to help owners and operators of ethylene processes understand and comply with EPA's maximum achievable control technology standards promulgated on July 12, 2002, as amended on April 13, 2005 and April 20, 2006.

  5. ORIGINAL ARTICLES Surgical practice in a maximum security prison

    African Journals Online (AJOL)

    Prison Clinic, Mangaung Maximum Security Prison, Bloemfontein. F Kleinhans, BA (Cur) .... HIV positivity rate and the use of the rectum to store foreign objects. ... fruit in sunlight. Other positive health-promoting factors may also play a role,.

  6. A technique for estimating maximum harvesting effort in a stochastic ...

    Indian Academy of Sciences (India)

    Unknown

    Estimation of maximum harvesting effort has a great impact on the ... fluctuating environment has been developed in a two-species competitive system, which shows that under realistic .... The existence and local stability properties of the equi-.

  7. Water Quality Assessment and Total Maximum Daily Loads Information (ATTAINS)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Water Quality Assessment TMDL Tracking And Implementation System (ATTAINS) stores and tracks state water quality assessment decisions, Total Maximum Daily Loads...

  8. Post optimization paradigm in maximum 3-satisfiability logic programming

    Science.gov (United States)

    Mansor, Mohd. Asyraf; Sathasivam, Saratha; Kasihmuddin, Mohd Shareduwan Mohd

    2017-08-01

    Maximum 3-Satisfiability (MAX-3SAT) is a counterpart of the Boolean satisfiability problem that can be treated as a constraint optimization problem. It deals with a conundrum of searching the maximum number of satisfied clauses in a particular 3-SAT formula. This paper presents the implementation of enhanced Hopfield network in hastening the Maximum 3-Satisfiability (MAX-3SAT) logic programming. Four post optimization techniques are investigated, including the Elliot symmetric activation function, Gaussian activation function, Wavelet activation function and Hyperbolic tangent activation function. The performances of these post optimization techniques in accelerating MAX-3SAT logic programming will be discussed in terms of the ratio of maximum satisfied clauses, Hamming distance and the computation time. Dev-C++ was used as the platform for training, testing and validating our proposed techniques. The results depict the Hyperbolic tangent activation function and Elliot symmetric activation function can be used in doing MAX-3SAT logic programming.
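
    The ratio of maximum satisfied clauses used as a performance measure above is straightforward to compute once an assignment is available. The sketch below evaluates that ratio for a small hand-made 3-SAT formula and is independent of the Hopfield-network machinery of the paper.

```python
def satisfied_ratio(clauses, assignment):
    """Fraction of 3-SAT clauses satisfied by a Boolean assignment.

    clauses: list of 3-tuples of signed 1-based literals, e.g. (1, -2, 3).
    assignment: dict mapping variable index -> True/False.
    """
    satisfied = 0
    for clause in clauses:
        if any(assignment[abs(lit)] == (lit > 0) for lit in clause):
            satisfied += 1
    return satisfied / len(clauses)

formula = [(1, -2, 3), (-1, 2, 3), (1, 2, -3), (-1, -2, -3)]
print(satisfied_ratio(formula, {1: True, 2: False, 3: True}))  # 1.0 for this formula
```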

  9. Maximum likelihood estimation of finite mixture model for economic data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes. Finite mixture models are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn considerable attention from statisticians. The main reason is that maximum likelihood estimation is a powerful statistical method which provides consistent estimates as the sample size increases to infinity. Thus, maximum likelihood estimation is used to fit a finite mixture model in the present paper in order to explore the relationship between nonlinear economic data. In this paper, a two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show that there is a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
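
    Fitting a two-component normal mixture by maximum likelihood is usually done with the EM algorithm. The sketch below shows the standard E and M updates on synthetic data rather than the stock-market and rubber-price series used in the paper.

```python
import numpy as np

def em_two_gaussians(x, n_iter=200):
    """Maximum likelihood fit of a 2-component 1-D Gaussian mixture via EM."""
    x = np.asarray(x, dtype=float)
    # crude initialization from the data spread
    w = np.array([0.5, 0.5])
    mu = np.array([x.min(), x.max()])
    sigma = np.array([x.std(), x.std()])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        dens = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        resp = w * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: weighted maximum likelihood updates
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return w, mu, sigma

rng = np.random.default_rng(1)
sample = np.concatenate([rng.normal(-2, 1, 400), rng.normal(3, 0.5, 600)])
print(em_two_gaussians(sample))
```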

  10. Encoding Strategy for Maximum Noise Tolerance Bidirectional Associative Memory

    National Research Council Canada - National Science Library

    Shen, Dan

    2003-01-01

    In this paper, the Basic Bidirectional Associative Memory (BAM) is extended by choosing weights in the correlation matrix, for a given set of training pairs, which result in a maximum noise tolerance set for BAM...
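
    As background, the classical Kosko BAM stores bipolar pattern pairs in a correlation matrix W = Σ x_i y_iᵀ and recalls by alternately thresholding W·x and Wᵀ·y. The sketch below shows that basic encoding only; the maximum-noise-tolerance weight selection proposed in the paper is not reproduced.

```python
import numpy as np

def bam_train(pairs):
    """Classical BAM correlation encoding: W = sum_i x_i y_i^T (bipolar patterns)."""
    return sum(np.outer(x, y) for x, y in pairs)

def bam_recall(w, x, n_iter=10):
    """Alternate forward/backward passes with sign thresholding."""
    sign = lambda v: np.where(v >= 0, 1, -1)
    y = sign(w.T @ x)
    for _ in range(n_iter):
        x = sign(w @ y)
        y = sign(w.T @ x)
    return x, y

a = np.array([1, -1, 1, -1, 1, -1])
b = np.array([1, 1, -1, -1])
w = bam_train([(a, b)])
noisy = a.copy()
noisy[0] = -noisy[0]               # flip one bit of the input pattern
print(bam_recall(w, noisy)[1])     # recovers b for this single stored pair
```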

  11. Narrow band interference cancelation in OFDM: Astructured maximum likelihood approach

    KAUST Repository

    Sohail, Muhammad Sadiq; Al-Naffouri, Tareq Y.; Al-Ghadhban, Samir N.

    2012-01-01

    This paper presents a maximum likelihood (ML) approach to mitigate the effect of narrow band interference (NBI) in a zero padded orthogonal frequency division multiplexing (ZP-OFDM) system. The NBI is assumed to be time variant and asynchronous

  12. Maximum entropy deconvolution of low count nuclear medicine images

    International Nuclear Information System (INIS)

    McGrath, D.M.

    1998-12-01

    Maximum entropy is applied to the problem of deconvolving nuclear medicine images, with special consideration for very low count data. The physics of the formation of scintigraphic images is described, illustrating the phenomena which degrade planar estimates of the tracer distribution. Various techniques which are used to restore these images are reviewed, outlining the relative merits of each. The development and theoretical justification of maximum entropy as an image processing technique is discussed. Maximum entropy is then applied to the problem of planar deconvolution, highlighting the question of the choice of error parameters for low count data. A novel iterative version of the algorithm is suggested which allows the errors to be estimated from the predicted Poisson mean values. This method is shown to produce the exact results predicted by combining Poisson statistics and a Bayesian interpretation of the maximum entropy approach. A facility for total count preservation has also been incorporated, leading to improved quantification. In order to evaluate this iterative maximum entropy technique, two comparable methods, Wiener filtering and a novel Bayesian maximum likelihood expectation maximisation technique, were implemented. The comparison of results obtained indicated that this maximum entropy approach may produce equivalent or better measures of image quality than the compared methods, depending upon the accuracy of the system model used. The novel Bayesian maximum likelihood expectation maximisation technique was shown to be preferable over many existing maximum a posteriori methods due to its simplicity of implementation. A single parameter is required to define the Bayesian prior, which suppresses noise in the solution and may reduce the processing time substantially. Finally, maximum entropy deconvolution was applied as a pre-processing step in single photon emission computed tomography reconstruction of low count data. Higher contrast results were
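
    For Poisson data, the core of the maximum likelihood expectation maximisation step mentioned above is the Richardson-Lucy multiplicative update. The 1-D sketch below uses a made-up Gaussian blur kernel rather than a measured scintigraphic point-spread function and omits the Bayesian prior term discussed in the thesis.

```python
import numpy as np

def mlem_deconvolve(measured, psf, n_iter=50):
    """Richardson-Lucy / MLEM deconvolution for 1-D Poisson-distributed counts.

    measured: observed counts; psf: point-spread function (sums to 1).
    Multiplicative update: x <- x * (A^T (y / (A x))) / (A^T 1).
    """
    x = np.full_like(measured, measured.mean(), dtype=float)
    psf_flipped = psf[::-1]
    norm = np.convolve(np.ones_like(x), psf_flipped, mode="same")
    for _ in range(n_iter):
        blurred = np.convolve(x, psf, mode="same")
        ratio = measured / np.maximum(blurred, 1e-12)
        x *= np.convolve(ratio, psf_flipped, mode="same") / norm
    return x

true = np.zeros(64); true[20] = 100.0; true[40] = 60.0
psf = np.exp(-0.5 * (np.arange(-5, 6) / 1.5) ** 2); psf /= psf.sum()
counts = np.random.default_rng(0).poisson(np.convolve(true, psf, mode="same")).astype(float)
print(mlem_deconvolve(counts, psf).round(1)[15:45])
```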

  13. Maximum organic carbon limits at different melter feed rates (U)

    International Nuclear Information System (INIS)

    Choi, A.S.

    1995-01-01

    This report documents the results of a study to assess the impact of varying melter feed rates on the maximum total organic carbon (TOC) limits allowable in the DWPF melter feed. Topics discussed include: carbon content; feed rate; feed composition; melter vapor space temperature; combustion and dilution air; off-gas surges; earlier work on maximum TOC; overview of models; and the results of the work completed

  14. A tropospheric ozone maximum over the equatorial Southern Indian Ocean

    Directory of Open Access Journals (Sweden)

    L. Zhang

    2012-05-01

    Full Text Available We examine the distribution of tropical tropospheric ozone (O3) from the Microwave Limb Sounder (MLS) and the Tropospheric Emission Spectrometer (TES) by using a global three-dimensional model of tropospheric chemistry (GEOS-Chem). MLS and TES observations of tropospheric O3 during 2005 to 2009 reveal a distinct, persistent O3 maximum, both in mixing ratio and tropospheric column, in May over the Equatorial Southern Indian Ocean (ESIO). The maximum is most pronounced in 2006 and 2008 and less evident in the other three years. This feature is also consistent with the total column O3 observations from the Ozone Mapping Instrument (OMI) and the Atmospheric Infrared Sounder (AIRS). Model results reproduce the observed May O3 maximum and the associated interannual variability. The origin of the maximum reflects a complex interplay of chemical and dynamic factors. The O3 maximum is dominated by the O3 production driven by lightning nitrogen oxides (NOx) emissions, which accounts for 62% of the tropospheric column O3 in May 2006. We find the contributions from biomass burning, soil, anthropogenic and biogenic sources to the O3 maximum are rather small. The O3 productions in the lightning outflow from Central Africa and South America both peak in May and are directly responsible for the O3 maximum over the western ESIO. The lightning outflow from Equatorial Asia dominates over the eastern ESIO. The interannual variability of the O3 maximum is driven largely by the anomalous anti-cyclones over the southern Indian Ocean in May 2006 and 2008. The lightning outflow from Central Africa and South America is effectively entrained by the anti-cyclones followed by northward transport to the ESIO.

  15. Dinosaur Metabolism and the Allometry of Maximum Growth Rate

    OpenAIRE

    Myhrvold, Nathan P.

    2016-01-01

    The allometry of maximum somatic growth rate has been used in prior studies to classify the metabolic state of both extant vertebrates and dinosaurs. The most recent such studies are reviewed, and their data is reanalyzed. The results of allometric regressions on growth rate are shown to depend on the choice of independent variable; the typical choice used in prior studies introduces a geometric shear transformation that exaggerates the statistical power of the regressions. The maximum growth...

  16. MAXIMUM PRINCIPLE FOR SUBSONIC FLOW WITH VARIABLE ENTROPY

    Directory of Open Access Journals (Sweden)

    B. Sizykh Grigory

    2017-01-01

    Full Text Available The maximum principle for subsonic flow holds for stationary irrotational subsonic gas flows. According to this principle, if the value of the velocity is not constant everywhere, then its maximum is achieved on the boundary and only on the boundary of the considered domain. This property is used when designing the form of an aircraft with a maximum critical value of the Mach number: it is believed that if the local Mach number is less than unity in the incoming flow and on the body surface, then the Mach number is less than unity at all points of the flow. The known proof of the maximum principle for subsonic flow is based on the assumption that in the whole considered area of the flow the pressure is a function of density. For an ideal and perfect gas (the role of diffusion is negligible, and the Mendeleev-Clapeyron law is fulfilled), the pressure is a function of density if entropy is constant in the entire considered area of the flow. An example is shown of a stationary subsonic irrotational flow in which the entropy has different values on different stream lines, and the pressure is not a function of density. The application of the maximum principle for subsonic flow to such a flow would be unreasonable. This example shows the relevance of the question about the location of the points of maximum velocity when the entropy is not constant. To clarify the regularities of the location of these points, an analysis of the complete Euler equations (without any simplifying assumptions) was performed in the 3-D case. A new proof of the maximum principle for subsonic flow is proposed. This proof does not rely on the assumption that the pressure is a function of density. Thus, it is shown that the maximum principle for subsonic flow is true for stationary subsonic irrotational flows of an ideal perfect gas with variable entropy.

  17. On semidefinite programming relaxations of maximum k-section

    NARCIS (Netherlands)

    de Klerk, E.; Pasechnik, D.V.; Sotirov, R.; Dobre, C.

    2012-01-01

    We derive a new semidefinite programming bound for the maximum k -section problem. For k=2 (i.e. for maximum bisection), the new bound is at least as strong as a well-known bound by Poljak and Rendl (SIAM J Optim 5(3):467–487, 1995). For k ≥ 3the new bound dominates a bound of Karisch and Rendl

  18. Theoretical Evaluation of the Maximum Work of Free-Piston Engine Generators

    Science.gov (United States)

    Kojima, Shinji

    2017-01-01

    Utilizing the adjoint equations that originate from the calculus of variations, we have calculated the maximum thermal efficiency that is theoretically attainable by free-piston engine generators considering the work loss due to friction and Joule heat. Based on the adjoint equations with seven dimensionless parameters, the trajectory of the piston, the histories of the electric current, the work done, and the two kinds of losses have been derived in analytic forms. Using these we have conducted parametric studies for the optimized Otto and Brayton cycles. The smallness of the pressure ratio of the Brayton cycle makes the net work done negative even when the duration of heat addition is optimized to give the maximum amount of heat addition. For the Otto cycle, the net work done is positive, and both types of losses relative to the gross work done become smaller with the larger compression ratio. Another remarkable feature of the optimized Brayton cycle is that the piston trajectory of the heat addition/disposal process is expressed by the same equation as that of an adiabatic process. The maximum thermal efficiency of any combination of isochoric and isobaric heat addition/disposal processes, such as the Sabathe cycle, may be deduced by applying the methods described here.

  19. Efficient STFT

    International Nuclear Information System (INIS)

    Aamir, K.M.; Maud, M.A.

    2004-01-01

    Small perturbations in signals (or any time series), at some particular instant, affect the whole frequency spectrum due to the global function e^(jωt) in the Fourier Transform formulation. However, the Fourier spectrum does not convey the time instant at which the perturbation occurred. Consequently, the information on the particular time instant of occurrence of that perturbation is lost when the spectrum is observed. Fourier analysis thus appears to be inadequate in such situations. This inadequacy is overcome by the use of the Short Time Fourier Transform (STFT), which keeps track of time as well as frequency information. In STFT analysis, a fixed-length window, say of length N, is moved sample by sample as the data arrives. The Discrete Fourier Transform (DFT) of this fixed window of length N is calculated using the Fast Fourier Transform (FFT) algorithm. If the total number of points is M > N, the computational complexity of this scheme works out to be at least (M−N)·N·log2(N). On the other hand, the STFT is shown to be of computational complexity 6NM and 8NM in the literature. In this paper, two algorithms are presented which compute the same STFT more efficiently. The computational complexity works out to be MN for one algorithm and even less for the other. This reduction in complexity becomes significant for large data sets. The algorithms also remain valid if a stationary part of the signal is skipped. (author)
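
    The complexity reduction described above is in the spirit of the well-known sliding (recursive) DFT, which updates each frequency bin in O(1) per new sample instead of recomputing an FFT for every window. The sketch below implements that standard update and checks it against NumPy's FFT; it illustrates the idea and is not necessarily the exact pair of algorithms proposed in the paper.

```python
import numpy as np

def sliding_stft(x, n):
    """STFT of a length-n window slid one sample at a time, via the recursive
    (sliding) DFT update X_{m+1}[k] = exp(2j*pi*k/n) * (X_m[k] - x[m] + x[m+n])."""
    x = np.asarray(x, dtype=complex)
    k = np.arange(n)
    twiddle = np.exp(2j * np.pi * k / n)
    frames = [np.fft.fft(x[:n])]                 # one FFT to start the recursion
    for m in range(len(x) - n):
        frames.append(twiddle * (frames[-1] - x[m] + x[m + n]))
    return np.array(frames)

sig = np.random.default_rng(0).standard_normal(64)
frames = sliding_stft(sig, 16)
# verify the recursion against a direct FFT of the final window
print(np.allclose(frames[-1], np.fft.fft(sig[-16:])))   # True
```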

  20. Improvement of maximum power point tracking perturb and observe algorithm for a standalone solar photovoltaic system

    International Nuclear Information System (INIS)

    Awan, M.M.A.; Awan, F.G.

    2017-01-01

    Extraction of maximum power from a PV (Photovoltaic) cell is necessary to make the PV system efficient. Maximum power can be achieved by operating the system at the MPP (Maximum Power Point), i.e. taking the operating point of the PV panel to the MPP, and for this purpose MPPTs (Maximum Power Point Trackers) are used. There are many tracking algorithms/methods used by these trackers, including incremental conductance, the constant voltage method, the constant current method, the short circuit current method, the PAO (Perturb and Observe) method, and the open circuit voltage method, but PAO is the most widely used algorithm because it is simple and easy to implement. The PAO algorithm has some drawbacks: one is low tracking speed under rapidly changing weather conditions, and the second is oscillation of the PV system's operating point around the MPP. Little improvement has been achieved in past papers regarding these issues. In this paper, a new method named the 'Decrease and Fix' method is successfully introduced as an improvement of the PAO algorithm to overcome these issues of tracking speed and oscillations. The Decrease and Fix method is the first successful attempt with the PAO algorithm to achieve stability and speed up the tracking process in a photovoltaic system. A complete standalone photovoltaic system model with the improved perturb and observe algorithm is simulated in MATLAB Simulink. (author)
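
    The 'Decrease and Fix' modification itself is not detailed in the abstract, so the sketch below shows only the conventional perturb-and-observe step it builds on: perturb the operating voltage, observe the change in power, and keep perturbing in the direction that increased power. Variable names and the step size are illustrative.

```python
def perturb_and_observe(v, p, v_prev, p_prev, step=0.1):
    """One step of the conventional P&O MPPT rule.

    Returns the next voltage reference given the present and previous
    (voltage, power) operating points.
    """
    dv, dp = v - v_prev, p - p_prev
    if dp == 0:
        return v                       # at (or oscillating around) the MPP
    if (dp > 0) == (dv > 0):
        return v + step                # power rose in this direction: keep going
    return v - step                    # power fell: reverse the perturbation

# Typical use inside a control loop (measure_vi() is a hypothetical sensor read):
# v, i = measure_vi(); p = v * i
# v_ref = perturb_and_observe(v, p, v_prev, p_prev); v_prev, p_prev = v, p
```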

  1. Two-Stage Chaos Optimization Search Application in Maximum Power Point Tracking of PV Array

    Directory of Open Access Journals (Sweden)

    Lihua Wang

    2014-01-01

    Full Text Available In order to deliver the maximum available power to the load under varying solar irradiation and environment temperature, maximum power point tracking (MPPT) technologies have been used widely in PV systems. Among all the MPPT schemes, the chaos method has been one of the hot topics in recent years. In this paper, a novel two-stage chaos optimization method is presented which makes the search faster and more effective. In the proposed chaos search, an improved logistic mapping with better ergodicity is used as the first carrier process. After finding the current optimal solution with a certain guarantee, a power-function carrier is used as the secondary carrier process to reduce the search space of the optimized variables and eventually find the maximum power point. Compared with the traditional chaos search method, the proposed method can track changes quickly and accurately and also gives better optimization results. The proposed method provides a new efficient way to track the maximum power point of a PV array.
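
    The ergodicity of the logistic map on (0, 1) is what makes it usable as a search 'carrier'. The sketch below uses the plain logistic map to scatter candidate duty-cycle values over an interval and keeps the best one, followed by a narrowed second stage around it; this loosely mirrors, but does not reproduce, the two-stage scheme of the paper.

```python
def chaos_search(objective, lo, hi, n_points=200, x0=0.7):
    """Chaotic search: sample candidates via the logistic map x <- 4x(1-x),
    which is ergodic on (0, 1), and return the best point found in [lo, hi]."""
    x, best_arg, best_val = x0, None, float("-inf")
    for _ in range(n_points):
        x = 4.0 * x * (1.0 - x)
        candidate = lo + (hi - lo) * x
        value = objective(candidate)
        if value > best_val:
            best_arg, best_val = candidate, value
    return best_arg, best_val

# Toy PV-like power curve with its peak at d = 0.6 (purely illustrative).
power = lambda d: -(d - 0.6) ** 2 + 1.0
d1, _ = chaos_search(power, 0.0, 1.0)                                  # coarse stage
d2, _ = chaos_search(power, max(0.0, d1 - 0.1), min(1.0, d1 + 0.1))    # refined stage
print(round(d1, 3), round(d2, 3))
```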

  2. Maximizing Output Power of a Solar Panel via Combination of Sun Tracking and Maximum Power Point Tracking by Fuzzy Controllers

    Directory of Open Access Journals (Sweden)

    Mohsen Taherbaneh

    2010-01-01

    Full Text Available In applications with low energy-conversion efficiency, maximizing the output power improves the overall efficiency. The maximum output power of a solar panel depends on the environmental conditions and the load profile. In this paper, a method based on the simultaneous use of two fuzzy controllers is developed in order to maximize the generated output power of a solar panel in a photovoltaic system: fuzzy-based sun tracking and maximum power point tracking. The sun tracking is performed by changing the solar panel orientation in the horizontal and vertical directions by two properly designed DC motors. A DC-DC converter is employed to track the solar panel's maximum power point. In addition, the proposed system has the capability of extracting solar panel I-V curves. Experimental results show that the proposed fuzzy techniques increase the power delivered from the solar panel, allowing a reduction in the size, weight, and cost of solar panels in photovoltaic systems.

  3. Comparative Analysis of Maximum Power Point Tracking Controllers under Partial Shaded Conditions in a Photovoltaic System

    Directory of Open Access Journals (Sweden)

    R. Ramaprabha

    2015-06-01

    Full Text Available Mismatching effects due to partial shaded conditions are the major drawbacks existing in today’s photovoltaic (PV) systems. These mismatch effects are greatly reduced in a distributed PV system architecture where each panel is effectively decoupled from its neighboring panel. To obtain the optimal operation of the PV panels, maximum power point tracking (MPPT) techniques are used. In partial shaded conditions, detecting the maximum operating point is difficult as the characteristic curves are complex with multiple peaks. In this paper, a neural network control technique is employed for MPPT. Detailed analyses were carried out on MPPT controllers in centralized and distributed architectures under partial shaded environments. The efficiency of the MPPT controllers and the effectiveness of the proposed control technique under partial shaded environments were examined using MATLAB software. The results were validated through experimentation.

  4. An Improvement of a Fuzzy Logic-Controlled Maximum Power Point Tracking Algorithm for Photovoltaic Applications

    Directory of Open Access Journals (Sweden)

    Woonki Na

    2017-03-01

    Full Text Available This paper presents an improved maximum power point tracking (MPPT) algorithm using a fuzzy logic controller (FLC) in order to extract potential maximum power from photovoltaic cells. The objectives of the proposed algorithm are to improve the tracking speed, and to simultaneously solve the inherent drawbacks such as slow tracking in the conventional perturb and observe (P and O) algorithm. The performances of the conventional P and O algorithm and the proposed algorithm are compared by using MATLAB/Simulink in terms of the tracking speed and steady-state oscillations. Additionally, both algorithms were experimentally validated through a digital signal processor (DSP)-based controlled boost DC-DC converter. The experimental results show that the proposed algorithm performs with a shorter tracking time, smaller output power oscillation, and higher efficiency, compared with the conventional P and O algorithm.

  5. Maximum likelihood estimation for Cox's regression model under nested case-control sampling

    DEFF Research Database (Denmark)

    Scheike, Thomas Harder; Juul, Anders

    2004-01-01

    Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used...

  6. Performance Comparison of Widely-Used Maximum Power Point Tracker Algorithms under Real Environmental Conditions

    Directory of Open Access Journals (Sweden)

    DURUSU, A.

    2014-08-01

    Full Text Available Maximum power point trackers (MPPTs) play an essential role in extracting power from photovoltaic (PV) panels, as they keep the solar panels operating at the maximum power point (MPP) regardless of changes in environmental conditions. For this reason, they contribute significantly to PV system efficiency. MPPTs are driven by MPPT algorithms, and a number of such algorithms have been proposed in the literature, where they are typically compared using sun-simulator-based test systems under laboratory conditions for short durations. In this study, by contrast, the performances of the four most commonly used MPPT algorithms are compared under real environmental conditions over longer periods. A dual identical experimental setup is designed so that two of the considered MPPT algorithms can be compared in a synchronized manner. The resulting ranking of the algorithms is presented, and the results show that the Incremental Conductance (IC) algorithm gives the best performance.
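
    The winning Incremental Conductance algorithm exploits the fact that dP/dV = 0 at the MPP, i.e. dI/dV = -I/V. The sketch below is a generic textbook form of that update acting on a voltage reference; the step size is an assumption, and the record gives no implementation details of the compared trackers.

```python
def incremental_conductance(i, v, i_prev, v_prev, v_ref, step=0.1):
    """One IC iteration: nudge the voltage reference toward the point where
    the incremental conductance dI/dV equals -I/V (generic textbook form)."""
    dv = v - v_prev
    di = i - i_prev
    if dv == 0:
        if di > 0:
            v_ref += step               # irradiance increased: raise the reference
        elif di < 0:
            v_ref -= step
    else:
        g = di / dv                     # incremental conductance
        if g > -i / v:                  # left of the MPP (dP/dV > 0): increase voltage
            v_ref += step
        elif g < -i / v:                # right of the MPP (dP/dV < 0): decrease voltage
            v_ref -= step
    return v_ref
```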

  7. Recent Developments in Maximum Power Point Tracking Technologies for Photovoltaic Systems

    Directory of Open Access Journals (Sweden)

    Nevzat Onat

    2010-01-01

    Full Text Available In photovoltaic (PV) system applications, it is very important to design the system so that the solar cells (SCs) operate under the best conditions and at the highest efficiency. The maximum power point (MPP) varies with the angle of sunlight on the panel surface and with cell temperature, so the operating point imposed by the load does not always coincide with the MPP of the PV system. To supply reliable energy to the load, PV systems are therefore often designed with more than the required number of modules. A better solution to this problem is to use a switching power converter, called a maximum power point tracker (MPPT). In this study, the various aspects of MPPT algorithms are analyzed in detail. Classifications, definitions, and basic equations of the most widely used MPPT technologies are given, and a comparison is made in the conclusion.

  8. Lower Bounds on the Maximum Energy Benefit of Network Coding for Wireless Multiple Unicast

    Directory of Open Access Journals (Sweden)

    Matsumoto Ryutaroh

    2010-01-01

    Full Text Available We consider the energy savings that can be obtained by employing network coding instead of plain routing in wireless multiple unicast problems. We establish lower bounds on the benefit of network coding, defined as the maximum of the ratio of the minimum energy required by routing and network coding solutions, where the maximum is over all configurations. It is shown that if coding and routing solutions are using the same transmission range, the benefit in d-dimensional networks is at least . Moreover, it is shown that if the transmission range can be optimized for routing and coding individually, the benefit in 2-dimensional networks is at least 3. Our results imply that codes following a decode-and-recombine strategy are not always optimal regarding energy efficiency.

  9. Determination of maximum negative Poisson's ratio for laminated fiber composites

    Energy Technology Data Exchange (ETDEWEB)

    Shokrieh, M.M.; Assadi, A. [Composites Research Laboratory, Mechanical Engineering Department, Center of Excellence in Experimental Solid Mechanics and Dynamics, Iran University of Science and Technology, Tehran 16846-13114 (Iran, Islamic Republic of)

    2011-05-15

    In contrast to isotropic materials, composites always show complicated mechanical behavior under external loading. In this article, an efficient algorithm is employed to obtain the maximum negative Poisson's ratio for laminated composite plates. The problem is simplified by normalizing the parameters and applying some manufacturing constraints, so that additional constraints in the optimization procedure can be avoided. A genetic algorithm is used to find the optimal thickness of each lamina with a specified fiber direction. It is observed that the laminated composite with the configuration (15/60/15) has the maximum negative Poisson's ratio. (Copyright 2011 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  10. Improving maximum power point tracking of partially shaded photovoltaic system by using IPSO-BELBIC

    International Nuclear Information System (INIS)

    El-Garhy, M. Abd Al-Alim; Mubarak, R.I.; El-Bably, M.

    2017-01-01

    Solar photovoltaic (PV) arrays in remote applications are often subject to rapid changes in the partial shading pattern. Rapid changes in the shading pattern make it difficult to track the global maximum power point (MPP) among the local peaks. A fast and efficient algorithm is therefore needed to detect the peak values, which vary continuously as the solar irradiance changes. This paper presents two algorithms based on the improved particle swarm optimization technique: one with a PID controller (IPSO-PID) and the other with a Brain Emotional Learning Based Intelligent Controller (IPSO-BELBIC). These techniques improve the maximum power point (MPP) tracking capabilities of photovoltaic (PV) systems under partial shading circumstances. The main aim of these improved algorithms is to accelerate the convergence of IPSO toward the MPP and to increase its efficiency. These algorithms also improve the tracking time under complex irradiance conditions. Under these conditions, the tracking time of the presented techniques improves to 2 ms, with an efficiency of 100%.

  11. Improving maximum power point tracking of partially shaded photovoltaic system by using IPSO-BELBIC

    Science.gov (United States)

    Al-Alim El-Garhy, M. Abd; Mubarak, R. I.; El-Bably, M.

    2017-08-01

    Solar photovoltaic (PV) arrays in remote applications are often subject to rapid changes in the partial shading pattern. Rapid changes in the shading pattern make it difficult to track the global maximum power point (MPP) among the local peaks. A fast and efficient algorithm is therefore needed to detect the peak values, which vary continuously as the solar irradiance changes. This paper presents two algorithms based on the improved particle swarm optimization technique: one with a PID controller (IPSO-PID) and the other with a Brain Emotional Learning Based Intelligent Controller (IPSO-BELBIC). These techniques improve the maximum power point (MPP) tracking capabilities of photovoltaic (PV) systems under partial shading circumstances. The main aim of these improved algorithms is to accelerate the convergence of IPSO toward the MPP and to increase its efficiency. These algorithms also improve the tracking time under complex irradiance conditions. Under these conditions, the tracking time of the presented techniques improves to 2 ms, with an efficiency of 100%.
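
    The IPSO variants themselves are not specified in these records, but the baseline particle swarm search over the duty cycle that they build on can be sketched as below. The synthetic two-peak power curve, swarm size, and inertia/acceleration coefficients are illustrative assumptions, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def pv_power(d):
    """Synthetic power vs. duty cycle with two peaks (stands in for a shaded P-V curve)."""
    return 60.0 * np.exp(-((d - 0.35) / 0.08) ** 2) + 95.0 * np.exp(-((d - 0.7) / 0.1) ** 2)

def pso_mppt(n_particles=6, n_iter=30, w=0.5, c1=1.5, c2=1.5):
    """Basic PSO over the duty cycle; IPSO variants adapt the coefficients and add a controller."""
    d = rng.uniform(0.1, 0.9, n_particles)          # particle positions (duty cycles)
    vel = np.zeros(n_particles)                     # particle velocities
    pbest, pbest_val = d.copy(), pv_power(d)
    g = pbest[np.argmax(pbest_val)]
    for _ in range(n_iter):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        vel = w * vel + c1 * r1 * (pbest - d) + c2 * r2 * (g - d)
        d = np.clip(d + vel, 0.05, 0.95)
        val = pv_power(d)
        improved = val > pbest_val
        pbest[improved], pbest_val[improved] = d[improved], val[improved]
        g = pbest[np.argmax(pbest_val)]
    return g, pv_power(g)

print(pso_mppt())   # should land near the global peak at d ≈ 0.7
```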

  12. Maximum vehicle cabin temperatures under different meteorological conditions

    Science.gov (United States)

    Grundstein, Andrew; Meentemeyer, Vernon; Dowd, John

    2009-05-01

    A variety of studies have documented the dangerously high temperatures that may occur within the passenger compartment (cabin) of cars under clear sky conditions, even at relatively low ambient air temperatures. Our study, however, is the first to examine cabin temperatures under variable weather conditions. It uses a unique maximum vehicle cabin temperature dataset in conjunction with directly comparable ambient air temperature, solar radiation, and cloud cover data collected from April through August 2007 in Athens, GA. Maximum cabin temperatures, ranging from 41 to 76°C, varied considerably depending on the weather conditions and the time of year. Clear days had the highest cabin temperatures, with average values of 68°C in the summer and 61°C in the spring. Cloudy days in both the spring and summer were on average approximately 10°C cooler. Our findings indicate that even on cloudy days with lower ambient air temperatures, vehicle cabin temperatures may reach deadly levels. Additionally, two predictive models of maximum daily vehicle cabin temperatures were developed using commonly available meteorological data. One model uses maximum ambient air temperature and average daily solar radiation, while the other uses cloud cover percentage as a surrogate for solar radiation. From these models, two maximum vehicle cabin temperature indices were developed to assess the level of danger. The models and indices may be useful for forecasting hazardous conditions, promoting public awareness, and estimating past cabin temperatures for use in forensic analyses.
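
    The record names the predictors (maximum ambient air temperature plus solar radiation, or cloud cover as a surrogate) but not the fitted coefficients, so the following only sketches how such a linear model could be fit by ordinary least squares once the corresponding measurements are available. The arrays are placeholders, not data from the study.

```python
import numpy as np

# Placeholder arrays: daily maximum air temperature (°C), mean solar radiation (W/m^2),
# and observed maximum cabin temperature (°C) would come from the field measurements.
t_air   = np.array([24.0, 28.5, 31.0, 33.2, 26.1])
solar   = np.array([180.0, 310.0, 420.0, 450.0, 220.0])
t_cabin = np.array([47.0, 58.0, 66.0, 70.0, 52.0])

# Fit T_cabin ≈ b0 + b1*T_air + b2*solar by ordinary least squares.
X = np.column_stack([np.ones_like(t_air), t_air, solar])
coef, *_ = np.linalg.lstsq(X, t_cabin, rcond=None)
print("intercept, air-temperature and solar-radiation coefficients:", coef)
```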

  13. Fractal Dimension and Maximum Sunspot Number in Solar Cycle

    Directory of Open Access Journals (Sweden)

    R.-S. Kim

    2006-09-01

    Full Text Available The fractal dimension is a quantitative parameter describing the characteristics of irregular time series. In this study, we use this parameter to analyze the irregular aspects of solar activity and to predict the maximum sunspot number in the following solar cycle by examining time series of the sunspot number. For this, we considered the daily sunspot number since 1850 from SIDC (Solar Influences Data Analysis Center) and then estimated the cycle variation of the fractal dimension by using Higuchi's method. We examined the relationship between this fractal dimension and the maximum monthly sunspot number in each solar cycle. As a result, we found that there is a strong inverse relationship between the fractal dimension and the maximum monthly sunspot number. By using this relation we predicted the maximum sunspot number in the solar cycle from the fractal dimension of the sunspot numbers during the solar activity increasing phase. The successful prediction is proven by a good correlation (r=0.89) between the observed and predicted maximum sunspot numbers in the solar cycles.
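
    Higuchi's method estimates the fractal dimension from the average curve length of the series at different delays k. The record does not give implementation details, so the sketch below is a standard textbook version of the estimator; the choice of k_max is an assumption.

```python
import numpy as np

def higuchi_fd(x, k_max=10):
    """Estimate the fractal dimension of a 1-D series with Higuchi's method."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    ks, lengths = [], []
    for k in range(1, k_max + 1):
        lk = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            # normalized curve length of the sub-series starting at m with step k
            l = np.sum(np.abs(np.diff(x[idx]))) * (n - 1) / (len(idx) - 1) / k
            lk.append(l / k)
        ks.append(k)
        lengths.append(np.mean(lk))
    # L(k) scales as k^(-D), so the slope of log L vs. log(1/k) estimates D
    slope, _ = np.polyfit(np.log(1.0 / np.array(ks)), np.log(lengths), 1)
    return slope

# Example: white noise should give a dimension close to 2
print(higuchi_fd(np.random.default_rng(1).standard_normal(2000)))
```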

  14. How long do centenarians survive? Life expectancy and maximum lifespan.

    Science.gov (United States)

    Modig, K; Andersson, T; Vaupel, J; Rau, R; Ahlbom, A

    2017-08-01

    The purpose of this study was to explore the pattern of mortality above the age of 100 years. In particular, we aimed to examine whether Scandinavian data support the theory that mortality reaches a plateau at particularly old ages. Whether the maximum length of life increases with time was also investigated. The analyses were based on individual-level data on all Swedish and Danish centenarians born from 1870 to 1901; in total 3006 men and 10 963 women were included. Birth cohort-specific probabilities of dying were calculated. Exact ages were used for calculations of maximum length of life. Whether maximum age changed over time was analysed, taking into account increases in cohort size. The results confirm that there has not been any improvement in mortality amongst centenarians in the past 30 years and that the current rise in life expectancy is driven by reductions in mortality below the age of 100 years. The death risks seem to reach a plateau of around 50% at age 103 years for men and 107 years for women. Despite the rising life expectancy, the maximum age does not appear to increase, in particular after accounting for the increasing number of individuals of advanced age. Mortality amongst centenarians is not changing despite improvements at younger ages. An extension of the maximum lifespan and a sizeable extension of life expectancy both require reductions in mortality above the age of 100 years. © 2017 The Association for the Publication of the Journal of Internal Medicine.

  15. Reliability of one-repetition maximum performance in people with chronic heart failure.

    Science.gov (United States)

    Ellis, Rachel; Holland, Anne E; Dodd, Karen; Shields, Nora

    2018-02-24

    Evaluate intra-rater and inter-rater reliability of the one-repetition maximum strength test in people with chronic heart failure. Intra-rater and inter-rater reliability study. A public tertiary hospital in northern metropolitan Melbourne. Twenty-four participants (nine female, mean age 71.8 ± 13.1 years) with mild to moderate heart failure of any aetiology. Lower limb strength was assessed by determining the maximum weight that could be lifted using a leg press. Intra-rater reliability was tested by one assessor on two separate occasions. Inter-rater reliability was tested by two assessors in random order. Intra-class correlation coefficients (ICC) and 95% confidence intervals were calculated. Bland and Altman analyses were also conducted, including calculation of mean differences between measures ([Formula: see text]) and limits of agreement. Ten intra-rater and 21 inter-rater assessments were completed. Excellent intra-rater (ICC(2,1) 0.96) and inter-rater (ICC(2,1) 0.93) reliability was found. Intra-rater assessment showed less variability (mean difference 4.5 kg, limits of agreement -8.11 to 17.11 kg) than inter-rater agreement (mean difference -3.81 kg, limits of agreement -23.39 to 15.77 kg). One-repetition maximum determined using a leg press is a reliable measure in people with heart failure. Given its smaller limits of agreement, intra-rater testing is recommended. Implications for Rehabilitation Using a leg press to determine a one-repetition maximum, we were able to demonstrate excellent inter-rater and intra-rater reliability using an intra-class correlation coefficient. The Bland and Altman levels of agreement were wide for inter-rater reliability, and so we recommend using one assessor if measuring change in strength within an individual over time.
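
    The reliability figures quoted above come from intraclass correlation and Bland-Altman analysis. As a reminder of what the limits of agreement are, the sketch below computes the mean difference and 95% limits of agreement for two paired sets of 1-RM measurements; the arrays are placeholders for repeated test scores, not the study's data.

```python
import numpy as np

def bland_altman(x1, x2):
    """Mean difference and 95% limits of agreement between paired measurements."""
    d = np.asarray(x1, float) - np.asarray(x2, float)
    mean_diff = d.mean()
    sd = d.std(ddof=1)
    return mean_diff, mean_diff - 1.96 * sd, mean_diff + 1.96 * sd

# Placeholder 1-RM leg-press scores (kg) from two test occasions
occasion1 = [80, 95, 120, 70, 105, 88]
occasion2 = [78, 99, 112, 72, 101, 90]
print(bland_altman(occasion1, occasion2))  # (mean difference, lower LoA, upper LoA)
```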

  16. Maximum-principle-satisfying space-time conservation element and solution element scheme applied to compressible multifluids

    KAUST Repository

    Shen, Hua

    2016-10-19

    A maximum-principle-satisfying space-time conservation element and solution element (CE/SE) scheme is constructed to solve a reduced five-equation model coupled with the stiffened equation of state for compressible multifluids. We first derive a sufficient condition for CE/SE schemes to satisfy the maximum principle when solving a general conservation law. We then introduce a slope limiter, applicable to both central and upwind CE/SE schemes, that enforces this sufficient condition. Finally, we implement the upwind maximum-principle-satisfying CE/SE scheme to solve the volume-fraction-based five-equation model for compressible multifluids. Several numerical examples are carried out to carefully examine the accuracy, efficiency, conservativeness and maximum-principle-satisfying property of the proposed approach.

  17. Maximum-principle-satisfying space-time conservation element and solution element scheme applied to compressible multifluids

    KAUST Repository

    Shen, Hua; Wen, Chih-Yung; Parsani, Matteo; Shu, Chi-Wang

    2016-01-01

    A maximum-principle-satisfying space-time conservation element and solution element (CE/SE) scheme is constructed to solve a reduced five-equation model coupled with the stiffened equation of state for compressible multifluids. We first derive a sufficient condition for CE/SE schemes to satisfy the maximum principle when solving a general conservation law. We then introduce a slope limiter, applicable to both central and upwind CE/SE schemes, that enforces this sufficient condition. Finally, we implement the upwind maximum-principle-satisfying CE/SE scheme to solve the volume-fraction-based five-equation model for compressible multifluids. Several numerical examples are carried out to carefully examine the accuracy, efficiency, conservativeness and maximum-principle-satisfying property of the proposed approach.
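
    The paper's specific limiter is not reproduced in the record. As a generic illustration of how slope limiting keeps reconstructed values within the bounds of neighbouring cell averages, the sketch below applies a standard minmod limiter to cell-average data; this is a textbook limiter, not necessarily the one used in the CE/SE scheme.

```python
import numpy as np

def minmod(a, b):
    """Return the argument of smaller magnitude when signs agree, else zero."""
    return np.where(np.sign(a) == np.sign(b), np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def limited_slopes(u, dx=1.0):
    """Minmod-limited slopes from cell averages; the linear reconstruction
    u_i(x) = u_i + s_i (x - x_i) then stays within the neighbouring averages."""
    fwd = (u[2:] - u[1:-1]) / dx        # forward differences
    bwd = (u[1:-1] - u[:-2]) / dx       # backward differences
    s = np.zeros_like(u)
    s[1:-1] = minmod(fwd, bwd)
    return s

u = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])   # a discontinuity in cell averages
print(limited_slopes(u))                        # slopes vanish at the jump, avoiding overshoot
```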

  18. Modeling multisite streamflow dependence with maximum entropy copula

    Science.gov (United States)

    Hao, Z.; Singh, V. P.

    2013-10-01

    Synthetic streamflows at different sites in a river basin are needed for planning, operation, and management of water resources projects. Modeling the temporal and spatial dependence structure of monthly streamflow at different sites is generally required. In this study, the maximum entropy copula method is proposed for multisite monthly streamflow simulation, in which the temporal and spatial dependence structure is imposed as constraints to derive the maximum entropy copula. The monthly streamflows at different sites are then generated by sampling from the conditional distribution. A case study for the generation of monthly streamflow at three sites in the Colorado River basin illustrates the application of the proposed method. Simulated streamflow from the maximum entropy copula is in satisfactory agreement with observed streamflow.

  19. Quality, precision and accuracy of the maximum No. 40 anemometer

    Energy Technology Data Exchange (ETDEWEB)

    Obermeir, J. [Otech Engineering, Davis, CA (United States); Blittersdorf, D. [NRG Systems Inc., Hinesburg, VT (United States)

    1996-12-31

    This paper synthesizes available calibration data for the Maximum No. 40 anemometer. Despite its long history in the wind industry, controversy surrounds the choice of transfer function for this anemometer. Many users are unaware that recent changes in default transfer functions in data loggers are producing output wind speed differences as large as 7.6%. Comparison of two calibration methods used for large samples of Maximum No. 40 anemometers shows a consistent difference of 4.6% in output speeds. This difference is significantly larger than estimated uncertainty levels. Testing, initially performed to investigate related issues, reveals that Gill and Maximum cup anemometers change their calibration transfer functions significantly when calibrated in the open atmosphere compared with calibration in a laminar wind tunnel. This indicates that atmospheric turbulence changes the calibration transfer function of cup anemometers. These results call into question the suitability of standard wind tunnel calibration testing for cup anemometers. 6 refs., 10 figs., 4 tabs.

  20. Beat the Deviations in Estimating Maximum Power of Thermoelectric Modules

    DEFF Research Database (Denmark)

    Gao, Junling; Chen, Min

    2013-01-01

    Under a certain temperature difference, the maximum power of a thermoelectric module can be estimated from the open-circuit voltage and the short-circuit current. In practical measurement, there exist two switch modes, either from open to short or from short to open, but the two modes can give different estimations of the maximum power. Using TEG-127-2.8-3.5-250 and TEG-127-1.4-1.6-250 as two examples, the difference is about 10%, leading to some deviations with the temperature change. This paper analyzes such differences by means of a nonlinear numerical model of thermoelectricity, and finds that the main cause is the influence of various currents on the produced electromotive potential. A simple and effective calibration method is proposed to minimize the deviations in specifying the maximum power. Experimental results validate the method with improved estimation accuracy.
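
    For an ideal thermoelectric module with constant internal resistance, the matched-load maximum power follows directly from the open-circuit voltage and short-circuit current. The sketch below implements this linear estimate, which is the kind of approximation whose switch-mode deviations the paper analyzes and calibrates; the numerical values are illustrative, not the measured parameters of the modules in the paper.

```python
def teg_max_power_estimate(v_oc, i_sc):
    """Matched-load maximum power of a linear (constant-resistance) thermoelectric module:
    P_max = V_oc^2 / (4 R_int) with R_int = V_oc / I_sc, i.e. V_oc * I_sc / 4."""
    return v_oc * i_sc / 4.0

# Illustrative values only
print(f"{teg_max_power_estimate(4.2, 1.9):.2f} W")
```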

  1. Mass mortality of the vermetid gastropod Ceraesignum maximum

    Science.gov (United States)

    Brown, A. L.; Frazer, T. K.; Shima, J. S.; Osenberg, C. W.

    2016-09-01

    Ceraesignum maximum (G.B. Sowerby I, 1825), formerly Dendropoma maximum, was subject to a sudden, massive die-off in the Society Islands, French Polynesia, in 2015. On Mo'orea, where we have detailed documentation of the die-off, these gastropods were previously found in densities up to 165 m-2. In July 2015, we surveyed shallow back reefs of Mo'orea before, during and after the die-off, documenting their swift decline. All censused populations incurred 100% mortality. Additional surveys and observations from Mo'orea, Tahiti, Bora Bora, and Huahine (but not Taha'a) suggested a similar, and approximately simultaneous, die-off. The cause(s) of this cataclysmic mass mortality are currently unknown. Given the previously documented negative effects of C. maximum on corals, we expect the die-off will have cascading effects on the reef community.

  2. Stationary neutrino radiation transport by maximum entropy closure

    International Nuclear Information System (INIS)

    Bludman, S.A.

    1994-11-01

    The authors obtain the angular distributions that maximize the entropy functional for Maxwell-Boltzmann (classical), Bose-Einstein, and Fermi-Dirac radiation. In the low and high occupancy limits, the maximum entropy closure is bounded by previously known variable Eddington factors that depend only on the flux. For intermediate occupancy, the maximum entropy closure depends on both the occupation density and the flux. The Fermi-Dirac maximum entropy variable Eddington factor shows a scale invariance, which leads to a simple, exact analytic closure for fermions. This two-dimensional variable Eddington factor gives results that agree well with exact (Monte Carlo) neutrino transport calculations out of a collapse residue during early phases of hydrostatic neutron star formation

  3. Spatio-temporal observations of the tertiary ozone maximum

    Directory of Open Access Journals (Sweden)

    V. F. Sofieva

    2009-07-01

    Full Text Available We present spatio-temporal distributions of the tertiary ozone maximum (TOM), based on GOMOS (Global Ozone Monitoring by Occultation of Stars) ozone measurements in 2002–2006. The tertiary ozone maximum is typically observed in the high-latitude winter mesosphere at an altitude of ~72 km. Although the explanation for this phenomenon has been found recently – low concentrations of odd-hydrogen cause the subsequent decrease in odd-oxygen losses – models have shown significant deviations from existing observations until recently. Good coverage of polar night regions by GOMOS data has allowed, for the first time, observational spatial and temporal distributions of the night-time ozone mixing ratio in the mesosphere to be obtained.

    The distributions obtained from GOMOS data have specific features, which are variable from year to year. In particular, due to the long lifetime of ozone in polar night conditions, the downward transport of polar air by the meridional circulation is clearly observed in the tertiary ozone maximum time series. Although the maximum tertiary ozone mixing ratio is achieved close to the polar night terminator (as predicted by the theory), TOM can also be observed at very high latitudes, not only at the beginning and end of winter but also in its middle. We have compared the observational spatio-temporal distributions of the tertiary ozone maximum with those obtained using WACCM (Whole Atmosphere Community Climate Model) and found that the specific features are reproduced satisfactorily by the model.

    Since ozone in the mesosphere is very sensitive to HOx concentrations, energetic particle precipitation can significantly modify the shape of the ozone profiles. In particular, GOMOS observations have shown that the tertiary ozone maximum was temporarily destroyed during the January 2005 and December 2006 solar proton events as a result of the HOx enhancement from the increased ionization.

  4. Estimating the maximum potential revenue for grid connected electricity storage :

    Energy Technology Data Exchange (ETDEWEB)

    Byrne, Raymond Harry; Silva Monroy, Cesar Augusto.

    2012-12-01

    The valuation of an electricity storage device is based on the expected future cash flow generated by the device. Two potential sources of income for an electricity storage system are energy arbitrage and participation in the frequency regulation market. Energy arbitrage refers to purchasing (storing) energy when electricity prices are low, and selling (discharging) energy when electricity prices are high. Frequency regulation is an ancillary service geared towards maintaining system frequency, and is typically procured by the independent system operator in some type of market. This paper outlines the calculations required to estimate the maximum potential revenue from participating in these two activities. First, a mathematical model is presented for the state of charge as a function of the storage device parameters and the quantities of electricity purchased/sold as well as the quantities offered into the regulation market. Using this mathematical model, we present a linear programming optimization approach to calculating the maximum potential revenue from an electricity storage device. The calculation of the maximum potential revenue is critical in developing an upper bound on the value of storage, as a benchmark for evaluating potential trading strategies, and a tool for capital finance risk assessment. Then, we use historical California Independent System Operator (CAISO) data from 2010-2011 to evaluate the maximum potential revenue from the Tehachapi wind energy storage project, an American Recovery and Reinvestment Act of 2009 (ARRA) energy storage demonstration project. We investigate the maximum potential revenue from two different scenarios: arbitrage only and arbitrage combined with the regulation market. Our analysis shows that participation in the regulation market produces four times the revenue compared to arbitrage in the CAISO market using 2010 and 2011 data. Then we evaluate several trading strategies to illustrate how they compare to the
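
    The arbitrage part of the maximum-revenue calculation can be posed as a small linear program: choose hourly charge and discharge quantities to maximize price-weighted net sales subject to energy-capacity and power limits. The sketch below is a simplified version of that idea, with perfect price foresight, round-trip efficiency folded into a single factor, no regulation market, and made-up prices and device parameters; it is not the report's full formulation.

```python
import numpy as np
from scipy.optimize import linprog

prices = np.array([20.0, 15.0, 30.0, 55.0, 60.0, 25.0])   # $/MWh, illustrative only
T = len(prices)
p_max, e_max, eta = 1.0, 4.0, 0.85                         # MW, MWh, round-trip efficiency

# Decision variables x = [charge_0..charge_{T-1}, discharge_0..discharge_{T-1}] (MWh per hour).
# Objective: maximize sum(price * (discharge - charge))  ->  minimize its negative.
c = np.concatenate([prices, -prices])

# State of charge after hour t: soc_t = sum_{s<=t} (eta*charge_s - discharge_s), with 0 <= soc_t <= e_max.
lower_tri = np.tril(np.ones((T, T)))
A_soc = np.hstack([eta * lower_tri, -lower_tri])
A_ub = np.vstack([A_soc, -A_soc])
b_ub = np.concatenate([np.full(T, e_max), np.zeros(T)])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, p_max)] * 2 * T, method="highs")
charge, discharge = res.x[:T], res.x[T:]
print("maximum arbitrage revenue ($):", -res.fun)
```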

  5. 10 CFR 433.4 - Energy efficiency performance standard.

    Science.gov (United States)

    2010-01-01

    ... 10 Energy 3 2010-01-01 2010-01-01 false Energy efficiency performance standard. 433.4 Section 433.4 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY EFFICIENCY STANDARDS FOR THE DESIGN AND... consumption level at or better than the maximum level of energy efficiency that is life-cycle cost-effective...

  6. Optimized Large-scale CMB Likelihood and Quadratic Maximum Likelihood Power Spectrum Estimation

    Science.gov (United States)

    Gjerløw, E.; Colombo, L. P. L.; Eriksen, H. K.; Górski, K. M.; Gruppuso, A.; Jewell, J. B.; Plaszczynski, S.; Wehus, I. K.

    2015-11-01

    We revisit the problem of exact cosmic microwave background (CMB) likelihood and power spectrum estimation with the goal of minimizing computational costs through linear compression. This idea was originally proposed for CMB purposes by Tegmark et al., and here we develop it into a fully functioning computational framework for large-scale polarization analysis, adopting WMAP as a working example. We compare five different linear bases (pixel space, harmonic space, noise covariance eigenvectors, signal-to-noise covariance eigenvectors, and signal-plus-noise covariance eigenvectors) in terms of compression efficiency, and find that the computationally most efficient basis is the signal-to-noise eigenvector basis, which is closely related to the Karhunen-Loeve and Principal Component transforms, in agreement with previous suggestions. For this basis, the information in 6836 unmasked WMAP sky map pixels can be compressed into a smaller set of 3102 modes, with a maximum error increase of any single multipole of 3.8% at ℓ ≤ 32 and a maximum shift in the mean values of a joint distribution of an amplitude-tilt model of 0.006σ. This compression reduces the computational cost of a single likelihood evaluation by a factor of 5, from 38 to 7.5 CPU seconds, and it also results in a more robust likelihood by implicitly regularizing nearly degenerate modes. Finally, we use the same compression framework to formulate a numerically stable and computationally efficient variation of the Quadratic Maximum Likelihood implementation, which requires less than 3 GB of memory and 2 CPU minutes per iteration for ℓ ≤ 32, rendering low-ℓ QML CMB power spectrum analysis fully tractable on a standard laptop.

  7. Discontinuity of maximum entropy inference and quantum phase transitions

    International Nuclear Information System (INIS)

    Chen, Jianxin; Ji, Zhengfeng; Yu, Nengkun; Zeng, Bei; Li, Chi-Kwong; Poon, Yiu-Tung; Shen, Yi; Zhou, Duanlu

    2015-01-01

    In this paper, we discuss the connection between two genuinely quantum phenomena—the discontinuity of quantum maximum entropy inference and quantum phase transitions at zero temperature. It is shown that the discontinuity of the maximum entropy inference of local observable measurements signals the non-local type of transitions, where local density matrices of the ground state change smoothly at the transition point. We then propose to use the quantum conditional mutual information of the ground state as an indicator to detect the discontinuity and the non-local type of quantum phase transitions in the thermodynamic limit. (paper)

  8. On an Objective Basis for the Maximum Entropy Principle

    Directory of Open Access Journals (Sweden)

    David J. Miller

    2015-01-01

    Full Text Available In this letter, we elaborate on some of the issues raised by a recent paper by Neapolitan and Jiang concerning the maximum entropy (ME) principle and alternative principles for estimating probabilities consistent with known, measured constraint information. We argue that the ME solution for the “problematic” example introduced by Neapolitan and Jiang has a stronger objective basis, rooted in results from information theory, than their proposed alternative solution. We also raise some technical concerns about the Bayesian analysis in their work, which was used to independently support their alternative to the ME solution. The letter concludes by noting some open problems involving maximum entropy statistical inference.
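
    As a concrete reminder of what the ME principle prescribes, the sketch below computes the maximum entropy distribution on the outcomes 1..6 subject to a prescribed mean (the classic constrained-dice setting, not an example taken from the letter). The solution is an exponential family, p_i proportional to exp(lam*i), with the multiplier lam found numerically from the moment condition.

```python
import numpy as np
from scipy.optimize import brentq

def max_entropy_dice(target_mean):
    """Maximum entropy distribution on {1,...,6} with a given mean:
    p_i ∝ exp(lam * i), where lam solves the mean constraint."""
    vals = np.arange(1, 7)

    def mean_given_lam(lam):
        w = np.exp(lam * vals)
        return (vals * w).sum() / w.sum() - target_mean

    lam = brentq(mean_given_lam, -10.0, 10.0)   # root of the moment condition
    p = np.exp(lam * vals)
    return p / p.sum()

print(max_entropy_dice(4.5).round(4))   # skewed toward larger faces; uniform when the mean is 3.5
```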

  9. The maximum economic depth of groundwater abstraction for irrigation

    Science.gov (United States)

    Bierkens, M. F.; Van Beek, L. P.; de Graaf, I. E. M.; Gleeson, T. P.

    2017-12-01

    Over recent decades, groundwater has become increasingly important for agriculture. Irrigation accounts for 40% of global food production, and its importance is expected to grow further in the near future. Already, about 70% of the globally abstracted water is used for irrigation, and nearly half of that is pumped groundwater. In many irrigated areas where groundwater is the primary source of irrigation water, groundwater abstraction is larger than recharge, and we see massive groundwater head decline in these areas. An important question then is: to what maximum depth can groundwater be pumped for it to still be economically recoverable? The objective of this study is therefore to create a global map of the maximum depth of economically recoverable groundwater when used for irrigation. The maximum economic depth is the maximum depth at which revenues are still larger than pumping costs, or the maximum depth at which initial investments become too large compared to yearly revenues. To this end we set up a simple economic model in which the costs of well drilling and the energy costs of pumping, which are functions of well depth and static head depth respectively, are compared with the revenues obtained for the irrigated crops. Parameters for the cost sub-model are obtained from several US-based studies and applied to other countries based on GDP/capita as an index of labour costs. The revenue sub-model is based on gross irrigation water demand calculated with a global hydrological and water resources model, areal coverage of crop types from MIRCA2000, and FAO-based statistics on crop yield and market price. We applied our method to irrigated areas in the world overlying productive aquifers. Estimated maximum economic depths range between 50 and 500 m. The most important factors explaining the maximum economic depth are the dominant crop type in the area and whether or not initial investments in well infrastructure are limiting. In subsequent research, our estimates of...
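
    The economic model described above compares depth-dependent drilling and pumping costs with crop revenues. The sketch below implements that comparison in its simplest break-even form; every parameter (costs per metre, energy price, pump efficiency, revenue per cubic metre) is a hypothetical placeholder rather than a value from the study.

```python
import numpy as np

RHO_G = 9810.0                      # J needed to lift 1 m^3 of water by 1 m (density * gravity)

def max_economic_depth(revenue_per_m3, energy_price_per_kwh, drilling_cost_per_m,
                       annual_volume_m3, lifetime_yr=20, pump_eff=0.6,
                       depths=np.arange(10.0, 1001.0, 10.0)):
    """Deepest static head at which irrigation revenue still covers pumping energy
    and amortized drilling costs (all inputs are illustrative placeholders)."""
    energy_kwh_per_m3 = RHO_G * depths / pump_eff / 3.6e6          # kWh per m^3 lifted from each depth
    annual_cost = (annual_volume_m3 * energy_kwh_per_m3 * energy_price_per_kwh
                   + drilling_cost_per_m * depths / lifetime_yr)
    annual_revenue = annual_volume_m3 * revenue_per_m3
    feasible = depths[annual_revenue >= annual_cost]
    return feasible.max() if feasible.size else 0.0

print(max_economic_depth(revenue_per_m3=0.08, energy_price_per_kwh=0.10,
                         drilling_cost_per_m=300.0, annual_volume_m3=50000.0))
```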

  10. A comparison of methods of predicting maximum oxygen uptake.

    OpenAIRE

    Grant, S; Corbett, K; Amjad, A M; Wilson, J; Aitchison, T

    1995-01-01

    The aim of this study was to compare the results from a Cooper walk run test, a multistage shuttle run test, and a submaximal cycle test with the direct measurement of maximum oxygen uptake on a treadmill. Three predictive tests of maximum oxygen uptake--linear extrapolation of heart rate of VO2 collected from a submaximal cycle ergometer test (predicted L/E), the Cooper 12 min walk, run test, and a multi-stage progressive shuttle run test (MST)--were performed by 22 young healthy males (mean...

  11. Maximum length scale in density based topology optimization

    DEFF Research Database (Denmark)

    Lazarov, Boyan Stefanov; Wang, Fengwen

    2017-01-01

    The focus of this work is on two new techniques for imposing maximum length scale in topology optimization. Restrictions on the maximum length scale provide designers with full control over the optimized structure and open possibilities to tailor the optimized design for a broader range of manufacturing processes by fulfilling the associated technological constraints. One of the proposed methods is based on a combination of several filters and builds on top of the classical density filtering, which can be viewed as a low-pass filter applied to the design parametrization. The main idea...

  12. A Maximum Entropy Method for a Robust Portfolio Problem

    Directory of Open Access Journals (Sweden)

    Yingying Xu

    2014-06-01

    Full Text Available We propose a continuous maximum entropy method to investigate the robust optimal portfolio selection problem for a market with transaction costs and dividends. This robust model aims to maximize the worst-case portfolio return in the case that all asset returns lie within some prescribed intervals. A numerical optimal solution to the problem is obtained by using a continuous maximum entropy method. Furthermore, some numerical experiments indicate that the robust model in this paper can result in better portfolio performance than a classical mean-variance model.

  13. A Fast Algorithm for Maximum Likelihood Estimation of Harmonic Chirp Parameters

    DEFF Research Database (Denmark)

    Jensen, Tobias Lindstrøm; Nielsen, Jesper Kjær; Jensen, Jesper Rindom

    2017-01-01

    The analysis of (approximately) periodic signals is an important element in numerous applications. One generalization of standard periodic signals often occurring in practice is harmonic chirp signals, where the instantaneous frequency increases/decreases linearly as a function of time. A statistically efficient estimator for extracting the parameters of the harmonic chirp model in additive white Gaussian noise is the maximum likelihood (ML) estimator, which recently has been demonstrated to be robust to noise and accurate --- even when the model order is unknown. The main drawback of the ML...

  14. Maximum power point tracker for portable photovoltaic systems with resistive-like load

    Energy Technology Data Exchange (ETDEWEB)

    De Cesare, G.; Caputo, D.; Nascetti, A. [Department of Electronic Engineering, University of Rome La Sapienza via Eudossiana, 18 00184 Rome (Italy)

    2006-08-15

    In this work we report on the design and realization of a maximum power point tracking (MPPT) circuit suitable for low power, portable applications with resistive load. The design rules included cost, size and power efficiency considerations. A novel scheme for the implementation of the control loop of the MPPT circuit is proposed, combining good performance with compact design. The operation and performances were simulated at circuit schematic level with simulation program with integrated circuit emphasis (SPICE). The improved operation of a PV system using our MPPT circuit was demonstrated using a purely resistive load. (author)

  15. Double-tailored nonimaging reflector optics for maximum-performance solar concentration.

    Science.gov (United States)

    Goldstein, Alex; Gordon, Jeffrey M

    2010-09-01

    A nonimaging strategy that tailors two mirror contours for concentration near the étendue limit is explored, prompted by solar applications where a sizable gap between the optic and absorber is required. Subtle limitations of this simultaneous multiple surface method approach are derived, rooted in the manner in which phase space boundaries can be tailored according to the edge-ray principle. The fundamental categories of double-tailored reflective optics are identified, only a minority of which can pragmatically offer maximum concentration at high collection efficiency. Illustrative examples confirm that acceptance half-angles as large as 30 mrad can be realized at a flux concentration of approximately 1000.

  16. The unfolding of NaI(Tl) γ-ray spectrum based on maximum likelihood method

    International Nuclear Information System (INIS)

    Zhang Qingxian; Ge Liangquan; Gu Yi; Zeng Guoqiang; Lin Yanchang; Wang Guangxi

    2011-01-01

    NaI(Tl) detectors, having a good detection efficiency, are used to detect gamma rays in field surveys, but their poor energy resolution hinders their applications, despite the use of traditional methods to resolve overlapped gamma-ray peaks. In this paper, a maximum likelihood (ML) solution is used to resolve the spectrum. The ML method, which is capable of decomposing peaks separated in energy by more than 2/3 FWHM, is applied to scale the NaI(Tl) spectrometer. The result shows that the net area is proportional to the isotope content and that the precision of scaling is better than that of the stripping ratio method. (authors)

  17. Stochastic Evaluation of Maximum Wind Installation in a Radial Distribution Network

    DEFF Research Database (Denmark)

    Chen, Peiyuan; Bak-Jensen, Birgitte; Chen, Zhe

    2011-01-01

    This paper proposes an optimization algorithm to find the maximum wind installation in a radial distribution network. The algorithm imposes a limit on the amount of wind energy that can be curtailed annually. The algorithm implements the wind turbine reactive power control and wind energy curtailment using sensitivity factors. The optimization is integrated with Monte Carlo simulation to account for the stochastic behavior of load demand and wind power generation. The proposed algorithm is tested on a real 20 kV Danish distribution system in Støvring. It is demonstrated that the algorithm executes reactive compensation and energy curtailment sequentially in an effective and efficient manner.

  18. MAXIMUM RUNOFF OF THE FLOOD ON WADIS OF NORTHERN ...

    African Journals Online (AJOL)

    lanez

    The technique for computing the maximum flood runoff for rivers in the northern part of Algeria is based on the theory of ... north to south: 1) coastal Tell – a fertile, highly cultivated and sown zone; 2) territory of the Atlas Mountains ... In the first case the empirical dependence between the maximum intensity of precipitation for some calculation ...

  19. Scientific substantination of maximum allowable concentration of fluopicolide in water

    Directory of Open Access Journals (Sweden)

    Pelo I.М.

    2014-03-01

    Full Text Available The research was carried out in order to substantiate the maximum allowable concentration of fluopicolide in the water of water reservoirs. Methods of study: laboratory hygienic experiment using organoleptic, sanitary-chemical, sanitary-toxicological, sanitary-microbiological and mathematical methods. The influence of fluopicolide on the organoleptic properties of water and on the sanitary regimen of reservoirs for household purposes was determined, and its subthreshold concentration in water by the sanitary-toxicological hazard index was calculated. The threshold concentration of the substance by the main hazard criteria was established, and the maximum allowable concentration in water was substantiated. The studies led to the following conclusions: the fluopicolide threshold concentration in water by the organoleptic hazard index (limiting criterion: smell) is 0.15 mg/dm3; by the general sanitary hazard index (limiting criteria: impact on the number of saprophytic microflora, biochemical oxygen demand and nitrification) it is 0.015 mg/dm3; the maximum non-effective concentration is 0.14 mg/dm3; and the maximum allowable concentration is 0.015 mg/dm3.

  20. Image coding based on maximum entropy partitioning for identifying ...

    Indian Academy of Sciences (India)

    A new coding scheme based on maximum entropy partitioning is proposed in our work, particularly to identify the improbable intensities related to different emotions. The improbable intensities when used as a mask decode the facial expression correctly, providing an effective platform for future emotion categorization ...