WorldWideScience

Sample records for voltage scaling algorithm

  1. Current constrained voltage scaled reconstruction (CCVSR) algorithm for MR-EIT and its performance with different probing current patterns

    International Nuclear Information System (INIS)

    Birguel, Oezlem; Eyueboglu, B Murat; Ider, Y Ziya

    2003-01-01

    Conventional injected-current electrical impedance tomography (EIT) and magnetic resonance imaging (MRI) techniques can be combined to reconstruct high-resolution true conductivity images. The magnetic flux density distribution generated by the internal current density distribution is extracted from MR phase images. This information is used to form a finely detailed conductivity image using an Ohm's law based update equation. The reconstructed conductivity image is assumed to differ from the true image by a scale factor. EIT surface potential measurements are then used to scale the reconstructed image in order to find the true conductivity values. This process is iterated until a stopping criterion is met. Several simulations are carried out for opposite and cosine current injection patterns to select the best current injection pattern for a 2D thorax model. The contrast resolution and accuracy of the proposed algorithm are also studied. In all simulation studies, realistic noise models for voltage and magnetic flux density measurements are used. It is shown that, in contrast to conventional EIT techniques, the proposed method is capable of reconstructing conductivity images with uniform and high spatial resolution. The spatial resolution is limited by the larger of the finite element mesh element size and twice the magnetic resonance image pixel size.
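
    As a rough illustration of the scaling step described above, the sketch below (Python, with made-up potentials and array sizes rather than the authors' 2D thorax model) estimates a single least-squares scale factor from measured and forward-simulated EIT surface potentials and applies it to the unscaled conductivity image.

```python
import numpy as np

def estimate_scale(v_simulated, v_measured):
    """Least-squares scale factor s such that v_simulated / s best matches v_measured.

    For a linear resistive medium, boundary potentials scale as 1/sigma, so if the
    reconstructed conductivity is off by a factor s (sigma_true = s * sigma_hat),
    the potentials simulated from sigma_hat are s times too large.
    """
    v_simulated = np.asarray(v_simulated, float)
    v_measured = np.asarray(v_measured, float)
    return np.dot(v_simulated, v_simulated) / np.dot(v_simulated, v_measured)

# Toy illustration: pretend the true conductivity is 2.4x the unscaled reconstruction.
rng = np.random.default_rng(0)
v_meas = rng.uniform(1.0, 5.0, size=16)            # "measured" EIT surface potentials
v_sim = 2.4 * v_meas + rng.normal(0, 0.01, 16)     # potentials simulated from the unscaled image
s = estimate_scale(v_sim, v_meas)
sigma_hat = rng.uniform(0.1, 1.0, size=100)        # unscaled conductivity image (placeholder)
sigma_scaled = s * sigma_hat                       # scaled toward true conductivity values
print(f"estimated scale factor: {s:.3f}")
```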

  2. Low-Energy Real-Time OS Using Voltage Scheduling Algorithm for Variable Voltage Processors

    OpenAIRE

    Okuma, Takanori; Yasuura, Hiroto

    2001-01-01

    This paper presents a real-time OS based on μITRON using a proposed voltage scheduling algorithm for variable voltage processors, which can vary the supply voltage dynamically. The proposed voltage scheduling algorithms assign a voltage level to each task dynamically in order to minimize energy consumption under timing constraints. Using the presented real-time OS, running tasks with low supply voltage leads to drastic energy reduction. In addition, the presented voltage scheduling algorithm is ...
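
    A minimal sketch of the per-task voltage assignment idea, assuming a small set of illustrative discrete voltage/frequency operating points (not those of any particular variable-voltage processor): each task is run at the lowest level whose speed still meets its timing constraint, since dynamic energy grows roughly with the square of the supply voltage.

```python
# Assumed discrete (voltage, frequency) operating points; values are illustrative only.
LEVELS = [(0.9, 100e6), (1.1, 200e6), (1.3, 400e6)]   # (volts, Hz), lowest voltage first

def pick_level(cycles, slack_s):
    """Pick the lowest-energy level that still finishes `cycles` within `slack_s` seconds.

    Dynamic energy is roughly proportional to cycles * V^2, so the lowest voltage (and the
    correspondingly lower frequency) is preferred whenever the timing constraint allows it.
    """
    for volts, hz in LEVELS:
        if cycles / hz <= slack_s:
            return volts, hz
    return LEVELS[-1]                        # no feasible level: run as fast as possible

# Example: a task of 3e6 cycles with 20 ms of slack can run at a reduced level.
print(pick_level(3e6, 0.020))
print(pick_level(3e6, 0.009))
```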

  3. MPPT algorithm for voltage controlled PV inverters

    DEFF Research Database (Denmark)

    Kerekes, Tamas; Teodorescu, Remus; Liserre, Marco

    2008-01-01

    This paper presents a novel concept for an MPPT that can be used in the case of voltage-controlled grid-connected PV inverters. In the case of single-phase systems, the 100 Hz ripple in the AC power is also present on the DC side. Depending on the DC link capacitor, this power fluctuation can be used to track the MPP of the PV array, using the information that at the MPP the power oscillations are very small. In this way the algorithm can detect that the current working point is at the MPP for the current atmospheric conditions.
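
    The sketch below illustrates, under simplified assumptions, how the 100 Hz power ripple can drive the MPP search: the ripple amplitude shrinks as the operating point approaches the MPP (where dP/dV is close to zero), so the voltage reference is perturbed until the measured ripple falls below a tolerance. The sampling rate, step size and tolerance are placeholders, not values from the paper.

```python
import numpy as np

def ripple_amplitude(p_samples, fs=10000, f_ripple=100):
    """Amplitude of the 100 Hz component of the PV-side power (single-frequency DFT bin)."""
    n = len(p_samples)
    t = np.arange(n) / fs
    phasor = np.exp(-2j * np.pi * f_ripple * t)
    return 2.0 * abs(np.dot(p_samples, phasor)) / n

def mppt_step(v_ref, direction, p_samples, p_samples_prev, step=2.0, tol=1.0):
    """Perturb-and-observe style step driven by the 100 Hz ripple amplitude.

    When the ripple falls below `tol`, the operating point is assumed to sit at the MPP
    and the reference is held; otherwise keep moving in the direction that shrank the
    ripple, and reverse if the ripple grew.
    """
    a_now, a_prev = ripple_amplitude(p_samples), ripple_amplitude(p_samples_prev)
    if a_now < tol:
        return v_ref, direction                     # power oscillations small: assume MPP
    if a_now > a_prev:
        direction = -direction                      # ripple grew: we moved the wrong way
    return v_ref + direction * step, direction

# Synthetic power samples (two 100 Hz cycles): the ripple shrank after the previous step.
fs = 10000
t = np.arange(200) / fs
p_prev = 1000 + 30 * np.sin(2 * np.pi * 100 * t)     # larger ripple: farther from MPP
p_now = 1000 + 12 * np.sin(2 * np.pi * 100 * t)      # smaller ripple after the last step
print(mppt_step(380.0, +1, p_now, p_prev))           # keeps stepping in the same direction
```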

  4. Large scale tracking algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low-resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  5. Extension algorithm for generic low-voltage networks

    Science.gov (United States)

    Marwitz, S.; Olk, C.

    2018-02-01

    Distributed energy resources (DERs) are increasingly penetrating the energy system, driven by climate and sustainability goals. These technologies are mostly connected to low-voltage electrical networks and change the demand and supply situation in these networks. This can cause critical network states. Network topologies vary significantly and depend on several conditions including geography, historical development, network design and the number of network connections. In the past, only some of these aspects were taken into account when estimating the network investment needs for Germany at the low-voltage level. Typically, fixed network topologies are examined, or a Monte Carlo approach is used to quantify the investment needs at this voltage level. Recent research has revealed that DERs differ substantially between rural, suburban and urban regions. The low-voltage network topologies have different design concepts in these regions, so that different network topologies have to be considered when assessing the need for network extensions and investments due to DERs. An extension algorithm is needed to calculate network extensions and investment needs for the different typologies of generic low-voltage networks. We therefore present a new algorithm, which is capable of calculating the extension for generic low-voltage networks of any given topology based on voltage range deviations and thermal overloads. The algorithm requires information about line and cable lengths, their topology and the network state only. We test the algorithm on a radial, a loop, and a heavily meshed network. Here we show that the algorithm functions for electrical networks with these topologies. We found that the algorithm is able to extend different networks efficiently by placing cables between network nodes. The main value of the algorithm is that it does not require any information about routes for additional cables or positions for additional substations when it comes to estimating

  6. Reactive power dispatch considering voltage stability with seeker optimization algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Dai, Chaohua; Chen, Weirong; Zhang, Xuexia [The School of Electrical Engineering, Southwest Jiaotong University, Chengdu 610031 (China); Zhu, Yunfang [Department of Computer and Communication Engineering, E' mei Campus, Southwest Jiaotong University, E' mei 614202 (China)

    2009-10-15

    Optimal reactive power dispatch (ORPD) has a growing impact on the secure and economical operation of power systems. This issue is well known as a non-linear, multi-modal and multi-objective optimization problem where global optimization techniques are required in order to avoid local minima. In recent decades, computational intelligence-based techniques such as genetic algorithms (GAs), differential evolution (DE) algorithms and particle swarm optimization (PSO) algorithms have often been used for this aim. In this work, a seeker optimization algorithm (SOA) based method is proposed for ORPD considering static voltage stability and voltage deviation. The SOA is based on the concept of simulating the act of human searching, where the search direction is based on the empirical gradient obtained by evaluating the response to position changes, and the step length is based on uncertainty reasoning using a simple fuzzy rule. The algorithm's performance is studied through comparisons with two versions of GAs, three versions of DE algorithms and four versions of PSO algorithms on the IEEE 57-bus and 118-bus power systems. The simulation results show that the proposed approach performs better than the other listed algorithms and can be efficiently used for the ORPD problem. (author)

  7. Genetic algorithm based reactive power dispatch for voltage stability improvement

    Energy Technology Data Exchange (ETDEWEB)

    Devaraj, D. [Department of Electrical and Electronics, Kalasalingam University, Krishnankoil 626 190 (India); Roselyn, J. Preetha [Department of Electrical and Electronics, SRM University, Kattankulathur 603 203, Chennai (India)

    2010-12-15

    Voltage stability assessment and control form the core function in a modern energy control centre. This paper presents an improved Genetic algorithm (GA) approach for voltage stability enhancement. The proposed technique is based on the minimization of the maximum of L-indices of load buses. Generator voltages, switchable VAR sources and transformer tap changers are used as optimization variables of this problem. The proposed approach permits the optimization variables to be represented in their natural form in the genetic population. For effective genetic processing, the crossover and mutation operators which can directly deal with the floating point numbers and integers are used. The proposed algorithm has been tested on IEEE 30-bus and IEEE 57-bus test systems and successful results have been obtained. (author)

  8. Energy Conservation Using Dynamic Voltage Frequency Scaling for Computational Cloud

    Directory of Open Access Journals (Sweden)

    A. Paulin Florence

    2016-01-01

    Cloud computing is a new technology which supports resource sharing on a “pay as you go” basis around the world. It provides various services such as SaaS, IaaS, and PaaS. Computation is a part of IaaS, and all computational requests are to be served efficiently with optimal power utilization in the cloud. Recently, various algorithms have been developed to reduce power consumption, and the Dynamic Voltage and Frequency Scaling (DVFS) scheme is also used in this perspective. In this paper we devise a methodology which analyzes the behavior of a given cloud request and identifies the associated type of algorithm. Once the type of algorithm is identified, its time complexity is calculated using its asymptotic notation. Using a best-fit strategy the appropriate host is identified and the incoming job is allocated to the selected host. Using the measured time complexity, the required clock frequency of the host is determined. Accordingly, the CPU frequency is scaled up or down using the DVFS scheme, enabling energy savings of up to 55% of the total power consumption.
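
    A simplified sketch of the frequency-selection step described above: the request's asymptotic cost model gives an operation count, which together with the deadline yields the lowest DVFS step that still meets it. The cost models, frequency table and cycles-per-operation constant are illustrative assumptions, not values from the paper.

```python
import math

# Illustrative asymptotic cost models for a few request types (operation counts, not seconds).
COST_MODELS = {
    "linear_scan": lambda n: n,
    "sort":        lambda n: n * math.log2(max(n, 2)),
    "matrix_mult": lambda n: n ** 3,
}

AVAILABLE_FREQS_HZ = [0.8e9, 1.2e9, 1.6e9, 2.0e9, 2.4e9]   # assumed DVFS steps of the host

def required_frequency(kind, n, deadline_s, cycles_per_op=4.0):
    """Estimate the clock frequency needed to finish the request by its deadline.

    The operation count comes from the request's asymptotic cost model; cycles_per_op is a
    rough calibration constant. The host is then scaled to the lowest DVFS step that meets
    the estimate, saving energy relative to always running at the maximum frequency.
    """
    ops = COST_MODELS[kind](n)
    f_needed = ops * cycles_per_op / deadline_s
    for f in AVAILABLE_FREQS_HZ:
        if f >= f_needed:
            return f
    return AVAILABLE_FREQS_HZ[-1]

print(required_frequency("sort", 5_000_000, deadline_s=0.5) / 1e9, "GHz")
```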

  9. Reduced Voltage Scaling in Clock Distribution Networks

    Directory of Open Access Journals (Sweden)

    Khader Mohammad

    2009-01-01

    We propose a novel circuit technique to generate reduced voltage swing (RVS) signals for active power reduction on main buses and clocks. This is achieved without performance degradation, without extra power supply requirements, and with minimum area overhead. The technique stops the discharge path on the net that is swinging low at a certain voltage value. It reduces active power on the target net by as much as 33% compared to traditional full-swing signaling. The logic-0 voltage value is programmable through control bits. If desired, the reduced-swing mode can also be disabled. The approach assumes that the logic-0 voltage value is always less than the threshold voltage of the nMOS receivers, which eliminates the need for low-to-high voltage translation. The reduced noise margin and the increased leakage on the receiver transistors using this approach have been addressed through the selective usage of multi-threshold voltage (MTV) devices and the programmability of the low voltage value.

  10. An algorithm for reduction of extracted power from photovoltaic strings in grid-tied photovoltaic power plants during voltage sags

    DEFF Research Database (Denmark)

    Tafti, Hossein Dehghani; Maswood, Ali Iftekhar; Pou, Josep

    2016-01-01

    Due to the high penetration of installed distributed generation units in the power system, the injection of reactive power is required for medium-scale and large-scale grid-connected photovoltaic power plants (PVPPs). Because of the current limitation of the grid-connected inverter, … the extracted power from the PV strings should be reduced during voltage sags. In this paper, an algorithm is proposed for determining the reference voltage of the PV string which results in a reduction of the output power to a certain amount. The proposed algorithm calculates the reference voltage for the dc/dc converter controller based on the characteristics of the power-voltage curve of the PV string; therefore, no modification is required in the controller of the dc/dc converter. Simulation results on a 50-kW PV string verify the effectiveness of the proposed algorithm in reducing the power from PV strings under …
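
    A minimal sketch of the reference-voltage calculation, assuming a sampled power-voltage curve and curtailment on the open-circuit side of the MPP; the curve shape and power levels below are placeholders rather than the paper's 50-kW string model.

```python
import numpy as np

def reference_voltage(v_curve, p_curve, p_target):
    """Reference voltage that limits the string output to p_target, chosen on the
    open-circuit side of the MPP where dP/dV < 0 (a common region for power curtailment).
    """
    i_mpp = int(np.argmax(p_curve))
    v_right, p_right = v_curve[i_mpp:], p_curve[i_mpp:]
    if p_target >= p_right[0]:
        return float(v_curve[i_mpp])                 # cannot exceed the available MPP power
    # p_right decreases monotonically toward V_oc, so interpolate on the reversed arrays.
    return float(np.interp(p_target, p_right[::-1], v_right[::-1]))

# Toy P-V curve (a placeholder shape, not a real 50-kW string model).
v = np.linspace(0.0, 600.0, 601)
p = 50e3 * np.sin(np.pi * v / 600.0) ** 1.5          # peaks near 300 V at roughly 50 kW
print(reference_voltage(v, p, p_target=30e3))        # voltage that curtails output to 30 kW
```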

  11. A new way of estimating compute-boundedness and its application to dynamic voltage scaling

    DEFF Research Database (Denmark)

    Venkatachalam, Vasanth; Franz, Michael; Probst, Christian W.

    2007-01-01

    Many dynamic voltage scaling algorithms rely on measuring hardware events (such as cache misses) for predicting how much a workload can be slowed down with acceptable performance loss. The events measured, however, are at best indirectly related to execution time and clock frequency. By relating these two indicators logically, we propose a new way of predicting a workload's compute-boundedness that is based on direct observation, and only requires measuring the total execution cycles for the two highest clock frequencies. Our predictor can be used to develop dynamic voltage scaling algorithms …
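
    A small sketch of the two-frequency predictor, under the usual assumption that memory-stall time is independent of clock frequency while on-chip work scales with it; the cycle counts and frequencies below are illustrative, not measurements from the paper.

```python
def compute_boundedness(f1_hz, cycles_f1, f2_hz, cycles_f2):
    """Fraction of execution that scales with clock frequency (compute-bound fraction).

    Model: total cycles at frequency f are c(f) = C_cpu + f * T_mem, where C_cpu is the
    frequency-scaled (on-chip) work and T_mem is off-chip time that does not speed up with
    the clock. Two measurements at the two highest frequencies pin down both unknowns.
    """
    t_mem = (cycles_f1 - cycles_f2) / (f1_hz - f2_hz)     # seconds spent waiting on memory
    c_cpu = cycles_f1 - f1_hz * t_mem                     # cycles of genuinely on-chip work
    return c_cpu / cycles_f1                              # boundedness at the top frequency

# Illustrative numbers only: 2.0 GHz vs 1.8 GHz runs of the same workload.
beta = compute_boundedness(2.0e9, 1.30e9, 1.8e9, 1.25e9)
print(f"compute-bound fraction ~ {beta:.2f}")             # ~0.62 here; 1.0 = fully CPU-bound
```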

  12. Square-Wave Voltage Injection Algorithm for PMSM Position Sensorless Control With High Robustness to Voltage Errors

    DEFF Research Database (Denmark)

    Ni, Ronggang; Xu, Dianguo; Blaabjerg, Frede

    2017-01-01

    Rotor position estimated with high-frequency (HF) voltage injection methods can be distorted by voltage errors due to inverter nonlinearities, motor resistance, and rotational voltage drops, etc. This paper proposes an improved HF square-wave voltage injection algorithm, which is robust to voltage errors without any compensation and has less fluctuation in the position estimation error. The average position estimation error is investigated based on the analysis of phase harmonic inductances, and deduced in the form of the phase shift of the second-order harmonic inductances to derive its relationship with the magnetic field distortion. Position estimation errors caused by higher-order harmonic inductances and voltage harmonics generated by the SVPWM are also discussed. Both simulations and experiments are carried out based on a commercial PMSM to verify the superiority of the proposed method …

  13. Reproducible and controllable induction voltage adder for scaled beam experiments

    Energy Technology Data Exchange (ETDEWEB)

    Sakai, Yasuo; Nakajima, Mitsuo; Horioka, Kazuhiko [Department of Energy Sciences, Tokyo Institute of Technology, 4259 Nagatsuta, Midori-ku, Yokohama 226-8502 (Japan)

    2016-08-15

    A reproducible and controllable induction adder was developed using solid-state switching devices and Finemet cores for scaled beam compression experiments. A gate controlled MOSFET circuit was developed for the controllable voltage driver. The MOSFET circuit drove the induction adder at low magnetization levels of the cores which enabled us to form reproducible modulation voltages with jitter less than 0.3 ns. Preliminary beam compression experiments indicated that the induction adder can improve the reproducibility of modulation voltages and advance the beam physics experiments.

  14. Energy reduction through voltage scaling and lightweight checking

    Science.gov (United States)

    Kadric, Edin

    As the semiconductor roadmap reaches smaller feature sizes and the end of Dennard scaling, design goals change, and managing the power envelope often dominates delay minimization. Voltage scaling remains a powerful tool to reduce energy. We find that it results in about 60% geomean energy reduction on top of other common low-energy optimizations with 22nm CMOS technology. However, when voltage is reduced, it becomes easier for noise and particle strikes to upset a node, potentially causing Silent Data Corruption (SDC). The 60% energy reduction, therefore, comes with a significant drop in reliability. Duplication with checking and triple-modular redundancy are traditional approaches used to combat transient errors, but spending 2-3x the energy for redundant computation can diminish or reverse the benefits of voltage scaling. As an alternative, we explore the opportunity to use checking operations that are cheaper than the base computation they are guarding. We devise a classification system for applications and their lightweight checking characteristics. In particular, we identify and evaluate the effectiveness of lightweight checks in a broad set of common tasks in scientific computing and signal processing. We find that the lightweight checks cost only a fraction of the base computation (0-25%) and allow us to recover the reliability losses from voltage scaling. Overall, we show about 50% net energy reduction without compromising reliability compared to operation at the nominal voltage. We use FPGAs (Field-Programmable Gate Arrays) in our work, although the same ideas can be applied to different systems. On top of voltage scaling, we explore other common low-energy techniques for FPGAs: transmission gates, gate boosting, power gating, low-leakage (high-Vth) processes, and dual-Vdd architectures. We do not scale voltage for memories, so lower voltages help us reduce logic and interconnect energy, but not memory energy. At lower voltages, memories become dominant
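
    As one concrete example of a check that is far cheaper than the computation it guards (in the spirit of the lightweight checks discussed above, though not necessarily one of the checks used in this work), a Freivalds-style randomized test verifies an O(n^3) matrix product with O(n^2) work:

```python
import numpy as np

def freivalds_check(a, b, c, trials=3, rng=None):
    """Randomized O(n^2) check that c == a @ b, guarding an O(n^3) base computation.

    Multiplying by a random 0/1 vector costs far less than recomputing the product, so an
    upset (e.g. from low-voltage operation) that corrupts c is caught with probability
    at least 1 - 2**(-trials) while spending only a small fraction of the base energy.
    """
    rng = rng or np.random.default_rng()
    n = c.shape[1]
    for _ in range(trials):
        r = rng.integers(0, 2, size=n).astype(float)      # random 0/1 vector
        if not np.allclose(a @ (b @ r), c @ r):
            return False                                   # corruption detected -> recompute
    return True

a, b = np.random.rand(200, 200), np.random.rand(200, 200)
c = a @ b
print(freivalds_check(a, b, c))          # True: result accepted
c[17, 42] += 1.0                         # inject a silent data corruption
print(freivalds_check(a, b, c))          # detected with high probability -> False
```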

  15. The Effect of Swarming on a Voltage Potential-Based Conflict Resolution Algorithm

    NARCIS (Netherlands)

    Maas, J.B.; Sunil, E.; Ellerbroek, J.; Hoekstra, J.M.; Tra, M.A.P.

    2016-01-01

    Several conflict resolution algorithms for airborne self-separation rely on principles derived from the repulsive forces that exist between similarly charged particles. This research investigates whether the performance of the Modified Voltage Potential algorithm, which is based on this principle,

  16. Optimal dynamic voltage scaling for wireless sensor nodes with real-time constraints

    Science.gov (United States)

    Cassandras, Christos G.; Zhuang, Shixin

    2005-11-01

    Sensors are increasingly embedded in manufacturing systems and wirelessly networked to monitor and manage operations ranging from process and inventory control to tracking equipment and even post-manufacturing product monitoring. In building such sensor networks, a critical issue is the limited and hard to replenish energy in the devices involved. Dynamic voltage scaling is a technique that controls the operating voltage of a processor to provide desired performance while conserving energy and prolonging the overall network's lifetime. We consider such power-limited devices processing time-critical tasks which are non-preemptive, aperiodic and have uncertain arrival times. We treat voltage scaling as a dynamic optimization problem whose objective is to minimize energy consumption subject to hard or soft real-time execution constraints. In the case of hard constraints, we build on prior work (which engages a voltage scaling controller at task completion times) by developing an intra-task controller that acts at all arrival times of incoming tasks. We show that this optimization problem can be decomposed into two simpler ones whose solution leads to an algorithm that does not actually require solving any nonlinear programming problems. In the case of soft constraints, this decomposition must be partly relaxed, but it still leads to a scalable (linear in the number of tasks) algorithm. Simulation results are provided to illustrate performance improvements in systems with intra-task controllers compared to uncontrolled systems or those using inter-task control.

  17. Scaling Sparse Matrices for Optimization Algorithms

    OpenAIRE

    Gajulapalli Ravindra S; Lasdon Leon S

    2006-01-01

    To iteratively solve large-scale optimization problems in various contexts like planning, operations, design etc., we need to generate descent directions that are based on linear system solutions. Irrespective of the optimization algorithm or the solution method employed for the linear systems, ill-conditioning introduced by problem characteristics, the algorithm, or both needs to be addressed. In [GL01] we used an intuitive heuristic approach in scaling linear systems that improved performan...

  18. A new algorithm for optimum voltage and reactive power control for minimizing transmission lines losses

    International Nuclear Information System (INIS)

    Ghoudjehbaklou, H.; Danai, B.

    2001-01-01

    Reactive power dispatch for voltage profile modification has been of interest to power utilities. Usually local bus voltages can be altered by changing generator voltages, reactive shunts, ULTC transformers and SVCs. Determination of optimum values for the control parameters, however, is not simple for modern power system networks. Heuristic and rather intelligent algorithms have to be sought. In this paper a new algorithm is proposed that is based on a variant of a genetic algorithm combined with simulated annealing updates. In this algorithm a fuzzy multi-objective approach is used for the fitness function of the genetic algorithm. This fuzzy multi-objective function can efficiently modify the voltage profile in order to minimize transmission line losses, thus reducing the operating costs. The reason for such a combination is to utilize the best characteristics of each method and overcome their deficiencies. The proposed algorithm is much faster than the classical genetic algorithm and can be easily integrated into existing power utilities' software. The proposed algorithm is tested on an actual system model of 1284 buses, 799 lines, 1175 fixed and ULTC transformers, 86 generators, 181 controllable shunts and 425 loads

  19. Algorithm 896: LSA: Algorithms for Large-Scale Optimization

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan

    2009-01-01

    Roč. 36, č. 3 (2009), 16-1-16-29 ISSN 0098-3500 R&D Projects: GA AV ČR IAA1030405; GA ČR GP201/06/P397 Institutional research plan: CEZ:AV0Z10300504 Keywords: algorithms * design * large-scale optimization * large-scale nonsmooth optimization * large-scale nonlinear least squares * large-scale nonlinear minimax * large-scale systems of nonlinear equations * sparse problems * partially separable problems * limited-memory methods * discrete Newton methods * quasi-Newton methods * primal interior-point methods Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.904, year: 2009

  20. Linear scaling of density functional algorithms

    International Nuclear Information System (INIS)

    Stechel, E.B.; Feibelman, P.J.; Williams, A.R.

    1993-01-01

    An efficient density functional algorithm (DFA) that scales linearly with system size will revolutionize electronic structure calculations. Density functional calculations are reliable and accurate in determining many condensed matter and molecular ground-state properties. However, because current DFAs, including methods related to that of Car and Parrinello, scale with the cube of the system size, density functional studies are not routinely applied to large systems. Linear scaling is achieved by constructing functions that are both localized and fully occupied, thereby eliminating the need to calculate global eigenfunctions. It is, however, widely believed that exponential localization requires the existence of an energy gap between the occupied and unoccupied states. Despite this, the authors demonstrate that linear scaling can still be achieved for metals. Using a linear scaling algorithm, they have explicitly constructed localized, almost fully occupied orbitals for the quintessential metallic system, jellium. The algorithm is readily generalizable to any system geometry and Hamiltonian. They will discuss the conceptual issues involved, convergence properties and scaling for their new algorithm

  1. Eliminating harmonics in line to line voltage using genetic algorithm using multilevel inverter

    Energy Technology Data Exchange (ETDEWEB)

    Gunasekaran, R. [Excel College of Engineering and Technology, Komarapalayam (India). Electrical and Electronics Engineering; Karthikeyan, C. [K.S. Rangasamy College of Engineering, Tamil Nadu (India). Electrical and Electronics Engineering

    2017-04-15

    In this project the total harmonic distortion (THD) minimization of a multilevel inverter's output voltage is discussed. The approach to reducing the harmonic content of the inverter output voltage is THD elimination. The switching angles are varied with the fundamental frequency so that the output THD is minimized. In three-phase applications, the line voltage harmonics are of main concern from the load point of view. Using a genetic algorithm, a THD minimization process is applied directly to the line-to-line voltage of the inverter. The genetic algorithm (GA) allows the determination of the optimized parameters and consequently an optimal operating point of the circuit, and a wide pass band with unity gain is obtained.
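
    A minimal sketch of the underlying optimization: the line-to-line THD of a staircase multilevel waveform is evaluated from the switching angles, and the angles are searched to minimize it. A crude random search stands in for the paper's genetic algorithm, and the number of angles is an arbitrary choice.

```python
import numpy as np

def line_thd(angles_rad, max_harmonic=49):
    """THD of the line-to-line voltage of a multilevel inverter for given switching angles.

    Phase-voltage harmonics of a staircase waveform: V_n is proportional to
    (1/n) * sum_i cos(n * theta_i) for odd n. Triplen harmonics cancel in the line-to-line
    voltage, so only non-triplen odd harmonics enter the THD.
    """
    a = np.asarray(angles_rad)

    def vn(n):
        return np.sum(np.cos(n * a)) / n

    v1 = vn(1)
    harmonics = [n for n in range(5, max_harmonic + 1, 2) if n % 3 != 0]
    return np.sqrt(sum(vn(n) ** 2 for n in harmonics)) / abs(v1)

# Crude random search over angles as a stand-in for the paper's genetic algorithm.
rng = np.random.default_rng(1)
best, best_thd = None, np.inf
for _ in range(5000):
    cand = np.sort(rng.uniform(0, np.pi / 2, size=5))     # 5 angles, e.g. an 11-level inverter
    thd = line_thd(cand)
    if thd < best_thd:
        best, best_thd = cand, thd
print(np.degrees(best).round(2), f"THD = {best_thd:.4f}")
```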

  2. Stabilization Algorithms for Large-Scale Problems

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg

    2006-01-01

    The focus of the project is on stabilization of large-scale inverse problems where structured models and iterative algorithms are necessary for computing approximate solutions. For this purpose, we study various iterative Krylov methods and their abilities to produce regularized solutions. Some … L-curve. This heuristic is implemented as part of a larger algorithm which is developed in collaboration with G. Rodriguez and P. C. Hansen. Last, but not least, a large part of the project has, in different ways, revolved around the object-oriented Matlab toolbox MOORe Tools developed by PhD Michael Jacobsen. New...

  3. One Terminal Digital Algorithm for Adaptive Single Pole Auto-Reclosing Based on Zero Sequence Voltage

    Directory of Open Access Journals (Sweden)

    S. Jamali

    2008-10-01

    This paper presents an algorithm for adaptive determination of the dead time during transient arcing faults and blocking of automatic reclosing during permanent faults on overhead transmission lines. The discrimination between transient and permanent faults is made by the zero-sequence voltage measured at the relay point. If the fault is recognised as an arcing one, then the third harmonic of the zero-sequence voltage is used to evaluate the extinction time of the secondary arc and to initiate the reclosing signal. The significant advantage of this algorithm is that it uses an adaptive threshold level and therefore its performance is independent of fault location, line parameters and the system operating conditions. The proposed algorithm has been successfully tested under a variety of fault locations and load angles on a 400 kV overhead line using the Electro-Magnetic Transient Program (EMTP). The test results validate the algorithm's ability to determine the secondary arc extinction time during transient faults as well as to block unsuccessful automatic reclosing during permanent faults.
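
    A simplified sketch of the arc-extinction check described above: the third-harmonic content of the zero-sequence voltage is computed over a one-cycle window and compared against a threshold (a fixed number here, whereas the paper derives the threshold adaptively). The sampling rate and signal amplitudes are assumptions for illustration.

```python
import numpy as np

F_NOMINAL = 50.0          # Hz, as in the paper's 400 kV system
FS = 3200.0               # assumed sampling rate: 64 samples per fundamental cycle

def third_harmonic(v0_window, fs=FS, f0=F_NOMINAL):
    """Magnitude of the 3rd harmonic of the zero-sequence voltage over one data window."""
    n = len(v0_window)
    t = np.arange(n) / fs
    phasor = np.exp(-2j * np.pi * 3 * f0 * t)
    return 2.0 * abs(np.dot(v0_window, phasor)) / n

def arc_extinguished(v0_window, threshold):
    """Secondary-arc extinction indicator: the arc's nonlinearity produces third-harmonic
    content in v0, so once that content collapses below the threshold the arc is taken to
    be extinguished and reclosing can be initiated.
    """
    return third_harmonic(v0_window) < threshold

# One fundamental cycle of zero-sequence voltage, during arcing and after extinction.
t = np.arange(int(FS / F_NOMINAL)) / FS
arcing = 5e3 * np.sin(2 * np.pi * F_NOMINAL * t) + 1.2e3 * np.sin(2 * np.pi * 3 * F_NOMINAL * t)
after = 5e3 * np.sin(2 * np.pi * F_NOMINAL * t) + 30.0 * np.sin(2 * np.pi * 3 * F_NOMINAL * t)
print(arc_extinguished(arcing, threshold=300.0), arc_extinguished(after, threshold=300.0))
```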

  4. Voltage stability index based optimal placement of static VAR compensator and sizing using Cuckoo search algorithm

    Science.gov (United States)

    Venkateswara Rao, B.; Kumar, G. V. Nagesh; Chowdary, D. Deepak; Bharathi, M. Aruna; Patra, Stutee

    2017-07-01

    This paper presents a new metaheuristic algorithm, the cuckoo search algorithm (CSA), for solving the optimal power flow (OPF) problem with minimization of real power generation cost. The CSA is found to be the most efficient algorithm for solving single-objective optimal power flow problems. The CSA's performance is tested on the IEEE 57-bus test system with real power generation cost minimization as the objective function. The static VAR compensator (SVC) is one of the best shunt-connected devices in the Flexible Alternating Current Transmission System (FACTS) family. It is capable of controlling the voltage magnitudes of buses by injecting reactive power into the system. In this paper the SVC is integrated into the CSA-based optimal power flow to optimize the real power generation cost. The SVC is used to improve the voltage profile of the system. CSA gives better results than the genetic algorithm (GA) both without and with the SVC.

  5. Special Issue on Time Scale Algorithms

    Science.gov (United States)

    2008-01-01

    This special issue of Metrologia (Metrologia 45 (2008), doi:10.1088/0026-1394/45/6/E01) presents selected papers from the Fifth International Time Scale Algorithm Symposium (VITSAS), including some of the tutorials presented on the first day. The symposium was attended by students as well as senior scientists, and hosted by the Real Instituto y Observatorio de la Armada (ROA) in San Fernando, Spain, whose staff further enhanced their nation's high reputation for hospitality.

  6. Optimizing real power loss and voltage stability limit of a large transmission network using firefly algorithm

    Directory of Open Access Journals (Sweden)

    P. Balachennaiah

    2016-06-01

    This paper proposes a firefly algorithm based technique to optimize the control variables for simultaneous optimization of real power loss and voltage stability limit of the transmission system. Mathematically, this issue can be formulated as a nonlinear equality- and inequality-constrained optimization problem with an objective function integrating both real power loss and voltage stability limit. Transformer taps and the unified power flow controller and its parameters have been included as control variables in the problem formulation. The effectiveness of the proposed algorithm has been tested on the New England 39-bus system. Simulation results obtained with the proposed algorithm are compared with the real-coded genetic algorithm for the single objective of real power loss minimization and the multi-objective of real power loss minimization and voltage stability limit maximization. Also, a classical optimization method known as the interior point successive linear programming technique is considered to compare the results of the firefly algorithm for the single objective of real power loss minimization. Simulation results confirm the potential of the proposed algorithm in solving optimization problems.

  7. Large-scale sequential quadratic programming algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Eldersveld, S.K.

    1992-09-01

    The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having available the full Hessian approximation is studied and alternative estimates are constructed. 2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.

  8. A new home energy management algorithm with voltage control in a smart home environment

    International Nuclear Information System (INIS)

    Elma, Onur; Selamogullari, Ugur Savas

    2015-01-01

    Energy management in electrical systems is one of the important issues for energy efficiency and future grid systems. On the residential consumer side, energy management is realized by a HEM (home energy management) system. The HEM system plays a key role in residential demand response applications. In this study, a new HEM algorithm is proposed for smart home environments to reduce peak demand and increase energy efficiency. The proposed algorithm includes a VC (voltage control) methodology to reduce the power consumption of residential appliances so that the shifting of appliances is minimized. The results of a survey are used to produce representative load profiles for a weekday and for a weekend. Then, case studies are completed to test the proposed HEM algorithm in reducing the peak demand in the house. The main aim of the proposed HEM algorithm is to minimize the number of turned-off appliances to decrease demand so that customer comfort is maximized. The smart home laboratory at Yildiz Technical University, Istanbul, Turkey is used in the case studies. Experimental results show that the proposed HEM algorithm reduces the peak demand by 17.5% with voltage control and by 38% with both voltage control and appliance shifting. - Highlights: • A new HEM (home energy management) algorithm is proposed. • Voltage control in the HEM is introduced as a solution for peak load reduction. • Customer comfort is maximized by minimizing the number of turned-off appliances. • The proposed HEM algorithm is experimentally validated at a smart home laboratory. • A survey is completed to produce typical load profiles of a Turkish family.
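
    The sketch below illustrates why voltage control reduces demand, using a generic exponential (voltage-dependent) load model; the appliance list, ratings and exponents are illustrative assumptions, not the measured profiles from the smart home laboratory.

```python
# Exponential (voltage-dependent) load model: P = P_rated * (V / V_nominal) ** n_p.
# The appliances and exponents below are illustrative placeholders only.
APPLIANCES = {               # name: (rated power in W, voltage exponent n_p)
    "incandescent_lighting": (300.0, 1.6),
    "resistive_heater":      (2000.0, 2.0),
    "refrigerator":          (150.0, 0.8),
}

def demand_at_voltage(v_pu):
    """Total household demand (W) when the supply voltage is scaled to v_pu per unit."""
    return sum(p * v_pu ** n for p, n in APPLIANCES.values())

base = demand_at_voltage(1.00)
reduced = demand_at_voltage(0.92)            # e.g. regulate the supply down to 92% of nominal
print(f"peak demand {base:.0f} W -> {reduced:.0f} W ({100 * (1 - reduced / base):.1f}% reduction)")
```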

  9. Actuator Location and Voltages Optimization for Shape Control of Smart Beams Using Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Georgios E. Stavroulakis

    2013-10-01

    This paper presents a numerical study on optimal voltages and optimal placement of piezoelectric actuators for shape control of beam structures. A finite element model, based on Timoshenko beam theory, is developed to characterize the behavior of the structure and the actuators. This model accounts for the electromechanical coupling in the entire beam structure, due to the fact that the piezoelectric layers are treated as constituent parts of the entire structural system. A hybrid scheme is presented based on the great deluge and genetic algorithms. The hybrid algorithm is implemented to calculate the optimal locations and optimal values of voltages, applied to the piezoelectric actuators glued to the structure, which minimize the error between the achieved and the desired shape. Results from numerical simulations demonstrate the capabilities and efficiency of the developed optimization algorithm in both clamped-free and clamped-clamped beam problems.

  10. Extreme-scale Algorithms and Solver Resilience

    Energy Technology Data Exchange (ETDEWEB)

    Dongarra, Jack [Univ. of Tennessee, Knoxville, TN (United States)

    2016-12-10

    A widening gap exists between the peak performance of high-performance computers and the performance achieved by complex applications running on these platforms. Over the next decade, extreme-scale systems will present major new challenges to algorithm development that could amplify this mismatch in such a way that it prevents the productive use of future DOE Leadership computers, due to the following: extreme levels of parallelism due to multicore processors; an increase in system fault rates requiring algorithms to be resilient beyond just checkpoint/restart; complex memory hierarchies and costly data movement in both energy and performance; heterogeneous system architectures (mixing CPUs, GPUs, etc.); and conflicting goals of performance, resilience, and power requirements.

  11. A novel current mode controller for a static compensator utilizing Goertzel algorithm to mitigate voltage sags

    International Nuclear Information System (INIS)

    Najafi, E.; Yatim, A.H.M.

    2011-01-01

    Research highlights: → We propose a new current control method for STATCOM. → The current control method maintains a fixed switching frequency. → It also produces fewer harmonics compared to the conventional hysteresis method. → A new voltage dip (sag) detection method is used in the STATCOM. → The control method can mitigate voltage sags in each phase separately. -- Abstract: The static compensator (STATCOM) has been widely proposed for power quality and network stability improvement. It is easily connected in parallel to the electric network and has many advantages for electrical grids. It can improve network stability, power factor and power transfer rating, and can avoid some disturbances such as sags and swells. Most STATCOM controllers are voltage controllers based on the balanced d-q transform. However, these are not thorough solutions for network disturbances, since in most cases single-phase disturbances occur in electrical networks and cannot be avoided by the conventional controllers. Voltage-mode controllers are also not capable of responding fast enough to the changes expected in a network system. This paper proposes a new current-mode controller to overcome the mentioned problem. The approach uses a fixed-frequency current controller to maintain voltage levels during voltage sags (dips). The approach is also simple and can be easily implemented digitally. It has superior performance over conventional methods in terms of harmonic reduction in the STATCOM output current. Another important factor for STATCOM effectiveness in sag mitigation is its sag detection method. This paper also introduces a new sag detection method based on the Goertzel algorithm, which is both effective and simple for practical applications. The simulation results presented illustrate the superiority of the proposed controller and sag detection algorithm for use in the STATCOM.
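
    A minimal sketch of Goertzel-based sag detection: the Goertzel recursion extracts the fundamental-frequency amplitude of the supply voltage from one cycle of samples, and a sag is flagged when that amplitude drops below a chosen fraction of rated. The sampling rate, nominal frequency and sag threshold are assumptions for illustration.

```python
import math

def goertzel_magnitude(samples, sample_rate, target_freq):
    """Goertzel algorithm: amplitude of a single frequency component of `samples`.

    Much cheaper than a full FFT when only one component (here the fundamental of the
    supply voltage) is needed, which is why it suits a fast sag-detection loop.
    """
    n = len(samples)
    k = round(n * target_freq / sample_rate)
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    power = s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2
    return math.sqrt(max(power, 0.0)) * 2.0 / n   # amplitude of the target component

def is_sag(samples, sample_rate, f_nominal=50.0, v_rated_peak=1.0, depth=0.9):
    """Flag a voltage sag when the fundamental amplitude drops below `depth` of rated."""
    return goertzel_magnitude(samples, sample_rate, f_nominal) < depth * v_rated_peak

# One 50 Hz cycle sampled at 3.2 kHz, with the amplitude dipping to 0.7 pu.
fs, f0 = 3200, 50.0
healthy = [math.sin(2 * math.pi * f0 * i / fs) for i in range(64)]
sagging = [0.7 * v for v in healthy]
print(is_sag(healthy, fs), is_sag(sagging, fs))   # False True
```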

  12. A Decentralized Multivariable Robust Adaptive Voltage and Speed Regulator for Large-Scale Power Systems

    Science.gov (United States)

    Okou, Francis A.; Akhrif, Ouassima; Dessaint, Louis A.; Bouchard, Derrick

    2013-05-01

    This paper introduces a decentralized multivariable robust adaptive voltage and frequency regulator to ensure the stability of large-scale interconnected generators. Interconnection parameters (i.e. load, line and transformer parameters) are assumed to be unknown. The proposed design approach requires the reformulation of conventional power system models into a multivariable model with generator terminal voltages as state variables, and excitation and turbine valve inputs as control signals. This model, while suitable for the application of modern control methods, introduces problems with regard to current design techniques for large-scale systems. Interconnection terms, which are treated as perturbations, do not meet the common matching-condition assumption. A new adaptive method for a certain class of large-scale systems is therefore introduced that does not require the matching condition. The proposed controller consists of nonlinear inputs that cancel some nonlinearities of the model. Auxiliary controls with linear and nonlinear components are used to stabilize the system. They compensate for unknown parameters of the model by updating both the nonlinear component gains and the excitation parameters. The adaptation algorithms involve the sigma-modification approach for the auxiliary control gains, and the projection approach for the excitation parameters to prevent estimation drift. The computation of the matrix gain of the controller's linear component requires the resolution of an algebraic Riccati equation and helps to solve the perturbation-mismatching problem. A realistic power system is used to assess the proposed controller's performance. The results show that both stability and transient performance are considerably improved following a severe contingency.

  13. DC Voltage Droop Control Implementation in the AC/DC Power Flow Algorithm: Combinational Approach

    DEFF Research Database (Denmark)

    Akhter, F.; Macpherson, D.E.; Harrison, G.P.

    2015-01-01

    … of operational flexibility, as more than one VSC station controls the DC link voltage of the MTDC system. This model enables the study of the effects of DC droop control on the power flows of the combined AC/DC system in steady-state studies after VSC station outages or transient conditions, without needing to use its complete dynamic model. Further, the proposed approach can be extended to include multiple AC and DC grids for combined AC/DC power flow analysis. The algorithm is implemented by modifying the MATPOWER-based MATACDC program, and the results show that the algorithm works efficiently.
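
    A toy illustration of how a droop characteristic enters the DC power-flow equations: one converter follows P = P_ref - (U_dc - U_ref)/k while the other injects fixed power, and a small Newton-Raphson loop solves the resulting DC network. The two-bus system and all per-unit values are assumptions, not the MATACDC test cases.

```python
import numpy as np

# Two-terminal VSC-HVDC example in per unit. Converter 1 is DC-voltage droop controlled,
# converter 2 injects a fixed power into the DC grid (all values are illustrative).
G = 20.0                                     # DC line conductance (pu)
P_REF1, U_REF1, K_DROOP = 0.0, 1.0, 0.05     # droop: P1 = P_ref - (U1 - U_ref) / k
P2 = -0.8                                    # converter 2 draws 0.8 pu from the DC link

def mismatch(u):
    u1, u2 = u
    p1 = P_REF1 - (u1 - U_REF1) / K_DROOP        # droop characteristic of converter 1
    return np.array([
        p1 - u1 * G * (u1 - u2),                 # power balance at DC bus 1
        P2 - u2 * G * (u2 - u1),                 # power balance at DC bus 2
    ])

# Newton-Raphson with a finite-difference Jacobian, as in a standard power-flow loop.
u = np.array([1.0, 1.0])
for _ in range(20):
    f = mismatch(u)
    if np.max(np.abs(f)) < 1e-10:
        break
    jac = np.empty((2, 2))
    for j in range(2):
        du = np.zeros(2)
        du[j] = 1e-7
        jac[:, j] = (mismatch(u + du) - f) / 1e-7
    u = u - np.linalg.solve(jac, f)

print("DC bus voltages (pu):", u.round(4))
print("droop converter output (pu):", round(P_REF1 - (u[0] - U_REF1) / K_DROOP, 4))
```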

  14. EDITORIAL: Special issue on time scale algorithms

    Science.gov (United States)

    Matsakis, Demetrios; Tavella, Patrizia

    2008-12-01

    This special issue of Metrologia presents selected papers from the Fifth International Time Scale Algorithm Symposium (VITSAS), including some of the tutorials presented on the first day. The symposium was attended by 76 persons, from every continent except Antarctica, by students as well as senior scientists, and hosted by the Real Instituto y Observatorio de la Armada (ROA) in San Fernando, Spain, whose staff further enhanced their nation's high reputation for hospitality. Although a timescale can be simply defined as a weighted average of clocks, whose purpose is to measure time better than any individual clock, timescale theory has long been and continues to be a vibrant field of research that has both followed and helped to create advances in the art of timekeeping. There is no perfect timescale algorithm, because every one embodies a compromise involving user needs. Some users wish to generate a constant frequency, perhaps not necessarily one that is well-defined with respect to the definition of a second. Other users might want a clock which is as close to UTC or a particular reference clock as possible, or perhaps wish to minimize the maximum variation from that standard. In contrast to the steered timescales that would be required by those users, other users may need free-running timescales, which are independent of external information. While no algorithm can meet all these needs, every algorithm can benefit from some form of tuning. The optimal tuning, and even the optimal algorithm, can depend on the noise characteristics of the frequency standards, or of their comparison systems, the most precise and accurate of which are currently Two Way Satellite Time and Frequency Transfer (TWSTFT) and GPS carrier phase time transfer. The interest in time scale algorithms and its associated statistical methodology began around 40 years ago when the Allan variance appeared and when the metrological institutions started realizing ensemble atomic time using more than
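
    As a toy illustration of the "weighted average of clocks" idea mentioned above (and nothing more than that; real timescale algorithms work from clock-difference measurements and far more careful weighting), the sketch below combines three simulated clocks with weights inversely proportional to their assumed instabilities.

```python
import numpy as np

# Toy clock ensemble: each row is one clock's reading error (seconds) versus an ideal
# reference at successive measurement epochs. All numbers are made up for illustration.
clock_errors = np.array([
    [ 2e-9,  3e-9,  5e-9],     # clock A
    [-1e-9, -2e-9, -2e-9],     # clock B (most stable)
    [ 9e-9,  7e-9, 12e-9],     # clock C
])
instability = np.array([3e-9, 1e-9, 8e-9])    # assumed instability figure per clock

weights = 1.0 / instability ** 2
weights /= weights.sum()                       # normalise so the weights sum to one

# The ensemble timescale at each epoch is the weighted average of the member clocks, and
# should wander less than any single member as long as their noises are independent.
timescale = weights @ clock_errors
print(timescale)
```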

  15. Multidimensional Scaling Localization Algorithm in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Zhang Dongyang

    2014-02-01

    Because localization algorithms for large-scale wireless sensor networks have shortcomings in both positioning accuracy and time complexity compared with traditional localization algorithms, this paper presents a fast multidimensional scaling (MDS) localization algorithm. The algorithm proceeds in four steps: fast mapping initialization, fast mapping, and coordinate transformation produce schematic node coordinates and the initial coordinates for the MDS algorithm; an accurate estimate of the node coordinates is then obtained; and Procrustes analysis is used to align the coordinates and obtain the final position coordinates of the nodes. The paper gives specific implementation steps of the algorithm. Finally, the proposed algorithm is compared experimentally with stochastic algorithms and the classical MDS algorithm on concrete examples. Experimental results show that the proposed fast multidimensional scaling localization algorithm maintains positioning accuracy under certain conditions while greatly improving the speed of operation.
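
    For reference, the classical MDS step at the core of such localization schemes can be sketched as follows; the node layout is random toy data, and the anchor-based (Procrustes) alignment mentioned above is not included.

```python
import numpy as np

def classical_mds(d, dim=2):
    """Classical multidimensional scaling: recover coordinates (up to rotation/translation)
    from a matrix of pairwise distances d. This is the textbook MDS step that an alignment
    stage (e.g. Procrustes analysis against anchor nodes) would follow in a localization pipeline.
    """
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n          # centring matrix
    b = -0.5 * j @ (d ** 2) @ j                  # double-centred squared distances
    vals, vecs = np.linalg.eigh(b)
    idx = np.argsort(vals)[::-1][:dim]           # keep the largest eigenvalues
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

# Toy example: 6 sensor nodes on a plane; the recovered layout matches up to a rigid motion.
rng = np.random.default_rng(3)
true_xy = rng.uniform(0, 100, size=(6, 2))
dist = np.linalg.norm(true_xy[:, None, :] - true_xy[None, :, :], axis=-1)
est_xy = classical_mds(dist)
print(np.round(est_xy, 2))
```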

  16. DISTRIBUTION NETWORK RECONFIGURATION FOR POWER LOSS MINIMIZATION AND VOLTAGE PROFILE ENHANCEMENT USING ANT LION ALGORITHM

    Directory of Open Access Journals (Sweden)

    Maryam Shokouhi

    2017-06-01

    Distribution networks are designed as rings and operated in a radial form. Therefore, reconfiguration is a simple and cost-effective way to use existing facilities, without the need for any new equipment in distribution networks, to achieve various objectives such as power loss reduction, feeder overload reduction, load balancing, voltage profile improvement and reduction of the number of switching operations, subject to constraints, which ultimately results in power loss reduction. In this paper, a new method based on the Ant Lion algorithm (a modern meta-heuristic algorithm) is provided for the reconfiguration of distribution networks. Considering the extent of distribution networks, the complexity of their communication networks and the various parameters involved, using smart techniques is inevitable. The proposed approach is tested on the IEEE 33- and 69-bus radial standard distribution networks. The evaluation of the results in MATLAB shows the effectiveness of the Ant Lion algorithm for distribution network reconfiguration.

  17. Algorithmic foundation of multi-scale spatial representation

    CERN Document Server

    Li, Zhilin

    2006-01-01

    With the widespread use of GIS, multi-scale representation has become an important issue in the realm of spatial data handling. However, no book to date has systematically tackled the different aspects of this discipline. Emphasizing map generalization, Algorithmic Foundation of Multi-Scale Spatial Representation addresses the mathematical basis of multi-scale representation, specifically, the algorithmic foundation.Using easy-to-understand language, the author focuses on geometric transformations, with each chapter surveying a particular spatial feature. After an introduction to the essential operations required for geometric transformations as well as some mathematical and theoretical background, the book describes algorithms for a class of point features/clusters. It then examines algorithms for individual line features, such as the reduction of data points, smoothing (filtering), and scale-driven generalization, followed by a discussion of algorithms for a class of line features including contours, hydrog...

  18. Dynamic Uniform Scaling for Multiobjective Genetic Algorithms

    DEFF Research Database (Denmark)

    Pedersen, Gerulf; Goldberg, David E.

    2004-01-01

    Before Multiobjective Evolutionary Algorithms (MOEAs) can be used as a widespread tool for solving arbitrary real world problems there are some salient issues which require further investigation. One of these issues is how a uniform distribution of solutions along the Pareto non-dominated front c...

  19. Dynamic Uniform Scaling for Multiobjective Genetic Algorithms

    DEFF Research Database (Denmark)

    Pedersen, Gerulf; Goldberg, D.E.

    2004-01-01

    Before Multiobjective Evolutionary Algorithms (MOEAs) can be used as a widespread tool for solving arbitrary real world problems there are some salient issues which require further investigation. One of these issues is how a uniform distribution of solutions along the Pareto non-dominated front can...

  20. Proportional-Type Performance Recovery DC-Link Voltage Tracking Algorithm for Permanent Magnet Synchronous Generators

    Directory of Open Access Journals (Sweden)

    Seok-Kyoon Kim

    2017-09-01

    This study proposes a disturbance observer-based proportional-type DC-link voltage tracking algorithm for permanent magnet synchronous generators (PMSGs). The proposed technique feeds back only the proportional term of the tracking errors, and it contains the nominal static and dynamic feed-forward compensators coming from the first-order disturbance observers. It is rigorously proved that the proposed method ensures the performance recovery and offset-free properties without the use of integrators of the tracking errors. A wind power generation system has been simulated to verify the efficacy of the proposed method using the PSIM (PowerSIM) software with the DLL (Dynamic Link Library) block.

  1. Parallel clustering algorithm for large-scale biological data sets.

    Science.gov (United States)

    Wang, Minchao; Zhang, Wu; Ding, Wang; Dai, Dongbo; Zhang, Huiran; Xie, Hao; Chen, Luonan; Guo, Yike; Xie, Jiang

    2014-01-01

    The recent explosion of biological data brings a great challenge for traditional clustering algorithms. With the increasing scale of data sets, much larger memory and longer runtime are required for cluster identification problems. The affinity propagation algorithm outperforms many other classical clustering algorithms and is widely applied in biological research. However, its time and space complexity become a great bottleneck when handling large-scale data sets. Moreover, the similarity matrix, whose construction takes a long runtime, is required before running the affinity propagation algorithm, since the algorithm clusters data sets based on the similarities between data pairs. Two types of parallel architectures are proposed in this paper to accelerate the similarity matrix construction and the affinity propagation algorithm. A shared-memory architecture is used to construct the similarity matrix, and a distributed system is used for the affinity propagation algorithm, because of its large memory size and great computing capacity. An appropriate scheme for data partition and reduction is designed in our method in order to minimize the global communication cost among processes. A speedup of 100 is gained with 128 cores. The runtime is reduced from several hours to a few seconds, which indicates that the parallel algorithm is capable of handling large-scale data sets effectively. The parallel affinity propagation also achieves good performance when clustering large-scale gene (microarray) data and detecting families in large protein superfamilies.

  2. Stochastic Dual Algorithm for Voltage Regulation in Distribution Networks with Discrete Loads: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Dall-Anese, Emiliano [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Zhou, Xinyang [University of Colorado; Liu, Zhiyuan [University of Colorado; Chen, Lijun [University of Colorado

    2017-10-03

    This paper considers distribution networks with distributed energy resources and discrete-rate loads, and designs an incentive-based algorithm that allows the network operator and the customers to pursue given operational and economic objectives, while concurrently ensuring that voltages are within prescribed limits. Four major challenges include: (1) the non-convexity from discrete decision variables, (2) the non-convexity due to a Stackelberg game structure, (3) unavailable private information from customers, and (4) different update frequency from two types of devices. In this paper, we first make convex relaxation for discrete variables, then reformulate the non-convex structure into a convex optimization problem together with pricing/reward signal design, and propose a distributed stochastic dual algorithm for solving the reformulated problem while restoring feasible power rates for discrete devices. By doing so, we are able to statistically achieve the solution of the reformulated problem without exposure of any private information from customers. Stability of the proposed schemes is analytically established and numerically corroborated.

  3. A Multi-Scale Settlement Matching Algorithm Based on ARG

    Science.gov (United States)

    Yue, Han; Zhu, Xinyan; Chen, Di; Liu, Lingjia

    2016-06-01

    Homonymous entity matching is an important part of multi-source spatial data integration, automatic updating and change detection. Considering the low accuracy of existing methods in matching multi-scale settlement data, an algorithm based on the Attributed Relational Graph (ARG) is proposed. The algorithm first divides two settlement scenes at different scales into blocks using the small-scale road network and constructs local ARGs in each block. Then, it ascertains candidate sets by merging procedures and obtains the optimal matching pairs by iteratively comparing the similarity of the ARGs. Finally, the corresponding relations between settlements at large and small scales are identified. At the end of this article, a demonstration is presented and the results indicate that the proposed algorithm is capable of handling sophisticated cases.

  4. A Multi-Scale Settlement Matching Algorithm Based on ARG

    Directory of Open Access Journals (Sweden)

    H. Yue

    2016-06-01

    Homonymous entity matching is an important part of multi-source spatial data integration, automatic updating and change detection. Considering the low accuracy of existing methods in matching multi-scale settlement data, an algorithm based on the Attributed Relational Graph (ARG) is proposed. The algorithm first divides two settlement scenes at different scales into blocks using the small-scale road network and constructs local ARGs in each block. Then, it ascertains candidate sets by merging procedures and obtains the optimal matching pairs by iteratively comparing the similarity of the ARGs. Finally, the corresponding relations between settlements at large and small scales are identified. At the end of this article, a demonstration is presented and the results indicate that the proposed algorithm is capable of handling sophisticated cases.

  5. Optimal battery sizing in photovoltaic based distributed generation using enhanced opposition-based firefly algorithm for voltage rise mitigation.

    Science.gov (United States)

    Wong, Ling Ai; Shareef, Hussain; Mohamed, Azah; Ibrahim, Ahmad Asrul

    2014-01-01

    This paper presents the application of enhanced opposition-based firefly algorithm in obtaining the optimal battery energy storage systems (BESS) sizing in photovoltaic generation integrated radial distribution network in order to mitigate the voltage rise problem. Initially, the performance of the original firefly algorithm is enhanced by utilizing the opposition-based learning and introducing inertia weight. After evaluating the performance of the enhanced opposition-based firefly algorithm (EOFA) with fifteen benchmark functions, it is then adopted to determine the optimal size for BESS. Two optimization processes are conducted where the first optimization aims to obtain the optimal battery output power on hourly basis and the second optimization aims to obtain the optimal BESS capacity by considering the state of charge constraint of BESS. The effectiveness of the proposed method is validated by applying the algorithm to the 69-bus distribution system and by comparing the performance of EOFA with conventional firefly algorithm and gravitational search algorithm. Results show that EOFA has the best performance comparatively in terms of mitigating the voltage rise problem.

  6. Optimal Battery Sizing in Photovoltaic Based Distributed Generation Using Enhanced Opposition-Based Firefly Algorithm for Voltage Rise Mitigation

    Directory of Open Access Journals (Sweden)

    Ling Ai Wong

    2014-01-01

    Full Text Available This paper presents the application of the enhanced opposition-based firefly algorithm to obtaining the optimal battery energy storage system (BESS) size in a photovoltaic-generation-integrated radial distribution network, in order to mitigate the voltage rise problem. Initially, the performance of the original firefly algorithm is enhanced by utilizing opposition-based learning and introducing an inertia weight. After evaluating the performance of the enhanced opposition-based firefly algorithm (EOFA) on fifteen benchmark functions, it is adopted to determine the optimal size of the BESS. Two optimization processes are conducted: the first aims to obtain the optimal battery output power on an hourly basis, and the second aims to obtain the optimal BESS capacity while considering the state-of-charge constraint of the BESS. The effectiveness of the proposed method is validated by applying the algorithm to the 69-bus distribution system and by comparing the performance of EOFA with the conventional firefly algorithm and the gravitational search algorithm. Results show that EOFA has the best comparative performance in terms of mitigating the voltage rise problem.

  7. Composable Dynamic Voltage and Frequency Scaling and Power Management for Dataflow Applications

    NARCIS (Netherlands)

    Goossens, Kees; She, Dongrui; Milutinovic, A.; Molnos, Anca; Lopez, S.

    2010-01-01

    Composability means that the behaviour of an application, including its timing, is not affected by the absence or presence of other applications. It is required to be able to design, test, and verify applications independently. In this paper we define composable dynamic voltage and frequency scaling

  8. Composable dynamic voltage and frequency scaling and power management for dataflow applications

    NARCIS (Netherlands)

    Goossens, K.G.W.; She, D.; Milutinovic, A.; Molnos, A.M.

    2010-01-01

    Composability means that the behaviour of an application, including its timing, is not affected by the absence or presence of other applications. It is required to be able to design, test, and verify applications independently. In this paper we define composable dynamic voltage and frequency scaling

  9. HMC algorithm with multiple time scale integration and mass preconditioning

    Science.gov (United States)

    Urbach, C.; Jansen, K.; Shindler, A.; Wenger, U.

    2006-01-01

    We present a variant of the HMC algorithm with mass preconditioning (Hasenbusch acceleration) and multiple time scale integration. We have tested this variant for standard Wilson fermions at β=5.6 and at pion masses ranging from 380 to 680 MeV. We show that in this situation its performance is comparable to the recently proposed HMC variant with domain decomposition as preconditioner. We give an update of the "Berlin Wall" figure, comparing the performance of our variant of the HMC algorithm to other published performance data. Advantages of the HMC algorithm with mass preconditioning and multiple time scale integration are that it is straightforward to implement and can be used in combination with a wide variety of lattice Dirac operators.
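
    The lattice-QCD specifics (pseudofermions, Hasenbusch mass preconditioning, nested integrators) are far beyond a short listing, but the basic building block, a leapfrog HMC trajectory with a Metropolis accept/reject step, can be sketched on a toy Gaussian target as follows; the step size, trajectory length and target are illustrative choices only.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy target: standard 2D Gaussian, so U(q) = 0.5*|q|^2 and grad U = q.
def U(q):      return 0.5 * np.dot(q, q)
def grad_U(q): return q

def hmc_step(q, eps=0.2, n_leap=20):
    """One HMC trajectory: sample momentum, integrate with leapfrog,
    accept/reject with the Metropolis test on the total energy."""
    p = rng.standard_normal(q.shape)
    q_new, p_new = q.copy(), p.copy()
    p_new -= 0.5 * eps * grad_U(q_new)            # half step for momentum
    for _ in range(n_leap - 1):
        q_new += eps * p_new                      # full step for position
        p_new -= eps * grad_U(q_new)              # full step for momentum
    q_new += eps * p_new
    p_new -= 0.5 * eps * grad_U(q_new)            # final half step
    dH = (U(q_new) + 0.5 * np.dot(p_new, p_new)) - (U(q) + 0.5 * np.dot(p, p))
    return (q_new, True) if rng.random() < np.exp(-dH) else (q, False)

q = np.zeros(2)
accepted = 0
samples = []
for _ in range(2000):
    q, ok = hmc_step(q)
    accepted += ok
    samples.append(q.copy())
print("acceptance rate:", accepted / 2000)
print("sample variance (should be ~1):", np.var(np.array(samples), axis=0))
```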

  10. Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales

    Energy Technology Data Exchange (ETDEWEB)

    Xiu, Dongbin [Univ. of Utah, Salt Lake City, UT (United States)

    2017-03-03

    The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations on extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately, in high dimensional spaces, resolve stochastic problems with limited smoothness, even containing discontinuities.

  11. Regulation of Voltage and Frequency in Solid Oxide Fuel Cell-Based Autonomous Microgrids Using the Whales Optimisation Algorithm

    Directory of Open Access Journals (Sweden)

    Sajid Hussain Qazi

    2018-05-01

    Full Text Available This study explores a Whales Optimization Algorithm (WOA)-based PI controller for regulating the voltage and frequency of an inverter-based autonomous microgrid (MG). The MG comprises two 50 kW DGs (solid oxide fuel cells, SOFCs) interfaced through a power-electronics-based voltage source inverter (VSI) with a 120 kV conventional grid. Four PI controller schemes for the MG are implemented: (i) a stationary PI controller with fixed gain values (Kp and Ki), (ii) a PSO-tuned PI controller, (iii) a GWO-tuned PI controller, and (iv) a WOA-tuned PI controller. The performance of these controllers is evaluated by monitoring the system voltage and frequency during transitions of the MG operation mode and changes in the load. The MATLAB/SIMULINK tool is utilised to design the proposed model of the grid-tied MG, alongside MATLAB m-files to apply the optimisation techniques. The simulation results show that the WOA-based PI controller, which optimises the control parameters, achieves 62.7% and 59% better results for voltage and frequency regulation, respectively. An eigenvalue analysis is also provided to check the stability of the proposed controller. Furthermore, the proposed system satisfies the limits specified in IEEE 1547-2003 for voltage and frequency.
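
    As a hedged illustration of WOA-based PI tuning, the sketch below applies a textbook Whale Optimization loop to the ISE of the unit-step response of a toy first-order plant under PI control; the plant, the gain bounds and the swarm settings are assumptions and have nothing to do with the SOFC microgrid model of the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def ise_of_pi(gains, t_end=5.0, dt=5e-3):
    """Integral of squared error for a unit-step response of a toy
    first-order plant G(s) = 1/(s + 1) under PI control (Euler discretised)."""
    kp, ki = gains
    y, integ, ise = 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        e = 1.0 - y                      # unit step reference
        integ += e * dt
        u = kp * e + ki * integ          # PI control law
        y += dt * (-y + u)               # plant dynamics y' = -y + u
        ise += e * e * dt
    return ise

def woa(fitness, bounds, n_whales=20, n_iter=60):
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    X = rng.uniform(lo, hi, size=(n_whales, dim))
    scores = np.array([fitness(x) for x in X])
    best = X[scores.argmin()].copy()
    for it in range(n_iter):
        a = 2.0 - 2.0 * it / n_iter              # a decreases linearly from 2 to 0
        for i in range(n_whales):
            r1, r2, p = rng.random(3)
            A, C = 2 * a * r1 - a, 2 * r2
            if p < 0.5:
                if abs(A) < 1:                   # encircle the best solution found so far
                    D = np.abs(C * best - X[i])
                    X[i] = best - A * D
                else:                            # explore around a randomly chosen whale
                    Xr = X[rng.integers(n_whales)]
                    D = np.abs(C * Xr - X[i])
                    X[i] = Xr - A * D
            else:                                # spiral (bubble-net) position update
                l = rng.uniform(-1, 1)
                D = np.abs(best - X[i])
                X[i] = D * np.exp(l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], lo, hi)
        scores = np.array([fitness(x) for x in X])
        if scores.min() < fitness(best):
            best = X[scores.argmin()].copy()
    return best, fitness(best)

if __name__ == "__main__":
    bounds = np.array([[0.0, 20.0], [0.0, 20.0]])   # search ranges for Kp, Ki (assumed)
    gains, score = woa(ise_of_pi, bounds)
    print("Kp, Ki =", np.round(gains, 3), "ISE =", round(score, 5))
```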

  12. Power dithering algorithm to avoid the overcoming of the voltage limit in presence of DG on distribution networks

    Energy Technology Data Exchange (ETDEWEB)

    Calderaro, V.; Coppola, V.; Galdi, V.; Piccolo, A. [Salerno Univ., Fisciano (Italy). Dept. of Information System Engineering and Electrical Engineering

    2008-07-01

    A new model of power distribution system has emerged in recent years in response to new generation technologies involving mini- and micro-generators that can be directly connected to medium voltage (MV) or low voltage (LV) power grids. The locations of these dispersed generators (DGs) are typically based on the availability of primary energy resources or on the specific needs of users. The increasing use of DGs causes new problems in terms of distribution network management and planning, with effects on power quality, voltage profiles and protection. One of the problems arising on MV/LV distribution networks, especially in weak rural areas, is bus overvoltage at the point of common coupling (PCC). Therefore, this study proposed an approach to power control of the single generator that maximizes the active power injected into the network by the DG while avoiding trips of the minimum and maximum voltage protection installed at the PCC. Overvoltage typically occurs when a large amount of power is injected by unschedulable DG while the load demand is small. This can trip the overvoltage protection relays of DGs and disconnect them from the grid. The local control strategy for DG systems proposed in this paper was based on a dithering algorithm. The proposed solution, operating on the electronic interface of the power generator, injects or absorbs reactive power if the voltage at the PCC is close to the limits, thus increasing the total active power injected by renewable sources. 17 refs., 3 tabs., 12 figs.
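
    The record above describes a dithering-based local controller; the fragment below is not that algorithm, only a generic sketch of the same local idea, absorb reactive power first and curtail active power last when the PCC voltage approaches its limit, on an invented Thevenin-style voltage model with made-up sensitivities and limits.

```python
# Toy PCC model: V ≈ V0 + Rth*P + Xth*Q (per-unit Thevenin sensitivities, assumed values).
V0, Rth, Xth = 1.00, 0.004, 0.008
V_MAX, Q_MIN = 1.05, -0.6          # upper voltage limit and reactive absorption limit
V_DEAD = 1.03                      # start absorbing Q when V exceeds this band

def local_control(p_avail, q=0.0, kq=2.0):
    """Greedy local loop: inject all available P, absorb Q proportionally to the
    voltage excess, and curtail P only once the Q capability is exhausted."""
    p = p_avail
    for _ in range(200):                            # simple fixed-point iteration
        v = V0 + Rth * p + Xth * q
        if v > V_DEAD and q > Q_MIN:
            q = max(Q_MIN, q - kq * (v - V_DEAD))   # absorb more reactive power
        elif v > V_MAX:
            p = max(0.0, p - 0.01)                  # last resort: curtail active power
        else:
            break
    return p, q, V0 + Rth * p + Xth * q

for p_avail in (2.0, 6.0, 12.0):
    p, q, v = local_control(p_avail)
    print(f"P_avail={p_avail:5.1f}  P={p:5.2f}  Q={q:5.2f}  V={v:.3f} p.u.")
```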

  13. Optimal power and performance trade-offs for dynamic voltage scaling in power management based wireless sensor node

    Directory of Open Access Journals (Sweden)

    Anuradha Pughat

    2016-09-01

    Full Text Available Dynamic voltage scaling contributes a significant amount of power saving, especially in energy-constrained wireless sensor networks (WSNs). Existing dynamic voltage scaling techniques make the system slower and ignore the event miss rate, which degrades system performance when the input workload is non-stationary. The overhead of transitions between voltage levels and the discrete set of available voltage levels are further limitations of available dynamic voltage scaling (DVS) techniques at the sensor node (SN). This paper proposes a workload-dependent DVS-based MSP430 controller model for the SN. An online gradient estimation technique is used to optimize the power and performance trade-offs. The analytical results are validated against simulation results obtained using the simulation tool “SimEvents” and compared with the available AT90S8535 controller. Based on the stochastic workload, the controller's input voltage, operating frequency, utilization, and average event wait time are obtained.
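
    The paper's online gradient estimator is not reproduced here; as a simpler stand-in, the sketch below picks the slowest discrete frequency/voltage pair whose predicted utilization for the current event workload stays under a target, which is the basic workload-dependent DVS decision. The operating points and the power model constants are illustrative, not MSP430 datasheet values.

```python
# Illustrative discrete (frequency [MHz], core voltage [V]) operating points;
# these numbers are made up, not real MSP430 specifications.
OPERATING_POINTS = [(1, 1.8), (4, 2.2), (8, 2.7), (16, 3.3)]

def dynamic_power(f_mhz, vdd, c_eff=1e-9):
    """Classic CMOS dynamic power model P ≈ C_eff * Vdd^2 * f."""
    return c_eff * vdd ** 2 * f_mhz * 1e6

def pick_operating_point(cycles_per_event, event_rate_hz, util_target=0.7):
    """Pick the slowest point whose predicted utilization stays under the target,
    so events are served in time while dynamic power is minimized."""
    for f, v in OPERATING_POINTS:                       # points sorted slow -> fast
        util = cycles_per_event * event_rate_hz / (f * 1e6)
        if util <= util_target:
            return f, v, util
    return OPERATING_POINTS[-1] + (util,)               # saturate at the fastest point

for rate in (50, 400, 1500):                            # events per second
    f, v, u = pick_operating_point(cycles_per_event=5000, event_rate_hz=rate)
    print(f"rate={rate:5d}/s -> f={f:2d} MHz, Vdd={v} V, "
          f"util={u:.2f}, P={dynamic_power(f, v)*1e3:.2f} mW")
```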

  14. Voltage equalization of an ultracapacitor module by cell grouping using number partitioning algorithm

    Science.gov (United States)

    Oyarbide, E.; Bernal, C.; Molina, P.; Jiménez, L. A.; Gálvez, R.; Martínez, A.

    2016-01-01

    Ultracapacitors are low voltage devices and therefore, for practical applications, they need to be used in modules of series-connected cells. Because of the inherent manufacturing tolerance of the capacitance parameter of each cell, and as the maximum voltage value cannot be exceeded, the module requires inter-cell voltage equalization. If the intended application suffers repeated fast charging/discharging cycles, active equalization circuits must be rated to full power, and thus the module becomes expensive. Previous work shows that a series connection of several sets of paralleled ultracapacitors minimizes the dispersion of equivalent capacitance values, and also the voltage differences between capacitors. Thus the overall life expectancy is improved. This paper proposes a method to distribute ultracapacitors with a number partitioning-based strategy to reduce the dispersion between equivalent submodule capacitances. Thereafter, the total amount of stored energy and/or the life expectancy of the device can be considerably improved.
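
    A hedged sketch of the grouping idea: the classic greedy heuristic for multiway number partitioning places each cell, largest capacitance first, into the group with the smallest running total, which keeps the equivalent capacitances of the series-connected groups close together. Note that this simple greedy does not enforce an equal number of cells per group, which the paper's strategy would additionally require, and the cell values below are synthetic.

```python
import random

def group_cells(capacitances, n_groups):
    """Greedy multiway number partitioning: place each cell (largest first) into
    the group whose total capacitance is currently the smallest, so the series-
    connected groups end up with nearly equal equivalent capacitance."""
    groups = [[] for _ in range(n_groups)]
    totals = [0.0] * n_groups
    for c in sorted(capacitances, reverse=True):
        k = totals.index(min(totals))
        groups[k].append(c)
        totals[k] += c
    return groups, totals

# 24 cells of nominal 100 F with +/-10 % manufacturing tolerance, split into 6 groups.
random.seed(0)
cells = [100.0 * random.uniform(0.9, 1.1) for _ in range(24)]
groups, totals = group_cells(cells, n_groups=6)
print("group capacitances (F):", [round(t, 2) for t in totals])
print("spread (max - min, F): ", round(max(totals) - min(totals), 2))
```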

  15. Low-power operation using self-timed circuits and adaptive scaling of the supply voltage

    DEFF Research Database (Denmark)

    Nielsen, Lars Skovby; Niessen, C.; Sparsø, Jens

    1994-01-01

    Recent research has demonstrated that for certain types of applications like sampled audio systems, self-timed circuits can achieve very low power consumption, because unused circuit parts automatically turn into a stand-by mode. Additional savings may be obtained by combining the self-timed circuits with a mechanism that adaptively adjusts the supply voltage to the smallest possible, while maintaining the performance requirements. This paper describes such a mechanism, analyzes the possible power savings, and presents a demonstrator chip that has been fabricated and tested. The idea of voltage scaling has been used previously in synchronous circuits, and the contributions of the present paper are: 1) the combination of supply scaling and self-timed circuitry which has some unique advantages, and 2) the thorough analysis of the power savings that are possible using this technique.

  16. Non-equilibrium scaling analysis of the Kondo model with voltage bias

    International Nuclear Information System (INIS)

    Fritsch, Peter; Kehrein, Stefan

    2009-01-01

    The quintessential description of Kondo physics in equilibrium is obtained within a scaling picture that shows the buildup of Kondo screening at low temperature. For the non-equilibrium Kondo model with a voltage bias, the key new feature is decoherence effects due to the current across the impurity. In the present paper, we show how one can develop a consistent framework for studying the non-equilibrium Kondo model within a scaling picture of infinitesimal unitary transformations (flow equations). Decoherence effects appear naturally in the third order of the β-function and dominate the Hamiltonian flow for sufficiently large voltage bias. We work out the spin dynamics in non-equilibrium and compare it with finite-temperature equilibrium results. In particular, we report on the behavior of the static spin susceptibility including leading logarithmic corrections and compare it with the celebrated equilibrium result as a function of temperature.

  17. GENETIC ALGORITHM BASED SOLUTION IN PWM CONVERTER SWITCHING FOR VOLTAGE SOURCE INVERTER FEEDING AN INDUCTION MOTOR DRIVE

    Directory of Open Access Journals (Sweden)

    V. Jegathesan

    2017-11-01

    Full Text Available This paper presents an efficient and reliable Genetic Algorithm based solution for the Selective Harmonic Elimination (SHE) switching pattern. This method eliminates a considerable amount of the lower order line voltage harmonics in a Pulse Width Modulation (PWM) inverter. Determining the pulse pattern for the elimination of some lower order harmonics of a PWM inverter necessitates solving a system of nonlinear transcendental equations. A Genetic Algorithm is used to solve the nonlinear transcendental equations for PWM-SHE. Many methods are available to eliminate the higher order harmonics, which can be removed easily; the greatest challenge is to eliminate the lower order harmonics, and this is successfully achieved using the Genetic Algorithm without using a dual transformer. Simulations using MATLAB™ and Powersim, together with experimental results, are carried out to validate the solution. The experimental results show that the harmonics up to the 13th are totally eliminated.

  18. Reactive power and voltage control based on general quantum genetic algorithms

    DEFF Research Database (Denmark)

    Vlachogiannis, Ioannis (John); Østergaard, Jacob

    2009-01-01

    This paper presents an improved evolutionary algorithm based on quantum computing for optimal steady-state performance of power systems. However, the proposed general quantum genetic algorithm (GQ-GA) can be applied to various combinatorial optimization problems. In this study the GQ-GA determines … techniques such as enhanced GA, multi-objective evolutionary algorithms and particle swarm optimization algorithms, as well as the classical primal-dual interior-point optimal power flow algorithm. The comparison demonstrates the ability of the GQ-GA to reach better solutions.

  19. Stability Analysis of a Matrix Converter Drive: Effects of Input Filter Type and the Voltage Fed to the Modulation Algorithm

    Directory of Open Access Journals (Sweden)

    M. Hosseini Abardeh

    2015-03-01

    Full Text Available Matrix converter instability can cause substantial distortion in the input currents and voltages, which leads to malfunction of the converter. This paper deals with the effects of the input filter type, the grid inductance, the voltage fed to the modulation algorithm and the time constant of the synchronously rotating digital filter on the stability and performance of the matrix converter. The studies are carried out using the eigenvalues of the linearized system and simulations. The two most common input filter schemes (LC and RLC) are analyzed. It is shown that, by a proper choice of the voltage input to the modulation algorithm and of the structure and parameters of the input filter, the need for the digital filter to ensure stability can be removed. Moreover, a detailed model of the system considering the switching effects is simulated and the results are used to validate the analytical outcomes. The agreement between simulation and analytical results implies that the system performance is not deteriorated by neglecting the nonlinear switching behavior of the converter. Hence, the eigenvalue analysis of the linearized system can be a proper indicator of system stability.

  20. A New Method for a Piezoelectric Energy Harvesting System Using a Backtracking Search Algorithm-Based PI Voltage Controller

    Directory of Open Access Journals (Sweden)

    Mahidur R. Sarker

    2016-09-01

    Full Text Available This paper presents a new method for a vibration-based piezoelectric energy harvesting system using a backtracking search algorithm (BSA)-based proportional-integral (PI) voltage controller. This technique eliminates the exhaustive conventional trial-and-error procedure for obtaining optimized parameter values of the proportional gain (Kp) and integral gain (Ki) for PI voltage controllers. The estimated values of Kp and Ki generated through the BSA optimization technique are used in the PI voltage controller. In this study, the mean absolute error (MAE) is used as the objective function to minimize the output error of the piezoelectric energy harvesting system (PEHS). The model of the PEHS is designed and analyzed using the BSA optimization technique. The BSA-based PI voltage controller of the PEHS produces a significant improvement in minimizing the output error of the converter and a robust, regulated pulse-width modulation (PWM) signal to drive a MOSFET switch, with the best response in terms of rise time and settling time under various load conditions.

  1. Voltage Profile Enhancement and Reduction of Real Power loss by Hybrid Biogeography Based Artificial Bee Colony algorithm

    Directory of Open Access Journals (Sweden)

    K. Lenin

    2014-04-01

    Full Text Available This paper presents a hybrid biogeography algorithm for solving the multi-objective reactive power dispatch problem in a power system. Real power loss minimization and maximization of the voltage stability margin are taken as the objectives. Artificial bee colony optimization (ABC) is a fast and powerful algorithm for global optimization. Biogeography-Based Optimization (BBO) is a recent biogeography-inspired algorithm; it mainly utilizes the biogeography-based relocation operator to share information among solutions. In this work, a hybrid of BBO and ABC, named HBBABC (Hybrid Biogeography-based Artificial Bee Colony Optimization), is proposed for global numerical optimization problems. HBBABC merges the searching behavior of ABC with that of BBO: ABC has a good exploration tendency, while BBO has a good exploitation tendency. HBBABC is used to solve the reactive power dispatch problem, and the proposed technique has been tested on the standard IEEE 30-bus test system.

  2. Distributed Voltage Unbalance Compensation in Islanded Microgrids by Using Dynamic-Consensus-Algorithm

    DEFF Research Database (Denmark)

    Meng, Lexuan; Zhao, Xin; Tang, Fen

    2016-01-01

    In islanded microgrids (MGs), distributed generators (DGs) can be employed as distributed compensators for improving the power quality in the consumer side. Two-level hierarchical control can be used for voltage unbalance compensation. Primary level, consisting of droop control and virtual...

  3. Scheduling and Voltage Scaling for Energy/Reliability Trade-offs in Fault-Tolerant Time-Triggered Embedded Systems

    DEFF Research Database (Denmark)

    Pop, Paul; Poulsen, Kåre Harbo; Izosimov, Viacheslav

    2007-01-01

    Re-execution and dynamic voltage scaling-based low-power techniques are competing for the slack in the schedules. Our approach decides the voltage levels and start times of processes and the transmission times of messages, such that the transient faults are tolerated, the timing constraints of the application

  4. Reduced scale PWR passive safety system designing by genetic algorithms

    International Nuclear Information System (INIS)

    Cunha, Joao J. da; Alvim, Antonio Carlos M.; Lapa, Celso Marcelo Franklin

    2007-01-01

    This paper presents the concept of 'Design by Genetic Algorithms (DbyGA)', applied to a new reduced scale system problem. The design problem of a passive thermal-hydraulic safety system, considering dimensional and operational constraints, has been solved. Taking into account the passive safety characteristics of the latest nuclear reactor generation, a PWR core under natural circulation is used to demonstrate the applicability of the methodology. The results revealed that some solutions (reduced scale systems obtained by DbyGA) are capable of reproducing, both accurately and simultaneously, much of the physical phenomena that occur at real scale and operating conditions. However, case studies also revealed aspects that point to important opportunities for improving the performance of the DbyGA methodology.

  5. Dynamic Consensus Algorithm based Distributed Voltage Harmonic Compensation in Islanded Microgrids

    DEFF Research Database (Denmark)

    Meng, Lexuan; Tang, Fen; Firoozabadi, Mehdi Savaghebi

    2015-01-01

    Distributed generators can be employed as compensators to enhance the power quality on the consumer side. However, conventional centralized control is facing obstacles because of the distributed fashion of generation and consumption. Accordingly, this paper proposes a consensus algorithm based distributed hierarchical

  6. Energy Management System with Equalization Algorithm for Distributed Energy Storage Systems in PV-Active Generator Based Low Voltage DC Microgrids

    DEFF Research Database (Denmark)

    Aldana, Nelson Leonardo Diaz; Hernández, Adriana Carolina Luna; Vasquez, Juan Carlos

    2015-01-01

    This paper presents a centralized strategy for equalizing the state of charge of distributed energy storage systems in an islanded DC microgrid. The proposed strategy is based on a simple algorithm called the equalization algorithm, which modifies the charge or discharge rate by weighting the virtual … results of a low voltage DC microgrid are presented in order to verify the performance of the proposed approach.

  7. Multi-Scale Parameter Identification of Lithium-Ion Battery Electric Models Using a PSO-LM Algorithm

    Directory of Open Access Journals (Sweden)

    Wen-Jing Shen

    2017-03-01

    Full Text Available This paper proposes a multi-scale parameter identification algorithm for the lithium-ion battery (LIB) electric model, using a combination of particle swarm optimization (PSO) and Levenberg-Marquardt (LM) algorithms. Two-dimensional Poisson equations with unknown parameters are used to describe the potential and current density distribution (PDD) of the positive and negative electrodes in the LIB electric model. The model parameters are difficult to determine in simulation due to the nonlinear complexity of the model. In the proposed identification algorithm, PSO is used for the coarse-scale parameter identification and the LM algorithm is applied for the fine-scale parameter identification. The experimental results show that the multi-scale identification not only improves the convergence rate and effectively escapes the stagnation of PSO, but also overcomes the local minimum entrapment drawback of the LM algorithm. The terminal voltage curves from the PDD model with the identified parameter values are in good agreement with those from experiments at different discharge/charge rates.
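
    The PDD model itself cannot be reproduced in a few lines, so the sketch below demonstrates only the coarse-to-fine pattern on a toy two-parameter voltage model: a plain PSO provides a coarse estimate and scipy's Levenberg-Marquardt least-squares routine refines it. The model, parameter ranges and swarm settings are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(4)

# Toy "terminal voltage" model with two unknown parameters (a stand-in for the
# PDD model of the paper): v(t) = OCV - r*I - K*(1 - exp(-t/tau)), with r and tau unknown.
t = np.linspace(0.0, 100.0, 201)
I, OCV, K = 2.0, 4.1, 0.05
TRUE = np.array([0.03, 20.0])                      # true (r, tau)
v_meas = (OCV - TRUE[0] * I - K * (1 - np.exp(-t / TRUE[1]))
          + 0.002 * rng.standard_normal(t.size))

def residuals(theta):
    r, tau = theta
    v_model = OCV - r * I - K * (1 - np.exp(-t / tau))
    return v_model - v_meas

def pso(obj, lo, hi, n=30, iters=60):
    """Coarse global search: plain PSO with inertia and cognitive/social pulls."""
    dim = len(lo)
    x = rng.uniform(lo, hi, (n, dim)); v = np.zeros((n, dim))
    pbest = x.copy(); pcost = np.array([obj(p) for p in x])
    g = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        cost = np.array([obj(p) for p in x])
        better = cost < pcost
        pbest[better], pcost[better] = x[better], cost[better]
        g = pbest[pcost.argmin()].copy()
    return g

sse = lambda th: float(np.sum(residuals(th) ** 2))
coarse = pso(sse, lo=np.array([1e-3, 1.0]), hi=np.array([0.2, 100.0]))
fine = least_squares(residuals, coarse, method="lm")       # fine-scale LM refinement
print("coarse (PSO):", np.round(coarse, 4))
print("fine   (LM): ", np.round(fine.x, 4), " true:", TRUE)
```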

  8. An augmented Lagrangian multi-scale dictionary learning algorithm

    Directory of Open Access Journals (Sweden)

    Ye Meng

    2011-01-01

    Full Text Available Abstract Learning overcomplete dictionaries for sparse signal representation has become a hot topic fascinating many researchers in recent years, while most of the existing approaches have a serious problem in that they always lead to local minima. In this article, we present a novel augmented Lagrangian multi-scale dictionary learning algorithm (ALM-DL), which is achieved by first recasting the constrained dictionary learning problem into an AL scheme, and then updating the dictionary after each inner iteration of the scheme, during which a majorization-minimization technique is employed for solving the inner subproblem. Refining the dictionary from low scale to high makes the proposed method less dependent on the initial dictionary, hence avoiding local optima. Numerical tests on synthetic data and denoising applications on real images demonstrate the superior performance of the proposed approach.

  9. Efficient algorithms for collaborative decision making for large scale settings

    DEFF Research Database (Denmark)

    Assent, Ira

    2011-01-01

    Collaborative decision making is a successful approach in settings where data analysis and querying can be done interactively. In large scale systems with huge data volumes or many users, collaboration is often hindered by impractical runtimes. Existing work on improving collaboration focuses on avoiding redundancy for users working on the same task. While this improves the effectiveness of the user work process, the underlying query processing engine is typically considered a "black box" and left unchanged. Research in multiple query processing, on the other hand, ignores the application … to bring about more effective and more efficient retrieval systems that support the users' decision making process. We sketch promising research directions for more efficient algorithms for collaborative decision making, especially for large scale systems.

  10. An Implementation and Parallelization of the Scale Space Meshing Algorithm

    Directory of Open Access Journals (Sweden)

    Julie Digne

    2015-11-01

    Full Text Available Creating an interpolating mesh from an unorganized set of oriented points is a difficult problem which is often overlooked. Most methods focus indeed on building a watertight smoothed mesh by defining some function whose zero level set is the surface of the object. However, in some cases it is crucial to build a mesh that interpolates the points and does not fill the acquisition holes: either because the data are sparse and trying to fill the holes would create spurious artifacts, or because the goal is to explore visually the data exactly as they were acquired without any smoothing process. In this paper we detail a parallel implementation of the Scale-Space Meshing algorithm, which builds on the scale-space framework for reconstructing a high precision mesh from an input oriented point set. This algorithm first smoothes the point set, producing a singularity free shape. It then uses a standard mesh reconstruction technique, the Ball Pivoting Algorithm, to build a mesh from the smoothed point set. The final step consists in back-projecting the mesh built on the smoothed positions onto the original point set. The result of this process is an interpolating, hole-preserving surface mesh reconstruction.

  11. Design of optimal input–output scaling factors based fuzzy PSS using bat algorithm

    Directory of Open Access Journals (Sweden)

    D.K. Sambariya

    2016-06-01

    Full Text Available In this article, a fuzzy logic based power system stabilizer (FPSS) is designed by tuning its input–output scaling factors. The two input signals to the FPSS are the change of speed and the change in power, and the output signal is a correcting voltage signal. Determining the normalizing factors of these signals is posed as an optimization problem, with minimization of the integral of square error in single-machine and multi-machine power systems. These factors are optimally determined with the bat algorithm (BA) and used as the scaling factors of the FPSS. The performance of a power system with such a BA-based FPSS (BA-FPSS) is compared to that with a conventional FPSS, a Harmony Search Algorithm based FPSS (HSA-FPSS) and a Particle Swarm Optimization based FPSS (PSO-FPSS). The systems considered for evaluating the performance of the BA-FPSS are a single machine connected to an infinite bus, the two-area 4-machine 10-bus system, and the IEEE New England 10-machine 39-bus power system. The comparison is carried out in terms of the integral of time-weighted absolute error (ITAE), integral of absolute error (IAE) and integral of square error (ISE) of the speed response for systems with FPSS, HSA-FPSS and BA-FPSS. The superior performance of systems with the BA-FPSS is established considering eight plant conditions of each system, representing a wide range of operating conditions.

  12. Voltage stability issues in a distribution grid with large scale PV plant

    Energy Technology Data Exchange (ETDEWEB)

    Perez, Alvaro Ruiz; Marinopoulos, Antonios; Reza, Muhamad; Srivastava, Kailash [ABB AB, Vaesteraas (Sweden). Corporate Research Center; Hertem, Dirk van [Katholieke Univ. Leuven, Heverlee (Belgium). ESAT-ELECTA

    2011-07-01

    Solar photovoltaics (PV) has become a competitive renewable energy source. The production of solar PV cells and panels has increased significantly, while the cost has been reduced due to economies of scale and technological achievements in the field. At the same time, the increase in efficiency of PV power systems and high energy prices are expected to lead PV systems to grid parity in the coming decade. This is expected to further boost the large scale implementation of PV power plants (utility scale PV), and therefore the impact of such large scale PV plants on the power system needs to be studied. This paper investigates the voltage stability issues arising from the connection of a large PV power plant to the power grid. For this purpose, a 15 MW PV power plant was implemented in a distribution grid, modeled and simulated using DIgSILENT PowerFactory. Two scenarios were developed: in the first scenario, the active power injected into the grid by the PV power plant was varied and the resulting U-Q curve was analyzed. In the second scenario, the impact of connecting PV power plants to different points in the grid - resulting in different strengths of connection - was investigated. (orig.)

  13. Bonus algorithm for large scale stochastic nonlinear programming problems

    CERN Document Server

    Diwekar, Urmila

    2015-01-01

    This book presents the details of the BONUS algorithm and its real world applications in areas like sensor placement in large scale drinking water networks, sensor placement in advanced power systems, water management in power systems, and capacity expansion of energy systems. A generalized method for stochastic nonlinear programming, based on a sampling approach for uncertainty analysis and statistical reweighting to obtain probability information, is demonstrated in this book. Stochastic optimization problems are difficult to solve since they involve dealing with optimization and uncertainty loops. There are two fundamental approaches used to solve such problems: the first is decomposition techniques, and the second identifies problem specific structures and transforms the problem into a deterministic nonlinear programming problem. These techniques have significant limitations on either the objective function type or the underlying distributions for the uncertain variables. Moreover, these ...

  14. Scale-Aware Pansharpening Algorithm for Agricultural Fragmented Landscapes

    Directory of Open Access Journals (Sweden)

    Mario Lillo-Saavedra

    2016-10-01

    Full Text Available Remote sensing (RS) has played an important role in extensive agricultural monitoring and management for several decades. However, the current spatial resolution of satellite imagery does not have enough definition to generalize its use in highly-fragmented agricultural landscapes, which represent a significant percentage of the world's total cultivated surface. To characterize and analyze this type of landscape, multispectral (MS) images with high and very high spatial resolutions are required. Multi-source image fusion algorithms are normally used to improve the spatial resolution of images with a medium spatial resolution. In particular, pansharpening (PS) methods allow one to produce high-resolution MS images through a coherent integration of spatial details from a panchromatic (PAN) image with spectral information from an MS image. The spectral and spatial quality of the source images must be preserved for the result to be useful in RS tasks. Different PS strategies provide different trade-offs between the spectral and the spatial quality of the fused images. Considering that agricultural landscape images contain many levels of significant structures and edges, PS algorithms based on filtering processes must be scale-aware and able to remove different levels of detail in the input images. In this work, a new PS methodology based on a rolling guidance filter (RGF) is proposed. The main contribution of this new methodology is to produce artifact-free pansharpened images, improving the MS edges with a scale-aware approach. Three images have been used, and more than 150 experiments were carried out. An objective comparison with widely-used methodologies shows the capability of the proposed method as a powerful tool to obtain pansharpened images preserving the spatial and spectral information.

  15. Multiple-Time-Scales Hierarchical Frequency Stability Control Strategy of Medium-Voltage Isolated Microgrid

    DEFF Research Database (Denmark)

    Zhao, Zhuoli; Yang, Ping; Guerrero, Josep M.

    2016-01-01

    In this paper, an islanded medium-voltage (MV) microgrid located on Dongao Island is presented, which integrates renewable-energy-based distributed generations (DGs), an energy storage system (ESS), and local loads. An isolated microgrid, without a connection to the main grid to support the frequency, is more complex to control and manage. Thus, in order to maintain frequency stability over multiple time scales, a hierarchical control strategy is proposed. The proposed control architecture divides the system frequency into three zones: (A) stable zone, (B) precautionary zone and (C) emergency zone … of Zone B. Theoretical analysis, time-domain simulation and field test results under various conditions and scenarios in the Dongao Island microgrid are presented to prove the validity of the introduced control strategy.

  16. Dynamic Voltage Frequency Scaling Simulator for Real Workflows Energy-Aware Management in Green Cloud Computing.

    Science.gov (United States)

    Cotes-Ruiz, Iván Tomás; Prado, Rocío P; García-Galán, Sebastián; Muñoz-Expósito, José Enrique; Ruiz-Reyes, Nicolás

    2017-01-01

    Nowadays, the growing computational capabilities of Cloud systems rely on the reduction of the power consumed by their data centers to make them sustainable and economically profitable. The efficient management of computing resources is at the heart of any energy-aware data center, and of special relevance is the adaptation of its performance to the workload. Intensive computing applications in diverse areas of science generate complex workloads called workflows, whose successful management in terms of energy saving is still at its beginning. WorkflowSim is currently one of the most advanced simulators for research on workflow processing, offering advanced features such as task clustering and failure policies. In this work, an expected power-aware extension of WorkflowSim is presented. This new tool integrates a power model based on a computing-plus-communication design to allow the optimization of new management strategies in energy saving considering computing, reconfiguration and network costs as well as quality of service, and it incorporates the preeminent strategy for on-host energy saving: Dynamic Voltage Frequency Scaling (DVFS). The simulator is designed to be consistent in different real scenarios and to include a wide repertory of DVFS governors. Results showing the validity of the simulator in terms of resource utilization, frequency and voltage scaling, power, energy and time saving are presented. Also, results achieved by the intra-host DVFS strategy with different governors are compared to those of the data center using a recent and successful DVFS-based inter-host scheduling strategy as a mechanism overlapped with the intra-host DVFS technique.
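
    For readers unfamiliar with DVFS governors, the fragment below sketches two of the classic policies that such a simulator typically exposes, an ondemand-like and a conservative-like rule, over an invented frequency table; it is a behavioural toy, not WorkflowSim code or the Linux cpufreq implementation.

```python
# Available frequencies in MHz (illustrative, highest first).
FREQS = [2600, 2200, 1800, 1400, 1000]

def ondemand(current, utilization, up_threshold=0.80):
    """Ondemand-style policy: jump to the highest frequency when load is high,
    otherwise pick the lowest frequency that would keep utilization below the
    threshold at the new speed."""
    if utilization > up_threshold:
        return FREQS[0]
    needed = utilization * current / up_threshold     # MHz actually required
    return min((f for f in FREQS if f >= needed), default=FREQS[0])

def conservative(current, utilization, up=0.80, down=0.30):
    """Conservative-style policy: move one step up or down the frequency table."""
    i = FREQS.index(current)
    if utilization > up and i > 0:
        return FREQS[i - 1]
    if utilization < down and i < len(FREQS) - 1:
        return FREQS[i + 1]
    return current

f = FREQS[-1]
for util in (0.10, 0.55, 0.95, 0.40, 0.20):
    f = ondemand(f, util)
    print(f"util={util:.2f} -> ondemand picks {f} MHz, "
          f"conservative would pick {conservative(f, util)} MHz")
```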

  17. Dynamic Voltage Frequency Scaling Simulator for Real Workflows Energy-Aware Management in Green Cloud Computing.

    Directory of Open Access Journals (Sweden)

    Iván Tomás Cotes-Ruiz

    Full Text Available Nowadays, the growing computational capabilities of Cloud systems rely on the reduction of the power consumed by their data centers to make them sustainable and economically profitable. The efficient management of computing resources is at the heart of any energy-aware data center, and of special relevance is the adaptation of its performance to the workload. Intensive computing applications in diverse areas of science generate complex workloads called workflows, whose successful management in terms of energy saving is still at its beginning. WorkflowSim is currently one of the most advanced simulators for research on workflow processing, offering advanced features such as task clustering and failure policies. In this work, an expected power-aware extension of WorkflowSim is presented. This new tool integrates a power model based on a computing-plus-communication design to allow the optimization of new management strategies in energy saving considering computing, reconfiguration and network costs as well as quality of service, and it incorporates the preeminent strategy for on-host energy saving: Dynamic Voltage Frequency Scaling (DVFS). The simulator is designed to be consistent in different real scenarios and to include a wide repertory of DVFS governors. Results showing the validity of the simulator in terms of resource utilization, frequency and voltage scaling, power, energy and time saving are presented. Also, results achieved by the intra-host DVFS strategy with different governors are compared to those of the data center using a recent and successful DVFS-based inter-host scheduling strategy as a mechanism overlapped with the intra-host DVFS technique.

  18. The supply voltage scaled dependency of the recovery of single event upset in advanced complementary metal—oxide—semiconductor static random-access memory cells

    International Nuclear Information System (INIS)

    Li Da-Wei; Qin Jun-Rui; Chen Shu-Ming

    2013-01-01

    Using computer-aided design three-dimensional simulation technology, the supply voltage scaled dependency of single event upset recovery and of charge collection in static random-access memory cells is investigated. It is revealed that the recovery linear energy transfer threshold decreases as the supply voltage is reduced, which is quite attractive for dynamic voltage scaling and subthreshold-circuit radiation-hardened design. Additionally, the effect of the supply voltage on charge collection is also investigated. It is concluded that the supply voltage mainly affects the bipolar gain of the parasitic bipolar junction transistor (BJT), and that the existence of the source plays an important role under supply voltage variation.

  19. Progressive decrement PWM algorithm for minimum mean square error inverter output voltage

    International Nuclear Information System (INIS)

    Ghaeb, J.A.; Smadi, M.A.; Ababneh, M.

    2011-01-01

    Highlights: → The main contribution of this work is to provide better performance for power inverter operation. → The proposed technique splits the determined original pulse-width of an inverter operation into many pulses. → The new approach extends the central pulse and shrinks the exterior pulses. → This leads to an inverter output cycle close to the sinusoidal form with fewer harmonics. - Abstract: The paper proposes two modulation techniques for the power inverter, named the progressive decrement PWM algorithm (PDPA) and the progressive increment PWM algorithm (PIPA). Both techniques take the determined original pulse-width of an inverter operation and split it into many pulses. In the PDPA technique, the largest width is given to the middle pulse and the width of the boundary pulses is reduced progressively, starting from the first boundary pulse toward the last boundary pulse. In the PIPA technique, there is a gradual increment instead of a decrement. Both techniques are shown to maintain the original pulse-width of the inverter operation. The new approach PDPA extends the central pulse and shrinks the exterior pulses, leading to an inverter output cycle close to the sinusoidal form with fewer harmonic contents. Simulation results are presented to evaluate the performance of the proposed PDPA and PIPA techniques and to compare them with well-known methods. The main contribution of the proposed PDPA technique is that it provides better performance for most harmonic orders compared to the well-established sinusoidal PWM technique.
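
    A hedged numerical sketch of the pulse-splitting idea behind PDPA: one original pulse is split into an odd number of sub-pulses whose widths shrink by a fixed decrement away from the central pulse, while the total width is preserved. The number of sub-pulses and the decrement are free illustrative parameters, and the actual carrier placement of the paper is not modelled.

```python
def pdpa_widths(total_width, n_pulses, decrement):
    """Split one pulse of width `total_width` into `n_pulses` sub-pulses whose
    widths decrease by `decrement` per step away from the centre, while the sum
    of the widths equals the original width (PDPA idea; parameters illustrative)."""
    assert n_pulses % 2 == 1, "use an odd number of sub-pulses so one is central"
    m = (n_pulses - 1) // 2
    s = m * (m + 1)                       # sum of |k - centre| over all sub-pulses
    centre = (total_width + decrement * s) / n_pulses
    widths = [centre - decrement * abs(k - m) for k in range(n_pulses)]
    assert min(widths) > 0, "decrement too large for this pulse width"
    return widths

w = pdpa_widths(total_width=1.0, n_pulses=5, decrement=0.08)
print("sub-pulse widths:", [round(x, 3) for x in w])
print("sum (should equal original width):", round(sum(w), 6))
```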

  20. Scaling of Supply Voltage in Design of Energy Saver FIR Filter on 28nm FPGA

    DEFF Research Database (Denmark)

    Pandey, Bishwajeet; Jain, Vishal; Sharma, Rashmi

    2017-01-01

    In this work, we are going to analyze the effect of the main supply voltage, auxiliary supply voltage, local voltages of different power banks, and the supply voltage of the GTX transceiver and BRAM on the power dissipation of our FIR design, written in Verilog and implemented on a 28nm FPGA. We have also taken three … 33%, 86%, 90.67%, 65.33%, 52%, and 48.67% reduction in IO power dissipation of the FIR filter design on the CSG324 package of the Artix-7 FPGA family.

  1. Induced Voltages Ratio-Based Algorithm for Fault Detection, and Faulted Phase and Winding Identification of a Three-Winding Power Transformer

    Directory of Open Access Journals (Sweden)

    Byung Eun Lee

    2014-09-01

    Full Text Available This paper proposes an algorithm for fault detection, faulted phase and winding identification of a three-winding power transformer based on the induced voltages in the electrical power system. The ratio of the induced voltages of the primary-secondary, primary-tertiary and secondary-tertiary windings is the same as the corresponding turns ratio during normal operating conditions, magnetic inrush, and over-excitation. It differs from the turns ratio during an internal fault. For a single phase and a three-phase power transformer with wye-connected windings, the induced voltages of each pair of windings are estimated. For a three-phase power transformer with delta-connected windings, the induced voltage differences are estimated to use the line currents, because the delta winding currents are practically unavailable. Six detectors are suggested for fault detection. An additional three detectors and a rule for faulted phase and winding identification are presented as well. The proposed algorithm can not only detect an internal fault, but also identify the faulted phase and winding of a three-winding power transformer. The various test results with Electromagnetic Transients Program (EMTP)-generated data show that the proposed algorithm successfully discriminates internal faults from normal operating conditions including magnetic inrush and over-excitation. This paper concludes by implementing the algorithm into a prototype relay based on a digital signal processor.
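
    The essence of the detection criterion, comparing a measured induced-voltage ratio against the turns ratio, can be sketched in a few lines; the 5 % tolerance, the voltage figures and the case labels below are invented for illustration and are not the relay settings or the EMTP cases of the paper.

```python
def ratio_detector(v_primary, v_secondary, turns_ratio, tol=0.05):
    """Flag an internal fault when the measured primary/secondary voltage ratio
    deviates from the nameplate turns ratio by more than `tol` (an illustrative
    threshold, not the relay setting from the paper)."""
    if abs(v_secondary) < 1e-6:               # avoid division by zero when de-energised
        return False
    measured = v_primary / v_secondary
    return abs(measured - turns_ratio) / turns_ratio > tol

TURNS_RATIO = 400e3 / 132e3                   # e.g. a 400 kV / 132 kV pair of windings
cases = {
    "normal load":        (400e3, 132.0e3),
    "magnetising inrush": (380e3, 125.4e3),   # ratio preserved -> no trip
    "internal fault":     (400e3, 95.0e3),    # shorted turns distort the ratio -> trip
}
for name, (vp, vs) in cases.items():
    print(f"{name:18s} trip={ratio_detector(vp, vs, TURNS_RATIO)}")
```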

  2. Systems and methods for process and user driven dynamic voltage and frequency scaling

    Science.gov (United States)

    Mallik, Arindam [Evanston, IL; Lin, Bin [Hillsboro, OR; Memik, Gokhan [Evanston, IL; Dinda, Peter [Evanston, IL; Dick, Robert [Evanston, IL

    2011-03-22

    Certain embodiments of the present invention provide a method for power management including determining at least one of an operating frequency and an operating voltage for a processor and configuring the processor based on the determined at least one of the operating frequency and the operating voltage. The operating frequency is determined based at least in part on direct user input. The operating voltage is determined based at least in part on an individual profile for the processor.

  3. Image registration algorithm for high-voltage electric power live line working robot based on binocular vision

    Science.gov (United States)

    Li, Chengqi; Ren, Zhigang; Yang, Bo; An, Qinghao; Yu, Xiangru; Li, Jinping

    2017-12-01

    In the process of dismounting and assembling the drop switch for the high-voltage electric power live line working (EPL2W) robot, one of the key problems is the positioning precision of the manipulators, the gripper and the bolts used to fix the drop switch. To solve it, we study the binocular vision system of the robot and the characteristics of dismounting and assembling the drop switch, and propose a coarse-to-fine image registration algorithm based on image correlation, which significantly improves the positioning precision of the manipulators and bolts. The algorithm performs the following three steps. Firstly, the target points are marked in the right and left views, and the system judges whether the target point in the right view satisfies the minimum registration accuracy by using the similarity of the target points' backgrounds in the right and left views; this is a typical coarse-to-fine strategy. Secondly, the system calculates the epipolar line and generates a sequence of candidate regions in its neighbourhood; the optimal matching image is found by correlation matching between the template image in the left view and each region in the sequence. Finally, the precise coordinates of the target points in the right and left views are calculated from the optimal match. The experimental results indicate that the positioning accuracy in image coordinates is within 2 pixels, the positioning accuracy in the world coordinate system is within 3 mm, and the positioning accuracy of the binocular vision system satisfies the requirements of dismounting and assembling the drop switch.
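
    As a small, hedged illustration of the matching step only (not the robot's full pipeline), the sketch below performs normalized cross-correlation template matching restricted to a narrow band of rows around the epipolar line of a rectified synthetic stereo pair; the patch size, band width and the synthetic images are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equally sized patches."""
    a = a - a.mean(); b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
    return float((a * b).sum() / denom)

def match_along_epipolar(left_img, right_img, pt_left, patch=7, band=2):
    """Search the right image for the patch around `pt_left`, restricted to a few
    rows around the (rectified) epipolar line; returns the best match position and
    its correlation score. Assumes rectified images, so the epipolar line is a row."""
    r, c = pt_left
    h = patch // 2
    tmpl = left_img[r - h:r + h + 1, c - h:c + h + 1]
    best, best_pos = -1.0, None
    for rr in range(max(h, r - band), min(right_img.shape[0] - h, r + band + 1)):
        for cc in range(h, right_img.shape[1] - h):
            score = ncc(tmpl, right_img[rr - h:rr + h + 1, cc - h:cc + h + 1])
            if score > best:
                best, best_pos = score, (rr, cc)
    return best_pos, best

# Synthetic stereo pair: the right view is the left view shifted 12 px horizontally.
left = rng.random((60, 120))
right = np.roll(left, -12, axis=1)
pos, score = match_along_epipolar(left, right, pt_left=(30, 60))
print("matched position:", pos, "NCC score:", round(score, 3), "(expected column ~48)")
```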

  4. Investigation on a Novel Discontinuous Pulse-Width Modulation Algorithm for Single-phase Voltage Source Rectifier

    DEFF Research Database (Denmark)

    Qu, Hao; Yang, Xijun; Guo, Yougui

    2014-01-01

    Single-phase voltage source converter (VSC) is an important power electronic converter (PEC), including single-phase voltage source inverter (VSI), single-phase voltage source rectifier (VSR), single-phase active power filter (APF) and single-phase grid-connection inverter (GCI). Single-phase VSC...

  5. Algorithm of search and track of static and moving large-scale objects

    Directory of Open Access Journals (Sweden)

    Kalyaev Anatoly

    2017-01-01

    Full Text Available We suggest an algorithm for processing an image sequence in order to search for and track static and moving large-scale objects. A possible software implementation of the algorithm, based on multithreaded CUDA processing, is suggested. An experimental analysis of the suggested implementation is performed.

  6. A quasi-Newton algorithm for large-scale nonlinear equations

    Directory of Open Access Journals (Sweden)

    Linghua Huang

    2017-02-01

    Full Text Available Abstract In this paper, the algorithm for large-scale nonlinear equations is designed by the following steps: (i) a conjugate gradient (CG) algorithm is designed as a sub-algorithm to obtain the initial points of the main algorithm, where the sub-algorithm's initial point does not have any restrictions; (ii) a quasi-Newton algorithm with the initial points given by the sub-algorithm is defined as the main algorithm, where a new nonmonotone line search technique is presented to obtain the step length $\alpha_{k}$. The given nonmonotone line search technique avoids computing the Jacobian matrix. The global convergence and the $(1+q)$-order convergence rate of the main algorithm are established under suitable conditions. Numerical results show that the proposed method is competitive with a similar method for large-scale problems.

  7. 130 kV 130 A high voltage switching mode power supply for neutral beam injectors-Control issues and algorithms

    International Nuclear Information System (INIS)

    Ganuza, D.; Garcia, F.; Zulaika, M.; Perez, A.; Jones, T.T.C.

    2005-01-01

    The company JEMA has delivered to the Joint European Torus (JET facility in Culham) two high voltage switching mode power supplies (HVSMPS), each rated 130 kVdc and 130 A. One HVSMPS feeds the grids of two PINI loads. This paper describes the main control issues and the algorithms developed for the project. The most demanding requirements from the control point of view are an absolute accuracy of ±1300 V and the possibility of performing up to 255 re-applications of the high voltage during a 20 s pulse. Keeping the output voltage ripple within the specified tolerance has been a major achievement of the control system. Since the output stage is formed of several modules (120) connected in series, their stray capacitance to ground significantly influences the individual contribution of each single module to the global output voltage. Two complementary techniques have been used to balance the effects of the stray capacitances. The fast re-application requirement has a significant impact on the intermediate dc link. This section is composed of a capacitance of 0.83 F, which feeds the 120 inverter modules. The dc link is fed by a 12-pulse SCR rectifier, whose matching transformers are connected to the 36 kV grid. Every re-application and every voltage shutdown implies a quasi-instantaneous power step of 17 MW. Fast open loop algorithms have been implemented in order to keep the dc link voltage within acceptable margins. Moreover, the HVSMPS output characteristics have to be maintained during the rapid and large voltage fluctuations of the 36 kV mains (28-37 kV). The general control system is based on a Simatic S7 PLC and a SCADA user interface. Up to 1000 signals are acquired. The control system has also proved to be a useful tool allowing rapid and accurate identification of faults and their origin.

  8. A Parallel Algorithm for Connected Component Labelling of Gray-scale Images on Homogeneous Multicore Architectures

    International Nuclear Information System (INIS)

    Niknam, Mehdi; Thulasiraman, Parimala; Camorlinga, Sergio

    2010-01-01

    Connected component labelling is an essential step in image processing. We provide a parallel version of Suzuki's sequential connected component algorithm in order to speed up the labelling process. We also modify the algorithm to enable the labelling of gray-scale images. Due to the data dependencies in the algorithm, we used a pipeline-like method to exploit parallelism. The parallel algorithm achieved a speedup of 2.5 for an image size of 256 x 256 pixels using 4 processing threads.
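
    The parallel, pipelined version described above is thread-architecture specific, but the underlying two-pass labelling with union-find can be sketched sequentially as follows; the threshold used to binarize the gray-scale image and the tiny test image are illustrative only.

```python
import numpy as np

def label_components(img, threshold=0.5):
    """Two-pass connected-component labelling (4-connectivity) with union-find.
    Pixels above `threshold` are foreground; the parallel, pipelined variant of
    the paper is not reproduced here."""
    fg = img > threshold
    labels = np.zeros(img.shape, dtype=int)
    parent = [0]                                     # parent[0] is the background
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]            # path halving
            x = parent[x]
        return x
    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)
    next_label = 1
    H, W = img.shape
    for r in range(H):                               # first pass: provisional labels
        for c in range(W):
            if not fg[r, c]:
                continue
            up = labels[r - 1, c] if r > 0 else 0
            left = labels[r, c - 1] if c > 0 else 0
            if up == 0 and left == 0:
                parent.append(next_label)
                labels[r, c] = next_label
                next_label += 1
            else:
                labels[r, c] = min(x for x in (up, left) if x > 0)
                if up > 0 and left > 0:
                    union(up, left)
    for r in range(H):                               # second pass: resolve equivalences
        for c in range(W):
            if labels[r, c]:
                labels[r, c] = find(labels[r, c])
    return labels

img = np.zeros((6, 8))
img[1:3, 1:4] = 1.0          # one blob
img[4, 5:8] = 1.0            # a second, separate blob
lab = label_components(img)
print(lab)
print("number of components:", len(set(lab[lab > 0].tolist())))
```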

  9. Thermal instability and current-voltage scaling in superconducting fault current limiters

    Energy Technology Data Exchange (ETDEWEB)

    Zeimetz, B [Department of Materials Science and Metallurgy, Cambridge University, Pembroke Street, Cambridge CB1 3QZ (United Kingdom); Tadinada, K [Department of Engineering, Cambridge University, Trumpington Road, Cambridge CB2 1PZ (United Kingdom); Eves, D E [Department of Engineering, Cambridge University, Trumpington Road, Cambridge CB2 1PZ (United Kingdom); Coombs, T A [Department of Engineering, Cambridge University, Trumpington Road, Cambridge CB2 1PZ (United Kingdom); Evetts, J E [Department of Materials Science and Metallurgy, Cambridge University, Pembroke Street, Cambridge CB1 3QZ (United Kingdom); Campbell, A M [Department of Engineering, Cambridge University, Trumpington Road, Cambridge CB2 1PZ (United Kingdom)

    2004-04-01

    We have developed a computer model for the simulation of resistive superconducting fault current limiters in three dimensions. The program calculates the electromagnetic and thermal response of a superconductor to a time-dependent overload voltage, with different possible cooling conditions for the surfaces, and locally variable superconducting and thermal properties. We find that the cryogen boil-off parameters critically influence the stability of a limiter. The recovery time after a fault increases strongly with thickness. Above a critical thickness, the temperature is unstable even for a small applied AC voltage. The maximum voltage and maximum current during a short fault are correlated by a simple exponential law.

  10. Efficient Scheduling of Scientific Workflows with Energy Reduction Using Novel Discrete Particle Swarm Optimization and Dynamic Voltage Scaling for Computational Grids

    Directory of Open Access Journals (Sweden)

    M. Christobel

    2015-01-01

    Full Text Available One of the most significant and topmost parameters in the real world computing environment is energy. Minimizing energy imposes benefits like reduction in power consumption, decrease in the cooling rates of the computing processors, provision of a green environment, and so forth. In fact, computation time and energy are directly proportional to each other, and the minimization of computation time may yield cost effective energy consumption. Proficient scheduling of Bag-of-Tasks in the grid environment results in minimum computation time. In this paper, a novel discrete particle swarm optimization (DPSO) algorithm based on the particle's best position (pbDPSO) and global best position (gbDPSO) is adopted to find the global optimal solution for higher dimensions. This novel DPSO yields a better schedule with minimum computation time compared to the Earliest Deadline First (EDF) and First Come First Serve (FCFS) algorithms, which comparably reduces energy. Other scheduling parameters, such as job completion ratio and lateness, are also calculated and compared with EDF and FCFS. An energy improvement of up to 28% was obtained when Makespan Conservative Energy Reduction (MCER) and Dynamic Voltage Scaling (DVS) were used in the proposed DPSO algorithm.
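
    To make the MCER-plus-DVS energy argument concrete, here is a toy slack-reclamation step: each scheduled task is slowed just enough to fill the gap before the next task starts, and the energy is compared under a simple E ∝ V²·cycles model with V ∝ f. The task set, the floor frequency of 0.2 and the energy model are illustrative assumptions, not the paper's grid workflow model.

```python
# Illustrative slack-reclamation step in the spirit of MCER + DVS: after a schedule
# is fixed, each task is slowed just enough to fill the idle gap before the next
# task's start time, which lowers the voltage/frequency and hence the energy.
TASKS = [                       # (name, runtime at f_max [s], start, next task's start)
    ("t1", 2.0, 0.0, 5.0),
    ("t2", 3.0, 5.0, 9.0),
    ("t3", 1.0, 9.0, 13.0),
]
F_MAX = 1.0                     # normalized maximum frequency

def reclaim(tasks):
    total_before = total_after = 0.0
    for name, runtime, start, next_start in tasks:
        window = next_start - start
        f = max(runtime / window, 0.2) * F_MAX      # slow down just enough to fill the window
        cycles = runtime * F_MAX                    # the amount of work is fixed regardless of speed
        # Dynamic CMOS energy E ∝ V^2 * cycles, and V is assumed proportional to f.
        e_before = F_MAX ** 2 * cycles
        e_after = f ** 2 * cycles
        total_before += e_before
        total_after += e_after
        print(f"{name}: f={f:.2f}, energy {e_before:.2f} -> {e_after:.2f}")
    print(f"total energy saving: {100 * (1 - total_after / total_before):.1f} %")

reclaim(TASKS)
```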

  11. Efficient Scheduling of Scientific Workflows with Energy Reduction Using Novel Discrete Particle Swarm Optimization and Dynamic Voltage Scaling for Computational Grids

    Science.gov (United States)

    Christobel, M.; Tamil Selvi, S.; Benedict, Shajulin

    2015-01-01

    One of the most significant and topmost parameters in the real world computing environment is energy. Minimizing energy imposes benefits like reduction in power consumption, decrease in the cooling rates of the computing processors, provision of a green environment, and so forth. In fact, computation time and energy are directly proportional to each other, and the minimization of computation time may yield cost effective energy consumption. Proficient scheduling of Bag-of-Tasks in the grid environment results in minimum computation time. In this paper, a novel discrete particle swarm optimization (DPSO) algorithm based on the particle's best position (pbDPSO) and global best position (gbDPSO) is adopted to find the global optimal solution for higher dimensions. This novel DPSO yields a better schedule with minimum computation time compared to the Earliest Deadline First (EDF) and First Come First Serve (FCFS) algorithms, which comparably reduces energy. Other scheduling parameters, such as job completion ratio and lateness, are also calculated and compared with EDF and FCFS. An energy improvement of up to 28% was obtained when Makespan Conservative Energy Reduction (MCER) and Dynamic Voltage Scaling (DVS) were used in the proposed DPSO algorithm. PMID:26075296

  12. Voltage stability issues for a benchmark grid model including large scale wind power

    DEFF Research Database (Denmark)

    Eek, J.; Lund, T.; Marzio, G. Di

    2006-01-01

    The objective of the paper is to investigate how the voltage stability of a relatively weak network after a grid fault is affected by the connection of a large wind park. A theoretical discussion of the stationary and dynamic characteristics of the squirrel cage induction generator (SCIG) and the doubly fed induction generator (DFIG) is given. Further, a case study of a wind park connected to the transmission system through an existing 132 kV regional distribution line is presented. For the SCIG it is concluded that a stationary torque curve, calculated under consideration of the impedance of the network and the saturation of the external reactive power compensation units, provides a good basis for evaluation of the voltage stability. For the DFIG it is concluded that the speed stability limit is mainly determined by the voltage limitation of the rotor converter.

  13. Control and Protection in Low Voltage Grid with Large Scale Renewable Electricity Generation

    DEFF Research Database (Denmark)

    Mustafa, Ghullam

    The benefits of renewable energy based DGs are reduced CO2 emissions, reduced operational cost, as almost no fuel is used for their operation, and lower transmission and distribution losses, as these units are normally built near the load centers. This has also resulted in some operational challenges due to the unpredictable nature of such power generation sources. Some of the operational challenges include voltage variations due to power fluctuations coming from the DG units. On the other hand, it has also opened up some opportunities. One of the opportunities is islanding operation of the distribution system with DG units … One of the inverter controllers must be developed for Voltage-Frequency (VF) mode, and the others for either PV or PQ modes. The operation of the MG with several PV inverters and a single VF inverter is similar to the operation of an MG with a synchronous machine as the slack bus. The VF inverter establishes the voltage …

  14. Chronic ciguatoxin treatment induces synaptic scaling through voltage gated sodium channels in cortical neurons.

    Science.gov (United States)

    Martín, Víctor; Vale, Carmen; Rubiolo, Juan A; Roel, Maria; Hirama, Masahiro; Yamashita, Shuji; Vieytes, Mercedes R; Botana, Luís M

    2015-06-15

    Ciguatoxins are sodium channels activators that cause ciguatera, one of the most widespread nonbacterial forms of food poisoning, which presents with long-term neurological alterations. In central neurons, chronic perturbations in activity induce homeostatic synaptic mechanisms that adjust the strength of excitatory synapses and modulate glutamate receptor expression in order to stabilize the overall activity. Immediate early genes, such as Arc and Egr1, are induced in response to activity changes and underlie the trafficking of glutamate receptors during neuronal homeostasis. To better understand the long lasting neurological consequences of ciguatera, it is important to establish the role that chronic changes in activity produced by ciguatoxins represent to central neurons. Here, the effect of a 30 min exposure of 10-13 days in vitro (DIV) cortical neurons to the synthetic ciguatoxin CTX 3C on Arc and Egr1 expression was evaluated using real-time polymerase chain reaction approaches. Since the toxin increased the mRNA levels of both Arc and Egr1, the effect of CTX 3C in NaV channels, membrane potential, firing activity, miniature excitatory postsynaptic currents (mEPSCs), and glutamate receptors expression in cortical neurons after a 24 h exposure was evaluated using electrophysiological and western blot approaches. The data presented here show that CTX 3C induced an upregulation of Arc and Egr1 that was prevented by previous coincubation of the neurons with the NaV channel blocker tetrodotoxin. In addition, chronic CTX 3C caused a concentration-dependent shift in the activation voltage of NaV channels to more negative potentials and produced membrane potential depolarization. Moreover, 24 h treatment of cortical neurons with 5 nM CTX 3C decreased neuronal firing and induced synaptic scaling mechanisms, as evidenced by a decrease in the amplitude of mEPSCs and downregulation in the protein level of glutamate receptors that was also prevented by tetrodotoxin

  15. ALGORITHM FOR DYNAMIC SCALING RELATIONAL DATABASE IN CLOUDS

    Directory of Open Access Journals (Sweden)

    Alexander V. Boichenko

    2014-01-01

    Full Text Available This article analyzes the main methods of scaling databases (replication, sharding) and their support in popular relational databases and NoSQL solutions with different data models: document-oriented, key-value, column-oriented and graph. The article presents an algorithm for the dynamic scaling of a relational database (DB) that takes into account the specifics of the different types of logical database model. This article was prepared with the support of RFBR (grant № 13-07-00749).

  16. Thermoelectric voltage at a nanometer-scale heated tip point contact

    Science.gov (United States)

    Fletcher, Patrick C.; Lee, Byeonghee; King, William P.

    2012-01-01

    We report thermoelectric voltage measurements between the platinum-coated tip of a heated atomic force microscope (AFM) cantilever and a gold-coated substrate. The cantilevers have an integrated heater-thermometer element made from doped single crystal silicon, and a platinum tip. The voltage can be measured at the tip, independent from the cantilever heating. We used the thermocouple junction between the platinum tip and the gold substrate to measure thermoelectric voltage during heating. Experiments used either sample-side or tip-side heating, over the temperature range 25-275 °C. The tip-substrate contact is ˜4 nm in diameter and its average measured Seebeck coefficient is 3.4 μV K⁻¹. The thermoelectric voltage is used to determine tip-substrate interface temperature when the substrate is either glass or quartz. When the non-dimensional cantilever heater temperature is 1, the tip-substrate interface temperature is 0.593 on glass and 0.125 on quartz. Thermal contact resistance between the tip and the substrate heavily influences the tip-substrate interface temperature. Measurements agree well with modeling when the tip-substrate interface contact resistance is 10⁸ K W⁻¹.

  17. Large Scale Solar Power Integration in Distribution Grids : PV Modelling, Voltage Support and Aggregation Studies

    NARCIS (Netherlands)

    Samadi, A.

    2014-01-01

    Long term supporting schemes for photovoltaic (PV) system installation have led to accommodating large numbers of PV systems within load pockets in distribution grids. High penetrations of PV systems can cause new technical challenges, such as voltage rise due to reverse power flow during light load

  18. Thermoelectric voltage at a nanometer-scale heated tip point contact

    International Nuclear Information System (INIS)

    Fletcher, Patrick C; Lee, Byeonghee; King, William P

    2012-01-01

    We report thermoelectric voltage measurements between the platinum-coated tip of a heated atomic force microscope (AFM) cantilever and a gold-coated substrate. The cantilevers have an integrated heater–thermometer element made from doped single crystal silicon, and a platinum tip. The voltage can be measured at the tip, independent from the cantilever heating. We used the thermocouple junction between the platinum tip and the gold substrate to measure thermoelectric voltage during heating. Experiments used either sample-side or tip-side heating, over the temperature range 25–275 °C. The tip–substrate contact is ∼4 nm in diameter and its average measured Seebeck coefficient is 3.4 μV K⁻¹. The thermoelectric voltage is used to determine tip–substrate interface temperature when the substrate is either glass or quartz. When the non-dimensional cantilever heater temperature is 1, the tip–substrate interface temperature is 0.593 on glass and 0.125 on quartz. Thermal contact resistance between the tip and the substrate heavily influences the tip–substrate interface temperature. Measurements agree well with modeling when the tip–substrate interface contact resistance is 10⁸ K W⁻¹. (paper)

  19. A Topology Visualization Early Warning Distribution Algorithm for Large-Scale Network Security Incidents

    Directory of Open Access Journals (Sweden)

    Hui He

    2013-01-01

    Full Text Available It is of great significance to research the early warning system for large-scale network security incidents. It can improve the network system's emergency response capabilities, alleviate the damage of cyber attacks, and strengthen the system's counterattack ability. A comprehensive early warning system is presented in this paper, which combines active measurement and anomaly detection. The key visualization algorithm and technology of the system are mainly discussed. The large-scale network system's plane visualization is realized based on the divide-and-conquer approach. First, the topology of the large-scale network is divided into some small-scale networks by the MLkP/CR algorithm. Second, the subgraph plane visualization algorithm is applied to each small-scale network. Finally, the small-scale networks' topologies are combined into a topology based on the automatic distribution algorithm of force analysis. As the algorithm transforms the large-scale network topology plane visualization problem into a series of small-scale network topology plane visualization and distribution problems, it has higher parallelism and is able to handle the display of ultra-large-scale network topologies.

  20. Multi-scale graph-cut algorithm for efficient water-fat separation.

    Science.gov (United States)

    Berglund, Johan; Skorpil, Mikael

    2017-09-01

    To improve the accuracy and robustness to noise in water-fat separation by unifying the multiscale and graph cut based approaches to B0-correction. A previously proposed water-fat separation algorithm that corrects for B0 field inhomogeneity in 3D by a single quadratic pseudo-Boolean optimization (QPBO) graph cut was incorporated into a multi-scale framework, where field map solutions are propagated from coarse to fine scales for voxels that are not resolved by the graph cut. The accuracy of the single-scale and multi-scale QPBO algorithms was evaluated against benchmark reference datasets. The robustness to noise was evaluated by adding noise to the input data prior to water-fat separation. Both algorithms achieved the highest accuracy when compared with seven previously published methods, while computation times were acceptable for implementation in clinical routine. The multi-scale algorithm was more robust to noise than the single-scale algorithm, while causing only a small increase (+10%) of the reconstruction time. The proposed 3D multi-scale QPBO algorithm offers accurate water-fat separation, robustness to noise, and fast reconstruction. The software implementation is freely available to the research community. Magn Reson Med 78:941-949, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
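
    To make the coarse-to-fine idea concrete, the sketch below shows only the multi-scale skeleton: the field-map problem is downsampled into a pyramid, solved at the coarsest level, and the solution is upsampled as the initialization of the next finer level. The per-scale QPBO graph-cut solver is replaced by a placeholder, and the factor-of-two pyramid and all other details are assumptions for illustration, not the published algorithm.

```python
import numpy as np

def downsample(a):
    """Average 2x2 blocks (even dimensions assumed)."""
    return a.reshape(a.shape[0] // 2, 2, a.shape[1] // 2, 2).mean(axis=(1, 3))

def solve_scale(data, init_field):
    """Placeholder for the per-scale QPBO graph-cut B0 solver.

    A real implementation would return the graph-cut field map plus a mask of
    voxels actually resolved at this scale; here the initial field is kept."""
    return init_field.copy()

def multiscale_fieldmap(data, levels=3):
    # build a fine-to-coarse pyramid of the input data
    pyramid = [data]
    for _ in range(levels - 1):
        pyramid.append(downsample(pyramid[-1]))
    field = np.zeros_like(pyramid[-1])               # start flat at the coarsest scale
    for level in reversed(range(levels)):
        field = solve_scale(pyramid[level], field)
        if level > 0:                                # propagate to the next finer scale:
            field = np.kron(field, np.ones((2, 2)))  # unresolved fine voxels inherit it
    return field

print(multiscale_fieldmap(np.random.rand(16, 16)).shape)   # -> (16, 16)
```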

  1. Inversion algorithms for large-scale geophysical electromagnetic measurements

    International Nuclear Information System (INIS)

    Abubakar, A; Habashy, T M; Li, M; Liu, J

    2009-01-01

    Low-frequency surface electromagnetic prospecting methods have been gaining a lot of interest because of their capabilities to directly detect hydrocarbon reservoirs and to complement seismic measurements for geophysical exploration applications. There are two types of surface electromagnetic surveys. The first is an active measurement where we use an electric dipole source towed by a ship over an array of seafloor receivers. This measurement is called the controlled-source electromagnetic (CSEM) method. The second is the Magnetotelluric (MT) method driven by natural sources. This passive measurement also uses an array of seafloor receivers. Both surface electromagnetic methods measure electric and magnetic field vectors. In order to extract maximal information from these CSEM and MT data we employ a nonlinear inversion approach in their interpretation. We present two types of inversion approaches. The first approach is the so-called pixel-based inversion (PBI) algorithm. In this approach the investigation domain is subdivided into pixels, and by using an optimization process the conductivity distribution inside the domain is reconstructed. The optimization process uses the Gauss–Newton minimization scheme augmented with various forms of regularization. To automate the algorithm, the regularization term is incorporated using a multiplicative cost function. This PBI approach has demonstrated its ability to retrieve reasonably good conductivity images. However, the reconstructed boundaries and conductivity values of the imaged anomalies are usually not quantitatively resolved. Nevertheless, the PBI approach can provide useful information on the location, the shape and the conductivity of the hydrocarbon reservoir. The second method is the so-called model-based inversion (MBI) algorithm, which uses a priori information on the geometry to reduce the number of unknown parameters and to improve the quality of the reconstructed conductivity image. This MBI approach can

  2. Algorithms

    Indian Academy of Sciences (India)

    polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.

  3. Effects of switching frequency and leakage inductance on slow-scale stability in a voltage controlled flyback converter

    International Nuclear Information System (INIS)

    Wang Fa-Qiang; Ma Xi-Kui

    2013-01-01

    The effects of both the switching frequency and the leakage inductance on the slow-scale stability in a voltage controlled flyback converter are investigated in this paper. Firstly, the system description and its mathematical model are presented. Then, the improved averaged model, which covers both the switching frequency and the leakage inductance, is established, and the effects of these two parameters on the slow-scale stability in the system are analyzed. It is found that the occurrence of Hopf bifurcation in the system is the main reason for losing its slow-scale stability and both the switching frequency and the leakage inductance have an important effect on this slow-scale stability. Finally, the effectiveness of the improved averaged model and that of the corresponding theoretical analysis are confirmed by the simulation results and the experimental results. (general)

  4. Multi-Objective Scheduling Optimization Based on a Modified Non-Dominated Sorting Genetic Algorithm-II in Voltage Source Converter−Multi-Terminal High Voltage DC Grid-Connected Offshore Wind Farms with Battery Energy Storage Systems

    Directory of Open Access Journals (Sweden)

    Ho-Young Kim

    2017-07-01

    Full Text Available Improving the performance of power systems has become a challenging task for system operators in an open access environment. This paper presents an optimization approach for solving the multi-objective scheduling problem using a modified non-dominated sorting genetic algorithm in a hybrid network of meshed alternating current (AC)/wind farm grids. This approach considers voltage and power control modes based on multi-terminal voltage source converter high-voltage direct current (MTDC) and battery energy storage systems (BESS). To enhance the hybrid network station performance, we implement an optimal process based on the battery energy storage system operational strategy for multi-objective scheduling over a 24 h demand profile. Furthermore, the proposed approach is formulated as a master problem and a set of sub-problems associated with the hybrid network station to improve the overall computational efficiency using Benders’ decomposition. Based on the results of the simulations conducted on modified Institute of Electrical and Electronics Engineers (IEEE) 14-bus and IEEE 118-bus test systems, we demonstrate and confirm the applicability, effectiveness and validity of the proposed approach.

  5. Novel algorithm of large-scale simultaneous linear equations

    International Nuclear Information System (INIS)

    Fujiwara, T; Hoshi, T; Yamamoto, S; Sogabe, T; Zhang, S-L

    2010-01-01

    We review our recently developed methods of solving large-scale simultaneous linear equations and applications to electronic structure calculations both in one-electron theory and many-electron theory. This is the shifted COCG (conjugate orthogonal conjugate gradient) method based on the Krylov subspace, and the most important issue for applications is the shift equation and the seed switching method, which greatly reduce the computational cost. The applications to nano-scale Si crystals and the double orbital extended Hubbard model are presented.

  6. A Case Study of Limited Dynamic Voltage Frequency Scaling in Low-Power Processors

    OpenAIRE

    Hwan Su Jung; Ahn Jun Gil; Jong Tae Kim

    2017-01-01

    Power management techniques are necessary to save power in the microprocessor. By changing the frequency and/or operating voltage of the processor, DVFS can control power consumption. In this paper, we perform a case study to find the optimal power state transition for DVFS. We propose an equation to find the optimal ratio between executions of states while taking into account the processing deadline and the power state transition delay overhead. The experiment is performed on the Cortex-M4 ...
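
    The record does not reproduce the paper's equation, so the sketch below only illustrates the underlying trade-off for a hypothetical two-state case: given a cycle budget, two frequency levels, a deadline, and a state-transition overhead, it computes the largest fraction of work that can run in the low-power state without missing the deadline, and the resulting energy. The single-transition assumption and all numbers are illustrative, not the authors' model.

```python
def low_state_fraction(cycles, f_high, f_low, deadline, t_switch):
    """Largest fraction x of the workload that can run in the low-power state
    while still meeting the deadline, assuming one state transition of cost t_switch.

    Solves  x*cycles/f_low + (1 - x)*cycles/f_high + t_switch <= deadline  for x.
    """
    slack = deadline - cycles / f_high - t_switch
    if slack <= 0:
        return 0.0                       # no room for the low-power state at all
    x = slack / (cycles / f_low - cycles / f_high)
    return max(0.0, min(1.0, x))

def energy(cycles, x, f_high, f_low, p_high, p_low, e_switch):
    """Energy of the split schedule (switching energy counted once if used)."""
    return (x * cycles / f_low * p_low
            + (1 - x) * cycles / f_high * p_high
            + (e_switch if 0 < x < 1 else 0.0))

# toy numbers: 1e8 cycles, 100 MHz vs 50 MHz, 10 ms switch overhead, 1.5 s deadline
x = low_state_fraction(1e8, 100e6, 50e6, 1.5, 0.01)
print(x, energy(1e8, x, 100e6, 50e6, 0.2, 0.06, 0.001))
```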

  7. Morphological rational multi-scale algorithm for color contrast enhancement

    Science.gov (United States)

    Peregrina-Barreto, Hayde; Terol-Villalobos, Iván R.

    2010-01-01

    The main goal of contrast enhancement is to improve the visual appearance of an image, but it is also used to provide a transformed image for segmentation. In mathematical morphology, several works have been derived from the contrast enhancement framework proposed by Meyer and Serra. However, when working with images with a wide range of scene brightness, for example when strong highlights and deep shadows appear in the same image, the proposed morphological methods do not achieve the enhancement. In this work, a rational multi-scale method, which uses a class of morphological connected filters called filters by reconstruction, is proposed. Granulometry is used to find the most accurate scales for the filters and to avoid the use of other, less significant scales. The CIE-u'v'Y' space is used to present our results since it takes Weber's law into account and, by avoiding the creation of new colors, it permits modifying the luminance values without affecting the hue. The luminance component (Y') is enhanced separately using the proposed method, and it is then used to enhance the chromatic components (u', v') by means of the center-of-gravity law of color mixing.

  8. Capacitance estimation algorithm based on DC-link voltage harmonics using artificial neural network in three-phase motor drive systems

    DEFF Research Database (Denmark)

    Soliman, Hammam Abdelaal Hammam; Davari, Pooya; Wang, Huai

    2017-01-01

    to industry. In this digest, a condition monitoring methodology that estimates the capacitance value of the dc-link capacitor in a three phase Front-End diode bridge motor drive is proposed. The proposed software methodology is based on an Artificial Neural Network (ANN) algorithm. The harmonics of the dc-link voltage are used as training data to the Artificial Neural Network. Fast Fourier Transform (FFT) of the dc-link voltage is analysed in order to study the impact of capacitance variation on the harmonics order. Laboratory experiments are conducted to validate the proposed methodology and the error analysis...... In modern design of power electronic converters, reliability of dc-link capacitors is one of the critical aspects considered. The industrial field has been attracted to the monitoring of their health condition and the estimation of their ageing process status. However, the existing condition
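
    A minimal sketch of this kind of pipeline, assuming numpy and scikit-learn are available: extract the magnitudes of a few ripple harmonics from the sampled dc-link voltage via an FFT and feed them to a small neural-network regressor trained on records with known capacitance. The synthetic voltage model, sampling rate, ripple frequency, network size, and helper names are all hypothetical, not the authors' setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def harmonic_features(v_dc, fs, f0, n_harmonics=6):
    """Magnitudes of the first few multiples of the ripple frequency f0."""
    spectrum = np.abs(np.fft.rfft(v_dc)) / len(v_dc)
    freqs = np.fft.rfftfreq(len(v_dc), d=1.0 / fs)
    return [spectrum[np.argmin(np.abs(freqs - k * f0))] for k in range(1, n_harmonics + 1)]

def synthetic_record(capacitance_uF, fs=10e3, f0=300.0, n=2000):
    """Crude stand-in for a measured dc-link voltage: ripple shrinks as C grows."""
    t = np.arange(n) / fs
    ripple = sum((2000.0 / capacitance_uF) / k * np.sin(2 * np.pi * k * f0 * t)
                 for k in range(1, 4))
    return 400.0 + ripple + 0.05 * np.random.randn(n)

# train on records with known capacitance, then estimate an unseen one
caps = np.linspace(200, 500, 40)                       # known capacitances (uF)
X = np.array([harmonic_features(synthetic_record(c), 10e3, 300.0) for c in caps])
model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0).fit(X, caps)
print(model.predict([harmonic_features(synthetic_record(350.0), 10e3, 300.0)]))
```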

  9. The combination of a reduction in contrast agent dose with low tube voltage and an adaptive statistical iterative reconstruction algorithm in CT enterography: Effects on image quality and radiation dose.

    Science.gov (United States)

    Feng, Cui; Zhu, Di; Zou, Xianlun; Li, Anqin; Hu, Xuemei; Li, Zhen; Hu, Daoyu

    2018-03-01

    To investigate the subjective and quantitative image quality and radiation exposure of CT enterography (CTE) examination performed at low tube voltage and low concentration of contrast agent with adaptive statistical iterative reconstruction (ASIR) algorithm, compared with conventional CTE. One hundred thirty-seven patients with suspected or proved gastrointestinal diseases underwent contrast-enhanced CTE in a multidetector computed tomography (MDCT) scanner. All cases were assigned to 2 groups. Group A (n = 79) underwent CT with low tube voltage based on patient body mass index (BMI) (BMI contrast agent (270 mg I/mL), the images were reconstructed with the standard filtered back projection (FBP) algorithm and the 50% ASIR algorithm. Group B (n = 58) underwent conventional CTE with 120 kVp and 350 mg I/mL contrast agent, the images were reconstructed with the FBP algorithm. The computed tomography dose index volume (CTDIvol), dose length product (DLP), effective dose (ED), and total iodine dosage were calculated and compared. The CT values, contrast-to-noise ratio (CNR), and signal-to-noise ratio (SNR) of the normal bowel wall, gastrointestinal lesions, and mesenteric vessels were assessed and compared. The subjective image quality was assessed independently and blindly by 2 radiologists using a 5-point Likert scale. The differences of values for CTDIvol (8.64 ± 2.72 vs 11.55 ± 3.95, P  .05) and all image quality scores were greater than or equal to 3 (moderate). Fifty percent ASIR-A group images provided lower image noise, but similar or higher quantitative image quality in comparison with FBP-B group images. Compared with the conventional protocol, CTE performed at low tube voltage and low concentration of contrast agent with the 50% ASIR algorithm produced a diagnostically acceptable image quality with a mean ED of 6.34 mSv and a total iodine dose reduction of 26.1%.

  10. Genome Scale Modeling in Systems Biology: Algorithms and Resources

    Science.gov (United States)

    Najafi, Ali; Bidkhori, Gholamreza; Bozorgmehr, Joseph H.; Koch, Ina; Masoudi-Nejad, Ali

    2014-01-01

    In recent years, in silico studies and trial simulations have complemented experimental procedures. A model is a description of a system, and a system is any collection of interrelated objects; an object, moreover, is some elemental unit upon which observations can be made but whose internal structure either does not exist or is ignored. Therefore, any network analysis approach is critical for successful quantitative modeling of biological systems. This review highlights some of the most popular and important modeling algorithms, tools, and emerging standards for representing, simulating and analyzing cellular networks in five sections. Also, we try to show these concepts by means of simple examples and appropriate images and graphs. Overall, systems biology aims for a holistic description and understanding of biological processes by an integration of analytical experimental approaches along with synthetic computational models. In fact, biological networks have been developed as a platform for integrating information from high- to low-throughput experiments for the analysis of biological systems. We provide an overview of all processes used in modeling and simulating biological networks in such a way that they can become easily understandable for researchers with both biological and mathematical backgrounds. Consequently, given the complexity of generated experimental data and cellular networks, it is no surprise that researchers have turned to computer simulation and the development of more theory-based approaches to augment and assist in the development of a fully quantitative understanding of cellular dynamics. PMID:24822031

  11. Survey of high-voltage pulse technology suitable for large-scale plasma source ion implantation processes

    International Nuclear Information System (INIS)

    Reass, W.A.

    1994-01-01

    Many new plasma process ideas are finding their way from the research lab to the manufacturing plant floor. These require high voltage (HV) pulse power equipment, which must be optimized for the application, system efficiency, and reliability. Although no single HV pulse technology is suitable for all plasma processes, various classes of high voltage pulsers may offer greater versatility and economy to the manufacturer. Technology developed for existing radar and particle accelerator modulator power systems can be utilized to develop a modern large scale plasma source ion implantation (PSII) system. The HV pulse networks can be broadly defined by two classes of systems: those that generate the voltage directly, and those that use some type of pulse forming network and step-up transformer. This article will examine these HV pulse technologies and discuss their applicability to the specific PSII process. Typical systems that will be reviewed include high power solid state systems, hard tube systems such as crossed-field "hollow beam" switch tubes and planar tetrodes, and "soft" tube systems with crossatrons and thyratrons. Results will be tabulated and suggestions provided for a particular PSII process

  12. Time scales of bias voltage effects in FE/MgO-based magnetic tunnel junctions with voltage-dependent perpendicular anisotropy

    International Nuclear Information System (INIS)

    Lytvynenko, Ia.M.; Hauet, T.; Montaigne, F.; Bibyk, V.V.; Andrieu, S.

    2015-01-01

    Interplay between voltage-induced magnetic anisotropy transition and voltage-induced atomic diffusion is studied in epitaxial V/Fe (0.7 nm)/ MgO/ Fe(5 nm)/Co/Au magnetic tunnel junction where thin Fe soft electrode has in-plane or out-of-plane anisotropy depending on the sign of the bias voltage. We investigate the origin of the slow resistance variation occurring when switching bias voltage in opposite polarity. We demonstrate that the time to reach resistance stability after voltage switching is reduced when increasing the voltage amplitude or the temperature. A single energy barrier of about 0.2 eV height is deduced from temperature dependence. Finally, we demonstrate that the resistance change is not correlated to a change in soft electrode anisotropy. This conclusion contrasts with observations recently reported on analogous systems. - Highlights: • Voltage-induced time dependence of resistance is studied in epitaxial Fe/MgO/Fe. • Resistance change is not related to the bottom Fe/MgO interface. • The effect is thermally activated with an energy barrier of the order of 0.2 eV height

  13. Current-voltage characteristics of quantum-point contacts in the closed-channel regime: Transforming the bias voltage into an energy scale

    DEFF Research Database (Denmark)

    Gloos, K.; Utko, P.; Aagesen, M.

    2006-01-01

    We investigate the I(V) characteristics (current versus bias voltage) of side-gated quantum-point contacts, defined in GaAs/AlxGa1-xAs heterostructures. These point contacts are operated in the closed-channel regime, that is, at fixed gate voltages below zero-bias pinch-off for conductance. Our....... Such a built-in energy-voltage calibration allows us to distinguish between the different contributions to the electron transport across the pinched-off contact due to thermal activation or quantum tunneling. The first involves the height of the barrier, and the latter also its length. In the model that we...

  14. Power sharing algorithm for vector controlled six-phase AC motor with four customary three-phase voltage source inverter drive

    DEFF Research Database (Denmark)

    Padmanaban, Sanjeevikumar; Grandi, Gabriele; Blaabjerg, Frede

    2015-01-01

    This paper considered a six-phase (asymmetrical) induction motor, keeping a 30° phase displacement between the two sets of three-phase open-end stator windings. The drive system consists of four classical three-phase voltage inverters (VSIs) and all four dc sources are deliberately kept isolated......) by nearest three vectors (NTVs) approach is adopted to regulate each couple of VSIs. The proposed power sharing algorithm is verified by complete numerical simulation modeling (Matlab/Simulink-PLECS software) of the whole ac drive system by observing the dynamic behaviors in different designed conditions. Set...

  15. Betweenness-based algorithm for a partition scale-free graph

    International Nuclear Information System (INIS)

    Zhang Bai-Da; Wu Jun-Jie; Zhou Jing; Tang Yu-Hua

    2011-01-01

    Many real-world networks are found to be scale-free. However, graph partition technology, as a technology capable of parallel computing, performs poorly when scale-free graphs are provided. The reason for this is that traditional partitioning algorithms are designed for random networks and regular networks, rather than for scale-free networks. Multilevel graph-partitioning algorithms are currently considered to be the state of the art and are used extensively. In this paper, we analyse the reasons why traditional multilevel graph-partitioning algorithms perform poorly and present a new multilevel graph-partitioning paradigm, top-down partitioning, which derives its name from the comparison with traditional bottom-up partitioning. A new multilevel partitioning algorithm, named the betweenness-based partitioning algorithm, is also presented as an implementation of the top-down partitioning paradigm. An experimental evaluation of seven different real-world scale-free networks shows that the betweenness-based partitioning algorithm significantly outperforms the existing state-of-the-art approaches. (interdisciplinary physics and related areas of science and technology)
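
    The record does not spell out the partitioning steps, so the sketch below shows one simple way a betweenness-driven, top-down split can work (in the spirit of Girvan-Newman): repeatedly remove the edge with the highest edge betweenness until the graph disconnects, yielding the first bisection. It uses networkx, makes no attempt to balance partition sizes, and is an illustrative assumption rather than the paper's exact algorithm.

```python
import networkx as nx

def betweenness_bisect(G):
    """Split a connected graph into two parts by repeatedly removing the
    edge with the highest betweenness until the graph disconnects."""
    H = G.copy()
    while nx.is_connected(H):
        eb = nx.edge_betweenness_centrality(H)
        u, v = max(eb, key=eb.get)       # most "central" edge carries the most shortest paths
        H.remove_edge(u, v)
    return list(nx.connected_components(H))

# toy scale-free graph
G = nx.barabasi_albert_graph(200, 2, seed=1)
parts = betweenness_bisect(G)
print([len(p) for p in parts])
```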

  16. Power sharing algorithm for vector controlled six-phase AC motor with four customary three-phase voltage source inverter drive

    Directory of Open Access Journals (Sweden)

    Sanjeevikumar Padmanaban

    2015-09-01

    Full Text Available This paper considered a six-phase (asymmetrical) induction motor, keeping a 30° phase displacement between the two sets of three-phase open-end stator windings. The drive system consists of four classical three-phase voltage inverters (VSIs) and all four dc sources are deliberately kept isolated. Therefore, zero-sequence/homopolar current components cannot flow. An original and effective power sharing algorithm is proposed in this paper with three variables (degrees of freedom) based on synchronous field-oriented control (FOC). A standard three-level space vector pulse width modulation (SVPWM) by nearest three vectors (NTVs) approach is adopted to regulate each couple of VSIs. The proposed power sharing algorithm is verified by complete numerical simulation modeling (Matlab/Simulink-PLECS software) of the whole ac drive system by observing the dynamic behaviors in different designed conditions. A set of results is provided in this paper, confirming good agreement with the theoretical development.

  17. BFL: a node and edge betweenness based fast layout algorithm for large scale networks

    Science.gov (United States)

    Hashimoto, Tatsunori B; Nagasaki, Masao; Kojima, Kaname; Miyano, Satoru

    2009-01-01

    Background Network visualization would serve as a useful first step for analysis. However, current graph layout algorithms for biological pathways are insensitive to biologically important information, e.g. subcellular localization, biological node and graph attributes, or/and not available for large scale networks, e.g. more than 10000 elements. Results To overcome these problems, we propose the use of a biologically important graph metric, betweenness, a measure of network flow. This metric is highly correlated with many biological phenomena such as lethality and clusters. We devise a new fast parallel algorithm calculating betweenness to minimize the preprocessing cost. Using this metric, we also invent a node and edge betweenness based fast layout algorithm (BFL). BFL places the high-betweenness nodes to optimal positions and allows the low-betweenness nodes to reach suboptimal positions. Furthermore, BFL reduces the runtime by combining a sequential insertion algorithm with betweenness. For a graph with n nodes, this approach reduces the expected runtime of the algorithm to O(n²) when considering edge crossings, and to O(n log n) when considering only density and edge lengths. Conclusion Our BFL algorithm is compared against fast graph layout algorithms and approaches requiring intensive optimizations. For gene networks, we show that our algorithm is faster than all layout algorithms tested while providing readability on par with intensive optimization algorithms. We achieve a 1.4 second runtime for a graph with 4000 nodes and 12000 edges on a standard desktop computer. PMID:19146673

  18. Generalized Nonlinear Chirp Scaling Algorithm for High-Resolution Highly Squint SAR Imaging.

    Science.gov (United States)

    Yi, Tianzhu; He, Zhihua; He, Feng; Dong, Zhen; Wu, Manqing

    2017-11-07

    This paper presents a modified approach for high-resolution, highly squint synthetic aperture radar (SAR) data processing. Several nonlinear chirp scaling (NLCS) algorithms have been proposed to solve the azimuth variance of the frequency modulation rates that are caused by the linear range walk correction (LRWC). However, the azimuth depth of focusing (ADOF) is not handled well by these algorithms. The generalized nonlinear chirp scaling (GNLCS) algorithm that is proposed in this paper uses the method of series reverse (MSR) to improve the ADOF and focusing precision. It also introduces a high order processing kernel to avoid the range block processing. Simulation results show that the GNLCS algorithm can enlarge the ADOF and focusing precision for high-resolution highly squint SAR data.

  19. Generalized Nonlinear Chirp Scaling Algorithm for High-Resolution Highly Squint SAR Imaging

    Directory of Open Access Journals (Sweden)

    Tianzhu Yi

    2017-11-01

    Full Text Available This paper presents a modified approach for high-resolution, highly squint synthetic aperture radar (SAR) data processing. Several nonlinear chirp scaling (NLCS) algorithms have been proposed to solve the azimuth variance of the frequency modulation rates that are caused by the linear range walk correction (LRWC). However, the azimuth depth of focusing (ADOF) is not handled well by these algorithms. The generalized nonlinear chirp scaling (GNLCS) algorithm that is proposed in this paper uses the method of series reverse (MSR) to improve the ADOF and focusing precision. It also introduces a high order processing kernel to avoid the range block processing. Simulation results show that the GNLCS algorithm can enlarge the ADOF and focusing precision for high-resolution highly squint SAR data.

  20. Parallelizing Gene Expression Programming Algorithm in Enabling Large-Scale Classification

    Directory of Open Access Journals (Sweden)

    Lixiong Xu

    2017-01-01

    Full Text Available As one of the most effective function mining algorithms, the Gene Expression Programming (GEP) algorithm has been widely used in classification, pattern recognition, prediction, and other research fields. Based on self-evolution, GEP is able to mine an optimal function for dealing with further complicated tasks. However, in big data research, GEP encounters a low-efficiency issue due to its long mining process. To improve the efficiency of GEP in big data research, especially for processing large-scale classification tasks, this paper presents a parallelized GEP algorithm using the MapReduce computing model. The experimental results show that the presented algorithm is scalable and efficient for processing large-scale classification tasks.
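
    The costly part of such an evolutionary search is usually the fitness evaluation, which is embarrassingly parallel over data partitions. The sketch below mimics the map/reduce split with Python's multiprocessing: the map step scores one candidate on one data chunk, the reduce step sums the partial errors. The chromosome decoding is replaced by a trivial polynomial stand-in, so none of this reflects the authors' MapReduce implementation.

```python
from functools import partial
from multiprocessing import Pool

def eval_expr(chromosome, x):
    # placeholder for GEP chromosome decoding: treat genes as polynomial coefficients
    return sum(c * x ** i for i, c in enumerate(chromosome))

def partial_fitness(chromosome, data_chunk):
    """Map step: squared error of one candidate function on one data partition."""
    return sum((eval_expr(chromosome, x) - y) ** 2 for x, y in data_chunk)

def fitness(chromosome, chunks, pool):
    # Map: evaluate each partition in parallel; Reduce: sum the partial errors
    return sum(pool.map(partial(partial_fitness, chromosome), chunks))

if __name__ == "__main__":
    data = [(x, 3.0 + 2.0 * x) for x in range(1000)]
    chunks = [data[i::4] for i in range(4)]          # 4 data partitions
    with Pool(4) as pool:
        print(fitness([3.0, 2.0], chunks, pool))     # perfect candidate -> 0.0
```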

  1. Algorithms

    Indian Academy of Sciences (India)

    to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...

  2. Impact of scaling voltage and size on the performance of Side-contacted Field Effect Diode

    Science.gov (United States)

    Touchaei, Behnam Jafari; Manavizadeh, Negin

    2018-05-01

    The Side-contacted Field Effect Diode (S-FED), with low leakage current and a high Ion/Ioff ratio, has recently been introduced to suppress short channel effects in the nanoscale regime. The voltage and size scalability of S-FEDs and its effects on power consumption, propagation delay time, and power-delay product are studied in this article. The most attractive property relates to the channel-length-to-channel-thickness ratio of the S-FED, which is significantly reduced in comparison with the MOSFET, while the gates' control over the channel improves and the off-state current is reduced dramatically. This promising advantage not only improves important S-FED characteristics such as the subthreshold slope but also eliminates latch-up and the floating-body effect.

  3. Attofarad resolution capacitance-voltage measurement of nanometer scale field effect transistors utilizing ambient noise

    International Nuclear Information System (INIS)

    Gokirmak, Ali; Inaltekin, Hazer; Tiwari, Sandip

    2009-01-01

    A high resolution capacitance-voltage (C-V) characterization technique, enabling direct measurement of electronic properties at the nanoscale in devices such as nanowire field effect transistors (FETs) through the use of random fluctuations, is described. The minimum noise level required for achieving sub-aF (10⁻¹⁸ F) resolution, the leveraging of stochastic resonance, and the effect of higher levels of noise are illustrated through simulations. The non-linear ΔC(gate-source/drain)-V(gate) response of FETs is utilized to determine the inversion layer capacitance (Cinv) and carrier mobility. The technique is demonstrated by extracting the carrier concentration and effective electron mobility in a nanoscale Si FET with Cinv = 60 aF.

  4. Currency recognition using a smartphone: Comparison between color SIFT and gray scale SIFT algorithms

    OpenAIRE

    Iyad Abu Doush; Sahar AL-Btoush

    2017-01-01

    Banknote recognition means classifying the currency (coin and paper) into the correct class. In this paper, we developed a dataset for Jordanian currency. After that, we applied an automatic mobile recognition system on the dataset using a smartphone and the scale-invariant feature transform (SIFT) algorithm. This is the first attempt, to the best of the authors' knowledge, to recognize both coins and paper banknotes on a smartphone using the SIFT algorithm. SIFT has been developed to be the most robust a...

  5. The multilevel fast multipole algorithm (MLFMA) for solving large-scale computational electromagnetics problems

    CERN Document Server

    Ergul, Ozgur

    2014-01-01

    The Multilevel Fast Multipole Algorithm (MLFMA) for Solving Large-Scale Computational Electromagnetic Problems provides a detailed and instructional overview of implementing MLFMA. The book: presents a comprehensive treatment of the MLFMA algorithm, including basic linear algebra concepts, recent developments on the parallel computation, and a number of application examples; covers solutions of electromagnetic problems involving dielectric objects and perfectly-conducting objects; and discusses applications including scattering from airborne targets, scattering from red

  6. New Technique for Voltage Tracking Control of a Boost Converter Based on the PSO Algorithm and LTspice

    DEFF Research Database (Denmark)

    Farhang, Peyman; Drimus, Alin; Mátéfi-Tempfli, Stefan

    2015-01-01

    In this paper, a new technique is proposed to design a Modified PID (MPID) controller for a Boost converter. An interface between LTspice and MATLAB is carried out to implement the Particle Swarm Optimization (PSO) algorithm. The PSO algorithm which has the appropriate capability to find out...... the optimal solutions is run in MATLAB while it is interfaced with LTspice for simulation of the circuit using actual component models obtained from manufacturers. The PSO is utilized to solve the optimization problem in order to find the optimal parameters of MPID and PID controllers. The performances...
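
    A minimal sketch of the optimization loop, with the LTspice co-simulation replaced by a simple discrete-time first-order plant so the example stays self-contained: PSO searches the (Kp, Ki, Kd) space to minimize a time-weighted error of the closed-loop step response. The plant model, cost function, gain ranges, and PSO settings are all assumptions for illustration, not the authors' setup.

```python
import random

def step_cost(kp, ki, kd, dt=0.001, t_end=0.5):
    """Time-weighted absolute error (ITAE-like) of a PID loop around a
    first-order unity-gain plant (tau = 20 ms) driven by a unit step."""
    y, integ, prev_err, cost = 0.0, 0.0, 1.0, 0.0
    for k in range(int(t_end / dt)):
        err = 1.0 - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        u = max(-100.0, min(100.0, u))          # actuator saturation keeps the sketch bounded
        y += dt / 0.02 * (u - y)                # first-order plant update
        cost += (k * dt) * abs(err) * dt        # ITAE-style penalty
        prev_err = err
    return cost

def pso_tune(n=15, iters=40, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(0.0, 10.0) for _ in range(3)] for _ in range(n)]
    vel = [[0.0] * 3 for _ in range(n)]
    pbest, pcost = [p[:] for p in pos], [step_cost(*p) for p in pos]
    gbest = pbest[pcost.index(min(pcost))][:]
    for _ in range(iters):
        for i in range(n):
            for d in range(3):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = max(0.0, pos[i][d] + vel[i][d])
            c = step_cost(*pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
        gbest = pbest[pcost.index(min(pcost))][:]
    return gbest

print(pso_tune())   # rough (Kp, Ki, Kd) found for the toy plant
```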

  7. Localization Algorithm Based on a Spring Model (LASM for Large Scale Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Shuai Li

    2008-03-01

    Full Text Available A navigation method for a lunar rover based on large scale wireless sensor networks is proposed. To obtain high navigation accuracy and a large exploration area, high node localization accuracy and a large network scale are required. However, the computational and communication complexity and time consumption are greatly increased with the increase of the network scale. A localization algorithm based on a spring model (LASM) method is proposed to reduce the computational complexity, while maintaining the localization accuracy in large scale sensor networks. The algorithm simulates the dynamics of a physical spring system to estimate the positions of nodes. The sensor nodes are set as particles with masses and connected with neighbor nodes by virtual springs. The virtual springs will force the particles to move to the original positions, the node positions correspondingly, from the randomly set positions. Therefore, a blind node position can be determined from the LASM algorithm by calculating the related forces with the neighbor nodes. The computational and communication complexity are O(1) for each node, since the number of the neighbor nodes does not increase proportionally with the network scale size. Three patches are proposed to avoid local optimization, kick out bad nodes and deal with node variation. Simulation results show that the computational and communication complexity are almost constant despite the increase of the network scale size. The time consumption has also been proven to remain almost constant since the calculation steps are almost unrelated with the network scale size.
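
    A minimal sketch of the spring relaxation, under simplifying assumptions not taken from the paper (2D, known anchor positions, measured neighbor distances used as the springs' rest lengths): each blind node is repeatedly pushed by the net spring force of its neighbors until the positions settle. The step size, iteration count, and helper names are illustrative, and the three patches mentioned in the abstract are not modeled.

```python
import math
import random

def lasm(anchors, blind_ids, measured, iters=500, step=0.05):
    """Estimate blind-node positions by relaxing virtual springs whose rest
    lengths are the measured neighbor distances.

    anchors:   {node_id: (x, y)} known positions
    blind_ids: list of node ids with unknown positions
    measured:  {(i, j): distance} for neighboring pairs
    """
    pos = dict(anchors)
    for b in blind_ids:                          # random initial guess
        pos[b] = (random.uniform(0, 100), random.uniform(0, 100))
    for _ in range(iters):
        for b in blind_ids:
            fx = fy = 0.0
            for (i, j), d0 in measured.items():
                if b not in (i, j):
                    continue
                other = j if i == b else i
                dx, dy = pos[other][0] - pos[b][0], pos[other][1] - pos[b][1]
                d = math.hypot(dx, dy) or 1e-9
                f = (d - d0) / d                 # spring force toward rest length d0
                fx += f * dx
                fy += f * dy
            pos[b] = (pos[b][0] + step * fx, pos[b][1] + step * fy)
    return {b: pos[b] for b in blind_ids}

# toy example: three anchors, one blind node whose true position is (25, 25)
anchors = {0: (0, 0), 1: (50, 0), 2: (0, 50)}
meas = {(0, 3): math.hypot(25, 25), (1, 3): math.hypot(25, 25), (2, 3): math.hypot(25, 25)}
print(lasm(anchors, [3], meas))
```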

  8. A new asynchronous parallel algorithm for inferring large-scale gene regulatory networks.

    Directory of Open Access Journals (Sweden)

    Xiangyun Xiao

    Full Text Available The reconstruction of gene regulatory networks (GRNs) from high-throughput experimental data has been considered one of the most important issues in systems biology research. With the development of high-throughput technology and the complexity of biological problems, we need to reconstruct GRNs that contain thousands of genes. However, when many existing algorithms are used to handle these large-scale problems, they will encounter two important issues: low accuracy and high computational cost. To overcome these difficulties, the main goal of this study is to design an effective parallel algorithm to infer large-scale GRNs based on high-performance parallel computing environments. In this study, we proposed a novel asynchronous parallel framework to improve the accuracy and lower the time complexity of large-scale GRN inference by combining splitting technology and ordinary differential equation (ODE)-based optimization. The presented algorithm uses the sparsity and modularity of GRNs to split whole large-scale GRNs into many small-scale modular subnetworks. Through the ODE-based optimization of all subnetworks in parallel and their asynchronous communications, we can easily obtain the parameters of the whole network. To test the performance of the proposed approach, we used well-known benchmark datasets from Dialogue for Reverse Engineering Assessments and Methods challenge (DREAM), experimentally determined GRN of Escherichia coli and one published dataset that contains more than 10 thousand genes to compare the proposed approach with several popular algorithms on the same high-performance computing environments in terms of both accuracy and time complexity. The numerical results demonstrate that our parallel algorithm exhibits obvious superiority in inferring large-scale GRNs.

  9. A new asynchronous parallel algorithm for inferring large-scale gene regulatory networks.

    Science.gov (United States)

    Xiao, Xiangyun; Zhang, Wei; Zou, Xiufen

    2015-01-01

    The reconstruction of gene regulatory networks (GRNs) from high-throughput experimental data has been considered one of the most important issues in systems biology research. With the development of high-throughput technology and the complexity of biological problems, we need to reconstruct GRNs that contain thousands of genes. However, when many existing algorithms are used to handle these large-scale problems, they will encounter two important issues: low accuracy and high computational cost. To overcome these difficulties, the main goal of this study is to design an effective parallel algorithm to infer large-scale GRNs based on high-performance parallel computing environments. In this study, we proposed a novel asynchronous parallel framework to improve the accuracy and lower the time complexity of large-scale GRN inference by combining splitting technology and ordinary differential equation (ODE)-based optimization. The presented algorithm uses the sparsity and modularity of GRNs to split whole large-scale GRNs into many small-scale modular subnetworks. Through the ODE-based optimization of all subnetworks in parallel and their asynchronous communications, we can easily obtain the parameters of the whole network. To test the performance of the proposed approach, we used well-known benchmark datasets from Dialogue for Reverse Engineering Assessments and Methods challenge (DREAM), experimentally determined GRN of Escherichia coli and one published dataset that contains more than 10 thousand genes to compare the proposed approach with several popular algorithms on the same high-performance computing environments in terms of both accuracy and time complexity. The numerical results demonstrate that our parallel algorithm exhibits obvious superiority in inferring large-scale GRNs.

  10. Chaotic Artificial Bee Colony Algorithm for System Identification of a Small-Scale Unmanned Helicopter

    Directory of Open Access Journals (Sweden)

    Li Ding

    2015-01-01

    Full Text Available This paper is devoted to developing a chaotic artificial bee colony algorithm (CABC) for the system identification of a small-scale unmanned helicopter state-space model in hover condition. In order to avoid the premature convergence of the traditional artificial bee colony algorithm (ABC), which gets stuck in a local optimum and cannot reach the global optimum, a novel chaotic operator with the characteristics of ergodicity and irregularity was introduced to enhance its performance. With input-output data collected from actual flight experiments, the identification results showed the superiority of CABC over the ABC and the genetic algorithm (GA). Simulations are presented to demonstrate the effectiveness of our proposed algorithm and the accuracy of the identified helicopter model.
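
    The sketch below illustrates the idea of injecting a chaotic sequence into the bee colony's neighbor search: a logistic map replaces the uniform random coefficient in the candidate-solution update of a simplified ABC (employed and scout phases only, no onlooker phase). The logistic map, the test cost function, and all parameters are assumptions for illustration rather than the authors' CABC.

```python
import random

def logistic_map(x):
    """Chaotic sequence generator used in place of uniform random numbers."""
    return 4.0 * x * (1.0 - x)

def chaotic_abc(cost, dim, bounds, n_food=20, iters=200, limit=30):
    lo, hi = bounds
    foods = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    fit = [cost(f) for f in foods]
    trials = [0] * n_food
    chaos = random.random()
    best = min(foods, key=cost)[:]
    for _ in range(iters):
        for i in range(n_food):
            k = random.randrange(n_food)         # random partner food source
            d = random.randrange(dim)
            chaos = logistic_map(chaos)          # chaotic coefficient mapped to [-1, 1]
            phi = 2.0 * chaos - 1.0
            cand = foods[i][:]
            cand[d] = min(hi, max(lo, cand[d] + phi * (cand[d] - foods[k][d])))
            c = cost(cand)
            if c < fit[i]:
                foods[i], fit[i], trials[i] = cand, c, 0
            else:
                trials[i] += 1
            if trials[i] > limit:                # scout: abandon an exhausted source
                foods[i] = [random.uniform(lo, hi) for _ in range(dim)]
                fit[i], trials[i] = cost(foods[i]), 0
        best = min(best, min(foods, key=cost), key=cost)[:]
    return best

# toy identification-style problem: fit 4 parameters minimizing a sphere cost
print(chaotic_abc(lambda p: sum(x * x for x in p), dim=4, bounds=(-5, 5)))
```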

  11. A large-scale application of the Kalman alignment algorithm to the CMS tracker

    International Nuclear Information System (INIS)

    Widl, E; Fruehwirth, R

    2008-01-01

    The Kalman alignment algorithm has been specifically developed to cope with the demands that arise from the specifications of the CMS Tracker. The algorithmic concept is based on the Kalman filter formalism and is designed to avoid the inversion of large matrices. Most notably, the algorithm strikes a balance between conventional global and local track-based alignment algorithms, by restricting the computation of alignment parameters not only to alignable objects hit by the same track, but also to all other alignable objects that are significantly correlated. Nevertheless, this feature also comes with various trade-offs: Mechanisms are needed that affect which alignable objects are significantly correlated and keep track of these correlations. Due to the large amount of alignable objects involved at each update (at least compared to local alignment algorithms), the time spent for retrieving and writing alignment parameters as well as the required user memory becomes a significant factor. The large-scale test presented here applies the Kalman alignment algorithm to the (misaligned) CMS Tracker barrel, and demonstrates the feasibility of the algorithm in a realistic scenario. It is shown that both the computation time and the amount of required user memory are within reasonable bounds, given the available computing resources, and that the obtained results are satisfactory

  12. A new hybrid meta-heuristic algorithm for optimal design of large-scale dome structures

    Science.gov (United States)

    Kaveh, A.; Ilchi Ghazaan, M.

    2018-02-01

    In this article a hybrid algorithm based on a vibrating particles system (VPS) algorithm, multi-design variable configuration (Multi-DVC) cascade optimization, and an upper bound strategy (UBS) is presented for global optimization of large-scale dome truss structures. The new algorithm is called MDVC-UVPS in which the VPS algorithm acts as the main engine of the algorithm. The VPS algorithm is one of the most recent multi-agent meta-heuristic algorithms mimicking the mechanisms of damped free vibration of single degree of freedom systems. In order to handle a large number of variables, cascade sizing optimization utilizing a series of DVCs is used. Moreover, the UBS is utilized to reduce the computational time. Various dome truss examples are studied to demonstrate the effectiveness and robustness of the proposed method, as compared to some existing structural optimization techniques. The results indicate that the MDVC-UVPS technique is a powerful search and optimization method for optimizing structural engineering problems.

  13. Feedback-Based Admission Control for Firm Real-Time Task Allocation with Dynamic Voltage and Frequency Scaling

    Directory of Open Access Journals (Sweden)

    Piotr Dziurzanski

    2018-04-01

    Full Text Available Feedback-based mechanisms can be employed to monitor the performance of Multiprocessor Systems-on-Chips (MPSoCs) and steer the task execution even if the exact knowledge of the workload is unknown a priori. In particular, traditional proportional-integral controllers can be used with firm real-time tasks to either admit them to the processing cores or reject them in order not to violate the timeliness of the already admitted tasks. During periods with a lower computational power demand, dynamic voltage and frequency scaling (DVFS) can be used to reduce the dissipation of energy in the cores while still not violating the tasks’ time constraints. Depending on the workload pattern and weight, platform size and the granularity of DVFS, energy savings can reach even 60% at the cost of a slight performance degradation.
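
    A minimal sketch of how such a controller could be wired together, under assumptions not taken from the paper (a single utilization setpoint, a utilization-budget admission test, and three normalized DVFS levels): a PI term updates the admissible utilization from the measured one, arriving jobs are admitted against that budget, and the lowest frequency covering the admitted load is selected.

```python
class PIAdmissionController:
    """Admit or reject firm real-time jobs so that measured core utilization
    tracks a setpoint, and pick a DVFS level from the admitted load."""

    def __init__(self, setpoint=0.8, kp=0.5, ki=0.1, freqs=(0.4, 0.7, 1.0)):
        self.setpoint, self.kp, self.ki = setpoint, kp, ki
        self.freqs = freqs                     # normalized DVFS levels
        self.integral = 0.0
        self.budget = setpoint                 # admissible utilization this period

    def update(self, measured_util):
        """Call once per control period with the measured utilization."""
        err = self.setpoint - measured_util
        self.integral += err
        self.budget = max(0.0, self.setpoint + self.kp * err + self.ki * self.integral)

    def admit(self, job_util):
        """Admit a job only if it fits in the remaining utilization budget."""
        if job_util <= self.budget:
            self.budget -= job_util
            return True
        return False

    def frequency(self, admitted_util):
        """Lowest DVFS level whose capacity still covers the admitted load."""
        for f in self.freqs:
            if admitted_util <= f:
                return f
        return self.freqs[-1]

ctl = PIAdmissionController()
ctl.update(measured_util=0.6)                  # cores under-utilized -> larger budget
print(ctl.admit(0.3), ctl.frequency(admitted_util=0.5))
```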

  14. Triggering and guiding high-voltage large-scale leader discharges with sub-joule ultrashort laser pulses

    International Nuclear Information System (INIS)

    Pepin, H.; Comtois, D.; Vidal, F.; Chien, C.Y.; Desparois, A.; Johnston, T.W.; Kieffer, J.C.; La Fontaine, B.; Martin, F.; Rizk, F.A.M.; Potvin, C.; Couture, P.; Mercure, H.P.; Bondiou-Clergerie, A.; Lalande, P.; Gallimberti, I.

    2001-01-01

    The triggering and guiding of leader discharges using a plasma channel created by a sub-joule ultrashort laser pulse have been studied in a megavolt large-scale electrode configuration (3-7 m rod-plane air gap). By focusing the laser close to the positive rod electrode it has been possible, with a 400 mJ pulse, to trigger and guide leaders over distances of 3 m, to lower the leader inception voltage by 50%, and to increase the leader velocity by a factor of 10. The dynamics of the breakdown discharges with and without the laser pulse have been analyzed by means of a streak camera and of electric field and current probes. Numerical simulations have successfully reproduced many of the experimental results obtained with and without the presence of the laser plasma channel

  15. Icing Forecasting of High Voltage Transmission Line Using Weighted Least Square Support Vector Machine with Fireworks Algorithm for Feature Selection

    Directory of Open Access Journals (Sweden)

    Tiannan Ma

    2016-12-01

    Full Text Available Accurate forecasting of icing thickness has great significance for ensuring the security and stability of the power grid. In order to improve the forecasting accuracy, this paper proposes an icing forecasting system based on the fireworks algorithm and weighted least square support vector machine (W-LSSVM). The method of the fireworks algorithm is employed to select the proper input features with the purpose of eliminating redundant influence. In addition, the aim of the W-LSSVM model is to train and test the historical data-set with the selected features. The capability of this proposed icing forecasting model and framework is tested through simulation experiments using real-world icing data from the monitoring center of the key laboratory of anti-ice disaster, Hunan, South China. The results show that the proposed W-LSSVM-FA method has a higher prediction accuracy and it may be a promising alternative for icing thickness forecasting.

  16. Temporal development and chemical efficiency of positive streamers in a large scale wire-plate reactor as a function of voltage waveform parameters

    Science.gov (United States)

    Winands, G. J. J.; Liu, Z.; Pemen, A. J. M.; van Heesch, E. J. M.; Yan, K.; van Veldhuizen, E. M.

    2006-07-01

    In this paper a large-scale pulsed corona system is described in which pulse parameters such as pulse rise-time, peak voltage, pulse width and energy per pulse can be varied. The chemical efficiency of the system is determined by measuring ozone production. The temporal and spatial development of the discharge streamers is recorded using an ICCD camera with a shortest exposure time of 5 ns. The camera can be triggered at any moment starting from the time the voltage pulse arrives on the reactor, with an accuracy of less than 1 ns. Measurements were performed on an industrial size wire-plate reactor. The influence of pulse parameters like pulse voltage, DC bias voltage, rise-time and pulse repetition rate on plasma generation was monitored. It was observed that for higher peak voltages, an increase could be seen in the primary streamer velocity, the growth of the primary streamer diameter, the light intensity and the number of streamers per unit length of corona wire. No significant separate influence of DC bias voltage level was observed as long as the total reactor voltage (pulse + DC bias) remained constant and the DC bias voltage remained below the DC corona onset. For those situations in which the plasma appearance changed (e.g. different streamer velocity, diameter, intensity), a change in ozone production was also observed. The best chemical yields were obtained for low voltage (55 kV), low energetic pulses (0.4 J/pulse): 60 g (kWh)-1. For high voltage (86 kV), high energetic pulses (2.3 J/pulse) the yield decreased to approximately 45 g (kWh)-1, still a high value for ozone production in ambient air (RH 42%). The pulse repetition rate has no influence on plasma generation and on chemical efficiency up to 400 pulses per second.

  17. Temporal development and chemical efficiency of positive streamers in a large scale wire-plate reactor as a function of voltage waveform parameters

    Energy Technology Data Exchange (ETDEWEB)

    Winands, G J J [EPS Group, Department of Electrical Engineering, Eindhoven University of Technology, 5600 MB, Eindhoven (Netherlands); Liu, Z [EPS Group, Department of Electrical Engineering, Eindhoven University of Technology, 5600 MB, Eindhoven (Netherlands); Pemen, A J M [EPS Group, Department of Electrical Engineering, Eindhoven University of Technology, 5600 MB, Eindhoven (Netherlands); Heesch, E J M van [EPS Group, Department of Electrical Engineering, Eindhoven University of Technology, 5600 MB, Eindhoven (Netherlands); Yan, K [EPS Group, Department of Electrical Engineering, Eindhoven University of Technology, 5600 MB, Eindhoven (Netherlands); Veldhuizen, E M van [EPG Group, Department of Applied Physics, Eindhoven University of Technology, 5600 MB, Eindhoven (Netherlands)

    2006-07-21

    In this paper a large-scale pulsed corona system is described in which pulse parameters such as pulse rise-time, peak voltage, pulse width and energy per pulse can be varied. The chemical efficiency of the system is determined by measuring ozone production. The temporal and spatial development of the discharge streamers is recorded using an ICCD camera with a shortest exposure time of 5 ns. The camera can be triggered at any moment starting from the time the voltage pulse arrives on the reactor, with an accuracy of less than 1 ns. Measurements were performed on an industrial size wire-plate reactor. The influence of pulse parameters like pulse voltage, DC bias voltage, rise-time and pulse repetition rate on plasma generation was monitored. It was observed that for higher peak voltages, an increase could be seen in the primary streamer velocity, the growth of the primary streamer diameter, the light intensity and the number of streamers per unit length of corona wire. No significant separate influence of DC bias voltage level was observed as long as the total reactor voltage (pulse + DC bias) remained constant and the DC bias voltage remained below the DC corona onset. For those situations in which the plasma appearance changed (e.g. different streamer velocity, diameter, intensity), a change in ozone production was also observed. The best chemical yields were obtained for low voltage (55 kV), low energetic pulses (0.4 J/pulse): 60 g (kWh)⁻¹. For high voltage (86 kV), high energetic pulses (2.3 J/pulse) the yield decreased to approximately 45 g (kWh)⁻¹, still a high value for ozone production in ambient air (RH 42%). The pulse repetition rate has no influence on plasma generation and on chemical efficiency up to 400 pulses per second.

  18. Temporal development and chemical efficiency of positive streamers in a large scale wire-plate reactor as a function of voltage waveform parameters

    International Nuclear Information System (INIS)

    Winands, G J J; Liu, Z; Pemen, A J M; Heesch, E J M van; Yan, K; Veldhuizen, E M van

    2006-01-01

    In this paper a large-scale pulsed corona system is described in which pulse parameters such as pulse rise-time, peak voltage, pulse width and energy per pulse can be varied. The chemical efficiency of the system is determined by measuring ozone production. The temporal and spatial development of the discharge streamers is recorded using an ICCD camera with a shortest exposure time of 5 ns. The camera can be triggered at any moment starting from the time the voltage pulse arrives on the reactor, with an accuracy of less than 1 ns. Measurements were performed on an industrial size wire-plate reactor. The influence of pulse parameters like pulse voltage, DC bias voltage, rise-time and pulse repetition rate on plasma generation was monitored. It was observed that for higher peak voltages, an increase could be seen in the primary streamer velocity, the growth of the primary streamer diameter, the light intensity and the number of streamers per unit length of corona wire. No significant separate influence of DC bias voltage level was observed as long as the total reactor voltage (pulse + DC bias) remained constant and the DC bias voltage remained below the DC corona onset. For those situations in which the plasma appearance changed (e.g. different streamer velocity, diameter, intensity), a change in ozone production was also observed. The best chemical yields were obtained for low voltage (55 kV), low energetic pulses (0.4 J/pulse): 60 g (kWh) -1 . For high voltage (86 kV), high energetic pulses (2.3 J/pulse) the yield decreased to approximately 45 g (kWh) -1 , still a high value for ozone production in ambient air (RH 42%). The pulse repetition rate has no influence on plasma generation and on chemical efficiency up to 400 pulses per second

  19. Investigating Darcy-scale assumptions by means of a multiphysics algorithm

    Science.gov (United States)

    Tomin, Pavel; Lunati, Ivan

    2016-09-01

    Multiphysics (or hybrid) algorithms, which couple Darcy and pore-scale descriptions of flow through porous media in a single numerical framework, are usually employed to decrease the computational cost of full pore-scale simulations or to increase the accuracy of pure Darcy-scale simulations when a simple macroscopic description breaks down. Despite the massive increase in available computational power, the application of these techniques remains limited to core-size problems and upscaling remains crucial for practical large-scale applications. In this context, the Hybrid Multiscale Finite Volume (HMsFV) method, which constructs the macroscopic (Darcy-scale) problem directly by numerical averaging of pore-scale flow, offers not only a flexible framework to efficiently deal with multiphysics problems, but also a tool to investigate the assumptions used to derive macroscopic models and to better understand the relationship between pore-scale quantities and the corresponding macroscale variables. Indeed, by direct comparison of the multiphysics solution with a reference pore-scale simulation, we can assess the validity of the closure assumptions inherent to the multiphysics algorithm and infer the consequences for macroscopic models at the Darcy scale. We show that the definition of the scale ratio based on the geometric properties of the porous medium is well justified only for single-phase flow, whereas in case of unstable multiphase flow the nonlinear interplay between different forces creates complex fluid patterns characterized by new spatial scales, which emerge dynamically and weaken the scale-separation assumption. In general, the multiphysics solution proves very robust even when the characteristic size of the fluid-distribution patterns is comparable with the observation length, provided that all relevant physical processes affecting the fluid distribution are considered. This suggests that macroscopic constitutive relationships (e.g., the relative

  20. Algorithmic Foundation of Spectral Rarefaction for Measuring Satellite Imagery Heterogeneity at Multiple Spatial Scales

    Science.gov (United States)

    Rocchini, Duccio

    2009-01-01

    Measuring heterogeneity in satellite imagery is an important task. Most measures of spectral diversity have been based on Shannon information theory. However, this approach does not inherently address different scales, ranging from local (hereafter referred to as alpha diversity) to global scales (gamma diversity). The aim of this paper is to propose a method for measuring spectral heterogeneity at multiple scales based on rarefaction curves. An algorithmic solution of rarefaction applied to image pixel values (Digital Numbers, DNs) is provided and discussed. PMID:22389600

  1. Algorithms

    Indian Academy of Sciences (India)

    ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineer- ... numerical calculus are as important. We will ...

  2. Cubic scaling algorithms for RPA correlation using interpolative separable density fitting

    Science.gov (United States)

    Lu, Jianfeng; Thicke, Kyle

    2017-12-01

    We present a new cubic scaling algorithm for the calculation of the RPA correlation energy. Our scheme splits up the dependence between the occupied and virtual orbitals in χ0 by use of Cauchy's integral formula. This introduces an additional integral to be carried out, for which we provide a geometrically convergent quadrature rule. Our scheme also uses the newly developed Interpolative Separable Density Fitting algorithm to further reduce the computational cost in a way analogous to that of the Resolution of Identity method.

  3. Control Algorithms for Large-scale Single-axis Photovoltaic Trackers

    Directory of Open Access Journals (Sweden)

    Dorian Schneider

    2012-01-01

    Full Text Available The electrical yield of large-scale photovoltaic power plants can be greatly improved by employing solar trackers. While fixed-tilt superstructures are stationary and immobile, trackers move the PV-module plane in order to optimize its alignment to the sun. This paper introduces control algorithms for single-axis trackers (SAT), including a discussion of optimal alignment and backtracking. The results are used to simulate and compare the electrical yield of fixed-tilt and SAT systems. The proposed algorithms have been field tested, and are in operation in solar parks worldwide.
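
    The record above does not include the tracker control code, so the following is only a simplified geometric sketch of the two ideas it names: ideal (true) single-axis tracking and backtracking to avoid row-to-row shading. It assumes a horizontal north-south axis, a ground coverage ratio gcr = panel width / row pitch, and sun elevation/azimuth supplied by an external solar-position routine; the function name and the example values are illustrative, not the paper's.

        import math

        def tracker_angle(sun_elevation_deg, sun_azimuth_deg, gcr):
            """Rotation angle (degrees from horizontal) of a horizontal N-S axis tracker.

            Hypothetical helper: ideal tracking keeps the panel perpendicular to the
            sun vector projected into the east-west vertical plane; backtracking
            reduces the rotation when adjacent rows would shade each other.
            """
            el = math.radians(sun_elevation_deg)
            az = math.radians(sun_azimuth_deg)      # 0 = north, 90 = east
            if el <= 0.0:
                return 0.0                          # sun below horizon: stow flat
            east = math.cos(el) * math.sin(az)      # sun vector, east component
            up = math.sin(el)                       # sun vector, vertical component
            phi = math.atan2(east, up)              # projected zenith angle of the sun
            theta = phi                             # true-tracking rotation
            # Shading limit: the projected panel width W*|cos(phi - theta)| / cos(phi)
            # must stay below the row pitch P, i.e. |cos(phi - theta)| <= cos(phi)/gcr.
            limit = math.cos(phi) / gcr
            if abs(limit) < 1.0:                    # true tracking would shade the next row
                theta = phi - math.copysign(math.acos(limit), phi)
            return math.degrees(theta)

        if __name__ == "__main__":
            for elev in (5, 15, 30, 60):            # morning sun in the east (az = 90 deg)
                print(elev, round(tracker_angle(elev, 90.0, gcr=0.4), 1))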

  4. Large scale network management. Condition indicators for network stations, high voltage power conductions and cables

    International Nuclear Information System (INIS)

    Eggen, Arnt Ove; Rolfseng, Lars; Langdal, Bjoern Inge

    2006-02-01

    In the Strategic Institute Programme (SIP) 'Electricity Business enters e-business (eBee)', SINTEF Energy Research has developed competency that can help the energy business employ ICT systems and computer technology in an improved way. Large-scale network management is now a reality, characterized by large entities facing increasing demands on efficiency and quality. These goals can only be reached by using ICT systems and computer technology more cleverly than is the case today. At the same time, it is important that the knowledge held by experienced co-workers is consulted when formal rules for evaluations and decisions in ICT systems are developed. In this project, an analytical concept has been developed for evaluating networks based on information held in different ICT systems. The method for estimating the indicators that describe different conditions in a network is general, and indicators can be tailored to different decision levels and network levels, for example network station, transformer circuit, distribution network and regional network. Moreover, the indicators can contain information about technical aspects, economy and HSE. An indicator consists of an indicator name, an indicator value, and an indicator colour based on a traffic-light analogy to indicate a condition or quality for the indicator. The values of one or more indicators give an impression of important conditions in the network and form the basis for deciding where more detailed evaluations have to be conducted before a final decision on, for example, maintenance or renewal is made. A prototype has been developed for testing the new method. The prototype was developed in Excel and is especially designed for analysing transformer circuits in a distribution network. However, the method is a general one, well suited for implementation in a commercial computer system. (ml)
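
    The indicator formulas themselves are not given in this record, so the snippet below is only a schematic illustration of the traffic-light analogy it describes: an indicator with a name, a value and a colour derived from thresholds. The example indicators and threshold values are invented for illustration.

        from dataclasses import dataclass

        @dataclass
        class Indicator:
            name: str
            value: float        # e.g. a normalized condition score in [0, 1]
            green_max: float    # value up to which the indicator is "green"
            yellow_max: float   # value up to which it is "yellow", otherwise "red"

            @property
            def colour(self) -> str:
                if self.value <= self.green_max:
                    return "green"
                if self.value <= self.yellow_max:
                    return "yellow"
                return "red"

        # Hypothetical indicators for one network station; thresholds are illustrative only.
        station = [
            Indicator("transformer ageing", 0.25, green_max=0.3, yellow_max=0.7),
            Indicator("cable fault history", 0.80, green_max=0.3, yellow_max=0.7),
        ]
        for ind in station:
            print(f"{ind.name}: {ind.value:.2f} -> {ind.colour}")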

  5. MREG V1.1 : a multi-scale image registration algorithm for SAR applications.

    Energy Technology Data Exchange (ETDEWEB)

    Eichel, Paul H.

    2013-08-01

    MREG V1.1 is the sixth generation SAR image registration algorithm developed by the Signal Processing & Technology Department for Synthetic Aperture Radar applications. Like its predecessor algorithm REGI, it employs a powerful iterative multi-scale paradigm to achieve the competing goals of sub-pixel registration accuracy and the ability to handle large initial offsets. Since it is not model based, it allows for high fidelity tracking of spatially varying terrain-induced misregistration. Since it does not rely on image domain phase, it is equally adept at coherent and noncoherent image registration. This document provides a brief history of the registration processors developed by Dept. 5962 leading up to MREG V1.1, a full description of the signal processing steps involved in the algorithm, and a user's manual with application specific recommendations for CCD, TwoColor MultiView, and SAR stereoscopy.

  6. Algorithms

    Indian Academy of Sciences (India)

    algorithm design technique called 'divide-and-conquer'. One of ... Turtle graphics, September. 1996. 5. ... whole list named 'PO' is a pointer to the first element of the list; ..... Program for computing matrices X and Y and placing the result in C *).

  7. Algorithms

    Indian Academy of Sciences (India)

    algorithm that it is implicitly understood that we know how to generate the next natural ..... Explicit comparisons are made in line (1) where maximum and minimum is ... It can be shown that the function T(n) = 3/2n -2 is the solution to the above ...

  8. Road network selection for small-scale maps using an improved centrality-based algorithm

    Directory of Open Access Journals (Sweden)

    Roy Weiss

    2014-12-01

    Full Text Available The road network is one of the key feature classes in topographic maps and databases. In the task of deriving road networks for products at smaller scales, road network selection forms a prerequisite for all other generalization operators, and is thus a fundamental operation in the overall process of topographic map and database production. The objective of this work was to develop an algorithm for automated road network selection from a large-scale (1:10,000) to a small-scale database (1:200,000). The project was pursued in collaboration with swisstopo, the national mapping agency of Switzerland, with generic mapping requirements in mind. Preliminary experiments suggested that a selection algorithm based on betweenness centrality performed best for this purpose, yet also exposed problems. The main contribution of this paper thus consists of four extensions that address deficiencies of the basic centrality-based algorithm and lead to a significant improvement of the results. The first two extensions improve the formation of strokes concatenating the road segments, which is crucial since strokes provide the foundation upon which the network centrality measure is computed. Thus, the first extension ensures that roundabouts are detected and collapsed, thus avoiding interruptions of strokes by roundabouts, while the second introduces additional semantics in the process of stroke formation, allowing longer and more plausible strokes to be built. The third extension detects areas of high road density (i.e., urban areas) using density-based clustering and then locally increases the threshold of the centrality measure used to select road segments, such that more thinning takes place in those areas. Finally, since the basic algorithm tends to create dead-ends—which however are not tolerated in small-scale maps—the fourth extension reconnects these dead-ends to the main network, searching for the best path in the main heading of the dead-end.
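
    As a minimal illustration of the centrality-based selection step that the four extensions build on, the sketch below ranks road segments of a toy graph by edge betweenness centrality (via networkx) and keeps the most central ones. Stroke formation, roundabout collapsing, density-based thresholding and dead-end reconnection from the paper are not reproduced; the graph, attribute name and keep fraction are illustrative.

        import networkx as nx

        def select_roads(G: nx.Graph, keep_fraction: float = 0.5) -> nx.Graph:
            """Keep the most 'between' road segments (edges) of a road graph.

            A minimal stand-in for centrality-based selection: compute edge betweenness
            centrality, rank the edges, and retain the top fraction for the smaller scale.
            """
            centrality = nx.edge_betweenness_centrality(G, weight="length")
            ranked = sorted(centrality, key=centrality.get, reverse=True)
            keep = ranked[: max(1, int(keep_fraction * len(ranked)))]
            H = nx.Graph()
            H.add_nodes_from(G.nodes(data=True))
            H.add_edges_from((u, v, G.edges[u, v]) for u, v in keep)
            return H

        # Toy example: a small grid "road network" with unit segment lengths.
        G = nx.grid_2d_graph(4, 4)
        nx.set_edge_attributes(G, 1.0, "length")
        H = select_roads(G, keep_fraction=0.4)
        print(f"kept {H.number_of_edges()} of {G.number_of_edges()} segments")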

  9. Clustering for Different Scales of Measurement - the Gap-Ratio Weighted K-means Algorithm

    OpenAIRE

    Guérin, Joris; Gibaru, Olivier; Thiery, Stéphane; Nyiri, Eric

    2017-01-01

    This paper describes a method for clustering data that are spread out over large regions and whose dimensions are on different scales of measurement. Such an algorithm was developed to implement a robotics application consisting of sorting and storing objects in an unsupervised way. The toy dataset used to validate this application consists of Lego bricks of different shapes and colors. The uncontrolled lighting conditions together with the use of RGB color features, respectively involve data...
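
    The gap-ratio weights are defined in the paper and are not reproduced here; the sketch below only shows the generic pattern of clustering features that live on different measurement scales by weighting each dimension before a standard k-means run (scikit-learn assumed). The range-based weights and the toy data are illustrative stand-ins.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        # Toy data: column 0 in metres (~0-1), column 1 in millimetres (~0-1000)
        X = np.vstack([
            rng.normal([0.2, 200.0], [0.05, 30.0], size=(50, 2)),
            rng.normal([0.8, 800.0], [0.05, 30.0], size=(50, 2)),
        ])

        # Per-dimension weights: here simply the inverse feature range, so that no
        # single unit system dominates the Euclidean distance used by k-means.
        weights = 1.0 / (X.max(axis=0) - X.min(axis=0))
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X * weights)
        print(np.bincount(labels))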

  10. Time scale algorithm: Definition of ensemble time and possible uses of the Kalman filter

    Science.gov (United States)

    Tavella, Patrizia; Thomas, Claudine

    1990-01-01

    The comparative study of two time scale algorithms, devised to satisfy different but related requirements, is presented. They are ALGOS(BIPM), producing the international reference TAI at the Bureau International des Poids et Mesures, and AT1(NIST), generating the real-time time scale AT1 at the National Institute of Standards and Technology. In each case, the time scale is a weighted average of clock readings, but the weight determination and the frequency prediction are different because they are adapted to different purposes. The possibility of using a mathematical tool, such as the Kalman filter, together with the definition of the time scale as a weighted average, is also analyzed. Results obtained by simulation are presented.
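
    As a purely illustrative sketch of the 'weighted average of clock readings' idea shared by both algorithms, the snippet below combines simulated clock offsets with weights inversely proportional to each clock's variance. The actual ALGOS and AT1 weighting and frequency-prediction rules are more elaborate and are not reproduced; all numbers are invented.

        import numpy as np

        rng = np.random.default_rng(1)
        # Simulated clock readings: offsets from the (unknown) ideal time, in nanoseconds,
        # with clock-dependent noise levels.
        sigmas = np.array([1.0, 2.0, 5.0, 1.5, 3.0])
        readings = rng.normal(0.0, sigmas)

        # Ensemble time as a weighted mean, with weights ~ 1/sigma^2 (normalized).
        weights = 1.0 / sigmas**2
        weights /= weights.sum()
        ensemble_time = np.dot(weights, readings)
        print("clock offsets:", np.round(readings, 2))
        print("ensemble time estimate:", round(ensemble_time, 3))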

  11. Large-Scale Modeling of Epileptic Seizures: Scaling Properties of Two Parallel Neuronal Network Simulation Algorithms

    Directory of Open Access Journals (Sweden)

    Lorenzo L. Pesce

    2013-01-01

    Full Text Available Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.

  12. Large-scale modeling of epileptic seizures: scaling properties of two parallel neuronal network simulation algorithms.

    Science.gov (United States)

    Pesce, Lorenzo L; Lee, Hyong C; Hereld, Mark; Visser, Sid; Stevens, Rick L; Wildeman, Albert; van Drongelen, Wim

    2013-01-01

    Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.

  13. Temporal development and chemical efficiency of positive streamers in a large scale wire-plate reactor as a function of voltage waveform parameters

    NARCIS (Netherlands)

    Winands, G.J.J.; Liu, Zhen; Pemen, A.J.M.; Heesch, van E.J.M.; Yan, K.; Veldhuizen, van E.M.

    2006-01-01

    In this paper a large-scale pulsed corona system is described in which pulse parameters such as pulse rise-time, peak voltage, pulse width and energy per pulse can be varied. The chemical efficiency of the system is determined by measuring ozone production. The temporal and spatial development of

  14. Diffuse mode and diffuse-to-filamentary transition in a high pressure nanosecond scale corona discharge under high voltage

    International Nuclear Information System (INIS)

    Tardiveau, P; Moreau, N; Bentaleb, S; Postel, C; Pasquiers, S

    2009-01-01

    The dynamics of a point-to-plane corona discharge induced in high pressure air under nanosecond scale high overvoltage is investigated. The electrical and optical properties of the discharge can be described in space and time with fast and precise current measurements coupled to gated and intensified imaging. Under atmospheric pressure, the discharge exhibits a diffuse pattern like a multielectron avalanche propagating through a direct field ionization mechanism. The diffuse regime can exist since the voltage rise time is much shorter than the characteristic time of the field screening effects, and as long as the local field is higher than the critical ionization field in air. As one of these conditions is not fulfilled, the discharge turns into a multi-channel regime and the diffuse-to-filamentary transition strongly depends on the overvoltage, the point-to-plane gap length and the pressure. When pressure is increased above atmospheric pressure, the diffuse stage and its transition to streamers seem to satisfy similarity rules as the key parameter is the reduced critical ionization field only. However, above 3 bar, neither diffuse avalanche nor streamer filaments are observed but a kind of streamer-leader regime, due to the fact that mechanisms such as photoionization and heat diffusion are not similar to pressure.

  15. Algorithms

    Indian Academy of Sciences (India)

    will become clear in the next article when we discuss a simple logo like programming language. ... Rod B may be used as an auxiliary store. The problem is to find an algorithm which performs this task. ... No disks are moved from A to Busing C as auxiliary rod. • move _disk (A, C);. (No + l)th disk is moved from A to C directly ...

  16. A New Method Based On Modified Shuffled Frog Leaping Algorithm In Order To Solve Nonlinear Large Scale Problem

    Directory of Open Access Journals (Sweden)

    Aliasghar Baziar

    2015-03-01

    Full Text Available Abstract In order to handle large-scale problems, this study uses the shuffled frog leaping algorithm. This algorithm is an optimization method based on natural memetics, to which a new two-phase modification is applied in order to search the problem space more effectively. The suggested algorithm is evaluated by comparison with some well-known algorithms on several benchmark optimization problems. The simulation results clearly show the superiority of this algorithm over other well-known methods in the area.

  17. A New Pose Estimation Algorithm Using a Perspective-Ray-Based Scaled Orthographic Projection with Iteration.

    Directory of Open Access Journals (Sweden)

    Pengfei Sun

    Full Text Available Pose estimation aims at measuring the position and orientation of a calibrated camera using known image features. The pinhole model is the dominant camera model in this field. However, the imaging precision of this model is not accurate enough for an advanced pose estimation algorithm. In this paper, a new camera model, called the incident ray tracking model, is introduced. More importantly, an advanced pose estimation algorithm based on the perspective ray in the new camera model is proposed. The perspective ray, determined by two positioning points, is an abstract mathematical equivalent of the incident ray. In the proposed pose estimation algorithm, called perspective-ray-based scaled orthographic projection with iteration (PRSOI), an approximate ray-based projection is calculated by a linear system and refined by iteration. Experiments on the PRSOI have been conducted, and the results demonstrate that it is of high accuracy in six degrees of freedom (DOF) motion. It also outperforms three other state-of-the-art algorithms in terms of accuracy in the comparison experiment.

  18. Large-Scale Portfolio Optimization Using Multiobjective Evolutionary Algorithms and Preselection Methods

    Directory of Open Access Journals (Sweden)

    B. Y. Qu

    2017-01-01

    Full Text Available Portfolio optimization problems involve the selection of different assets to invest in so as to maximize the overall return and minimize the overall risk simultaneously. The complexity of the optimal asset allocation problem increases with an increase in the number of assets available to select from for investing. The optimization problem becomes computationally challenging when there are more than a few hundred assets to select from. To reduce the complexity of large-scale portfolio optimization, two asset preselection procedures that consider the return and risk of individual assets and their pairwise correlations to remove assets that may not potentially be selected into any portfolio are proposed in this paper. With these asset preselection methods, the number of assets considered for inclusion in a portfolio can be increased to thousands. To test the effectiveness of the proposed methods, a Normalized Multiobjective Evolutionary Algorithm based on Decomposition (NMOEA/D) and several other commonly used multiobjective evolutionary algorithms are applied and compared. Six experiments with different settings are carried out. The experimental results show that with the proposed methods the simulation time is reduced while return-risk trade-off performances are significantly improved. Meanwhile, the NMOEA/D is able to outperform the other compared algorithms on all experiments according to the comparative analysis.
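
    The sketch below mimics only the preselection idea described above: drop assets with a poor individual return-to-risk score, then drop the weaker member of any highly correlated pair. The thresholds and random data are illustrative, and the paper's exact preselection rules and the NMOEA/D optimizer are not reproduced.

        import numpy as np

        rng = np.random.default_rng(2)
        n_assets, n_days = 200, 500
        returns = rng.normal(0.0005, 0.01, size=(n_days, n_assets))

        mean_r = returns.mean(axis=0)
        risk = returns.std(axis=0)
        score = mean_r / risk                      # simple return-per-risk score

        # Step 1: keep the better half of the assets by individual return/risk.
        keep = np.argsort(score)[-n_assets // 2:]

        # Step 2: among the kept assets, drop the weaker member of any highly
        # correlated pair (|rho| > 0.95) to remove near-duplicates.
        corr = np.corrcoef(returns[:, keep], rowvar=False)
        selected = []
        for idx in sorted(range(len(keep)), key=lambda i: score[keep[i]], reverse=True):
            if all(abs(corr[idx, j]) <= 0.95 for j in selected):
                selected.append(idx)
        preselected = keep[selected]
        print(f"{len(preselected)} assets passed preselection out of {n_assets}")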

  19. On the rejection-based algorithm for simulation and analysis of large-scale reaction networks

    Energy Technology Data Exchange (ETDEWEB)

    Thanh, Vo Hong, E-mail: vo@cosbi.eu [The Microsoft Research-University of Trento Centre for Computational and Systems Biology, Piazza Manifattura 1, Rovereto 38068 (Italy); Zunino, Roberto, E-mail: roberto.zunino@unitn.it [Department of Mathematics, University of Trento, Trento (Italy); Priami, Corrado, E-mail: priami@cosbi.eu [The Microsoft Research-University of Trento Centre for Computational and Systems Biology, Piazza Manifattura 1, Rovereto 38068 (Italy); Department of Mathematics, University of Trento, Trento (Italy)

    2015-06-28

    Stochastic simulation for in silico studies of large biochemical networks requires a great amount of computational time. We recently proposed a new exact simulation algorithm, called the rejection-based stochastic simulation algorithm (RSSA) [Thanh et al., J. Chem. Phys. 141(13), 134116 (2014)], to improve simulation performance by postponing and collapsing as much as possible the propensity updates. In this paper, we analyze the performance of this algorithm in detail, and improve it for simulating large-scale biochemical reaction networks. We also present a new algorithm, called simultaneous RSSA (SRSSA), which generates many independent trajectories simultaneously for the analysis of the biochemical behavior. SRSSA improves simulation performance by utilizing a single data structure across simulations to select reaction firings and forming trajectories. The memory requirement for building and storing the data structure is thus independent of the number of trajectories. The updating of the data structure when needed is performed collectively in a single operation across the simulations. The trajectories generated by SRSSA are exact and independent of each other by exploiting the rejection-based mechanism. We test our new improvement on real biological systems with a wide range of reaction networks to demonstrate its applicability and efficiency.
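
    The core RSSA trick of selecting a candidate reaction against propensity upper bounds and accepting it with a rejection test can be pictured with the thinning-style single-step sketch below. It assumes valid upper bounds a_ub are already available for the current state; the published algorithm additionally maintains fluctuation intervals for the state and postpones propensity updates, which is where its speed-up comes from, and none of that bookkeeping is shown here.

        import numpy as np

        def rssa_like_step(x, t, a_ub, propensity, rng):
            """One reaction firing via selection against upper bounds plus rejection.

            a_ub       : array of propensity upper bounds valid for the current state
            propensity : propensity(j, x) -> exact propensity of reaction j in state x,
                         evaluated lazily only for the selected candidate
            Returns (j, t_new): index of the accepted reaction and the updated time.
            """
            a0_ub = a_ub.sum()
            while True:
                t += rng.exponential(1.0 / a0_ub)            # candidate firing time
                j = rng.choice(len(a_ub), p=a_ub / a0_ub)    # candidate reaction
                if rng.random() * a_ub[j] <= propensity(j, x):
                    return j, t                              # accepted: fire reaction j

        # Toy usage: two mass-action reactions with loose (20% inflated) upper bounds.
        rng = np.random.default_rng(3)
        x = np.array([100, 50])
        a_ub = np.array([1.2 * 0.1 * 100, 1.2 * 0.05 * 50])
        prop = lambda j, state: (0.1 * state[0]) if j == 0 else (0.05 * state[1])
        print(rssa_like_step(x, 0.0, a_ub, prop, rng))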

  20. Parallel supercomputing: Advanced methods, algorithms, and software for large-scale linear and nonlinear problems

    Energy Technology Data Exchange (ETDEWEB)

    Carey, G.F.; Young, D.M.

    1993-12-31

    The program outlined here is directed to research on methods, algorithms, and software for distributed parallel supercomputers. Of particular interest are finite element methods and finite difference methods together with sparse iterative solution schemes for scientific and engineering computations of very large-scale systems. Both linear and nonlinear problems will be investigated. In the nonlinear case, applications with bifurcation to multiple solutions will be considered using continuation strategies. The parallelizable numerical methods of particular interest are a family of partitioning schemes embracing domain decomposition, element-by-element strategies, and multi-level techniques. The methods will be further developed incorporating parallel iterative solution algorithms with associated preconditioners in parallel computer software. The schemes will be implemented on distributed memory parallel architectures such as the CRAY MPP, Intel Paragon, the NCUBE3, and the Connection Machine. We will also consider other new architectures such as the Kendall-Square (KSQ) and proposed machines such as the TERA. The applications will focus on large-scale three-dimensional nonlinear flow and reservoir problems with strong convective transport contributions. These are legitimate grand challenge class computational fluid dynamics (CFD) problems of significant practical interest to DOE. The methods developed and algorithms will, however, be of wider interest.

  1. Parallel Implementation and Scaling of an Adaptive Mesh Discrete Ordinates Algorithm for Transport

    International Nuclear Information System (INIS)

    Howell, L H

    2004-01-01

    Block-structured adaptive mesh refinement (AMR) uses a mesh structure built up out of locally-uniform rectangular grids. In the BoxLib parallel framework used by the Raptor code, each processor operates on one or more of these grids at each refinement level. The decomposition of the mesh into grids and the distribution of these grids among processors may change every few timesteps as a calculation proceeds. Finer grids use smaller timesteps than coarser grids, requiring additional work to keep the system synchronized and ensure conservation between different refinement levels. In a paper for NECDC 2002 I presented preliminary results on implementation of parallel transport sweeps on the AMR mesh, conjugate gradient acceleration, accuracy of the AMR solution, and scalar speedup of the AMR algorithm compared to a uniform fully-refined mesh. This paper continues with a more in-depth examination of the parallel scaling properties of the scheme, both in single-level and multi-level calculations. Both sweeping and setup costs are considered. The algorithm scales with acceptable performance to several hundred processors. Trends suggest, however, that this is the limit for efficient calculations with traditional transport sweeps, and that modifications to the sweep algorithm will be increasingly needed as job sizes in the thousands of processors become common

  2. Parallel Quasi Newton Algorithms for Large Scale Non Linear Unconstrained Optimization

    International Nuclear Information System (INIS)

    Rahman, M. A.; Basarudin, T.

    1997-01-01

    This paper discusses the Quasi-Newton (QN) method for solving non-linear unconstrained minimization problems. One important aspect of the QN method is the choice of the matrix Hk, which must be positive definite and satisfy the QN condition. Our interest here is in parallel QN methods suited to the solution of large-scale optimization problems. QN methods become less attractive for large-scale problems because of their storage and computational requirements; however, it is often the case that the Hessian is a sparse matrix. In this paper we include a mechanism for reducing the cost of the Hessian update while preserving the Hessian properties. One major motivation for our research is that a QN method may work well on certain types of minimization problems, but its efficiency degenerates when it is applied to other categories of problems. For this reason, we use an algorithm containing several direction strategies which are processed in parallel. We parallelize the algorithm by exploring different search directions generated by the various QN updates during the minimization process, and different line search strategies are employed simultaneously when locating the minimum along each direction. The code of the algorithm is written in the Occam 2 language and runs on a transputer machine.
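
    For orientation, the classical BFGS update that quasi-Newton schemes of this kind build on is recalled below; the parallel multi-direction and multi-line-search machinery described in the abstract is not captured by this single formula.

        % BFGS update of the Hessian approximation B_k,
        % with s_k = x_{k+1} - x_k and y_k = \nabla f(x_{k+1}) - \nabla f(x_k):
        B_{k+1} = B_k - \frac{B_k s_k s_k^{\top} B_k}{s_k^{\top} B_k s_k}
                      + \frac{y_k y_k^{\top}}{y_k^{\top} s_k},
        \qquad B_{k+1} s_k = y_k \quad \text{(secant condition)}.
        % B_{k+1} stays positive definite whenever B_k is positive definite and the
        % curvature condition y_k^{\top} s_k > 0 holds (e.g. enforced by a Wolfe line search).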

  3. Two-dimensional pencil beam scaling: an improved proton dose algorithm for heterogeneous media

    International Nuclear Information System (INIS)

    Szymanowski, Hanitra; Oelfke, Uwe

    2002-01-01

    New dose delivery techniques with proton beams, such as beam spot scanning or raster scanning, require fast and accurate dose algorithms which can be applied for treatment plan optimization in clinically acceptable timescales. The clinically required accuracy is particularly difficult to achieve for the irradiation of complex, heterogeneous regions of the patient's anatomy. Currently applied fast pencil beam dose calculations based on the standard inhomogeneity correction of pathlength scaling often cannot provide the accuracy required for clinically acceptable dose distributions. This could be achieved with sophisticated Monte Carlo simulations which are still unacceptably time consuming for use as dose engines in optimization calculations. We therefore present a new algorithm for proton dose calculations which aims to resolve the inherent problem between calculation speed and required clinical accuracy. First, a detailed derivation of the new concept, which is based on an additional scaling of the lateral proton fluence is provided. Then, the newly devised two-dimensional (2D) scaling method is tested for various geometries of different phantom materials. These include standard biological tissues such as bone, muscle and fat as well as air. A detailed comparison of the new 2D pencil beam scaling with the current standard pencil beam approach and Monte Carlo simulations, performed with GEANT, is presented. It was found that the new concept proposed allows calculation of absorbed dose with an accuracy almost equal to that achievable with Monte Carlo simulations while requiring only modestly increased calculation times in comparison to the standard pencil beam approach. It is believed that this new proton dose algorithm has the potential to significantly improve the treatment planning outcome for many clinical cases encountered in highly conformal proton therapy. (author)

  4. A novel artificial fish swarm algorithm for solving large-scale reliability-redundancy application problem.

    Science.gov (United States)

    He, Qiang; Hu, Xiangtao; Ren, Hong; Zhang, Hongqi

    2015-11-01

    A novel artificial fish swarm algorithm (NAFSA) is proposed for solving large-scale reliability-redundancy allocation problem (RAP). In NAFSA, the social behaviors of fish swarm are classified in three ways: foraging behavior, reproductive behavior, and random behavior. The foraging behavior designs two position-updating strategies. And, the selection and crossover operators are applied to define the reproductive ability of an artificial fish. For the random behavior, which is essentially a mutation strategy, the basic cloud generator is used as the mutation operator. Finally, numerical results of four benchmark problems and a large-scale RAP are reported and compared. NAFSA shows good performance in terms of computational accuracy and computational efficiency for large scale RAP. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  5. Optimal Siting and Sizing of Multiple DG Units for the Enhancement of Voltage Profile and Loss Minimization in Transmission Systems Using Nature Inspired Algorithms

    Directory of Open Access Journals (Sweden)

    Ambika Ramamoorthy

    2016-01-01

    Full Text Available The power grid becomes smarter nowadays along with technological development. The benefits of the smart grid can be enhanced through the integration of renewable energy sources. In this paper, several studies have been made to reconfigure a conventional network into a smart grid. Amongst all the renewable sources, solar power takes the prominent position due to its availability in abundance. The proposed methodology presented in this paper is aimed at minimizing network power losses and at improving the voltage stability within the framework of system operation and security constraints in a transmission system. Locations and capacities of DGs have a significant impact on the system losses in a transmission system. In this paper, combined nature inspired algorithms are presented for optimal location and sizing of DGs. This paper proposes a two-step optimization technique in order to integrate DG. In a first step, the best size of DG is determined through PSO metaheuristics and the results obtained through PSO are tested for reverse power flow by a negative load approach to find possible bus locations. Then, the optimal location is found by Loss Sensitivity Factor (LSF) and weak (WK) bus methods and the results are compared. In a second step, optimal sizing of DGs is determined by PSO, GSA, and hybrid PSOGSA algorithms. Apart from optimal sizing and siting of DGs, different scenarios with number of DGs (3, 4, and 5) and PQ capacities of DGs (P alone, Q alone, and P and Q both) are also analyzed and the results are discussed in this paper. A detailed performance analysis is carried out on the IEEE 30-bus system to demonstrate the effectiveness of the proposed methodology.

  6. Optimal Siting and Sizing of Multiple DG Units for the Enhancement of Voltage Profile and Loss Minimization in Transmission Systems Using Nature Inspired Algorithms.

    Science.gov (United States)

    Ramamoorthy, Ambika; Ramachandran, Rajeswari

    2016-01-01

    The power grid becomes smarter nowadays along with technological development. The benefits of the smart grid can be enhanced through the integration of renewable energy sources. In this paper, several studies have been made to reconfigure a conventional network into a smart grid. Amongst all the renewable sources, solar power takes the prominent position due to its availability in abundance. The proposed methodology presented in this paper is aimed at minimizing network power losses and at improving the voltage stability within the framework of system operation and security constraints in a transmission system. Locations and capacities of DGs have a significant impact on the system losses in a transmission system. In this paper, combined nature inspired algorithms are presented for optimal location and sizing of DGs. This paper proposes a two-step optimization technique in order to integrate DG. In a first step, the best size of DG is determined through PSO metaheuristics and the results obtained through PSO are tested for reverse power flow by a negative load approach to find possible bus locations. Then, the optimal location is found by Loss Sensitivity Factor (LSF) and weak (WK) bus methods and the results are compared. In a second step, optimal sizing of DGs is determined by PSO, GSA, and hybrid PSOGSA algorithms. Apart from optimal sizing and siting of DGs, different scenarios with number of DGs (3, 4, and 5) and PQ capacities of DGs (P alone, Q alone, and P and Q both) are also analyzed and the results are discussed in this paper. A detailed performance analysis is carried out on the IEEE 30-bus system to demonstrate the effectiveness of the proposed methodology.

  7. Mean field theory of EM algorithm for Bayesian grey scale image restoration

    International Nuclear Information System (INIS)

    Inoue, Jun-ichi; Tanaka, Kazuyuki

    2003-01-01

    The EM algorithm for the Bayesian grey scale image restoration is investigated in the framework of the mean field theory. Our model system is identical to the infinite range random field Q-Ising model. The maximum marginal likelihood method is applied to the determination of hyper-parameters. We calculate both the data-averaged mean square error between the original image and its maximizer of posterior marginal estimate, and the data-averaged marginal likelihood function exactly. After evaluating the hyper-parameter dependence of the data-averaged marginal likelihood function, we derive the EM algorithm which updates the hyper-parameters to obtain the maximum likelihood estimate analytically. The time evolutions of the hyper-parameters and so-called Q function are obtained. The relation between the speed of convergence of the hyper-parameters and the shape of the Q function is explained from the viewpoint of dynamics

  8. Picosecond scale experimental verification of a globally convergent algorithm for a coefficient inverse problem

    International Nuclear Information System (INIS)

    Klibanov, Michael V; Pantong, Natee; Fiddy, Michael A; Schenk, John; Beilina, Larisa

    2010-01-01

    A globally convergent algorithm by the first and third authors for a 3D hyperbolic coefficient inverse problem is verified on experimental data measured in the picosecond scale regime. Quantifiable images of dielectric abnormalities are obtained. The total measurement timing of a 100 ps pulse for one detector location was 1.2 ns with a 20 ps (=0.02 ns) time step between two consecutive readings. Blind tests have consistently demonstrated an accurate imaging of refractive indexes of dielectric abnormalities. At the same time, it is shown that a modified gradient method is inapplicable to this kind of experimental data. This inverse algorithm is also applicable to other types of imaging modalities, e.g. acoustics. Potential applications are in airport security, imaging of land mines, imaging of defects in non-destructive testing, etc.

  9. Currency recognition using a smartphone: Comparison between color SIFT and gray scale SIFT algorithms

    Directory of Open Access Journals (Sweden)

    Iyad Abu Doush

    2017-10-01

    Full Text Available Banknote recognition means classifying the currency (coin and paper) to the correct class. In this paper, we developed a dataset for Jordanian currency. After that we applied an automatic mobile recognition system using a smartphone on the dataset using the scale-invariant feature transform (SIFT) algorithm. This is the first attempt, to the best of the authors' knowledge, to recognize both coins and paper banknotes on a smartphone using the SIFT algorithm. SIFT has been developed to be the most robust and efficient local invariant feature descriptor. Color provides significant information and important values in the object description process and matching tasks, and many objects cannot be classified correctly without their color features. We compared two approaches: a colored local invariant feature descriptor (color SIFT) approach and a gray image local invariant feature descriptor (gray SIFT) approach. The evaluation results show that the color SIFT approach outperforms the gray SIFT approach in terms of processing time and accuracy.
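
    A minimal sketch of the kind of SIFT matching pipeline such a recognizer rests on is shown below, assuming OpenCV (version 4.4 or later, where SIFT is in the main module) and two hypothetical image files. The Jordanian-currency dataset, the colour-SIFT variant and the final classification step are not reproduced.

        import cv2

        # Hypothetical file names; replace with a query photo and a reference banknote image.
        query = cv2.imread("query_banknote.jpg", cv2.IMREAD_GRAYSCALE)
        reference = cv2.imread("reference_note.jpg", cv2.IMREAD_GRAYSCALE)

        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(query, None)
        kp2, des2 = sift.detectAndCompute(reference, None)

        # Match descriptors and apply Lowe's ratio test to keep distinctive matches.
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        matches = matcher.knnMatch(des1, des2, k=2)
        good = []
        for pair in matches:
            if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
                good.append(pair[0])

        # A simple decision rule: the reference class with the most good matches wins.
        print(f"{len(good)} good matches against this reference note")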

  10. Base Station Placement Algorithm for Large-Scale LTE Heterogeneous Networks.

    Science.gov (United States)

    Lee, Seungseob; Lee, SuKyoung; Kim, Kyungsoo; Kim, Yoon Hyuk

    2015-01-01

    Data traffic demands in cellular networks today are increasing at an exponential rate, giving rise to the development of heterogeneous networks (HetNets), in which small cells complement traditional macro cells by extending coverage to indoor areas. However, the deployment of small cells as parts of HetNets creates a key challenge for operators' careful network planning. In particular, massive and unplanned deployment of base stations can cause high interference, resulting in highly degrading network performance. Although different mathematical modeling and optimization methods have been used to approach various problems related to this issue, most traditional network planning models are ill-equipped to deal with HetNet-specific characteristics due to their focus on classical cellular network designs. Furthermore, increased wireless data demands have driven mobile operators to roll out large-scale networks of small long term evolution (LTE) cells. Therefore, in this paper, we aim to derive an optimum network planning algorithm for large-scale LTE HetNets. Recently, attempts have been made to apply evolutionary algorithms (EAs) to the field of radio network planning, since they are characterized as global optimization methods. Yet, EA performance often deteriorates rapidly with the growth of search space dimensionality. To overcome this limitation when designing optimum network deployments for large-scale LTE HetNets, we attempt to decompose the problem and tackle its subcomponents individually. Particularly noting that some HetNet cells have strong correlations due to inter-cell interference, we propose a correlation grouping approach in which cells are grouped together according to their mutual interference. Both the simulation and analytical results indicate that the proposed solution outperforms the random-grouping based EA as well as an EA that detects interacting variables by monitoring the changes in the objective function algorithm in terms of system

  11. Gravitation and Special Relativity from Compton Wave Interactions at the Planck Scale: An Algorithmic Approach

    Science.gov (United States)

    Blackwell, William C., Jr.

    2004-01-01

    In this paper space is modeled as a lattice of Compton wave oscillators (CWOs) of near-Planck size. It is shown that gravitation and special relativity emerge from the interaction between particles' Compton waves. To develop this CWO model an algorithmic approach was taken, incorporating simple rules of interaction at the Planck scale developed using well known physical laws. This technique naturally leads to Newton's law of gravitation and a new form of doubly special relativity. The model is in apparent agreement with the holographic principle, and it predicts a cutoff energy for ultrahigh-energy cosmic rays that is consistent with observational data.

  12. A Temporal Domain Decomposition Algorithmic Scheme for Large-Scale Dynamic Traffic Assignment

    Directory of Open Access Journals (Sweden)

    Eric J. Nava

    2012-03-01

    This paper presents a temporal decomposition scheme for large spatial- and temporal-scale dynamic traffic assignment, in which the entire analysis period is divided into Epochs. Vehicle assignment is performed sequentially in each Epoch, thus improving the model scalability and confining the peak run-time memory requirement regardless of the total analysis period. A proposed self-tuning scheme adaptively searches for the run-time-optimal Epoch setting during iterations regardless of the characteristics of the modeled network. Extensive numerical experiments confirm the promising performance of the proposed algorithmic schemes.

  13. An Efficient Addressing Scheme and Its Routing Algorithm for a Large-Scale Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    Choi Jeonghee

    2008-01-01

    Full Text Available Abstract So far, various addressing and routing algorithms have been extensively studied for wireless sensor networks (WSNs), but many of them were limited to covering less than hundreds of sensor nodes. This is largely due to stringent requirements for fully distributed coordination among sensor nodes, leading to the wasteful use of the available address space. As there is a growing need for large-scale WSNs, it will be extremely challenging to support more than thousands of nodes using the existing standards. Moreover, it is highly unlikely that the existing standards will be changed, primarily due to the backward compatibility issue. In response, we propose an elegant addressing scheme and its routing algorithm. While maintaining the existing address scheme, it tackles the wastage problem and requires no additional memory storage during routing. We also present an adaptive routing algorithm for location-aware applications, using our addressing scheme. Through a series of simulations, we show that our approach can achieve roughly half the routing time of the existing standard in a ZigBee network.

  14. Scales of Time Where the Quantum Discord Allows an Efficient Execution of the DQC1 Algorithm

    Directory of Open Access Journals (Sweden)

    M. Ávila

    2014-01-01

    Full Text Available The power of one qubit deterministic quantum processor (DQC1) (Knill and Laflamme (1998)) generates a nonclassical correlation known as quantum discord. The DQC1 algorithm executes in an efficient way with a characteristic time given by τ = Tr[Un]/2^n, where Un is an n-qubit unitary gate. For pure states, quantum discord means entanglement, while for mixed states such a quantity is more than entanglement. Quantum discord can be thought of as the mutual information between two systems. Within the quantum discord approach the role of time in an efficient evaluation of τ is discussed. It is found that the smaller the value of t/T is, where t is the time of execution of the DQC1 algorithm and T is the scale of time where the nonclassical correlations prevail, the more efficient the calculation of τ is. A Mössbauer nucleus might be a good processor of the DQC1 algorithm, while a nuclear spin chain would not be efficient for the calculation of τ.
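
    The characteristic quantity τ = Tr[Un]/2^n quoted above can be evaluated classically for small n as a sanity check; the sketch below does this for a random 3-qubit unitary built from a QR decomposition. The DQC1 circuit itself estimates the real and imaginary parts of this trace from measurements on the single clean qubit, which is not simulated here.

        import numpy as np

        def normalized_trace(U: np.ndarray) -> complex:
            """tau = Tr[U] / 2^n for an n-qubit unitary U (dimension 2^n x 2^n)."""
            return np.trace(U) / U.shape[0]

        # Random 3-qubit unitary via QR decomposition of a complex Gaussian matrix.
        rng = np.random.default_rng(4)
        n = 3
        A = rng.normal(size=(2**n, 2**n)) + 1j * rng.normal(size=(2**n, 2**n))
        Q, R = np.linalg.qr(A)
        U = Q * (np.diag(R) / np.abs(np.diag(R)))   # fix column phases so U is unitary

        tau = normalized_trace(U)
        print("tau =", np.round(tau, 4))            # |tau| is always <= 1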

  15. A solution to the economic dispatch using EP based SA algorithm on large scale power system

    Energy Technology Data Exchange (ETDEWEB)

    Christober Asir Rajan, C. [Department of EEE, Pondicherry Engineering College, Pondicherry 605 014 (India)

    2010-07-15

    This paper develops a new approach for solving the Economic Load Dispatch (ELD) using an integrated algorithm based on Evolutionary Programming (EP) and Simulated Annealing (SA) on a large-scale power system. Classical methods employed for solving Economic Load Dispatch are calculus-based. For generator units having quadratic fuel cost functions, the classical techniques ignore or flatten out the portions of the incremental fuel cost curves and so may have difficulties in the determination of the global optimum solution for non-differentiable fuel cost functions. To overcome these problems, the intelligent techniques, namely Evolutionary Programming and Simulated Annealing, are employed. These optimization techniques are capable of determining the global or near-global optimum dispatch solutions. The validity and effectiveness of the proposed integrated algorithm have been tested with a 66-bus Indian utility system and the IEEE 5-bus, 30-bus, and 118-bus systems, and the test results are compared with the results obtained from other methods. Numerical results show that the proposed integrated algorithm can provide accurate solutions within reasonable time for any type of fuel cost function. (author)

  16. An Efficient Addressing Scheme and Its Routing Algorithm for a Large-Scale Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    Yongwan Park

    2008-12-01

    Full Text Available So far, various addressing and routing algorithms have been extensively studied for wireless sensor networks (WSNs), but many of them were limited to covering less than hundreds of sensor nodes. This is largely due to stringent requirements for fully distributed coordination among sensor nodes, leading to the wasteful use of the available address space. As there is a growing need for large-scale WSNs, it will be extremely challenging to support more than thousands of nodes using the existing standards. Moreover, it is highly unlikely that the existing standards will be changed, primarily due to the backward compatibility issue. In response, we propose an elegant addressing scheme and its routing algorithm. While maintaining the existing address scheme, it tackles the wastage problem and requires no additional memory storage during routing. We also present an adaptive routing algorithm for location-aware applications, using our addressing scheme. Through a series of simulations, we show that our approach can achieve roughly half the routing time of the existing standard in a ZigBee network.

  17. Conceptual design based on scale laws and algorithms for sub-critical transmutation reactors

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kwang Gu; Chang, Soon Heung [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1997-12-31

    In order to achieve the effective integration of computer-aided conceptual design for an integrated nuclear power reactor, not only is a smooth information flow required, but decision making for both the conceptual design and the construction process design must also be synthesized. In addition to the above, the relations between one design step and another and the methodologies used to optimize the decision variables are examined in this paper, in particular scaling laws and scaling criteria. With respect to the running of the system, an integrated optimization process is proposed in which decisions concerning the conceptual design are made simultaneously. According to the proposed reactor types and power levels, integrated optimization problems are formulated. This optimization is expressed as a multi-objective optimization problem, and the algorithm for solving the problem is also presented. The proposed method is applied to the design of integrated sub-critical reactors. 6 refs., 5 figs., 1 tab. (Author)

  18. Quartic scaling MP2 for solids: A highly parallelized algorithm in the plane wave basis

    Science.gov (United States)

    Schäfer, Tobias; Ramberger, Benjamin; Kresse, Georg

    2017-03-01

    We present a low-complexity algorithm to calculate the correlation energy of periodic systems in second-order Møller-Plesset (MP2) perturbation theory. In contrast to previous approximation-free MP2 codes, our implementation possesses a quartic scaling, O(N^4), with respect to the system size N and offers an almost ideal parallelization efficiency. The general issue that the correlation energy converges slowly with the number of basis functions is eased by an internal basis set extrapolation. The key concept to reduce the scaling is to eliminate all summations over virtual orbitals which can be elegantly achieved in the Laplace transformed MP2 formulation using plane wave basis sets and fast Fourier transforms. Analogously, this approach could allow us to calculate second order screened exchange as well as particle-hole ladder diagrams with a similar low complexity. Hence, the presented method can be considered as a step towards systematically improved correlation energies.
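
    The elimination of explicit sums over virtual orbitals mentioned above rests on the standard Laplace-transform representation of the MP2 energy denominator; with occupied orbital energies ε_i, ε_j, virtual orbital energies ε_a, ε_b and a positive gap, it can be written as follows (quadrature points τ_q and weights w_q are assumed).

        % Laplace transform of the orbital-energy denominator (valid for x > 0),
        % followed by a numerical quadrature with points \tau_q and weights w_q:
        \frac{1}{x} = \int_0^{\infty} e^{-x\tau}\, d\tau
                    \approx \sum_{q=1}^{N_\tau} w_q\, e^{-x\tau_q},
        \qquad x = \varepsilon_a + \varepsilon_b - \varepsilon_i - \varepsilon_j .
        % Because e^{-x\tau_q} factorizes as
        % e^{-\varepsilon_a\tau_q} e^{-\varepsilon_b\tau_q} e^{+\varepsilon_i\tau_q} e^{+\varepsilon_j\tau_q},
        % occupied and virtual indices decouple, and the explicit sums over virtual orbitals
        % can be absorbed into intermediates evaluated with FFTs in the plane wave basis.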

  19. Conceptual design based on scale laws and algorithms for sub-critical transmutation reactors

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kwang Gu; Chang, Soon Heung [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1998-12-31

    In order to achieve the effective integration of computer-aided conceptual design for an integrated nuclear power reactor, not only is a smooth information flow required, but decision making for both the conceptual design and the construction process design must also be synthesized. In addition to the above, the relations between one design step and another and the methodologies used to optimize the decision variables are examined in this paper, in particular scaling laws and scaling criteria. With respect to the running of the system, an integrated optimization process is proposed in which decisions concerning the conceptual design are made simultaneously. According to the proposed reactor types and power levels, integrated optimization problems are formulated. This optimization is expressed as a multi-objective optimization problem, and the algorithm for solving the problem is also presented. The proposed method is applied to the design of integrated sub-critical reactors. 6 refs., 5 figs., 1 tab. (Author)

  20. An inertia-free filter line-search algorithm for large-scale nonlinear programming

    Energy Technology Data Exchange (ETDEWEB)

    Chiang, Nai-Yuan; Zavala, Victor M.

    2016-02-15

    We present a filter line-search algorithm that does not require inertia information of the linear system. This feature enables the use of a wide range of linear algebra strategies and libraries, which is essential to tackle large-scale problems on modern computing architectures. The proposed approach performs curvature tests along the search step to detect negative curvature and to trigger convexification. We prove that the approach is globally convergent and we implement the approach within a parallel interior-point framework to solve large-scale and highly nonlinear problems. Our numerical tests demonstrate that the inertia-free approach is as efficient as inertia detection via symmetric indefinite factorizations. We also demonstrate that the inertia-free approach can lead to reductions in solution time because it reduces the amount of convexification needed.

  1. Analysis of Network Clustering Algorithms and Cluster Quality Metrics at Scale.

    Science.gov (United States)

    Emmons, Scott; Kobourov, Stephen; Gallant, Mike; Börner, Katy

    2016-01-01

    Notions of community quality underlie the clustering of networks. While studies surrounding network clustering are increasingly common, a precise understanding of the relationship between different cluster quality metrics is unknown. In this paper, we examine the relationship between stand-alone cluster quality metrics and information recovery metrics through a rigorous analysis of four widely used network clustering algorithms: Louvain, Infomap, label propagation, and smart local moving. We consider the stand-alone quality metrics of modularity, conductance, and coverage, and we consider the information recovery metrics of adjusted Rand score, normalized mutual information, and a variant of normalized mutual information used in previous work. Our study includes both synthetic graphs and empirical data sets of sizes varying from 1,000 to 1,000,000 nodes. We find significant differences among the results of the different cluster quality metrics. For example, clustering algorithms can return a value of 0.4 out of 1 on modularity but score 0 out of 1 on information recovery. We find conductance, though imperfect, to be the stand-alone quality metric that best indicates performance on the information recovery metrics. Additionally, our study shows that the variant of normalized mutual information used in previous work cannot be assumed to differ only slightly from traditional normalized mutual information. Smart local moving is the overall best performing algorithm in our study, but discrepancies between cluster evaluation metrics prevent us from declaring it an absolutely superior algorithm. Interestingly, Louvain performed better than Infomap in nearly all the tests in our study, contradicting the results of previous work in which Infomap was superior to Louvain. We find that although label propagation performs poorly when clusters are less clearly defined, it scales efficiently and accurately to large graphs with well-defined clusters.
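
    A small sketch of how the two families of metrics in such a comparison can be computed on one clustering is given below, using networkx (2.8 or later, for louvain_communities) for a stand-alone quality metric (modularity) and scikit-learn for information recovery metrics (ARI, NMI) against the planted ground truth of a toy partition graph. The benchmark graphs and the four algorithms of the study itself are not reproduced.

        import networkx as nx
        from networkx.algorithms.community import louvain_communities, modularity
        from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

        # Planted-partition toy graph: two dense blocks of 50 nodes with sparse inter-block edges.
        G = nx.planted_partition_graph(2, 50, p_in=0.3, p_out=0.02, seed=7)
        truth = [v // 50 for v in G]     # nodes 0-49 form block 0, 50-99 block 1 by construction

        communities = louvain_communities(G, seed=7)
        labels = [None] * G.number_of_nodes()
        for c, nodes in enumerate(communities):
            for v in nodes:
                labels[v] = c

        print("modularity:", round(modularity(G, communities), 3))   # stand-alone quality
        print("ARI:", round(adjusted_rand_score(truth, labels), 3))  # information recovery
        print("NMI:", round(normalized_mutual_info_score(truth, labels), 3))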

  2. EvArnoldi: A New Algorithm for Large-Scale Eigenvalue Problems.

    Science.gov (United States)

    Tal-Ezer, Hillel

    2016-05-19

    Eigenvalues and eigenvectors are an essential theme in numerical linear algebra. Their study is mainly motivated by their high importance in a wide range of applications. Knowledge of eigenvalues is essential in quantum molecular science. Solutions of the Schrödinger equation for the electrons composing the molecule are the basis of electronic structure theory. Electronic eigenvalues compose the potential energy surfaces for nuclear motion. The eigenvectors allow calculation of dipole transition matrix elements, the core of spectroscopy. The vibrational dynamics of the molecule also require knowledge of the eigenvalues of the vibrational Hamiltonian. Typically in these problems, the dimension of Hilbert space is huge. Practically, only a small subset of eigenvalues is required. In this paper, we present a highly efficient algorithm, named EvArnoldi, for solving the large-scale eigenvalue problem. The algorithm, in its basic formulation, is mathematically equivalent to ARPACK (Sorensen, D. C. Implicitly Restarted Arnoldi/Lanczos Methods for Large Scale Eigenvalue Calculations; Springer, 1997; Lehoucq, R. B.; Sorensen, D. C. SIAM Journal on Matrix Analysis and Applications 1996, 17, 789; Calvetti, D.; Reichel, L.; Sorensen, D. C. Electronic Transactions on Numerical Analysis 1994, 2, 21) (or Eigs of Matlab) but significantly simpler.
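
    For the same class of task (a few extreme eigenvalues of a very large sparse Hermitian matrix), the usual baseline is exactly the ARPACK machinery the abstract refers to. The sketch below uses SciPy's ARPACK wrapper on a stand-in sparse operator; it is a baseline illustration, not an implementation of EvArnoldi.

```python
# Baseline for the task EvArnoldi targets: a few extreme eigenpairs of a large
# sparse Hermitian matrix via SciPy's ARPACK wrapper (not EvArnoldi itself).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n = 100_000
# Sparse 1-D Laplacian as a stand-in for a large Hamiltonian matrix.
H = sp.diags(
    [np.full(n - 1, -1.0), np.full(n, 2.0), np.full(n - 1, -1.0)],
    offsets=[-1, 0, 1],
    format="csr",
)

# Only a small subset of eigenvalues is needed in practice (here the 5 lowest).
vals, vecs = eigsh(H, k=5, which="SA")
print(vals)
```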

  3. Meta-Heuristics in Short Scale Construction: Ant Colony Optimization and Genetic Algorithm.

    Science.gov (United States)

    Schroeders, Ulrich; Wilhelm, Oliver; Olaru, Gabriel

    2016-01-01

    The advent of large-scale assessment, but also the more frequent use of longitudinal and multivariate approaches to measurement in psychological, educational, and sociological research, caused an increased demand for psychometrically sound short scales. Shortening scales economizes on valuable administration time, but might result in inadequate measures because reducing an item set could: a) change the internal structure of the measure, b) result in poorer reliability and measurement precision, c) deliver measures that cannot effectively discriminate between persons on the intended ability spectrum, and d) reduce test-criterion relations. Different approaches to abbreviate measures fare differently with respect to the above-mentioned problems. Therefore, we compare the quality and efficiency of three item selection strategies to derive short scales from an existing long version: a Stepwise COnfirmatory Factor Analytical approach (SCOFA) that maximizes factor loadings and two metaheuristics, specifically an Ant Colony Optimization (ACO) with a tailored user-defined optimization function and a Genetic Algorithm (GA) with an unspecific cost-reduction function. SCOFA compiled short versions were highly reliable, but had poor validity. In contrast, both metaheuristics outperformed SCOFA and produced efficient and psychometrically sound short versions (unidimensional, reliable, sensitive, and valid). We discuss under which circumstances ACO and GA produce equivalent results and provide recommendations for conditions in which it is advisable to use a metaheuristic with an unspecific out-of-the-box optimization function.

  4. Distributed parallel cooperative coevolutionary multi-objective large-scale immune algorithm for deployment of wireless sensor networks

    DEFF Research Database (Denmark)

    Cao, Bin; Zhao, Jianwei; Yang, Po

    2018-01-01

    Using immune algorithms is generally a time-intensive process, especially for problems with a large number of variables. In this paper, we propose a distributed parallel cooperative coevolutionary multi-objective large-scale immune algorithm that is implemented using the message passing interface (MPI). The proposed algorithm is composed of three layers: objective, group and individual layers. First, for each objective in the multi-objective problem to be addressed, a subpopulation is used for optimization, and an archive population is used to optimize all the objectives. Second, the large… … Compared with the multi-objective evolutionary algorithms the Cooperative Coevolutionary Generalized Differential Evolution 3, the Cooperative Multi-objective Differential Evolution and the Nondominated Sorting Genetic Algorithm III, the proposed algorithm addresses the deployment optimization problem efficiently and effectively.

  5. The efficiency of average linkage hierarchical clustering algorithm associated multi-scale bootstrap resampling in identifying homogeneous precipitation catchments

    Science.gov (United States)

    Chuan, Zun Liang; Ismail, Noriszura; Shinyie, Wendy Ling; Lit Ken, Tan; Fam, Soo-Fen; Senawi, Azlyna; Yusoff, Wan Nur Syahidah Wan

    2018-04-01

    Due to the limited length of historical precipitation records, agglomerative hierarchical clustering algorithms are widely used to extrapolate information from gauged to ungauged precipitation catchments, yielding a more reliable projection of extreme hydro-meteorological events such as extreme precipitation events. However, accurately identifying the optimum number of homogeneous precipitation catchments based on the dendrogram produced by agglomerative hierarchical algorithms is very subjective. The main objective of this study is to propose an efficient regionalized algorithm to identify homogeneous precipitation catchments for non-stationary precipitation time series. The homogeneous precipitation catchments are identified using an average linkage hierarchical clustering algorithm combined with multi-scale bootstrap resampling, with the uncentered correlation coefficient as the similarity measure. The regionalized homogeneous precipitation is consolidated using the K-sample Anderson-Darling non-parametric test. The analysis shows that the proposed regionalized algorithm performs better than the agglomerative hierarchical clustering algorithms proposed in previous studies.

  6. Extreme-Scale Algorithms & Software Resilience (EASIR) Architecture-Aware Algorithms for Scalable Performance and Resilience on Heterogeneous Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Demmel, James W. [Univ. of California, Berkeley, CA (United States)

    2017-09-14

    This project addresses both communication-avoiding algorithms, and reproducible floating-point computation. Communication, i.e. moving data, either between levels of memory or processors over a network, is much more expensive per operation than arithmetic (measured in time or energy), so we seek algorithms that greatly reduce communication. We developed many new algorithms for both dense and sparse, and both direct and iterative linear algebra, attaining new communication lower bounds, and getting large speedups in many cases. We also extended this work in several ways: (1) We minimize writes separately from reads, since writes may be much more expensive than reads on emerging memory technologies, like Flash, sometimes doing asymptotically fewer writes than reads. (2) We extend the lower bounds and optimal algorithms to arbitrary algorithms that may be expressed as perfectly nested loops accessing arrays, where the array subscripts may be arbitrary affine functions of the loop indices (eg A(i), B(i,j+k, k+3*m-7, …) etc.). (3) We extend our communication-avoiding approach to some machine learning algorithms, such as support vector machines. This work has won a number of awards. We also address reproducible floating-point computation. We define reproducibility to mean getting bitwise identical results from multiple runs of the same program, perhaps with different hardware resources or other changes that should ideally not change the answer. Many users depend on reproducibility for debugging or correctness. However, dynamic scheduling of parallel computing resources, combined with nonassociativity of floating point addition, makes attaining reproducibility a challenge even for simple operations like summing a vector of numbers, or more complicated operations like the Basic Linear Algebra Subprograms (BLAS). We describe an algorithm that computes a reproducible sum of floating point numbers, independent of the order of summation. The algorithm depends only on a
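
    The reproducibility problem described above is easy to demonstrate: floating-point addition is not associative, so different summation orders (for example, from dynamic scheduling) give different bitwise results. The snippet below only illustrates that issue and one order-independent remedy available in Python (math.fsum, which returns the correctly rounded sum); it is not the project's communication-avoiding reproducible-sum algorithm.

```python
# Non-associativity of floating-point addition, and an order-independent sum.
import math
import random

random.seed(0)
x = [random.uniform(-1e12, 1e12) for _ in range(100_000)] + [1e-3] * 1000

forward = sum(x)
backward = sum(reversed(x))
shuffled = x[:]
random.shuffle(shuffled)

print(forward == backward, forward == sum(shuffled))  # typically False
print(math.fsum(x) == math.fsum(shuffled))            # True: order-independent
```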

  7. Scale-up of nature’s tissue weaving algorithms to engineer advanced functional materials

    Science.gov (United States)

    Ng, Joanna L.; Knothe, Lillian E.; Whan, Renee M.; Knothe, Ulf; Tate, Melissa L. Knothe

    2017-01-01

    We are literally the stuff from which our tissue fabrics and their fibers are woven and spun. The arrangement of collagen, elastin and other structural proteins in space and time embodies our tissues and organs with amazing resilience and multifunctional smart properties. For example, the periosteum, a soft tissue sleeve that envelops all nonarticular bony surfaces of the body, comprises an inherently “smart” material that gives hard bones added strength under high impact loads. Yet a paucity of scalable bottom-up approaches stymies the harnessing of smart tissues’ biological, mechanical and organizational detail to create advanced functional materials. Here, a novel approach is established to scale up the multidimensional fiber patterns of natural soft tissue weaves for rapid prototyping of advanced functional materials. First second harmonic generation and two-photon excitation microscopy is used to map the microscopic three-dimensional (3D) alignment, composition and distribution of the collagen and elastin fibers of periosteum, the soft tissue sheath bounding all nonarticular bone surfaces in our bodies. Then, using engineering rendering software to scale up this natural tissue fabric, as well as multidimensional weaving algorithms, macroscopic tissue prototypes are created using a computer-controlled jacquard loom. The capacity to prototype scaled up architectures of natural fabrics provides a new avenue to create advanced functional materials.

  8. Cardinality Estimation Algorithm in Large-Scale Anonymous Wireless Sensor Networks

    KAUST Repository

    Douik, Ahmed

    2017-08-30

    Consider a large-scale anonymous wireless sensor network with unknown cardinality. In such graphs, each node has no information about the network topology and only possesses a unique identifier. This paper introduces a novel distributed algorithm for cardinality estimation and topology discovery, i.e., estimating the number of nodes and the structure of the graph, by querying a small number of nodes and performing statistical inference methods. While the cardinality estimation allows the design of more efficient coding schemes for the network, the topology discovery provides a reliable way for routing packets. The proposed algorithm is shown to produce a cardinality estimate proportional to the best linear unbiased estimator for dense graphs and specific running times. Simulation results attest to the theoretical results and reveal that, for a reasonable running time, querying a small group of nodes is sufficient to perform an estimation of 95% of the whole network. Applications of this work include estimating the number of Internet of Things (IoT) sensor devices, online social users, active protein cells, etc.
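
    To make the idea of estimating cardinality from a small number of queried identifiers concrete, the sketch below uses the classic capture-recapture (Lincoln-Petersen) estimator on two random queries. This is a generic sampling estimator given for illustration only; it is not the distributed inference scheme proposed in the paper, and the query sizes are arbitrary.

```python
# Illustrative capture-recapture estimate of an unknown cardinality from two
# small random queries of node identifiers (not the paper's algorithm).
import random

def lincoln_petersen(population, k1=2000, k2=2000, seed=0):
    rng = random.Random(seed)
    first = set(rng.sample(population, k1))    # identifiers seen in query 1
    second = set(rng.sample(population, k2))   # identifiers seen in query 2
    overlap = len(first & second)
    if overlap == 0:
        return float("inf")                    # too few samples to estimate
    return k1 * k2 / overlap                   # estimated cardinality

nodes = list(range(50_000))                    # true size, unknown to the estimator
print(lincoln_petersen(nodes))
```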

  9. A local adaptive algorithm for emerging scale-free hierarchical networks

    International Nuclear Information System (INIS)

    Gomez Portillo, I J; Gleiser, P M

    2010-01-01

    In this work we study a growing network model with chaotic dynamical units that evolves using a local adaptive rewiring algorithm. Using numerical simulations we show that the model allows for the emergence of hierarchical networks. First, we show that the networks that emerge with the algorithm present a wide degree distribution that can be fitted by a power law function, and thus are scale-free networks. Using the LaNet-vi visualization tool we present a graphical representation that reveals a central core formed only by hubs, and also show the presence of a preferential attachment mechanism. In order to present a quantitative analysis of the hierarchical structure we analyze the clustering coefficient. In particular, we show that as the network grows the clustering becomes independent of system size, and also presents a power law decay as a function of the degree. Finally, we compare our results with a similar version of the model that has continuous non-linear phase oscillators as dynamical units. The results show that local interactions play a fundamental role in the emergence of hierarchical networks.

  10. Management Of Large Scale Osmotic Dehydration Solution Using The Pearsons Square Algorithm

    Directory of Open Access Journals (Sweden)

    Oladejo Duduyemi

    2015-01-01

    Full Text Available ABSTRACT Osmotic dehydration is a widely researched and advantageous pre-treatment process in food preservation, but it has not enjoyed industrial acceptance because of its highly concentrated and voluminous effluent. The Pearson square algorithm was employed to address the problem directly by developing a user-friendly template for reconstituting effluents for recycling purposes, implemented as a JavaScript program. Outflow from a pilot scale plant was reactivated and introduced into a scheme of operation for continuous OD of fruits and vegetables. Screened and re-concentrated effluents were subjected to statistical analysis in comparison to the initial solution concentrations at the 0.05 confidence level. The template proved to be an adequate representation of the Pearson square algorithm and is sufficiently good at reconstituting used osmotic solutions for repetitive usage. This protocol, if adopted by industry, is not only environmentally friendly but also promises significant economic improvement of the OD process. Application: recycling of non-reacting media, and use as a template for automation in continuous OD processing.
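
    The Pearson square itself is a simple mixing rule: the parts of a concentrated stock and of a dilute (spent) solution needed to hit a target concentration are given by the cross-differences of the concentrations. A minimal sketch follows; the concentration values in the example are hypothetical and not taken from the study.

```python
# Minimal Pearson square rule for reconstituting a used osmotic solution by
# blending concentrated stock with spent effluent to reach a target strength.
def pearson_square(stock_conc, spent_conc, target_conc):
    """Return the mass fractions of stock and spent solution in the blend."""
    if not (min(stock_conc, spent_conc) <= target_conc <= max(stock_conc, spent_conc)):
        raise ValueError("target must lie between the two concentrations")
    parts_stock = abs(target_conc - spent_conc)   # cross-difference
    parts_spent = abs(stock_conc - target_conc)   # cross-difference
    total = parts_stock + parts_spent
    return parts_stock / total, parts_spent / total

# Hypothetical example: 70 Brix stock, 35 Brix spent effluent, 60 Brix target.
stock_frac, spent_frac = pearson_square(70, 35, 60)
print(f"stock: {stock_frac:.2f}, spent: {spent_frac:.2f}")  # ~0.71 / ~0.29
```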

  11. Algorithms for large scale singular value analysis of spatially variant tomography systems

    International Nuclear Information System (INIS)

    Cao-Huu, Tuan; Brownell, G.; Lachiver, G.

    1996-01-01

    The problem of determining the eigenvalues of large matrices occurs often in the design and analysis of modern tomography systems. As there is an interest in solving systems containing an ever-increasing number of variables, current research effort is being made to create more robust solvers which do not depend on some special feature of the matrix for convergence (e.g. block circulant), and to improve the speed of already known and understood solvers so that solving even larger systems in a reasonable time becomes viable. Our standard techniques for singular value analysis are based on sparse matrix factorization and are not applicable when the input matrices are large because the algorithms cause too much fill. Fill refers to the increase of non-zero elements in the LU decomposition of the original matrix A (the system matrix). So we have developed iterative solutions that are based on sparse direct methods. Data motion and preconditioning techniques are critical for performance. This conference paper describes our algorithmic approaches for large scale singular value analysis of spatially variant imaging systems, and in particular of PCR2, a cylindrical three-dimensional PET imager built at the Massachusetts General Hospital (MGH) in Boston. We recommend the desirable features and challenges for the next generation of parallel machines for optimal performance of our solver.
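
    The motivation for iterative methods here is that they avoid the fill produced by factorization-based SVD. As a baseline illustration (not the authors' PCR2 solver), the sketch below computes the leading singular values of a large sparse stand-in system matrix with SciPy's Lanczos-based svds, which only requires matrix-vector products.

```python
# Leading singular values of a large sparse matrix without dense factorization.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import svds

# Random sparse stand-in for a spatially variant system matrix.
A = sp.random(20_000, 5_000, density=1e-4, format="csr", random_state=0)

u, s, vt = svds(A, k=10)          # 10 largest singular triplets, iteratively
print(np.sort(s)[::-1])
```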

  12. A Framing Link Based Tabu Search Algorithm for Large-Scale Multidepot Vehicle Routing Problems

    Directory of Open Access Journals (Sweden)

    Xuhao Zhang

    2014-01-01

    Full Text Available A framing link (FL) based tabu search algorithm is proposed in this paper for a large-scale multidepot vehicle routing problem (LSMDVRP). Framing links are generated during successive optimization of current solutions and then taken as skeletons so as to improve the optimum-seeking ability, speed up the process of optimization, and obtain better results. Based on the comparison between pre- and post-mutation routes in the current solution, different parts are extracted. In the current optimization period, links involved in the optimal solution are regarded as candidates for the FL base. Multiple optimization periods exist in the whole algorithm, and there are several potential FLs in each period. If the update condition is satisfied, the FL base is updated, new FLs are added into the current route, and the next period starts. Through adjusting the borderline of the multidepot sharing area with dynamic parameters, the authors define candidate selection principles for three kinds of customer connections, respectively. Link split and the roulette approach are employed to choose FLs. 18 LSMDVRP instances in three groups are studied and new optimal solution values for nine of them are obtained, with higher computation speed and reliability.

  13. A numerical formulation and algorithm for limit and shakedown analysis of large-scale elastoplastic structures

    Science.gov (United States)

    Peng, Heng; Liu, Yinghua; Chen, Haofeng

    2018-05-01

    In this paper, a novel direct method called the stress compensation method (SCM) is proposed for limit and shakedown analysis of large-scale elastoplastic structures. Without needing to solve the specific mathematical programming problem, the SCM is a two-level iterative procedure based on a sequence of linear elastic finite element solutions where the global stiffness matrix is decomposed only once. In the inner loop, the static admissible residual stress field for shakedown analysis is constructed. In the outer loop, a series of decreasing load multipliers are updated to approach to the shakedown limit multiplier by using an efficient and robust iteration control technique, where the static shakedown theorem is adopted. Three numerical examples up to about 140,000 finite element nodes confirm the applicability and efficiency of this method for two-dimensional and three-dimensional elastoplastic structures, with detailed discussions on the convergence and the accuracy of the proposed algorithm.

  14. Topology-oblivious optimization of MPI broadcast algorithms on extreme-scale platforms

    KAUST Repository

    Hasanov, Khalid

    2015-11-01

    © 2015 Elsevier B.V. All rights reserved. Significant research has been conducted in collective communication operations, in particular in MPI broadcast, on distributed memory platforms. Most of the research efforts aim to optimize the collective operations for particular architectures by taking into account either their topology or platform parameters. In this work we propose a simple but general approach to optimization of the legacy MPI broadcast algorithms, which are widely used in MPICH and Open MPI. The proposed optimization technique is designed to address the challenge of extreme scale of future HPC platforms. It is based on hierarchical transformation of the traditionally flat logical arrangement of communicating processors. Theoretical analysis and experimental results on IBM BlueGene/P and a cluster of the Grid'5000 platform are presented.
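
    The hierarchical transformation idea can be sketched with mpi4py: instead of one flat broadcast, processes are grouped, the message is broadcast among group leaders first and then inside each group. This is only an illustration of the arrangement described above, not the paper's implementation; the group size G is an arbitrary tunable choice here.

```python
# Two-level (hierarchical) broadcast sketch on top of the flat MPI communicator.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
G = 16                                   # processes per group (tunable)

group_comm = comm.Split(color=rank // G, key=rank)                    # intra-group
leader_comm = comm.Split(color=0 if rank % G == 0 else 1, key=rank)   # group leaders

data = list(range(1000)) if rank == 0 else None
if rank % G == 0:                        # level 1: broadcast among group leaders
    data = leader_comm.bcast(data, root=0)
data = group_comm.bcast(data, root=0)    # level 2: broadcast inside each group

assert data[0] == 0 and len(data) == 1000
```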

  15. HISTOPATHOLOGICAL SCALE AND SYNOVITIS ALGORITHM – 15 YEARS OF EXPERIENCE: EVALUATION AND FOLLOWING PROGRESS

    Directory of Open Access Journals (Sweden)

    V. Krenn

    2017-01-01

    …inflammatory antigens were suggested for immunohistochemical analysis (including Ki-67, CD68, CD3, CD15 and CD20). This immunohistochemical scale and the subdivision into low- and high-grade synovitis provided a possibility to assess the risk of development and the biological sensitivity of rheumatoid arthritis. Thus, an important histological input was made into primary rheumatology diagnostics, which previously did not consider tissue changes. Due to the formal integration of the synovitis scale into the algorithm of synovial pathology diagnostics, a comprehensive classification was developed specifically for differentiated orthopaedic diagnostics.

  16. The leaf-level emission factor of volatile isoprenoids: caveats, model algorithms, response shapes and scaling

    Directory of Open Access Journals (Sweden)

    Ü. Niinemets

    2010-06-01

    Full Text Available In models of plant volatile isoprenoid emissions, the instantaneous compound emission rate typically scales with the plant's emission potential under specified environmental conditions, also called the emission factor, ES. In the most widely employed plant isoprenoid emission models, the algorithms developed by Guenther and colleagues (1991, 1993), instantaneous variation of the steady-state emission rate is described as the product of ES and light and temperature response functions. When these models are employed in the atmospheric chemistry modeling community, species-specific ES values and parameter values defining the instantaneous response curves are often taken as initially defined. In the current review, we argue that ES as a characteristic used in the models importantly depends on our understanding of which environmental factors affect isoprenoid emissions, and consequently needs standardization during experimental ES determinations. In particular, there is now increasing consensus that in addition to variations in light and temperature, alterations in atmospheric and/or within-leaf CO2 concentrations may need to be included in the emission models. Furthermore, we demonstrate that for less volatile isoprenoids, mono- and sesquiterpenes, the emissions are often jointly controlled by the compound synthesis and volatility. Because of these combined biochemical and physico-chemical drivers, specification of ES as a constant value is incapable of describing instantaneous emissions within the sole assumptions of fluctuating light and temperature as used in the standard algorithms. The definition of ES also varies depending on the degree of aggregation of ES values in different parameterization schemes (leaf- vs. canopy- or region-scale, species vs. plant functional type levels and various
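
    For concreteness, the widely used Guenther-type formulation mentioned above computes the instantaneous emission as ES multiplied by a light response C_L and a temperature response C_T. The sketch below uses the commonly quoted isoprene parameter values; treat the constants as illustrative rather than authoritative for any particular species or model version.

```python
# Guenther-style light/temperature scaling of an emission factor ES (illustrative).
import math

R = 8.314                           # J mol-1 K-1
ALPHA, C_L1 = 0.0027, 1.066         # light response parameters
C_T1, C_T2 = 95_000.0, 230_000.0    # J mol-1, temperature response parameters
T_S, T_M = 303.0, 314.0             # K, standard and optimum-related temperatures

def light_response(par):
    """C_L for photosynthetically active radiation PAR (umol m-2 s-1)."""
    return ALPHA * C_L1 * par / math.sqrt(1.0 + ALPHA**2 * par**2)

def temperature_response(t_leaf):
    """C_T for leaf temperature in kelvin."""
    num = math.exp(C_T1 * (t_leaf - T_S) / (R * T_S * t_leaf))
    den = 1.0 + math.exp(C_T2 * (t_leaf - T_M) / (R * T_S * t_leaf))
    return num / den

def emission(es, par, t_leaf):
    """Instantaneous emission rate, in the same units as the emission factor ES."""
    return es * light_response(par) * temperature_response(t_leaf)

print(emission(es=1.0, par=1000.0, t_leaf=303.15))  # close to ES near standard conditions
```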

  17. The tensor hypercontracted parametric reduced density matrix algorithm: coupled-cluster accuracy with O(r^4) scaling.

    Science.gov (United States)

    Shenvi, Neil; van Aggelen, Helen; Yang, Yang; Yang, Weitao; Schwerdtfeger, Christine; Mazziotti, David

    2013-08-07

    Tensor hypercontraction is a method that allows the representation of a high-rank tensor as a product of lower-rank tensors. In this paper, we show how tensor hypercontraction can be applied to both the electron repulsion integral tensor and the two-particle excitation amplitudes used in the parametric 2-electron reduced density matrix (p2RDM) algorithm. Because only O(r) auxiliary functions are needed in both of these approximations, our overall algorithm can be shown to scale as O(r^4), where r is the number of single-particle basis functions. We apply our algorithm to several small molecules, hydrogen chains, and alkanes to demonstrate its low formal scaling and practical utility. Provided we use enough auxiliary functions, we obtain accuracy similar to that of the standard p2RDM algorithm, somewhere between that of CCSD and CCSD(T).
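
    The contraction pattern behind tensor hypercontraction can be shown in a few lines: a four-index tensor V[p,q,r,s] is approximated from a rank-P factor matrix X and a small core matrix Z. The shapes and random data below are arbitrary and only illustrate the factorization, not the p2RDM algorithm itself.

```python
# THC-style reconstruction of a four-index tensor from low-rank factors.
import numpy as np

r, P = 20, 60                     # basis functions, auxiliary (THC) functions
rng = np.random.default_rng(0)
X = rng.standard_normal((r, P))   # collocation-like factor
Z = rng.standard_normal((P, P))
Z = 0.5 * (Z + Z.T)               # keep the core symmetric

# V[p,q,r,s] ~= sum_{A,B} X[p,A] X[q,A] Z[A,B] X[r,B] X[s,B]
V_thc = np.einsum("pA,qA,AB,rB,sB->pqrs", X, X, Z, X, X, optimize=True)
print(V_thc.shape)                # (20, 20, 20, 20)

# Storage drops from O(r^4) for V to O(r*P + P^2) for the THC factors.
```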

  18. Algorithm of Computer Model Realization of High-Frequency Processes in Switchgears Containing Non-Linear Over-Voltage Limiters

    Directory of Open Access Journals (Sweden)

    Ye. V. Dmitriev

    2007-01-01

    Full Text Available The paper analyses the influence of the over-voltage limiter (OVL) on electromagnetic high-frequency over-voltages during switching, with isolators, of unloaded sections of wires, and the possibility of applying a frequency-dependent resistor when the OVL operating conditions need to be eased. It is shown that, in computer modeling of high-frequency over-voltages, the characteristics of the OVL must be taken into account by means of the IEEE circuit model and its modifications.

  19. A false-alarm aware methodology to develop robust and efficient multi-scale infrared small target detection algorithm

    Science.gov (United States)

    Moradi, Saed; Moallem, Payman; Sabahi, Mohamad Farzan

    2018-03-01

    False alarm rate and detection rate are still two contradictory metrics for infrared small target detection in an infrared search and track system (IRST), despite the development of new detection algorithms. In certain circumstances, not detecting true targets is more tolerable than detecting false items as true targets. Hence, considering background clutter and detector noise as the sources of false alarms in an IRST system, in this paper, a false alarm aware methodology is presented to reduce the false alarm rate while the detection rate remains undegraded. To this end, advantages and disadvantages of each detection algorithm are investigated and the sources of the false alarms are determined. Two target detection algorithms having independent false alarm sources are chosen in a way that the disadvantages of one algorithm can be compensated by the advantages of the other one. In this work, multi-scale average absolute gray difference (AAGD) and Laplacian of point spread function (LoPSF) are utilized as the cornerstones of the desired algorithm of the proposed methodology. After presenting a conceptual model for the desired algorithm, it is implemented through the most straightforward mechanism. The desired algorithm effectively suppresses background clutter and eliminates detector noise. Also, since the input images are processed through just four different scales, the desired algorithm has good capability for real-time implementation. Simulation results in terms of signal-to-clutter ratio and background suppression factor on real and simulated images prove the effectiveness and the performance of the proposed methodology. Since the desired algorithm was developed based on independent false alarm sources, our proposed methodology is expandable to any pair of detection algorithms which have different false alarm sources.

  20. Bilevel Traffic Evacuation Model and Algorithm Design for Large-Scale Activities

    Directory of Open Access Journals (Sweden)

    Danwen Bao

    2017-01-01

    Full Text Available This paper establishes a bilevel planning model with one master and multiple slaves to solve traffic evacuation problems. The minimum evacuation network saturation and shortest evacuation time are used as the objective functions for the upper- and lower-level models, respectively. The optimizing conditions of this model are also analyzed. An improved particle swarm optimization (PSO method is proposed by introducing an electromagnetism-like mechanism to solve the bilevel model and enhance its convergence efficiency. A case study is carried out using the Nanjing Olympic Sports Center. The results indicate that, for large-scale activities, the average evacuation time of the classic model is shorter but the road saturation distribution is more uneven. Thus, the overall evacuation efficiency of the network is not high. For induced emergencies, the evacuation time of the bilevel planning model is shortened. When the audience arrival rate is increased from 50% to 100%, the evacuation time is shortened from 22% to 35%, indicating that the optimization effect of the bilevel planning model is more effective compared to the classic model. Therefore, the model and algorithm presented in this paper can provide a theoretical basis for the traffic-induced evacuation decision making of large-scale activities.

  1. Fuzzy Logic-Based Perturb and Observe Algorithm with Variable Step of a Reference Voltage for Solar Permanent Magnet Synchronous Motor Drive System Fed by Direct-Connected Photovoltaic Array

    Directory of Open Access Journals (Sweden)

    Mohamed Redha Rezoug

    2018-02-01

    Full Text Available Photovoltaic pumping is considered the most common photovoltaic energy application in isolated sites, yet the technology is still progressing slowly towards allowing the photovoltaic system to operate at its maximum power. This work introduces a modified perturb and observe (P&O) algorithm that overcomes the limitations of the conventional P&O algorithm and increases its overall performance under abrupt weather changes. The most significant restriction of the conventional P&O algorithm is the difficulty of choosing the variable step of the reference voltage so as to reach a good compromise between a swift dynamic response and stability in the steady state. To adjust the reference-voltage step according to the location of the operating point relative to the maximum power point (MPP), a fuzzy logic controller (FLC) block adapted to the P&O algorithm is used. This improves the tracking speed and eliminates steady-state oscillation. The suggested method was evaluated by simulation using MATLAB/SimPowerSystems blocks and compared to the classical P&O under different irradiation levels. The results show the effectiveness of the proposed technique and its capacity for practical and efficient tracking of maximum power.
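
    The underlying P&O update is simple: perturb the reference voltage, observe whether power increased, and keep moving in the direction that raised power. The sketch below shows a variable-step version in which the step is scaled by |dP/dV| instead of being chosen by a fuzzy controller, so the scaling constant and limits are illustrative assumptions rather than the paper's design.

```python
# Variable-step perturb-and-observe (P&O) update of the reference voltage.
def perturb_and_observe(v, p, v_prev, p_prev, v_ref, k=0.05, step_max=2.0, step_min=0.05):
    """Return the new reference voltage from two successive (V, P) samples."""
    dv, dp = v - v_prev, p - p_prev
    if dv == 0:
        return v_ref                                        # nothing to infer yet
    step = min(step_max, max(step_min, k * abs(dp / dv)))   # small step near the MPP
    # Climb the P-V curve: keep perturbing in the direction that increased power.
    if dp / dv > 0:        # left of the MPP -> raise the reference voltage
        return v_ref + step
    else:                  # right of the MPP -> lower the reference voltage
        return v_ref - step

# Example: power rose while voltage rose, so the reference voltage keeps increasing.
print(perturb_and_observe(v=30.2, p=240.0, v_prev=30.0, p_prev=235.0, v_ref=30.2))
```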

  2. Scaling Up Coordinate Descent Algorithms for Large ℓ1 Regularization Problems

    Energy Technology Data Exchange (ETDEWEB)

    Scherrer, Chad; Halappanavar, Mahantesh; Tewari, Ambuj; Haglin, David J.

    2012-07-03

    We present a generic framework for parallel coordinate descent (CD) algorithms that has as special cases the original sequential algorithms of Cyclic CD and Stochastic CD, as well as the recent parallel Shotgun algorithm of Bradley et al. We introduce two novel parallel algorithms that are also special cases---Thread-Greedy CD and Coloring-Based CD---and give performance measurements for an OpenMP implementation of these.
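
    The basic coordinate update that all of these CD variants parallelize is the one-dimensional soft-thresholding step of the lasso. The sketch below shows a plain sequential cyclic CD solver for reference; it is generic textbook code, not the Shotgun, Thread-Greedy or Coloring-Based schedulers described above.

```python
# Cyclic coordinate descent for the lasso:  min_w 0.5/n ||y - Xw||^2 + lam ||w||_1
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def cyclic_cd_lasso(X, y, lam, n_iters=100):
    n, d = X.shape
    w = np.zeros(d)
    resid = y.astype(float).copy()            # residual y - Xw with w = 0
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iters):
        for j in range(d):                    # one coordinate at a time
            resid += X[:, j] * w[j]           # remove coordinate j's contribution
            rho = X[:, j] @ resid / n
            w[j] = soft_threshold(rho, lam) / col_sq[j]
            resid -= X[:, j] * w[j]           # add the updated contribution back
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
w_true = np.zeros(50); w_true[:5] = 3.0
y = X @ w_true + 0.1 * rng.standard_normal(200)
print(np.round(cyclic_cd_lasso(X, y, lam=0.1)[:8], 2))
```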

  3. A Refined Self-Tuning Filter-Based Instantaneous Power Theory Algorithm for Indirect Current Controlled Three-Level Inverter-Based Shunt Active Power Filters under Non-sinusoidal Source Voltage Conditions

    Directory of Open Access Journals (Sweden)

    Yap Hoon

    2017-02-01

    Full Text Available In this paper, a refined reference current generation algorithm based on instantaneous power (pq theory is proposed, for operation of an indirect current controlled (ICC three-level neutral-point diode clamped (NPC inverter-based shunt active power filter (SAPF under non-sinusoidal source voltage conditions. SAPF is recognized as one of the most effective solutions to current harmonics due to its flexibility in dealing with various power system conditions. As for its controller, pq theory has widely been applied to generate the desired reference current due to its simple implementation features. However, the conventional dependency on self-tuning filter (STF in generating reference current has significantly limited mitigation performance of SAPF. Besides, the conventional STF-based pq theory algorithm is still considered to possess needless features which increase computational complexity. Furthermore, the conventional algorithm is mostly designed to suit operation of direct current controlled (DCC SAPF which is incapable of handling switching ripples problems, thereby leading to inefficient mitigation performance. Therefore, three main improvements are performed which include replacement of STF with mathematical-based fundamental real power identifier, removal of redundant features, and generation of sinusoidal reference current. To validate effectiveness and feasibility of the proposed algorithm, simulation work in MATLAB-Simulink and laboratory test utilizing a TMS320F28335 digital signal processor (DSP are performed. Both simulation and experimental findings demonstrate superiority of the proposed algorithm over the conventional algorithm.
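
    The pq quantities that such a reference-current generator starts from are obtained by a Clarke transform of the three-phase voltages and currents followed by the instantaneous real and imaginary power products. The sketch below shows only those standard quantities, with one common sign convention for q; it is not the proposed identifier or the SAPF controller.

```python
# Instantaneous real (p) and imaginary (q) power from three-phase samples.
import numpy as np

CLARKE = np.sqrt(2.0 / 3.0) * np.array([
    [1.0, -0.5, -0.5],
    [0.0, np.sqrt(3.0) / 2.0, -np.sqrt(3.0) / 2.0],
])

def instantaneous_pq(v_abc, i_abc):
    """Return (p, q) for one sample of three-phase voltages and currents."""
    v_alpha, v_beta = CLARKE @ np.asarray(v_abc, dtype=float)
    i_alpha, i_beta = CLARKE @ np.asarray(i_abc, dtype=float)
    p = v_alpha * i_alpha + v_beta * i_beta      # instantaneous real power
    q = v_beta * i_alpha - v_alpha * i_beta      # instantaneous imaginary power
    return p, q

# Balanced sinusoidal example at one time instant, current lagging by 0.2 rad.
theta = 0.3
v = [np.cos(theta), np.cos(theta - 2*np.pi/3), np.cos(theta + 2*np.pi/3)]
i = [np.cos(theta - 0.2), np.cos(theta - 0.2 - 2*np.pi/3), np.cos(theta - 0.2 + 2*np.pi/3)]
print(instantaneous_pq(v, i))
```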

  4. Planimetric Features Generalization for the Production of Small-Scale Map by Using Base Maps and the Existing Algorithms

    Directory of Open Access Journals (Sweden)

    M. Modiri

    2014-10-01

    Full Text Available Cartographic maps are representations of the Earth upon a flat surface at a smaller scale than true scale. Large scale maps cover relatively small regions in great detail, while small scale maps cover large regions such as nations, continents and the whole globe. The logical connection between the features and the map scale must be maintained when the scale is changed, and it is important to recognize that even the most accurate maps sacrifice a certain amount of accuracy in scale to deliver greater visual usefulness to the user. Cartographic generalization, or map generalization, is the method whereby information is selected and represented on a map in a way that adapts to the scale of the display medium of the map, not necessarily preserving all intricate geographical or other cartographic details. Because of the problems facing the small-scale map production process and the time and money required for surveying, generalization is used today as the practical approach. The software proposed in this paper converts various data and information into a defined data model and can produce a generalized map from a base map using the existing algorithms. Planimetric generalization algorithms and rules are described in this article. Finally, small-scale maps at 1:100,000, 1:250,000 and 1:500,000 scale are produced automatically and shown at the end.

  5. Semi-flocking algorithm for motion control of mobile sensors in large-scale surveillance systems.

    Science.gov (United States)

    Semnani, Samaneh Hosseini; Basir, Otman A

    2015-01-01

    The ability of sensors to self-organize is an important asset in surveillance sensor networks. Self-organization implies self-control at the sensor level and coordination at the network level. Biologically inspired approaches have recently gained significant attention as a tool to address the issue of sensor control and coordination in sensor networks. These approaches are exemplified by the two well-known algorithms, namely, the Flocking algorithm and the Anti-Flocking algorithm. Generally speaking, although these two biologically inspired algorithms have demonstrated promising performance, they expose deficiencies when it comes to their ability to maintain simultaneous robust dynamic area coverage and target coverage. These two coverage performance objectives are inherently conflicting. This paper presents Semi-Flocking, a biologically inspired algorithm that benefits from key characteristics of both the Flocking and Anti-Flocking algorithms. The Semi-Flocking algorithm approaches the problem by assigning a small flock of sensors to each target, while at the same time leaving some sensors free to explore the environment. This allows the algorithm to strike a balance between robust area coverage and target coverage. Such balance is facilitated via flock-sensor coordination. The performance of the proposed Semi-Flocking algorithm is examined and compared with the other two flocking-based algorithms once using randomly moving targets and once using a standard walking pedestrian dataset. The results of both experiments show that the Semi-Flocking algorithm outperforms both the Flocking algorithm and the Anti-Flocking algorithm with respect to the area of coverage and the target coverage objectives. Furthermore, the results show that the proposed algorithm demonstrates shorter target detection time and fewer undetected targets than the other two flocking-based algorithms.

  6. A Large-Scale Multi-Hop Localization Algorithm Based on Regularized Extreme Learning for Wireless Networks.

    Science.gov (United States)

    Zheng, Wei; Yan, Xiaoyong; Zhao, Wei; Qian, Chengshan

    2017-12-20

    A novel large-scale multi-hop localization algorithm based on regularized extreme learning is proposed in this paper. The large-scale multi-hop localization problem is formulated as a learning problem. Unlike other similar localization algorithms, the proposed algorithm overcomes the shortcoming of the traditional algorithms which are only applicable to an isotropic network, therefore has a strong adaptability to the complex deployment environment. The proposed algorithm is composed of three stages: data acquisition, modeling and location estimation. In data acquisition stage, the training information between nodes of the given network is collected. In modeling stage, the model among the hop-counts and the physical distances between nodes is constructed using regularized extreme learning. In location estimation stage, each node finds its specific location in a distributed manner. Theoretical analysis and several experiments show that the proposed algorithm can adapt to the different topological environments with low computational cost. Furthermore, high accuracy can be achieved by this method without setting complex parameters.
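
    The modeling stage described above fits a regression from hop counts to physical distances with a regularized extreme learning machine. The sketch below is a generic regularized ELM regressor on toy hop/distance data, shown only to make that model class concrete; the architecture, ridge parameter and data are illustrative assumptions, not the paper's setup.

```python
# Generic regularized extreme learning machine (ELM) regressor.
import numpy as np

class RegularizedELM:
    def __init__(self, n_hidden=100, ridge=1e-2, seed=0):
        self.n_hidden, self.ridge = n_hidden, ridge
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)              # fixed random feature map

    def fit(self, X, y):
        d = X.shape[1]
        self.W = self.rng.standard_normal((d, self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = self._hidden(X)
        # Ridge-regularized least squares for the output weights only.
        A = H.T @ H + self.ridge * np.eye(self.n_hidden)
        self.beta = np.linalg.solve(A, H.T @ y)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Toy use: map noisy hop counts to physical distances.
rng = np.random.default_rng(1)
hops = rng.integers(1, 20, size=(500, 1)).astype(float)
dist = 25.0 * hops[:, 0] + rng.normal(0, 5, 500)
model = RegularizedELM().fit(hops, dist)
print(model.predict(np.array([[10.0]])))
```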

  7. An {Mathematical expression} iteration bound primal-dual cone affine scaling algorithm for linear programming

    NARCIS (Netherlands)

    J.F. Sturm; J. Zhang (Shuzhong)

    1996-01-01

    textabstractIn this paper we introduce a primal-dual affine scaling method. The method uses a search-direction obtained by minimizing the duality gap over a linearly transformed conic section. This direction neither coincides with known primal-dual affine scaling directions (Jansen et al., 1993;

  8. Dynamic Voltage-Frequency and Workload Joint Scaling Power Management for Energy Harvesting Multi-Core WSN Node SoC

    Directory of Open Access Journals (Sweden)

    Xiangyu Li

    2017-02-01

    Full Text Available This paper proposes a scheduling and power management solution for an energy harvesting heterogeneous multi-core WSN node SoC such that the system continues to operate perennially and uses the harvested energy efficiently. The solution consists of a task scheduling algorithm oriented to heterogeneous multi-core systems and a low-complexity dynamic workload scaling and configuration optimization algorithm suitable for lightweight platforms. Moreover, since the power consumption of most WSN applications is data dependent, we introduce a branch handling mechanism into the solution as well. The experimental results show that the proposed algorithm can operate in real time on a lightweight embedded processor (MSP430), and that it enables the system to do more valuable work while using more than 99.9% of the power budget.
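
    The joint decision described above can be caricatured in a few lines: pick the lowest voltage-frequency operating point that still meets the deadline, then admit optional workload only while the harvested-energy budget allows it. The operating points, the simple E ~ C·V²·cycles energy model and all numbers below are illustrative assumptions, not the paper's platform data or algorithm.

```python
# Toy joint DVFS / workload-scaling decision under an energy budget.
OPERATING_POINTS = [            # (frequency in MHz, core voltage in V), hypothetical
    (8, 1.8), (16, 2.2), (25, 3.0),
]
C_EFF = 1e-9                    # effective switched capacitance (arbitrary units)

def pick_operating_point(cycles, deadline_s):
    """Lowest-power point that meets the deadline, else the fastest one."""
    for f_mhz, v in OPERATING_POINTS:
        if cycles / (f_mhz * 1e6) <= deadline_s:
            return f_mhz, v
    return OPERATING_POINTS[-1]

def scale_workload(base_cycles, optional_cycles, energy_budget_j, deadline_s):
    """Admit optional work in chunks only while the energy budget covers it."""
    f_mhz, v = pick_operating_point(base_cycles + optional_cycles, deadline_s)
    total = base_cycles
    while total < base_cycles + optional_cycles:
        if C_EFF * v * v * (total + 1000) > energy_budget_j:
            break
        total += 1000
    return total, (f_mhz, v)

print(scale_workload(base_cycles=2_000_000, optional_cycles=1_000_000,
                     energy_budget_j=0.02, deadline_s=0.5))
```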

  9. Near-Field Three-Dimensional Planar Millimeter-Wave Holographic Imaging by Using Frequency Scaling Algorithm

    Directory of Open Access Journals (Sweden)

    Ye Zhang

    2017-10-01

    Full Text Available In this paper, a fast three-dimensional (3-D frequency scaling algorithm (FSA with large depth of focus is presented for near-field planar millimeter-wave (MMW holographic imaging. Considering the cross-range range coupling term which is neglected in the conventional range migration algorithm (RMA, we propose an algorithm performing the range cell migration correction for de-chirped signals without interpolation by using a 3-D frequency scaling operation. First, to deal with the cross-range range coupling term, a 3-D frequency scaling operator is derived to eliminate the space variation of range cell migration. Then, a range migration correction factor is performed to compensate for the residual range cell migration. Finally, the imaging results are obtained by matched filtering in the cross-range direction. Compared with the conventional RMA, the proposed algorithm is comparable in accuracy but more efficient by using only chirp multiplications and fast Fourier transforms (FFTs. The algorithm has been tested with satisfying results by both simulation and experiment.

  10. Computational issues in complex water-energy optimization problems: Time scales, parameterizations, objectives and algorithms

    Science.gov (United States)

    Efstratiadis, Andreas; Tsoukalas, Ioannis; Kossieris, Panayiotis; Karavokiros, George; Christofides, Antonis; Siskos, Alexandros; Mamassis, Nikos; Koutsoyiannis, Demetris

    2015-04-01

    Modelling of large-scale hybrid renewable energy systems (HRES) is a challenging task, for which several open computational issues exist. HRES comprise typical components of hydrosystems (reservoirs, boreholes, conveyance networks, hydropower stations, pumps, water demand nodes, etc.), which are dynamically linked with renewables (e.g., wind turbines, solar parks) and energy demand nodes. In such systems, apart from the well-known shortcomings of water resources modelling (nonlinear dynamics, unknown future inflows, large number of variables and constraints, conflicting criteria, etc.), additional complexities and uncertainties arise due to the introduction of energy components and associated fluxes. A major difficulty is the need for coupling two different temporal scales, given that in hydrosystem modeling, monthly simulation steps are typically adopted, yet for a faithful representation of the energy balance (i.e. energy production vs. demand) a much finer resolution (e.g. hourly) is required. Another drawback is the increase of control variables, constraints and objectives, due to the simultaneous modelling of the two parallel fluxes (i.e. water and energy) and their interactions. Finally, since the driving hydrometeorological processes of the integrated system are inherently uncertain, it is often essential to use synthetically generated input time series of large length, in order to assess the system performance in terms of reliability and risk, with satisfactory accuracy. To address these issues, we propose an effective and efficient modeling framework, key objectives of which are: (a) the substantial reduction of control variables, through parsimonious yet consistent parameterizations; (b) the substantial decrease of computational burden of simulation, by linearizing the combined water and energy allocation problem of each individual time step, and solve each local sub-problem through very fast linear network programming algorithms, and (c) the substantial

  11. Linear time algorithms to construct populations fitting multiple constraint distributions at genomic scales.

    Science.gov (United States)

    Siragusa, Enrico; Haiminen, Niina; Utro, Filippo; Parida, Laxmi

    2017-10-09

    Computer simulations can be used to study population genetic methods, models and parameters, as well as to predict potential outcomes. For example, in plant populations, predicting the outcome of breeding operations can be studied using simulations. In-silico construction of populations with pre-specified characteristics is an important task in breeding optimization and other population genetic studies. We present two linear time Simulation using Best-fit Algorithms (SimBA) for two classes of problems where each co-fits two distributions: SimBA-LD fits linkage disequilibrium and minimum allele frequency distributions, while SimBA-hap fits founder-haplotype and polyploid allele dosage distributions. An incremental gap-filling version of the previously introduced SimBA-LD is here demonstrated to accurately fit the target distributions, allowing efficient large scale simulations. SimBA-hap accuracy and efficiency are demonstrated by simulating tetraploid populations with varying numbers of founder haplotypes; we evaluate both a linear time greedy algorithm and an optimal solution based on mixed-integer programming. SimBA is available at http://researcher.watson.ibm.com/project/5669.

  12. A Parametric Genetic Algorithm Approach to Assess Complementary Options of Large Scale Wind-solar Coupling

    Institute of Scientific and Technical Information of China (English)

    Tim Mareda; Ludovic Gaudard; Franco Romerio

    2017-01-01

    The transitional path towards a highly renewable power system based on wind and solar energy sources is investigated considering their intermittent and spatially distributed characteristics. Using an extensive weather-driven simulation of hourly power mismatches between generation and load, we explore the interplay between geographical resource complementarity and energy storage strategies. Solar and wind resources are considered at variable spatial scales across Europe and related to the Swiss load curve, which serves as a typical demand side reference. The optimal spatial distribution of renewable units is further assessed through a parameterized optimization method based on a genetic algorithm. It allows us to explore systematically the effective potential of combined integration strategies depending on the sizing of the system, with a focus on how overall performance is affected by the definition of network boundaries. Upper bounds on integration schemes are provided considering both renewable penetration and needed reserve power capacity. The quantitative trade-off between grid extension, storage and optimal wind-solar mix is highlighted. This paper also brings insights on how the optimal geographical distribution of renewable units evolves as a function of renewable penetration and grid extent.

  13. Improving Genetic Algorithm with Fine-Tuned Crossover and Scaled Architecture

    Directory of Open Access Journals (Sweden)

    Ajay Shrestha

    2016-01-01

    Full Text Available Genetic Algorithm (GA is a metaheuristic used in solving combinatorial optimization problems. Inspired by evolutionary biology, GA uses selection, crossover, and mutation operators to efficiently traverse the solution search space. This paper proposes nature inspired fine-tuning to the crossover operator using the untapped idea of Mitochondrial DNA (mtDNA. mtDNA is a small subset of the overall DNA. It differentiates itself by inheriting entirely from the female, while the rest of the DNA is inherited equally from both parents. This unique characteristic of mtDNA can be an effective mechanism to identify members with similar genes and restrict crossover between them. It can reduce the rate of dilution of diversity and result in delayed convergence. In addition, we scale the well-known Island Model, where instances of GA are run independently and population members exchanged periodically, to a Continental Model. In this model, multiple web services are executed with each web service running an island model. We applied the concept of mtDNA in solving Traveling Salesman Problem and to train Neural Network for function approximation. Our implementation tests show that leveraging these new concepts of mtDNA and Continental Model results in relative improvement of the optimization quality of GA.

  14. Partial spin absorption induced magnetization switching and its voltage-assisted improvement in an asymmetrical all spin logic device at the mesoscopic scale

    Science.gov (United States)

    Zhang, Yue; Zhang, Zhizhong; Wang, Lezhi; Nan, Jiang; Zheng, Zhenyi; Li, Xiang; Wong, Kin; Wang, Yu; Klein, Jacques-Olivier; Khalili Amiri, Pedram; Zhang, Youguang; Wang, Kang L.; Zhao, Weisheng

    2017-07-01

    Beyond memory and storage, future logic applications put forward higher requirements for electronic devices. All spin logic devices (ASLDs) have drawn exceptional interest as they utilize pure spin current instead of charge current, which could promise ultra-low power consumption. However, relatively low efficiencies of spin injection, transport, and detection actually impede high-speed magnetization switching and challenge perspectives of ASLD. In this work, we study partial spin absorption induced magnetization switching in asymmetrical ASLD at the mesoscopic scale, in which the injector and detector have the nano-fabrication compatible device size (>100 nm) and their contact areas are different. The enlarged contact area of the detector is conducive to the spin current absorption, and the contact resistance difference between the injector and the detector can decrease the spin current backflow. Rigorous spin circuit modeling and micromagnetic simulations have been carried out to analyze the electrical and magnetic features. The results show that, at the fabrication-oriented technology scale, the ferromagnetic layer can hardly be switched by geometrically partial spin current absorption. The voltage-controlled magnetic anisotropy (VCMA) effect has been applied on the detector to accelerate the magnetization switching by modulating magnetic anisotropy of the ferromagnetic layer. With a relatively high VCMA coefficient measured experimentally, a voltage of 1.68 V can assist the whole magnetization switching within 2.8 ns. This analysis and improving approach will be of significance for future low-power, high-speed logic applications.

  15. Event-chain algorithm for the Heisenberg model: Evidence for z≃1 dynamic scaling.

    Science.gov (United States)

    Nishikawa, Yoshihiko; Michel, Manon; Krauth, Werner; Hukushima, Koji

    2015-12-01

    We apply the event-chain Monte Carlo algorithm to the three-dimensional ferromagnetic Heisenberg model. The algorithm is rejection-free and also realizes an irreversible Markov chain that satisfies global balance. The autocorrelation functions of the magnetic susceptibility and the energy indicate a dynamical critical exponent z≈1 at the critical temperature, while that of the magnetization does not measure the performance of the algorithm. We show that the event-chain Monte Carlo algorithm substantially reduces the dynamical critical exponent from the conventional value of z≃2.

  16. Unbalanced Voltage Compensation in Low Voltage Residential AC Grids

    DEFF Research Database (Denmark)

    Trintis, Ionut; Douglass, Philip; Munk-Nielsen, Stig

    2016-01-01

    This paper describes the design and test of a control algorithm for active front-end rectifiers that draw power from a residential AC grid to feed heat pump loads. The control algorithm is able to control the phase to neutral or phase to phase RMS voltages at the point of common coupling...

  17. Design and Large-Scale Evaluation of Educational Games for Teaching Sorting Algorithms

    Science.gov (United States)

    Battistella, Paulo Eduardo; von Wangenheim, Christiane Gresse; von Wangenheim, Aldo; Martina, Jean Everson

    2017-01-01

    The teaching of sorting algorithms is an essential topic in undergraduate computing courses. Typically the courses are taught through traditional lectures and exercises involving the implementation of the algorithms. As an alternative, this article presents the design and evaluation of three educational games for teaching Quicksort and Heapsort.…
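
    For readers unfamiliar with the two algorithms the games target, here are compact reference implementations (not the games' own code): a recursive quicksort and a heapsort built on Python's heapq module.

```python
# Reference implementations of the two algorithms taught by the games.
import heapq

def quicksort(items):
    if len(items) <= 1:
        return items
    pivot = items[len(items) // 2]
    left = [x for x in items if x < pivot]
    mid = [x for x in items if x == pivot]
    right = [x for x in items if x > pivot]
    return quicksort(left) + mid + quicksort(right)

def heapsort(items):
    heap = list(items)
    heapq.heapify(heap)                        # build a min-heap in O(n)
    return [heapq.heappop(heap) for _ in range(len(heap))]

data = [5, 1, 9, 3, 9, 0, 7]
assert quicksort(data) == heapsort(data) == sorted(data)
```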

  18. A landscape scale valley confinement algorithm: Delineating unconfined valley bottoms for geomorphic, aquatic, and riparian applications

    Science.gov (United States)

    David E. Nagel; John M. Buffington; Sharon L. Parkes; Seth Wenger; Jaime R. Goode

    2014-01-01

    Valley confinement is an important landscape characteristic linked to aquatic habitat, riparian diversity, and geomorphic processes. This report describes a GIS program called the Valley Confinement Algorithm (VCA), which identifies unconfined valleys in montane landscapes. The algorithm uses nationally available digital elevation models (DEMs) at 10-30 m resolution to...

  19. 18/20 T high magnetic field scanning tunneling microscope with fully low voltage operability, high current resolution, and large scale searching ability.

    Science.gov (United States)

    Li, Quanfeng; Wang, Qi; Hou, Yubin; Lu, Qingyou

    2012-04-01

    We present a home-built 18/20 T high magnetic field scanning tunneling microscope (STM) featuring fully low voltage (lower than ±15 V) operability in low temperatures, large scale searching ability, and 20 fA high current resolution (measured by using a 100 GOhm dummy resistor to replace the tip-sample junction) with a bandwidth of 3.03 kHz. To accomplish low voltage operation which is important in achieving high precision, low noise, and low interference with the strong magnetic field, the coarse approach is implemented with an inertial slider driven by the lateral bending of a piezoelectric scanner tube (PST) whose inner electrode is axially split into two for enhanced bending per volt. The PST can also drive the same sliding piece to inertial slide in the other bending direction (along the sample surface) of the PST, which realizes the large area searching ability. The STM head is housed in a three segment tubular chamber, which is detachable near the STM head for the convenience of sample and tip changes. Atomic resolution images of a graphite sample taken under 17.6 T and 18.0001 T are presented to show its performance. © 2012 American Institute of Physics

  20. Topology-oblivious optimization of MPI broadcast algorithms on extreme-scale platforms

    KAUST Repository

    Hasanov, Khalid; Quintin, Jean-Noël; Lastovetsky, Alexey

    2015-01-01

    operations for particular architectures by taking into account either their topology or platform parameters. In this work we propose a simple but general approach to optimization of the legacy MPI broadcast algorithms, which are widely used in MPICH and Open

  1. High performance graphics processor based computed tomography reconstruction algorithms for nuclear and other large scale applications.

    Energy Technology Data Exchange (ETDEWEB)

    Jimenez, Edward S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Orr, Laurel J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Thompson, Kyle R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2013-09-01

    The goal of this work is to develop a fast computed tomography (CT) reconstruction algorithm based on graphics processing units (GPU) that achieves significant improvement over traditional central processing unit (CPU) based implementations. The main challenge in developing a CT algorithm that is capable of handling very large datasets is parallelizing the algorithm in such a way that data transfer does not hinder performance of the reconstruction algorithm. General Purpose Graphics Processing (GPGPU) is a new technology that the Science and Technology (S&T) community is starting to adopt in many fields where CPU-based computing is the norm. GPGPU programming requires a new approach to algorithm development that utilizes massively multi-threaded environments. Multi-threaded algorithms in general are difficult to optimize since performance bottlenecks occur that are non-existent in single-threaded algorithms such as memory latencies. If an efficient GPU-based CT reconstruction algorithm can be developed; computational times could be improved by a factor of 20. Additionally, cost benefits will be realized as commodity graphics hardware could potentially replace expensive supercomputers and high-end workstations. This project will take advantage of the CUDA programming environment and attempt to parallelize the task in such a way that multiple slices of the reconstruction volume are computed simultaneously. This work will also take advantage of the GPU memory by utilizing asynchronous memory transfers, GPU texture memory, and (when possible) pinned host memory so that the memory transfer bottleneck inherent to GPGPU is amortized. Additionally, this work will take advantage of GPU-specific hardware (i.e. fast texture memory, pixel-pipelines, hardware interpolators, and varying memory hierarchy) that will allow for additional performance improvements.

  2. Large-Scale Parallel Viscous Flow Computations using an Unstructured Multigrid Algorithm

    Science.gov (United States)

    Mavriplis, Dimitri J.

    1999-01-01

    The development and testing of a parallel unstructured agglomeration multigrid algorithm for steady-state aerodynamic flows is discussed. The agglomeration multigrid strategy uses a graph algorithm to construct the coarse multigrid levels from the given fine grid, similar to an algebraic multigrid approach, but operates directly on the non-linear system using the FAS (Full Approximation Scheme) approach. The scalability and convergence rate of the multigrid algorithm are examined on the SGI Origin 2000 and the Cray T3E. An argument is given which indicates that the asymptotic scalability of the multigrid algorithm should be similar to that of its underlying single grid smoothing scheme. For medium size problems involving several million grid points, near perfect scalability is obtained for the single grid algorithm, while only a slight drop-off in parallel efficiency is observed for the multigrid V- and W-cycles, using up to 128 processors on the SGI Origin 2000, and up to 512 processors on the Cray T3E. For a large problem using 25 million grid points, good scalability is observed for the multigrid algorithm using up to 1450 processors on a Cray T3E, even when the coarsest grid level contains fewer points than the total number of processors.

  3. Power system reconfiguration in a radial distribution network for reducing losses and to improve voltage profile using modified plant growth simulation algorithm with Distributed Generation (DG)

    Directory of Open Access Journals (Sweden)

    R. Rajaram

    2015-11-01

    Full Text Available Network reconfiguration, a constrained nonlinear optimization problem, has been solved for loss minimization, load balancing, etc. over the last two decades using various heuristic and evolutionary search algorithms such as binary particle swarm optimization and neuro-fuzzy techniques. The contribution of this paper lies in considering distributed generation, i.e. smaller power sources such as solar photovoltaic cells or wind turbines connected at the customer rooftop. This new type of connection in the radial network has turned the formerly unidirectional current flow bidirectional, thereby increasing efficiency but sometimes reducing the stability of the system. A modified plant growth simulation algorithm is applied here successfully to minimize real power loss; it requires neither barrier factors nor crossover rates because the objectives and constraints are handled separately. The main advantage of this algorithm is its continuously guided search with a changing objective function: since the power from distributed generation varies continuously, the method can be applied to real-time applications with the required modifications. The algorithm is tested on a standard 33-bus radial distribution system for loss minimization, and the test results show that it is efficient and suitable for real-time applications.

  4. Algorithm for removing the noise from γ energy spectrum by analyzing the evolution of the wavelet transform maxima across scales

    International Nuclear Information System (INIS)

    Li Tianduo; Xiao Gang; Di Yuming; Han Feng; Qiu Xiaoling

    1999-01-01

    The γ energy spectrum is expanded in an allied energy-frequency space. Because the wavelet-transform modulus maxima of the energy spectrum and of the noise evolve differently across scales, an algorithm is presented that removes noise from the γ energy spectrum by analyzing this evolution of the wavelet-transform maxima across scales. The results show that, in contrast to methods working purely in energy space or in frequency space, this method indicates the peaks of the energy spectrum accurately and reconstructs the energy spectrum with a good approximation
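    For orientation, a minimal sketch of wavelet-domain denoising of a 1-D spectrum is given below. It uses simple universal soft thresholding of the detail coefficients (via the PyWavelets package) rather than the modulus-maxima tracking across scales that the record describes; the wavelet choice and decomposition level are assumptions.

      import numpy as np
      import pywt

      def denoise_spectrum(counts, wavelet="sym6", level=4):
          coeffs = pywt.wavedec(counts, wavelet, level=level)
          sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # noise level estimate
          thresh = sigma * np.sqrt(2.0 * np.log(len(counts)))       # universal threshold
          coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
          return pywt.waverec(coeffs, wavelet)[: len(counts)]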

  5. Novel probabilistic and distributed algorithms for guidance, control, and nonlinear estimation of large-scale multi-agent systems

    Science.gov (United States)

    Bandyopadhyay, Saptarshi

    Multi-agent systems are widely used for constructing a desired formation shape, exploring an area, surveillance, coverage, and other cooperative tasks. This dissertation introduces novel algorithms in the three main areas of shape formation, distributed estimation, and attitude control of large-scale multi-agent systems. In the first part of this dissertation, we address the problem of shape formation for thousands to millions of agents. Here, we present two novel algorithms for guiding a large-scale swarm of robotic systems into a desired formation shape in a distributed and scalable manner. These probabilistic swarm guidance algorithms adopt an Eulerian framework, where the physical space is partitioned into bins and the swarm's density distribution over each bin is controlled using tunable Markov chains. In the first algorithm - Probabilistic Swarm Guidance using Inhomogeneous Markov Chains (PSG-IMC) - each agent determines its bin transition probabilities using a time-inhomogeneous Markov chain that is constructed in real-time using feedback from the current swarm distribution. This PSG-IMC algorithm minimizes the expected cost of the transitions required to achieve and maintain the desired formation shape, even when agents are added to or removed from the swarm. The algorithm scales well with a large number of agents and complex formation shapes, and can also be adapted for area exploration applications. In the second algorithm - Probabilistic Swarm Guidance using Optimal Transport (PSG-OT) - each agent determines its bin transition probabilities by solving an optimal transport problem, which is recast as a linear program. In the presence of perfect feedback of the current swarm distribution, this algorithm minimizes the given cost function, guarantees faster convergence, reduces the number of transitions for achieving the desired formation, and is robust to disturbances or damages to the formation. We demonstrate the effectiveness of these two proposed swarm
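    The bin-transition idea behind PSG-IMC can be illustrated with a small sketch: a Metropolis-style Markov matrix whose stationary distribution equals the desired formation density, applied repeatedly to the swarm's bin-occupancy vector. The fully connected bin graph, fixed (time-homogeneous) chain and four-bin target below are simplifications of the feedback-driven, inhomogeneous chains of the dissertation.

      import numpy as np

      def metropolis_chain(target):
          """Row-stochastic matrix whose stationary distribution is `target`."""
          n = len(target)
          P = np.zeros((n, n))
          for i in range(n):
              for j in range(n):
                  if i != j:
                      P[i, j] = min(1.0, target[j] / target[i]) / n
              P[i, i] = 1.0 - P[i].sum()
          return P

      target = np.array([0.1, 0.2, 0.4, 0.3])      # desired density over bins (assumed)
      P = metropolis_chain(target)
      density = np.array([1.0, 0.0, 0.0, 0.0])     # all agents start in bin 0
      for _ in range(200):
          density = density @ P                    # each agent samples its next bin from P
      print(np.round(density, 3))                  # approaches the target distribution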

  6. Submillisievert Computed Tomography of the Chest Using Model-Based Iterative Algorithm: Optimization of Tube Voltage With Regard to Patient Size.

    Science.gov (United States)

    Deák, Zsuzsanna; Maertz, Friedrich; Meurer, Felix; Notohamiprodjo, Susan; Mueck, Fabian; Geyer, Lucas L; Reiser, Maximilian F; Wirth, Stefan

    The aim of this study was to define optimal tube potential for soft tissue and vessel visualization in dose-reduced chest CT protocols using model-based iterative algorithm in average and overweight patients. Thirty-six patients receiving chest CT according to 3 protocols (120 kVp/noise index [NI], 60; 100 kVp/NI, 65; 80 kVp/NI, 70) were included in this prospective study, approved by the ethics committee. Patients' physical parameters and dose descriptors were recorded. Images were reconstructed with model-based algorithm. Two radiologists evaluated image quality and lesion conspicuity; the protocols were intraindividually compared with preceding control CT reconstructed with statistical algorithm (120 kVp/NI, 20). Mean and standard deviation of attenuation of the muscle and fat tissues and signal-to-noise ratio of the aorta were measured. Diagnostic images (lesion conspicuity, 95%-100%) were acquired in average and overweight patients at 1.34, 1.02, and 1.08 mGy and at 3.41, 3.20, and 2.88 mGy at 120, 100, and 80 kVp, respectively. Data are given as CT dose index volume values. Model-based algorithm allows for submillisievert chest CT in average patients; the use of 100 kVp is recommended.

  7. RECONFIGURATION OF MEDIUM VOLTAGE NETWORKS BASED ON PRIM'S ALGORITHM

    Directory of Open Access Journals (Sweden)

    Angely Cárcamo-Gallardo

    2007-04-01

    Full Text Available This paper presents a novel algorithm to reconfigure an electric power distribution network (EPDN), minimizing its non-supplied energy (NSE). The EPDN is modeled using graph theory and the NSE is recursively formulated in terms of the reliability parameters of the EPDN. Based on this mathematical model, we transform the original optimization problem into the graph theory problem of finding the minimum spanning tree (MST) of a given graph, which models the EPDN. The distance metric employed by the searching algorithm is the NSE. In order to efficiently find the MST, Prim's algorithm is employed due to its greedy search behavior. In addition, a backtracking algorithm is used to check the MST obtained. The backtracking algorithm analyzes all the candidate topologies that were randomly discarded during the decision process. The performance of the optimization algorithm is evaluated using testing systems and two actual EPDNs.
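    A short sketch of the Prim step that the paper builds on is given below; the numeric edge weights standing in for the non-supplied-energy metric and the dictionary graph format are assumptions for illustration (the NSE recursion and the backtracking check are not reproduced).

      import heapq

      def prim_mst(graph, root):
          """graph: {node: [(weight, neighbor), ...]}; returns the tree edges."""
          visited = {root}
          frontier = [(w, root, v) for w, v in graph[root]]
          heapq.heapify(frontier)
          tree = []
          while frontier and len(visited) < len(graph):
              w, u, v = heapq.heappop(frontier)     # cheapest edge leaving the tree
              if v in visited:
                  continue
              visited.add(v)
              tree.append((u, v, w))
              for w2, x in graph[v]:
                  if x not in visited:
                      heapq.heappush(frontier, (w2, v, x))
          return tree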

  8. High-Level Topology-Oblivious Optimization of MPI Broadcast Algorithms on Extreme-Scale Platforms

    KAUST Repository

    Hasanov, Khalid; Quintin, Jean-Noël; Lastovetsky, Alexey

    2014-01-01

    There has been significant research on collective communication operations, in particular MPI broadcast, on distributed-memory platforms. Most of this work optimizes the collective operations for particular architectures by taking into account either their topology or platform parameters. In this work we propose a very simple and at the same time general approach to optimize legacy MPI broadcast algorithms, which are widely used in MPICH and OpenMPI. Theoretical analysis and experimental results on IBM BlueGene/P and a cluster of the Grid’5000 platform are presented.

  9. Efficient synthesis of large-scale thinned arrays using a density-taper initialised genetic algorithm

    CSIR Research Space (South Africa)

    Du Plessis, WP

    2011-09-01

    Full Text Available The use of the density-taper approach to initialise a genetic algorithm is shown to give excellent results in the synthesis of thinned arrays. This approach is shown to give better SLL values more consistently than using random values and difference...

  10. Adaptive Variance Scaling in Continuous Multi-Objective Estimation-of-Distribution Algorithms

    NARCIS (Netherlands)

    P.A.N. Bosman (Peter); D. Thierens (Dirk); D. Thierens (Dirk)

    2007-01-01

    Recent research into single-objective continuous Estimation-of-Distribution Algorithms (EDAs) has shown that when maximum-likelihood estimations are used for parametric distributions such as the normal distribution, the EDA can easily suffer from premature convergence. In this paper we

  11. Long-Term Scheduling of Large-Scale Cascade Hydropower Stations Using Improved Differential Evolution Algorithm

    Directory of Open Access Journals (Sweden)

    Xiaohao Wen

    2018-03-01

    Full Text Available Long-term scheduling of large cascade hydropower stations (LSLCHS) is a complex problem of high dimension, nonlinearity, coupling and complex constraints. In view of this problem, we present iLSHADE, an improved differential evolution algorithm based on LSHADE, a state-of-the-art evolutionary algorithm. iLSHADE uses the new "current to pbest/2-rand" mutation strategy to obtain a wider search range and accelerate convergence, together with the preventing-individual-repeated-failure-evolution (PIRFE) strategy. An ε-constrained method for handling complicated constraints is presented to deal with the outflow, water level and output constraints of cascade reservoir operation. Numerical experiments on 10 benchmark functions show that iLSHADE has stable convergence and high efficiency. Furthermore, we demonstrate the performance of the iLSHADE algorithm by comparing it with other improved differential evolution algorithms for LSLCHS in four large hydropower stations of the Jinsha River. With the application of iLSHADE to reservoir operation, LSLCHS can obtain more power generation benefit than the alternatives in dry, normal, and wet years. The results of the numerical experiments and case studies show that iLSHADE has a distinct optimization effect and good stability, and that it is a valid and reliable tool for solving the LSLCHS problem.
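    For context, a minimal sketch of the classic DE/rand/1/bin scheme that LSHADE-type methods extend is shown below; the current-to-pbest/2-rand mutation, parameter adaptation, and ε-constraint handling described in the record are deliberately omitted, and the sphere test function is only an assumption for illustration.

      import numpy as np

      def differential_evolution(f, bounds, pop_size=30, F=0.7, CR=0.9, gens=200, seed=0):
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds).T
          pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
          fit = np.array([f(x) for x in pop])
          for _ in range(gens):
              for i in range(pop_size):
                  idxs = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
                  a, b, c = pop[idxs]
                  mutant = np.clip(a + F * (b - c), lo, hi)          # DE/rand/1 mutation
                  cross = rng.random(len(lo)) < CR                   # binomial crossover
                  cross[rng.integers(len(lo))] = True
                  trial = np.where(cross, mutant, pop[i])
                  if (ft := f(trial)) <= fit[i]:                     # greedy selection
                      pop[i], fit[i] = trial, ft
          return pop[fit.argmin()], fit.min()

      best, val = differential_evolution(lambda x: np.sum(x**2), [(-5, 5)] * 10)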

  12. A hybrid Genetic and Simulated Annealing Algorithm for Chordal Ring implementation in large-scale networks

    DEFF Research Database (Denmark)

    Riaz, M. Tahir; Gutierrez Lopez, Jose Manuel; Pedersen, Jens Myrup

    2011-01-01

    The paper presents a hybrid Genetic and Simulated Annealing algorithm for implementing Chordal Ring structure in optical backbone network. In recent years, topologies based on regular graph structures gained a lot of interest due to their good communication properties for physical topology of the...

  13. Technical Report on the 6th Time Scale Algorithm Symposium and Tutorials

    Science.gov (United States)

    2016-03-29

    Invited talks included "Optimal Stopping" and Dr. Paul-Henning Kamp on "Improved NTP Timekeeping". The Symposium also includes 25 contributions on different topics. Programme excerpt, Session VII, NTP Algorithms: 15:50-16:30, invited talk by Paul-Henning Kamp, "Improved NTP Timekeeping"; 16:30-16:50, "An Auto-Regressive Moving-Average ...

  14. An Algorithm Creating Thumbnail for Web Map Services Based on Information Entropy and Trans-scale Similarity

    Directory of Open Access Journals (Sweden)

    CHENG Xiaoqiang

    2017-11-01

    Full Text Available Thumbnails can greatly increase the efficiency of browsing pictures, videos and other image resources and markedly improve the user experience. A map service is a kind of graphic resource coupling spatial information and representation scale, and its crafting, retrieval and management do not function well without the support of thumbnails. Well-designed thumbnails give users a vivid first impression and help them explore efficiently; by contrast, coarse thumbnails cause negative reactions and discourage users from exploring the map service. Inspired by video summarization, the notions of key position and key scale of a web map service are proposed, together with corresponding quantitative measures and an automatic algorithm that implements them. With the help of this algorithm, the poor visual quality, lack of map information and low degree of automation of current thumbnails are addressed. Information entropy is used to determine areas richer in content, and trans-scale similarity is calculated to judge at which scale the appearance of the map service changes drastically; finally, a series of static pictures is extracted that represents the content of the map service. Experimental results show that this method produces medium-sized, content-rich and well-representative thumbnails which effectively reflect the content and appearance of the map service.
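    A small sketch of the entropy measure used to locate content-rich "key positions" follows; treating tiles as 8-bit grayscale arrays and picking the single highest-entropy tile are simplifying assumptions (the trans-scale similarity test is not shown).

      import numpy as np

      def tile_entropy(tile, bins=256):
          """Shannon entropy of a tile's gray-level histogram, in bits."""
          hist, _ = np.histogram(tile, bins=bins, range=(0, 256))
          p = hist / hist.sum()
          p = p[p > 0]
          return -np.sum(p * np.log2(p))

      def key_position(tiles):
          """Index of the most information-rich tile among candidate positions."""
          return int(np.argmax([tile_entropy(t) for t in tiles]))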

  15. A Novel Multiple-Time Scale Integrator for the Hybrid Monte Carlo Algorithm

    International Nuclear Information System (INIS)

    Kamleh, Waseem

    2011-01-01

    Hybrid Monte Carlo simulations that implement the fermion action using multiple terms are commonly used. By the nature of their formulation they involve multiple integration time scales in the evolution of the system through simulation time. These different scales are usually dealt with by the Sexton-Weingarten nested leapfrog integrator. In this scheme the choice of time scales is somewhat restricted as each time step must be an exact multiple of the next smallest scale in the sequence. A novel generalisation of the nested leapfrog integrator is introduced which allows for far greater flexibility in the choice of time scales, as each scale now must only be an exact multiple of the smallest step size.
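    A minimal sketch of the two-level nested (Sexton-Weingarten-style) leapfrog step that this record generalises: the slow, expensive force is applied on the outer scale while the fast force is integrated with m sub-steps. force_slow, force_fast and the unit mass are placeholders, not the actual HMC gauge/fermion forces.

      def nested_leapfrog_step(q, p, force_slow, force_fast, eps, m):
          """One outer step of size eps; the fast force gets m inner sub-steps."""
          p = p + 0.5 * eps * force_slow(q)
          h = eps / m
          for _ in range(m):
              p = p + 0.5 * h * force_fast(q)
              q = q + h * p                      # unit mass assumed
              p = p + 0.5 * h * force_fast(q)
          p = p + 0.5 * eps * force_slow(q)
          return q, p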

  16. Theory and algorithms for solving large-scale numerical problems. Application to the management of electricity production

    International Nuclear Information System (INIS)

    Chiche, A.

    2012-01-01

    This manuscript deals with large-scale optimization problems, and more specifically with solving the electricity unit commitment problem arising at EDF. First, we focused on the augmented Lagrangian algorithm. The behavior of that algorithm on an infeasible convex quadratic optimization problem is analyzed. It is shown that the algorithm finds a point that satisfies the shifted constraints with the smallest possible shift in the sense of the Euclidean norm and that it minimizes the objective on the corresponding shifted constrained set. The convergence to such a point is realized at a global linear rate, which depends explicitly on the augmentation parameter. This suggests a rule for determining the augmentation parameter to control the speed of convergence of the shifted constraint norm to zero. This rule has the advantage of generating bounded augmentation parameters even when the problem is infeasible. As a by-product, the algorithm computes the smallest translation in the Euclidean norm that makes the constraints feasible. Furthermore, this work provides solution methods for stochastic optimization industrial problems decomposed on a scenario tree, based on the progressive hedging algorithm introduced by [Rockafellar and Wets, 1991]. We also focus on the convergence of that algorithm. On the one hand, we offer a counter-example showing that the algorithm could diverge if its augmentation parameter is iteratively updated. On the other hand, we show how to recover the multipliers associated with the non-dualized constraints defined on the scenario tree from those associated with the corresponding constraints of the scenario subproblems. Their convergence is also analyzed for convex problems. The practical interest of these solution techniques is corroborated by numerical experiments performed on the electric production management problem. We apply the progressive hedging algorithm to a realistic industrial problem. More precisely, we solve the French medium
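    The augmented-Lagrangian iteration analysed in the first part of the thesis follows a simple pattern: minimise the augmented Lagrangian in x, then update the multipliers with the constraint residual. The sketch below shows this loop on a toy equality-constrained quadratic problem using SciPy for the inner solves; the fixed augmentation parameter and the toy problem are assumptions, and the shifted-constraint analysis and the EDF unit-commitment setting are not reproduced.

      import numpy as np
      from scipy.optimize import minimize

      def augmented_lagrangian(f, c, x0, mu=10.0, iters=20):
          lam = np.zeros(len(c(x0)))
          x = np.asarray(x0, dtype=float)
          for _ in range(iters):
              L = lambda z: f(z) + lam @ c(z) + 0.5 * mu * np.sum(c(z) ** 2)
              x = minimize(L, x).x                   # inner (unconstrained) solve
              lam = lam + mu * c(x)                  # multiplier update
          return x, lam

      # toy problem: minimise ||x||^2 subject to x0 + x1 = 1 (solution near [0.5, 0.5])
      x, lam = augmented_lagrangian(lambda z: z @ z,
                                    lambda z: np.array([z[0] + z[1] - 1]),
                                    [0.0, 0.0])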

  17. Voltage Unbalance Compensation with Smart Three-phase Loads

    DEFF Research Database (Denmark)

    Douglass, Philip; Trintis, Ionut; Munk-Nielsen, Stig

    2016-01-01

    unbalance originating in the power supply network. Two variants of the algorithm are tested: first, using phase-neutral voltage as input, and second, using phase-phase voltage. The control algorithm is described and evaluated in simulations and laboratory tests. Two metrics for quantifying voltage unbalance...... are evaluated: one metric based on the maximum deviation of RMS phase-neutral voltage from the average voltage and one metric based on negative-sequence voltage. The tests show that the controller that uses phase-neutral voltage as input can in most cases eliminate the deviations of phase voltage from the average...... is caused by asymmetrical loads. These results suggest that the optimal algorithm to reduce system unbalance depends on which system parameter is most important: phase-neutral voltage unbalance, phase-phase voltage unbalance, or current unbalance....
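    The two unbalance metrics compared in this record can be written down compactly; the sketch below assumes the three phase-neutral voltages are available as complex phasors.

      import numpy as np

      def unbalance_metrics(va, vb, vc):
          a = np.exp(2j * np.pi / 3)
          v_pos = (va + a * vb + a**2 * vc) / 3          # positive-sequence component
          v_neg = (va + a**2 * vb + a * vc) / 3          # negative-sequence component
          vuf = abs(v_neg) / abs(v_pos)                  # negative-sequence unbalance factor
          mags = np.array([abs(va), abs(vb), abs(vc)])
          max_dev = np.max(np.abs(mags - mags.mean())) / mags.mean()   # deviation-based metric
          return max_dev, vuf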

  18. Multicontroller: an object programming approach to introduce advanced control algorithms for the GCS large scale project

    CERN Document Server

    Cabaret, S; Coppier, H; Rachid, A; Barillère, R; CERN. Geneva. IT Department

    2007-01-01

    The GCS (Gas Control System) project team at CERN uses a Model Driven Approach with a Framework - UNICOS (UNified Industrial COntrol System) - based on PLC (Programmable Logic Controller) and SCADA (Supervisory Control And Data Acquisition) technologies. The first UNICOS versions were able to provide a PID (Proportional Integral Derivative) controller, whereas the Gas Systems required more advanced control strategies. The MultiController is a new UNICOS object which provides the following advanced control algorithms: Smith Predictor, PFC (Predictive Function Control), RST* and GPC (Global Predictive Control). Its design is based on a monolithic entity with a global structure definition which is able to capture the desired set of parameters of any specific control algorithm supported by the object. The SCADA system - PVSS - supervises the MultiController operation. The PVSS interface provides users with a supervision faceplate; in particular it links any MultiController with recipes: the GCS experts are ab...
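    For reference, a minimal discrete PID law of the kind the earlier UNICOS versions provided (and which the MultiController extends with Smith predictor, PFC, RST and GPC strategies) is sketched below; the gains and sample time are placeholders, not values from the GCS project.

      class PID:
          def __init__(self, kp, ki, kd, dt):
              self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
              self.integral, self.prev_err = 0.0, 0.0

          def step(self, setpoint, measurement):
              err = setpoint - measurement
              self.integral += err * self.dt                 # integral term accumulation
              deriv = (err - self.prev_err) / self.dt        # backward-difference derivative
              self.prev_err = err
              return self.kp * err + self.ki * self.integral + self.kd * deriv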

  19. A robust and fast generic voltage sag detection technique

    DEFF Research Database (Denmark)

    L. Dantas, Joacillo; Lima, Francisco Kleber A.; Branco, Carlos Gustavo C.

    2015-01-01

    In this paper, a fast and robust voltage sag detection algorithm, named VPS2D, is introduced. Using the DSOGI, the algorithm creates a virtual positive-sequence voltage and monitors the fundamental voltage component of each phase. After calculating the aggregate value in the αβ reference frame...

  20. Improved Wallis Dodging Algorithm for Large-Scale Super-Resolution Reconstruction Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Chong Fan

    2017-03-01

    Full Text Available A sub-block algorithm is usually applied in the super-resolution (SR) reconstruction of images because of limitations in computer memory. However, the sub-block SR images can hardly achieve seamless image mosaicking because of the uneven distribution of brightness and contrast among these sub-blocks. An improved weighted Wallis dodging algorithm is proposed, aimed at the characteristic that SR-reconstructed images are gray images of the same size with overlapping regions. This algorithm can achieve consistency of image brightness and contrast. Meanwhile, a weighted adjustment sequence is presented to avoid the spatial propagation and accumulation of errors and the loss of image information caused by excessive computation. A seam-line elimination method distributes the partial dislocation at the seam line over the entire overlapping region with a smooth transition effect. Subsequently, the improved method is employed to remove the uneven illumination of 900 SR-reconstructed images of ZY-3. Then, the overlapping image mosaic method is adopted to accomplish a seamless image mosaic based on the optimal seam line.
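    The core Wallis-style adjustment can be sketched as mapping each sub-block's local mean and standard deviation towards target values; the brightness/contrast weights below and the single-block form are assumptions, and the weighted adjustment sequence and seam-line handling of the paper are omitted.

      import numpy as np

      def wallis_adjust(block, target_mean, target_std, b=1.0, c=1.0):
          """Shift a block's gray-level statistics towards target mean/std."""
          m, s = block.mean(), block.std()
          gain = c * target_std / (c * s + (1 - c) * target_std + 1e-12)
          bias = b * target_mean + (1 - b) * m
          return (block - m) * gain + bias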

  1. A highly scalable particle tracking algorithm using partitioned global address space (PGAS) programming for extreme-scale turbulence simulations

    Science.gov (United States)

    Buaria, D.; Yeung, P. K.

    2017-12-01

    A new parallel algorithm utilizing a partitioned global address space (PGAS) programming model to achieve high scalability is reported for particle tracking in direct numerical simulations of turbulent fluid flow. The work is motivated by the desire to obtain Lagrangian information necessary for the study of turbulent dispersion at the largest problem sizes feasible on current and next-generation multi-petaflop supercomputers. A large population of fluid particles is distributed among parallel processes dynamically, based on instantaneous particle positions such that all of the interpolation information needed for each particle is available either locally on its host process or neighboring processes holding adjacent sub-domains of the velocity field. With cubic splines as the preferred interpolation method, the new algorithm is designed to minimize the need for communication, by transferring between adjacent processes only those spline coefficients determined to be necessary for specific particles. This transfer is implemented very efficiently as a one-sided communication, using Co-Array Fortran (CAF) features which facilitate small data movements between different local partitions of a large global array. The cost of monitoring transfer of particle properties between adjacent processes for particles migrating across sub-domain boundaries is found to be small. Detailed benchmarks are obtained on the Cray petascale supercomputer Blue Waters at the University of Illinois, Urbana-Champaign. For operations on the particles in an 8192³ simulation (0.55 trillion grid points) on 262,144 Cray XE6 cores, the new algorithm is found to be orders of magnitude faster relative to a prior algorithm in which each particle is tracked by the same parallel process at all times. This large speedup reduces the additional cost of tracking of order 300 million particles to just over 50% of the cost of computing the Eulerian velocity field at this scale. Improving support of PGAS models on

  2. Support Vector Machines Trained with Evolutionary Algorithms Employing Kernel Adatron for Large Scale Classification of Protein Structures.

    Science.gov (United States)

    Arana-Daniel, Nancy; Gallegos, Alberto A; López-Franco, Carlos; Alanís, Alma Y; Morales, Jacob; López-Franco, Adriana

    2016-01-01

    With the increasing power of computers, the amount of data that can be processed in small periods of time has grown exponentially, as has the importance of classifying large-scale data efficiently. Support vector machines have shown good results classifying large amounts of high-dimensional data, such as data generated by protein structure prediction, spam recognition, medical diagnosis, optical character recognition and text classification, etc. Most state of the art approaches for large-scale learning use traditional optimization methods, such as quadratic programming or gradient descent, which makes the use of evolutionary algorithms for training support vector machines an area to be explored. The present paper proposes an approach that is simple to implement based on evolutionary algorithms and Kernel-Adatron for solving large-scale classification problems, focusing on protein structure prediction. The functional properties of proteins depend upon their three-dimensional structures. Knowing the structures of proteins is crucial for biology and can lead to improvements in areas such as medicine, agriculture and biofuels.

  3. Large-Scale Recurrent Neural Network Based Modelling of Gene Regulatory Network Using Cuckoo Search-Flower Pollination Algorithm.

    Science.gov (United States)

    Mandal, Sudip; Khan, Abhinandan; Saha, Goutam; Pal, Rajat K

    2016-01-01

    The accurate prediction of genetic networks using computational tools is one of the greatest challenges in the postgenomic era. Recurrent Neural Network is one of the most popular but simple approaches to model the network dynamics from time-series microarray data. To date, it has been successfully applied to computationally derive small-scale artificial and real-world genetic networks with high accuracy. However, it has underperformed for large-scale genetic networks. Here, a new methodology has been proposed where a hybrid Cuckoo Search-Flower Pollination Algorithm has been implemented with Recurrent Neural Network. Cuckoo Search is used to search for the best combination of regulators. Moreover, Flower Pollination Algorithm is applied to optimize the model parameters of the Recurrent Neural Network formalism. Initially, the proposed method is tested on a benchmark large-scale artificial network for both noiseless and noisy data. The results obtained show that the proposed methodology is capable of increasing the inference of correct regulations and decreasing false regulations to a high degree. Secondly, the proposed methodology has been validated against the real-world dataset of the DNA SOS repair network of Escherichia coli. However, the proposed method sacrifices computational time complexity in both cases due to the hybrid optimization process.

  4. The Modified HZ Conjugate Gradient Algorithm for Large-Scale Nonsmooth Optimization.

    Science.gov (United States)

    Yuan, Gonglin; Sheng, Zhou; Liu, Wenjie

    2016-01-01

    In this paper, the Hager and Zhang (HZ) conjugate gradient (CG) method and the modified HZ (MHZ) CG method are presented for large-scale nonsmooth convex minimization. Under some mild conditions, convergence results for the proposed methods are established. Numerical results show that the presented methods can be more efficient for large-scale nonsmooth problems; several problems are tested (with dimensions up to 100,000 variables).
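    For reference, the search direction of the original (smooth-case) Hager-Zhang method, the starting point for the modified nonsmooth variants in this record, is usually written as follows (notation assumed: g_k the gradient, d_k the search direction, y_k = g_{k+1} - g_k):

      \[
        d_{k+1} = -g_{k+1} + \beta_k^{HZ} d_k, \qquad
        \beta_k^{HZ} = \frac{1}{d_k^{\top} y_k}
        \left( y_k - 2\, d_k \, \frac{\lVert y_k \rVert^2}{d_k^{\top} y_k} \right)^{\!\top} g_{k+1}.
      \]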

  5. The Modified HZ Conjugate Gradient Algorithm for Large-Scale Nonsmooth Optimization.

    Directory of Open Access Journals (Sweden)

    Gonglin Yuan

    Full Text Available In this paper, the Hager and Zhang (HZ) conjugate gradient (CG) method and the modified HZ (MHZ) CG method are presented for large-scale nonsmooth convex minimization. Under some mild conditions, convergence results for the proposed methods are established. Numerical results show that the presented methods can be more efficient for large-scale nonsmooth problems; several problems are tested (with dimensions up to 100,000 variables).

  6. High-Level Topology-Oblivious Optimization of MPI Broadcast Algorithms on Extreme-Scale Platforms

    KAUST Repository

    Hasanov, Khalid

    2014-01-01

    There has been significant research on collective communication operations, in particular MPI broadcast, on distributed-memory platforms. Most of this work optimizes the collective operations for particular architectures by taking into account either their topology or platform parameters. In this work we propose a very simple and at the same time general approach to optimize legacy MPI broadcast algorithms, which are widely used in MPICH and OpenMPI. Theoretical analysis and experimental results on IBM BlueGene/P and a cluster of the Grid’5000 platform are presented.

  7. An automatic scaling method for obtaining the trace and parameters from oblique ionogram based on hybrid genetic algorithm

    Science.gov (United States)

    Song, Huan; Hu, Yaogai; Jiang, Chunhua; Zhou, Chen; Zhao, Zhengyu; Zou, Xianjian

    2016-12-01

    Scaling the oblique ionogram plays an important role in obtaining the ionospheric structure at the midpoint of an oblique sounding path. This paper proposes an automatic scaling method to extract the trace and parameters of the oblique ionogram based on a hybrid genetic algorithm (HGA). The 10 extracted parameters come from the F2 layer and the Es layer, such as the maximum observation frequency, critical frequency, and virtual height. The method adopts a quasi-parabolic (QP) model to describe the F2 layer's electron density profile, which is used to synthesize the trace. It utilizes the secant theorem, Martyn's equivalent path theorem, image processing technology, and the echoes' characteristics to determine the best-fit values of seven parameters and the initial values of the three remaining QP-model parameters, which define the search spaces that form the input data of the HGA. The HGA then searches these spaces for the best-fit values of the three parameters, based on the fitness between the synthesized trace and the real trace. In order to verify the performance of the method, 240 oblique ionograms were scaled and their results compared with manual scaling results and with the inversion results of the corresponding vertical ionograms. The comparison shows that the scaling results are accurate or at least adequate 60-90% of the time.

  8. Reaction factoring and bipartite update graphs accelerate the Gillespie Algorithm for large-scale biochemical systems.

    Directory of Open Access Journals (Sweden)

    Sagar Indurkhya

    Full Text Available ODE simulations of chemical systems perform poorly when some of the species have extremely low concentrations. Stochastic simulation methods, which can handle this case, have been impractical for large systems due to computational complexity. We observe, however, that when modeling complex biological systems: (1) a small number of reactions tend to occur a disproportionately large percentage of the time, and (2) a small number of species tend to participate in a disproportionately large percentage of reactions. We exploit these properties in LOLCAT Method, a new implementation of the Gillespie Algorithm. First, factoring reaction propensities allows many propensities dependent on a single species to be updated in a single operation. Second, representing dependencies between reactions with a bipartite graph of reactions and species requires only linear storage in the number of reactions, rather than the quadratic storage required for a graph that includes only reactions. Together, these improvements allow our implementation of LOLCAT Method to execute orders of magnitude faster than currently existing Gillespie Algorithm variants when simulating several yeast MAPK cascade models.
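    For contrast with the optimised LOLCAT implementation, the plain Gillespie direct method it accelerates is sketched below; recomputing every propensity at every step is exactly the cost that reaction factoring and the bipartite dependency graph reduce. The two-reaction toy system is an assumption for illustration.

      import numpy as np

      def gillespie(x, stoich, rate_fns, t_end, seed=0):
          rng = np.random.default_rng(seed)
          t, traj = 0.0, [(0.0, x.copy())]
          while t < t_end:
              a = np.array([r(x) for r in rate_fns])     # propensities (recomputed every step)
              a0 = a.sum()
              if a0 == 0:
                  break
              t += rng.exponential(1.0 / a0)             # time to next reaction
              j = rng.choice(len(a), p=a / a0)           # which reaction fires
              x = x + stoich[j]
              traj.append((t, x.copy()))
          return traj

      # toy system: A + B -> AB, AB -> A + B
      x0 = np.array([100, 100, 0])
      stoich = np.array([[-1, -1, 1], [1, 1, -1]])
      rates = [lambda x: 0.01 * x[0] * x[1], lambda x: 0.1 * x[2]]
      print(len(gillespie(x0, stoich, rates, t_end=1.0)))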

  9. Reaction Factoring and Bipartite Update Graphs Accelerate the Gillespie Algorithm for Large-Scale Biochemical Systems

    Science.gov (United States)

    Indurkhya, Sagar; Beal, Jacob

    2010-01-01

    ODE simulations of chemical systems perform poorly when some of the species have extremely low concentrations. Stochastic simulation methods, which can handle this case, have been impractical for large systems due to computational complexity. We observe, however, that when modeling complex biological systems: (1) a small number of reactions tend to occur a disproportionately large percentage of the time, and (2) a small number of species tend to participate in a disproportionately large percentage of reactions. We exploit these properties in LOLCAT Method, a new implementation of the Gillespie Algorithm. First, factoring reaction propensities allows many propensities dependent on a single species to be updated in a single operation. Second, representing dependencies between reactions with a bipartite graph of reactions and species requires only linear storage in the number of reactions, rather than the quadratic storage required for a graph that includes only reactions. Together, these improvements allow our implementation of LOLCAT Method to execute orders of magnitude faster than currently existing Gillespie Algorithm variants when simulating several yeast MAPK cascade models. PMID:20066048

  10. An Online Scheduling Algorithm with Advance Reservation for Large-Scale Data Transfers

    Energy Technology Data Exchange (ETDEWEB)

    Balman, Mehmet; Kosar, Tevfik

    2010-05-20

    Scientific applications and experimental facilities generate massive data sets that need to be transferred to remote collaborating sites for sharing, processing, and long term storage. In order to support increasingly data-intensive science, next generation research networks have been deployed to provide high-speed on-demand data access between collaborating institutions. In this paper, we present a practical model for online data scheduling in which data movement operations are scheduled in advance for end-to-end high performance transfers. In our model, the data scheduler interacts with reservation managers and data transfer nodes in order to reserve available bandwidth to guarantee completion of jobs that are accepted and confirmed to satisfy the preferred time constraint given by the user. Our methodology improves current systems by allowing researchers and higher level meta-schedulers to use data placement as a service where they can plan ahead and reserve the scheduler time in advance for their data movement operations. We have implemented our algorithm and examined possible techniques for incorporation into current reservation frameworks. Performance measurements confirm that the proposed algorithm is efficient and scalable.

  11. Reaction factoring and bipartite update graphs accelerate the Gillespie Algorithm for large-scale biochemical systems.

    Science.gov (United States)

    Indurkhya, Sagar; Beal, Jacob

    2010-01-06

    ODE simulations of chemical systems perform poorly when some of the species have extremely low concentrations. Stochastic simulation methods, which can handle this case, have been impractical for large systems due to computational complexity. We observe, however, that when modeling complex biological systems: (1) a small number of reactions tend to occur a disproportionately large percentage of the time, and (2) a small number of species tend to participate in a disproportionately large percentage of reactions. We exploit these properties in LOLCAT Method, a new implementation of the Gillespie Algorithm. First, factoring reaction propensities allows many propensities dependent on a single species to be updated in a single operation. Second, representing dependencies between reactions with a bipartite graph of reactions and species requires only linear storage in the number of reactions, rather than the quadratic storage required for a graph that includes only reactions. Together, these improvements allow our implementation of LOLCAT Method to execute orders of magnitude faster than currently existing Gillespie Algorithm variants when simulating several yeast MAPK cascade models.

  12. Multi-objective optimization of MOSFETs channel widths and supply voltage in the proposed dual edge-triggered static D flip-flop with minimum average power and delay by using fuzzy non-dominated sorting genetic algorithm-II.

    Science.gov (United States)

    Keivanian, Farshid; Mehrshad, Nasser; Bijari, Abolfazl

    2016-01-01

    A D flip-flop is a digital circuit that can be used as a timing element in many sophisticated circuits; achieving optimum performance with the lowest power consumption and an acceptable delay time is therefore a critical issue in electronic circuits. The layout of the newly proposed dual-edge-triggered static D flip-flop circuit is defined as a multi-objective optimization problem. For this, an optimum fuzzy inference system with fuzzy rules is proposed to enhance the performance and convergence of the non-dominated sorting Genetic Algorithm-II by adaptive control of the exploration and exploitation parameters. By using the proposed Fuzzy NSGA-II algorithm, more optimal values for the MOSFET channel widths and supply voltage are discovered in the search space than with ordinary NSGA variants. What is more, the design parameters, namely the NMOS and PMOS channel widths and the power supply voltage, and the performance parameters, namely the average power consumption and propagation delay time, are linked; the required mathematical background is presented in this study. The optimum values for the design parameters, the MOSFET channel widths and the power supply, are discovered. Based on them, the power-delay product (PDP) is 6.32 pJ at a 125 MHz clock frequency, L = 0.18 µm, and T = 27 °C.
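    The multi-objective aspect reduces to non-dominated (Pareto) filtering over the two objectives named in the record, average power and delay; a minimal sketch follows, with made-up candidate values (the fuzzy-controlled NSGA-II machinery itself is not reproduced).

      import numpy as np

      def pareto_front(points):
          """points: (n, 2) array of (power, delay); both objectives minimised."""
          front = []
          for i, p in enumerate(points):
              dominated = np.any(np.all(points <= p, axis=1) & np.any(points < p, axis=1))
              if not dominated:
                  front.append(i)
          return front

      designs = np.array([[1.2, 0.8], [0.9, 1.1], [1.0, 0.7], [1.5, 1.5]])   # toy candidates
      print(pareto_front(designs))    # indices of the non-dominated designs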

  13. Influence of Extrinsic Information Scaling Coefficient on Double-Iterative Decoding Algorithm for Space-Time Turbo Codes with Large Number of Antennas

    Directory of Open Access Journals (Sweden)

    TRIFINA, L.

    2011-02-01

    Full Text Available This paper analyzes the influence of the extrinsic information scaling coefficient on a double-iterative decoding algorithm for space-time turbo codes with a large number of antennas. The max-log-APP algorithm is used, scaling both the extrinsic information in the turbo decoder and that used at the input of the interference-cancelling block. Scaling coefficients of 0.7 or 0.75 lead to a 0.5 dB coding gain compared to the no-scaling case, for one or more iterations to cancel the spatial interferences.

  14. Algorithm 873: LSTRS: MATLAB Software for Large-Scale Trust-Region Subproblems and Regularization

    DEFF Research Database (Denmark)

    Rojas Larrazabal, Marielba de la Caridad; Santos, Sandra A.; Sorensen, Danny C.

    2008-01-01

    A MATLAB 6.0 implementation of the LSTRS method is presented. LSTRS was described in Rojas, M., Santos, S.A., and Sorensen, D.C., A new matrix-free method for the large-scale trust-region subproblem, SIAM J. Optim., 11(3):611-646, 2000. LSTRS is designed for large-scale quadratic problems with one...... at each step. LSTRS relies on matrix-vector products only and has low and fixed storage requirements, features that make it suitable for large-scale computations. In the MATLAB implementation, the Hessian matrix of the quadratic objective function can be specified either explicitly, or in the form...... of a matrix-vector multiplication routine. Therefore, the implementation preserves the matrix-free nature of the method. A description of the LSTRS method and of the MATLAB software, version 1.2, is presented. Comparisons with other techniques and applications of the method are also included. A guide...

  15. The WACMOS-ET project – Part 1: Tower-scale evaluation of four remote sensing-based evapotranspiration algorithms

    KAUST Repository

    Michel, D.

    2015-10-20

    The WACMOS-ET project has compiled a forcing data set covering the period 2005–2007 that aims to maximize the exploitation of European Earth Observations data sets for evapotranspiration (ET) estimation. The data set was used to run 4 established ET algorithms: the Priestley–Taylor Jet Propulsion Laboratory model (PT-JPL), the Penman–Monteith algorithm from the MODIS evaporation product (PM-MOD), the Surface Energy Balance System (SEBS) and the Global Land Evaporation Amsterdam Model (GLEAM). In addition, in-situ meteorological data from 24 FLUXNET towers was used to force the models, with results from both forcing sets compared to tower-based flux observations. Model performance was assessed across several time scales using both sub-daily and daily forcings. The PT-JPL model and GLEAM provide the best performance for both satellite- and tower-based forcing as well as for the considered temporal resolutions. Simulations using the PM-MOD were mostly underestimated, while the SEBS performance was characterized by a systematic overestimation. In general, all four algorithms produce the best results in wet and moderately wet climate regimes. In dry regimes, the correlation and the absolute agreement to the reference tower ET observations were consistently lower. While ET derived with in situ forcing data agrees best with the tower measurements (R2 = 0.67), the agreement of the satellite-based ET estimates is only marginally lower (R2 = 0.58). Results also show similar model performance at daily and sub-daily (3-hourly) resolutions. Overall, our validation experiments against in situ measurements indicate that there is no single best-performing algorithm across all biome and forcing types. An extension of the evaluation to a larger selection of 85 towers (model inputs re-sampled to a common grid to facilitate global estimates) confirmed the original findings.

  16. Combining soft decision algorithms and scale-sequential hypotheses pruning for object recognition

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, V.P.; Manolakos, E.S. [Northeastern Univ., Boston, MA (United States)

    1996-12-31

    This paper describes a system that exploits the synergy of Hierarchical Mixture Density (HMD) estimation with multiresolution-decomposition-based hypothesis pruning to efficiently perform joint segmentation and labeling of partially occluded objects in images. First we present the overall structure of the HMD estimation algorithm in the form of a recurrent neural network which generates the posterior probabilities of the various hypotheses associated with the image. Then, in order to reduce the large memory and computation requirements, we propose a hypothesis pruning scheme making use of the orthonormal discrete wavelet transform for dimensionality reduction. We provide an intuitive justification for the validity of this scheme and present experimental results and performance analysis on real and synthetic images to verify our claims.

  17. Puzzle Imaging: Using Large-Scale Dimensionality Reduction Algorithms for Localization.

    Science.gov (United States)

    Glaser, Joshua I; Zamft, Bradley M; Church, George M; Kording, Konrad P

    2015-01-01

    Current high-resolution imaging techniques require an intact sample that preserves spatial relationships. We here present a novel approach, "puzzle imaging," that allows imaging a spatially scrambled sample. This technique takes many spatially disordered samples, and then pieces them back together using local properties embedded within the sample. We show that puzzle imaging can efficiently produce high-resolution images using dimensionality reduction algorithms. We demonstrate the theoretical capabilities of puzzle imaging in three biological scenarios, showing that (1) relatively precise 3-dimensional brain imaging is possible; (2) the physical structure of a neural network can often be recovered based only on the neural connectivity matrix; and (3) a chemical map could be reproduced using bacteria with chemosensitive DNA and conjugative transfer. The ability to reconstruct scrambled images promises to enable imaging based on DNA sequencing of homogenized tissue samples.

  18. Final report for “Extreme-scale Algorithms and Solver Resilience”

    Energy Technology Data Exchange (ETDEWEB)

    Gropp, William Douglas [Univ. of Illinois, Urbana-Champaign, IL (United States)

    2017-06-30

    This is a joint project with principal investigators at Oak Ridge National Laboratory, Sandia National Laboratories, the University of California at Berkeley, and the University of Tennessee. Our part of the project involves developing performance models for highly scalable algorithms and the development of latency tolerant iterative methods. During this project, we extended our performance models for the Multigrid method for solving large systems of linear equations and conducted experiments with highly scalable variants of conjugate gradient methods that avoid blocking synchronization. In addition, we worked with the other members of the project on alternative techniques for resilience and reproducibility. We also presented an alternative approach for reproducible dot-products in parallel computations that performs almost as well as the conventional approach by separating the order of computation from the details of the decomposition of vectors across the processes.

  19. Scaling tests of a new algorithm for DFT hybrid-functional calculations on Trinity Haswell

    Energy Technology Data Exchange (ETDEWEB)

    Wright, Alan F. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Modine, Normand A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-09-01

    We show scaling results for materials of interest in Sandia Radiation-Effects and High-Energy-Density-Physics Mission Areas. Each timing is from a self-consistent calculation for bulk material. Two timings are given: (1) walltime for the construction of the CR exchange operator (Exchange-Operator) and (2) walltime for everything else (non-Exchange-Operator).

  20. Classification of Suicide Attempts through a Machine Learning Algorithm Based on Multiple Systemic Psychiatric Scales

    Directory of Open Access Journals (Sweden)

    Jihoon Oh

    2017-09-01

    Full Text Available Classification and prediction of suicide attempts in high-risk groups is important for preventing suicide. The purpose of this study was to investigate whether the information from multiple clinical scales has classification power for identifying actual suicide attempts. Patients with depression and anxiety disorders (N = 573) were included, and each participant completed 31 self-report psychiatric scales and questionnaires about their history of suicide attempts. We then trained an artificial neural network classifier with 41 variables (31 psychiatric scales and 10 sociodemographic elements) and ranked the contribution of each variable for the classification of suicide attempts. To evaluate the clinical applicability of our model, we measured classification performance with top-ranked predictors. Our model had an overall accuracy of 93.7% in 1-month, 90.8% in 1-year, and 87.4% in lifetime suicide attempts detection. The area under the receiver operating characteristic curve (AUROC) was the highest for 1-month suicide attempts detection (0.93), followed by lifetime (0.89), and 1-year detection (0.87). Among all variables, the Emotion Regulation Questionnaire had the highest contribution, and the positive and negative characteristics of the scales similarly contributed to classification performance. Performance on suicide attempts classification was largely maintained when we only used the top five ranked variables for training (AUROC; 1-month, 0.75, 1-year, 0.85, lifetime suicide attempts detection, 0.87). Our findings indicate that information from self-report clinical scales can be useful for the classification of suicide attempts. Based on the reliable performance of the top five predictors alone, this machine learning approach could help clinicians identify high-risk patients in clinical settings.

  1. Classification of Suicide Attempts through a Machine Learning Algorithm Based on Multiple Systemic Psychiatric Scales.

    Science.gov (United States)

    Oh, Jihoon; Yun, Kyongsik; Hwang, Ji-Hyun; Chae, Jeong-Ho

    2017-01-01

    Classification and prediction of suicide attempts in high-risk groups is important for preventing suicide. The purpose of this study was to investigate whether the information from multiple clinical scales has classification power for identifying actual suicide attempts. Patients with depression and anxiety disorders ( N  = 573) were included, and each participant completed 31 self-report psychiatric scales and questionnaires about their history of suicide attempts. We then trained an artificial neural network classifier with 41 variables (31 psychiatric scales and 10 sociodemographic elements) and ranked the contribution of each variable for the classification of suicide attempts. To evaluate the clinical applicability of our model, we measured classification performance with top-ranked predictors. Our model had an overall accuracy of 93.7% in 1-month, 90.8% in 1-year, and 87.4% in lifetime suicide attempts detection. The area under the receiver operating characteristic curve (AUROC) was the highest for 1-month suicide attempts detection (0.93), followed by lifetime (0.89), and 1-year detection (0.87). Among all variables, the Emotion Regulation Questionnaire had the highest contribution, and the positive and negative characteristics of the scales similarly contributed to classification performance. Performance on suicide attempts classification was largely maintained when we only used the top five ranked variables for training (AUROC; 1-month, 0.75, 1-year, 0.85, lifetime suicide attempts detection, 0.87). Our findings indicate that information from self-report clinical scales can be useful for the classification of suicide attempts. Based on the reliable performance of the top five predictors alone, this machine learning approach could help clinicians identify high-risk patients in clinical settings.

  2. Hybrid metaheuristic optimization algorithm for strategic planning of 4D aircraft trajectories at the continent scale

    OpenAIRE

    Chaimatanan, Supatcha; Delahaye, Daniel; Mongeau, Marcel

    2014-01-01

    Global air-traffic demand is continuously increasing. To handle such a tremendous traffic volume while maintaining at least the same level of safety, a more efficient strategic trajectory planning is necessary. In this work, we present a strategic trajectory planning methodology which aims to minimize interaction between aircraft at the European-continent scale. In addition, we propose a preliminary study that takes into account uncertainties of aircraft positions in t...

  3. Automated seismic detection of landslides at regional scales: a Random Forest based detection algorithm

    Science.gov (United States)

    Hibert, C.; Michéa, D.; Provost, F.; Malet, J. P.; Geertsema, M.

    2017-12-01

    Detection of landslide occurrences and measurement of their dynamic properties during run-out is a high research priority, but also a logistical and technical challenge. Seismology has started to help in several important ways. Taking advantage of the densification of global, regional and local networks of broadband seismic stations, recent advances now permit the seismic detection and location of landslides in near-real-time. This seismic detection could potentially greatly increase the spatio-temporal resolution at which we study landslide triggering, which is critical to better understand the influence of external forcings such as rainfall and earthquakes. However, automatically detecting the seismic signals generated by landslides still represents a challenge, especially for events with small mass. The low signal-to-noise ratio classically observed for landslide-generated seismic signals and the difficulty of discriminating these signals from those generated by regional earthquakes or anthropogenic and natural noise are some of the obstacles that have to be circumvented. We present a new method for automatically constructing instrumental landslide catalogues from continuous seismic data. We developed a robust and versatile solution, which can be implemented in any context where a seismic detection of landslides or other mass movements is relevant. The method is based on a spectral detection of the seismic signals and the identification of the sources with a Random Forest machine learning algorithm. The spectral detection allows detecting signals with low signal-to-noise ratio, while the Random Forest algorithm achieves a high rate of positive identification of the seismic signals generated by landslides and other seismic sources. The processing chain is implemented on a High Performance Computing centre, which permits the rapid exploration of years of continuous seismic data. We present here the preliminary results of the application of this processing chain for years
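    A hedged sketch of the Random Forest discrimination stage only, with made-up feature vectors and toy labels standing in for the real catalogue of labelled seismic events; the spectral detection front-end is not reproduced. Requires scikit-learn.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 6))        # e.g. duration, peak frequency, spectral ratios (assumed)
      y = rng.integers(0, 2, size=500)     # 1 = landslide, 0 = earthquake/noise (toy labels)

      clf = RandomForestClassifier(n_estimators=200, random_state=0)
      print(cross_val_score(clf, X, y, cv=5).mean())   # cross-validated identification rate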

  4. Parallel algorithms for large-scale biological sequence alignment on Xeon-Phi based clusters.

    Science.gov (United States)

    Lan, Haidong; Chan, Yuandong; Xu, Kai; Schmidt, Bertil; Peng, Shaoliang; Liu, Weiguo

    2016-07-19

    Computing alignments between two or more sequences is a common operation frequently performed in computational molecular biology. The continuing growth of biological sequence databases establishes the need for their efficient parallel implementation on modern accelerators. This paper presents new approaches to high performance biological sequence database scanning with the Smith-Waterman algorithm and the first stage of progressive multiple sequence alignment based on the ClustalW heuristic on a Xeon Phi-based compute cluster. Our approach uses a three-level parallelization scheme to take full advantage of the compute power available on this type of architecture; i.e. cluster-level data parallelism, thread-level coarse-grained parallelism, and vector-level fine-grained parallelism. Furthermore, we re-organize the sequence datasets and use Xeon Phi shuffle operations to improve I/O efficiency. Evaluations show that our method achieves a peak overall performance up to 220 GCUPS for scanning real protein sequence databanks on a single node consisting of two Intel E5-2620 CPUs and two Intel Xeon Phi 7110P cards. It also exhibits good scalability in terms of sequence length and size, and number of compute nodes for both database scanning and multiple sequence alignment. Furthermore, the achieved performance is highly competitive in comparison to optimized Xeon Phi and GPU implementations. Our implementation is available at https://github.com/turbo0628/LSDBS-mpi .
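    For reference, the scalar Smith-Waterman recurrence that the paper vectorises and distributes is sketched below; the linear gap penalty and match/mismatch scores are simple assumptions (production codes typically use affine gaps and substitution matrices).

      import numpy as np

      def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
          """Best local alignment score between strings a and b."""
          H = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
          best = 0
          for i in range(1, len(a) + 1):
              for j in range(1, len(b) + 1):
                  diag = H[i - 1, j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                  H[i, j] = max(0, diag, H[i - 1, j] + gap, H[i, j - 1] + gap)
                  best = max(best, H[i, j])
          return best

      print(smith_waterman("GATTACA", "GCATGCA"))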

  5. A threshold-voltage model for small-scaled GaAs nMOSFET with stacked high-k gate dielectric

    Science.gov (United States)

    Chaowen, Liu; Jingping, Xu; Lu, Liu; Hanhan, Lu; Yuan, Huang

    2016-02-01

    A threshold-voltage model for a stacked high-k gate dielectric GaAs MOSFET is established by solving a two-dimensional Poisson's equation in channel and considering the short-channel, DIBL and quantum effects. The simulated results are in good agreement with the Silvaco TCAD data, confirming the correctness and validity of the model. Using the model, impacts of structural and physical parameters of the stacked high-k gate dielectric on the threshold-voltage shift and the temperature characteristics of the threshold voltage are investigated. The results show that the stacked gate dielectric structure can effectively suppress the fringing-field and DIBL effects and improve the threshold and temperature characteristics, and on the other hand, the influence of temperature on the threshold voltage is overestimated if the quantum effect is ignored. Project supported by the National Natural Science Foundation of China (No. 61176100).

  6. A threshold-voltage model for small-scaled GaAs nMOSFET with stacked high-k gate dielectric

    International Nuclear Information System (INIS)

    Liu Chaowen; Xu Jingping; Liu Lu; Lu Hanhan; Huang Yuan

    2016-01-01

    A threshold-voltage model for a stacked high-k gate dielectric GaAs MOSFET is established by solving a two-dimensional Poisson's equation in channel and considering the short-channel, DIBL and quantum effects. The simulated results are in good agreement with the Silvaco TCAD data, confirming the correctness and validity of the model. Using the model, impacts of structural and physical parameters of the stacked high-k gate dielectric on the threshold-voltage shift and the temperature characteristics of the threshold voltage are investigated. The results show that the stacked gate dielectric structure can effectively suppress the fringing-field and DIBL effects and improve the threshold and temperature characteristics, and on the other hand, the influence of temperature on the threshold voltage is overestimated if the quantum effect is ignored. (paper)

  7. Distributed Monitoring of Voltage Collapse Sensitivity Indices

    OpenAIRE

    Simpson-Porco, John W.; Bullo, Francesco

    2016-01-01

    The assessment of voltage stability margins is a promising direction for wide-area monitoring systems. Accurate monitoring architectures for long-term voltage instability are typically centralized and lack scalability, while completely decentralized approaches relying on local measurements tend towards inaccuracy. Here we present distributed linear algorithms for the online computation of voltage collapse sensitivity indices. The computations are collectively performed by processors embedded ...

  8. A divide-and-conquer algorithm for large-scale de novo transcriptome assembly through combining small assemblies from existing algorithms.

    Science.gov (United States)

    Sze, Sing-Hoi; Parrott, Jonathan J; Tarone, Aaron M

    2017-12-06

    While the continued development of high-throughput sequencing has facilitated studies of entire transcriptomes in non-model organisms, the incorporation of an increasing amount of RNA-Seq libraries has made de novo transcriptome assembly difficult. Although algorithms that can assemble a large amount of RNA-Seq data are available, they are generally very memory-intensive and can only be used to construct small assemblies. We develop a divide-and-conquer strategy that allows these algorithms to be utilized, by subdividing a large RNA-Seq data set into small libraries. Each individual library is assembled independently by an existing algorithm, and a merging algorithm is developed to combine these assemblies by picking a subset of high quality transcripts to form a large transcriptome. When compared to existing algorithms that return a single assembly directly, this strategy achieves accuracy comparable to or higher than that of memory-efficient algorithms that can be used to process a large amount of RNA-Seq data, and accuracy comparable to or slightly lower than that of memory-intensive algorithms that can only be used to construct small assemblies. Our divide-and-conquer strategy allows memory-intensive de novo transcriptome assembly algorithms to be utilized to construct large assemblies.

  9. Sensing across large-scale cognitive radio networks: Data processing, algorithms, and testbed for wireless tomography and moving target tracking

    Science.gov (United States)

    Bonior, Jason David

    As the use of wireless devices has become more widespread so has the potential for utilizing wireless networks for remote sensing applications. Regular wireless communication devices are not typically designed for remote sensing. Remote sensing techniques must be carefully tailored to the capabilities of these networks before they can be applied. Experimental verification of these techniques and algorithms requires robust yet flexible testbeds. In this dissertation, two experimental testbeds for the advancement of research into sensing across large-scale cognitive radio networks are presented. System architectures, implementations, capabilities, experimental verification, and performance are discussed. One testbed is designed for the collection of scattering data to be used in RF and wireless tomography research. This system is used to collect full complex scattering data using a vector network analyzer (VNA) and amplitude-only data using non-synchronous software-defined radios (SDRs). Collected data is used to experimentally validate a technique for phase reconstruction using semidefinite relaxation and demonstrate the feasibility of wireless tomography. The second testbed is a SDR network for the collection of experimental data. The development of tools for network maintenance and data collection is presented and discussed. A novel recursive weighted centroid algorithm for device-free target localization using the variance of received signal strength for wireless links is proposed. The signal variance resulting from a moving target is modeled as having contours related to Cassini ovals. This model is used to formulate recursive weights which reduce the influence of wireless links that are farther from the target location estimate. The algorithm and its implementation on this testbed are presented and experimental results discussed.
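
    The dissertation abstract only outlines the recursive weighted-centroid idea (weights derived from per-link RSS variance, with links far from the current estimate down-weighted on each pass). The toy sketch below is one possible reading of that idea; the exponential weighting function, the decay constant and the use of link midpoints are assumptions made here for illustration, not the author's model.

```python
import numpy as np

def weighted_centroid(link_midpoints, rss_variance, iters=5, decay=2.0):
    """Toy recursive weighted centroid: link_midpoints is (N, 2), rss_variance is (N,)."""
    pts = np.asarray(link_midpoints, dtype=float)
    w = np.asarray(rss_variance, dtype=float)
    est = np.average(pts, axis=0, weights=w)             # initial variance-weighted centroid
    for _ in range(iters):
        d = np.linalg.norm(pts - est, axis=1)
        w_r = w * np.exp(-decay * d / (d.max() + 1e-9))  # down-weight links far from the estimate
        est = np.average(pts, axis=0, weights=w_r)
    return est

mids = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0), (3.0, 3.0)]
var = [0.9, 1.2, 1.1, 0.8, 0.1]
print(weighted_centroid(mids, var))
```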

  10. Power System Decomposition for Practical Implementation of Bulk-Grid Voltage Control Methods

    Energy Technology Data Exchange (ETDEWEB)

    Vallem, Mallikarjuna R.; Vyakaranam, Bharat GNVSR; Holzer, Jesse T.; Elizondo, Marcelo A.; Samaan, Nader A.

    2017-10-19

    Power system algorithms such as AC optimal power flow and coordinated volt/var control of the bulk power system are computationally intensive and become difficult to solve in operational time frames. The computational time required to run these algorithms increases exponentially as the size of the power system increases. The solution time for multiple subsystems is less than that for solving the entire system simultaneously, and the local nature of the voltage problem lends itself to such decomposition. This paper describes an algorithm that can be used to perform power system decomposition from the point of view of the voltage control problem. Our approach takes advantage of the dominant localized effect of voltage control and is based on clustering buses according to the electrical distances between them. One of the contributions of the paper is to use multidimensional scaling to compute n-dimensional Euclidean coordinates for each bus based on electrical distance to perform algorithms like K-means clustering. A simple coordinated reactive power control of photovoltaic inverters for voltage regulation is used to demonstrate the effectiveness of the proposed decomposition algorithm and its components. The proposed decomposition method is demonstrated on the IEEE 118-bus system.
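
    As a rough illustration of the clustering step described above (electrical distances between buses, multidimensional scaling to Euclidean coordinates, then K-means), the sketch below uses scikit-learn (assumed available) with a placeholder distance matrix; computing true electrical distances from network impedances is not shown.

```python
# Sketch: embed buses via multidimensional scaling of a distance matrix, then group them
# into voltage-control zones with K-means. The distance matrix here is a stand-in only.
import numpy as np
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_bus = 30
pts = rng.random((n_bus, 2))                                # placeholder bus "locations"
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)    # stand-in for electrical distances

coords = MDS(n_components=3, dissimilarity="precomputed",
             random_state=0).fit_transform(D)               # n-dimensional coordinates per bus
zones = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(coords)
print(zones)
```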

  11. Heuristic algorithm for determination of local properties of scale-free networks

    CERN Document Server

    Mitrovic, M

    2006-01-01

    Complex networks are everywhere. Many phenomena in nature can be modeled as networks: brain structures, protein-protein interaction networks, social interactions, the Internet and the WWW. They can be represented in terms of nodes and the edges connecting them. An important characteristic is that these networks are not random; they have a structured architecture. The structures of different networks are similar: all have a power-law degree distribution (the scale-free property), and despite their large size there is usually a relatively short path between any two nodes (the small-world property). Global characteristics include the degree distribution, the clustering coefficient and the diameter. The local structure is described by the frequency of subgraphs of a given type (a subgraph of order k is a part of the network consisting of k nodes and the edges between them); there are different types of subgraphs of the same order.

  12. Redesigned-Scale-Free CORDIC Algorithm Based FPGA Implementation of Window Functions to Minimize Area and Latency

    Directory of Open Access Journals (Sweden)

    Supriya Aggarwal

    2012-01-01

    Full Text Available One of the most important steps in spectral analysis is filtering, where window functions are generally used to design filters. In this paper, we modify the existing architecture for realizing the window functions using CORDIC processor. Firstly, we modify the conventional CORDIC algorithm to reduce its latency and area. The proposed CORDIC algorithm is completely scale-free for the range of convergence that spans the entire coordinate space. Secondly, we realize the window functions using a single CORDIC processor as against two serially connected CORDIC processors in the existing technique, thus optimizing it for area and latency. The linear CORDIC processor is replaced by a shift-add network which drastically reduces the number of pipelining stages required in the existing design. The proposed design on an average requires approximately 64% fewer pipeline stages and saves up to 44.2% area. Currently, the processor is designed to implement the Blackman windowing architecture, which with slight modifications can be extended to other window functions as well. The details of the proposed architecture are discussed in the paper.
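
    The redesigned scale-free CORDIC itself is not specified in the abstract; for context, the sketch below shows the conventional rotation-mode CORDIC iteration that such designs modify, including the scale-compensation factor K that scale-free variants avoid. The iteration count is arbitrary.

```python
# Conventional rotation-mode CORDIC (the baseline the paper modifies), computing cos/sin.
import math

def cordic_cos_sin(theta, n_iter=24):
    K = 1.0
    for i in range(n_iter):                       # accumulate the CORDIC gain
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, theta
    for i in range(n_iter):
        d = 1.0 if z >= 0 else -1.0               # rotate toward zero residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atan(2.0 ** -i)
    return x * K, y * K                           # scale compensation (absent in scale-free variants)

print(cordic_cos_sin(math.pi / 6))                # roughly (0.866, 0.5)
```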

  13. The development of small-scale mechanization means positioning algorithm using radio frequency identification technology in industrial plants

    Science.gov (United States)

    Astafiev, A.; Orlov, A.; Privezencev, D.

    2018-01-01

    The article is devoted to the development of technology and software for the construction of positioning and control systems for small mechanization in industrial plants based on radio frequency identification methods, which will be the basis for creating highly efficient intelligent systems for controlling the product movement in industrial enterprises. The main standards that are applied in the field of product movement control automation and radio frequency identification are considered. The article reviews modern publications and automation systems for the control of product movement developed by domestic and foreign manufacturers. It describes the developed algorithm for positioning of small-scale mechanization means in an industrial enterprise. Experimental studies in laboratory and production conditions have been conducted and described in the article.

  14. Artificial Excitation of Ferro-Resonance for Testing Electrotechnical Equipment in Distribution Devices with Increased Voltage

    Directory of Open Access Journals (Sweden)

    Ye. V. Dmitriev

    2006-01-01

    Full Text Available On the basis of the developed device for protection against ferro-resonant and high-frequency cumulative over-voltages, an algorithm for obtaining a voltage that imitates ferro-resonant over-voltages is proposed in the paper. The algorithm applies a voltage to the secondary side of the transformer from an external source.

  15. Spectral algorithms for multiple scale localized eigenfunctions in infinitely long, slightly bent quantum waveguides

    Science.gov (United States)

    Boyd, John P.; Amore, Paolo; Fernández, Francisco M.

    2018-03-01

    A "bent waveguide" in the sense used here is a small perturbation of a two-dimensional rectangular strip which is infinitely long in the down-channel direction and has a finite, constant width in the cross-channel coordinate. The goal is to calculate the smallest ("ground state") eigenvalue of the stationary Schrödinger equation which here is a two-dimensional Helmholtz equation, ψxx + ψyy + Eψ = 0, where E is the eigenvalue and homogeneous Dirichlet boundary conditions are imposed on the walls of the waveguide. Perturbation theory gives a good description when the "bending strength" parameter ɛ is small as described in our previous article (Amore et al., 2017) and other works cited therein. However, such series are asymptotic, and it is often impractical to calculate more than a handful of terms. It is therefore useful to develop numerical methods for the perturbed strip to cover intermediate ɛ where the perturbation series may be inaccurate and also to check the perturbation expansion when ɛ is small. The perturbation-induced change-in-eigenvalue, δ ≡ E(ɛ) - E(0), is O(ɛ2). We show that the computation becomes very challenging as ɛ → 0 because (i) the ground state eigenfunction varies on both O(1) and O(1/ɛ) length scales and (ii) high accuracy is needed to compute several correct digits in δ, which is itself small compared to the eigenvalue E. The multiple length scales are not geographically separate, but rather are inextricably commingled in the neighborhood of the boundary deformation. We show that coordinate mapping and immersed boundary strategies both reduce the computational domain to the uniform strip, allowing application of pseudospectral methods on tensor product grids with tensor product basis functions. We compared different basis sets; Chebyshev polynomials are best in the cross-channel direction. However, sine functions generate rather accurate analytical approximations with just a single basis function. In the down

  16. LOFT voltage insertion calibration program

    International Nuclear Information System (INIS)

    Tillitt, D.N.; Miyasaki, F.S.

    1975-08-01

    The Loss-of-Fluid Test (LOFT) Facility is an experimental facility built around a "scaled" version of a large pressurized water reactor (LPWR). Part of this facility is the Data Acquisition and Visual Display System (DAVDS) as defined by the LOFT System Design Document SDD 1.4.2C. The DAVDS has a 702 data channel recording capability of which 548 are recorded digitally. The DAVDS also contains a Voltage Insertion Calibration Subsystem used to inject precise and known voltage steps into the recording systems. The computer program that controls the Voltage Insertion Calibration Subsystem is presented. 7 references. (auth)

  17. A Multi-Scale Method for Dynamics Simulation in Continuum Solvent Models I: Finite-Difference Algorithm for Navier-Stokes Equation.

    Science.gov (United States)

    Xiao, Li; Cai, Qin; Li, Zhilin; Zhao, Hongkai; Luo, Ray

    2014-11-25

    A multi-scale framework is proposed for more realistic molecular dynamics simulations in continuum solvent models by coupling a molecular mechanics treatment of solute with a fluid mechanics treatment of solvent. This article reports our initial efforts to formulate the physical concepts necessary for coupling the two mechanics and develop a 3D numerical algorithm to simulate the solvent fluid via the Navier-Stokes equation. The numerical algorithm was validated with multiple test cases. The validation shows that the algorithm is effective and stable, with observed accuracy consistent with our design.

  18. Delft-FEWS: A Decision Making Platform to Integrate Data, Model, Algorithm for Large-Scale River Basin Water Management

    Science.gov (United States)

    Yang, T.; Welles, E.

    2017-12-01

    In this paper, we introduce a flood forecasting and decision making platform, named Delft-FEWS, which has been developed over the years at Delft Hydraulics and now at Deltares. The philosophy of Delft-FEWS is to provide water managers and operators with an open shell tool, which allows the integration of a variety of hydrological, hydraulic, river routing, and reservoir models with hydrometeorological forecast data. Delft-FEWS serves as a powerful tool for both basin-scale and national-scale water resources management. The essential novelty of Delft-FEWS is to change flood forecasting and water resources management from a single-model or agency-centric paradigm to an integrated framework, in which different models, data, algorithms and stakeholders are strongly linked together. The paper will start with the challenges in water resources management, and the concept and philosophy of Delft-FEWS. It will then describe the details of data handling and the linkages of Delft-FEWS with different hydrological, hydraulic, and reservoir models. Last, several case studies and applications of Delft-FEWS will be demonstrated, including the National Weather Service and the Bonneville Power Administration in the USA, and a national application in a water board in the Netherlands.

  19. A Robust Computational Technique for Model Order Reduction of Two-Time-Scale Discrete Systems via Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Othman M. K. Alsmadi

    2015-01-01

    Full Text Available A robust computational technique for model order reduction (MOR) of multi-time-scale discrete systems (single-input single-output (SISO) and multi-input multi-output (MIMO)) is presented in this paper. This work is motivated by the singular perturbation of multi-time-scale systems where some specific dynamics may not have significant influence on the overall system behavior. The new approach is proposed using genetic algorithms (GA) with the advantage of obtaining a reduced order model, maintaining the exact dominant dynamics in the reduced order, and minimizing the steady state error. The reduction process is performed by obtaining an upper triangular transformed matrix of the system state matrix defined in state space representation along with the elements of B, C, and D matrices. The GA computational procedure is based on maximizing the fitness function corresponding to the response deviation between the full and reduced order models. The proposed computational intelligence MOR method is compared to recently published work on MOR techniques where simulation results show the potential and advantages of the new approach.

  20. Coupling graph perturbation theory with scalable parallel algorithms for large-scale enumeration of maximal cliques in biological graphs

    International Nuclear Information System (INIS)

    Samatova, N F; Schmidt, M C; Hendrix, W; Breimyer, P; Thomas, K; Park, B-H

    2008-01-01

    Data-driven construction of predictive models for biological systems faces challenges from data intensity, uncertainty, and computational complexity. Data-driven model inference is often considered a combinatorial graph problem where an enumeration of all feasible models is sought. The data-intensive and the NP-hard nature of such problems, however, challenges existing methods to meet the required scale of data size and uncertainty, even on modern supercomputers. Maximal clique enumeration (MCE) in a graph derived from such biological data is often a rate-limiting step in detecting protein complexes in protein interaction data, finding clusters of co-expressed genes in microarray data, or identifying clusters of orthologous genes in protein sequence data. We report two key advances that address this challenge. We designed and implemented the first (to the best of our knowledge) parallel MCE algorithm that scales linearly on thousands of processors running MCE on real-world biological networks with thousands and hundreds of thousands of vertices. In addition, we proposed and developed the Graph Perturbation Theory (GPT) that establishes a foundation for efficiently solving the MCE problem in perturbed graphs, which model the uncertainty in the data. GPT formulates necessary and sufficient conditions for detecting the differences between the sets of maximal cliques in the original and perturbed graphs and reduces the enumeration time by more than 80% compared to complete recomputation
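
    The parallel MCE algorithm and the Graph Perturbation Theory cannot be reproduced from the abstract; as a serial point of reference, the snippet below enumerates the maximal cliques of a toy graph with networkx (assumed available), whose find_cliques routine implements a Bron-Kerbosch-style search.

```python
# Serial maximal clique enumeration on a small graph, for reference only.
import networkx as nx

G = nx.Graph()
G.add_edges_from([("a", "b"), ("b", "c"), ("a", "c"), ("c", "d"), ("d", "e"), ("c", "e")])
for clique in nx.find_cliques(G):
    print(sorted(clique))       # e.g. ['a', 'b', 'c'] and ['c', 'd', 'e']
```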

  1. Modified truncated randomized singular value decomposition (MTRSVD) algorithms for large scale discrete ill-posed problems with general-form regularization

    Science.gov (United States)

    Jia, Zhongxiao; Yang, Yanfei

    2018-05-01

    In this paper, we propose new randomization based algorithms for large scale linear discrete ill-posed problems with general-form regularization, where L is a regularization matrix. Our algorithms are inspired by the modified truncated singular value decomposition (MTSVD) method, which suits only small to medium scale problems, and randomized SVD (RSVD) algorithms that generate good low rank approximations to A. We use rank-k truncated randomized SVD (TRSVD) approximations to A, obtained by truncating the rank-(k + q) RSVD approximations to A, where q is an oversampling parameter. The resulting algorithms are called modified TRSVD (MTRSVD) methods. At every step, we use the LSQR algorithm to solve the resulting inner least squares problem, which is proved to become better conditioned as k increases so that LSQR converges faster. We present sharp bounds for the approximation accuracy of the RSVDs and TRSVDs for severely, moderately and mildly ill-posed problems, and substantially improve a known basic bound for TRSVD approximations. We prove how to choose the stopping tolerance for LSQR in order to guarantee that the computed and exact best regularized solutions have the same accuracy. Numerical experiments illustrate that the best regularized solutions by MTRSVD are as accurate as the ones by the truncated generalized singular value decomposition (TGSVD) algorithm, and at least as accurate as those by some existing truncated randomized generalized singular value decomposition (TRGSVD) algorithms. This work was supported in part by the National Science Foundation of China (Nos. 11771249 and 11371219).
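
    A minimal sketch of the randomized building block referred to above, assuming the standard randomized SVD construction (Gaussian test matrix, QR of the sampled range, small SVD): form a rank-(k + q) randomized approximation and truncate it to rank k (TRSVD). This is only the ingredient, not the authors' MTRSVD method, and the test matrix below is arbitrary.

```python
# Basic randomized SVD with oversampling, then truncation to rank k (the TRSVD building block).
import numpy as np

def trsvd(A, k, q=10, seed=0):
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], k + q))  # Gaussian test matrix, oversampled
    Q, _ = np.linalg.qr(A @ Omega)                    # orthonormal basis for the sampled range
    U_b, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    U = Q @ U_b
    return U[:, :k], s[:k], Vt[:k, :]                 # truncate the rank-(k+q) approximation to rank k

A = np.random.default_rng(1).standard_normal((200, 120))
U, s, Vt = trsvd(A, k=15)
print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))
```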

  2. Voltage Control Scheme with Distributed Generation and Grid Connected Converter in a DC Microgrid

    Directory of Open Access Journals (Sweden)

    Jong-Chan Choi

    2014-10-01

    Full Text Available Direct Current (DC) microgrids are expected to become larger due to the rapid growth of DC energy sources and power loads. As the scale of the system expands, the importance of voltage control will be increased to operate power systems stably. Many studies have been performed on voltage control methods in a DC microgrid, but most of them focused only on a small scale microgrid, such as a building microgrid. Therefore, a new control method is needed for a middle or large scale DC microgrid. This paper analyzes voltage drop problems in a large DC microgrid and proposes a cooperative voltage control scheme with a distributed generator (DG) and a grid connected converter (GCC). For the voltage control with DGs, their location and capacity should be considered for economic operation in the systems. Accordingly, an optimal DG allocation algorithm is proposed to minimize the capacity of a DG for voltage control in DC microgrids. The proposed methods are verified with typical load types by a simulation using MATLAB and PSCAD/EMTDC.

  3. Voltage regulating circuit

    NARCIS (Netherlands)

    2005-01-01

    A voltage regulating circuit comprising a rectifier (2) for receiving an AC voltage (Vmains) and for generating a rectified AC voltage (vrec), and a capacitor (3) connected in parallel with said rectified AC voltage for providing a DC voltage (VDC) over a load (5), characterized by a unidirectional

  4. Evaluation of clustering algorithms at the < 1 GeV energy scale for the electromagnetic calorimeter of the PADME experiment

    Science.gov (United States)

    Leonardi, E.; Piperno, G.; Raggi, M.

    2017-10-01

    A possible solution to the Dark Matter problem postulates that it interacts with Standard Model particles through a new force mediated by a “portal”. If the new force has a U(1) gauge structure, the “portal” is a massive photon-like vector particle, called dark photon or A'. The PADME experiment at the DAΦNE Beam-Test Facility (BTF) in Frascati is designed to detect dark photons produced in positron on fixed target annihilations decaying to dark matter (e+e-→γA') by measuring the final state missing mass. One of the key roles of the experiment will be played by the electromagnetic calorimeter, which will be used to measure the properties of the final state recoil γ. The calorimeter will be composed of 616 21×21×230 mm3 BGO crystals oriented with the long axis parallel to the beam direction and disposed in a roughly circular shape with a central hole to avoid the pile up due to the large number of low angle Bremsstrahlung photons. The total energy and position of the electromagnetic shower generated by a photon impacting on the calorimeter can be reconstructed by collecting the energy deposits in the cluster of crystals involved in the shower. In PADME we are testing two different clustering algorithms, PADME-Radius and PADME-Island, based on two complementary strategies. In this paper we will describe the two algorithms, with the respective implementations, and report on the results obtained with them at the PADME energy scale (< 1 GeV), both with a GEANT4 based simulation and with an existing 5×5 matrix of BGO crystals tested at the DAΦNE BTF.

  5. Review of Voltage Flicker Estimation Algorithms

    OpenAIRE

    Abhijith Augustine; Dr. T. Ruban Deva Prakash

    2014-01-01

    The quality of electric power is of supreme importance to electrical utilities and their customers. Modern equipments are more sensitive to power system anomalies than in the past. Microprocessor based controls and power electronics devices are sensitive to many types of disturbances. Minor power disruptions, which once would have been noticed only as a momentary flickering of the lights, may now completely interrupt whole automated factories because of sensitive electronic controllers or mak...

  6. Near-Threshold Computing and Minimum Supply Voltage of Single-Rail MCML Circuits

    Directory of Open Access Journals (Sweden)

    Ruiping Cao

    2014-01-01

    Full Text Available In high-speed applications, MOS current mode logic (MCML) is a good alternative. Scaling down the supply voltage of MCML circuits can achieve a low power-delay product (PDP). However, currently almost all MCML circuits are realized with a dual-rail scheme, where the NMOS configuration in series limits the minimum supply voltage. In this paper, single-rail MCML (SRMCML) circuits are described, which can avoid the device configuration in series, since their logic evaluation block can be realized by only using MOS devices in parallel. The relationship between the minimum supply voltage of the SRMCML circuits and the model parameters of MOS transistors is derived, so that the minimum supply voltage can be estimated before circuit design. An MCML dynamic flip-flop based on SRMCML is also proposed. The optimization algorithm for near-threshold sequential circuits is presented. A near-threshold SRMCML mode-10 counter based on the optimization algorithm is verified. Scaling down the supply voltage of the SRMCML circuits is also investigated. The power dissipation, delay, and power-delay products of these circuits are evaluated. The results show that the near-threshold SRMCML circuits can achieve low delay and a small power-delay product.

  7. Cavity Voltage Phase Modulation MD

    CERN Document Server

    Mastoridis, Themistoklis; Molendijk, John; Timko, Helga; CERN. Geneva. ATS Department

    2016-01-01

    The LHC RF/LLRF system is currently configured for extremely stable RF voltage to minimize transient beam loading effects. The present scheme cannot be extended beyond nominal beam current since the demanded power would exceed the peak klystron power and lead to saturation. A new scheme has therefore been proposed: for beam currents above nominal (and possibly earlier), the cavity phase modulation by the beam will not be corrected (transient beam loading), but the strong RF feedback and One-Turn Delay feedback will still be active for loop and beam stability in physics. To achieve this, the voltage set point will be adapted for each bunch. The goal of this MD was to test a new algorithm that would adjust the voltage set point to achieve the cavity phase modulation that would minimize klystron forward power.

  8. A massively parallel algorithm for the solution of constrained equations of motion with applications to large-scale, long-time molecular dynamics simulations

    Energy Technology Data Exchange (ETDEWEB)

    Fijany, A. [Jet Propulsion Lab., Pasadena, CA (United States); Coley, T.R. [Virtual Chemistry, Inc., San Diego, CA (United States); Cagin, T.; Goddard, W.A. III [California Institute of Technology, Pasadena, CA (United States)

    1997-12-31

    Successful molecular dynamics (MD) simulation of large systems (> million atoms) for long times (> nanoseconds) requires the integration of constrained equations of motion (CEOM). Constraints are used to eliminate high frequency degrees of freedom (DOF) and to allow the use of rigid bodies. Solving the CEOM allows for larger integration time-steps and helps focus the simulation on the important collective dynamics of chemical, biological, and materials systems. We explore advances in multibody dynamics which have resulted in O(N) algorithms for propagating the CEOM. However, because of their strictly sequential nature, the computational time required by these algorithms does not scale down with increased numbers of processors. We then present the new constraint force algorithm for solving the CEOM and show that this algorithm is fully parallelizable, leading to a computational cost of O(N/P + log P) for N DOF on P processors.

  9. Enhancement tuning and control for high dynamic range images in multi-scale locally adaptive contrast enhancement algorithms

    Science.gov (United States)

    Cvetkovic, Sascha D.; Schirris, Johan; de With, Peter H. N.

    2009-01-01

    For real-time imaging in surveillance applications, visibility of details is of primary importance to ensure customer confidence. If we display High Dynamic-Range (HDR) scenes whose contrast spans four or more orders of magnitude on a conventional monitor without additional processing, results are unacceptable. Compression of the dynamic range is therefore a compulsory part of any high-end video processing chain because standard monitors are inherently Low- Dynamic Range (LDR) devices with maximally two orders of display dynamic range. In real-time camera processing, many complex scenes are improved with local contrast enhancements, bringing details to the best possible visibility. In this paper, we show how a multi-scale high-frequency enhancement scheme, in which gain is a non-linear function of the detail energy, can be used for the dynamic range compression of HDR real-time video camera signals. We also show the connection of our enhancement scheme to the processing way of the Human Visual System (HVS). Our algorithm simultaneously controls perceived sharpness, ringing ("halo") artifacts (contrast) and noise, resulting in a good balance between visibility of details and non-disturbance of artifacts. The overall quality enhancement, suitable for both HDR and LDR scenes, is based on a careful selection of the filter types for the multi-band decomposition and a detailed analysis of the signal per frequency band.

  10. A modified Symbiotic Organisms Search algorithm for large scale economic dispatch problem with valve-point effects

    International Nuclear Information System (INIS)

    Secui, Dinu Calin

    2016-01-01

    This paper proposes a new metaheuristic algorithm, called Modified Symbiotic Organisms Search (MSOS) algorithm, to solve the economic dispatch problem considering the valve-point effects, the prohibited operating zones (POZ), the transmission line losses, multi-fuel sources, as well as other operating constraints of the generating units and power system. The MSOS algorithm introduces, in all of its phases, new relations to update the solutions to improve its capacity of identifying stable and of high-quality solutions in a reasonable time. Furthermore, to increase the capacity of exploring the MSOS algorithm in finding the most promising zones, it is endowed with a chaotic component generated by the Logistic map. The performance of the modified algorithm and of the original algorithm Symbiotic Organisms Search (SOS) is tested on five systems of different characteristics, constraints and dimensions (13-unit, 40-unit, 80-unit, 160-unit and 320-unit). The results obtained by applying the proposed algorithm (MSOS) show that this has a better performance than other techniques of optimization recently used in solving the economic dispatch problem with valve-point effects. - Highlights: • A new modified SOS algorithm (MSOS) is proposed to solve the EcD problem. • Valve-point effects, ramp-rate limits, POZ, multi-fuel sources, transmission losses were considered. • The algorithm is tested on five systems having 13, 40, 80, 160 and 320 thermal units. • MSOS algorithm outperforms many other optimization techniques.
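
    The MSOS update relations are not given in the abstract; the snippet below only illustrates two ingredients it mentions, a valve-point fuel-cost function and a Logistic-map chaotic sequence. All coefficients are made up for illustration and are not taken from the paper.

```python
# Illustrative pieces of the problem/algorithm described above (coefficients are invented):
# a valve-point fuel-cost function and a Logistic-map chaotic sequence.
import math

def fuel_cost(P, a, b, c, e, f, Pmin):
    # quadratic cost plus the rectified-sine valve-point term
    return a * P**2 + b * P + c + abs(e * math.sin(f * (Pmin - P)))

def logistic_map(x0=0.7, r=4.0, n=5):
    x, seq = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)       # chaotic regime for r = 4 and generic x0 in (0, 1)
        seq.append(x)
    return seq

print(fuel_cost(300.0, a=0.00028, b=8.1, c=550.0, e=300.0, f=0.035, Pmin=100.0))
print(logistic_map())
```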

  11. Transmission congestion and voltage profile management coordination in competitive electricity markets

    International Nuclear Information System (INIS)

    Yamin, H.Y.; Shahidehpour, S.M.

    2003-01-01

    This paper describes a generalized active/reactive iterative coordination process between GENCOs and the Independent System Operator (ISO) for active (transmission congestion) and reactive (voltage profile) management in the day-ahead market. GENCOs apply priced-based unit commitment without transmission and voltage security constraints, schedule their units and submit their initial bids to the ISO. The ISO executes congestion and voltage profile management for eliminating transmission and voltage profile violations. If violations are not eliminated, the ISO minimizes the transmission and voltage profile violations and sends a signal via the Internet to GENCOs. GENCOs reschedule their units taking into account the ISO signals and submit modified bids to the ISO. The voltage problem is addressed and a linear model is formulated and used in the proposed method. The voltage problem is formulated as a linear programming with a block-angular structure and Dantzig-Wolfe decomposition is applied to generate several smaller problems for a faster and easier solution of large-scale power systems. Two 36 unit GENCOs are used to demonstrate the performance of the proposed generalized active/reactive coordination algorithm. (author)

  12. Your choice MATor(s) : large-scale quantitative anonymity assessment of Tor path selection algorithms against structural attacks

    OpenAIRE

    Backes, Michael; Meiser, Sebastian; Slowik, Marcin

    2015-01-01

    In this paper, we present a rigorous methodology for quantifying the anonymity provided by Tor against a variety of structural attacks, i.e., adversaries that compromise Tor nodes and thereby perform eavesdropping attacks to deanonymize Tor users. First, we provide an algorithmic approach for computing the anonymity impact of such structural attacks against Tor. The algorithm is parametric in the considered path selection algorithm and is, hence, capable of reasoning about variants of Tor and...

  13. Prediction of SOC content by Vis-NIR spectroscopy at European scale using a modified local PLS algorithm

    Science.gov (United States)

    Nocita, M.; Stevens, A.; Toth, G.; van Wesemael, B.; Montanarella, L.

    2012-12-01

    under grassland, with a root mean square error (RMSE) of 3.6 and 7.2 g C kg-1 respectively, while mineral soils under woodland and organic soils predictions were less accurate (RMSE of 11.9 and 51.1 g C kg-1). The RMSE was lower (except for organic soils) when sand content was used as covariate in the selection of the l-PLS predicting neighbours. The obtained results proved that: (i) Although the enormous spatial variability of European soils, the developed modified l-PLS algorithm was able to produce stable calibrations and accurate predictions. (ii) It is essential to invest in spectral libraries built according to sampling strategies, based on soil types, and a standardized laboratory protocol. (iii) Vis-NIR DRS spectroscopy is a powerful and cost effective tool to predict SOC content at regional/continental scales, and should be converted from a pure research discipline into a reference operational method decreasing the uncertainties of SOC monitoring and terrestrial ecosystems carbon fluxes at all scales.

  14. Voltage scheduling for low power/energy

    Science.gov (United States)

    Manzak, Ali

    2001-07-01

    Power considerations have become an increasingly dominant factor in the design of both portable and desk-top systems. An effective way to reduce power consumption is to lower the supply voltage since voltage is quadratically related to power. This dissertation considers the problem of lowering the supply voltage at (i) the system level and at (ii) the behavioral level. At the system level, the voltage of the variable voltage processor is dynamically changed with the work load. Processors with limited sized buffers as well as those with very large buffers are considered. Given the task arrival times, deadline times, execution times, periods and switching activities, task scheduling algorithms that minimize energy or peak power are developed for the processors equipped with very large buffers. A relation between the operating voltages of the tasks for minimum energy/power is determined using the Lagrange multiplier method, and an iterative algorithm that utilizes this relation is developed. Experimental results show that the voltage assignment obtained by the proposed algorithm is very close (0.1% error) to that of the optimal energy assignment and the optimal peak power (1% error) assignment. Next, on-line and off-line minimum energy task scheduling algorithms are developed for processors with limited sized buffers. These algorithms have polynomial time complexity and present optimal (off-line) and close-to-optimal (on-line) solutions. A procedure to calculate the minimum buffer size given information about the size of the task (maximum, minimum), execution time (best case, worst case) and deadlines is also presented. At the behavioral level, resources operating at multiple voltages are used to minimize power while maintaining the throughput. Such a scheme has the advantage of allowing modules on the critical paths to be assigned to the highest voltage levels (thus meeting the required timing constraints) while allowing modules on non-critical paths to be assigned
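
    The dissertation's scheduling algorithms are not reproduced here; as a toy illustration of why scaling down the supply voltage (and hence frequency) saves energy under a deadline, the sketch below uses the common convex power model P proportional to f^3, an assumption made here and not taken from the text.

```python
# Toy illustration: energy of executing W cycles before a deadline under P = k * f^3,
# so E(W, f) = P * t = k * W * f^2. Stretching the work to the deadline minimizes energy.
def energy(cycles, freq, k=1.0):
    return k * cycles * freq**2

W, deadline, f_max = 1e9, 2.0, 1e9        # cycles, seconds, Hz (all illustrative)
f_min = W / deadline                       # slowest frequency that still meets the deadline
print(energy(W, f_max), energy(W, f_min))  # running slower saves a factor of (f_max / f_min)^2
```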

  15. The WACMOS-ET project – Part 1: Tower-scale evaluation of four remote sensing-based evapotranspiration algorithms

    KAUST Repository

    Michel, D.; Jiménez, C.; Miralles, Diego G.; Jung, M.; Hirschi, M.; Ershadi, A.; Martens, B.; McCabe, Matthew; Fisher, J. B.; Mu, Q.; Seneviratne, S. I.; Wood, E. F.; Fernández-Prieto, D.

    2015-01-01

    algorithms: the Priestley–Taylor Jet Propulsion Laboratory model (PT-JPL), the Penman–Monteith algorithm from the MODIS evaporation product (PM-MOD), the Surface Energy Balance System (SEBS) and the Global Land Evaporation Amsterdam Model (GLEAM). In addition

  16. A Hybrid DV-Hop Algorithm Using RSSI for Localization in Large-Scale Wireless Sensor Networks.

    Science.gov (United States)

    Cheikhrouhou, Omar; M Bhatti, Ghulam; Alroobaea, Roobaea

    2018-05-08

    With the increasing realization of the Internet-of-Things (IoT) and rapid proliferation of wireless sensor networks (WSN), estimating the location of wireless sensor nodes is emerging as an important issue. Traditional ranging based localization algorithms use triangulation for estimating the physical location of only those wireless nodes that are within one-hop distance from the anchor nodes. Multi-hop localization algorithms, on the other hand, aim at localizing the wireless nodes that can physically be residing at multiple hops away from anchor nodes. These latter algorithms have attracted a growing interest from research community due to the smaller number of required anchor nodes. One such algorithm, known as DV-Hop (Distance Vector Hop), has gained popularity due to its simplicity and lower cost. However, DV-Hop suffers from reduced accuracy due to the fact that it exploits only the network topology (i.e., number of hops to anchors) rather than the distances between pairs of nodes. In this paper, we propose an enhanced DV-Hop localization algorithm that also uses the RSSI values associated with links between one-hop neighbors. Moreover, we exploit already localized nodes by promoting them to become additional anchor nodes. Our simulations have shown that the proposed algorithm significantly outperforms the original DV-Hop localization algorithm and two of its recently published variants, namely RSSI Auxiliary Ranging and the Selective 3-Anchor DV-hop algorithm. More precisely, in some scenarios, the proposed algorithm improves the localization accuracy by almost 95%, 90% and 70% as compared to the basic DV-Hop, Selective 3-Anchor, and RSSI DV-Hop algorithms, respectively.
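
    The RSSI-assisted refinement is not reproduced here; the sketch below only walks through the baseline DV-Hop steps that the paper enhances (hop counts by flooding, per-anchor average hop size, then a linear least-squares position fix) on a synthetic five-node chain with two anchors.

```python
# Baseline DV-Hop sketch on synthetic data: BFS hop counts, per-anchor average hop size,
# hop-based distance estimates, then a linear least-squares position fix.
import numpy as np
from collections import deque

def hop_counts(adj, src):
    h, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in h:
                h[v] = h[u] + 1
                q.append(v)
    return h

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}       # a 5-node chain
anchors = {0: np.array([0.0, 0.0]), 4: np.array([4.0, 0.0])}   # known anchor positions
hops = {a: hop_counts(adj, a) for a in anchors}

# average hop size per anchor = sum of anchor-anchor distances / sum of anchor-anchor hops
hop_size = {}
for a, pa in anchors.items():
    d = sum(np.linalg.norm(pa - pb) for b, pb in anchors.items() if b != a)
    h = sum(hops[a][b] for b in anchors if b != a)
    hop_size[a] = d / h

unknown = 2
dists = {a: hop_size[a] * hops[a][unknown] for a in anchors}   # estimated anchor distances

# linearized trilateration: 2 (x_2 - x_1) . p = d_1^2 - d_2^2 - |x_1|^2 + |x_2|^2
(a1, p1), (a2, p2) = anchors.items()
A = 2 * (p2 - p1).reshape(1, -1)
b = np.array([dists[a1]**2 - dists[a2]**2 - p1 @ p1 + p2 @ p2])
print(np.linalg.lstsq(A, b, rcond=None)[0])                    # minimum-norm fix, about [2, 0]
```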

  17. Effects of trap-assisted tunneling on gate-induced drain leakage in silicon-germanium channel p-type FET for scaled supply voltages

    Science.gov (United States)

    Tiwari, Vishal A.; Divakaruni, Rama; Hook, Terence B.; Nair, Deleep R.

    2016-04-01

    Silicon-germanium is considered as an alternative channel material to silicon p-type FET (pFET) for the development of energy efficient high performance transistors for 28 nm and beyond in a high-k metal gate technology because of its lower threshold voltage and higher mobility. However, gate-induced drain leakage (GIDL) is a concern for high threshold voltage device design because of tunneling at reduced bandgap. In this work, the trap-assisted tunneling and band-to-band tunneling (BTBT) effects on GIDL are analyzed and modeled for SiGe pFETs. Experimental results and Monte Carlo simulation results reveal that the pre-halo germanium pre-amorphization implant used to contain the short channel effects contributes to GIDL at the drain sidewall in addition to GIDL due to BTBT in SiGe devices. The results are validated by comparing the experimental observations with the numerical simulation, and a set of calibrated models is used to describe the GIDL mechanisms for various drain and gate biases.

  18. Minimizing Harmonic Distortion Impact at Distribution System with Considering Large-Scale EV Load Behaviour Using Modified Lightning Search Algorithm and Pareto-Fuzzy Approach

    Directory of Open Access Journals (Sweden)

    S. N. Syed Nasir

    2018-01-01

    Full Text Available This research focuses on the optimal placement and sizing of multiple variable passive filters (VPF) to mitigate harmonic distortion due to charging stations (CS) at a 449 bus distribution network. There are 132 units of CS which are scheduled based on user behaviour within 24 hours, with an interval of 15 minutes. By considering the varying CS patterns and harmonic impact, the Modified Lightning Search Algorithm (MLSA) is used to find the coordination of 22 units of VPF, so that fewer harmonics will be injected from the 415 V bus to the medium voltage network and power loss is also reduced. Power system harmonic flow, VPF, CS, battery, and the analysis will be modelled in the MATLAB/m-file platform. High Performance Computing (HPC) is used to make simulation faster. A Pareto-Fuzzy technique is used to obtain the sizing of VPF from all nondominated solutions. From the result, the optimal placements and sizes of VPF are able to reduce the maximum THD for voltage and current and also the total apparent losses by up to 39.14%, 52.5%, and 2.96%, respectively. Therefore, it can be concluded that the MLSA is a suitable method to mitigate harmonics and is beneficial in minimizing the impact of aggressive CS installation at the distribution network.

  19. Semisupervised Community Detection by Voltage Drops

    Directory of Open Access Journals (Sweden)

    Min Ji

    2016-01-01

    Full Text Available Many applications show that semisupervised community detection is one of the important topics and has attracted considerable attention in the study of complex networks. In this paper, based on the notion of voltage drops and discrete potential theory, a simple and fast semisupervised community detection algorithm is proposed. The label propagation through discrete potential transmission is accomplished by using voltage drops. The complexity of the proposal is O(V+E) for a sparse network with V vertices and E edges. The obtained voltage value of a vertex clearly reflects the relationship between the vertex and its community. The experimental results on four real networks and three benchmarks indicate that the proposed algorithm is effective and flexible. Furthermore, this algorithm is easily applied to graph-based machine learning methods.
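
    A minimal interpretation of the voltage-drop idea described above: pin one labeled vertex per community at voltage 1 and 0, solve the graph Laplacian for the remaining vertices, and threshold the resulting voltages. The toy graph and the 0.5 threshold are assumptions for illustration, not the authors' exact formulation.

```python
# Minimal voltage-drop propagation on a toy graph with two labeled "seed" vertices.
import numpy as np

edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (3, 5)]   # two triangles joined by (2, 3)
n = 6
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0
L = np.diag(A.sum(axis=1)) - A             # graph Laplacian

fixed = {0: 1.0, 5: 0.0}                    # semi-supervised seeds: source at 1 V, sink at 0 V
free = [i for i in range(n) if i not in fixed]

# Solve L_ff v_f = -L_fb v_b (harmonic interior voltages given the pinned boundary values)
L_ff = L[np.ix_(free, free)]
L_fb = L[np.ix_(free, list(fixed))]
v_b = np.array([fixed[i] for i in fixed])
v = np.zeros(n)
v[list(fixed)] = v_b
v[free] = np.linalg.solve(L_ff, -L_fb @ v_b)

print(v)                                    # voltages drop from the source side to the sink side
print((v >= 0.5).astype(int))               # community assignment by thresholding
```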

  20. Monte Carlo simulation methods in moment-based scale-bridging algorithms for thermal radiative-transfer problems

    International Nuclear Information System (INIS)

    Densmore, J.D.; Park, H.; Wollaber, A.B.; Rauenzahn, R.M.; Knoll, D.A.

    2015-01-01

    We present a moment-based acceleration algorithm applied to Monte Carlo simulation of thermal radiative-transfer problems. Our acceleration algorithm employs a continuum system of moments to accelerate convergence of stiff absorption–emission physics. The combination of energy-conserving tallies and the use of an asymptotic approximation in optically thick regions remedy the difficulties of local energy conservation and mitigation of statistical noise in such regions. We demonstrate the efficiency and accuracy of the developed method. We also compare directly to the standard linearization-based method of Fleck and Cummings [1]. A factor of 40 reduction in total computational time is achieved with the new algorithm for an equivalent (or more accurate) solution as compared with the Fleck–Cummings algorithm

  1. Monte Carlo simulation methods in moment-based scale-bridging algorithms for thermal radiative-transfer problems

    Energy Technology Data Exchange (ETDEWEB)

    Densmore, J.D., E-mail: jeffery.densmore@unnpp.gov [Bettis Atomic Power Laboratory, P.O. Box 79, West Mifflin, PA 15122 (United States); Park, H., E-mail: hkpark@lanl.gov [Fluid Dynamics and Solid Mechanics Group, Los Alamos National Laboratory, P.O. Box 1663, MS B216, Los Alamos, NM 87545 (United States); Wollaber, A.B., E-mail: wollaber@lanl.gov [Computational Physics and Methods Group, Los Alamos National Laboratory, P.O. Box 1663, MS D409, Los Alamos, NM 87545 (United States); Rauenzahn, R.M., E-mail: rick@lanl.gov [Fluid Dynamics and Solid Mechanics Group, Los Alamos National Laboratory, P.O. Box 1663, MS B216, Los Alamos, NM 87545 (United States); Knoll, D.A., E-mail: nol@lanl.gov [Fluid Dynamics and Solid Mechanics Group, Los Alamos National Laboratory, P.O. Box 1663, MS B216, Los Alamos, NM 87545 (United States)

    2015-03-01

    We present a moment-based acceleration algorithm applied to Monte Carlo simulation of thermal radiative-transfer problems. Our acceleration algorithm employs a continuum system of moments to accelerate convergence of stiff absorption–emission physics. The combination of energy-conserving tallies and the use of an asymptotic approximation in optically thick regions remedy the difficulties of local energy conservation and mitigation of statistical noise in such regions. We demonstrate the efficiency and accuracy of the developed method. We also compare directly to the standard linearization-based method of Fleck and Cummings [1]. A factor of 40 reduction in total computational time is achieved with the new algorithm for an equivalent (or more accurate) solution as compared with the Fleck–Cummings algorithm.

  2. A prospective evaluation of the contrast, radiation dose and image quality of contrast-enhanced CT scans of paediatric abdomens using a low-concentration iodinated contrast agent and low tube voltage combined with 70% ASIR algorithm.

    Science.gov (United States)

    Wang, Xiaoxia; Zhong, Yumin; Hu, Liwei; Xue, Lianyan; Shi, Meihua; Qiu, Haisheng; Li, Jianying

    2016-09-01

    To quantitatively and subjectively assess the image quality of and radiation dose for an abdominal enhanced computed tomography (CT) scan with a low tube voltage and a low concentration of iodinated contrast agent in children. Forty-eight patients were randomised to one of the two following protocols: Group A (n=24, mean age 46.96±44.65 months, mean weight 15.71±9.11 kg, BMI 16.48±2.40 kg/m(2) ) and Group B (n=24, mean age 41.33±44.59 months, mean weight 18.15±17.67 kg, BMI 17.50±3.73 kg/m(2) ). Group A: 80 kVp tube voltage, 270 mg iodine (I)/mL contrast agent (Visipaque, GE Healthcare) and images were reconstructed using 70% adaptive statistical iterative reconstruction (ASIR). Group B: 100 kVp tube voltage, 370 mg I/mL contrast agent (Iopamiro, Bracco) and images were reconstructed using 50% ASIR. The volume of the contrast agent was 1.30 mL/kg in both Groups A and B. The degree of enhancement and noise in the abdominal aorta (AO) in the arterial phase (AP) and the portal vein (PV) in the portal venous phase (PVP) was measured; while the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) for the AO and PV were calculated. A 5-point scale was used to subjectively evaluate the image quality and image noise by two radiologists with more than 10 years of experience. Dose-length product (DLP) (mGy-cm) and CTDIvol (mGy) were calculated. Objective measurements and subjective quality scores for the two groups were compared using paired t-tests and Mann-Whitney U tests, respectively. There was no significant difference in age, weight or body mass index (BMI) between the two groups (all P>.5). The iodine load in Group A (5517.3±3197.2 mg I) was 37% lower than that in Group B (8772.1±8474.6 mg I), although there was no significant difference between them (P=.111). The DLP and the CT dose index (CTDIvol ) for Group A were also lower than for Group B, but were not statistically significantly different (DLP, 104 mGy-cm±45.81 vs 224.5

  3. High voltage systems

    International Nuclear Information System (INIS)

    Martin, M.

    1991-01-01

    Industrial processes usually require electrical power. This power is used to drive motors, to heat materials, or in electrochemical processes. Often the power requirements of a plant require the electric power to be delivered at high voltage. In this paper high voltage is considered to be any voltage over 600 V. This voltage could be as high as 138,000 V for some very large facilities. The characteristics of this voltage and the enormous amounts of power being transmitted necessitate special safety considerations. Safety must be considered during the four activities associated with a high voltage electrical system. These activities are: Design; Installation; Operation; and Maintenance.

  4. Lord-Wingersky Algorithm Version 2.0 for Hierarchical Item Factor Models with Applications in Test Scoring, Scale Alignment, and Model Fit Testing.

    Science.gov (United States)

    Cai, Li

    2015-06-01

    Lord and Wingersky's (Appl Psychol Meas 8:453-461, 1984) recursive algorithm for creating summed score based likelihoods and posteriors has a proven track record in unidimensional item response theory (IRT) applications. Extending the recursive algorithm to handle multidimensionality is relatively simple, especially with fixed quadrature because the recursions can be defined on a grid formed by direct products of quadrature points. However, the increase in computational burden remains exponential in the number of dimensions, making the implementation of the recursive algorithm cumbersome for truly high-dimensional models. In this paper, a dimension reduction method that is specific to the Lord-Wingersky recursions is developed. This method can take advantage of the restrictions implied by hierarchical item factor models, e.g., the bifactor model, the testlet model, or the two-tier model, such that a version of the Lord-Wingersky recursive algorithm can operate on a dramatically reduced set of quadrature points. For instance, in a bifactor model, the dimension of integration is always equal to 2, regardless of the number of factors. The new algorithm not only provides an effective mechanism to produce summed score to IRT scaled score translation tables properly adjusted for residual dependence, but leads to new applications in test scoring, linking, and model fit checking as well. Simulated and empirical examples are used to illustrate the new applications.
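
    For reference, the classic unidimensional Lord-Wingersky recursion that the paper generalizes can be sketched in a few lines for dichotomous items; the 2PL item parameters below are arbitrary examples, and none of the paper's dimension-reduction machinery is shown.

```python
# Classic Lord-Wingersky recursion: probability of each summed score for dichotomous items
# at a fixed ability theta (the 2PL item parameters are arbitrary examples).
import math

def lord_wingersky(probs):
    """probs[i] = P(item i correct | theta). Returns P(summed score = s) for s = 0..n."""
    L = [1.0]                                   # with zero items, score 0 has probability 1
    for p in probs:
        new = [0.0] * (len(L) + 1)
        for s, ls in enumerate(L):
            new[s]     += ls * (1.0 - p)        # item answered incorrectly: score unchanged
            new[s + 1] += ls * p                # item answered correctly: score + 1
        L = new
    return L

def p_2pl(theta, a, b):
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

theta = 0.5
items = [(1.2, -0.5), (0.8, 0.0), (1.5, 0.7)]   # (discrimination, difficulty) per item
probs = [p_2pl(theta, a, b) for a, b in items]
print(lord_wingersky(probs))                    # sums to 1 over scores 0..3
```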

  5. A fast random walk algorithm for computing diffusion-weighted NMR signals in multi-scale porous media: A feasibility study for a Menger sponge

    International Nuclear Information System (INIS)

    Grebenkov, Denis S.; Nguyen, Hang T.; Li, Jing-Rebecca

    2013-01-01

    A fast random walk (FRW) algorithm is adapted to compute diffusion-weighted NMR signals in a Menger sponge which is formed by multiple channels of broadly distributed sizes and often considered as a model for soils and porous materials. The self-similar structure of a Menger sponge allows for rapid simulations that were not feasible by other numerical techniques. The role of multiple length scales on diffusion-weighted NMR signals is investigated. (authors)

  6. Voltage regulator for generator

    Energy Technology Data Exchange (ETDEWEB)

    Naoi, K

    1989-01-17

    It is an object of this invention to provide a voltage regulator for a generator charging a battery, wherein even if the ambient temperature at the voltage regulator rises abnormally high, possible thermal breakage of the semiconductor elements constituting the voltage regulator can be avoided. A feature of this invention is that the semiconductor elements can be protected from thermal breakage, even at an abnormal ambient temperature rise at the voltage regulator for the battery charging generator, by controlling a maximum conduction ratio of a power transistor in the voltage regulator in accordance with the temperature at the voltage regulator. This is achieved through a switching device connected in series to the field coil of the generator and adapted to be controlled in accordance with an output voltage of the generator and the ambient temperature at the voltage regulator. 6 figs.

  7. Shunt PWM advanced var compensators based on voltage source inverters for Facts applications

    Energy Technology Data Exchange (ETDEWEB)

    Barbosa, Pedro G; Misaka, Isamu; Watanabe, Edson H [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia

    1994-12-31

    Increased attention has been given to improving power system operation. This paper presents the modeling, analysis and design of reactive shunt power compensators based on PWM-VSIs (Pulse Width Modulation - Voltage Source Inverters). The control algorithm is based on new concepts of instantaneous active and reactive power theory. The objective is to show that with a small capacitor on the DC side of a 3-phase PWM-VSI it is possible to synthesize a variable reactive (capacitive or inductive) device. Design procedures and experimental results are presented. The feasibility of this method was verified by digital simulations and measurements on a small scale model. (author) 9 refs., 12 figs.

  8. Automatic voltage imbalance detector

    Science.gov (United States)

    Bobbett, Ronald E.; McCormick, J. Byron; Kerwin, William J.

    1984-01-01

    A device for indicating and preventing damage to voltage cells such as galvanic cells and fuel cells connected in series by detecting sequential voltages and comparing these voltages to adjacent voltage cells. The device is implemented by using operational amplifiers and switching circuitry is provided by transistors. The device can be utilized in battery powered electric vehicles to prevent galvanic cell damage and also in series connected fuel cells to prevent fuel cell damage.

  9. A simple algorithm for large-scale mapping of evergreen forests in tropical America, Africa and Asia

    Science.gov (United States)

    Xiangming Xiao; Chandrashekhar M. Biradar; Christina Czarnecki; Tunrayo Alabi; Michael Keller

    2009-01-01

    The areal extent and spatial distribution of evergreen forests in the tropical zones are important for the study of climate, carbon cycle and biodiversity. However, frequent cloud cover in the tropical regions makes mapping evergreen forests a challenging task. In this study we developed a simple and novel mapping algorithm that is based on the temporal profile...

  10. Sufficient Descent Polak-Ribière-Polyak Conjugate Gradient Algorithm for Large-Scale Box-Constrained Optimization

    Directory of Open Access Journals (Sweden)

    Qiuyu Wang

    2014-01-01

    descent method at first finite number of steps and then by conjugate gradient method subsequently. Under some appropriate conditions, we show that the algorithm converges globally. Numerical experiments and comparisons by using some box-constrained problems from CUTEr library are reported. Numerical comparisons illustrate that the proposed method is promising and competitive with the well-known method—L-BFGS-B.

  11. Voltage control on a train system

    Science.gov (United States)

    Gordon, Susanna P.; Evans, John A.

    2004-01-20

    The present invention provides methods for preventing low train voltages and managing interference, thereby improving the efficiency, reliability, and passenger comfort associated with commuter trains. An algorithm implementing neural network technology is used to predict low voltages before they occur. Once low voltages are predicted, multiple trains can be controlled to prevent low voltage events. Further, algorithms for managing interference are presented in the present invention. Different types of interference problems are addressed in the present invention such as "Interference During Acceleration", "Interference Near Station Stops", and "Interference During Delay Recovery." Managing such interference avoids unnecessary brake/acceleration cycles during acceleration, immediately before station stops, and after substantial delays. Algorithms are demonstrated to avoid oscillatory brake/acceleration cycles due to interference and to smooth the trajectories of closely following trains. This is achieved by maintaining sufficient following distances to avoid unnecessary braking/accelerating. These methods generate smooth train trajectories, making for a more comfortable ride, and improve train motor reliability by avoiding unnecessary mode-changes between propulsion and braking. These algorithms can also have a favorable impact on traction power system requirements and energy consumption.

  12. Scales

    Science.gov (United States)

    Scales are a visible peeling or flaking of outer skin layers. These layers are called the stratum ... Scales may be caused by dry skin, certain inflammatory skin conditions, or infections. Examples of disorders that ...

  13. Adaptation of a fuzzy controller’s scaling gains using genetic algorithms for balancing an inverted pendulum

    Directory of Open Access Journals (Sweden)

    Duka Adrian-Vasile

    2011-12-01

    Full Text Available This paper examines the development of a genetic adaptive fuzzy control system for the inverted pendulum. The inverted pendulum is a classical problem in control engineering, used for testing different control algorithms. The goal is to balance the inverted pendulum in the upright position by controlling the horizontal force applied to its cart. Because it is unstable and has complicated nonlinear dynamics, the inverted pendulum is a good testbed for the development of nonconventional advanced control techniques. The fuzzy logic technique has been successfully applied to control this type of system; however, most of the time the design of the fuzzy controller is done in an ad-hoc manner, and choosing certain parameters (controller gains, membership functions) proves difficult. This paper examines the implementation of an adaptive control method based on genetic algorithms (GA), which can be used on-line to adapt the fuzzy controller’s gains in order to achieve stabilization of the pendulum. The performances of the proposed control algorithms are evaluated and shown by means of digital simulation.
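
    The sketch below illustrates the general idea of tuning two controller scaling gains with a real-coded genetic algorithm. The `control_cost` function is a hypothetical placeholder for a closed-loop performance measure (for instance an integral of absolute error from a pendulum simulation); the fuzzy controller, the pendulum model and the GA operators of the cited paper are not reproduced.

```python
import random

def tune_gains(control_cost, bounds=((0.1, 10.0), (0.1, 10.0)),
               pop_size=20, generations=50, mutation=0.1):
    """Real-coded GA minimising control_cost(gains) over two scaling gains."""
    def random_individual():
        return [random.uniform(lo, hi) for lo, hi in bounds]

    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=control_cost)[: pop_size // 2]   # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = random.sample(elite, 2)
            child = [(a + b) / 2.0 for a, b in zip(p1, p2)]      # arithmetic crossover
            child = [min(max(g + random.gauss(0.0, mutation), lo), hi)
                     for g, (lo, hi) in zip(child, bounds)]      # clipped Gaussian mutation
            children.append(child)
        pop = elite + children
    return min(pop, key=control_cost)

# Stand-in cost with a known optimum at gains (2.0, 0.5); a real cost would simulate the plant.
print(tune_gains(lambda g: (g[0] - 2.0) ** 2 + (g[1] - 0.5) ** 2))
```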

  14. Evaluation of Voltage Control Approaches for Future Smart Distribution Networks

    Directory of Open Access Journals (Sweden)

    Pengfei Wang

    2017-08-01

    Full Text Available This paper evaluates meta-heuristic and deterministic approaches for distribution network voltage control. As part of this evaluation, a novel meta-heuristic algorithm, Cuckoo Search, is applied for distribution network voltage control and compared with a deterministic voltage control algorithm, the oriented discrete coordinate decent method (ODCDM. ODCDM has been adopted in a state-of-the-art industrial product and applied in real distribution networks. These two algorithms have been evaluated under a set of test cases, which were generated to represent the voltage control problems in current and future distribution networks. Sampled test results have been presented, and findings have been discussed regarding the adoption of different optimization algorithms for current and future distribution networks.
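
    For readers unfamiliar with the meta-heuristic, the sketch below is a generic Cuckoo Search loop with Mantegna-style Levy flights over a continuous search space. It is only an illustration of the algorithm family: the network model, the discrete tap and capacitor variables, and the constraints of the distribution network voltage control problem are not reproduced.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, rng, beta=1.5):
    """Mantegna's algorithm for a Levy-distributed random step."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(objective, bounds, n_nests=15, pa=0.25, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    nests = rng.uniform(lo, hi, size=(n_nests, len(bounds)))
    fitness = np.array([objective(n) for n in nests])
    best = nests[fitness.argmin()].copy()
    for _ in range(iters):
        for i in range(n_nests):                       # new solutions via Levy flights
            step = 0.01 * levy_step(len(bounds), rng) * (nests[i] - best)
            cand = np.clip(nests[i] + step, lo, hi)
            f = objective(cand)
            if f < fitness[i]:
                nests[i], fitness[i] = cand, f
        n_drop = max(1, int(pa * n_nests))             # abandon the worst nests
        worst = fitness.argsort()[-n_drop:]
        nests[worst] = rng.uniform(lo, hi, size=(n_drop, len(bounds)))
        fitness[worst] = [objective(n) for n in nests[worst]]
        best = nests[fitness.argmin()].copy()
    return best, fitness.min()

# Example: minimise the sphere function in two dimensions.
print(cuckoo_search(lambda x: float(np.sum(x ** 2)), [(-5, 5), (-5, 5)]))
```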

  15. Vortex depinning as a nonequilibrium phase transition phenomenon: Scaling of current-voltage curves near the low and the high critical-current states in 2H-NbS2 single crystals

    Science.gov (United States)

    Bag, Biplab; Sivananda, Dibya J.; Mandal, Pabitra; Banerjee, S. S.; Sood, A. K.; Grover, A. K.

    2018-04-01

    The vortex depinning phenomenon in single crystals of 2H-NbS2 superconductors is used as a prototype for investigating properties of the nonequilibrium (NEQ) depinning phase transition. 2H-NbS2 is a unique system as it exhibits two distinct depinning thresholds, viz., a lower critical current Icl and a higher one Ich. While Icl is related to depinning of a conventional, static (pinned) vortex state, the state associated with Ich is achieved via a negative differential resistance (NDR) transition where the velocity abruptly drops. Using a generalized finite-temperature scaling ansatz, we study the scaling of current (I)-voltage (V) curves measured across Icl and Ich. Our analysis shows that for I > Icl, the moving vortex state exhibits Arrhenius-like thermally activated flow behavior. This feature persists up to a current value where an inflection in the IV curves is encountered. While past measurements have often reported a similar inflection, our analysis shows that the inflection is a signature of a NEQ phase transformation from a thermally activated moving vortex phase to a free flowing phase. Beyond this inflection in IV, a large vortex velocity flow regime is encountered in the 2H-NbS2 system, wherein the Bardeen-Stephen flux flow limit is crossed. In this regime the NDR transition is encountered, leading to the high Ich state. We show that the IV curves above Ich do not obey the generalized finite-temperature scaling ansatz obeyed near Icl. Instead, they scale according to Fisher's scaling form [Fisher, Phys. Rev. B 31, 1396 (1985), 10.1103/PhysRevB.31.1396], in which we show that thermal fluctuations do not affect the vortex flow, unlike what is found for depinning near Icl.

  16. Technological Aspects: High Voltage

    International Nuclear Information System (INIS)

    Faircloth, D C

    2013-01-01

    This paper covers the theory and technological aspects of high-voltage design for ion sources. Electric field strengths are critical to understanding high-voltage breakdown. The equations governing electric fields and the techniques to solve them are discussed. The fundamental physics of high-voltage breakdown and electrical discharges are outlined. Different types of electrical discharges are catalogued and their behaviour in environments ranging from air to vacuum are detailed. The importance of surfaces is discussed. The principles of designing electrodes and insulators are introduced. The use of high-voltage platforms and their relation to system design are discussed. The use of commercially available high-voltage technology such as connectors, feedthroughs and cables are considered. Different power supply technologies and their procurement are briefly outlined. High-voltage safety, electric shocks and system design rules are covered. (author)

  17. Technological Aspects: High Voltage

    CERN Document Server

    Faircloth, D.C.

    2013-12-16

    This paper covers the theory and technological aspects of high-voltage design for ion sources. Electric field strengths are critical to understanding high-voltage breakdown. The equations governing electric fields and the techniques to solve them are discussed. The fundamental physics of high-voltage breakdown and electrical discharges are outlined. Different types of electrical discharges are catalogued and their behaviour in environments ranging from air to vacuum are detailed. The importance of surfaces is discussed. The principles of designing electrodes and insulators are introduced. The use of high-voltage platforms and their relation to system design are discussed. The use of commercially available high-voltage technology such as connectors, feedthroughs and cables are considered. Different power supply technologies and their procurement are briefly outlined. High-voltage safety, electric shocks and system design rules are covered.

  18. Stray voltage mitigation

    Energy Technology Data Exchange (ETDEWEB)

    Jamali, B.; Piercy, R.; Dick, P. [Kinetrics Inc., Toronto, ON (Canada). Transmission and Distribution Technologies

    2008-04-09

    This report discussed issues related to farm stray voltage and evaluated mitigation strategies and costs for limiting stray voltage on farms. A 3-phase, 3-wire system with no neutral ground was used throughout North America before the 1930s. Transformers were connected phase to phase without any electrical connection between the primary and secondary sides of the transformers. Distribution voltage levels were then increased and multi-grounded neutral wires were added. The earth now forms a parallel return path for the neutral current that allows part of the neutral current to flow continuously through the earth. This arrangement is responsible for causing stray voltage. Stray voltage causes uneven milk production and increased incidence of mastitis, and can create a reluctance among cows to drink water. Off-farm sources of stray voltage include phase unbalances, undersized neutral wire, and high resistance splices on the neutral wire. Mitigation strategies for reducing stray voltage include phase balancing; conversion from single to 3-phase; increasing distribution voltage levels; and changing pole configurations. 22 refs., 5 tabs., 13 figs.

  19. High voltage engineering

    CERN Document Server

    Rizk, Farouk AM

    2014-01-01

    Inspired by a new revival of worldwide interest in extra-high-voltage (EHV) and ultra-high-voltage (UHV) transmission, High Voltage Engineering merges the latest research with the extensive experience of the best in the field to deliver a comprehensive treatment of electrical insulation systems for the next generation of utility engineers and electric power professionals. The book offers extensive coverage of the physical basis of high-voltage engineering, from insulation stress and strength to lightning attachment and protection and beyond. Presenting information critical to the design, selec

  20. High voltage test techniques

    CERN Document Server

    Kind, Dieter

    2001-01-01

    The second edition of High Voltage Test Techniques has been completely revised. The present revision takes into account the latest international developments in High Voltage and Measurement technology, making it an essential reference for engineers in the testing field.High Voltage Technology belongs to the traditional area of Electrical Engineering. However, this is not to say that the area has stood still. New insulating materials, computing methods and voltage levels repeatedly pose new problems or open up methods of solution; electromagnetic compatibility (EMC) or components and systems al

  1. An efficient, large-scale, non-lattice-detection algorithm for exhaustive structural auditing of biomedical ontologies.

    Science.gov (United States)

    Zhang, Guo-Qiang; Xing, Guangming; Cui, Licong

    2018-04-01

    One of the basic challenges in developing structural methods for systematic auditing of the quality of biomedical ontologies is the computational cost usually involved in exhaustive sub-graph analysis. We introduce ANT-LCA, a new algorithm for computing all non-trivial lowest common ancestors (LCA) of each pair of concepts in the hierarchical order induced by an ontology. The computation of LCA is a fundamental step for the non-lattice approach to ontology quality assurance. Distinct from existing approaches, ANT-LCA only computes LCAs for non-trivial pairs, those having at least one common ancestor. To skip all trivial pairs that may be of no practical interest, ANT-LCA employs a simple but innovative algorithmic strategy combining topological order and dynamic programming to keep track of non-trivial pairs. We provide correctness proofs and demonstrate a substantial reduction in computational time for the two largest biomedical ontologies: SNOMED CT and the Gene Ontology (GO). ANT-LCA achieved an average computation time of 30 and 3 sec per version for SNOMED CT and GO, respectively, about 2 orders of magnitude faster than the best known approaches. Our algorithm overcomes a fundamental computational barrier in sub-graph based structural analysis of large ontological systems. It enables the implementation of a new breed of structural auditing methods that not only identify potential problematic areas, but also automatically suggest changes to fix the issues. Such structural auditing methods can lead to more effective tools supporting ontology quality assurance work. Copyright © 2018 Elsevier Inc. All rights reserved.

  2. A quasi-static algorithm that includes effects of characteristic time scales for simulating failures in brittle materials

    KAUST Repository

    Liu, Jinxing

    2013-04-24

    When a brittle heterogeneous material is simulated via lattice models, the quasi-static failure depends on the relative magnitudes of Telem, the characteristic releasing time of the internal forces of the broken elements, and Tlattice, the characteristic relaxation time of the lattice, both of which are infinitesimal compared with Tload, the characteristic loading period. The load-unload (L-U) method is used for one extreme, Telem << Tlattice, whereas the force-release (F-R) method is used for the other, Telem >> Tlattice. For cases between the above two extremes, we develop a new algorithm by combining the L-U and the F-R trial displacement fields to construct the new trial field. As a result, our algorithm includes both L-U and F-R failure characteristics, which allows us to observe the influence of the ratio of Telem to Tlattice by adjusting their contributions in the trial displacement field. Therefore, the material dependence of the snap-back instabilities is implemented by introducing one snap-back parameter γ. Although in principle catastrophic failures can hardly be predicted accurately without knowing all microstructural information, effects of γ can be captured by numerical simulations conducted on samples with exactly the same microstructure but different γs. Such a same-specimen-based study shows how the lattice behaves along with the changing ratio of the L-U and F-R components. © 2013 The Author(s).

  3. Robust and rapid algorithms facilitate large-scale whole genome sequencing downstream analysis in an integrative framework.

    Science.gov (United States)

    Li, Miaoxin; Li, Jiang; Li, Mulin Jun; Pan, Zhicheng; Hsu, Jacob Shujui; Liu, Dajiang J; Zhan, Xiaowei; Wang, Junwen; Song, Youqiang; Sham, Pak Chung

    2017-05-19

    Whole genome sequencing (WGS) is a promising strategy to unravel variants or genes responsible for human diseases and traits. However, there is a lack of robust platforms for a comprehensive downstream analysis. In the present study, we first proposed three novel algorithms, sequence gap-filled gene feature annotation, bit-block encoded genotypes and sectional fast access to text lines, to address three fundamental problems. The three algorithms then formed the infrastructure of a robust parallel computing framework, KGGSeq, for integrating downstream analysis functions for whole genome sequencing data. KGGSeq has been equipped with a comprehensive set of analysis functions for quality control, filtration, annotation, pathogenic prediction and statistical tests. In tests with whole genome sequencing data from the 1000 Genomes Project, KGGSeq annotated several thousand more reliable non-synonymous variants than other widely used tools (e.g. ANNOVAR and SNPEff). It took only around half an hour on a small server with 10 CPUs to access genotypes of ∼60 million variants of 2504 subjects, while a popular alternative tool required around one day. KGGSeq's bit-block genotype format used 1.5% or less space to flexibly represent phased or unphased genotypes with multiple alleles and calculated genotypic correlations over 1000 times faster. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
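
    As an illustration of the kind of space saving a bit-block genotype encoding gives, the sketch below packs unphased biallelic genotype calls into two bits each. This is a hypothetical, simplified layout: the actual KGGSeq bit-block format also covers phased and multi-allelic genotypes and is not reproduced here.

```python
import numpy as np

# Hypothetical 2-bit code for unphased biallelic genotype calls.
CODE = {"0/0": 0b00, "0/1": 0b01, "1/1": 0b10, "./.": 0b11}

def pack_genotypes(calls):
    """Pack genotype strings into a uint8 array, four genotypes per byte."""
    packed = np.zeros((len(calls) + 3) // 4, dtype=np.uint8)
    for i, call in enumerate(calls):
        packed[i // 4] |= CODE[call] << (2 * (i % 4))
    return packed

def unpack_genotype(packed, i):
    """Recover the 2-bit code of genotype i without expanding the whole array."""
    return int(packed[i // 4] >> (2 * (i % 4))) & 0b11

calls = ["0/0", "0/1", "1/1", "./.", "0/1"]
packed = pack_genotypes(calls)
print(len(packed), [unpack_genotype(packed, i) for i in range(len(calls))])  # 2 bytes, [0, 1, 2, 3, 1]
```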

  4. Power Efficient Design of DisplayPort (7.0) Using Low-voltage differential signaling IO Standard Via UltraScale Field Programming Gate Arrays

    DEFF Research Database (Denmark)

    Das, Bhagwan; Abdullah, M.F.L.; Hussain, Dil muhammed Akbar

    2017-01-01

    Port (7.0) need be reduced. In this paper, a power efficient design for DisplayPort (7.0) is proposed using the LVDS IO Standard. The proposed design is tested at different frequencies: 500 MHz, 700 MHz, 1.0 GHz, and 1.6 GHz. The design is implemented in VHDL on an UltraScale FPGA. It is determined...... the power consumption of the VHDL-based design of DisplayPort (7.0) can be reduced by 92% using the LVDS IO Standard at all frequencies (500 MHz, 700 MHz, 1.0 GHz, and 1.6 GHz), compared to the VHDL-based design of DisplayPort (7.0) without an IO Standard. The proposed VHDL-based design of DisplayPort (7.0) using the LVDS IO...... Standard offers zero power consumption for DisplayPort (7.0) in standby mode. The VHDL-based design of DisplayPort (7.0) using the LVDS IO Standard will be helpful for processing high resolution video at low power consumption....

  5. Voltage linear transformation circuit design

    Science.gov (United States)

    Sanchez, Lucas R. W.; Jin, Moon-Seob; Scott, R. Phillip; Luder, Ryan J.; Hart, Michael

    2017-09-01

    Many engineering projects require automated control of analog voltages over a specified range. We have developed a computer interface comprising custom hardware and MATLAB code to provide real-time control of a Thorlabs adaptive optics (AO) kit. The hardware interface includes an op amp cascade to linearly shift and scale a voltage range. With easy modifications, any linear transformation can be accommodated. In AO applications, the design is suitable to drive a range of different types of deformable and fast steering mirrors (FSMs). Our original motivation and application was to control an Optics in Motion (OIM) FSM, which requires the customer to devise a unique interface to supply voltages to the mirror controller to set the mirror's angular deflection. The FSM is in an optical servo loop with a wavefront sensor (WFS), which controls the dynamic behavior of the mirror's deflection. The code acquires wavefront data from the WFS and fits a plane, which is subsequently converted into its corresponding angular deflection. The FSM provides +/-3° optical angular deflection for a +/-10 V voltage swing. Voltages are applied to the mirror via a National Instruments digital-to-analog converter (DAC) followed by an op amp cascade circuit. This system has been integrated into our Thorlabs AO testbed, which currently runs at 11 Hz, but with planned software upgrades the system update rate is expected to improve to 500 Hz. To show that the FSM subsystem is ready for this speed, we conducted two different PID tuning runs at different step commands. Once 500 Hz is achieved, we plan to make the code and method for our interface solution freely available to the community.
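
    The voltage shift-and-scale performed by the op amp cascade is simply an affine map between two ranges; the sketch below computes the corresponding gain and offset in software. The 0-5 V DAC range used in the example is an illustrative assumption, not a specification of the hardware described above.

```python
def linear_map(x, in_range, out_range):
    """Map x linearly from in_range = (in_min, in_max) to out_range = (out_min, out_max).

    This mirrors what an op amp cascade does in hardware: out = gain * x + offset.
    """
    in_min, in_max = in_range
    out_min, out_max = out_range
    gain = (out_max - out_min) / (in_max - in_min)
    offset = out_min - gain * in_min
    return gain * x + offset

# Illustrative numbers only: a 0-5 V DAC output rescaled to a +/-10 V mirror drive,
# and the +/-10 V drive interpreted as a +/-3 degree optical deflection.
drive = linear_map(3.75, (0.0, 5.0), (-10.0, 10.0))    # -> 5.0 V
angle = linear_map(drive, (-10.0, 10.0), (-3.0, 3.0))  # -> 1.5 degrees
print(drive, angle)
```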

  6. GENESIS: a hybrid-parallel and multi-scale molecular dynamics simulator with enhanced sampling algorithms for biomolecular and cellular simulations.

    Science.gov (United States)

    Jung, Jaewoon; Mori, Takaharu; Kobayashi, Chigusa; Matsunaga, Yasuhiro; Yoda, Takao; Feig, Michael; Sugita, Yuji

    2015-07-01

    GENESIS (Generalized-Ensemble Simulation System) is a new software package for molecular dynamics (MD) simulations of macromolecules. It has two MD simulators, called ATDYN and SPDYN. ATDYN is parallelized based on an atomic decomposition algorithm for the simulations of all-atom force-field models as well as coarse-grained Go-like models. SPDYN is highly parallelized based on a domain decomposition scheme, allowing large-scale MD simulations on supercomputers. Hybrid schemes combining OpenMP and MPI are used in both simulators to target modern multicore computer architectures. Key advantages of GENESIS are (1) the highly parallel performance of SPDYN for very large biological systems consisting of more than one million atoms and (2) the availability of various REMD algorithms (T-REMD, REUS, multi-dimensional REMD for both all-atom and Go-like models under the NVT, NPT, NPAT, and NPγT ensembles). The former is achieved by a combination of the midpoint cell method and the efficient three-dimensional Fast Fourier Transform algorithm, where the domain decomposition space is shared in real-space and reciprocal-space calculations. Other features in SPDYN, such as avoiding concurrent memory access, reducing communication times, and usage of parallel input/output files, also contribute to the performance. We show the REMD simulation results of a mixed (POPC/DMPC) lipid bilayer as a real application using GENESIS. GENESIS is released as free software under the GPLv2 licence and can be easily modified for the development of new algorithms and molecular models. WIREs Comput Mol Sci 2015, 5:310-323. doi: 10.1002/wcms.1220.

  7. Multilevel Evolutionary Algorithm that Optimizes the Structure of Scale-Free Networks for the Promotion of Cooperation in the Prisoner's Dilemma game.

    Science.gov (United States)

    Liu, Penghui; Liu, Jing

    2017-06-28

    Understanding the emergence of cooperation has long been a challenge across disciplines. Even if network reciprocity reflects the importance of population structure in promoting cooperation, it remains an open question how population structures can be optimized, thereby enhancing cooperation. In this paper, we attempt to apply the evolutionary algorithm (EA) to solve this highly complex problem. However, as it is hard to evaluate the fitness (cooperation level) of population structures, simply employing the canonical evolutionary algorithm (EA) may fail in optimization. Thus, we propose a new EA variant named mlEA-CPD-SFN to promote the cooperation level of scale-free networks (SFNs) in the Prisoner's Dilemma Game (PDG). Meanwhile, to verify that the preceding conclusions may not apply to this problem, we also provide the optimization results of the comparative experiment (EAcluster), which optimizes the clustering coefficient of structures. Even if preceding research concluded that highly clustered scale-free networks enhance cooperation, we find EAcluster does not perform desirably, while mlEA-CPD-SFN performs efficiently in different optimization environments. We hope that mlEA-CPD-SFN may help promote the structure of species in nature and that more general properties that enhance cooperation can be learned from the output structures.

  8. Using memory-efficient algorithm for large-scale time-domain modeling of surface plasmon polaritons propagation in organic light emitting diodes

    Science.gov (United States)

    Zakirov, Andrey; Belousov, Sergei; Valuev, Ilya; Levchenko, Vadim; Perepelkina, Anastasia; Zempo, Yasunari

    2017-10-01

    We demonstrate an efficient approach to numerical modeling of optical properties of large-scale structures with typical dimensions much greater than the wavelength of light. For this purpose, we use the finite-difference time-domain (FDTD) method enhanced with a memory efficient Locally Recursive non-Locally Asynchronous (LRnLA) algorithm called DiamondTorre and implemented for General Purpose Graphical Processing Units (GPGPU) architecture. We apply our approach to simulation of optical properties of organic light emitting diodes (OLEDs), which is an essential step in the process of designing OLEDs with improved efficiency. Specifically, we consider a problem of excitation and propagation of surface plasmon polaritons (SPPs) in a typical OLED, which is a challenging task given that SPP decay length can be about two orders of magnitude greater than the wavelength of excitation. We show that with our approach it is possible to extend the simulated volume size sufficiently so that SPP decay dynamics is accounted for. We further consider an OLED with periodically corrugated metallic cathode and show how the SPP decay length can be greatly reduced due to scattering off the corrugation. Ultimately, we compare the performance of our algorithm to the conventional FDTD and demonstrate that our approach can efficiently be used for large-scale FDTD simulations with the use of only a single GPGPU-powered workstation, which is not practically feasible with the conventional FDTD.
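
    To make the leapfrog field updates that FDTD is built on concrete, the sketch below is a bare-bones one-dimensional Yee update in normalised units. It is far simpler than the 3D LRnLA/GPGPU implementation described above: there are no material models, no absorbing boundaries, and no surface plasmon physics.

```python
import numpy as np

def fdtd_1d(n_cells=400, n_steps=1000, source_cell=50, courant=0.5):
    """Minimal 1D FDTD (free space, normalised units) with a hard sinusoidal source."""
    ez = np.zeros(n_cells)   # electric field
    hy = np.zeros(n_cells)   # magnetic field
    for t in range(n_steps):
        # leapfrog in time: H from the spatial difference of E, then E from H
        hy[:-1] += courant * (ez[1:] - ez[:-1])
        ez[1:] += courant * (hy[1:] - hy[:-1])
        ez[source_cell] = np.sin(2 * np.pi * 0.02 * t)   # hard source
    return ez

# Waves reflect at the (untreated) domain ends; this just reports the final peak field.
print(float(np.max(np.abs(fdtd_1d()))))
```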

  9. Optimized Placement of Wind Turbines in Large-Scale Offshore Wind Farm using Particle Swarm Optimization Algorithm

    DEFF Research Database (Denmark)

    Hou, Peng; Hu, Weihao; Soltani, Mohsen

    2015-01-01

    With the increasing size of wind farms, the impact of the wake effect on wind farm energy yields becomes more and more evident. The arrangement of the wind turbine (WT) locations will influence the capital investment and contribute to the wake losses which reduce energy production....... As a consequence, the optimized placement of the wind turbines may be done by considering the wake effect as well as the component costs within the wind farm. In this paper, a mathematical model which includes the variation of both wind direction and wake deficit is proposed. The problem is formulated by using...... Levelized Production Cost (LPC) as the objective function. The optimization procedure is performed by a Particle Swarm Optimization (PSO) algorithm with the purpose of maximizing the energy yields while minimizing the total investment. The simulation results indicate that the proposed method is effective...
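
    The sketch below is a generic particle swarm optimisation loop for a continuous objective, included to make the method concrete. The wake model, the turbine coordinate encoding and the Levelized Production Cost objective of the cited study are not reproduced; the sphere function at the end is only a stand-in.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise objective(x) over box bounds with a basic global-best PSO."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    x = rng.uniform(lo, hi, size=(n_particles, len(bounds)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # inertia + cognitive + social
        x = np.clip(x + v, lo, hi)
        val = np.array([objective(p) for p in x])
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, float(pbest_val.min())

# Stand-in objective: the 2D sphere function.
print(pso(lambda p: float(np.sum(p ** 2)), [(-5, 5), (-5, 5)]))
```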

  10. Algorithm and Application of Gcp-Independent Block Adjustment for Super Large-Scale Domestic High Resolution Optical Satellite Imagery

    Science.gov (United States)

    Sun, Y. S.; Zhang, L.; Xu, B.; Zhang, Y.

    2018-04-01

    The accurate positioning of optical satellite imagery without ground control is a precondition for remote sensing applications and small/medium scale mapping of large areas abroad or with large volumes of images. In this paper, aiming at the geometric features of optical satellite imagery, and based on a widely used optimization method for constrained problems called the Alternating Direction Method of Multipliers (ADMM) together with RFM least-squares block adjustment, we propose a GCP-independent block adjustment method for large-scale domestic high resolution optical satellite imagery - GISIBA (GCP-Independent Satellite Imagery Block Adjustment), which is easy to parallelize and highly efficient. In this method, virtual "average" control points are built to solve the rank defect problem and to support qualitative and quantitative analysis in block adjustment without ground control. The test results prove that the horizontal and vertical accuracies of multi-covered and multi-temporal satellite images are better than 10 m and 6 m, respectively. Meanwhile, the mosaicking problem of adjacent areas in large-area DOM production can be solved if public geographic information data are introduced as horizontal and vertical constraints in the block adjustment process. Finally, through experiments using GF-1 and ZY-3 satellite images over several typical test areas, the reliability, accuracy and performance of the developed procedure are presented and studied.

  11. An Improved Algorithm Based on Minimum Spanning Tree for Multi-scale Segmentation of Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    LI Hui

    2015-07-01

    Full Text Available As the basis of object-oriented information extraction from remote sensing imagery, image segmentation using multiple image features, exploiting spatial context information, and following a multi-scale approach is currently a research focus. Using an optimization approach from graph theory, an improved multi-scale image segmentation method is proposed. In this method, the image is first processed with a coherence-enhancing anisotropic diffusion filter, followed by a minimum spanning tree segmentation approach, and the resulting segments are merged with reference to a minimum heterogeneity criterion. The heterogeneity criterion is defined as a function of the spectral characteristics and shape parameters of the segments. The purpose of the merging step is to realize multi-scale image segmentation. Tested on two images, the proposed method was visually and quantitatively compared with the segmentation method employed in the eCognition software. The results show that the proposed method is effective and outperforms the latter in areas with subtle spectral differences.
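
    In the same spirit, the sketch below is a greedy, Kruskal-style merging of pixels over a 4-neighbour graph using a union-find structure, which is the core mechanism behind minimum-spanning-tree segmentation. The anisotropic diffusion filtering and the spectral/shape heterogeneity criterion of the cited method are not reproduced; a plain intensity-difference threshold stands in for them.

```python
import numpy as np

def mst_segment(image, threshold=10.0):
    """Greedy MST-style segmentation of a 2D grayscale array (4-connectivity)."""
    h, w = image.shape
    parent = list(range(h * w))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    edges = []
    for y in range(h):
        for x in range(w):
            i = y * w + x
            if x + 1 < w:
                edges.append((abs(float(image[y, x]) - float(image[y, x + 1])), i, i + 1))
            if y + 1 < h:
                edges.append((abs(float(image[y, x]) - float(image[y + 1, x])), i, i + w))
    for weight, a, b in sorted(edges):      # ascending edge weights, as in Kruskal's algorithm
        if weight > threshold:
            break
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra
    return np.array([find(i) for i in range(h * w)]).reshape(h, w)

img = np.array([[0, 0, 50, 50],
                [0, 0, 50, 50]], dtype=float)
print(mst_segment(img))   # two segments: a dark left block and a bright right block
```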

  12. based dynamic voltage restorer

    African Journals Online (AJOL)

    HOD

    operation due to the presence of increased use of nonlinear loads (computers, microcontrollers ... simulations of a dynamic voltage restorer (DVR) were achieved using MATLAB/Simulink. ..... using a Discrete PWM generator, then the IGBT inverter.

  13. Improved numerical algorithm and experimental validation of a system thermal-hydraulic/CFD coupling method for multi-scale transient simulations of pool-type reactors

    International Nuclear Information System (INIS)

    Toti, A.; Vierendeels, J.; Belloni, F.

    2017-01-01

    Highlights: • A system thermal-hydraulic/CFD coupling methodology is proposed for high-fidelity transient flow analyses. • The method is based on domain decomposition and an implicit numerical scheme. • A novel interface Quasi-Newton algorithm is implemented to improve stability and convergence rate. • Preliminary validation analyses on the TALL-3D experiment. - Abstract: The paper describes the development and validation of a coupling methodology between the best-estimate system thermal-hydraulic code RELAP5-3D and the CFD code FLUENT, conceived for high fidelity plant-scale safety analyses of pool-type reactors. The computational tool is developed to assess the impact of three-dimensional phenomena occurring in accidental transients such as loss of flow (LOF) in the research reactor MYRRHA, currently in the design phase at the Belgian Nuclear Research Centre, SCK•CEN. A partitioned, implicit domain decomposition coupling algorithm is implemented, in which the coupled domains exchange thermal-hydraulic variables at coupling boundary interfaces. Numerical stability and interface convergence rates are improved by a novel interface Quasi-Newton algorithm, which is compared in this paper with previously tested numerical schemes. The developed computational method has been assessed for validation purposes against the experiment performed at the test facility TALL-3D, operated by the Royal Institute of Technology (KTH) in Sweden. This paper details the results of the simulation of a loss of forced convection test, showing the capability of the developed methodology to predict transients influenced by local three-dimensional phenomena.

  14. High voltage engineering fundamentals

    CERN Document Server

    Kuffel, E; Hammond, P

    1984-01-01

    Provides a comprehensive treatment of high voltage engineering fundamentals at the introductory and intermediate levels. It covers: techniques used for generation and measurement of high direct, alternating and surge voltages for general application in industrial testing and selected special examples found in basic research; analytical and numerical calculation of electrostatic fields in simple practical insulation system; basic ionisation and decay processes in gases and breakdown mechanisms of gaseous, liquid and solid dielectrics; partial discharges and modern discharge detectors; and over

  15. Low-voltage gyrotrons

    International Nuclear Information System (INIS)

    Glyavin, M. Yu.; Zavolskiy, N. A.; Sedov, A. S.; Nusinovich, G. S.

    2013-01-01

    For a long time, gyrotrons were primarily developed for electron cyclotron heating and current drive of plasmas in controlled fusion reactors, where multi-megawatt, quasi-continuous millimeter-wave power is required. In addition to this important application, there are other applications (and their number increases with time) which do not require a very high power level, but for which such issues as the ability to operate at low voltages and device compactness are very important. For example, gyrotrons are of interest for dynamic nuclear polarization, which improves the sensitivity of nuclear magnetic resonance spectroscopy. In this paper, some issues important for the operation of gyrotrons driven by low-voltage electron beams are analyzed. Emphasis is placed on the efficiency of low-voltage gyrotron operation at the fundamental and higher cyclotron harmonics. These efficiencies, calculated with ohmic losses taken into account, were first determined in the framework of the generalized gyrotron theory based on the cold-cavity approximation. Then, more accurate, self-consistent calculations for the fundamental and second harmonic low-voltage sub-THz gyrotron designs were carried out. Results of these calculations are presented and discussed. It is shown that operation of fundamental and second harmonic gyrotrons with noticeable efficiencies is possible even at voltages as low as 5–10 kV. Even third harmonic gyrotrons can operate at voltages of about 15 kV, albeit with rather low efficiency (1%–2% in the submillimeter wavelength region).

  16. Heuristic algorithms for joint optimization of unicast and anycast traffic in elastic optical network–based large–scale computing systems

    Directory of Open Access Journals (Sweden)

    Markowski Marcin

    2017-09-01

    Full Text Available In recent years elastic optical networks have been perceived as a prospective choice for future optical networks due to better adjustment and utilization of optical resources than is the case with traditional wavelength division multiplexing networks. In this paper we investigate the elastic architecture as the communication network for distributed data centers. We address the problems of optimization of routing and spectrum assignment for large-scale computing systems based on an elastic optical architecture; particularly, we concentrate on anycast user-to-data-center traffic optimization. We assume that the computational resources of the data centers are limited. For these offline problems we formulate an integer linear programming model and propose a few heuristics, including a meta-heuristic algorithm based on a tabu search method. We report computational results, presenting the quality of approximate solutions and the efficiency of the proposed heuristics, and we also analyze and compare some data center allocation scenarios.
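
    The sketch below is a generic tabu search skeleton for a combinatorial minimisation problem, shown only to make the meta-heuristic concrete. The routing and spectrum assignment encoding, the anycast traffic model and the data-centre capacity constraints of the cited work are not reproduced; `neighbours` and `cost` are hypothetical placeholders supplied by the caller, and the toy usage at the end simply walks an integer toward a target.

```python
from collections import deque

def tabu_search(initial, neighbours, cost, iters=500, tabu_len=20):
    """Generic tabu search: neighbours(s) yields (move, state) pairs, cost(s) is minimised."""
    current = best = initial
    best_cost = cost(initial)
    tabu = deque(maxlen=tabu_len)          # short-term memory of recent moves
    for _ in range(iters):
        candidates = [(move, s) for move, s in neighbours(current)
                      if move not in tabu or cost(s) < best_cost]   # aspiration criterion
        if not candidates:
            break
        move, current = min(candidates, key=lambda ms: cost(ms[1]))
        tabu.append(move)
        if cost(current) < best_cost:
            best, best_cost = current, cost(current)
    return best, best_cost

# Toy usage: reach the value 17 by +1/-1 moves.
best, c = tabu_search(0,
                      neighbours=lambda s: [("+1", s + 1), ("-1", s - 1)],
                      cost=lambda s: abs(s - 17))
print(best, c)   # 17 0
```

    The aspiration criterion admits a tabu move whenever it improves on the best cost found so far, which keeps the short-term memory from blocking genuinely good moves.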

  17. A novel image fusion algorithm based on 2D scale-mixing complex wavelet transform and Bayesian MAP estimation for multimodal medical images

    Directory of Open Access Journals (Sweden)

    Abdallah Bengueddoudj

    2017-05-01

    Full Text Available In this paper, we propose a new image fusion algorithm based on the two-dimensional Scale-Mixing Complex Wavelet Transform (2D-SMCWT). The fusion of the detail 2D-SMCWT coefficients is performed via a Bayesian Maximum a Posteriori (MAP) approach by considering a trivariate statistical model for the local neighborhood of 2D-SMCWT coefficients. For the approximation coefficients, a new fusion rule based on Principal Component Analysis (PCA) is applied. We conduct several experiments using three different groups of multimodal medical images to evaluate the performance of the proposed method. The obtained results prove the superiority of the proposed method over state-of-the-art fusion methods in terms of visual quality and several commonly used metrics. The robustness of the proposed method is further tested against different types of noise. The plots of the fusion metrics establish the accuracy of the proposed fusion method.

  18. A new approach to voltage sag detection based on wavelet transform

    Energy Technology Data Exchange (ETDEWEB)

    Gencer, Oezguer; Oeztuerk, Semra; Erfidan, Tarik [Kocaeli University, Faculty of Engineering, Department of Electrical Engineering, Veziroglu Kampuesue, Eski Goelcuek Yolu, Kocaeli (Turkey)

    2010-02-15

    In this work, a new voltage sag detection method based on the wavelet transform is developed. Voltage sag detection algorithms developed so far have proved their efficiency and computational ability, but using several windowing techniques leads to long computational times for disturbance detection. Researchers have also been working on separating voltage sags from other voltage disturbances for the last decade. Due to increasingly strict power quality standards, new high performance disturbance detection algorithms are necessary. For this purpose, the wavelet technique is used for detecting voltage sag duration and magnitude. The developed voltage sag detection algorithm is implemented on a high speed microcontroller. Test results show that the new approach provides very accurate and satisfactory voltage sag detection. (author)
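
    A minimal sketch of the underlying idea, using the PyWavelets package, is shown below: the level-1 detail coefficients of a discrete wavelet transform spike at abrupt waveform changes, so thresholding them localises sag boundaries. The wavelet, the threshold rule and the synthetic test signal are illustrative assumptions; the magnitude and duration estimation of the cited method, and its microcontroller implementation, are not reproduced.

```python
import numpy as np
import pywt  # PyWavelets

def detect_edges(signal, fs, wavelet="db4", k=5.0):
    """Return times (s) where level-1 detail coefficients exceed k times their median magnitude."""
    _, cd1 = pywt.dwt(signal, wavelet)       # single-level DWT: approximation, detail
    thresh = k * np.median(np.abs(cd1)) + 1e-12
    idx = np.where(np.abs(cd1) > thresh)[0]
    return idx * 2.0 / fs                     # each detail sample spans roughly two input samples

# Synthetic 50 Hz waveform with a 40% sag between about 0.10 s and 0.18 s.
fs = 6400
t = np.arange(0.0, 0.3, 1.0 / fs)
v = np.sin(2 * np.pi * 50 * t)
v[(t >= 0.1025) & (t < 0.1775)] *= 0.6
print(detect_edges(v, fs))   # detections cluster near the sag boundaries
```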

  19. Device for monitoring cell voltage

    Science.gov (United States)

    Doepke, Matthias [Garbsen, DE; Eisermann, Henning [Edermissen, DE

    2012-08-21

    A device for monitoring a rechargeable battery having a number of electrically connected cells includes at least one current interruption switch for interrupting current flowing through at least one associated cell and a plurality of monitoring units for detecting cell voltage. Each monitoring unit is associated with a single cell and includes a reference voltage unit for producing a defined reference threshold voltage and a voltage comparison unit for comparing the reference threshold voltage with a partial cell voltage of the associated cell. The reference voltage unit is electrically supplied from the cell voltage of the associated cell. The voltage comparison unit is coupled to the at least one current interruption switch for interrupting the current of at least the current flowing through the associated cell, with a defined minimum difference between the reference threshold voltage and the partial cell voltage.

  20. Active Power Filter DC Bus Voltage Piecewise Reaching Law Variable Structure Control

    OpenAIRE

    Liu, Baolian; Ding, Zujun; Zhao, Huanyu; Jin, Defei

    2014-01-01

    DC bus voltage stability control is a key technology to ensure that an Active Power Filter (APF) operates stably. External disturbances such as power grid and load fluctuations, as well as changes in system parameters, may affect the stability of the APF DC bus voltage and the normal operation of the APF. The mathematical model of the DC bus voltage is established according to the power balance principle, and a DC bus voltage piecewise reaching law variable structure control algorithm is proposed to solve the ...
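
    For orientation, the sketch below integrates a conventional exponential reaching law for a scalar sliding variable, which is the basic building block that reaching-law variable structure controllers refine. The piecewise reaching law of the cited paper and the DC bus voltage model itself are not reproduced; the gains and time step are illustrative assumptions.

```python
import numpy as np

def reaching_law_step(s, epsilon=0.5, k=5.0, dt=1e-3):
    """One Euler step of the exponential reaching law ds/dt = -epsilon*sign(s) - k*s."""
    return s + dt * (-epsilon * np.sign(s) - k * s)

# Drive the sliding variable (e.g. a DC bus voltage error term) toward zero.
s = 1.0
history = []
for _ in range(2000):
    s = reaching_law_step(s)
    history.append(s)
print(history[0], history[-1])   # decays to a small chattering band around zero
```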

  1. ICT Based HIL Validation of Voltage Control Coordination in Smart Grids Scenarios

    Directory of Open Access Journals (Sweden)

    Kamal Shahid

    2018-05-01

    Full Text Available This paper aims to validate the capability of renewable generation (ReGen) plants to provide an online voltage control coordination ancillary service to system operators in smart grids. Simulation studies of online coordination concepts for ReGen plants have already been presented in previous publications. Here, however, the results are validated through a real-time Hardware-In-the-Loop framework using an exemplary benchmark grid area in Denmark as a base case that includes flexible renewable power plants providing voltage control functionality. The provision of voltage control support from ReGen plants is verified on a large-scale power system against the baseline scenario, considering the hierarchical industrial controller platforms used nowadays in power plants. Moreover, the verification of online voltage control support is carried out by taking into account a communication network as well as the associated data traffic patterns obtained from a real network. Based on the sets of recordings, guidelines and recommendations for practical implementation of the developed control algorithms for the targeted ancillary service are made. This provides deep insight for stakeholders, i.e., wind turbine and photovoltaic system manufacturers and system operators, regarding the existing boundaries for current technologies and the requirements for accommodating the new ancillary services in industrial applications.

  2. High frequency breakdown voltage

    International Nuclear Information System (INIS)

    Chu, Thanh Duy.

    1992-03-01

    This report contains information about the effect of frequency on the breakdown voltage of an air gap at standard pressure and temperature, 76 mm Hg and 0 degrees C, respectively. The frequencies of interest are 47 MHz and 60 MHz. Additionally, breakdown in vacuum is briefly considered. The breakdown mechanism is explained on the basis of collision and ionization. The presence of the positive ions produced by ionization enhances the field in the gap, and thus determines the breakdown. When a low-frequency voltage is applied across the gap, the breakdown mechanism is the same as that caused by a DC or static voltage. However, when the frequency exceeds the first critical value fc, the positive ions are trapped in the gap, increasing the field considerably. This makes breakdown occur earlier; in other words, the breakdown voltage is lowered. As the frequency increases by two decades or more, the second critical frequency, fce, is reached. This time the electrons start being trapped in the gap. Those electrons that travel multiple times across the gap before reaching the positive electrode result in an enormous number of electrons and positive ions being present in the gap. The result is a further decrease of the breakdown voltage. However, increasing the frequency does not decrease the breakdown voltage correspondingly. In fact, the associated breakdown field intensity is almost constant (about 29 kV/cm). The reason is that the recombination rate increases and counterbalances the production rate, thus reducing the effect of the positive ion concentration in the gap. The theory of collision and ionization does not apply to breakdown in vacuum. It seems that breakdown in vacuum is primarily determined by the irregularities on the surfaces of the electrodes. Therefore, the effect of frequency on this breakdown, if any, is of secondary importance.

  3. Experimental validation of prototype high voltage bushing

    Science.gov (United States)

    Shah, Sejal; Tyagi, H.; Sharma, D.; Parmar, D.; M. N., Vishnudev; Joshi, K.; Patel, K.; Yadav, A.; Patel, R.; Bandyopadhyay, M.; Rotti, C.; Chakraborty, A.

    2017-08-01

    The Prototype High Voltage Bushing (PHVB) is a scaled-down configuration of the DNB High Voltage Bushing (HVB) of ITER. It is designed for operation at 50 kV DC to ensure operational performance and thereby confirm the design configuration of the DNB HVB. Two concentric insulators, viz. ceramic and fiber reinforced polymer (FRP) rings, are used as a double-layered vacuum boundary for 50 kV isolation between the grounded and high voltage flanges. Stress shields are designed for smooth electric field distribution. During ceramic-to-Kovar brazing, spilling cannot be controlled, which may lead to high localized electrostatic stress. To understand the spilling phenomenon and for precise stress calculation, quantitative analysis was performed using Scanning Electron Microscopy (SEM) of a brazed sample, and a similar configuration was modeled while performing the Finite Element (FE) analysis. FE analysis of the PHVB is performed to find the electrical stresses in different areas of the PHVB, which are maintained similar to those of the DNB HV Bushing. With this configuration, the experiment is performed considering ITER-like vacuum and electrical parameters. The initial HV test is performed with temporary vacuum sealing arrangements using gaskets/O-rings at both ends in order to achieve the desired vacuum and keep the system maintainable. During the validation test, a 50 kV voltage withstand is performed for one hour. A voltage withstand test at 60 kV DC (20% above the rated voltage) has also been performed without any breakdown. Successful operation of the PHVB confirms the design of the DNB HV Bushing. In this paper, the configuration of the PHVB with experimental validation data is presented.

  4. Algorithming the Algorithm

    DEFF Research Database (Denmark)

    Mahnke, Martina; Uprichard, Emma

    2014-01-01

    Imagine sailing across the ocean. The sun is shining, vastness all around you. And suddenly [BOOM] you’ve hit an invisible wall. Welcome to the Truman Show! Ever since Eli Pariser published his thoughts on a potential filter bubble, this movie scenario seems to have become reality, just with slight...... changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...

  5. Characterization of trabecular bone plate-rod microarchitecture using multirow detector CT and the tensor scale: Algorithms, validation, and applications to pilot human studies

    Science.gov (United States)

    Saha, Punam K.; Liu, Yinxiao; Chen, Cheng; Jin, Dakai; Letuchy, Elena M.; Xu, Ziyue; Amelon, Ryan E.; Burns, Trudy L.; Torner, James C.; Levy, Steven M.; Calarge, Chadi A.

    2015-01-01

    Purpose: Osteoporosis is a common bone disease associated with increased risk of low-trauma fractures leading to substantial morbidity, mortality, and financial costs. Clinically, osteoporosis is defined by low bone mineral density (BMD); however, increasing evidence suggests that trabecular bone (TB) microarchitectural quality is an important determinant of bone strength and fracture risk. A tensor scale based algorithm for in vivo characterization of TB plate-rod microarchitecture at the distal tibia using multirow detector CT (MD-CT) imaging is presented and its performance and applications are examined. Methods: The tensor scale characterizes individual TB on the continuum between a perfect plate and a perfect rod and computes their orientation using optimal ellipsoidal representation of local structures. The accuracy of the method was evaluated using computer-generated phantom images at a resolution and signal-to-noise ratio achievable in vivo. The robustness of the method was examined in terms of stability across a wide range of voxel sizes, repeat scan reproducibility, and correlation between TB measures derived by imaging human ankle specimens under ex vivo and in vivo conditions. Finally, the application of the method was evaluated in pilot human studies involving healthy young-adult volunteers (age: 19 to 21 yr; 51 females and 46 males) and patients treated with selective serotonin reuptake inhibitors (SSRIs) (age: 19 to 21 yr; six males and six females). Results: An error of (3.2% ± 2.0%) (mean ± SD), computed as deviation from known measures of TB plate-width, was observed for computer-generated phantoms. An intraclass correlation coefficient of 0.95 was observed for tensor scale TB measures in repeat MD-CT scans where the measures were averaged over a small volume of interest of 1.05 mm diameter with limited smoothing effects. The method was found to be highly stable at different voxel sizes with an error of (2.29% ± 1.56%) at an in vivo voxel size

  6. Analysis of Voltage Forming Methods for Multiphase Inverters

    Directory of Open Access Journals (Sweden)

    Tadas Lipinskis

    2013-05-01

    Full Text Available The article discusses the advantages of the multiphase AC induction motor over motors with three or fewer phases. It presents possible stator winding configurations for a multiphase induction motor. Various fault control strategies for the phases feeding the motor were reviewed. The authors propose a method for quality evaluation of the voltage forming algorithm in the inverter. Simulation of a six-phase voltage source inverter, in which the voltage is formed using a simple SPWM control algorithm, was performed in Matlab Simulink. Simulation results were evaluated using the proposed method. The inverter's power stage was powered by a 400 V DC source. The spectrum of the output currents was analysed, and the magnitude of the main frequency component was at least 12 times greater than the next biggest-magnitude component. The value of the rectified inverter voltage was 373 V. Article in Lithuanian
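
    To make the voltage-forming step concrete, the sketch below generates gate signals for an n-phase sinusoidal PWM (SPWM) leg set by comparing phase-shifted sinusoidal references with a common triangular carrier. The carrier frequency, modulation index and phase count are illustrative values, not parameters taken from the cited study, and the quality-evaluation method proposed there is not reproduced.

```python
import numpy as np

def spwm_gates(t, f_ref=50.0, f_carrier=5000.0, m=0.9, n_phases=6):
    """Gate signals (True = upper switch on) for an n-phase SPWM inverter leg set."""
    carrier = 4.0 * np.abs((t * f_carrier) % 1.0 - 0.5) - 1.0     # triangular carrier in [-1, 1]
    refs = [m * np.sin(2 * np.pi * f_ref * t - 2 * np.pi * k / n_phases)
            for k in range(n_phases)]
    return np.array([ref > carrier for ref in refs])

t = np.arange(0.0, 0.02, 1e-6)           # one 50 Hz fundamental period
gates = spwm_gates(t)
print(gates.shape, gates.mean(axis=1))   # six legs, duty ratios averaging near 0.5
```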

  7. Digital voltage discriminator

    International Nuclear Information System (INIS)

    Zhou Zhicheng

    1992-01-01

    A digital voltage discriminator is described, which is synthesized from a digital comparator and an ADC. The threshold is program controllable with high stability. The digital region of confusion is approximately equal to 1.5 LSB. This discriminator has a single channel analyzer function mode with a channel width of 1.5 LSB.

  8. High-voltage picoamperemeter

    Energy Technology Data Exchange (ETDEWEB)

    Bugl, Andrea; Ball, Markus; Boehmer, Michael; Doerheim, Sverre; Hoenle, Andreas; Konorov, Igor [Technische Universitaet Muenchen, Garching (Germany); Ketzer, Bernhard [Technische Universitaet Muenchen, Garching (Germany); Helmholtz-Institut fuer Strahlen- und Kernphysik, Bonn (Germany)

    2014-07-01

    Current measurements in the nano- and picoampere region at high voltage are an important tool to understand charge transfer processes in micropattern gas detectors like the Gas Electron Multiplier (GEM). They are currently used, e.g., to optimize the field configuration in a multi-GEM stack to be used in the ALICE TPC after the upgrade of the experiment during the 2nd long shutdown of the LHC. Devices which allow measurements down to 1 pA at high voltages up to 6 kV have been developed at TU Muenchen. They are based on analog current measurements via the voltage drop over a switchable shunt. A microcontroller collects 128 digital ADC values and calculates their mean and standard deviation. This information is sent with a wireless transmitting unit to a computer and stored in a ROOT file. A nearly unlimited number of devices can be operated simultaneously and read out by a single receiver. The results can also be displayed on an LCD directly at the device. Battery operation and the wireless readout are important to protect the user from any contact with high voltage. The principle of the device is explained, and systematic studies of its properties are shown.

  9. Geomagnetism and Induced Voltage

    Science.gov (United States)

    Abdul-Razzaq, W.; Biller, R. D.

    2010-01-01

    Introductory physics laboratories have seen an influx of "conceptual integrated science" over time in their classrooms with elements of other sciences such as chemistry, biology, Earth science, and astronomy. We describe a laboratory to introduce this development, as it attracts attention to the voltage induced in the human brain as it…

  10. Mitigation of Unbalanced Voltage Sags and Voltage Unbalance in CIGRE Low Voltage Distribution Network

    DEFF Research Database (Denmark)

    Mustafa, Ghullam; Bak-Jensen, Birgitte; Mahat, Pukar

    2013-01-01

    Any problem with voltage in a power network is undesirable as it aggravates the quality of the power. Power electronic devices such as Voltage Source Converter (VSC) based Static Synchronous Compensators (STATCOM) etc. can be used to mitigate the voltage problems in the distribution system. The voltage problems dealt with in this paper are to show how to mitigate unbalanced voltage sags and voltage unbalance in the CIGRE Low Voltage (LV) test network and networks like this. The voltage unbalances, for the tested cases in the CIGRE LV test network, are mainly due to single phase loads and due to unbalanced faults. The compensation of unbalanced voltage sags and voltage unbalance in the CIGRE distribution network is done by using the four STATCOM compensators already existing in the test grid. The simulations are carried out in DIgSILENT PowerFactory software version 15.0.

  11. Mitigation of Voltage Sags in CIGRE Low Voltage Distribution Network

    DEFF Research Database (Denmark)

    Mustafa, Ghullam; Bak-Jensen, Birgitte; Mahat, Pukar

    2013-01-01

    Any problem in voltage in a power network is undesirable as it aggravates the quality of the power. Power electronic devices such as Voltage Source Converter (VSC) based Static Synchronous Compensators (STATCOM), Dynamic Voltage Restorers (DVR) etc. are commonly used for the mitigation of voltage problems in the distribution system. The voltage problems dealt with in this paper are to show how to mitigate voltage sags in the CIGRE Low Voltage (LV) test network and networks like this. The voltage sags, for the tested cases in the CIGRE LV test network, are mainly due to three phase faults... The compensation of voltage sags in the different parts of the CIGRE distribution network is done by using the four STATCOM compensators already existing in the test grid. The simulations are carried out in DIgSILENT PowerFactory software version 15.0.

  12. Mitigating voltage lead errors of an AC Josephson voltage standard by impedance matching

    Science.gov (United States)

    Zhao, Dongsheng; van den Brom, Helko E.; Houtzager, Ernest

    2017-09-01

    A pulse-driven AC Josephson voltage standard (ACJVS) generates calculable AC voltage signals at low temperatures, whereas measurements are performed with a device under test (DUT) at room temperature. The voltage leads cause the output voltage to show deviations that scale with the frequency squared. Error correction mechanisms investigated so far allow the ACJVS to be operational for frequencies up to 100 kHz. In this paper, calculations are presented to deal with these errors in terms of reflected waves. Impedance matching at the source side of the system, which is loaded with a high-impedance DUT, is proposed as an accurate method to mitigate these errors for frequencies up to 1 MHz. Simulations show that the influence of non-ideal component characteristics, such as the tolerance of the matching resistor, the capacitance of the load input impedance, losses in the voltage leads, non-homogeneity in the voltage leads, a non-ideal on-chip connection and inductors between the Josephson junction array and the voltage leads, can be corrected for using the proposed procedures. The results show that an expanded uncertainty of 12 parts in 10⁶ (k = 2) at 1 MHz and 0.5 part in 10⁶ (k = 2) at 100 kHz is within reach.

  13. A Robust Algorithm for Optimisation and Customisation of Fractal Dimensions of Time Series Modified by Nonlinearly Scaling Their Time Derivatives: Mathematical Theory and Practical Applications

    Directory of Open Access Journals (Sweden)

    Franz Konstantin Fuss

    2013-01-01

    Full Text Available Standard methods for computing the fractal dimensions of time series are usually tested with continuous nowhere differentiable functions, but not benchmarked with actual signals. Therefore they can produce opposite results in extreme signals. These methods also use different scaling methods, that is, different amplitude multipliers, which makes it difficult to compare fractal dimensions obtained from different methods. The purpose of this research was to develop an optimisation method that computes the fractal dimension of a normalised (dimensionless and modified time series signal with a robust algorithm and a running average method, and that maximises the difference between two fractal dimensions, for example, a minimum and a maximum one. The signal is modified by transforming its amplitude by a multiplier, which has a non-linear effect on the signal’s time derivative. The optimisation method identifies the optimal multiplier of the normalised amplitude for targeted decision making based on fractal dimensions. The optimisation method provides an additional filter effect and makes the fractal dimensions less noisy. The method is exemplified by, and explained with, different signals, such as human movement, EEG, and acoustic signals.

  14. A robust algorithm for optimisation and customisation of fractal dimensions of time series modified by nonlinearly scaling their time derivatives: mathematical theory and practical applications.

    Science.gov (United States)

    Fuss, Franz Konstantin

    2013-01-01

    Standard methods for computing the fractal dimensions of time series are usually tested with continuous nowhere differentiable functions, but not benchmarked with actual signals. Therefore they can produce opposite results in extreme signals. These methods also use different scaling methods, that is, different amplitude multipliers, which makes it difficult to compare fractal dimensions obtained from different methods. The purpose of this research was to develop an optimisation method that computes the fractal dimension of a normalised (dimensionless) and modified time series signal with a robust algorithm and a running average method, and that maximises the difference between two fractal dimensions, for example, a minimum and a maximum one. The signal is modified by transforming its amplitude by a multiplier, which has a non-linear effect on the signal's time derivative. The optimisation method identifies the optimal multiplier of the normalised amplitude for targeted decision making based on fractal dimensions. The optimisation method provides an additional filter effect and makes the fractal dimensions less noisy. The method is exemplified by, and explained with, different signals, such as human movement, EEG, and acoustic signals.

  15. Large-Scale Mapping of Carbon Stocks in Riparian Forests with Self-Organizing Maps and the k-Nearest-Neighbor Algorithm

    Directory of Open Access Journals (Sweden)

    Leonhard Suchenwirth

    2014-07-01

    Full Text Available Among the machine learning tools being used in recent years for environmental applications such as forestry, self-organizing maps (SOM and the k-nearest neighbor (kNN algorithm have been used successfully. We applied both methods for the mapping of organic carbon (Corg in riparian forests due to their considerably high carbon storage capacity. Despite the importance of floodplains for carbon sequestration, a sufficient scientific foundation for creating large-scale maps showing the spatial Corg distribution is still missing. We estimated organic carbon in a test site in the Danube Floodplain based on RapidEye remote sensing data and additional geodata. Accordingly, carbon distribution maps of vegetation, soil, and total Corg stocks were derived. Results were compared and statistically evaluated with terrestrial survey data for outcomes with pure remote sensing data and for the combination with additional geodata using bias and the Root Mean Square Error (RMSE. Results show that SOM and kNN approaches enable us to reproduce spatial patterns of riparian forest Corg stocks. While vegetation Corg has very high RMSEs, outcomes for soil and total Corg stocks are less biased with a lower RMSE, especially when remote sensing and additional geodata are conjointly applied. SOMs show similar percentages of RMSE to kNN estimations.

  16. Synchronised Voltage Space Vector Modulation for Three-level Inverters with Common-mode Voltage Elimination

    DEFF Research Database (Denmark)

    Oleschuk, Valentin; Blaabjerg, Frede

    2002-01-01

    A novel method of direct synchronous pulse-width modulation (PWM) is applied to three-level voltage source inverters with control algorithms providing elimination of the common-mode voltages in three-phase drive systems with PWM. It provides smooth pulse-ratio changing and a quarter-wave symmetry...... of the voltage waveforms during the whole control range including overmodulation. Continuous, discontinuous and "direct-direct" schemes of synchronous PWM with both algebraic and trigonometric control functions have been analysed and compared. Simulations give the behaviour of the proposed methods and show some...... advantages of synchronous PWM in comparison with asynchronous PWM at low ratios between the switching frequency and the fundamental frequency....

  17. Evaluation of the Voltage Support Strategies for the Low Voltage Grid Connected PV

    DEFF Research Database (Denmark)

    Demirok, Erhan; Sera, Dezso; Teodorescu, Remus

    2010-01-01

    The admissible range of grid voltage is one of the strictest constraints for the penetration of distributed photovoltaic (PV) generators, especially for connection to low voltage (LV) public networks. Voltage limits are usually fulfilled either by network reinforcement or by limiting power injections from...... PVs. In order to increase the PV penetration level further, new voltage support control functions for individual inverters are required. This paper investigates distributed reactive power regulation and active power curtailment strategies regarding the development of PV connection capacity by evaluation...... of reactive power efforts and the requirement of minimum active power curtailment. Furthermore, a small scale experimental setup is built to reflect real grid interaction in the laboratory by achieving critical types of grid (weak and sufficiently stiff)....

  18. Coplanar strips for Josephson voltage standard circuits

    International Nuclear Information System (INIS)

    Schubert, M.; May, T.; Wende, G.; Fritzsch, L.; Meyer, H.-G.

    2001-01-01

    We present a microwave circuit for Josephson voltage standards. Here, the Josephson junctions are integrated in a microwave transmission line designed as coplanar strips (CPS). The new layout offers the possibility of achieving a higher scale of integration and of considerably simplifying the fabrication technology. The characteristic impedance of the CPS is about 50 Ω, and this should be of interest for programmable Josephson voltage standard circuits with SNS or SINIS junctions. To demonstrate the function of the microwave circuit design, conventional 10 V Josephson voltage standard circuits with 17000 Nb/AlOx/Nb junctions were prepared and tested. Stable Shapiro steps at the 10 V level were generated. Furthermore, arrays of 1400 SINIS junctions in this microwave layout exhibited first-order Shapiro steps. Copyright 2001 American Institute of Physics

  19. High Voltage Charge Pump

    KAUST Repository

    Emira, Ahmed A.; Abdelghany, Mohamed A.; Elsayed, Mohannad Yomn; Elshurafa, Amro M; Salama, Khaled N.

    2014-01-01

    Various embodiments of a high voltage charge pump are described. One embodiment is a charge pump circuit that comprises a plurality of switching stages each including a clock input, a clock input inverse, a clock output, and a clock output inverse. The circuit further comprises a plurality of pumping capacitors, wherein one or more pumping capacitors are coupled to a corresponding switching stage. The circuit also comprises a maximum selection circuit coupled to a last switching stage among the plurality of switching stages, the maximum selection circuit configured to filter noise on the output clock and the output clock inverse of the last switching stage, the maximum selection circuit further configured to generate a DC output voltage based on the output clock and the output clock inverse of the last switching stage.

  20. High Voltage Charge Pump

    KAUST Repository

    Emira, Ahmed A.

    2014-10-09

    Various embodiments of a high voltage charge pump are described. One embodiment is a charge pump circuit that comprises a plurality of switching stages each including a clock input, a clock input inverse, a clock output, and a clock output inverse. The circuit further comprises a plurality of pumping capacitors, wherein one or more pumping capacitors are coupled to a corresponding switching stage. The circuit also comprises a maximum selection circuit coupled to a last switching stage among the plurality of switching stages, the maximum selection circuit configured to filter noise on the output clock and the output clock inverse of the last switching stage, the maximum selection circuit further configured to generate a DC output voltage based on the output clock and the output clock inverse of the last switching stage.

  1. High Voltage Seismic Generator

    Science.gov (United States)

    Bogacz, Adrian; Pala, Damian; Knafel, Marcin

    2015-04-01

    This contribution describes the preliminary result of annual cooperation of three student research groups from AGH UST in Krakow, Poland. The aim of this cooperation was to develop and construct a high voltage seismic wave generator. The constructed device uses a high-energy electrical discharge to generate a seismic wave in the ground. This type of device can be applied in several different methods of seismic measurement, but because of its limited power it is mainly dedicated to engineering geophysics. The source operates on basic physical principles. The energy is stored in a capacitor bank, which is charged by a two-stage low-to-high voltage converter. The stored energy is then released in a very short time through a high voltage thyristor in a spark gap. The whole appliance is powered from a li-ion battery and controlled by an ATmega microcontroller. It is possible to construct a larger and more powerful device. In this contribution the structure of the device with technical specifications is presented. As a part of the investigation a prototype was built and a series of experiments conducted. System parameters were measured, and on this basis the specification of elements for the final device was chosen. The first stage of the project was successful. It was possible to efficiently generate seismic waves with the constructed device. Then a field test was conducted. The spark gap was placed in a shallow borehole (0.5 m) filled with salt water. Geophones were placed on the ground in a straight line. A comparison of the signal registered with a hammer source and the sparker source was made. The results of the test measurements are presented and discussed. Analysis of the collected data shows that the characteristic of the generated seismic signal is very promising, thus confirming the possibility of practical application of the new high voltage generator. The biggest advantage of the presented device, after its signal characteristics, is its size, which is 0.5 x 0.25 x 0.2 m, and its weight of approximately 7 kg. These features, together with a small li-ion battery, make

  2. Increased voltage photovoltaic cell

    Science.gov (United States)

    Ross, B.; Bickler, D. B.; Gallagher, B. D. (Inventor)

    1985-01-01

    A photovoltaic cell, such as a solar cell, is provided which has a higher output voltage than prior cells. The improved cell includes a substrate of doped silicon, a first layer of silicon disposed on the substrate and having opposite doping, and a second layer of silicon carbide disposed on the first layer. The silicon carbide preferably has the same type of doping as the first layer.

  3. Sound algorithms

    OpenAIRE

    De Götzen , Amalia; Mion , Luca; Tache , Olivier

    2007-01-01

    International audience; We call sound algorithms the categories of algorithms that deal with digital sound signals. Sound algorithms appeared in the very infancy of computing. Sound algorithms present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  4. Genetic algorithms

    Science.gov (United States)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.

  5. Suppressing voltage transients in high voltage power supplies

    International Nuclear Information System (INIS)

    Lickel, K.F.; Stonebank, R.

    1979-01-01

    A high voltage power supply for an X-ray tube includes voltage adjusting means, a high voltage transformer, switch means connected to make and interrupt the primary current of the transformer, and over-voltage suppression means to suppress the voltage transient produced when the current is switched on. In order to reduce the power losses in the suppression means, an impedance is connected in the transformer primary circuit on operation of the switch means and is subsequently short-circuited by a switch controlled by a timer after a period which is automatically adjusted to the duration of the transient overvoltage. (U.K.)

  6. Benchmarking of Voltage Sag Generators

    DEFF Research Database (Denmark)

    Yang, Yongheng; Blaabjerg, Frede; Zou, Zhixiang

    2012-01-01

    The increased penetration of renewable energy systems, like photovoltaic and wind power systems, raises concerns about the power quality and stability of the utility grid. Some regulations for Low Voltage Ride-Through (LVRT) for medium voltage or high voltage applications are coming into force...

  7. Charge-pump voltage converter

    Science.gov (United States)

    Brainard, John P [Albuquerque, NM; Christenson, Todd R [Albuquerque, NM

    2009-11-03

    A charge-pump voltage converter for converting a low voltage provided by a low-voltage source to a higher voltage. Charge is inductively generated on a transfer rotor electrode during its transit past an inductor stator electrode and subsequently transferred by the rotating rotor to a collector stator electrode for storage or use. Repetition of the charge transfer process leads to a build-up of voltage on a charge-receiving device. Connection of multiple charge-pump voltage converters in series can generate higher voltages, and connection of multiple charge-pump voltage converters in parallel can generate higher currents. Microelectromechanical (MEMS) embodiments of this invention provide a small and compact high-voltage (several hundred V) voltage source starting with a few-V initial voltage source. The microscale size of many embodiments of this invention make it ideally suited for MEMS- and other micro-applications where integration of the voltage or charge source in a small package is highly desirable.

  8. Transient voltage sharing in series-coupled high voltage switches

    Directory of Open Access Journals (Sweden)

    Editorial Office

    1992-07-01

    Full Text Available For switching voltages in excess of the maximum blocking voltage of a switching element (for example, a thyristor, MOSFET or bipolar transistor), such elements are often coupled in series - and additional circuitry has to be provided to ensure equal voltage sharing. Between each such series element and system ground there is a certain parasitic capacitance that may draw a significant current during high-speed voltage transients. The "open" switch is modelled as a ladder network. Analysis reveals an exponential progression in the distribution of the applied voltage across the elements. Overstressing thus occurs in some of the elements at levels of the total voltage that are significantly below the design value. This difficulty is overcome by grading the voltage sharing circuitry, coupled in parallel with each element, in a prescribed manner, as set out here.
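
    As an illustration of the ladder-network analysis described above, the sketch below solves a purely capacitive ladder (series element capacitances with parasitic node-to-ground capacitances) for the initial transient voltage distribution; all component values and the stack size are assumed for the example and are not taken from the paper.

```python
import numpy as np

# Hedged illustration (not the paper's exact model): N series elements, each
# represented by its off-state capacitance Cs, with a parasitic capacitance Cg
# from every intermediate node to ground. For a fast transient the capacitive
# divider dominates, so nodal analysis with capacitances gives the initial
# voltage distribution across the stack.
N = 8               # number of series-connected switching elements (assumed)
Cs = 100e-12        # element (blocking) capacitance, F (assumed)
Cg = 10e-12         # parasitic node-to-ground capacitance, F (assumed)
V_applied = 100e3   # total transient voltage across the stack, V (assumed)

# Unknowns: voltages of the N-1 internal nodes (top node fixed at V_applied,
# bottom node grounded). KCL in terms of charge gives a tridiagonal system.
A = np.zeros((N - 1, N - 1))
b = np.zeros(N - 1)
for i in range(N - 1):
    A[i, i] = 2 * Cs + Cg
    if i > 0:
        A[i, i - 1] = -Cs
    if i < N - 2:
        A[i, i + 1] = -Cs
b[0] = Cs * V_applied          # node adjacent to the driven (top) terminal

v_nodes = np.linalg.solve(A, b)
v_all = np.concatenate(([V_applied], v_nodes, [0.0]))
element_voltages = -np.diff(v_all)   # voltage across each series element
print("per-element voltage share (%):",
      np.round(100 * element_voltages / V_applied, 2))
# The element nearest the driven terminal takes a disproportionate share,
# illustrating the exponential-like grading described in the abstract.
```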

  9. Coordinated Voltage Control of Distributed PV Inverters for Voltage Regulation in Low Voltage Distribution Networks

    DEFF Research Database (Denmark)

    Nainar, Karthikeyan; Pokhrel, Basanta Raj; Pillai, Jayakrishnan Radhakrishna

    2017-01-01

    This paper reviews and analyzes the existing voltage control methods of distributed solar PV inverters to improve the voltage regulation and thereby the hosting capacity of a low-voltage distribution network. A novel coordinated voltage control method is proposed based on voltage sensitivity...... optimization. The proposed method is used to calculate the voltage bands and droop settings of PV inverters at each node by the supervisory controller. The local controller of each PV inverter implements the volt/var control and, if necessary, the active power curtailment as per the received settings and based...... on measured local voltages. The advantage of the proposed method is that the calculated reactive power and active power droop settings enable fair contribution of the PV inverters at each node to the voltage regulation. Simulation studies are conducted using DIgSILENT PowerFactory software on a simplified...
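
    To make the volt/var droop and active power curtailment functions mentioned above concrete, the following minimal sketch implements a generic piecewise-linear droop with a dead band; the band limits, slopes and curtailment thresholds are illustrative assumptions, not the settings produced by the paper's supervisory controller.

```python
# Hedged sketch of a generic volt/var droop with optional active power
# curtailment for a PV inverter; all limits and gains below are assumptions.
def volt_var_droop(v_pu, q_max, v_low=0.96, v_dead_lo=0.99,
                   v_dead_hi=1.01, v_high=1.04):
    """Reactive power set-point (positive = injection) for a measured
    per-unit voltage, using a piecewise-linear droop with a dead band."""
    if v_pu <= v_low:
        return q_max
    if v_pu < v_dead_lo:
        return q_max * (v_dead_lo - v_pu) / (v_dead_lo - v_low)
    if v_pu <= v_dead_hi:
        return 0.0
    if v_pu < v_high:
        return -q_max * (v_pu - v_dead_hi) / (v_high - v_dead_hi)
    return -q_max

def curtail_active_power(v_pu, p_avail, v_crit=1.05, v_cutoff=1.08):
    """Linearly curtail active power once the voltage exceeds v_crit."""
    if v_pu <= v_crit:
        return p_avail
    if v_pu >= v_cutoff:
        return 0.0
    return p_avail * (v_cutoff - v_pu) / (v_cutoff - v_crit)

if __name__ == "__main__":
    for v in (0.95, 1.00, 1.03, 1.06):
        print(v, round(volt_var_droop(v, q_max=0.44), 3),
              round(curtail_active_power(v, p_avail=1.0), 3))
```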

  10. Fine-scale estimation of carbon monoxide and fine particulate matter concentrations in proximity to a road intersection by using wavelet neural network with genetic algorithm

    Science.gov (United States)

    Wang, Zhanyong; Lu, Feng; He, Hong-di; Lu, Qing-Chang; Wang, Dongsheng; Peng, Zhong-Ren

    2015-03-01

    At road intersections, vehicles frequently stop with idling engines during the red-light period and speed up rapidly in the green-light period, which generates higher velocity fluctuation and thus higher emission rates. Additionally, frequent changes of wind direction further add to the highly variable dispersion of pollutants at the street scale. It is, therefore, very difficult to estimate the distribution of pollutant concentrations using conventional deterministic causal models. For this reason, a hybrid model combining wavelet neural network and genetic algorithm (GA-WNN) is proposed for predicting 5-min series of carbon monoxide (CO) and fine particulate matter (PM2.5) concentrations in proximity to an intersection. The proposed model is examined based on the measured data under two situations. As the measured pollutant concentrations are found to be dependent on the distance to the intersection, the model is evaluated at three locations, i.e. 110 m, 330 m and 500 m from the intersection. Due to the different variation of pollutant concentrations over time, the model is also evaluated in peak and off-peak traffic time periods separately. Additionally, the proposed model, together with the back-propagation neural network (BPNN), is examined with the measured data in these situations. The proposed model is found to perform better in predictability and precision for both CO and PM2.5 than BPNN does, implying that the hybrid model can be an effective tool to improve the accuracy of estimating pollutants' distribution patterns at intersections. These findings demonstrate the potential of the proposed model to be applicable to forecasting the distribution pattern of air pollution in real time in proximity to road intersections.

  11. Sensing voltage across lipid membranes

    Science.gov (United States)

    Swartz, Kenton J.

    2009-01-01

    The detection of electrical potentials across lipid bilayers by specialized membrane proteins is required for many fundamental cellular processes such as the generation and propagation of nerve impulses. These membrane proteins possess modular voltage-sensing domains, a notable example being the S1-S4 domains of voltage-activated ion channels. Ground-breaking structural studies on these domains explain how voltage sensors are designed and reveal important interactions with the surrounding lipid membrane. Although further structures are needed to fully understand the conformational changes that occur during voltage sensing, the available data help to frame several key concepts that are fundamental to the mechanism of voltage sensing. PMID:19092925

  12. A novel single-phase phase space-based voltage mode controller for distributed static compensator to improve voltage profile of distribution systems

    International Nuclear Information System (INIS)

    Shokri, Abdollah; Shareef, Hussain; Mohamed, Azah; Farhoodnea, Masoud; Zayandehroodi, Hadi

    2014-01-01

    Highlights: • A new phase space based voltage mode controller for D-STATCOM was proposed. • The proposed compensator was tested to mitigate voltage disturbances in distribution systems. • Voltage fluctuation, voltage sag and voltage swell are considered to evaluate the performance of the proposed compensator. - Abstract: The distribution static synchronous compensator (D-STATCOM) has been developed and has attracted great interest for compensating power quality disturbances in distribution systems. In this paper, a novel single-phase control scheme for D-STATCOM is proposed to improve the voltage profile at the Point of Common Coupling (PCC). The proposed voltage mode (VM) controller is based on the phase space algorithm, which is able to rapidly detect and mitigate any voltage deviations from the reference voltage, including voltage sags and voltage swells. To investigate the efficiency and accuracy of the proposed compensator, a system is modeled using Matlab/Simulink. The simulation results confirm the capability of the proposed VM controller to provide a regulated and disturbance-free voltage for the connected loads at the PCC

  13. Extended SVM algorithms for multilevel trans-Z-source inverter

    Directory of Open Access Journals (Sweden)

    Aida Baghbany Oskouei

    2016-03-01

    Full Text Available This paper suggests extended algorithms for the multilevel trans-Z-source inverter. These algorithms are based on space vector modulation (SVM), which works with high switching frequency and generates the mean value of the desired load voltage in every switching interval. In this topology the output voltage is not limited to the dc voltage source, unlike the traditional cascaded multilevel inverter, and can be increased with trans-Z-network shoot-through state control. Besides, it is more reliable against short circuit, and due to the several dc sources in each phase of this topology, it is possible to use it in hybrid renewable energy systems. The proposed SVM algorithms include the following: a combined modulation algorithm (SVPWM) and a shoot-through implementation in dwell times of voltage vectors algorithm. These algorithms are compared from the viewpoints of simplicity, accuracy, number of switchings, and THD. Simulation and experimental results are presented to demonstrate the expected representations.
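
    The "dwell times of voltage vectors" referred to above can be illustrated with the standard two-level SVPWM dwell-time equations; the sketch below is a simplified stand-in (two-level, no shoot-through insertion) and is not the multilevel trans-Z-source algorithm of the paper.

```python
import math

# Hedged illustration: dwell-time calculation for conventional two-level SVPWM,
# shown only to make the "dwell times of voltage vectors" idea concrete. The
# paper extends this to multilevel trans-Z-source inverters and additionally
# inserts shoot-through states inside these dwell intervals.
def svpwm_dwell_times(v_ref, theta, v_dc, t_s):
    """Return (sector, t1, t2, t0) for reference magnitude v_ref (V),
    angle theta (rad), DC-link voltage v_dc (V) and switching period t_s (s)."""
    sector = int(theta // (math.pi / 3)) % 6 + 1
    alpha = theta - (sector - 1) * math.pi / 3        # angle inside the sector
    m = math.sqrt(3) * v_ref / v_dc                   # modulation index
    t1 = m * t_s * math.sin(math.pi / 3 - alpha)      # adjacent active vector 1
    t2 = m * t_s * math.sin(alpha)                    # adjacent active vector 2
    t0 = t_s - t1 - t2                                # zero-vector time
    return sector, t1, t2, t0

if __name__ == "__main__":
    print(svpwm_dwell_times(v_ref=180.0, theta=math.radians(40),
                            v_dc=400.0, t_s=1.0 / 10_000))
```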

  14. Algorithmic cryptanalysis

    CERN Document Server

    Joux, Antoine

    2009-01-01

    Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program.Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic

  15. Heat-pump performance: voltage dip/sag, under-voltage and over-voltage

    Directory of Open Access Journals (Sweden)

    William J.B. Heffernan

    2014-12-01

    Full Text Available Reverse cycle air-source heat-pumps are an increasingly significant load in New Zealand and in many other countries. This has raised concern over the impact wide-spread use of heat-pumps may have on the grid. The characteristics of the loads connected to the power system are changing because of heat-pumps. Their performance during under-voltage events such as voltage dips has the potential to compound the event and possibly cause voltage collapse. In this study, results from testing six heat-pumps are presented to assess their performance at various voltages and hence their impact on voltage stability.

  16. Optimum distributed generation placement with voltage sag effect minimization

    International Nuclear Information System (INIS)

    Biswas, Soma; Goswami, Swapan Kumar; Chatterjee, Amitava

    2012-01-01

    Highlights: ► A new optimal distributed generation placement algorithm is proposed. ► Optimal number, sizes and locations of the DGs are determined. ► Technical factors like loss, voltage sag problem are minimized. ► The percentage savings are optimized. - Abstract: The present paper proposes a new formulation for the optimum distributed generator (DG) placement problem which considers a hybrid combination of technical factors, like minimization of the line loss, reduction in the voltage sag problem, etc., and economical factors, like installation and maintenance cost of the DGs. The new formulation proposed is inspired by the idea that the optimum placement of the DGs can help in reducing and mitigating voltage dips in low voltage distribution networks. The problem is configured as a multi-objective, constrained optimization problem, where the optimal number of DGs, along with their sizes and bus locations, are simultaneously obtained. This problem has been solved using a genetic algorithm, a traditionally popular stochastic optimization algorithm. A few benchmark systems, radial and networked (the 34-bus radial distribution system, the 30-bus loop distribution system and the IEEE 14-bus system), are considered as case studies, where the effectiveness of the proposed algorithm is aptly demonstrated.

  17. Synchrophasor-Based Online Coherency Identification in Voltage Stability Assessment

    Directory of Open Access Journals (Sweden)

    ADEWOLE, A. C.

    2015-11-01

    Full Text Available This paper presents and investigates a new measurement-based approach for the identification of coherent groups of load buses and synchronous generators for voltage stability assessment in large interconnected power systems. A hybrid Calinski-Harabasz criterion and k-means clustering algorithm is developed for the determination of the cluster groups in the system. The proposed method is successfully validated by using the New England 39-bus test system. Also, the performance of the voltage stability assessment algorithm using wide area synchrophasor measurements from the key synchronous generator in each respective cluster was tested online for the prediction of the system's margin to voltage collapse, using a testbed comprising a Programmable Logic Controller (PLC) in a hardware-in-the-loop configuration with the Real-Time Digital Simulator (RTDS) and Phasor Measurement Units (PMUs).
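
    The clustering step of the approach (selecting the number of coherent groups with the Calinski-Harabasz criterion and labelling with k-means) can be sketched as follows; the feature matrix here is random placeholder data standing in for synchrophasor trajectories, and the range of candidate cluster counts is assumed.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import calinski_harabasz_score

# Hedged sketch of the clustering step only: pick the number of coherent
# groups by maximising the Calinski-Harabasz criterion over candidate k,
# then label buses/generators with k-means. In the paper the features would
# be built from wide-area synchrophasor measurements.
rng = np.random.default_rng(0)
features = rng.normal(size=(39, 20))   # 39 buses x 20 time samples (placeholder)

best_k, best_score, best_labels = None, -np.inf, None
for k in range(2, 8):                  # candidate group counts (assumed range)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)
    score = calinski_harabasz_score(features, labels)
    if score > best_score:
        best_k, best_score, best_labels = k, score, labels

print("selected number of coherent groups:", best_k)
print("cluster assignment per bus:", best_labels)
```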

  18. Exploration of dual supply voltage logic synthesis in state-of-the-art ASIC design flows

    Directory of Open Access Journals (Sweden)

    T. Mahnke

    2003-01-01

    Full Text Available Dual supply voltage scaling (DSVS) for logic-level power optimization has increasingly attracted attention over the last few years. However, mainly due to the fact that the most widely used design tools do not support this new technique, it has still not become an integral part of real-world design flows. In this paper, a novel logic synthesis methodology that enables DSVS while relying entirely on standard tools is presented. The key to this methodology is a suitably modeled dual supply voltage (DSV) standard cell library. A basic evaluation of the methodology has been carried out on a number of MCNC benchmark circuits. In all these experiments, the results of state-of-the-art power-driven single supply voltage (SSV) logic synthesis have been used as references in order to determine the true additional benefit of DSVS. Compared with the results of SSV power optimization, additional power reductions of 10% on average have been achieved. The results prove the feasibility of the new approach and reveal its greater efficiency in comparison with a well-known dedicated DSVS algorithm. Finally, the methodology has been applied to an embedded microcontroller core in order to further explore the potentials and limitations of DSVS in an existing industrial design environment.

  19. Algorithmic mathematics

    CERN Document Server

    Hougardy, Stefan

    2016-01-01

    Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.

  20. Improvement of Voltage Stability in Electrical Network by Using a STATCOM

    Directory of Open Access Journals (Sweden)

    Kamel MERINI

    2014-02-01

    Full Text Available This paper aims to clarify the power flow without and with a static synchronous compensator (STATCOM) and to search for the best location of the STATCOM to improve voltage in the Algerian network. Daily operation involves all kinds of disturbances such as voltage fluctuations, voltage sags, swells, voltage unbalances and harmonics. The STATCOM is modeled as a controllable voltage source. To validate the effectiveness of the proposed models, a Newton-Raphson algorithm was implemented to solve the power flow equations in the presence of the STATCOM. Case studies are carried out on the 59-bus Algerian test network to demonstrate the performance of the proposed models. Simulation results show the effectiveness and capability of the STATCOM in improving voltage regulation in transmission systems; moreover, the power flow solution is obtained using the developed Newton-Raphson algorithm. The STATCOM model and the detailed simulation are implemented using the Matlab program.
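
    A minimal sketch of the Newton-Raphson power-flow iteration the abstract refers to is shown below for a single PQ bus behind a slack bus, with a numerically evaluated Jacobian; the line impedance and load values are assumed, and the STATCOM model of the paper is not included.

```python
import numpy as np

# Hedged sketch: a plain Newton-Raphson power-flow iteration for a single
# PQ bus behind a slack bus, with a numerically evaluated Jacobian. It only
# illustrates the solver the abstract refers to; the paper embeds a STATCOM
# model (an extra controllable voltage source) in the same framework.
y_line = 1.0 / (0.01 + 0.05j)          # series admittance of the line (pu, assumed)
V1 = 1.0 + 0.0j                        # slack-bus voltage (pu)
P_load, Q_load = 0.8, 0.4              # load drawn at bus 2 (pu, assumed)

def mismatch(x):
    """x = [theta2, V2]; return active/reactive power mismatches at bus 2."""
    theta2, v2 = x
    V2 = v2 * np.exp(1j * theta2)
    I2 = y_line * (V2 - V1)            # current injected into the network at bus 2
    S2 = V2 * np.conj(I2)              # complex power injection at bus 2
    return np.array([S2.real + P_load, S2.imag + Q_load])

x = np.array([0.0, 1.0])               # flat start
for _ in range(20):
    f = mismatch(x)
    if np.max(np.abs(f)) < 1e-10:
        break
    # numerical Jacobian by forward differences
    J = np.zeros((2, 2))
    for j in range(2):
        dx = np.zeros(2); dx[j] = 1e-7
        J[:, j] = (mismatch(x + dx) - f) / 1e-7
    x = x - np.linalg.solve(J, f)      # Newton-Raphson update

print("bus-2 angle (rad) and magnitude (pu):", x)
```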

  1. Total algorithms

    NARCIS (Netherlands)

    Tel, G.

    We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of

  2. DiSC: A Simulation Framework for Distribution System Voltage Control

    DEFF Research Database (Denmark)

    Pedersen, Rasmus; Sloth, Christoffer Eg; Andresen, Gorm

    2015-01-01

    This paper presents the MATLAB simulation framework, DiSC, for verifying voltage control approaches in power distribution systems. It consists of real consumption data, stochastic models of renewable resources, flexible assets, electrical grid, and models of the underlying communication channels....... The simulation framework makes it possible to validate control approaches, and thus advance realistic and robust control algorithms for distribution system voltage control. Two examples demonstrate the potential voltage issues from penetration of renewables in the distribution grid, along with simple control...

  3. Effect of voltage sags on digitally controlled line connected switched-mode power supplies

    DEFF Research Database (Denmark)

    Török, Lajos; Munk-Nielsen, Stig

    2012-01-01

    Different voltage disturbances like voltage fluctuations, sags and frequency variations may occur in power supply networks due to different fault conditions. These deviations from normal operation affect line-connected devices in different ways. Standards were developed to protect and ensure...... of voltage sags is analyzed. A fault-tolerant control algorithm was designed, implemented and is discussed. The fault conditions and their effects were investigated at different power levels.

  4. High voltage isolation transformer

    Science.gov (United States)

    Clatterbuck, C. H.; Ruitberg, A. P. (Inventor)

    1985-01-01

    A high voltage isolation transformer is provided with primary and secondary coils separated by discrete electrostatic shields from the surfaces of insulating spools on which the coils are wound. The electrostatic shields are formed by coatings of a compound with a low electrical conductivity which completely encase the coils and adhere to the surfaces of the insulating spools adjacent to the coils. Coatings of the compound also line axial bores of the spools, thereby forming electrostatic shields separating the spools from legs of a ferromagnetic core extending through the bores. The transformer is able to isolate a high constant potential applied to one of its coils, without the occurrence of sparking or corona, by coupling the coatings, lining the axial bores to the ferromagnetic core and by coupling one terminal of each coil to the respective coating encasing the coil.

  5. Group leaders optimization algorithm

    Science.gov (United States)

    Daskin, Anmer; Kais, Sabre

    2011-03-01

    We present a new global optimization algorithm in which the influence of the leaders in social groups is used as an inspiration for the evolutionary technique which is designed into a group architecture. To demonstrate the efficiency of the method, a standard suite of single and multi-dimensional optimization functions along with the energies and the geometric structures of Lennard-Jones clusters are given as well as the application of the algorithm on quantum circuit design problems. We show that as an improvement over previous methods, the algorithm scales as N^2.5 for the Lennard-Jones clusters of N particles. In addition, an efficient circuit design is shown for a two-qubit Grover search algorithm which is a quantum algorithm providing quadratic speedup over the classical counterpart.

  6. Computation of Steady State Nodal Voltages for Fast Security Assessment in Power Systems

    DEFF Research Database (Denmark)

    Møller, Jakob Glarbo; Jóhannsson, Hjörtur; Østergaard, Jacob

    2014-01-01

    Development of a method for real-time assessment of post-contingency nodal voltages is introduced. Linear network theory is applied in an algorithm that utilizes Thevenin equivalent representation of power systems as seen from every voltage-controlled node in a network. The method is evaluated b...

  7. Synchronised PWM Schemes for Three-level Inverters with Zero Common-mode Voltage

    DEFF Research Database (Denmark)

    Oleschuk, Valentin; Blaabjerg, Frede

    2002-01-01

    This paper presents results of analysis and comparison of novel synchronised schemes of pulsewidth modulation (PWM), applied to three-level voltage source inverters with control algorithms providing elimination of the common-mode voltage. The proposed approach is based on a new strategy of digital...

  8. Pulse-voltage fast generator

    International Nuclear Information System (INIS)

    Valeev, R.I.; Nikiforov, M.G.; Kharchenko, A.F.

    1988-01-01

    The design is described and the test results of a four-channel pulse-voltage generator with a maximum output voltage of 200 kV are presented. The measurement results of the generator triggering time depending on the value and polarity of the triggering voltage pulse for different triggering circuits are presented. The tests have shown stable triggering of all four channels of the generator in the range up to 40% of the self-breakdown voltage. The generator triggering delay in the given range is <25 ns, and the asynchronism in channel triggering is <±1 ns

  9. Voltage Dependence of Supercapacitor Capacitance

    Directory of Open Access Journals (Sweden)

    Szewczyk Arkadiusz

    2016-09-01

    Full Text Available Electronic Double-Layer Capacitors (EDLC), called Supercapacitors (SC), are electronic devices that are capable of storing a relatively high amount of energy in a small volume compared to other types of capacitors. They are composed of an activated carbon layer and an electrolyte solution. The charge is stored on the electrodes, forming the Helmholtz layer, and in the electrolyte. The capacitance of a supercapacitor is voltage-dependent. We propose an experimental method, based on monitoring the charging and discharging of a supercapacitor, which enables evaluation of the charge in an SC structure as well as the Capacitance-Voltage (C-V) dependence. The measurement setup, method and experimental results of charging/discharging commercially available supercapacitors in various voltage and current conditions are presented. The total charge stored in an SC structure is proportional to the square of the voltage at the SC electrodes, while the charge on the electrodes increases linearly with the voltage on the SC electrodes. The Helmholtz capacitance increases linearly with the voltage bias while a sublinear increase of the total capacitance was found. The voltage on the SC increases after the discharge of the electrodes due to diffusion of charges from the electrolyte to the electrodes. We have found that the recovery voltage value is linearly proportional to the initial bias voltage value.

  10. Particle swarm optimization for determining shortest distance to voltage collapse

    Energy Technology Data Exchange (ETDEWEB)

    Arya, L.D.; Choube, S.C. [Electrical Engineering Department, S.G.S.I.T.S. Indore, MP 452 003 (India); Shrivastava, M. [Electrical Engineering Department, Government Engineering College Ujjain, MP 456 010 (India); Kothari, D.P. [Centre for Energy Studies, Indian Institute of Technology, Delhi (India)

    2007-12-15

    This paper describes an algorithm for computing the shortest distance to voltage collapse, i.e. determination of the closest saddle node bifurcation point (CSNBP), using the PSO technique. A direction along the CSNBP gives conservative results from the voltage security viewpoint. This information is useful to the operator to steer the system away from this point by taking corrective actions. The distance to a closest bifurcation is a minimum of the loadability given a slack bus or participation factors for increasing generation as the load increases. CSNBP determination has been formulated as an optimization problem to be solved using the PSO technique. PSO is a new evolutionary algorithm (EA) which is population based and inspired by the social behaviour of animals such as fish schooling and bird flocking. It can handle optimization problems of considerable complexity, since its mechanization is simple with few parameters to be tuned. The developed algorithm has been implemented on two standard test systems. (author)
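
    A generic PSO kernel of the kind used for the CSNBP search is sketched below on a toy objective; in the paper the objective would be the distance to the closest bifurcation evaluated through power-flow computations, whereas here a simple sphere function and standard parameter values are assumed.

```python
import numpy as np

# Hedged, generic PSO kernel on a toy 2-D objective; in the paper the particles
# would encode load-increase directions and the objective would be the distance
# to the closest saddle-node bifurcation point, evaluated through a power-flow
# or continuation routine rather than the sphere function used here.
rng = np.random.default_rng(1)

def objective(x):                      # placeholder objective (sphere)
    return float(np.sum(x ** 2))

n_particles, n_dims, iters = 20, 2, 100
w, c1, c2 = 0.7, 1.5, 1.5              # inertia and acceleration coefficients

pos = rng.uniform(-5, 5, size=(n_particles, n_dims))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([objective(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([objective(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best point:", gbest, "objective:", min(pbest_val))
```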

  11. Temporary over voltages in the high voltage networks

    International Nuclear Information System (INIS)

    Vukelja, Petar; Naumov, Radomir; Mrvic, Jovan; Minovski, Risto

    2001-01-01

    The paper treats the temporary overvoltages that may arise in high voltage networks as a result of ground faults, loss of load, loss of one or two phases and switching operations. Based on the analysis, measures for their limitation are proposed. (Original)

  12. A dual-functional medium voltage level DVR to limit downstream fault currents

    DEFF Research Database (Denmark)

    Blaabjerg, Frede; Li, Yun Wei; Vilathgamuwa, D. Mahinda

    2007-01-01

    The dynamic voltage restorer (DVR) is a modern custom power device used in power distribution networks to protect consumers from sudden sags (and swells) in grid voltage. Implemented at medium voltage level, the DVR can be used to protect a group of medium voltage or low voltage consumers. However...... on the other parallel feeders connected to PCC. Furthermore, if not controlled properly, the DVR might also contribute to this PCC voltage sag in the process of compensating the missing voltage, thus further worsening the fault situation. To limit the flow of large line currents, and therefore restore the PCC...... situations. Controlling the DVR as a virtual inductor would also ensure zero real power absorption during the DVR compensation and thus minimize the stress in the dc link. Finally, the proposed fault current limiting algorithm has been tested in Matlab/Simulink simulation and experimentally on a medium......

  13. A hybrid particle swarm optimization and genetic algorithm for closed-loop supply chain network design in large-scale networks

    DEFF Research Database (Denmark)

    Soleimani, Hamed; Kannan, Govindan

    2015-01-01

    Today, the growing interest in closed-loop supply chains shown by both practitioners and academia is easy to track. There are many factors which transform closed-loop supply chain issues into a unique and vital subject in supply chain management, such as environmental legislation...... is proposed and a complete validation process is undertaken using CPLEX and MATLAB software. In small instances, the results of the proposed hybrid algorithm are compared against the global optimum points of CPLEX, as well as against genetic algorithm and particle swarm optimization. Then, in small, mid, and large-size instances, performances...

  14. The use of an improved technique to reduce the variability of output voltage in real-time Fibre Bragg Grating based monitoring system

    Science.gov (United States)

    Vorathin, E.; Hafizi, Z. M.; Che Ghani, S. A.; Lim, K. S.; Aizzuddin, A. M.

    2017-10-01

    Fibre Bragg Grating (FBG) sensors have been widely utilized in the structural health monitoring (SHM) of structures. However, one of the main challenges of FBGs is the inconsistency in output voltage during wavelength-intensity demodulation when a photodetector (PD) is utilized to convert the light signal into digital voltage readings. Thus, the aim of this experimental work is to develop a robust FBG real-time monitoring system with the benefit of a MATLAB graphical user interface (GUI) and a voltage normalization algorithm to scale down the voltage inconsistency. A low-cost edge filter interrogation system is employed in the experiments, and an optical splitter component is used to reduce the intensity of the high-power light source that leads to the formation of noise due to unwanted reflected wavelengths. The results revealed that with the proposed monitoring system, the sensitivity of the FBG has been increased from 2.4 mV/N to 3.8 mV/N across the range of 50 N. The redundancy in output voltage variation data points has been reduced from 26 data/minute to 17 data/minute. The accuracy of the FBG in detecting the induced load falls within the acceptable range of total average error, which is 1.38%.

  15. Copper wire theft and high voltage electrical burns.

    Science.gov (United States)

    Francis, Eamon C; Shelley, Odhran P

    2014-01-01

    High voltage electrical burns are uncommon. However, in the midst of our economic recession we are noticing an increasing number of these injuries. Copper wire is a valuable commodity whose physical properties make it an excellent conductor of electricity, rendering it both ubiquitous in society and prized on the black market. We present two consecutive cases referred to the National Burns Unit of patients who sustained life-threatening injuries from the alleged theft of high voltage copper wire, a practice now omnipresent on an international scale.

  16. Computer controlled high voltage system

    Energy Technology Data Exchange (ETDEWEB)

    Kunov, B; Georgiev, G; Dimitrov, L [and others]

    1996-12-31

    A multichannel computer controlled high-voltage power supply system is developed. The basic technical parameters of the system are: output voltage: 100-3000 V; output current: 0-3 mA; maximum number of channels in one crate: 78. 3 refs.

  17. A Voltage Quality Detection Method

    DEFF Research Database (Denmark)

    Chen, Zhe; Wei, Mu

    2008-01-01

    This paper presents a voltage quality detection method based on a phase-locked loop (PLL) technique. The technique can detect the voltage magnitude and phase angle of each individual phase under both normal and fault power system conditions. The proposed method has the potential to evaluate various...

  18. DATA SUMMARY REPORT SMALL SCALE MELTER TESTING OF HLW ALGORITHM GLASSES MATRIX1 TESTS VSL-07S1220-1 REV 0 7/25/07

    Energy Technology Data Exchange (ETDEWEB)

    KRUGER AA; MATLACK KS; PEGG IL

    2011-12-29

    Eight tests using different HLW feeds were conducted on the DM100-BL to determine the effect of variations in glass properties and feed composition on processing rates and melter conditions (off-gas characteristics, glass processing, foaming, cold cap, etc.) at constant bubbling rate. In over seven hundred hours of testing, the property extremes of glass viscosity, electrical conductivity, and T1%, as well as minimum and maximum concentrations of several major and minor glass components were evaluated using glass compositions that have been tested previously at the crucible scale. Other parameters evaluated with respect to glass processing properties were ±15% batching errors in the addition of glass forming chemicals (GFCs) to the feed, and variation in the sources of boron and sodium used in the GFCs. Tests evaluating batching errors and GFC source employed variations on the HLW98-86 formulation (a glass composition formulated for HLW C-106/AY-102 waste and processed in several previous melter tests) in order to best isolate the effect of each test variable. These tests are outlined in a Test Plan that was prepared in response to the Test Specification for this work. The present report provides summary level data for all of the tests in the first test matrix (Matrix 1) in the Test Plan. Summary results from the remaining tests, investigating minimum and maximum concentrations of major and minor glass components employing variations on the HLW98-86 formulation and glasses generated by the HLW glass formulation algorithm, will be reported separately after those tests are completed. The test data summarized herein include glass production rates, the type and amount of feed used, a variety of measured melter parameters including temperatures and electrode power, feed sample analysis, measured glass properties, and gaseous emissions rates. More detailed information and analysis from the melter tests with complete emission chemistry, glass durability, and

  19. Voltage Controlled Dynamic Demand Response

    DEFF Research Database (Denmark)

    Bhattarai, Bishnu Prasad; Bak-Jensen, Birgitte; Mahat, Pukar

    2013-01-01

    The future power system is expected to be characterized by increased penetration of intermittent sources. Random and rapid fluctuations in demand together with intermittency in generation impose new challenges for power balancing in the existing system. Conventional techniques of balancing by large...... central or dispersed generations might not be sufficient for the future scenario. One of the effective methods to cope with this scenario is to enable demand response. This paper proposes a dynamic voltage regulation based demand response technique to be applied in low voltage (LV) distribution feeders....... An adaptive dynamic model has been developed to determine the composite voltage dependency of an aggregated load at the feeder level. Following the demand dispatch or control signal, the optimum voltage setting at the LV substation is determined based on the voltage dependency of the load. Furthermore, a new technique...
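
    The composite voltage dependency of an aggregated load can be illustrated with a simple exponential load model, as sketched below; the exponents and nominal demand are generic textbook assumptions, not the adaptively estimated values of the paper.

```python
# Hedged sketch of the composite (exponential) voltage dependency used to
# decide the substation voltage set-point; the exponents below are generic
# textbook values, not the adaptively estimated ones from the paper.
def load_power(v_pu, p0, q0, kp=1.5, kq=2.5, v0=1.0):
    """Aggregated feeder demand as a function of supply voltage (pu)."""
    p = p0 * (v_pu / v0) ** kp
    q = q0 * (v_pu / v0) ** kq
    return p, q

# Example: a small voltage reduction at the LV substation trims the active
# demand of a voltage-dependent aggregate load.
p_nom, q_nom = 400.0, 120.0            # kW, kvar (assumed)
for v in (1.00, 0.97, 0.95):
    p, q = load_power(v, p_nom, q_nom)
    print(f"V = {v:.2f} pu -> P = {p:.1f} kW, Q = {q:.1f} kvar")
```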

  20. Transient voltage oscillations in coils

    International Nuclear Information System (INIS)

    Chowdhuri, P.

    1985-01-01

    Magnet coils may be excited into internal voltage oscillations by transient voltages. Such oscillations may electrically stress the magnet's dielectric components to many times their normal stress. This may precipitate a dielectric failure, with the attendant prolonged loss of service and costly repair work. Therefore, it is important to know the natural frequencies of oscillation of a magnet during the design stage, and to determine whether the expected switching transient voltages can excite the magnet into high-voltage internal oscillations. The series capacitance of a winding significantly affects its natural frequencies. However, the series capacitance is difficult to calculate, because it may comprise a complex capacitance network, consisting of intra- and inter-coil turn-to-turn capacitances of the coil sections. A method of calculating the series capacitance of a winding is proposed. This method is rigorous but simple to execute. The time-varying transient voltages along the winding are also calculated

  1. SVPWM Technique with Varying DC-Link Voltage for Common Mode Voltage Reduction in a Matrix Converter and Analytical Estimation of its Output Voltage Distortion

    Science.gov (United States)

    Padhee, Varsha

    converter. This conceivably aids the sizing and design of output passive filters. An analytical estimation method has been presented to achieve this purpose for an IMC. Knowledge of the fundamental component in the output voltage can be utilized to calculate its Total Harmonic Distortion (THD). The effectiveness of the proposed SVPWM algorithms and the analytical estimation technique is substantiated by simulations in MATLAB / Simulink and experiments on a laboratory prototype of the IMC. Proper comparison plots have been provided to contrast the performance of the proposed methods with the conventional SVPWM method. The behavior of output voltage distortion and CMV with variation in operating parameters like modulation index and output frequency has also been analyzed.

  2. Optimized Scheduling of Smart Meter Data Access for Real-time Voltage Quality Monitoring

    DEFF Research Database (Denmark)

    Kemal, Mohammed Seifu; Olsen, Rasmus Løvenstein; Schwefel, Hans-Peter

    2018-01-01

    Abstract—Active low-voltage distribution grids that support high integration of distributed generation such as photovoltaics and wind turbines require real-time voltage monitoring. At the same time, countries in Europe such as Denmark have close to 100% rollout of smart metering infrastructure....... The metering infrastructure has limitations in providing real-time measurements with small time granularity. This paper presents an algorithm for optimized scheduling of smart meter data access to provide real-time voltage quality monitoring. The algorithm is analyzed using a real distribution grid in Denmark...

  3. DC Voltage Control and Power-Sharing of Multi-Terminal DC Grids Based on Optimal DC Power Flow and Flexible Voltage Droop Strategy

    Directory of Open Access Journals (Sweden)

    F. Azma

    2015-06-01

    Full Text Available This paper develops an effective control framework for DC voltage control and power-sharing of multi-terminal DC (MTDC) grids based on an optimal power flow (OPF) procedure and voltage-droop control. In the proposed approach, an OPF algorithm is executed at the secondary level to find optimal references for the DC voltages and active powers of all voltage-regulating converters. Then, the voltage droop characteristics of the voltage-regulating converters, at the primary level, are tuned based on the OPF results such that the operating point of the MTDC grid lies on the voltage droop characteristics. Consequently, the optimally-tuned voltage droop controller leads to the optimal operation of the MTDC grid. In case of a variation in the load or generation of the grid, a new stable operating point is reached based on the voltage droop characteristics. By execution of a new OPF, the voltage droop characteristics are re-tuned for optimal operation of the MTDC grid after the occurrence of the load or generation variations. The results of simulation on a grid inspired by the CIGRE B4 DC grid test system demonstrate efficient grid performance under the proposed control strategy.
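
    A per-terminal voltage droop characteristic of the kind tuned by the OPF can be sketched as follows; the reference voltages, power set-points and droop constants are illustrative assumptions rather than OPF results.

```python
# Hedged sketch of a per-converter DC voltage droop characteristic: in the
# paper the secondary-level OPF supplies (p_ref, v_ref) so that the droop line
# passes through the optimal operating point; the numbers below are assumed.
def droop_power_order(v_dc, v_ref, p_ref, k_droop):
    """Active-power order of a voltage-regulating converter under V-P droop:
    DC-voltage deviations from the reference are shared with gain 1/k_droop."""
    return p_ref + (v_ref - v_dc) / k_droop

# After a load increase the DC voltage sags slightly and every droop-controlled
# terminal picks up part of the imbalance in proportion to 1/k_droop.
terminals = [  # (v_ref pu, p_ref pu, droop constant) -- assumed values
    (1.00, 0.40, 0.05),
    (1.00, 0.30, 0.10),
]
v_measured = 0.995
for v_ref, p_ref, k in terminals:
    print(round(droop_power_order(v_measured, v_ref, p_ref, k), 3))
```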

  4. A backtracking evolutionary algorithm for power systems

    Directory of Open Access Journals (Sweden)

    Chiou Ji-Pyng

    2017-01-01

    Full Text Available This paper presents a backtracking variable scaling hybrid differential evolution, called backtracking VSHDE, for solving the optimal network reconfiguration problem for power loss reduction in distribution systems. The concepts of backtracking, a variable scaling factor, migrating, accelerated, and boundary control mechanisms are embedded in the original differential evolution (DE) to form the backtracking VSHDE. The concepts of the backtracking and boundary control mechanisms can increase the population diversity. And, according to the convergence property of the population, the scaling factor is adjusted based on the 1/5 success rule of the evolution strategies (ESs). A larger population size must be used in evolutionary algorithms (EAs) to maintain the population diversity. To overcome this drawback, two operations, an acceleration operation and a migrating operation, are embedded into the proposed method. The feeder reconfiguration of distribution systems is modelled as an optimization problem which aims at achieving the minimum loss subject to voltage and current constraints. So, the proper system topology that reduces the power loss according to a load pattern is an important issue. Mathematically, the network reconfiguration system is a nonlinear programming problem with integer variables. One three-feeder network reconfiguration system from the literature is studied with the proposed backtracking VSHDE method and simulated annealing (SA). Numerical results show that the performance of the proposed method outperformed the SA method.
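
    The variable-scaling idea (adapting the DE scale factor with the 1/5 success rule of evolution strategies) can be sketched as below; this is plain DE/rand/1/bin on a toy objective with assumed parameter values, and the backtracking, migrating and accelerated operations of the full method are omitted.

```python
import numpy as np

# Hedged sketch of the variable-scaling idea only: classic DE/rand/1/bin with a
# scale factor F adapted by the 1/5 success rule, as the abstract describes.
rng = np.random.default_rng(2)

def sphere(x):                         # toy objective (placeholder)
    return float(np.sum(x ** 2))

dim, pop_size, gens = 5, 20, 200
F, CR, c = 0.8, 0.9, 0.82              # scale factor, crossover rate, 1/5-rule constant
pop = rng.uniform(-5, 5, size=(pop_size, dim))
fit = np.array([sphere(x) for x in pop])

for g in range(gens):
    successes = 0
    for i in range(pop_size):
        a, b_, c_ = pop[rng.choice(pop_size, 3, replace=False)]
        mutant = a + F * (b_ - c_)                       # differential mutation
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True                  # guarantee one gene crosses
        trial = np.where(cross, mutant, pop[i])
        trial = np.clip(trial, -5, 5)                    # simple boundary control
        f_trial = sphere(trial)
        if f_trial < fit[i]:
            pop[i], fit[i] = trial, f_trial
            successes += 1
    # 1/5 success rule: enlarge F if more than 1/5 of trials improved, shrink otherwise
    F = min(F / c, 1.0) if successes > pop_size / 5 else max(F * c, 0.1)

print("best fitness:", fit.min(), "final F:", round(F, 3))
```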

  5. A support vector machine (SVM) based voltage stability classifier

    Energy Technology Data Exchange (ETDEWEB)

    Dosano, R.D.; Song, H. [Kunsan National Univ., Kunsan, Jeonbuk (Korea, Republic of); Lee, B. [Korea Univ., Seoul (Korea, Republic of)

    2007-07-01

    Power system stability has become even more complex and critical with the advent of deregulated energy markets and the growing desire to fully employ existing transmission infrastructure. The economic pressure on electricity markets forces the operation of power systems and components to their limits of capacity and performance. System conditions can be more exposed to instability due to greater uncertainty in day-to-day system operations and an increase in the number of potential components for system disturbances, potentially resulting in voltage instability. This paper proposed a support vector machine (SVM) based power system voltage stability classifier using local measurements of voltage and active power of the load. It described the procedure for fast classification of long-term voltage stability using the SVM algorithm. The application of the SVM based voltage stability classifier was presented with reference to the choice of input parameters; input data preconditioning; moving window for feature vector; determination of learning samples; and other considerations in SVM applications. The paper presented a case study with numerical examples of an 11-bus test system. The test results for the feasibility study demonstrated that the classifier could offer an excellent performance in classification with time-series measurements in terms of long-term voltage stability. 9 refs., 14 figs.
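
    The classification step can be sketched with an off-the-shelf RBF-kernel SVM on local voltage and load-power features, as below; the synthetic data and labelling rule are placeholders for the time-series measurements of the 11-bus study.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hedged sketch of the classification step only: an RBF-kernel SVM trained on
# local (V, P) feature vectors labelled stable/unstable. The random data and
# labelling rule below stand in for the measurements used in the paper.
rng = np.random.default_rng(3)
n = 400
voltage = rng.uniform(0.85, 1.05, n)          # local bus voltage (pu)
p_load = rng.uniform(0.2, 1.2, n)             # local active load (pu)
X = np.column_stack([voltage, p_load])
# crude placeholder labelling: low voltage + high load -> "unstable" (1)
y = ((voltage < 0.92) & (p_load > 0.8)).astype(int)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
print("prediction for V=0.90 pu, P=1.0 pu:", clf.predict([[0.90, 1.0]])[0])
```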

  6. Active Power Filter DC Bus Voltage Piecewise Reaching Law Variable Structure Control

    Directory of Open Access Journals (Sweden)

    Baolian Liu

    2014-01-01

    Full Text Available DC bus voltage stability control is one of the key technologies to ensure that an Active Power Filter (APF) operates stably. External disturbances such as power grid and load fluctuations and changes in the system parameters may affect the stability of the APF DC bus voltage and the normal operation of the APF. The mathematical model of the DC bus voltage is established according to the power balance principle, and a DC bus voltage piecewise reaching law variable structure control algorithm is proposed to solve the above problem; the design method is given. The simulation and experimental results proved that the proposed variable structure control algorithm can effectively eliminate the chattering problem existing in traditional variable structure control, is insensitive to system disturbances, and has good robustness, fast dynamic response speed and a stable DC bus voltage with small fluctuation. The above advantages ensure the compensation effect of the APF.
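
    A piecewise reaching law of the general kind named above can be sketched as follows; the sliding variable, gains and boundary-layer width are illustrative assumptions and the sketch is not the controller designed in the paper.

```python
import numpy as np

# Hedged sketch of a piecewise reaching law for a sliding variable s (e.g. a
# combination of the DC-bus voltage error and its filtered derivative). Far
# from the sliding surface a constant-plus-proportional rate is used; inside a
# boundary layer the switching term is softened to suppress chattering. The
# gains are illustrative, not those designed in the paper.
def reaching_law(s, eps=50.0, k=200.0, delta=0.5):
    if abs(s) > delta:                       # far from the surface: fast reaching
        return -eps * np.sign(s) - k * s
    return -(eps / delta) * s - k * s        # inside the boundary layer: linear

# simple forward-Euler demonstration of s(t) being driven to zero
dt, s = 1e-4, 4.0
trace = []
for _ in range(600):
    s += reaching_law(s) * dt
    trace.append(s)
print("sliding variable after 60 ms:", round(trace[-1], 5))
```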

  7. Power-MOSFET Voltage Regulator

    Science.gov (United States)

    Miller, W. N.; Gray, O. E.

    1982-01-01

    Ninety-six parallel MOSFET devices with a two-stage feedback circuit form a high-current dc voltage regulator that also acts as a fully-on solid-state switch when the fuel-cell output falls below the regulated voltage. Ripple voltage is less than 20 mV, and transient recovery time is less than 50 ms. The parallel MOSFETs act as a high-current dc regulator and switch. The regulator can be used wherever large direct currents must be controlled. It can be applied to inverters, industrial furnaces, photovoltaic solar generators, dc motors, and electric autos.

  8. A Deep Learning Algorithm for Prediction of Age-Related Eye Disease Study Severity Scale for Age-Related Macular Degeneration from Color Fundus Photography.

    Science.gov (United States)

    Grassmann, Felix; Mengelkamp, Judith; Brandl, Caroline; Harsch, Sebastian; Zimmermann, Martina E; Linkohr, Birgit; Peters, Annette; Heid, Iris M; Palm, Christoph; Weber, Bernhard H F

    2018-04-10

    Age-related macular degeneration (AMD) is a common threat to vision. While classification of disease stages is critical to understanding disease risk and progression, several systems based on color fundus photographs are known. Most of these require in-depth and time-consuming analysis of fundus images. Herein, we present an automated computer-based classification algorithm. Algorithm development for AMD classification based on a large collection of color fundus images. Validation is performed on a cross-sectional, population-based study. We included 120 656 manually graded color fundus images from 3654 Age-Related Eye Disease Study (AREDS) participants. AREDS participants were >55 years of age, and non-AMD sight-threatening diseases were excluded at recruitment. In addition, performance of our algorithm was evaluated in 5555 fundus images from the population-based Kooperative Gesundheitsforschung in der Region Augsburg (KORA; Cooperative Health Research in the Region of Augsburg) study. We defined 13 classes (9 AREDS steps, 3 late AMD stages, and 1 for ungradable images) and trained several convolution deep learning architectures. An ensemble of network architectures improved prediction accuracy. An independent dataset was used to evaluate the performance of our algorithm in a population-based study. κ Statistics and accuracy to evaluate the concordance between predicted and expert human grader classification. A network ensemble of 6 different neural net architectures predicted the 13 classes in the AREDS test set with a quadratic weighted κ of 92% (95% confidence interval, 89%-92%) and an overall accuracy of 63.3%. In the independent KORA dataset, images wrongly classified as AMD were mainly the result of a macular reflex observed in young individuals. By restricting the KORA analysis to individuals >55 years of age and prior exclusion of other retinopathies, the weighted and unweighted κ increased to 50% and 63%, respectively. Importantly, the algorithm
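
    The headline metric, the quadratic weighted κ between predicted and expert grades on an ordinal severity scale, can be computed as sketched below; the toy label vectors stand in for the study's 13-class gradings.

```python
from sklearn.metrics import cohen_kappa_score

# Hedged illustration of the headline metric only: quadratic-weighted kappa
# between "expert" grades and "predicted" grades on an ordinal severity scale.
# The toy labels below are placeholders; the study uses 13 classes and >100k images.
expert    = [0, 1, 2, 2, 3, 4, 5, 6, 7, 8, 9, 9, 10, 11, 12]
predicted = [0, 1, 2, 3, 3, 4, 5, 5, 7, 8, 9, 10, 10, 11, 12]

kappa_q = cohen_kappa_score(expert, predicted, weights="quadratic")
print("quadratic weighted kappa:", round(kappa_q, 3))
```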

  9. Voltage balancing strategies for serial connection of microbial fuel cells

    Science.gov (United States)

    Khaled, Firas; Ondel, Olivier; Allard, Bruno; Buret, François

    2015-07-01

    The microbial fuel cell (MFC) electrochemically converts organic matter into electricity by means of the metabolism of bacteria. The MFC power output is limited by low-voltage, low-current characteristics, in the range of microwatts to milliwatts per litre. In order to produce a sufficient voltage level (>1.5 V) and sufficient power to supply real applications such as autonomous sensors, it is necessary either to scale up a single unit or to connect multiple units together. Many connection topologies are possible: serial association to increase the output voltage, parallel connection to increase the output current, or series/parallel connection to step up both voltage and current. The association of MFCs in series is a solution to increase the voltage to an acceptable value and to mutualize the units' output power. The serial association of a large number of MFCs presents several issues. The first is the hydraulic coupling among MFCs when they share the same substrate. The second is the dispersion between generators, which leads to non-optimal stack efficiency because maximum power point (MPP) operation of all MFCs is not possible. Voltage balancing is a solution to compensate for these non-uniformities towards the MPP. This paper presents solutions to improve the efficiency of a stack of serially connected MFCs through a voltage-balancing circuit. Contribution to the topical issue "Electrical Engineering Symposium (SGE 2014)", edited by Adel Razek

  10. Modular High Voltage Power Supply

    Energy Technology Data Exchange (ETDEWEB)

    Newell, Matthew R. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-05-18

    The goal of this project is to develop a modular high voltage power supply that will meet the needs of safeguards applications and provide a modular plug and play supply for use with standard electronic racks.

  11. Power converters for medium voltage networks

    CERN Document Server

    Islam, Md Rabiul; Zhu, Jianguo

    2014-01-01

    This book examines a number of topics, mainly in connection with advances in semiconductor devices and magnetic materials and developments in medium and large-scale renewable power plant technologies, grid integration techniques and new converter topologies, including advanced digital control systems for medium-voltage networks. The book's individual chapters provide an extensive compilation of fundamental theories and in-depth information on current research and development trends, while also exploring new approaches to overcoming some critical limitations of conventional grid integration te

  12. PSO Algorithm for an Optimal Power Controller in a Microgrid

    Science.gov (United States)

    Al-Saedi, W.; Lachowicz, S.; Habibi, D.; Bass, O.

    2017-07-01

    This paper presents a Particle Swarm Optimization (PSO) algorithm to improve the quality of the power supply in a microgrid. The algorithm is proposed as a real-time self-tuning method used in a power controller for an inverter-based Distributed Generation (DG) unit. In such a system, the voltage and frequency are the main control objectives, particularly when the microgrid is islanded or during load changes. In this work, the PSO algorithm is implemented to find the optimal controller parameters that satisfy the control objectives. The results show the high performance of the applied PSO algorithm in regulating the microgrid voltage and frequency.
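
    To make the tuning loop concrete, the sketch below is a minimal, generic PSO in Python searching two hypothetical controller gains against a user-supplied cost function. The quadratic surrogate at the end is a stand-in; the paper's actual objective is the simulated voltage/frequency regulation error of the microgrid, which is not reproduced here.

```python
import random

def pso(cost, bounds, n_particles=20, n_iter=50, w=0.7, c1=1.5, c2=1.5):
    """Plain global-best PSO minimizing cost() over box-constrained parameters."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Hypothetical quadratic surrogate standing in for the simulated regulation cost;
# gains (kp, ki) and their bounds are illustrative only.
(best_kp, best_ki), _ = pso(lambda g: (g[0] - 2.0) ** 2 + (g[1] - 0.5) ** 2,
                            bounds=[(0.0, 10.0), (0.0, 5.0)])
```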

  13. Reliability criteria for voltage stability

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, Carson W; Silverstein, Brian L [Bonneville Power Administration, Portland, OR (United States)

    1994-12-31

    In the face of cost pressures, there is a need to allocate scarce resources more effectively in order to achieve voltage stability. This naturally leads to the development of probabilistic criteria and notions of risk management. This paper presents a discussion of criteria for long-term voltage stability, limited to cases in which the time frames are typically several minutes. (author) 14 refs., 1 fig.

  14. High voltage distributions in RPCs

    International Nuclear Information System (INIS)

    Inoue, Y.; Muranishi, Y.; Nakamura, M.; Nakano, E.; Takahashi, T.; Teramoto, Y.

    1996-01-01

    High voltage distributions on the inner surfaces of RPC electrodes were calculated using a two-dimensional resistor network model. The calculated results show that the surface resistivity of the electrodes should be high compared to their volume resistivity in order to obtain a uniform high voltage over the surface. Our model predicts that the rate capability of RPCs should be inversely proportional to the thickness of the electrodes if the ratio of surface-to-volume resistivity is low. (orig.)

  15. Unequal Input Voltages Distribution Between the Serial Connected Halfbridges

    Directory of Open Access Journals (Sweden)

    Radovan Ovcarcik

    2006-01-01

    Full Text Available This paper describes the topology of a DC-DC converter consisting of two serially connected half-bridges. The secondary circuit is realized as a conventional full-wave rectifier. The main advantage of this topology is the possibility of dividing the input voltage between the half-bridges. The converter is controlled using phase-shift modulation, which allows a ZVS operation mode. Voltage unbalance between the inputs is an important problem of the presented topology; it must be avoided by the control algorithm, which is described in the text. The practical results demonstrate the zero-voltage-switching technique and the limits of the chosen topology and of the control.

  16. Cavity Voltage Phase Modulation MD blocks 3 and 4

    CERN Document Server

    Mastoridis, T; Butterworth, A; Molendijk, J; Tuckmantel, J

    2013-01-01

    The LHC RF/LLRF system is currently set up for an extremely stable RF voltage to minimize transient beam loading effects. The present scheme cannot be extended beyond nominal beam current, since the demanded power would push the klystrons to saturation. For beam currents above nominal (and possibly earlier), the cavity phase modulation by the beam (transient beam loading) will not be corrected, but the strong RF feedback and One-Turn Delay feedback will still be active for RF loop and beam stability in physics. To achieve this, the voltage set point should be adapted for each bunch. The goal of these MDs was to test the firmware version of an iterative algorithm that adjusts the voltage set point to achieve the optimal phase modulation for klystron forward power considerations.

  17. Macroeconomic Assessment of Voltage Sags

    Directory of Open Access Journals (Sweden)

    Sinan Küfeoğlu

    2016-12-01

    Full Text Available The electric power sector has changed dramatically since the 1980s. Electricity customers now demand uninterrupted, high-quality service from both utilities and authorities. Having become more and more dependent on voltage-sensitive electronic equipment, the industry sector is the one most affected by voltage disturbances, and voltage sags are one of the most crucial problems for these customers. Utilities, on the other hand, conduct cost-benefit analyses before undertaking new investment projects. At this point, understanding the costs of voltage sags becomes imperative for planning purposes. The characteristics of electric power consumption, and hence the susceptibility to voltage sags, differ considerably among industry subsectors. Therefore, a model that addresses the estimation of the worth of electric power reliability for a large number of customer groups is necessary. This paper introduces a macroeconomic model to calculate Customer Voltage Sag Costs (CVSCs) for industry sector customers. The proposed model makes use of analytical data such as value added, annual energy consumption, working hours, and average outage durations, and provides a straightforward, credible, and easy-to-follow methodology for the estimation of CVSCs.
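
    As a rough illustration only, and not the authors' exact formulation, a per-sag cost can be approximated from the analytical quantities listed above by valuing the production time lost during an equivalent process interruption. All figures and parameter names in the sketch are hypothetical.

```python
def customer_voltage_sag_cost(value_added_per_year, working_hours_per_year,
                              equivalent_downtime_hours_per_sag):
    """Approximate cost of one voltage sag as the value added lost during an
    equivalent process-interruption time (simplified stand-in model)."""
    value_added_per_hour = value_added_per_year / working_hours_per_year
    return value_added_per_hour * equivalent_downtime_hours_per_sag

# Hypothetical subsector figures: 5 M currency units of value added per year,
# 4000 working hours per year, 0.5 h of lost production per sag.
cost_per_sag = customer_voltage_sag_cost(5_000_000.0, 4000.0, 0.5)
```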

  18. A matter of quantum voltages

    Energy Technology Data Exchange (ETDEWEB)

    Sellner, Bernhard; Kathmann, Shawn M., E-mail: Shawn.Kathmann@pnnl.gov [Physical Sciences Division, Pacific Northwest National Laboratory, Richland, Washington 99352 (United States)

    2014-11-14

    Voltages inside matter are relevant to crystallization, materials science, biology, catalysis, and aqueous chemistry. The variation of voltages in matter can be measured by experiment, however, modern supercomputers allow the calculation of accurate quantum voltages with spatial resolutions of bulk systems well beyond what can currently be measured provided a sufficient level of theory is employed. Of particular interest is the Mean Inner Potential (V_o) – the spatial average of these quantum voltages referenced to the vacuum. Here we establish a protocol to reliably evaluate V_o from quantum calculations. Voltages are very sensitive to the distribution of electrons and provide metrics to understand interactions in condensed phases. In the present study, we find excellent agreement with measurements of V_o for vitrified water and salt crystals and demonstrate the impact of covalent and ionic bonding as well as intermolecular/atomic interactions. Certain aspects in this regard are highlighted making use of simple model systems/approximations. Furthermore, we predict V_o as well as the fluctuations of these voltages in aqueous NaCl electrolytes and characterize the changes in their behavior as the resolution increases below the size of atoms.

  19. The WACMOS-ET project – Part 1: Tower-scale evaluation of four remote-sensing-based evapotranspiration algorithms

    KAUST Repository

    Michel, D.; Jimé nez, C.; Miralles, Diego G.; Jung, M.; Hirschi, M.; Ershadi, Ali; Martens, B.; McCabe, Matthew; Fisher, J. B.; Mu, Q.; Seneviratne, S. I.; Wood, E. F.; Ferná ndez-Prieto, D.

    2016-01-01

    The WAter Cycle Multi-mission Observation Strategy – EvapoTranspiration (WACMOS-ET) project has compiled a forcing data set covering the period 2005–2007 that aims to maximize the exploitation of European Earth Observations data sets for evapotranspiration (ET) estimation. The data set was used to run four established ET algorithms: the Priestley–Taylor Jet Propulsion Laboratory model (PT-JPL), the Penman–Monteith algorithm from the MODerate resolution Imaging Spectroradiometer (MODIS) evaporation product (PM-MOD), the Surface Energy Balance System (SEBS) and the Global Land Evaporation Amsterdam Model (GLEAM). In addition, in situ meteorological data from 24 FLUXNET towers were used to force the models, with results from both forcing sets compared to tower-based flux observations. Model performance was assessed on several timescales using both sub-daily and daily forcings. The PT-JPL model and GLEAM provide the best performance for both satellite- and tower-based forcing as well as for the considered temporal resolutions. Simulations using the PM-MOD were mostly underestimated, while the SEBS performance was characterized by a systematic overestimation. In general, all four algorithms produce the best results in wet and moderately wet climate regimes. In dry regimes, the correlation and the absolute agreement with the reference tower ET observations were consistently lower. While ET derived with in situ forcing data agrees best with the tower measurements (R2  =  0.67), the agreement of the satellite-based ET estimates is only marginally lower (R2  =  0.58). Results also show similar model performance at daily and sub-daily (3-hourly) resolutions. Overall, our validation experiments against in situ measurements indicate that there is no single best-performing algorithm across all biome and forcing types. An extension of the evaluation to a larger selection of 85 towers (model inputs resampled to a

  20. The WACMOS-ET project – Part 1: Tower-scale evaluation of four remote-sensing-based evapotranspiration algorithms

    KAUST Repository

    Michel, D.

    2016-02-23

    The WAter Cycle Multi-mission Observation Strategy – EvapoTranspiration (WACMOS-ET) project has compiled a forcing data set covering the period 2005–2007 that aims to maximize the exploitation of European Earth Observations data sets for evapotranspiration (ET) estimation. The data set was used to run four established ET algorithms: the Priestley–Taylor Jet Propulsion Laboratory model (PT-JPL), the Penman–Monteith algorithm from the MODerate resolution Imaging Spectroradiometer (MODIS) evaporation product (PM-MOD), the Surface Energy Balance System (SEBS) and the Global Land Evaporation Amsterdam Model (GLEAM). In addition, in situ meteorological data from 24 FLUXNET towers were used to force the models, with results from both forcing sets compared to tower-based flux observations. Model performance was assessed on several timescales using both sub-daily and daily forcings. The PT-JPL model and GLEAM provide the best performance for both satellite- and tower-based forcing as well as for the considered temporal resolutions. Simulations using the PM-MOD were mostly underestimated, while the SEBS performance was characterized by a systematic overestimation. In general, all four algorithms produce the best results in wet and moderately wet climate regimes. In dry regimes, the correlation and the absolute agreement with the reference tower ET observations were consistently lower. While ET derived with in situ forcing data agrees best with the tower measurements (R2  =  0.67), the agreement of the satellite-based ET estimates is only marginally lower (R2  =  0.58). Results also show similar model performance at daily and sub-daily (3-hourly) resolutions. Overall, our validation experiments against in situ measurements indicate that there is no single best-performing algorithm across all biome and forcing types. An extension of the evaluation to a larger selection of 85 towers (model inputs resampled to a

  1. Power Quality Problems Mitigation using Dynamic Voltage Restorer in Egypt Thermal Research Reactor (ETRR-2)

    International Nuclear Information System (INIS)

    Kandil, T.; Ayad, N.M.; Abdel Haleam, A.; Mahmoud, M.

    2013-01-01

    The Egypt thermal research reactor (ETRR-2) has been subjected to several power quality problems such as voltage sags/swells, harmonic distortion, and short interruptions. ETRR-2 encompasses a wide range of loads which are very sensitive to voltage variations, and this has led to several unplanned shutdowns of the reactor due to triggering of the Reactor Protection System (RPS). The Dynamic Voltage Restorer (DVR) has recently been introduced to protect sensitive loads from voltage sags and other voltage disturbances. It is considered one of the most efficient and effective solutions; its appeal includes small size and fast dynamic response to the disturbance. This paper describes a proposal for a DVR to improve power quality in the ETRR-2 electrical distribution system. The control of the compensation voltage is based on the d-q-o algorithm. Simulation is carried out in Matlab/Simulink to verify the performance of the proposed method
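
    For readers unfamiliar with the d-q-o stage of such DVR controllers, the sketch below shows one common (amplitude-invariant) form of the abc-to-dq0 Park transform; the angle theta would typically come from a PLL. This is an illustrative helper only, not the paper's implementation.

```python
import numpy as np

def abc_to_dq0(v_a, v_b, v_c, theta):
    """Amplitude-invariant Park transform of three-phase quantities (one common convention)."""
    two_pi_3 = 2.0 * np.pi / 3.0
    d = (2.0 / 3.0) * (v_a * np.cos(theta)
                       + v_b * np.cos(theta - two_pi_3)
                       + v_c * np.cos(theta + two_pi_3))
    q = -(2.0 / 3.0) * (v_a * np.sin(theta)
                        + v_b * np.sin(theta - two_pi_3)
                        + v_c * np.sin(theta + two_pi_3))
    zero = (v_a + v_b + v_c) / 3.0
    return d, q, zero
```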

  2. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack; Demanet, Laurent; Maxwell, Nicholas; Ying, Lexing

    2014-01-01

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(Nd) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r2Nd logN). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms, and an analogue of a three-dimensional generalized Radon transform were, respectively, observed to strong-scale from 1-node/16-cores up to 1024-nodes/16,384-cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  3. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack

    2014-02-04

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(Nd) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r2Nd logN). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms, and an analogue of a three-dimensional generalized Radon transform were, respectively, observed to strong-scale from 1-node/16-cores up to 1024-nodes/16,384-cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  4. Coordinated Voltage Control in Distribution Network with the Presence of DGs and Variable Loads Using Pareto and Fuzzy Logic

    Directory of Open Access Journals (Sweden)

    José Raúl Castro

    2016-02-01

    Full Text Available This paper presents an efficient algorithm to solve the multi-objective (MO) voltage control problem in distribution networks. The proposed algorithm minimizes the following three objectives: voltage variation on pilot buses, reactive power production ratio deviation, and generator voltage deviation. This work leverages two optimization techniques: fuzzy logic to find the optimum value of the reactive power of the distributed generation (DG), and Pareto optimization to find the optimal value of the pilot bus voltage so that it produces lower losses under the constraint that the voltage remains within established limits. Variable loads and DGs are taken into account in this paper. The algorithm is tested on an IEEE 13-node test feeder, and the results show the effectiveness of the proposed model.
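
    A small helper in the spirit of the Pareto stage is sketched below: a candidate pilot-bus setting is kept only if no other candidate is at least as good in all three objectives and strictly better in one. The objective vectors are hypothetical placeholders.

```python
def dominates(obj_a, obj_b):
    """True if objective vector A Pareto-dominates B (all objectives minimized)."""
    return all(a <= b for a, b in zip(obj_a, obj_b)) and any(
        a < b for a, b in zip(obj_a, obj_b))

def pareto_front(candidates):
    """Keep only the non-dominated candidates."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other != c)]

# Hypothetical (pilot-bus voltage variation, reactive-ratio deviation,
# generator voltage deviation) triples for three candidate settings.
front = pareto_front([(0.02, 0.10, 0.01), (0.03, 0.08, 0.02), (0.04, 0.12, 0.03)])
```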

  5. Mitigation of voltage sags in the distribution system with dynamic voltage restorer

    International Nuclear Information System (INIS)

    Viglas, D.; Belan, A.

    2012-01-01

    The dynamic voltage restorer is a custom power device used to mitigate voltage sags or swells in an electrical distribution system. The components of the Dynamic Voltage Restorer are injection transformers, a voltage source inverter, passive filters and energy storage. The main function of the Dynamic Voltage Restorer is to inject a three-phase voltage in series and in synchronism with the grid voltages in order to compensate voltage disturbances. This article deals with the mitigation of voltage sags caused by a three-phase short circuit. The Dynamic Voltage Restorer is modelled in MATLAB/Simulink. (Authors)

  6. Distributed stability control using intelligent voltage-margin relay

    Energy Technology Data Exchange (ETDEWEB)

    Wiszniewski, A.; Rebizant, W. [Wroclaw Univ. of Technology (Poland); Klimek, A. [Powertech Labs Inc., Surrey, BC (Canada)

    2010-07-01

    This paper presented an intelligent relay that operates if the load to source impedance ratio decreases to a level that is dangerously close to the stability limit, which leads to power system blackouts. The intelligent voltage-margin/difference relay installed at receiving substations automatically initiates action if the voltage stability margin drops to a dangerously low level. The relay decides if the tap changing devices are to be blocked and if under-voltage load shedding should be initiated, thereby mitigating an evolving instability. The intelligent relay has two levels of operation. At the first stage, which corresponds to the higher load to source impedance ratio, the relay initiates blocking of the tap changer. At the second stage, corresponding to the lower source to load impedance ratio, load shedding is initiated. The relay operates when the load to source impedance ratio reaches a certain predetermined level, but it does not depend either on the level of the source voltage or on the difference of source and load impedance phase angles. The algorithm for the relay is relatively simple and uses only locally available signals. Consequently, the transformer is well controlled to eliminate the cases of voltage instability. 6 refs., 7 figs.

  7. Massively parallel and linear-scaling algorithm for second-order Møller–Plesset perturbation theory applied to the study of supramolecular wires

    DEFF Research Database (Denmark)

    Kjærgaard, Thomas; Baudin, Pablo; Bykov, Dmytro

    2017-01-01

    We present a scalable cross-platform hybrid MPI/OpenMP/OpenACC implementation of the Divide–Expand–Consolidate (DEC) formalism with portable performance on heterogeneous HPC architectures. The Divide–Expand–Consolidate formalism is designed to reduce the steep computational scaling of conventiona...

  8. A Half-Bridge Voltage Balancer with New Controller for Bipolar DC Distribution Systems

    Directory of Open Access Journals (Sweden)

    Byung-Moon Han

    2016-03-01

    Full Text Available This paper proposes a half-bridge voltage balancer with a new controller for bipolar DC distribution systems. The proposed control scheme consists of two cascaded Proportional-Integral (PI) controls rather than a single PI control for balancing the pole voltage. In order to confirm the voltage-balancing performance, a typical bipolar DC distribution system including a half-bridge voltage balancer with the proposed controller was analyzed by computer simulation. Experiments with a scaled prototype were also carried out to confirm the simulation results. The half-bridge voltage balancer with the proposed controller shows better pole-voltage-balancing performance than the half-bridge voltage balancer with a single PI control.
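
    The cascaded structure can be sketched as an outer pole-voltage-balance PI whose output becomes the current reference of an inner PI loop. Gains, sampling period and signal names below are hypothetical placeholders, not the paper's tuned values.

```python
class PI:
    """Discrete PI controller with a simple rectangular integrator."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def step(self, error):
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

# Hypothetical gains and sampling period.
outer = PI(kp=0.5, ki=20.0, dt=1e-4)    # pole-voltage balance loop
inner = PI(kp=2.0, ki=500.0, dt=1e-4)   # balancer inductor-current loop

def control_step(v_pos, v_neg, i_balancer):
    i_ref = outer.step(v_pos - v_neg)       # drive the pole-voltage difference to zero
    duty = inner.step(i_ref - i_balancer)   # track the resulting current reference
    return min(max(duty, 0.0), 1.0)         # clamp to a valid duty cycle
```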

  9. Solid-state high voltage modulator and its application to rf source high voltage power supplies

    International Nuclear Information System (INIS)

    Tooker, J.F.; Huynh, P.; Street, R.W.

    2009-01-01

    A solid-state high voltage modulator is described in which series-connected insulated-gate bipolar transistors (IGBTs) are switched at a fixed frequency by a pulse width modulation (PWM) regulator, that adjusts the pulse width to control the voltage out of an inductor-capacitor filter network. General Atomics proposed the HV power supply (HVPS) topology of multiple IGBT modulators connected to a common HVdc source for the large number of 1 MW klystrons in the linear accelerator of the Accelerator Production of Tritium project. The switching of 24 IGBTs to obtain 20 kVdc at 20 A for short pulses was successfully demonstrated. This effort was incorporated into the design of a -70 kV, 80 A, IGBT modulator, and in a short-pulse test 12 IGBTs regulated -5 kV at 50 A under PWM control. These two tests confirm the practicality of solid-state IGBT modulators to regulate high voltage at reasonable currents. Tokamaks such as ITER require large rf heating and current drive systems with multiple rf sources. A HVPS topology is presented that readily adapts to the three rf heating systems on ITER. To take advantage of the known economy of scale for power conversion equipment, a single HVdc source feeds multiple rf sources. The large power conversion equipment, which is located outside, converts the incoming utility line voltage directly to the HVdc needed for the class of rf sources connected to it, to further reduce cost. The HVdc feeds a set of IGBT modulators, one for each rf source, to independently control the voltage applied to each source, maximizing operational flexibility. Only the modulators are indoors, close to the rf sources, minimizing the use of costly near-tokamak floor space.

  10. Development of a New Cascade Voltage-Doubler for Voltage Multiplication

    OpenAIRE

    Toudeshki, Arash; Mariun, Norman; Hizam, Hashim; Abdul Wahab, Noor Izzri

    2014-01-01

    For more than eight decades, cascade voltage-doubler circuits have been used as a method to produce a DC output voltage higher than the input voltage. In this paper, the topological developments of cascade voltage-doublers are reviewed and a new circuit configuration for the cascade voltage-doubler is presented. This circuit can produce a higher DC output voltage and better output quality than conventional cascade voltage-doubler circuits with the same number of stages.
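
    For context, the textbook estimate of an n-stage cascade multiplier's loaded output (no-load voltage 2nV_peak minus the usual droop term, together with the ripple) is sketched below. It only illustrates why output quality degrades with stage count and does not reproduce the paper's new configuration; all numbers are illustrative.

```python
def cascade_multiplier_output(v_peak, n_stages, i_load, f_switch, c_stage):
    """No-load voltage, load droop and ripple of an n-stage cascade (textbook model)."""
    v_no_load = 2.0 * n_stages * v_peak
    q = i_load / (f_switch * c_stage)
    droop = q * ((2.0 / 3.0) * n_stages ** 3 + 0.5 * n_stages ** 2 - n_stages / 6.0)
    ripple = q * n_stages * (n_stages + 1) / 2.0
    return v_no_load - droop, ripple

# Illustrative numbers: 325 V peak input, 3 stages, 10 mA load, 50 Hz, 10 uF per stage.
v_out, v_ripple = cascade_multiplier_output(325.0, 3, 0.01, 50.0, 10e-6)
```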

  11. Algorithmic alternatives

    International Nuclear Information System (INIS)

    Creutz, M.

    1987-11-01

    A large variety of Monte Carlo algorithms are being used for lattice gauge simulations. For purely bosonic theories, present approaches are generally adequate; nevertheless, overrelaxation techniques promise savings by a factor of about three in computer time. For fermionic fields the situation is more difficult and less clear. Algorithms which involve an extrapolation to a vanishing step size are all quite closely related. Methods which do not require such an approximation tend to require computer time which grows as the square of the volume of the system. Recent developments combining global accept/reject stages with Langevin or microcanonical updatings promise to reduce this growth to V^(4/3)

  12. Combinatorial algorithms

    CERN Document Server

    Hu, T C

    2002-01-01

    Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristic and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9

  13. A new approach for optimum DG placement and sizing based on voltage stability maximization and minimization of power losses

    International Nuclear Information System (INIS)

    Aman, M.M.; Jasmon, G.B.; Bakar, A.H.A.; Mokhlis, H.

    2013-01-01

    Highlights: • A new algorithm is proposed for optimum DG placement and sizing. • I²R loss minimization and voltage stability maximization are considered in the fitness function. • Bus voltage stability and line stability are considered in voltage stability maximization. • Multi-objective PSO is used to solve the problem. • The proposed method is compared with analytical and grid search algorithms. - Abstract: Distributed Generation (DG) placement on the basis of loss minimization and on the basis of system voltage stability maximization are two different approaches discussed in the literature. The newly proposed algorithm uses a multi-objective approach to combine the two. Minimization of power losses and maximization of voltage stability, accounting for both the weakest voltage bus and the weakest link in the system, are considered in the fitness function. The Particle Swarm Optimization (PSO) algorithm is used in this paper to solve the multi-objective problem. This paper also compares the proposed method with existing DG placement methods. From the results, the proposed method is found to be more advantageous than previous work in terms of voltage profile improvement, maximization of system loadability, reduction of power system losses and maximization of bus and line voltage stability. The results are validated on 12-bus, 30-bus, 33-bus and 69-bus radial distribution networks and are also discussed in detail

  14. Voltage and temperature dependence of the grain boundary tunneling magnetoresistance in manganites

    OpenAIRE

    Hoefener, C.; Philipp, J. B.; Klein, J.; Alff, L.; Marx, A.; Buechner, B.; Gross, R.

    2000-01-01

    We have performed a systematic analysis of the voltage and temperature dependence of the tunneling magnetoresistance (TMR) of grain boundaries (GB) in the manganites. We find a strong decrease of the TMR with increasing voltage and temperature. The decrease of the TMR with increasing voltage scales with an increase of the inelastic tunneling current due to multi-step inelastic tunneling via localized defect states in the tunneling barrier. This behavior can be described within a three-current...

  15. Design of shielded voltage divider for impulse voltage measurement

    International Nuclear Information System (INIS)

    Kato, Shohei; Kouno, Teruya; Maruyama, Yoshio; Kikuchi, Koji.

    1976-01-01

    The dividers used to study insulation and electric discharge phenomena in high voltage equipment suffer from changes in response characteristics owing to adjacent bodies, and from induced noise. To improve the characteristics, an enclosed-type divider shielded with metal has been investigated, and a divider with excellent response has been obtained by adopting a frequency-separating divider system, split into a resistance divider (lower frequency region) and a capacitance divider (higher frequency region) to avoid degrading the response. Theoretical analysis was carried out for the cases in which the residual inductance of the small capacitance divider can or cannot be neglected, and in which connecting wires are added. Next, the structure of the divider and the electric field design of the divider manufactured on the basis of this theory are described, and its response characteristics were measured. The results show that a 1 MV impulse voltage can be measured with a response time within 10 ns. Although this divider targets impulse voltages whose duration is about that of the standard lightning impulse, in view of the heat capacity associated with the 10.5 kΩ input resistance, it is expected that the divider can be applied to voltages of longer duration by increasing the input resistance in the future. (Wakatsuki, Y.)
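
    The idea behind such a frequency-separating (compensated) divider can be checked quickly: the resistive branch sets the low-frequency ratio, the capacitive branch sets the high-frequency ratio, and the two agree when R1·C1 = R2·C2. The component values below are illustrative, not the authors'.

```python
import math

def divider_ratios(r1, r2, c1, c2):
    """Low- and high-frequency division ratios and the compensation check."""
    ratio_lf = r2 / (r1 + r2)        # resistive division dominates at low frequency
    ratio_hf = c1 / (c1 + c2)        # capacitive division dominates at high frequency
    compensated = math.isclose(r1 * c1, r2 * c2, rel_tol=1e-6)
    return ratio_lf, ratio_hf, compensated

# Illustrative values chosen so that R1*C1 == R2*C2 (flat frequency response).
print(divider_ratios(r1=10.4e3, r2=100.0, c1=100e-12, c2=10.4e-9))
```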

  16. The high voltage homopolar generator

    Science.gov (United States)

    Price, J. H.; Gully, J. H.; Driga, M. D.

    1986-11-01

    System and component design features of a proposed high voltage homopolar generator (HVHPG) are described. The system is to have an open-circuit voltage of 500 V, a peak output current of 500 kA, 3.25 MJ of stored inertial energy, and an average magnetic-flux density of 5 T. Stator assembly components are discussed, including the stator, mount structure, hydrostatic bearings, main and motoring brushgears, and rotor. Planned operational procedures, such as motoring the rotor to full speed and operation with a superconducting field coil, are delineated.

  17. Resilient architecture design for voltage variation

    CERN Document Server

    Reddi, Vijay Janapa

    2013-01-01

    Shrinking feature size and diminishing supply voltage are making circuits sensitive to supply voltage fluctuations within the microprocessor, caused by normal workload activity changes. If left unattended, voltage fluctuations can lead to timing violations or even transistor lifetime issues that degrade processor robustness. Mechanisms that learn to tolerate, avoid, and eliminate voltage fluctuations based on program and microarchitectural events can help steer the processor clear of danger, thus enabling tighter voltage margins that improve performance or lower power consumption. We describe

  18. Comparing deep neural network and other machine learning algorithms for stroke prediction in a large-scale population-based electronic medical claims database.

    Science.gov (United States)

    Chen-Ying Hung; Wei-Chen Chen; Po-Tsun Lai; Ching-Heng Lin; Chi-Chun Lee

    2017-07-01

    Electronic medical claims (EMCs) can be used to accurately predict the occurrence of a variety of diseases, which can contribute to precise medical interventions. While there is growing interest in the application of machine learning (ML) techniques to address clinical problems, the use of deep learning in healthcare has only recently gained attention. Deep learning, such as the deep neural network (DNN), has achieved impressive results in the areas of speech recognition, computer vision, and natural language processing in recent years. However, deep learning is often difficult to comprehend due to the complexities of its framework. Furthermore, this method had not yet been demonstrated to achieve better performance than other conventional ML algorithms in disease prediction tasks using EMCs. In this study, we utilize a large population-based EMC database of around 800,000 patients to compare a DNN with three other ML approaches for predicting 5-year stroke occurrence. The results show that the DNN and the gradient boosting decision tree (GBDT) achieve similarly high prediction accuracies that are better than those of the logistic regression (LR) and support vector machine (SVM) approaches. Meanwhile, the DNN achieves optimal results using smaller amounts of patient data compared to the GBDT method.

  19. Autodriver algorithm

    Directory of Open Access Journals (Sweden)

    Anna Bourmistrova

    2011-02-01

    Full Text Available The autodriver algorithm is an intelligent method to eliminate the need for steering by a driver on a well-defined road. The proposed method performs best on a four-wheel-steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on making the actual vehicle center of rotation coincide with the road center of curvature by adjusting the kinematic center of rotation. The road center of curvature is assumed to be prior information for a given road, while the dynamic center of rotation is the output of the dynamic equations of motion of the vehicle, using steering angle and velocity measurements as inputs. We use the kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. With increasing forward speed, the road and tire characteristics, along with the motion dynamics of the vehicle, cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed-loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.
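
    As a highly simplified, TWS-only illustration of the kinematic idea (not the article's full 4WS algorithm with dynamic correction), a single-track (bicycle) model places the turning center at the road's center of curvature when tan(delta) = L/R; the numbers below are hypothetical.

```python
import math

def front_steer_angle(wheelbase_m, road_radius_m):
    """Steering angle that places the kinematic turning center of a single-track
    (bicycle) model at the road's center of curvature: tan(delta) = L / R."""
    return math.atan(wheelbase_m / road_radius_m)

# Illustrative numbers: 2.7 m wheelbase, 150 m road radius of curvature.
delta_rad = front_steer_angle(2.7, 150.0)
```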

  20. Simple mechanical parameters identification of induction machine using voltage sensor only

    International Nuclear Information System (INIS)

    Horen, Yoram; Strajnikov, Pavel; Kuperman, Alon

    2015-01-01

    Highlights: • A simple, low-cost algorithm for induction motor mechanical parameter estimation is proposed. • Only voltage sensing is performed; no speed sensor is required. • The method is suitable for both wound-rotor and squirrel-cage motors. - Abstract: A simple, low-cost algorithm for induction motor mechanical parameter estimation without a speed sensor is presented in this paper. Estimation is carried out by recording the stator terminal voltage during natural braking and subsequent offline curve fitting. The algorithm allows accurate reconstruction of the mechanical time constant as well as the load torque-speed dependency. Although the mathematical basis of the presented method is developed for wound-rotor motors, it is shown to be suitable for squirrel-cage motors as well. The algorithm is first tested by reconstruction of simulation model parameters and then by processing measurement results of several motors. Simulation and experimental results support the validity of the proposed algorithm
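
    The offline curve-fitting step can be illustrated as below, where a hypothetical exponential envelope of the stator voltage recorded during natural braking is fitted to extract a time constant. The model form and the synthetic data are placeholders, not the authors' exact formulation.

```python
import numpy as np
from scipy.optimize import curve_fit

def envelope_model(t, v0, tau_m):
    """Assumed exponential decay of the terminal-voltage envelope during braking."""
    return v0 * np.exp(-t / tau_m)

t = np.linspace(0.0, 2.0, 200)                                   # s, hypothetical record
v_env = 310.0 * np.exp(-t / 0.45) + np.random.normal(0.0, 2.0, t.size)

(v0_hat, tau_m_hat), _ = curve_fit(envelope_model, t, v_env, p0=(300.0, 0.5))
print(f"estimated time constant: {tau_m_hat:.3f} s")
```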

  1. A New High Frequency Injection Method Based on Duty Cycle Shifting without Maximum Voltage Magnitude Loss

    DEFF Research Database (Denmark)

    Wang, Dong; Lu, Kaiyuan; Rasmussen, Peter Omand

    2015-01-01

    The conventional high frequency signal injection method superimposes a high frequency voltage signal on the commanded stator voltage before space vector modulation. Therefore, the magnitude of the voltage available for machine torque production is limited. In this paper, a new high frequency injection method, in which the high frequency signal is generated by shifting the duty cycle between two neighboring switching periods, is proposed. This method allows injecting a high frequency signal at half of the switching frequency without the necessity to sacrifice the machine fundamental voltage amplitude. This may be utilized to develop new position estimation algorithms that do not involve the inductance in the medium to high speed range. As an application example, a developed inductance-independent position estimation algorithm using the proposed high frequency injection method is applied to drive...

  2. Validities of three multislice algorithms for quantitative low-energy transmission electron microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Ming, W.Q.; Chen, J.H., E-mail: jhchen123@hnu.edu.cn

    2013-11-15

    Three different types of multislice algorithms, namely the conventional multislice (CMS) algorithm, the propagator-corrected multislice (PCMS) algorithm and the fully-corrected multislice (FCMS) algorithm, have been evaluated in comparison with respect to the accelerating voltages in transmission electron microscopy. Detailed numerical calculations have been performed to test their validities. The results show that the three algorithms are equivalent for accelerating voltage above 100 kV. However, below 100 kV, the CMS algorithm will introduce significant errors, not only for higher-order Laue zone (HOLZ) reflections but also for zero-order Laue zone (ZOLZ) reflections. The differences between the PCMS and FCMS algorithms are negligible and mainly appear in HOLZ reflections. Nonetheless, when the accelerating voltage is further lowered to 20 kV or below, the PCMS algorithm will also yield results deviating from the FCMS results. The present study demonstrates that the propagation of the electron wave from one slice to the next slice is actually cross-correlated with the crystal potential in a complex manner, such that when the accelerating voltage is lowered to 10 kV, the accuracy of the algorithms is dependent of the scattering power of the specimen. - Highlights: • Three multislice algorithms for low-energy transmission electron microscopy are evaluated. • The propagator-corrected algorithm is a good alternative for voltages down to 20 kV. • Below 20 kV, a fully-corrected algorithm has to be employed for quantitative simulations.
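
    For orientation, one slice-to-slice step of the conventional multislice (CMS) scheme with the small-angle Fresnel propagator is sketched below; the propagator-corrected variants discussed in the abstract replace this propagator with a more exact form, which is not shown. Grid spacing and units are assumptions of the sketch.

```python
import numpy as np

def cms_step(psi, transmission, wavelength, dz, dx):
    """Advance the wavefunction by one slice: transmit, then propagate (paraxial)."""
    nx, ny = psi.shape
    kx = np.fft.fftfreq(nx, d=dx)          # spatial frequencies along x (1/length)
    ky = np.fft.fftfreq(ny, d=dx)          # same sampling assumed along y
    k2 = kx[:, None] ** 2 + ky[None, :] ** 2
    propagator = np.exp(-1j * np.pi * wavelength * dz * k2)
    return np.fft.ifft2(propagator * np.fft.fft2(transmission * psi))
```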

  3. Validities of three multislice algorithms for quantitative low-energy transmission electron microscopy

    International Nuclear Information System (INIS)

    Ming, W.Q.; Chen, J.H.

    2013-01-01

    Three different types of multislice algorithms, namely the conventional multislice (CMS) algorithm, the propagator-corrected multislice (PCMS) algorithm and the fully-corrected multislice (FCMS) algorithm, have been evaluated in comparison with respect to the accelerating voltages in transmission electron microscopy. Detailed numerical calculations have been performed to test their validities. The results show that the three algorithms are equivalent for accelerating voltage above 100 kV. However, below 100 kV, the CMS algorithm will introduce significant errors, not only for higher-order Laue zone (HOLZ) reflections but also for zero-order Laue zone (ZOLZ) reflections. The differences between the PCMS and FCMS algorithms are negligible and mainly appear in HOLZ reflections. Nonetheless, when the accelerating voltage is further lowered to 20 kV or below, the PCMS algorithm will also yield results deviating from the FCMS results. The present study demonstrates that the propagation of the electron wave from one slice to the next slice is actually cross-correlated with the crystal potential in a complex manner, such that when the accelerating voltage is lowered to 10 kV, the accuracy of the algorithms is dependent of the scattering power of the specimen. - Highlights: • Three multislice algorithms for low-energy transmission electron microscopy are evaluated. • The propagator-corrected algorithm is a good alternative for voltages down to 20 kV. • Below 20 kV, a fully-corrected algorithm has to be employed for quantitative simulations

  4. New algorithms and pulse-processing units in radioisotope instruments

    International Nuclear Information System (INIS)

    Antonjak, V.; Gonsjorowski, L.; Jastschuk, E.; Kwasnewski, T.

    1981-01-01

    Three new algorithms and the corresponding electronic circuits are described, beginning with an automatic gain stabilisation circuit for scintillation counters. The signal obtained as the difference between two pulse trains from amplitude discriminators is used for photomultiplier high-voltage control. Furthermore, a real-time digital filter for random pulse trains is presented, showing that the variance of the pulse train decreases after passing through the filter. The block diagram, principle of operation and basic features of the filter are given. Finally, a digital circuit for polynomial linearization of the scale function in radioisotope instruments is described. Again, the block diagram of the pulse train processing, the mode of operation and the programming method are given. (author)

  5. Voltage Weak DC Distribution Grids

    NARCIS (Netherlands)

    Hailu, T.G.; Mackay, L.J.; Ramirez Elizondo, L.M.; Ferreira, J.A.

    2017-01-01

    This paper describes the behavior of voltage weak DC distribution systems. These systems have relatively small system capacitance. The size of system capacitance, which stores energy, has a considerable effect on the value of fault currents, control complexity, and system reliability. A number of

  6. Nonlinear electrokinetics at large voltages

    Energy Technology Data Exchange (ETDEWEB)

    Bazant, Martin Z [Department of Chemical Engineering and Institute for Soldier Nanotechnologies, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States); Sabri Kilic, Mustafa; Ajdari, Armand [Department of Mathematics, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States); Storey, Brian D [Franklin W Olin College of Engineering, Needham, MA 02492 (United States)], E-mail: bazant@mit.edu

    2009-07-15

    The classical theory of electrokinetic phenomena assumes a dilute solution of point-like ions in chemical equilibrium with a surface whose double-layer voltage is of order the thermal voltage, k_B T/e = 25 mV. In nonlinear 'induced-charge' electrokinetic phenomena, such as ac electro-osmosis, several volts ~100 k_B T/e are applied to the double layer, and the theory breaks down and cannot explain many observed features. We argue that, under such a large voltage, counterions 'condense' near the surface, even for dilute bulk solutions. Based on simple models, we predict that the double-layer capacitance decreases and the electro-osmotic mobility saturates at large voltages, due to steric repulsion and increased viscosity of the condensed layer, respectively. The former suffices to explain observed high-frequency flow reversal in ac electro-osmosis; the latter leads to a salt concentration dependence of induced-charge flows comparable to experiments, although a complete theory is still lacking.

  7. High voltage power network construction

    CERN Document Server

    Harker, Keith

    2018-01-01

    This book examines the key requirements, considerations, complexities and constraints relevant to the task of high voltage power network construction, from design, finance, contracts and project management to installation and commissioning, with the aim of providing an overview of the holistic end to end construction task in a single volume.

  8. Voltage control of ferromagnetic resonance

    Directory of Open Access Journals (Sweden)

    Ziyao Zhou

    2016-06-01

    Full Text Available Voltage control of magnetism in multiferroics, where ferromagnetism and ferroelectricity coexist, is of great importance for achieving compact, fast and energy-efficient voltage-controllable magnetic/microwave devices. Such devices are widely used in radar, aircraft, cell phones and satellites, where volume, response time and energy consumption are critical. Researchers have realized electric field tuning of magnetic properties such as magnetization, magnetic anisotropy and permeability in various multiferroic heterostructures (bulk, thin films and nanostructures) through different magnetoelectric (ME) coupling mechanisms: strain/stress, interfacial charge, spin–electromagnetic (EM) coupling and exchange coupling, etc. In this review, we focus on voltage control of ferromagnetic resonance (FMR) in multiferroics. ME coupling-induced FMR change is critical in microwave devices, where electric field tuning of the magnetic effective anisotropy field determines the tunability of device performance. Experimentally, the FMR measurement technique is also an important method to determine precisely the small effective magnetic field change in a small amount of magnetic material, owing to its high sensitivity, and to reveal the deep science of multiferroics, especially voltage control of magnetism through novel mechanisms such as interfacial charge, spin–EM coupling and exchange coupling.

  9. High voltage MOSFET switching circuit

    Science.gov (United States)

    McEwan, Thomas E.

    1994-01-01

    The problem of source lead inductance in a MOSFET switching circuit is compensated for by adding an inductor to the gate circuit. The gate circuit inductor produces an inductive spike which counters the source lead inductive drop to produce a rectangular drive voltage waveform at the internal gate-source terminals of the MOSFET.

  10. A new parallel algorithm for simulation of spin glasses on scales of space-time periods of external fields with consideration of relaxation effects

    International Nuclear Information System (INIS)

    Gevorkyan, A.S.; Abajyan, H.G.

    2011-01-01

    We have investigated the statistical properties of an ensemble of disordered 1D spatial spin chains (SSCs) of finite length, placed in an external field, with consideration of relaxation effects. A short-range-interaction complex-classical Hamiltonian is used for solving this problem. A system of recurrent equations is obtained on the nodes of the spin-chain lattice. An efficient mathematical algorithm is developed on the basis of these equations, with consideration of the advanced Sylvester conditions, which allows a huge number of stable spin chains to be constructed step by step in parallel. The distribution functions of different parameters of the spin-glass system are constructed from the first principles of complex classical mechanics by analyzing the calculation results for the 1D SSC ensemble. It is shown that the behavior of the parameter distributions differs considerably depending on the external field. The ensemble energies and the spin-spin interaction constants change smoothly with the external field in the limit of statistical equilibrium, while other quantities, such as the mean polarization of the ensemble and its ordering parameters, are frustrated. We have also studied some critical properties of the ensemble, such as catastrophes in the Clausius-Mossotti equation, depending on the value of the external field. We show that the generalized complex-classical approach excludes these catastrophes, allowing continuous parallel computing over the whole range of external field values, including critical points. A new representation of the partition function based on these investigations is suggested. As opposed to the usual definition, this function is complex and its derivatives are everywhere defined, including at critical points

  11. Intelligent voltage control in a DC micro-grid containing PV generation and energy storage

    OpenAIRE

    Rouzbehi, Kumars; Miranian, Arash; Candela García, José Ignacio; Luna Alloza, Álvaro; Rodríguez Cortés, Pedro

    2014-01-01

    This paper proposes an intelligent control scheme for DC voltage regulation in a DC micro-grid integrating photovoltaic (PV) generation, energy storage and electric loads. The maximum power point of the PV panel is tracked using the incremental conductance (IC) maximum power point tracking (MPPT) algorithm, while a high-performance local linear controller (LLC) is developed for DC voltage control in the micro-grid. The LLC, as a data-driven control strategy, controls the bidirectional c...
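
    The incremental-conductance decision rule referred to above can be sketched as follows: at the MPP, dI/dV = -I/V, and the sign of the mismatch tells the controller whether to raise or lower the PV voltage reference. Step size, guards and signal names are placeholders of this sketch, not the paper's implementation.

```python
def inc_cond_step(v, i, v_prev, i_prev, v_ref, dv_step=0.5):
    """Return an updated PV voltage reference (volts/amps assumed)."""
    if v <= 0.0:                      # guard against start-up division by zero
        return v_ref
    dv, di = v - v_prev, i - i_prev
    if dv == 0.0:
        if di > 0.0:
            v_ref += dv_step
        elif di < 0.0:
            v_ref -= dv_step
    else:
        inc_cond = di / dv
        if inc_cond > -i / v:         # operating left of the MPP: raise the voltage
            v_ref += dv_step
        elif inc_cond < -i / v:       # operating right of the MPP: lower the voltage
            v_ref -= dv_step
    return v_ref                      # unchanged when dI/dV == -I/V (at the MPP)
```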

  12. Locational Pricing to Mitigate Voltage Problems Caused by High PV Penetration

    OpenAIRE

    Sam Weckx; Reinhilde D'hulst; Johan Driesen

    2015-01-01

    In this paper, a locational marginal pricing algorithm is proposed to control the voltage in unbalanced distribution grids. The increasing amount of photovoltaic (PV) generation installed in the grid may cause the voltage to rise to unacceptable levels during periods of low consumption. With locational prices, the distribution system operator can steer the reactive power consumption and active power curtailment of PV panels to guarantee a safe network operation. Flexible loads also respond to...

  13. Comparative study of 0° X-cut and Y + 36°-cut lithium niobate high-voltage sensing

    Science.gov (United States)

    Patel, N.; Branch, D. W.; Schamiloglu, E.; Cular, S.

    2015-08-01

    A comparison study between Y + 36° and 0° X-cut lithium niobate (LiNbO3) was performed to evaluate the influence of crystal cut on the acoustic propagation to realize a piezoelectric high-voltage sensor. The acoustic time-of-flight for each crystal cut was measured when applying direct current (DC), alternating current (AC), and pulsed voltages. Results show that the voltage-induced shift in the acoustic wave propagation time scaled quadratically with voltage for DC and AC voltages applied to X-cut crystals. For the Y + 36° crystal, the voltage-induced shift scales linearly with DC voltages and quadratically with AC voltages. When applying 5 μs voltage pulses to both crystals, the voltage-induced shift scaled linearly with voltage. For the Y + 36° cut, the voltage-induced shift from applying DC voltages ranged from 10 to 54 ps and 35 to 778 ps for AC voltages at 640 V over the frequency range of 100 Hz-100 kHz. Using the same conditions as the Y + 36° cut, the 0° X-cut crystal sensed a shift of 10-273 ps for DC voltages and 189-813 ps for AC voltage application. For 5 μs voltage pulses, the 0° X-cut crystal sensed a voltage induced shift of 0.250-2 ns and the Y + 36°-cut crystal sensed a time shift of 0.115-1.6 ns. This suggests a frequency sensitive response to voltage where the influence of the crystal cut was not a significant contributor under DC, AC, or pulsed voltage conditions. The measured DC data were compared to a 1-D impedance matrix model where the predicted incremental length changed as a function of voltage. When the voltage source error was eliminated through physical modeling from the uncertainty budget, the combined uncertainty of the sensor (within a 95% confidence interval) decreased to 0.0033% using a Y + 36°-cut crystal and 0.0032% using an X-cut crystal for all the voltage conditions used in this experiment.

  14. Comparative study of 0° X-cut and Y + 36°-cut lithium niobate high-voltage sensing

    International Nuclear Information System (INIS)

    Patel, N.; Branch, D. W.; Cular, S.; Schamiloglu, E.

    2015-01-01

    A comparison study between Y + 36° and 0° X-cut lithium niobate (LiNbO 3 ) was performed to evaluate the influence of crystal cut on the acoustic propagation to realize a piezoelectric high-voltage sensor. The acoustic time-of-flight for each crystal cut was measured when applying direct current (DC), alternating current (AC), and pulsed voltages. Results show that the voltage-induced shift in the acoustic wave propagation time scaled quadratically with voltage for DC and AC voltages applied to X-cut crystals. For the Y + 36° crystal, the voltage-induced shift scales linearly with DC voltages and quadratically with AC voltages. When applying 5 μs voltage pulses to both crystals, the voltage-induced shift scaled linearly with voltage. For the Y + 36° cut, the voltage-induced shift from applying DC voltages ranged from 10 to 54 ps and 35 to 778 ps for AC voltages at 640 V over the frequency range of 100 Hz–100 kHz. Using the same conditions as the Y + 36° cut, the 0° X-cut crystal sensed a shift of 10–273 ps for DC voltages and 189–813 ps for AC voltage application. For 5 μs voltage pulses, the 0° X-cut crystal sensed a voltage induced shift of 0.250–2 ns and the Y + 36°-cut crystal sensed a time shift of 0.115–1.6 ns. This suggests a frequency sensitive response to voltage where the influence of the crystal cut was not a significant contributor under DC, AC, or pulsed voltage conditions. The measured DC data were compared to a 1-D impedance matrix model where the predicted incremental length changed as a function of voltage. When the voltage source error was eliminated through physical modeling from the uncertainty budget, the combined uncertainty of the sensor (within a 95% confidence interval) decreased to 0.0033% using a Y + 36°-cut crystal and 0.0032% using an X-cut crystal for all the voltage conditions used in this experiment

  15. Empirical Verification of Fault Models for FPGAs Operating in the Subcritical Voltage Region

    DEFF Research Database (Denmark)

    Birklykke, Alex Aaen; Koch, Peter; Prasad, Ramjee

    2013-01-01

    We present a rigorous empirical study of the bit-level error behavior of field programmable gate arrays operating in the subcritical voltage region. This region is of significant interest, as voltage scaling under normal circumstances is halted by the first occurrence of errors. However, accurate...

  16. Algorithmic Self

    DEFF Research Database (Denmark)

    Markham, Annette

    This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also contributes an innovative method for blending actor network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows....

  17. Symmetric low-voltage powering system for relativistic electronic devices

    International Nuclear Information System (INIS)

    Agafonov, A.V.; Lebedev, A.N.; Krastelev, E.G.

    2005-01-01

    A special driver for double-sided powering of relativistic magnetrons and several methods of localized electron flow forming in the interaction region of relativistic magnetrons are proposed and discussed. Two experimental installations are presented. One of them is designed for laboratory research and demonstration experiments at rather low voltage. The other is a prototype of a full-scale installation for experimental research, at relativistic voltage levels, on microwave generation in a new integrated system consisting of a relativistic magnetron and a symmetric induction driver.

  18. Copper wire theft and high voltage electrical burns

    Science.gov (United States)

    Francis, Eamon C; Shelley, Odhran P

    2014-01-01

    High voltage electrical burns are uncommon. However, in the midst of the economic recession we are noticing an increasing number of these injuries. Copper wire is a valuable commodity whose physical properties as an excellent conductor of electricity make it both ubiquitous in society and prized on the black market. We present two consecutive cases, referred to the National Burns Unit, of patients who sustained life-threatening injuries during the alleged theft of high-voltage copper wire, and note the omnipresence of this problem on an international scale. PMID:25356371

  19. Modeling generalized interline power-flow controller (GIPFC) using 48-pulse voltage source converters

    Directory of Open Access Journals (Sweden)

    Amir Ghorbani

    2018-05-01

    Full Text Available Generalized interline power-flow controller (GIPFC) is one of the voltage source converter (VSC)-based flexible AC transmission system (FACTS) controllers that can independently regulate the power flow over each transmission line of a multiline system. This paper presents the modeling and performance analysis of a GIPFC based on 48-pulse voltage source converters. The paper deals with a cascaded multilevel converter model, namely a 48-pulse (three-level) voltage source converter. The voltage source converter described in this paper is a harmonic-neutralized, 48-pulse GTO converter. The GIPFC controller is based on d-q orthogonal coordinates. The algorithm is verified using simulations in the MATLAB/Simulink environment. Comparisons between the unified power flow controller (UPFC) and the GIPFC are also included. Keywords: Generalized interline power-flow controller (GIPFC), Voltage source converter (VSC), 48-pulse GTO converter

  20. Optimized Controller Design for a 12-Pulse Voltage Source Converter Based HVDC System

    Science.gov (United States)

    Agarwal, Ruchi; Singh, Sanjeev

    2017-12-01

    The paper proposes an optimized controller design scheme for power quality improvement in a 12-pulse voltage source converter based high-voltage direct current system. The proposed scheme is a hybrid combination of the golden section search and successive linear search methods. The paper aims at reducing the number of current sensors and optimizing the controller. The voltage and current controller parameters are selected for optimization because of their impact on power quality. The proposed algorithm optimizes an objective function composed of current harmonic distortion, power factor, and DC voltage ripple. The detailed design and modeling of the complete system are discussed and its simulation is carried out in the MATLAB-Simulink environment. The obtained results demonstrate the effectiveness of the proposed scheme under different transient conditions such as load perturbation, non-linear load, voltage sag, and a tapped load fault under one-phase-open condition at both points of common coupling.
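
    The record names golden section search as one half of the hybrid optimization. A minimal, generic sketch of golden-section minimization over one parameter is shown below; the objective here is a stand-in, not the paper's composite cost of harmonic distortion, power factor and DC ripple.

```python
import math

def golden_section_search(f, a, b, tol=1e-6):
    """Minimize a unimodal function f on [a, b] by golden-section search."""
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0  # ~0.618
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while (b - a) > tol:
        if f(c) < f(d):
            # Minimum lies in [a, d]: shrink from the right, reuse old c as new d.
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:
            # Minimum lies in [c, b]: shrink from the left, reuse old d as new c.
            a, c = c, d
            d = a + inv_phi * (b - a)
    return 0.5 * (a + b)

# Stand-in objective: imagine a THD-like cost as a function of a controller gain.
cost = lambda kp: (kp - 2.3) ** 2 + 0.5
print(golden_section_search(cost, 0.0, 10.0))
```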

  1. Voltage profile program for the Kennedy Space Center electric power distribution system

    Science.gov (United States)

    1976-01-01

    The Kennedy Space Center voltage profile program computes voltages at all buses above 1 kV in the network under various load conditions. The computation is based upon power flow principles and utilizes a Newton-Raphson iterative load flow algorithm. Power flow conditions throughout the network are also provided. The computer program is designed for both steady-state and transient operation. In the steady-state mode, automatic tap changing of primary distribution transformers is incorporated. Under transient conditions, such as motor starts, it is assumed that tap changing is not accomplished, so the transformer secondary voltage is allowed to sag.
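
    As a rough illustration of the Newton-Raphson load flow iteration mentioned above, the sketch below solves a two-bus example (slack bus plus one PQ load bus) with a numerical Jacobian; the line impedance and load values are illustrative and unrelated to the KSC network.

```python
import numpy as np

# Minimal 2-bus example: bus 1 is the slack (1.0 pu, 0 rad), bus 2 is a PQ
# load bus. Line impedance and loads are illustrative values only.
z_line = 0.02 + 0.10j                       # series impedance (pu)
y = 1.0 / z_line
Ybus = np.array([[y, -y], [-y, y]])         # 2x2 bus admittance matrix
G, B = Ybus.real, Ybus.imag
P2_spec, Q2_spec = -0.8, -0.4               # load drawn at bus 2 (pu)

def mismatch(x):
    """Power mismatch [dP2, dQ2] for unknowns x = [angle2, |V2|]."""
    th2, v2 = x
    V = np.array([1.0, v2])
    th = np.array([0.0, th2])
    dth = th[1] - th                         # theta_2 - theta_k
    P2 = V[1] * np.sum(V * (G[1] * np.cos(dth) + B[1] * np.sin(dth)))
    Q2 = V[1] * np.sum(V * (G[1] * np.sin(dth) - B[1] * np.cos(dth)))
    return np.array([P2_spec - P2, Q2_spec - Q2])

x = np.array([0.0, 1.0])                     # flat start: angle, magnitude
for _ in range(20):
    f0 = mismatch(x)
    if np.max(np.abs(f0)) < 1e-8:
        break
    # Numerical Jacobian of the mismatch (finite differences).
    J = np.zeros((2, 2))
    eps = 1e-6
    for j in range(2):
        dx = np.zeros(2)
        dx[j] = eps
        J[:, j] = (mismatch(x + dx) - f0) / eps
    x = x - np.linalg.solve(J, f0)           # Newton-Raphson update

print("bus 2 angle (rad), magnitude (pu):", x)
```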

  2. SIFT based algorithm for point feature tracking

    Directory of Open Access Journals (Sweden)

    Adrian BURLACU

    2007-12-01

    Full Text Available In this paper a tracking algorithm for SIFT features in image sequences is developed. For each point feature extracted with the SIFT algorithm, a descriptor is computed using information from its neighborhood. Point features are then tracked throughout the image sequence by minimizing the distance between descriptors. Experimental results, obtained from image sequences that capture the scaling of objects of different geometrical types, reveal the performance of the tracking algorithm.
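
    A minimal sketch of the descriptor-distance matching step described above is given below, using plain NumPy on hypothetical SIFT-like descriptor arrays and Lowe's ratio test; it does not reproduce the paper's full tracking pipeline.

```python
import numpy as np

def match_descriptors(desc_prev, desc_curr, ratio=0.8):
    """Associate features by minimum Euclidean descriptor distance.

    desc_prev, desc_curr: (N, 128) and (M, 128) arrays of SIFT-like
    descriptors. Returns (i, j) index pairs that pass the ratio test
    (best distance sufficiently smaller than the second best).
    """
    matches = []
    for i, d in enumerate(desc_prev):
        dists = np.linalg.norm(desc_curr - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

# Hypothetical descriptors for two consecutive frames.
rng = np.random.default_rng(0)
f1 = rng.random((40, 128)).astype(np.float32)
f2 = f1 + 0.01 * rng.random((40, 128)).astype(np.float32)  # slightly perturbed
print(len(match_descriptors(f1, f2)), "tracked features")
```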

  3. A new reconfiguration scheme for voltage stability enhancement of radial distribution systems

    International Nuclear Information System (INIS)

    Arun, M.; Aravindhababu, P.

    2009-01-01

    Network reconfiguration is an operational problem that entails altering the topological structure of the distribution feeders by rearranging the status of switches so as to obtain a configuration that minimises system losses. This paper presents a new reconfiguration algorithm that enhances voltage stability and improves the voltage profile, besides minimising losses, without incurring any additional cost for the installation of capacitors, tap-changing transformers and related switching equipment in the distribution system. Test results on a 69-node distribution system reveal the superiority of this algorithm.

  4. New MMC capacitor voltage balancing using sorting-less strategy in nearest level control

    DEFF Research Database (Denmark)

    Ricco, Mattia; Máthé, Lászlo; Teodorescu, Remus

    2016-01-01

    This paper proposes a new strategy for balancing the Capacitor Voltages (CVs) for Modular Multilevel Converters (MMCs). The balancing is one of the main challenges in MMC applications and it is usually solved by adopting a global arm control approach. For performing such an approach, a sorted list...... of the SubModules (SMs) according to their capacitor voltages is required. A common way to accomplish this task is to implement a sorting algorithm in the same controller used for the modulation technique. However, the execution time and the computational efforts of these kinds of algorithms increase very...
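
    For context, the conventional sorting-based balancing that the record says is usually adopted (and that the proposed sorting-less strategy avoids) can be sketched as below, under the usual assumption that a positive arm current charges the inserted capacitors.

```python
def select_submodules(cap_voltages, n_insert, arm_current):
    """Conventional sorting-based capacitor-voltage balancing for one MMC arm.

    cap_voltages : list of measured SM capacitor voltages
    n_insert     : number of SMs to insert (from the nearest-level modulator)
    arm_current  : signed arm current; positive is assumed to charge
                   inserted capacitors, negative to discharge them

    Returns the indices of the SMs to insert this control period.
    """
    order = sorted(range(len(cap_voltages)), key=lambda i: cap_voltages[i])
    if arm_current >= 0:
        # Charging current: insert the SMs with the lowest voltages.
        return order[:n_insert]
    # Discharging current: insert the SMs with the highest voltages.
    return order[-n_insert:] if n_insert > 0 else []

# Example: 8 SMs per arm, 3 to be inserted; voltages are illustrative.
voltages = [1.58, 1.62, 1.55, 1.60, 1.66, 1.57, 1.63, 1.59]
print(select_submodules(voltages, 3, arm_current=+120.0))
```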

  5. Planning of distributed generation in distribution network based on improved particle swarm optimization algorithm

    Science.gov (United States)

    Li, Jinze; Qu, Zhi; He, Xiaoyang; Jin, Xiaoming; Li, Tie; Wang, Mingkai; Han, Qiu; Gao, Ziji; Jiang, Feng

    2018-02-01

    Large-scale access of distributed generation can relieve current environmental pressure while at the same time increasing the complexity and uncertainty of the overall distribution system. Rational planning of distributed generation can effectively improve the system voltage level. To this end, the specific impact of typical distributed generation on distribution network power quality was analyzed, and an improved particle swarm optimization algorithm (IPSO) with modified learning factors and inertia weight was proposed to solve the distributed generation planning problem for the distribution network while improving the local and global search performance of the algorithm. Results show that the proposed method can effectively reduce the system network loss and improve the economic performance of system operation with distributed generation.
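
    A generic sketch of the kind of improved PSO the record refers to, with linearly time-varying inertia weight and learning factors, is shown below; the schedules, bounds and the sphere-function objective are illustrative stand-ins for the paper's siting-and-sizing cost.

```python
import numpy as np

def ipso_minimize(f, dim, n_particles=30, iters=200, bounds=(-10.0, 10.0)):
    """Particle swarm with linearly varying inertia weight and learning factors."""
    lo, hi = bounds
    rng = np.random.default_rng(1)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()

    for t in range(iters):
        frac = t / (iters - 1)
        w = 0.9 - 0.5 * frac            # inertia weight 0.9 -> 0.4
        c1 = 2.5 - 1.5 * frac           # cognitive factor 2.5 -> 1.0
        c2 = 1.0 + 1.5 * frac           # social factor    1.0 -> 2.5
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# Stand-in objective (sphere function) in place of the network-loss cost.
print(ipso_minimize(lambda p: float(np.sum(p ** 2)), dim=5))
```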

  6. Analytical solution of the PNP equations at AC applied voltage

    International Nuclear Information System (INIS)

    Golovnev, Anatoly; Trimper, Steffen

    2012-01-01

    A symmetric binary polymer electrolyte subjected to an AC voltage is considered. The analytical solution of the Poisson–Nernst–Planck (PNP) equations is found and analyzed for small applied voltages. Three distinct time regimes offering different behavior can be discriminated. The experimentally realized stationary behavior is discussed in detail. An expression for the external current is derived. Based on the theoretical result, a simple method is suggested for measuring the ion mobility and the ion concentration separately. -- Highlights: ► Analytical solution of the Poisson–Nernst–Planck equations. ► Binary polymer electrolyte subjected to an external AC voltage. ► Three well separated time scales exhibiting different behavior. ► The experimentally realized stationary behavior is discussed in detail. ► A method is proposed for measuring the mobility and the concentration separately.

  7. Characterization of chaotic electroconvection near flat electrodes under oscillatory voltages

    Science.gov (United States)

    Kim, Jeonglae; Davidson, Scott; Mani, Ali

    2017-11-01

    The onset of hydrodynamic instability and chaotic electroconvection in aqueous systems is studied by directly solving the two-dimensional coupled Poisson-Nernst-Planck and Navier-Stokes equations. An aqueous binary electrolyte is bounded by two planar electrodes where a time-harmonic voltage is applied at a constant oscillation frequency. The governing equations are solved using a fully conservative second-order-accurate finite volume discretization and second-order implicit Euler time advancement. At a sufficiently high amplitude of applied voltage, the system exhibits chaotic behavior involving strong hydrodynamic mixing and enhanced electroconvection. The system responses are characterized as a function of oscillation frequency, voltage magnitude, and the ratio of the diffusivities of the two ion species. Our results indicate that electroconvection is most enhanced for frequencies on the order of the inverse system RC time scale. We will discuss the dependence of this optimal frequency on the asymmetry of the diffusion coefficients of the ionic species. Supported by Stanford's Precourt Institute.

  8. Inductive voltage adder (IVA) for submillimeter radius electron beam

    International Nuclear Information System (INIS)

    Mazarakis, M.G.; Poukey, J.W.; Maenchen, J.E.

    1996-01-01

    The authors have already demonstrated the utility of inductive voltage adder accelerators for production of small-size electron beams. In this approach, the inductive voltage adder drives a magnetically immersed foilless diode to produce high-energy (10--20 MeV), high-brightness pencil electron beams. This concept was first demonstrated with the successful experiments which converted the linear induction accelerator RADLAC II into an IVA fitted with a small 1-cm radius cathode magnetically immersed foilless diode (RADLAC II/SMILE). They present here first validations of extending this idea to mm-scale electron beams using the SABRE and HERMES-III inductive voltage adders as test beds. The SABRE experiments are already completed and have produced 30-kA, 9-MeV electron beams with envelope diameter of 1.5-mm FWHM. The HERMES-III experiments are currently underway

  9. High-voltage test stand at Livermore

    International Nuclear Information System (INIS)

    Smith, M.E.

    1977-01-01

    This paper describes the present design and future capability of the high-voltage test stand for neutral-beam sources at Lawrence Livermore Laboratory. The stand's immediate use will be for testing the full-scale sources (120 kV, 65 A) for the Tokamak Fusion Test Reactor. It will then be used to test parts of the sustaining source system (80 kV, 85 A) being designed for the Magnetic Fusion Test Facility. Following that will be an intensive effort to develop beams of up to 200 kV at 20 A by accelerating negative ions. The design of the test stand features a 5-MVA power supply feeding a vacuum tetrode that is used as a switch and regulator. The 500-kW arc supply and the 100-kW filament supply for the neutral-beam source are battery powered, thus eliminating one or two costly isolation transformers

  10. Optimally stopped variational quantum algorithms

    Science.gov (United States)

    Vinci, Walter; Shabani, Alireza

    2018-04-01

    Quantum processors promise a paradigm shift in high-performance computing which needs to be assessed by accurate benchmarking measures. In this article, we introduce a benchmark for the variational quantum algorithm (VQA), recently proposed as a heuristic algorithm for small-scale quantum processors. In VQA, a classical optimization algorithm guides the processor's quantum dynamics to yield the best solution for a given problem. A complete assessment of the scalability and competitiveness of VQA should take into account both the quality and the time of dynamics optimization. The method of optimal stopping, employed here, provides such an assessment by explicitly including time as a cost factor. Here, we showcase this measure for benchmarking VQA as a solver for some quadratic unconstrained binary optimization. Moreover, we show that a better choice for the cost function of the classical routine can significantly improve the performance of the VQA algorithm and even improve its scaling properties.

  11. A Multi Time Scale Wind Power Forecasting Model of a Chaotic Echo State Network Based on a Hybrid Algorithm of Particle Swarm Optimization and Tabu Search

    Directory of Open Access Journals (Sweden)

    Xiaomin Xu

    2015-11-01

    Full Text Available The uncertainty and irregularity of wind power generation are caused by the intermittency and randomness of the wind resource. Such volatility brings severe challenges to the power grid. Ultrashort-term and short-term wind power forecasting with high prediction accuracy is therefore of great significance for reducing wind power curtailment, optimizing the conventional generation plan, adjusting maintenance schedules and developing real-time monitoring systems. The echo state network (ESN) is a recurrent neural network composed of input, hidden and output layers. It approximates nonlinear systems well and achieves good results in nonlinear chaotic time-series forecasting. Moreover, ESN training is simpler and less computationally demanding than traditional neural network training and yields more accurate results. To address the disadvantages of the standard ESN, this paper makes several improvements: by combining the complementary advantages of particle swarm optimization and tabu search, the generalization ability of the ESN is improved. To verify the validity and applicability of the method, case studies of multi-time-scale forecasting of wind power output are carried out, in which the chaotic time series of actual wind power generation data from a certain region is reconstructed to predict wind power generation; the influence of seasonal factors on wind power is also taken into consideration. Compared with the classical ESN and the conventional back-propagation (BP) neural network, the results verify the superiority of the proposed method.
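
    A minimal one-step-ahead echo state network, of the kind the record builds on, is sketched below on a synthetic series; reservoir size, spectral radius and the ridge penalty are illustrative, and the chaotic reconstruction and PSO/tabu tuning of the paper are not included.

```python
import numpy as np

def esn_forecast(series, n_reservoir=200, spectral_radius=0.9,
                 leak=1.0, ridge=1e-6, washout=50):
    """Minimal echo state network trained to predict series[t+1] from series[t]."""
    rng = np.random.default_rng(42)
    w_in = rng.uniform(-0.5, 0.5, (n_reservoir, 1))
    w = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
    w *= spectral_radius / max(abs(np.linalg.eigvals(w)))  # rescale spectral radius

    u = np.asarray(series, dtype=float).reshape(-1, 1)
    states = np.zeros((len(u), n_reservoir))
    x = np.zeros(n_reservoir)
    for t in range(len(u) - 1):
        # Leaky reservoir update driven by the current input sample.
        x = (1 - leak) * x + leak * np.tanh(w_in[:, 0] * u[t, 0] + w @ x)
        states[t] = x

    # Ridge-regression readout: states[t] -> series[t+1], discarding the washout.
    X = states[washout:len(u) - 1]
    y = u[washout + 1:, 0]
    w_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_reservoir), X.T @ y)
    return X @ w_out, y  # in-sample one-step predictions and targets

# Synthetic stand-in for a wind-power time series.
t = np.arange(1000)
series = np.sin(0.05 * t) + 0.1 * np.random.default_rng(0).standard_normal(1000)
pred, target = esn_forecast(series)
print("RMSE:", float(np.sqrt(np.mean((pred - target) ** 2))))
```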

  12. Analyzing randomly occurring voltage breakdowns

    International Nuclear Information System (INIS)

    Wiltshire, C.W.

    1977-01-01

    During acceptance testing of high-vacuum neutron tubes, 40% of the tubes failed after experiencing high-voltage breakdowns during the aging process. Use of a digitizer in place of an oscilloscope revealed two types of breakdowns, only one of which affected acceptance testing. This information allowed redesign of the aging sequence to prevent tube damage and improve yield and quality of the final product

  13. Advances in high voltage engineering

    CERN Document Server

    Haddad, A

    2005-01-01

    This book addresses the very latest research and development issues in high voltage technology and is intended as a reference source for researchers and students in the field, specifically covering developments throughout the past decade. This unique blend of expert authors and comprehensive subject coverage means that this book is ideally suited as a reference source for engineers and academics in the field for years to come.

  14. High-voltage CMOS detectors

    International Nuclear Information System (INIS)

    Ehrler, F.; Blanco, R.; Leys, R.; Perić, I.

    2016-01-01

    High-voltage CMOS (HVCMOS) pixel sensors are depleted active pixel sensors implemented in standard commercial CMOS processes. The sensor element is the n-well/p-substrate diode. The sensor electronics are entirely placed inside the n-well which is at the same time used as the charge collection electrode. High voltage is used to deplete the part of the substrate around the n-well. HVCMOS sensors allow implementation of complex in-pixel electronics. This, together with fast signal collection, allows a good time resolution, which is required for particle tracking in high energy physics. HVCMOS sensors will be used in Mu3e experiment at PSI and are considered as an option for both ATLAS and CLIC (CERN). Radiation tolerance and time walk compensation have been tested and results are presented. - Highlights: • High-voltage CMOS sensors will be used in Mu3e experiment at PSI (Switzerland). • HVCMOS sensors are considered as an option for ATLAS (LHC/CERN) and CLIC (CERN). • Efficiency of more than 95% (99%) has been measured with (un-)irradiated chips. • The time resolution measured in the beam tests is nearly 100 ns. • We plan to improve time resolution and efficiency by using high-resistive substrate.

  15. Low voltage electron beam accelerators

    International Nuclear Information System (INIS)

    Ochi, Masafumi

    2003-01-01

    Widely used electron accelerators in industries are the electron beams with acceleration voltage at 300 kV or less. The typical examples are shown on manufactures in Japan, equipment configuration, operation, determination of process parameters, and basic maintenance requirement of the electron beam processors. New electron beam processors with acceleration voltage around 100 kV were introduced maintaining the relatively high dose speed capability of around 10,000 kGy x mpm at production by ESI (Energy Science Inc. USA, Iwasaki Electric Group). The application field like printing and coating for packaging requires treating thickness of 30 micron or less. It does not require high voltage over 110 kV. Also recently developed is a miniature bulb type electron beam tube with energy less than 60 kV. The new application area for this new electron beam tube is being searched. The drive force of this technology to spread in the industries would be further development of new application, process and market as well as the price reduction of the equipment, upon which further acknowledgement and acceptance of the technology to societies and industries would entirely depend. (Y. Tanaka)

  16. High-voltage CMOS detectors

    Energy Technology Data Exchange (ETDEWEB)

    Ehrler, F., E-mail: felix.ehrler@student.kit.edu; Blanco, R.; Leys, R.; Perić, I.

    2016-07-11

    High-voltage CMOS (HVCMOS) pixel sensors are depleted active pixel sensors implemented in standard commercial CMOS processes. The sensor element is the n-well/p-substrate diode. The sensor electronics are entirely placed inside the n-well which is at the same time used as the charge collection electrode. High voltage is used to deplete the part of the substrate around the n-well. HVCMOS sensors allow implementation of complex in-pixel electronics. This, together with fast signal collection, allows a good time resolution, which is required for particle tracking in high energy physics. HVCMOS sensors will be used in Mu3e experiment at PSI and are considered as an option for both ATLAS and CLIC (CERN). Radiation tolerance and time walk compensation have been tested and results are presented. - Highlights: • High-voltage CMOS sensors will be used in Mu3e experiment at PSI (Switzerland). • HVCMOS sensors are considered as an option for ATLAS (LHC/CERN) and CLIC (CERN). • Efficiency of more than 95% (99%) has been measured with (un-)irradiated chips. • The time resolution measured in the beam tests is nearly 100 ns. • We plan to improve time resolution and efficiency by using high-resistive substrate.

  17. Low voltage electron beam accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Ochi, Masafumi [Iwasaki Electric Co., Ltd., Tokyo (Japan)

    2003-02-01

    Widely used electron accelerators in industries are the electron beams with acceleration voltage at 300 kV or less. The typical examples are shown on manufactures in Japan, equipment configuration, operation, determination of process parameters, and basic maintenance requirement of the electron beam processors. New electron beam processors with acceleration voltage around 100 kV were introduced maintaining the relatively high dose speed capability of around 10,000 kGy x mpm at production by ESI (Energy Science Inc. USA, Iwasaki Electric Group). The application field like printing and coating for packaging requires treating thickness of 30 micron or less. It does not require high voltage over 110 kV. Also recently developed is a miniature bulb type electron beam tube with energy less than 60 kV. The new application area for this new electron beam tube is being searched. The drive force of this technology to spread in the industries would be further development of new application, process and market as well as the price reduction of the equipment, upon which further acknowledgement and acceptance of the technology to societies and industries would entirely depend. (Y. Tanaka)

  18. Light-voltage conversion apparatus

    Energy Technology Data Exchange (ETDEWEB)

    Fujioka, Yoshiki

    1987-09-19

    In a conventional light-to-voltage conversion unit, when an input signal is applied, the output signal to the control circuit has a fast rise time but a slow fall time. To improve this, a short-circuiting transistor is placed across the diode and is forced ON when the output signal to the control circuit drops below a constant voltage, short-circuiting the output terminals. This, however, has the drawback of high power consumption in the transistor. In this invention, a light-emitting element that turns ON at the leading edge and a light-emitting element that turns ON at the trailing edge are connected, with a light-receiving element placed in front of each; when an input signal is applied, the load is driven only by the ON signal of each light-emitting element, eliminating the delay at the trailing edge. This provides fast-response light-to-voltage conversion without unnecessary power consumption. (5 figs)

  19. Project resumes: biological effects from electric fields associated with high-voltage transmission lines

    Energy Technology Data Exchange (ETDEWEB)

    None

    1980-01-01

    Abstracts of research projects are presented in the following areas: measurements and special facilities; cellular and subcellular studies; physiology; behavior; environmental effects; modeling, scaling and dosimetry; and high voltage direct current. (ACR)

  20. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  1. Algorithm 865

    DEFF Research Database (Denmark)

    Gustavson, Fred G.; Reid, John K.; Wasniewski, Jerzy

    2007-01-01

    We present subroutines for the Cholesky factorization of a positive-definite symmetric matrix and for solving corresponding sets of linear equations. They exploit cache memory by using the block hybrid format proposed by the authors in a companion article. The matrix is packed into n(n + 1)/2 real...... variables, and the speed is usually better than that of the LAPACK algorithm that uses full storage (n² variables). Included are subroutines for rearranging a matrix whose upper or lower-triangular part is packed by columns to this format and for the inverse rearrangement. Also included is a kernel...
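
    For orientation, the sketch below shows plain column-packed lower-triangular storage in n(n + 1)/2 entries and the index arithmetic it implies; this is the standard packed format, not the block hybrid format proposed in the article.

```python
import numpy as np

def packed_index(i, j, n):
    """Index into a lower-triangular matrix packed column by column.

    Column j (0-based) stores rows j..n-1, so the elements of all previous
    columns occupy sum_{k<j} (n - k) positions. Only i >= j is valid.
    """
    assert i >= j, "only the lower triangle is stored"
    return j * n - j * (j - 1) // 2 + (i - j)

# Pack a symmetric matrix's lower triangle into n(n+1)/2 entries and read back.
n = 4
a = np.arange(1, n * n + 1, dtype=float).reshape(n, n)
a = (a + a.T) / 2                                    # make it symmetric
packed = np.array([a[i, j] for j in range(n) for i in range(j, n)])
assert len(packed) == n * (n + 1) // 2
print(packed[packed_index(3, 1, n)], a[3, 1])        # same element
```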

  2. [Development of residual voltage testing equipment].

    Science.gov (United States)

    Zeng, Xiaohui; Wu, Mingjun; Cao, Li; He, Jinyi; Deng, Zhensheng

    2014-07-01

    For existing measurement methods of residual voltage, which cannot switch the power off exactly at the peak voltage and simultaneously display waveforms, a new residual voltage detection method is put forward in this paper. First, the zero crossing of the power supply is detected with a zero-crossing detection circuit and is input to a single-chip microcomputer in the form of a pulse signal. Second, after a delay from the zero crossing to the peak voltage, the single-chip microcomputer sends a control signal to switch off the relay. Finally, the waveform of the residual voltage is displayed on a host computer or oscilloscope. The experimental results show that the device designed in this paper can switch the power off at the peak voltage and is able to accurately display the voltage waveform immediately after power-off, and the standard deviation of the residual voltage is less than 0.2 V at exactly one second after power-off and later.

  3. Comparison between iterative wavefront control algorithm and direct gradient wavefront control algorithm for adaptive optics system

    International Nuclear Information System (INIS)

    Cheng Sheng-Yi; Liu Wen-Jin; Chen Shan-Qiu; Dong Li-Zhi; Yang Ping; Xu Bing

    2015-01-01

    Among all kinds of wavefront control algorithms in adaptive optics systems, the direct gradient wavefront control algorithm is the most widespread and common method. This control algorithm obtains the actuator voltages directly from wavefront slopes through pre-measuring the relational matrix between deformable mirror actuators and the Hartmann wavefront sensor, with excellent real-time characteristics and stability. However, as the number of sub-apertures in the wavefront sensor and of deformable mirror actuators in adaptive optics systems increases, the matrix operation in the direct gradient algorithm takes too much time, which becomes a major factor limiting the control performance of adaptive optics systems. In this paper we apply an iterative wavefront control algorithm to high-resolution adaptive optics systems, in which the voltages of each actuator are obtained through iterative computation, offering great advantages in computation and storage. For an AO system with thousands of actuators, the computational complexity is about O(n²) ∼ O(n³) for the direct gradient wavefront control algorithm, while it is about O(n) ∼ O(n^(3/2)) for the iterative wavefront control algorithm, where n is the number of actuators of the AO system. The larger the numbers of sub-apertures and deformable mirror actuators, the more significant the advantage exhibited by the iterative wavefront control algorithm. (paper)
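
    The contrast described above can be sketched as follows: a direct reconstruction applies a precomputed (pseudo-inverse) matrix to the slope vector each frame, while an iterative scheme solves the same least-squares problem with a few iterations. The conjugate-gradient loop below is a generic stand-in for the paper's iterative algorithm, and the matrices are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
n_slopes, n_act = 800, 400                 # synthetic AO system dimensions
D = rng.standard_normal((n_slopes, n_act)) # interaction (poke) matrix
slopes = rng.standard_normal(n_slopes)     # measured wavefront slopes

# Direct gradient approach: precompute a reconstruction (pseudo-inverse)
# matrix once, then each frame costs a dense matrix-vector product.
R = np.linalg.pinv(D)
v_direct = R @ slopes

# Iterative approach: solve the normal equations D^T D v = D^T s with a few
# conjugate-gradient iterations instead of storing/applying a full inverse.
A, b = D.T @ D, D.T @ slopes
v = np.zeros(n_act)
r = b - A @ v
p = r.copy()
for _ in range(50):                        # fixed small number of iterations
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)
    v += alpha * p
    r_new = r - alpha * Ap
    beta = (r_new @ r_new) / (r @ r)
    p = r_new + beta * p
    r = r_new
print("difference between solutions:", float(np.linalg.norm(v - v_direct)))
```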

  4. Symmetric voltage-controlled variable resistance

    Science.gov (United States)

    Vanelli, J. C.

    1978-01-01

    A feedback network makes the resistance of a field-effect transistor (FET) the same for current flowing in either direction. It combines the control voltage with the source and load voltages to give symmetric current/voltage characteristics. Since the circuit produces the same magnitude of output voltage for current flowing in either direction, it introduces no offset in the presence of alternating-polarity signals. It is therefore ideal for sensor and effector circuits in servocontrol systems.

  5. Ultra Low-Voltage Energy Harvesting

    Science.gov (United States)

    2013-09-01

    If, in a solar battery charger, the level of illumination were to drop due to cloud cover, the diode would prevent discharging of the battery when... the source voltage becomes lower than the battery voltage. The drawback of a simple circuit like this is that once the source voltage is lower than the... longer charged when the battery voltage is above the OV setting. [Figure 13: Block diagram of the BQ25504 circuit (from [10]).]

  6. Automatic Voltage Control (AVC) of Danish Transmission System - Concept design

    DEFF Research Database (Denmark)

    Qin, Nan; Abildgaard, Hans; Lund, P.

    2014-01-01

    For more than 20 years it has been a consistent plan by all Danish governments to turn the Danish power production away from fossil fuels towards renewable energy. The result today is that 37% of the total Danish power consumption was covered by mainly wind energy in 2013 aiming at 50% by 2020......, objectives, constraints, algorithms for optimal power flow and some special functions in particular systems, which inspires the concept design of a Danish AVC system to address the future challenges of voltage control. In the concept, the Danish AVC design is based on a centralized control scheme. All...... the substation loses the telecommunications to the control center. RPCs will be integrated to the AVC system as normative regulators in the later stage. Distributed generation units can be organized as virtual power plants and participate in voltage control at transmission level. Energinet.dk as the Danish TSO...

  7. Voltage Quality of Grid Connected Wind Turbines

    DEFF Research Database (Denmark)

    Chen, Zhe; Blaabjerg, Frede; Sun, Tao

    2004-01-01

    Grid connected wind turbines may cause quality problems, such as voltage variation and flicker. This paper discusses the voltage variation and flicker emission of grid connected wind turbines with doubly-fed induction generators. A method to compensate flicker by using a voltage source converter...

  8. Manufacturing technology for practical Josephson voltage normals

    International Nuclear Information System (INIS)

    Kohlmann, Johannes; Kieler, Oliver

    2016-01-01

    In this contribution we present the manufacturing technology for the fabrication of integrated superconducting Josephson series circuits for voltage normals. First we summarize some foundations of Josephson voltage normals and sketch the concept and the setup of the circuits, before we describe the manufacturing technology for modern practical Josephson voltage normals.

  9. 49 CFR 234.221 - Lamp voltage.

    Science.gov (United States)

    2010-10-01

    49 CFR 234.221 (Transportation; Federal Railroad Administration; Maintenance, Inspection, and Testing; Maintenance Standards), Lamp voltage: The voltage at each lamp shall be...

  10. Bootstrapped Low-Voltage Analog Switches

    DEFF Research Database (Denmark)

    Steensgaard-Madsen, Jesper

    1999-01-01

    Novel low-voltage constant-impedance analog switch circuits are proposed. The switch element is a single MOSFET, and constant-impedance operation is obtained using simple circuits to adjust the gate and bulk voltages relative to the switched signal. Low-voltage (1-volt) operation is made feasible...

  11. Voltage generators of high voltage high power accelerators

    International Nuclear Information System (INIS)

    Svinin, M.P.

    1981-01-01

    High voltage electron accelerators are widely used in modern radiation installations for industrial purposes. In the near future their power may be increased further, which would make it possible to raise the efficiency of known radiation processes and to master new power-consuming production processes in industry. Improvement of HV generators by increasing their power and efficiency is one of many scientific and engineering problems whose successful solution will support the further development of these accelerators and their technical parameters. The subject is discussed in detail. (author)

  12. Application of active electrode compensation to perform continuous voltage-clamp recordings with sharp microelectrodes.

    Science.gov (United States)

    Gómez-González, J F; Destexhe, A; Bal, T

    2014-10-01

    Electrophysiological recordings of single neurons in brain tissue are very common in neuroscience. Glass microelectrodes filled with an electrolyte are used to impale the cell membrane in order to record the membrane potential or to inject current. Their high resistance induces a large voltage drop when passing current, and it is essential to correct the voltage measurements. For voltage clamping in particular, the traditional alternatives are the two-electrode voltage-clamp technique or the discontinuous single-electrode voltage clamp (dSEVC). Nevertheless, it is generally difficult to impale two electrodes into the same neuron, and the switching frequency is limited to low frequencies in the case of dSEVC. We present a novel, fully computer-implemented alternative to perform continuous voltage-clamp recordings with a single sharp electrode. To achieve such voltage-clamp recordings, we combine an active electrode compensation algorithm (AEC) with a digital controller (AECVC). We applied two types of control systems: a linear controller (proportional plus integral) and a model-based controller (optimal control). We compared the performance of the two methods to dSEVC using a dynamic model cell and experiments in brain slices. The AECVC method provides an entirely digital way to perform continuous recordings and smooth switching between voltage-clamp, current-clamp or dynamic-clamp configurations without introducing artifacts.
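
    As a toy illustration of the idea (not the authors' AEC algorithm or controller tuning), the sketch below clamps a passive membrane model through a resistive electrode with a discrete PI controller, subtracting the ohmic electrode drop from the recorded potential; all parameter values are invented.

```python
import numpy as np

def simulate_pi_voltage_clamp(v_target=-60e-3, dt=5e-5, steps=4000,
                              kp=2e-8, ki=2e-5, r_electrode=80e6,
                              c_m=150e-12, g_leak=10e-9, e_leak=-70e-3):
    """Toy continuous single-electrode voltage clamp with a PI controller.

    A passive membrane (capacitance c_m, leak g_leak/e_leak) is clamped
    through a resistive electrode. The recorded potential is corrected by
    subtracting the ohmic electrode drop i * r_electrode, in the spirit of
    active electrode compensation; all parameter values are illustrative.
    """
    v_m = e_leak            # membrane potential
    i_cmd, integ = 0.0, 0.0
    trace = []
    for _ in range(steps):
        v_rec = v_m + i_cmd * r_electrode      # what the electrode measures
        v_est = v_rec - i_cmd * r_electrode    # AEC-style correction
        err = v_target - v_est
        integ += err * dt
        i_cmd = kp * err + ki * integ          # PI law -> injected current
        # Passive membrane dynamics (forward Euler).
        dv = (-(g_leak * (v_m - e_leak)) + i_cmd) / c_m
        v_m += dv * dt
        trace.append(v_m)
    return np.array(trace)

trace = simulate_pi_voltage_clamp()
print("final membrane potential (mV):", 1e3 * trace[-1])
```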

  13. Partitional clustering algorithms

    CERN Document Server

    2015-01-01

    This book summarizes the state-of-the-art in partitional clustering. Clustering, the unsupervised classification of patterns into groups, is one of the most important tasks in exploratory data analysis. Primary goals of clustering include gaining insight into, classifying, and compressing data. Clustering has a long and rich history that spans a variety of scientific disciplines including anthropology, biology, medicine, psychology, statistics, mathematics, engineering, and computer science. As a result, numerous clustering algorithms have been proposed since the early 1950s. Among these algorithms, partitional (nonhierarchical) ones have found many applications, especially in engineering and computer science. This book provides coverage of consensus clustering, constrained clustering, large scale and/or high dimensional clustering, cluster validity, cluster visualization, and applications of clustering. Examines clustering as it applies to large and/or high-dimensional data sets commonly encountered in reali...

  14. Voltage Management in Unbalanced Low Voltage Networks Using a Decoupled Phase-Tap-Changer Transformer

    DEFF Research Database (Denmark)

    Coppo, Massimiliano; Turri, Roberto; Marinelli, Mattia

    2014-01-01

    The paper studies a medium voltage-low voltage transformer with a decoupled on load tap changer capability on each phase. The overall objective is the evaluation of the potential benefits on a low voltage network of such possibility. A realistic Danish low voltage network is used for the analysis...

  15. 76 FR 70721 - Voltage Coordination on High Voltage Grids; Notice of Staff Workshop

    Science.gov (United States)

    2011-11-15

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. AD12-5-000] Voltage Coordination on High Voltage Grids; Notice of Staff Workshop Take notice that the Federal Energy Regulatory Commission will hold a Workshop on Voltage Coordination on High Voltage Grids on Thursday, December 1, 2011...

  16. Benchmarking monthly homogenization algorithms

    Science.gov (United States)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data
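
    The first metric listed, the centered root-mean-square error, can be sketched as below on hypothetical series; the benchmark data themselves are not reproduced here.

```python
import numpy as np

def centered_rmse(homogenized, truth):
    """Centered RMSE: remove each series' mean before comparing, so constant
    offsets are ignored and only the shape of the deviations is penalized."""
    h = np.asarray(homogenized, dtype=float)
    t = np.asarray(truth, dtype=float)
    return float(np.sqrt(np.mean(((h - h.mean()) - (t - t.mean())) ** 2)))

# Hypothetical monthly temperature anomalies: truth vs. a homogenized series
# that still contains a small residual break in its second half.
rng = np.random.default_rng(7)
truth = rng.normal(0.0, 0.5, 240)                   # 20 years of monthly values
homog = truth.copy()
homog[120:] += 0.3                                  # residual inhomogeneity
print("CRMSE (deg C):", centered_rmse(homog, truth))
```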

  17. Piezo Voltage Controlled Planar Hall Effect Devices.

    Science.gov (United States)

    Zhang, Bao; Meng, Kang-Kang; Yang, Mei-Yin; Edmonds, K W; Zhang, Hao; Cai, Kai-Ming; Sheng, Yu; Zhang, Nan; Ji, Yang; Zhao, Jian-Hua; Zheng, Hou-Zhi; Wang, Kai-You

    2016-06-22

    The electrical control of the magnetization switching in ferromagnets is highly desired for future spintronic applications. Here we report on hybrid piezoelectric (PZT)/ferromagnetic (Co2FeAl) devices in which the planar Hall voltage in the ferromagnetic layer is tuned solely by piezo voltages. The change of planar Hall voltage is associated with magnetization switching through 90° in the plane under piezo voltages. Room temperature magnetic NOT and NOR gates are demonstrated based on the piezo voltage controlled Co2FeAl planar Hall effect devices without the external magnetic field. Our demonstration may lead to the realization of both information storage and processing using ferromagnetic materials.

  18. Capacitor Voltages Measurement and Balancing in Flying Capacitor Multilevel Converters Utilizing a Single Voltage Sensor

    DEFF Research Database (Denmark)

    Farivar, Glen; Ghias, Amer M. Y. M.; Hredzak, Branislav

    2017-01-01

    This paper proposes a new method for measuring capacitor voltages in multilevel flying capacitor (FC) converters that requires only one voltage sensor per phase leg. Multiple dc voltage sensors traditionally used to measure the capacitor voltages are replaced with a single voltage sensor at the ac...... side of the phase leg. The proposed method is subsequently used to balance the capacitor voltages using only the measured ac voltage. The operation of the proposed measurement and balancing method is independent of the number of the converter levels. Experimental results presented for a five-level FC...

  19. Voltage-Gated Calcium Channels

    Science.gov (United States)

    Zamponi, Gerald Werner

    Voltage Gated Calcium Channels is the first comprehensive book in the calcium channel field, encompassing over thirty years of progress towards our understanding of calcium channel structure, function, regulation, physiology, pharmacology, and genetics. This book balances contributions from many of the leading authorities in the calcium channel field with fresh perspectives from rising stars in the area, taking into account the most recent literature and concepts. This is the only all-encompassing calcium channel book currently available, and is an essential resource for academic researchers at all levels in the areas of neuroscience, biophysics, and cardiovascular sciences, as well as for researchers in the drug discovery area.

  20. Searching for Plausible N-k Contingencies Endangering Voltage Stability

    DEFF Research Database (Denmark)

    Weckesser, Johannes Tilman Gabriel; Van Cutsem, Thierry

    2017-01-01

    This paper presents a novel search algorithm using time-domain simulations to identify plausible N − k contingencies endangering voltage stability. Starting from an initial list of disturbances, progressively more severe contingencies are investigated. After simulation of an N − k contingency......, the simulation results are assessed. If the system response is unstable, a plausible harmful contingency sequence has been found. Otherwise, components affected by the contingencies are considered as candidate next events leading to N − (k + 1) contingencies. This implicitly takes into account hidden failures...
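
    The search strategy described above can be sketched as a breadth-first expansion over contingency sequences; simulate_sequence and affected_components below are hypothetical placeholders for the time-domain simulation and its post-processing.

```python
from collections import deque

def search_nk_contingencies(initial_events, simulate_sequence,
                            affected_components, max_depth=3):
    """Breadth-first search over contingency sequences, in the spirit of the
    record: an unstable simulation yields a plausible harmful N-k sequence,
    a stable one is extended with the components its events affected.

    simulate_sequence(seq)   -> True if the system response is stable
    affected_components(seq) -> iterable of candidate next outages
    Both are hypothetical callbacks standing in for time-domain simulation.
    """
    harmful = []
    queue = deque((ev,) for ev in initial_events)
    while queue:
        seq = queue.popleft()
        if not simulate_sequence(seq):
            harmful.append(seq)              # plausible N-k sequence found
            continue
        if len(seq) < max_depth:
            for nxt in affected_components(seq):
                if nxt not in seq:
                    queue.append(seq + (nxt,))
    return harmful

# Toy usage with made-up grid components and an arbitrary stability rule.
events = ["line_A", "line_B", "gen_1"]
stable = lambda seq: not ("line_A" in seq and "gen_1" in seq)
neighbors = lambda seq: ["line_B", "gen_1", "line_C"]
print(search_nk_contingencies(events, stable, neighbors))
```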