Low-Energy Real-Time OS Using Voltage Scheduling Algorithm for Variable Voltage Processors
Okuma, Takanori; Yasuura, Hiroto
2001-01-01
This paper presents a real-time OS based on μITRON that uses the proposed voltage scheduling algorithm for variable voltage processors, which can vary the supply voltage dynamically. The proposed voltage scheduling algorithms assign a voltage level to each task dynamically in order to minimize energy consumption under timing constraints. With the presented real-time OS, running tasks at a low supply voltage leads to drastic energy reduction. In addition, the presented voltage scheduling algorithm is ...
MPPT algorithm for voltage controlled PV inverters
DEFF Research Database (Denmark)
Kerekes, Tamas; Teodorescu, Remus; Liserre, Marco
2008-01-01
This paper presents a novel concept for an MPPT that can be used with voltage controlled grid-connected PV inverters. In single-phase systems, the 100 Hz ripple in the AC power is also present on the DC side. Depending on the DC-link capacitor, this power fluctuation can be used to track the MPP of the PV array, using the fact that at the MPP the power oscillations are very small. In this way the algorithm can detect that the current operating point is at the MPP for the present atmospheric conditions.
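As a rough illustration of the ripple-based idea (not the authors' implementation — the P–V curve, constants, and step logic below are all assumptions), an MPP tracker can walk the DC-link voltage reference in whichever direction shrinks the 100 Hz power oscillation:

```python
def pv_power(v, v_mpp=30.0, p_max=200.0):
    # Hypothetical concave P-V curve of a PV array, peaking at v_mpp.
    return max(0.0, p_max - 0.5 * (v - v_mpp) ** 2)

def power_oscillation(v_dc, ripple=1.0):
    # Size of the 100 Hz power swing caused by a +/- `ripple` volt
    # DC-link oscillation around the operating point; it vanishes at the MPP.
    return abs(pv_power(v_dc + ripple) - pv_power(v_dc - ripple))

def track_mpp(v_start, step=0.5, tol=1e-3, max_iter=200):
    # Move the voltage reference in the direction that shrinks the power
    # oscillation until it is negligible, i.e. the MPP has been reached.
    v = v_start
    for _ in range(max_iter):
        if power_oscillation(v) < tol:
            break
        v += step if power_oscillation(v + step) < power_oscillation(v - step) else -step
    return v
```

On this toy curve the oscillation is exactly zero at the MPP, which is the detection criterion the abstract describes.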
Comparison of Algorithms for Control of Loads for Voltage Regulation
DEFF Research Database (Denmark)
Douglass, Philip James; Han, Xue; You, Shi
2014-01-01
Autonomous flexible loads can be utilized to regulate voltage on low voltage feeders. This paper compares two algorithms for controlling loads: a simple voltage droop, where load power consumption is varied in proportion to RMS voltage; and a normalized relative voltage droop, which modifies the simple voltage droop by subtracting the mean voltage value at the bus and dividing by the standard deviation. These two controllers are applied to hot water heaters simulated in a simple residential feeder. The simulation results show that both controllers reduce the frequency of undervoltage events...
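The two droop laws can be sketched as follows; the nominal voltage, gain, and rated power are illustrative assumptions, not values from the paper:

```python
def simple_droop(v_rms, v_nom=230.0, gain=0.02, p_rated=3000.0):
    # Simple voltage droop: consumption varies in proportion to the
    # RMS voltage, clipped to the load's 0..rated-power range.
    frac = 1.0 + gain * (v_rms - v_nom)
    return p_rated * min(1.0, max(0.0, frac))

def normalized_droop(v_rms, v_history, gain=0.5, p_rated=3000.0):
    # Normalized relative droop: subtract the bus mean and divide by the
    # standard deviation, so the controller reacts to relative voltage.
    mean = sum(v_history) / len(v_history)
    std = (sum((v - mean) ** 2 for v in v_history) / len(v_history)) ** 0.5
    frac = 1.0 + gain * (v_rms - mean) / (std or 1.0)
    return p_rated * min(1.0, max(0.0, frac))
```

The normalization makes the controller insensitive to the absolute voltage level at the bus, which differs along the feeder.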
Large scale tracking algorithms
Energy Technology Data Exchange (ETDEWEB)
Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-01-01
Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
Energy Conservation Using Dynamic Voltage Frequency Scaling for Computational Cloud
Directory of Open Access Journals (Sweden)
A. Paulin Florence
2016-01-01
Cloud computing is a new technology which supports resource sharing on a "Pay as you go" basis around the world. It provides various services such as SaaS, IaaS, and PaaS. Computation is a part of IaaS, and all computational requests are to be served efficiently with optimal power utilization in the cloud. Recently, various algorithms have been developed to reduce power consumption, and the Dynamic Voltage and Frequency Scaling (DVFS) scheme is also used in this context. In this paper we devise a methodology which analyzes the behavior of a given cloud request and identifies the associated type of algorithm. Once the type of algorithm is identified, its time complexity is calculated using asymptotic notation. Using a best-fit strategy, the appropriate host is identified and the incoming job is allocated to it. From the measured time complexity, the required clock frequency of the host is computed. The CPU frequency is then scaled up or down using the DVFS scheme, enabling energy savings of up to 55% of total watt consumption.
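The frequency-selection step can be sketched as follows: given an operation count estimated from the inferred time complexity and a deadline, pick the lowest available P-state that still meets it. The function names and P-state list are assumptions for illustration, not the paper's API:

```python
def required_frequency(op_count, deadline_s, cycles_per_op=1.0):
    # Cycles the request needs, divided by the time available.
    return op_count * cycles_per_op / deadline_s

def select_dvfs_level(op_count, deadline_s, levels_hz):
    # Scale down to the lowest available frequency that still meets the
    # deadline; fall back to the top level if none does.
    need = required_frequency(op_count, deadline_s)
    feasible = [f for f in levels_hz if f >= need]
    return min(feasible) if feasible else max(levels_hz)
```

Because dynamic power grows superlinearly with frequency (and the voltage it requires), running at the slowest deadline-feasible level is what yields the energy saving.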
Reduced Voltage Scaling in Clock Distribution Networks
Directory of Open Access Journals (Sweden)
Khader Mohammad
2009-01-01
We propose a novel circuit technique to generate reduced voltage swing (RVS) signals for active power reduction on main buses and clocks. This is achieved without performance degradation, without an extra power supply requirement, and with minimum area overhead. The technique stops the discharge path on a net that is swinging low at a certain voltage value. It reduces active power on the target net by as much as 33% compared to traditional full-swing signaling. The logic-0 voltage value is programmable through control bits. If desired, the reduced-swing mode can also be disabled. The approach assumes that the logic-0 voltage value is always less than the threshold voltage of the nMOS receivers, which eliminates the need for low-to-high voltage translation. The reduced noise margin and the increased leakage on the receiver transistors using this approach have been addressed through the selective usage of multi-threshold voltage (MTV) devices and the programmability of the low voltage value.
A new way of estimating compute-boundedness and its application to dynamic voltage scaling
DEFF Research Database (Denmark)
Venkatachalam, Vasanth; Franz, Michael; Probst, Christian W.
2007-01-01
Many dynamic voltage scaling algorithms rely on measuring hardware events (such as cache misses) for predicting how much a workload can be slowed down with acceptable performance loss. The events measured, however, are at best indirectly related to execution time and clock frequency. By relating...
Variation-aware adaptive voltage scaling for digital CMOS circuits
Wirnshofer, Martin
2013-01-01
Increasing performance demands in integrated circuits, together with limited energy budgets, force IC designers to find new ways of saving power. One innovative way is the presented adaptive voltage scaling scheme, which tunes the supply voltage according to the present process, voltage and temperature variations as well as aging. The voltage is adapted “on the fly” by means of in-situ delay monitors to exploit unused timing margin, produced by state-of-the-art worst-case designs. This book discusses the design of the enhanced in-situ delay monitors and the implementation of the complete control-loop comprising the monitors, a control-logic and an on-chip voltage regulator. An analytical Markov-based model of the control-loop is derived to analyze its robustness and stability. Variation-Aware Adaptive Voltage Scaling for Digital CMOS Circuits provides an in-depth assessment of the proposed voltage scaling scheme when applied to an arithmetic and an image processing circuit. This book is written for engine...
Reproducible and controllable induction voltage adder for scaled beam experiments
Sakai, Yasuo; Nakajima, Mitsuo; Horioka, Kazuhiko
2016-08-01
A reproducible and controllable induction adder was developed using solid-state switching devices and Finemet cores for scaled beam compression experiments. A gate controlled MOSFET circuit was developed for the controllable voltage driver. The MOSFET circuit drove the induction adder at low magnetization levels of the cores which enabled us to form reproducible modulation voltages with jitter less than 0.3 ns. Preliminary beam compression experiments indicated that the induction adder can improve the reproducibility of modulation voltages and advance the beam physics experiments.
Energy reduction through voltage scaling and lightweight checking
Kadric, Edin
As the semiconductor roadmap reaches smaller feature sizes and the end of Dennard Scaling, design goals change, and managing the power envelope often dominates delay minimization. Voltage scaling remains a powerful tool to reduce energy. We find that it results in about 60% geomean energy reduction on top of other common low-energy optimizations with 22nm CMOS technology. However, when voltage is reduced, it becomes easier for noise and particle strikes to upset a node, potentially causing Silent Data Corruption (SDC). The 60% energy reduction, therefore, comes with a significant drop in reliability. Duplication with checking and triple-modular redundancy are traditional approaches used to combat transient errors, but spending 2--3x the energy for redundant computation can diminish or reverse the benefits of voltage scaling. As an alternative, we explore the opportunity to use checking operations that are cheaper than the base computation they are guarding. We devise a classification system for applications and their lightweight checking characteristics. In particular, we identify and evaluate the effectiveness of lightweight checks in a broad set of common tasks in scientific computing and signal processing. We find that the lightweight checks cost only a fraction of the base computation (0-25%) and allow us to recover the reliability losses from voltage scaling. Overall, we show about 50% net energy reduction without compromising reliability compared to operation at the nominal voltage. We use FPGAs (Field-Programmable Gate Arrays) in our work, although the same ideas can be applied to different systems. On top of voltage scaling, we explore other common low-energy techniques for FPGAs: transmission gates, gate boosting, power gating, low-leakage (high-Vth) processes, and dual-Vdd architectures. We do not scale voltage for memories, so lower voltages help us reduce logic and interconnect energy, but not memory energy. At lower voltages, memories become dominant
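The idea of guarding a computation with a check far cheaper than recomputation can be illustrated with Freivalds' classic randomized matrix-product verification — a generic example in the same spirit, not one of the checks evaluated in the thesis:

```python
import random

def freivalds_check(a, b, c, trials=16, seed=0):
    # Lightweight check: instead of recomputing the O(n^3) product A*B
    # to validate C, multiply both sides by a random 0/1 vector r and
    # compare A*(B*r) with C*r, which costs only O(n^2) per trial.
    rng = random.Random(seed)
    n = len(a)
    for _ in range(trials):
        r = [rng.randint(0, 1) for _ in range(n)]
        br = [sum(b[i][j] * r[j] for j in range(n)) for i in range(n)]
        abr = [sum(a[i][j] * br[j] for j in range(n)) for i in range(n)]
        cr = [sum(c[i][j] * r[j] for j in range(n)) for i in range(n)]
        if abr != cr:
            return False   # C is definitely not A*B
    return True            # equality holds with high probability
```

Each trial misses a corrupted entry with probability at most 1/2, so a handful of trials makes silent corruption very unlikely at a small fraction of the base computation's cost.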
Directory of Open Access Journals (Sweden)
G. AZHAGUNILA
2011-02-01
The main aim of this work is to develop a Dynamic Voltage Scaling (DVS) algorithm for real-time systems with resource constraints; the system thus developed is fault tolerant as well. The system is assumed to contain independent periodic tasks, scheduled with the Earliest Deadline First (EDF) algorithm. The algorithm helps in meeting the deadlines of all the tasks and also ensures that the total power consumption is minimized. The other objective is to develop a fault-tolerant system; the proposed system is designed to handle hardware faults. Thus the proposed system is both energy efficient and reliable.
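For independent periodic tasks under EDF, a standard static DVS result (assumed here as background; the paper's algorithm is more elaborate) is that the clock can be slowed in proportion to the utilization while keeping every deadline:

```python
def edf_static_frequency(tasks, f_max_hz):
    # tasks: (wcet_at_fmax_s, period_s) pairs. Under EDF the task set
    # stays schedulable at any frequency f as long as the scaled
    # utilization sum(C_i/T_i) * (f_max/f) <= 1, so f = U * f_max is
    # the slowest (lowest-energy) feasible static setting.
    u = sum(c / t for c, t in tasks)
    return f_max_hz * min(1.0, u)
```

Since dynamic energy scales roughly with V² and the feasible voltage drops with frequency, running at this utilization-scaled frequency minimizes energy for a static schedule.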
Institute of Scientific and Technical Information of China (English)
SU Yajuan; WEI Shaojun
2005-01-01
A design-level technique for energy minimization in distributed real-time systems that combines Dynamic voltage scheduling (DVS) and Adaptive body biasing (ABB) is proposed. First, a simplified energy optimization model is presented in which the supply voltage or body biasing voltage is kept constant within each separate frequency region, so that solving the exact equations is avoided. A divergence between the simplified and analytic models of within 5% indicates the accuracy of this model. Based on it, the proposed approach, named LEVVS (Low energy supply voltage and body biasing voltage scheduling algorithm), explores the space of minimal energy consumption by finding the optimal trade-off between dynamic and static energy. The corresponding optimal supply voltage and body biasing voltage are determined by an iterative method in which the supply and body biasing voltages of tasks are adjusted according to the energy-latency differential coefficient of each task and the slack time distribution of the system. Experiments show that the LEVVS approach yields 51% more average energy reduction than employing DVS alone. Furthermore, the effects of switch capacitance and global slack on the energy saving efficiency of LEVVS are investigated: the smaller the global slack or average switch capacitance, the greater the energy saving of LEVVS compared with DVS.
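The dynamic-versus-static trade-off that LEVVS exploits can be illustrated with a toy model: an alpha-power-style delay model, an exponential leakage term, and a brute-force grid search standing in for the paper's iterative adjustment. Every constant and functional form below is an illustrative assumption, not the paper's model:

```python
import math

def clock_freq(vdd, vbb, k=1.0e9, vth0=0.4, gamma=0.2):
    # Toy delay model: body bias vbb shifts the threshold voltage;
    # frequency rises roughly as (vdd - vth)^2 / vdd.
    vth = vth0 - gamma * vbb
    return 0.0 if vdd <= vth else k * (vdd - vth) ** 2 / vdd

def task_energy(vdd, vbb, cycles, deadline_s, c_eff=1e-9, i0=1e-3, b=3.0):
    # Dynamic switching energy plus leakage integrated over the run time.
    # Reverse body bias (vbb < 0) cuts leakage but lowers the frequency.
    f = clock_freq(vdd, vbb)
    if f <= 0.0 or cycles / f > deadline_s:
        return math.inf                      # deadline violated
    dynamic = c_eff * vdd ** 2 * cycles
    static = vdd * i0 * math.exp(b * vbb) * (cycles / f)
    return dynamic + static

def best_setting(cycles, deadline_s):
    # Exhaustive grid search over (vdd, vbb); LEVVS instead adjusts the
    # pair iteratively using energy-latency differential coefficients.
    grid = [(v / 100.0, bb / 100.0)
            for v in range(50, 121, 5) for bb in range(-50, 1, 5)]
    return min(grid, key=lambda p: task_energy(p[0], p[1], cycles, deadline_s))
```

Even on this toy model the optimum lands at the lowest feasible supply voltage with as much reverse body bias as the deadline allows, which is the qualitative behavior the abstract describes.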
Voltage Impacts of Utility-Scale Distributed Wind
Energy Technology Data Exchange (ETDEWEB)
Allen, A.
2014-09-01
Although most utility-scale wind turbines in the United States are added at the transmission level in large wind power plants, distributed wind power offers an alternative that could increase the overall wind power penetration without the need for additional transmission. This report examines the distribution feeder-level voltage issues that can arise when adding utility-scale wind turbines to the distribution system. Four of the Pacific Northwest National Laboratory taxonomy feeders were examined in detail to study the voltage issues associated with adding wind turbines at different distances from the substation. General rules relating feeder resistance up to the point of turbine interconnection to the expected maximum voltage change levels were developed. Additional analysis examined line and transformer overvoltage conditions.
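The kind of resistance-to-voltage-change rule the report develops can be approximated with the classic radial-feeder estimate (an assumption here, not the report's exact rule):

```python
def voltage_change_pu(p_inj_w, q_inj_var, r_ohm, x_ohm, v_nom_v=12470.0):
    # Rule-of-thumb voltage change at the interconnection point of a
    # radial feeder: dV [pu] ~ (R*P + X*Q) / V_nom^2, so the voltage
    # rise from injected power grows with the feeder resistance up to
    # the turbine.  v_nom_v defaults to a common 12.47 kV feeder class.
    return (r_ohm * p_inj_w + x_ohm * q_inj_var) / v_nom_v ** 2
```

For example, a 1.5 MW injection through 2 Ω of feeder resistance on a 12.47 kV feeder raises the voltage by roughly 2%, which is why interconnection distance matters.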
Kalman plus weights: a time scale algorithm
Greenhall, C. A.
2001-01-01
KPW is a time scale algorithm that combines Kalman filtering with the basic time scale equation (BTSE). A single Kalman filter that estimates all clocks simultaneously is used to generate the BTSE frequency estimates, while the BTSE weights are inversely proportional to the white FM variances of the clocks. Results from simulated clock ensembles are compared to previous simulation results from other algorithms.
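The basic time scale equation with inverse-variance weights can be sketched as follows. This is a minimal illustration with my own function names; the full KPW algorithm additionally runs a single Kalman filter over all clocks to produce the predictions:

```python
def btse_weights(white_fm_vars):
    # Weights inversely proportional to each clock's white-FM variance,
    # normalized to sum to one.
    inv = [1.0 / v for v in white_fm_vars]
    total = sum(inv)
    return [w / total for w in inv]

def ensemble_correction(readings, predictions, weights):
    # Basic time scale equation: the ensemble time offset is the
    # weighted average of each clock's deviation from its prediction.
    return sum(w * (x - p) for w, x, p in zip(weights, readings, predictions))
```

Down-weighting noisy clocks this way minimizes the white-FM noise of the resulting ensemble time.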
DEFF Research Database (Denmark)
Zhao, Xin; Meng, Lexuan; Savaghebi, Mehdi
2016-01-01
… control based voltage support strategy has been proposed to aid MGs riding through three-phase asymmetrical voltage sags. However, since the line impedance from each converter to the point of common coupling (PCC) is not identical, both the positive-sequence and negative-sequence output currents … of the converters cannot be equally shared if no extra current balancing loop is added. Accordingly, a dynamic consensus algorithm (DCA) based negative/positive sequence current sharing scheme is proposed in this paper. Finally, a lab-scale AC microgrid was designed and tested in the lab to validate the feasibility …
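The consensus idea — converters agreeing on a shared quantity through neighbor-to-neighbor exchange — can be illustrated with a plain discrete-time consensus update, a strong simplification of the paper's DCA:

```python
def consensus_step(values, neighbors, eps=0.2):
    # Each node nudges its estimate toward its neighbors' estimates;
    # iterating drives all nodes to the network-wide average.
    return [v + eps * sum(values[j] - v for j in neighbors[i])
            for i, v in enumerate(values)]

def run_consensus(values, neighbors, steps=60):
    for _ in range(steps):
        values = consensus_step(values, neighbors)
    return values
```

Applied to per-converter current measurements, such an update lets every converter learn the average current and correct its own share without a central controller.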
Genetic Algorithm-Based Artificial Neural Network for Voltage Stability Assessment
Directory of Open Access Journals (Sweden)
Garima Singh
2011-01-01
With the emerging trend of restructuring in the electric power industry, many transmission lines have been forced to operate at almost their full capacities worldwide. Due to this, more incidents of voltage instability and collapse are being observed throughout the world, leading to major system breakdowns. To avoid these undesirable incidents, a fast and accurate estimation of the voltage stability margin is required. In this paper, a genetic algorithm based back propagation neural network (GABPNN) has been proposed for voltage stability margin estimation, which is an indication of the power system's proximity to voltage collapse. The proposed approach utilizes a hybrid algorithm that integrates a genetic algorithm and the back propagation neural network, aiming to combine the capacity of GAs to avoid local minima with the fast execution of the BP algorithm. Input features for GABPNN are selected on the basis of an angular distance-based clustering technique. The performance of the proposed GABPNN approach has been compared with the commonly used gradient-based BP neural network by estimating the voltage stability margin at different loading conditions in a 6-bus and the IEEE 30-bus system. The GA-based neural network learns faster and at the same time provides more accurate voltage stability margin estimation than that based on the BP algorithm alone. It is found to be suitable for online applications in energy management systems.
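A toy genetic search over the weights of a single linear neuron gives the flavor of the GA half of GABPNN. The real method hybridizes a GA with backpropagation on a full network; everything below (model, operators, constants) is an illustrative simplification:

```python
import random

def predict(w, x):
    # A single linear neuron stands in for the full BP network.
    return w[0] * x + w[1]

def fitness(w, data):
    # Negative sum of squared errors: larger is fitter.
    return -sum((predict(w, x) - y) ** 2 for x, y in data)

def ga_train(data, pop_size=40, gens=200, seed=1):
    # Plain GA: elitist selection, midpoint crossover, Gaussian mutation.
    rng = random.Random(seed)
    pop = [[rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda w: fitness(w, data), reverse=True)
        elite = pop[:pop_size // 2]
        children = [
            [(wa + wb) / 2.0 + rng.gauss(0.0, 0.05)
             for wa, wb in zip(*rng.sample(elite, 2))]
            for _ in range(pop_size - len(elite))
        ]
        pop = elite + children
    return max(pop, key=lambda w: fitness(w, data))
```

Because the population explores many weight vectors at once, the search can escape poor local minima that pure gradient descent would get stuck in, which is the motivation for the hybrid in the paper.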
Toward the Optimal Configuration of Dynamic Voltage Scaling Points in Real-Time Applications
Institute of Scientific and Technical Information of China (English)
Hui-Zhan Yi; Xue-Jun Yang
2006-01-01
In real-time applications, compiler-directed dynamic voltage scaling (DVS) can reduce energy consumption efficiently: the compiler places voltage scaling points at appropriate locations, and the supply voltage and clock frequency are adjusted according to the relationship between the reduced time and the reduced workload. This paper presents the optimal configuration of dynamic voltage scaling points without voltage scaling overhead, which minimizes energy consumption. The conclusion is proved theoretically and finally confirmed by simulations with an equally-spaced voltage scaling configuration.
Solar Load Voltage Tracking for Water Pumping: An Algorithm
Kappali, M.; Udayakumar, R. Y.
2014-07-01
Maximum power is to be harnessed from a solar photovoltaic (PV) panel to minimize the effective cost of solar energy. This is accomplished by maximum power point tracking (MPPT). There are different methods to realise MPPT. This paper proposes a simple algorithm to implement the MPPT_lv method in a closed-loop environment for a centrifugal pump driven by a brushed PMDC motor. Simulation testing of the algorithm is done and the results are found to be encouraging and supportive of the proposed MPPT_lv method.
Directory of Open Access Journals (Sweden)
V. Tamilselvan
2015-05-01
This study addresses a shuffled frog leaping algorithm for solving the multi-objective reactive power dispatch problem in a power system. Optimal Reactive Power Dispatch (ORPD) is formulated as a nonlinear, multi-modal and mixed-variable problem. The intended technique is based on the minimization of real power loss, minimization of voltage deviation and maximization of the voltage stability margin. Generator voltages, capacitor banks and tap positions of tap-changing transformers are used as the optimization variables of this problem. A memetic meta-heuristic, the shuffled frog-leaping algorithm, is intended to solve multi-objective optimal reactive power dispatch problems considering voltage stability margin and voltage deviation. The Shuffled Frog-Leaping Algorithm (SFLA) is a population-based cooperative search metaphor inspired by natural memetics. The algorithm contains elements of local search and global information exchange. Its most important benefit is a higher speed of convergence to a better solution. The intended method is applied to the ORPD problem on the IEEE 57-bus power system and compared with two versions of the differential evolutionary algorithm. The simulation results show the effectiveness of the intended method.
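The local-search-plus-shuffling structure of SFLA can be sketched on a 1-D objective. This is a bare-bones illustration of the metaphor only; the paper applies SFLA to the constrained multi-objective ORPD problem:

```python
import random

def sfla_minimize(f, lo, hi, frogs=40, memeplexes=4, iters=200, seed=0):
    # Frogs are ranked by fitness and dealt into memeplexes; in each
    # memeplex the worst frog leaps toward the local best (local search),
    # or is re-randomized if the leap does not improve it.  Re-merging
    # the memeplexes each round provides the global information exchange.
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(frogs)]
    for _ in range(iters):
        pop.sort(key=f)
        plexes = [pop[i::memeplexes] for i in range(memeplexes)]
        for plex in plexes:
            best, worst = plex[0], plex[-1]
            leap = worst + rng.random() * (best - worst)
            plex[-1] = leap if f(leap) < f(worst) else rng.uniform(lo, hi)
        pop = [frog for plex in plexes for frog in plex]
    return min(pop, key=f)
```

The round-robin deal after sorting guarantees every memeplex receives a mix of good and bad frogs, which is what makes the shuffling step informative.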
Dynamic Uniform Scaling for Multiobjective Genetic Algorithms
DEFF Research Database (Denmark)
Pedersen, Gerulf; Goldberg, David E.
2004-01-01
Before Multiobjective Evolutionary Algorithms (MOEAs) can be used as a widespread tool for solving arbitrary real world problems there are some salient issues which require further investigation. One of these issues is how a uniform distribution of solutions along the Pareto non-dominated front can be obtained. … the issue of obtaining a diverse set of solutions for badly scaled objective functions will be investigated and proposed solutions will be implemented using the NSGA-II algorithm.
Eliminating harmonics in line to line voltage using genetic algorithm using multilevel inverter
Energy Technology Data Exchange (ETDEWEB)
Gunasekaran, R. [Excel College of Engineering and Technology, Komarapalayam (India). Electrical and Electronics Engineering; Karthikeyan, C. [K.S. Rangasamy College of Engineering, Tamil Nadu (India). Electrical and Electronics Engineering
2017-04-15
In this project the minimization of the total harmonic distortion (THD) of the multilevel inverter's output voltage is discussed. The approach to reducing the harmonic content of the inverter's output voltage is THD elimination. The switching angles are varied with the fundamental frequency so that the output THD is minimized. In three-phase applications, the line voltage harmonics are of main concern from the load point of view. Using a genetic algorithm (GA), a THD minimization process is applied directly to the line-to-line voltage of the inverter. The GA allows the determination of the optimized parameters and consequently an optimal operating point of the circuit, and a wide pass band with unity gain is obtained.
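The objective such a GA minimizes can be sketched with the standard staircase-waveform harmonic model (a common textbook formulation assumed here, with a cheap random search standing in for the paper's genetic algorithm):

```python
import math, random

def harmonic(n, angles):
    # n-th odd-harmonic amplitude of a staircase waveform built from one
    # H-bridge cell per switching angle (angles in radians, 0..pi/2).
    return 4.0 / (n * math.pi) * sum(math.cos(n * a) for a in angles)

def line_thd(angles, n_max=49):
    # Line-to-line THD: triplen harmonics cancel between phases, so only
    # odd, non-triplen orders up to n_max contribute.
    fund = harmonic(1, angles)
    dist = sum(harmonic(n, angles) ** 2
               for n in range(5, n_max + 1, 2) if n % 3 != 0)
    return math.sqrt(dist) / abs(fund)

def search_angles(num_angles=3, tries=2000, seed=0):
    # Cheap random search standing in for the paper's genetic algorithm:
    # sample sorted angle triples and keep the lowest line-to-line THD.
    rng = random.Random(seed)
    best = None
    for _ in range(tries):
        cand = sorted(rng.uniform(0.0, math.pi / 2) for _ in range(num_angles))
        if best is None or line_thd(cand) < line_thd(best):
            best = cand
    return best
```

Applying the fitness function to the line-to-line voltage rather than the phase voltage is the paper's key point: the triplen harmonics drop out of the objective, freeing the angles to suppress the harmonics the load actually sees.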
A voltage resonance-based single-ended online fault location algorithm for DC distribution networks
Institute of Scientific and Technical Information of China (English)
JIA Ke; LI Meng; BI TianShu; YANG QiXun
2016-01-01
A novel single-ended online fault location algorithm is investigated for DC distribution networks. The proposed algorithm calculates the fault distance based on the characteristics of the voltage resonance, and Prony's method is introduced to extract those characteristics. A novel method is proposed to solve the pseudo dual-root problem in the calculation process. Multiple data windows are adopted to enhance the robustness of the proposed algorithm, and an index is proposed to evaluate the accuracy and validity of the results derived from the various data windows. The performance of the proposed algorithm in different fault scenarios was evaluated using PSCAD/EMTDC simulations. The results show that the algorithm can locate faults with transient resistance using 1.6 ms of DC-side voltage data after fault inception and offers good precision.
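The Prony step — extracting the resonance frequency of a damped voltage oscillation from sampled data — can be sketched with a minimal second-order fit. This is my own simplification, far short of the paper's pseudo-root handling and multi-window scheme:

```python
import cmath, math

def prony_freq(x, dt):
    # Minimal 2nd-order Prony fit: model x[k] = a1*x[k-1] + a2*x[k-2]
    # by least squares (2x2 normal equations), then read the damped
    # oscillation frequency off the characteristic root z^2 = a1*z + a2.
    n = len(x)
    s11 = sum(x[k - 1] * x[k - 1] for k in range(2, n))
    s12 = sum(x[k - 1] * x[k - 2] for k in range(2, n))
    s22 = sum(x[k - 2] * x[k - 2] for k in range(2, n))
    b1 = sum(x[k] * x[k - 1] for k in range(2, n))
    b2 = sum(x[k] * x[k - 2] for k in range(2, n))
    det = s11 * s22 - s12 * s12
    a1 = (b1 * s22 - b2 * s12) / det
    a2 = (s11 * b2 - s12 * b1) / det
    root = (a1 + cmath.sqrt(a1 * a1 + 4.0 * a2)) / 2.0
    return abs(cmath.log(root).imag) / (2.0 * math.pi * dt)
```

Once the resonance frequency is known, the fault distance follows from the relation between that frequency and the inductance (hence line length) of the faulted loop, which is the basis of the paper's distance calculation.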
The mathematical model realization algorithm of high voltage cable
2006-01-01
In a mathematical model realization algorithm, it is important to know the order in which the necessary relations are computed and how they are represented. Depending on how loads or signal sources are connected at selected points of the mathematical model, the equations at each such point must be formed so that all unknown variables at that point can be determined. The number of equations describing a point must coincide with the number of unknown variables, and the matrix which describes the factor...
Directory of Open Access Journals (Sweden)
D. Venugopal
2015-04-01
This paper proposes optimal location of FACTS devices in a power system using evolutionary algorithms. The locations of the FACTS controllers, their types and their rated values are optimized simultaneously by the proposed algorithm. From the FACTS device family, the shunt device Static Var Compensator (SVC) is considered. The proposed BAT algorithm is a very effective method for the optimal choice and placement of the SVC device to improve the voltage profile of power systems. The proposed algorithm has been applied to the IEEE 30-bus system.
Linear scaling algorithms: Progress and promise
Energy Technology Data Exchange (ETDEWEB)
Stechel, E.B.
1996-08-01
The goal of this laboratory-directed research and development (LDRD) project was to develop a new and efficient electronic structure algorithm that would scale linearly with system size. Since the start of the program this field has received much attention in the literature as well as in terms of focused symposia and at least one dedicated international workshop. The major success of this program is the development of a unique algorithm for minimization of the density functional energy which replaces the diagonalization of the Kohn-Sham Hamiltonian with block diagonalization into explicit occupied and partially occupied (in metals) subspaces and an implicit unoccupied subspace. The progress reported here represents an important step toward the simultaneous goals of linear scaling, controlled accuracy, efficiency and transferability. The method is specifically designed to deal with localized, non-orthogonal basis sets to maximize transferability and state-by-state iteration to minimize any charge-sloshing instabilities and accelerate convergence. The computational demands of the algorithm do scale as the particle number, permitting applications to problems involving many inequivalent atoms. Our targeted goal is at least 10,000 inequivalent atoms on a teraflop computer. This report describes our algorithm, some proof-of-principle examples, and the state of the field at the conclusion of this LDRD.
Stabilization Algorithms for Large-Scale Problems
DEFF Research Database (Denmark)
Jensen, Toke Koldborg
2006-01-01
The focus of the project is on stabilization of large-scale inverse problems where structured models and iterative algorithms are necessary for computing approximate solutions. For this purpose, we study various iterative Krylov methods and their abilities to produce regularized solutions. Some … -curve. This heuristic is implemented as part of a larger algorithm which is developed in collaboration with G. Rodriguez and P. C. Hansen. Last, but not least, a large part of the project has, in different ways, revolved around the object-oriented Matlab toolbox MOORe Tools developed by PhD Michael Jacobsen. New …
Venkateswara Rao, B.; Kumar, G. V. Nagesh; Chowdary, D. Deepak; Bharathi, M. Aruna; Patra, Stutee
2017-07-01
This paper furnishes a new metaheuristic algorithm, the Cuckoo Search Algorithm (CSA), for solving the optimal power flow (OPF) problem with minimization of real power generation cost. The CSA is found to be a highly efficient algorithm for solving single-objective optimal power flow problems. Its performance is tested on the IEEE 57-bus test system with real power generation cost minimization as the objective function. The Static VAR Compensator (SVC) is one of the best shunt-connected devices in the Flexible Alternating Current Transmission System (FACTS) family; it is capable of controlling the voltage magnitudes of buses by injecting reactive power into the system. In this paper the SVC is integrated into the CSA-based optimal power flow to optimize the real power generation cost and to improve the voltage profile of the system. The CSA gives better results than the genetic algorithm (GA) both with and without the SVC.
Directory of Open Access Journals (Sweden)
P. Balachennaiah
2016-06-01
This paper proposes a firefly algorithm based technique to optimize the control variables for simultaneous optimization of real power loss and the voltage stability limit of the transmission system. Mathematically, this issue can be formulated as a nonlinear equality- and inequality-constrained optimization problem with an objective function integrating both real power loss and voltage stability limit. Transformer taps and a unified power flow controller with its parameters have been included as control variables in the problem formulation. The effectiveness of the proposed algorithm has been tested on the New England 39-bus system. Simulation results obtained with the proposed algorithm are compared with a real-coded genetic algorithm for the single objective of real power loss minimization and the multi-objective of real power loss minimization with voltage stability limit maximization. A classical optimization method, interior point successive linear programming, is also considered to compare the results of the firefly algorithm for the single objective of real power loss minimization. Simulation results confirm the potential of the proposed algorithm in solving optimization problems.
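The firefly metaphor can be sketched on a 1-D objective; this is illustrative only (the paper optimizes a constrained multi-variable power-system objective), and all parameter values are assumptions:

```python
import math, random

def firefly_minimize(f, lo, hi, n=30, iters=60, beta0=1.0, gamma=0.5,
                     alpha=0.2, seed=3):
    # Dimmer fireflies move toward brighter (lower-f) ones with
    # distance-damped attraction plus a random walk that is cooled
    # each iteration so the swarm settles.
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n)]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if f(xs[j]) < f(xs[i]):          # j is brighter than i
                    attract = beta0 * math.exp(-gamma * (xs[i] - xs[j]) ** 2)
                    step = attract * (xs[j] - xs[i]) + alpha * rng.gauss(0.0, 1.0)
                    xs[i] = min(max(xs[i] + step, lo), hi)
        alpha *= 0.95                            # cool the random walk
    return min(xs, key=f)
```

Because the attraction decays with distance, distant fireflies explore while nearby ones exploit, which is the usual argument for the algorithm's robustness to local minima.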
Special Issue on Time Scale Algorithms
2008-01-01
IOP Publishing, Metrologia 45 (2008), doi:10.1088/0026-1394/45/6/E01. This special issue of Metrologia presents selected papers from the Fifth International Time Scale Algorithm Symposium (VITSAS), including some of the ... Paris at the BIPM in 2002 (see Metrologia 40 (3), 2003); 5th Symposium: in San Fernando, Spain at the ROA in 2008. The early symposia were concerned
Directory of Open Access Journals (Sweden)
Sai Ram Inkollu
2016-09-01
This paper presents a novel technique for optimizing FACTS devices so as to maintain voltage stability in power transmission systems. Here, the particle swarm optimization (PSO) algorithm and the adaptive gravitational search algorithm (GSA) are proposed for improving the voltage stability of power transmission systems. In the proposed approach, the PSO algorithm is used for optimizing the gravitational constant and for improving the searching performance of the GSA. Using the proposed technique, the optimal settings of the FACTS devices are determined. The proposed algorithm is an effective method for finding the optimal location and sizing of the FACTS controllers. The optimal locations and power ratings of the FACTS devices are determined based on the voltage collapse rating as well as the power loss of the system. Two FACTS devices are used to evaluate the performance of the proposed algorithm: the unified power flow controller (UPFC) and the interline power flow controller (IPFC). The Newton-Raphson load flow study is used for analyzing the power flow in the transmission system. From the power flow analysis, bus voltages, active power, reactive power, and power loss of the transmission systems are determined. Then, the voltage stability is enhanced while satisfying a given set of operating and physical constraints. The proposed technique is implemented on the MATLAB platform and its performance is evaluated and compared with the existing GA-based GSA hybrid technique. The performance of the proposed technique is tested on the IEEE 30-bus benchmark system using the two FACTS devices, the UPFC and the IPFC.
Large-scale sequential quadratic programming algorithms
Energy Technology Data Exchange (ETDEWEB)
Eldersveld, S.K.
1992-09-01
The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are:
1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having available the full Hessian approximation is studied and alternative estimates are constructed.
2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained.
3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven.
4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem.
An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.
Okou, Francis A.; Akhrif, Ouassima; Dessaint, Louis A.; Bouchard, Derrick
2013-05-01
This paper introduces a decentralized multivariable robust adaptive voltage and frequency regulator to ensure the stability of large-scale interconnected generators. Interconnection parameters (i.e., load, line and transformer parameters) are assumed to be unknown. The proposed design approach requires the reformulation of conventional power system models into a multivariable model with generator terminal voltages as state variables, and excitation and turbine valve inputs as control signals. This model, while suitable for the application of modern control methods, introduces problems with regard to current design techniques for large-scale systems: interconnection terms, which are treated as perturbations, do not meet the common matching-condition assumption. A new adaptive method for a certain class of large-scale systems is therefore introduced that does not require the matching condition. The proposed controller consists of nonlinear inputs that cancel some nonlinearities of the model. Auxiliary controls with linear and nonlinear components are used to stabilize the system. They compensate for unknown parameters of the model by updating both the nonlinear component gains and excitation parameters. The adaptation algorithms use the sigma-modification approach for the auxiliary control gains, and the projection approach for the excitation parameters to prevent estimation drift. The computation of the matrix gain of the controller's linear component requires the solution of an algebraic Riccati equation and helps to solve the perturbation-mismatching problem. A realistic power system is used to assess the proposed controller's performance. The results show that both stability and transient performance are considerably improved following a severe contingency.
Directory of Open Access Journals (Sweden)
Georgios E. Stavroulakis
2013-10-01
This paper presents a numerical study on optimal voltages and optimal placement of piezoelectric actuators for shape control of beam structures. A finite element model, based on Timoshenko beam theory, is developed to characterize the behavior of the structure and the actuators. This model accounts for the electromechanical coupling in the entire beam structure, since the piezoelectric layers are treated as constituent parts of the entire structural system. A hybrid scheme is presented based on the great deluge and genetic algorithms. The hybrid algorithm is implemented to calculate the optimal locations and optimal values of the voltages applied to the piezoelectric actuators glued to the structure, which minimize the error between the achieved and the desired shape. Results from numerical simulations demonstrate the capabilities and efficiency of the developed optimization algorithm in both clamped-free and clamped-clamped beam problems.
Directory of Open Access Journals (Sweden)
Mohammad Marefati
2016-06-01
In this article, an optimized PID controller for a fuel cell is introduced. The PID controller's coefficients are not computed by trial and error; instead, the imperialist competitive algorithm is used. First, the problem is formulated as an optimization problem and solved by the mentioned algorithm, yielding optimized PID coefficients. Then one of the important kinds of fuel cells, the proton exchange membrane fuel cell, is introduced. In order to control the voltage of this fuel cell during load changes, an optimal controller based on the imperialist competitive algorithm is introduced. To apply this algorithm, the problem is written as an optimization problem with objectives and constraints, and the algorithm is used to find the most desirable controller. Simulations confirm the better performance of the proposed PID controller.
Extreme-scale Algorithms and Solver Resilience
Energy Technology Data Exchange (ETDEWEB)
Dongarra, Jack [Univ. of Tennessee, Knoxville, TN (United States)
2016-12-10
A widening gap exists between the peak performance of high-performance computers and the performance achieved by complex applications running on these platforms. Over the next decade, extreme-scale systems will present major new challenges to algorithm development that could amplify this mismatch in such a way that it prevents the productive use of future DOE Leadership computers due to the following; Extreme levels of parallelism due to multicore processors; An increase in system fault rates requiring algorithms to be resilient beyond just checkpoint/restart; Complex memory hierarchies and costly data movement in both energy and performance; Heterogeneous system architectures (mixing CPUs, GPUs, etc.); and Conflicting goals of performance, resilience, and power requirements.
Fuzzy Algorithm for Supervisory Voltage/Frequency Control of a Self Excited Induction Generator
Directory of Open Access Journals (Sweden)
Hussein F. Soliman
2006-01-01
This paper presents the application of a Fuzzy Logic Controller (FLC) to regulate the voltage of a Self Excited Induction Generator (SEIG) driven by Wind Energy Conversion Schemes (WECS). The proposed FLC is used to tune the integral gain (KI) of a Proportional plus Integral (PI) controller. Two types of controls, for the generator and for the wind turbine, using an FLC algorithm, are introduced in this paper. The voltage control is performed to adapt the terminal voltage via self excitation. The frequency control is conducted to adjust the stator frequency through tuning the pitch angle of the WECS blades. Both controllers utilize the fuzzy technique to enhance the overall dynamic performance. The simulation results depict a better dynamic response for the system under study during the starting period and under load variation. The percentage overshoot, rise time and oscillation are better with the fuzzy controller than with the PI controller.
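The idea of an FLC tuning the integral gain KI can be illustrated with a tiny rule base over the normalized voltage-error magnitude. The membership shapes and the three gain levels below are hypothetical illustrations, not the paper's rule base:

```python
def tune_ki(abs_error, ki_low=0.5, ki_mid=2.0, ki_high=5.0):
    """Fuzzy tuning of the PI integral gain KI from the normalized
    voltage-error magnitude in [0, 1]. Memberships and gain levels are
    hypothetical, for illustration only."""
    e = max(0.0, min(1.0, abs_error))
    # triangular memberships for "small", "medium", "large" error
    small = max(0.0, 1.0 - 2.0 * e)              # peaks at e = 0
    medium = max(0.0, 1.0 - abs(2.0 * e - 1.0))  # peaks at e = 0.5
    large = max(0.0, 2.0 * e - 1.0)              # peaks at e = 1
    total = small + medium + large               # always 1 on [0, 1]
    # defuzzify: weighted average of the rule consequents
    return (small * ki_low + medium * ki_mid + large * ki_high) / total
```

The returned KI would then be fed into an ordinary PI law, so the integral action is strong for large voltage errors and gentle near the setpoint.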
Directory of Open Access Journals (Sweden)
N. H. Shamsudin
2014-05-01
Power loss issues have persisted over the past few decades with the high demand for electrical energy in developing countries. The radial distribution network configuration is extensively used in highly populated areas to ensure continuity of power supply in the event of a fault. This paper proposes a heuristic genetic algorithm known as SIGA (Selection Improvement in Genetic Algorithm) that considers the genetic operator probabilities as well as the progression of switch adjustment in Distribution Network Reconfiguration (DNR) while satisfying the parameter constraints. The SIGA algorithm was applied to the IEEE 33-bus distribution system to select five tie switches. The power losses were ranked according to the minimum values and the voltage profile improvement obtained by the proposed algorithm. The results show that SIGA performs better than GA, giving lower power losses.
DC Voltage Droop Control Implementation in the AC/DC Power Flow Algorithm: Combinational Approach
DEFF Research Database (Denmark)
Akhter, F.; Macpherson, D.E.; Harrison, G.P.
2015-01-01
In this paper, a combinational AC/DC power flow approach is proposed for the solution of the combined AC/DC network. The unified power flow approach is extended to include DC voltage droop control. In VSC-based MTDC grids, DC droop control is regarded as more advantageous in terms of operational flexibility, as more than one VSC station controls the DC link voltage of the MTDC system. This model enables the study of the effects of DC droop control on the power flows of the combined AC/DC system for steady-state studies after VSC station outages or transient conditions, without needing to use its complete dynamic model. Further, the proposed approach can be extended to include multiple AC and DC grids for combined AC/DC power flow analysis. The algorithm is implemented by modifying the MATPOWER-based MATACDC program and the results show that the algorithm works efficiently.
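The droop characteristic being embedded in the power flow is, at its core, a linear relation between each station's DC-voltage deviation and its power order. A minimal sketch, with per-unit values and the sign convention assumed for illustration:

```python
def droop_power(v_dc, v_ref, p_ref, k_droop):
    """DC voltage droop: the station deviates from its scheduled power
    p_ref in proportion to the DC-voltage error. Per-unit quantities and
    sign convention are assumptions for this sketch."""
    return p_ref - (v_dc - v_ref) / k_droop

def station_setpoints(v_dc, stations):
    """Power orders of all droop-controlled VSC stations at a common
    DC-link voltage; stations is a list of (v_ref, p_ref, k_droop)."""
    return [droop_power(v_dc, vr, pr, k) for vr, pr, k in stations]
```

In the combined AC/DC power flow, each droop-controlled station contributes one such equation instead of a fixed-power or fixed-voltage constraint, which is why several stations can share the duty of holding the DC voltage.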
A Primary and Backup Protection Algorithm based on Voltage and Current Measurements for HVDC Grids
Abedrabbo, Mudar; Van Hertem, Dirk
2016-01-01
DC grids are susceptible to DC-side faults, which lead to a rapid rise of the DC-side currents. DC-side faults should be detected in a very short time, before the fault currents cause damage to the system or equipment, e.g., exceed the maximum interruptible limits of the DC circuit breakers. This paper presents a primary and backup protection algorithm based on measurement data. The proposed algorithm relies on local voltage and current measurements to detect and identify various kinds of faults in the HVDC grid...
EDITORIAL: Special issue on time scale algorithms
Matsakis, Demetrios; Tavella, Patrizia
2008-12-01
This special issue of Metrologia presents selected papers from the Fifth International Time Scale Algorithm Symposium (VITSAS), including some of the tutorials presented on the first day. The symposium was attended by 76 persons, from every continent except Antarctica, by students as well as senior scientists, and hosted by the Real Instituto y Observatorio de la Armada (ROA) in San Fernando, Spain, whose staff further enhanced their nation's high reputation for hospitality. Although a timescale can be simply defined as a weighted average of clocks, whose purpose is to measure time better than any individual clock, timescale theory has long been and continues to be a vibrant field of research that has both followed and helped to create advances in the art of timekeeping. There is no perfect timescale algorithm, because every one embodies a compromise involving user needs. Some users wish to generate a constant frequency, perhaps not necessarily one that is well-defined with respect to the definition of a second. Other users might want a clock which is as close to UTC or a particular reference clock as possible, or perhaps wish to minimize the maximum variation from that standard. In contrast to the steered timescales that would be required by those users, other users may need free-running timescales, which are independent of external information. While no algorithm can meet all these needs, every algorithm can benefit from some form of tuning. The optimal tuning, and even the optimal algorithm, can depend on the noise characteristics of the frequency standards, or of their comparison systems, the most precise and accurate of which are currently Two Way Satellite Time and Frequency Transfer (TWSTFT) and GPS carrier phase time transfer. The interest in time scale algorithms and its associated statistical methodology began around 40 years ago when the Allan variance appeared and when the metrological institutions started realizing ensemble atomic time using more than
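The editorial's working definition of a timescale as a weighted average of clocks can be made concrete with a toy ensemble average. Real algorithms such as AT1 or ALGOS add frequency prediction, adaptive weighting and outlier rejection; this sketch only shows the weighting step:

```python
def ensemble_time(readings, variances):
    """Toy ensemble timescale: inverse-variance weighted average of clock
    readings (offsets from a common reference). Real timescale algorithms
    also predict each clock's frequency and guard against outliers."""
    weights = [1.0 / v for v in variances]   # better clocks weigh more
    total = sum(weights)
    return sum(w * r for w, r in zip(weights, readings)) / total
```

With equal variances this reduces to the plain mean; a clock with a large variance is nearly ignored, which is the basic sense in which an ensemble "measures time better than any individual clock".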
Directory of Open Access Journals (Sweden)
Yujiao Zeng
2014-01-01
This study presents a novel hybrid multiobjective particle swarm optimization (HMOPSO) algorithm to solve the optimal reactive power dispatch (ORPD) problem. This problem is formulated as a challenging nonlinear constrained multiobjective optimization problem considering three objectives simultaneously: power loss minimization, voltage profile improvement, and voltage stability enhancement. In order to attain better convergence and diversity, this work combines the classical MOPSO with Gaussian probability distribution, chaotic sequences, dynamic crowding distance, and a self-adaptive mutation operator. Moreover, multiple effective strategies, such as a mixed-variable handling approach, a constraint handling technique, and stopping criteria, are employed. The effectiveness of the proposed algorithm for solving the ORPD problem is validated on the standard IEEE 30-bus and IEEE 118-bus systems under nominal and contingency states. The obtained results are compared with classical MOPSO, the nondominated sorting genetic algorithm (NSGA-II), the multiobjective evolutionary algorithm based on decomposition (MOEA/D), and other methods recently reported in the literature from the point of view of Pareto fronts, extreme solutions, and multiobjective performance metrics. The numerical results demonstrate the superiority of the proposed HMOPSO in solving the ORPD problem while strictly satisfying all the constraints.
HYBRID EVOLUTIONARY ALGORITHMS FOR FREQUENCY AND VOLTAGE CONTROL IN POWER GENERATING SYSTEM
Directory of Open Access Journals (Sweden)
A. Soundarrajan
2010-10-01
A power generating system has the responsibility to ensure that adequate power is delivered to the load, both reliably and economically. Any electrical system must be maintained at the desired operating level, characterized by nominal frequency and voltage profile, but the ability of the power system to track the load is limited by physical and technical considerations. Hence, power system control is required to maintain a continuous balance between power generation and load demand. The quality of the power supply is affected by continuous and random changes in load during the operation of the power system. The Load Frequency Controller (LFC) and Automatic Voltage Regulator (AVR) play an important role in maintaining constant frequency and voltage in order to ensure the reliability of electric power. The fixed-gain PID controllers used for this application fail to perform under varying load conditions and hence provide poor dynamic characteristics, with large settling time, overshoot and oscillations. In this paper, Evolutionary Algorithms (EAs) such as Enhanced Particle Swarm Optimization (EPSO), Multi Objective Particle Swarm Optimization (MOPSO), and Stochastic Particle Swarm Optimization (SPSO) are proposed to overcome the premature convergence problem in a standard PSO. These algorithms reduce transient oscillations and also increase computational efficiency. Simulation results demonstrate that the proposed controllers adapt appropriately to varying loads and hence provide better performance characteristics with respect to settling time, oscillations and overshoot.
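The fixed-gain PID loop whose limitations the abstract discusses can be sketched against a first-order lag plant standing in for the AVR dynamics. The gains, plant time constant and step count below are illustrative only, not the paper's generator model:

```python
def simulate_pid(kp, ki, kd, setpoint=1.0, steps=2000, dt=0.01, tau=0.1):
    """Discrete fixed-gain PID driving a first-order lag plant
    dy/dt = (u - y)/tau, a simple stand-in for an AVR loop."""
    y, integ, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - y
        integ += err * dt                    # integral of the error
        deriv = (err - prev_err) / dt        # finite-difference derivative
        prev_err = err
        u = kp * err + ki * integ + kd * deriv
        y += dt * (u - y) / tau              # explicit-Euler plant update
    return y
```

An EA-based tuner of the kind the paper proposes would wrap a search loop around `simulate_pid`, scoring each (kp, ki, kd) candidate by settling time and overshoot instead of using fixed gains.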
Dynamic Consensus Algorithm based Distributed Voltage Harmonic Compensation in Islanded Microgrids
DEFF Research Database (Denmark)
Meng, Lexuan; Tang, Fen; Firoozabadi, Mehdi Savaghebi
2015-01-01
In islanded microgrids, the existence of nonlinear electric loads may cause voltage distortion and affect the performance of power quality sensitive equipment. Thanks to the prevalent utilization of interfacing power electronic devices and information/communication technologies, distributed generators can be employed as compensators to enhance the power quality on the consumer side. However, conventional centralized control is facing obstacles because of the distributed fashion of generation and consumption. Accordingly, this paper proposes a consensus algorithm based distributed hierarchical control to realize voltage harmonic compensation and accurate current sharing in multi-bus islanded microgrids. Low order harmonic components are considered as examples in this paper. Harmonic current sharing is also realized among distributed generators by applying the proposed methods. Plug...
Multitree Algorithms for Large-Scale Astrostatistics
March, William B.; Ozakin, Arkadas; Lee, Dongryeol; Riegel, Ryan; Gray, Alexander G.
2012-03-01
Common astrostatistical operations. A number of common "subroutines" occur over and over again in the statistical analysis of astronomical data. Some of the most powerful, and computationally expensive, of these additionally share the common trait that they involve distance comparisons between all pairs of data points, or in some cases all triplets or worse. These include:
* All Nearest Neighbors (AllNN): for each query point in a dataset, find the k-nearest neighbors among the points in another dataset; naively O(N^2) to compute, for O(N) data points.
* n-Point Correlation Functions: the main spatial statistic used for comparing two datasets in various ways; naively O(N^2) for the 2-point correlation, O(N^3) for the 3-point correlation, etc.
* Euclidean Minimum Spanning Tree (EMST): the basis for "single-linkage hierarchical clustering," the main procedure for generating a hierarchical grouping of the data points at all scales, aka "friends-of-friends"; naively O(N^2).
* Kernel Density Estimation (KDE): the main method for estimating the probability density function of the data, nonparametrically (i.e., with virtually no assumptions on the functional form of the pdf); naively O(N^2).
* Kernel Regression: a powerful nonparametric method for regression, or predicting a continuous target value; naively O(N^2).
* Kernel Discriminant Analysis (KDA): a powerful nonparametric method for classification, or predicting a discrete class label; naively O(N^2).
(Note that the "two datasets" may in fact be the same dataset, as in two-point autocorrelations, or the so-called monochromatic AllNN problem, or the leave-one-out cross-validation needed in kernel estimation.) The need for fast algorithms for such analysis subroutines is particularly acute in the modern age of exploding dataset sizes in astronomy. The Sloan Digital Sky Survey yielded hundreds of millions of objects, and the next generation of instruments such as the Large Synoptic Survey Telescope will yield roughly
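The naive O(N^2) AllNN baseline that the multitree methods accelerate looks like this in outline (a brute-force scan, shown only to fix ideas):

```python
def all_nearest_neighbors(queries, refs, k=1):
    """Naive AllNN: for every query point, scan all reference points and
    keep the indices of the k closest. This is the O(N^2) baseline that
    tree-based (multitree) algorithms are designed to beat."""
    out = []
    for q in queries:
        order = sorted(range(len(refs)),
                       key=lambda i: sum((q[c] - refs[i][c]) ** 2
                                         for c in range(len(q))))
        out.append(order[:k])
    return out
```

Tree-based methods get the same answers by pruning whole nodes of a space-partitioning tree whenever a bounding-box distance proves no point inside can be among the k nearest.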
Directory of Open Access Journals (Sweden)
R. Kalaivani
2012-01-01
Problem statement: Voltage instability and voltage collapse have been considered a major threat to present power system networks due to their stressed operation. It is very important to analyze the power system with respect to voltage stability. Approach: A Flexible AC Transmission System (FACTS) is an alternating current transmission system incorporating power electronic-based and other static controllers to enhance controllability and increase power transfer capability. A FACTS device in a power system improves the voltage stability, reduces the power loss and also improves the loadability of the system. Results: This study investigates the application of Particle Swarm Optimization (PSO) and a Genetic Algorithm (GA) to find the optimal location and rated value of a Static Var Compensator (SVC) device to minimize the voltage stability index, total power loss, load voltage deviation, cost of generation and cost of FACTS devices, so as to improve voltage stability in the power system. The optimal location and rated value of the SVC device have been found for different loading scenarios (115%, 125% and 150% of normal loading) using PSO and GA. Conclusion/Recommendations: It is observed from the results that the voltage stability margin is improved, the voltage profile of the power system is raised, load voltage deviation is reduced and real power losses are also reduced by optimally locating the SVC device in the power system. The proposed algorithm is verified on the IEEE 14-bus, IEEE 30-bus and IEEE 57-bus systems.
Weston, Joseph; Waintal, Xavier
2016-04-01
We report on a "source-sink" algorithm which allows one to calculate time-resolved physical quantities from a general nanoelectronic quantum system (described by an arbitrary time-dependent quadratic Hamiltonian) connected to infinite electrodes. Although mathematically equivalent to the nonequilibrium Green's function formalism, the approach is based on the scattering wave functions of the system. It amounts to solving a set of generalized Schrödinger equations that include an additional "source" term (coming from the time-dependent perturbation) and an absorbing "sink" term (the electrodes). The algorithm execution time scales linearly with both system size and simulation time, allowing one to simulate large systems (currently around 10^6 degrees of freedom) and/or long times (currently around 10^5 times the smallest time scale of the system). As an application we calculate the current-voltage characteristics of a Josephson junction for both short and long junctions, and recover the multiple Andreev reflection physics. We also discuss two intrinsically time-dependent situations: the relaxation time of a Josephson junction after a quench of the voltage bias, and the propagation of voltage pulses through a Josephson junction. In the case of a ballistic, long Josephson junction, we predict that a fast voltage pulse creates an oscillatory current whose frequency is controlled by the Thouless energy of the normal part. A similar effect is found for short junctions; a voltage pulse produces an oscillating current which, in the absence of an electromagnetic environment, does not relax.
Genetic Algorithm Used for Load Shedding Based on Sensitivity to Enhance Voltage Stability
Titare, L. S.; Singh, P.; Arya, L. D.
2014-12-01
This paper presents an algorithm to calculate optimum load shedding with voltage stability considerations, based on the sensitivity of a proximity indicator, using a genetic algorithm (GA). A Schur's-inequality-based proximity indicator of the load flow Jacobian has been selected, which indicates the system state. The load flow Jacobian of the system is obtained using the continuation power flow method. If reactive power and active power rescheduling are exhausted, load shedding is the last line of defense to maintain the operational security of the system. Load buses for shedding are selected on the basis of the sensitivity of the proximity indicator; the load bus with the largest sensitivity is selected first. The proposed algorithm predicts the load bus ranking and the optimum load to be shed at each bus. The algorithm accounts for inequality constraints not only under present operating conditions, but also for the predicted next-interval load (with load shedding). The developed algorithm has been implemented on the IEEE 6-bus system. Results have been compared with those obtained using Teaching-Learning-Based Optimization (TLBO), particle swarm optimization (PSO) and its variants.
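The sensitivity-driven bus ranking and shedding described above can be sketched as follows. In a real study the sensitivity values would come from the load-flow Jacobian, and the linearized response of the proximity indicator to shed load is an assumption of this sketch (the paper uses a GA rather than this greedy rule):

```python
def rank_buses(sensitivities):
    """Rank load buses by descending |d(proximity indicator)/d(load)|.
    Input: list of (bus_id, sensitivity); values here are hypothetical."""
    return sorted(sensitivities, key=lambda kv: abs(kv[1]), reverse=True)

def shed_until(margin_needed, ranked, max_shed_per_bus):
    """Greedy sketch: walk the ranked list, shedding up to a per-bus cap,
    assuming the indicator improves linearly with shed load."""
    plan, remaining = [], margin_needed
    for bus, sens in ranked:
        if remaining <= 0.0:
            break
        shed = min(max_shed_per_bus, remaining / abs(sens))
        plan.append((bus, shed))
        remaining -= abs(sens) * shed
    return plan
```

The GA in the paper searches over such shedding plans directly, with the ranking steering which buses enter the candidate solutions first.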
Line edge roughness induced threshold voltage variability in nano-scale FinFETs
Rathore, Rituraj Singh; Sharma, Rajneesh; Rana, Ashwani K.
2017-03-01
In aggressively scaled devices, FinFET technology has become more prone to line edge roughness (LER) induced threshold voltage variability. As a result, nano-scale FinFET structures face intrinsic statistical fluctuations in the threshold voltage. This paper describes the LER-induced threshold voltage variability of a 14 nm underlap FinFET using 3-D numerical simulations. It is concluded that the percentage threshold voltage (VTH) fluctuation, referenced with respect to a rectangular FinFET, can reach 8.76%. This work also investigates the impact of other sources of variability, such as random dopant fluctuation, work function variation and oxide thickness variation, on the threshold voltage.
Directory of Open Access Journals (Sweden)
S. Sakthivel
2013-07-01
Modern power system networks are operated under highly stressed conditions and there is a risk of voltage instability problems owing to increased load demand. A power system needs a sufficient voltage stability margin for secure operation. In this study, the SVC parameters of location and size, along with generator bus voltages and transformer tap settings, are considered as control parameters for voltage stability limit improvement by minimizing loss and voltage deviation. The control parameters are varied in a coordinated manner for better results. The line-based LQP voltage stability indicator is used for voltage stability assessment. The nature-inspired metaheuristic Big Bang-Big Crunch (BB-BC) algorithm is exploited for optimization of the control variables and its performance is compared with that of the PSO algorithm. The effectiveness of the proposed algorithm is tested on the standard IEEE 30-bus system under normal and N-1 line outage contingency conditions. The simulation results demonstrate the good performance of the new algorithm.
Efficient implementation of the adaptive scale pixel decomposition algorithm
Zhang, L; Rau, U; Zhang, M
2016-01-01
Context. The most popular algorithms in use to remove the effects of a telescope's point spread function (PSF) in radio astronomy are variants of the CLEAN algorithm. Most of these algorithms model the sky brightness using the delta-function basis, which results in undesired artefacts when used to image extended emission. The adaptive scale pixel decomposition (Asp-Clean) algorithm models the sky brightness on a scale-sensitive basis and thus gives a significantly better imaging performance when imaging fields that contain both resolved and unresolved emission. Aims. However, the runtime cost of Asp-Clean is higher than that of scale-insensitive algorithms. In this paper, we identify the most expensive step in the original Asp-Clean algorithm and present an efficient implementation of it, which significantly reduces the computational cost while keeping the imaging performance comparable to the original algorithm. The PSF sidelobe levels of modern wide-band telescopes are significantly reduced, allowing us to make...
DEFF Research Database (Denmark)
Pop, Paul; Poulsen, Kåre Harbo; Izosimov, Viacheslav;
2007-01-01
In this paper we present an approach to the scheduling and voltage scaling of low-power fault-tolerant hard real-time applications mapped on distributed heterogeneous embedded systems. Processes and messages are statically scheduled, and we use process re-execution for recovering from multiple tr... ...are satisfied and the energy is minimized. We present a constraint logic programming-based approach which is able to find reliable and schedulable implementations within limited energy and hardware resources. The developed algorithms have been evaluated using extensive experiments.
Scaling Radar Measurements for Advanced Algorithms
2010-05-01
hosted on the DSO. The LVRTS's voltage-controlled oscillator (VCO) generates the local oscillator (LO) waveform, with frequency ω0 ≈ 2π(9.8 × 10^9), to... [Fig. 1: connection block diagram for the measurement setup, showing the DSO channels and external trigger, the RSA input, the VCO, the PCC, transmitters TX0/TX1, switches S0/S1, the TRC, and the local area network.]
DEFF Research Database (Denmark)
Sanjeevikumar, Padmanaban; Grandi, Gabriele; Wheeler, Patrick
2015-01-01
This paper presents a novel topology for a photovoltaic (PV) power generation system with a simple Maximum Power Point Tracking (MPPT) algorithm in voltage operating mode. The power circuit consists of a high-output-voltage DC-DC boost converter which maximizes the output of the PV panel. Usually traditional... of DC-DC converters for PV integration. Hence, to overcome these difficulties this paper investigates a DC-DC boost converter together with an additional parasitic component within the circuit to provide high output voltages for maximizing PV power generation. The proposed power system circuit... substantially improves the high output voltage with a simple MPPT closed-loop proportional-integral (P-I) controller, and requires only two sensors for feedback. The complete numerical model of the converter circuit along with the PV MPPT algorithm is developed in numerical simulation (Matlab/Simulink) software...
Algorithmic foundation of multi-scale spatial representation
Li, Zhilin
2006-01-01
With the widespread use of GIS, multi-scale representation has become an important issue in the realm of spatial data handling. However, no book to date has systematically tackled the different aspects of this discipline. Emphasizing map generalization, Algorithmic Foundation of Multi-Scale Spatial Representation addresses the mathematical basis of multi-scale representation, specifically, the algorithmic foundation.Using easy-to-understand language, the author focuses on geometric transformations, with each chapter surveying a particular spatial feature. After an introduction to the essential operations required for geometric transformations as well as some mathematical and theoretical background, the book describes algorithms for a class of point features/clusters. It then examines algorithms for individual line features, such as the reduction of data points, smoothing (filtering), and scale-driven generalization, followed by a discussion of algorithms for a class of line features including contours, hydrog...
Approaches for Scaling DBSCAN Algorithm to Large Spatial Databases
Institute of Scientific and Technical Information of China (English)
周傲英; 周水庚; 曹晶; 范晔; 胡运发
2000-01-01
The huge amount of information stored in databases owned by corporations (e.g., retail, financial, telecom) has spurred a tremendous interest in the area of knowledge discovery and data mining. Clustering, in data mining, is a useful technique for discovering interesting data distributions and patterns in the underlying data, and has many application fields, such as statistical data analysis, pattern recognition, image processing, and other business applications. Although researchers have been working on clustering algorithms for decades, and a lot of algorithms for clustering have been developed, there is still no efficient algorithm for clustering very large databases and high-dimensional data. As an outstanding representative of clustering algorithms, the DBSCAN algorithm shows good performance in spatial data clustering. However, for large spatial databases, DBSCAN requires a large volume of memory and can incur substantial I/O costs because it operates directly on the entire database. In this paper, several approaches are proposed to scale the DBSCAN algorithm to large spatial databases. To begin with, a fast DBSCAN algorithm is developed, which considerably speeds up the original DBSCAN algorithm. Then a sampling-based DBSCAN algorithm, a partitioning-based DBSCAN algorithm, and a parallel DBSCAN algorithm are introduced consecutively. Following that, based on the above-proposed algorithms, a synthetic algorithm is also given. Finally, some experimental results are given to demonstrate the effectiveness and efficiency of these algorithms.
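The scaling approaches above all start from the plain DBSCAN procedure. As a point of reference, here is a minimal sketch of that baseline; the toy points and the `eps`/`min_pts` values are illustrative, not from the paper:

```python
import math

def region_query(points, i, eps):
    """Indices of all points within distance eps of points[i] (including i)."""
    return [j for j, q in enumerate(points) if math.dist(points[i], q) <= eps]

def dbscan(points, eps, min_pts):
    """Plain DBSCAN; returns a cluster id per point, -1 meaning noise."""
    labels = [None] * len(points)
    cid = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        neighbours = region_query(points, i, eps)
        if len(neighbours) < min_pts:
            labels[i] = -1              # noise (may later become a border point)
            continue
        labels[i] = cid
        seeds = list(neighbours)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cid         # noise reachable from a core: border point
            if labels[j] is not None:
                continue
            labels[j] = cid
            nj = region_query(points, j, eps)
            if len(nj) >= min_pts:      # j is a core point: expand the cluster
                seeds.extend(nj)
        cid += 1
    return labels
```

The sampling- and partitioning-based variants in the paper reduce how often the neighbourhood query touches the full dataset, which is exactly the memory/I/O bottleneck this naive loop exhibits.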
Scale transform algorithm used in FMCW SAR data processing
Institute of Scientific and Technical Information of China (English)
Jiang Zhihong; Kan Huangfu; Wan Jianwei
2007-01-01
The frequency-modulated continuous-wave (FMCW) synthetic aperture radar (SAR) is a light-weight, cost-effective, high-resolution imaging radar, which is suitable for a small flight platform. The signal model is derived for FMCW SAR used in unmanned aerial vehicle (UAV) reconnaissance and remote sensing. An appropriate algorithm is proposed. The algorithm performs the range cell migration correction (RCMC) for continuous nonchirped raw data using the energy invariance of the scaling of a signal in the scale domain. The azimuth processing is based on step transform without geometric resampling operation. The complete derivation of the algorithm is presented. The algorithm performance is shown by simulation results.
Voltage Profile Improvement in Distribution System Using Particle Swarm Optimization Algorithm
Directory of Open Access Journals (Sweden)
V.Veera Nagireddy
2016-06-01
Full Text Available The traditional method in electric power distribution is to have centralized plants distributing electricity through an extensive distribution network. Distributed generation (DG) provides electric power at a site closer to the customer, which reduces transmission and distribution costs, fossil fuel emissions, capital and maintenance costs, and improves distribution feeder voltage profiles. In the case of small generation systems, the locations and penetration levels of DG are usually not known a priori. In this paper, a Particle Swarm Optimization (PSO) algorithm attempts to calculate the boundaries of randomly placed distributed generators in a distribution network. Simulations are performed using MATLAB, and overall improvements are determined with the estimated DG size and location. The proposed PSO approach is compared with a conventional method on the IEEE 34-bus distribution feeder network.
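As an illustration of the optimizer used above, the following is a generic PSO minimiser; the quadratic test objective stands in for the feeder-loss/voltage-profile objective, whose exact form the abstract does not give, and all parameter values are conventional defaults, not the paper's:

```python
import random

def pso(objective, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm minimiser over the box `bounds` = [(lo, hi), ...]."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal best positions
    pbest_f = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]         # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            f = objective(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f
```

For example, `pso(lambda x: (x[0] - 2) ** 2 + (x[1] + 1) ** 2, [(-5, 5), (-5, 5)])` homes in on the minimiser (2, -1); in the paper the decision variables would instead encode DG size and bus location.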
Badenhorst, Werner; Hanekom, Tania; Hanekom, Johan J
2016-12-01
This study presents the development of an alternative noise current term and novel voltage-dependent current noise algorithm for conductance-based stochastic auditory nerve fibre (ANF) models. ANFs are known to have significant variance in threshold stimulus, which affects temporal characteristics such as latency. This variance is primarily caused by the stochastic behaviour, or microscopic fluctuations, of the node of Ranvier's voltage-dependent sodium channels, whose intensity is a function of membrane voltage. Though easy to implement and low in computational cost, existing current noise models have two deficiencies: they are independent of membrane voltage, and they are unable to inherently determine the noise intensity required to produce in vivo measured discharge probability functions. The proposed algorithm overcomes these deficiencies while maintaining a low computational cost and ease of implementation compared to other conductance- and Markovian-based stochastic models. The algorithm is applied to a Hodgkin-Huxley-based compartmental cat ANF model and validated via comparison of the threshold probability and latency distributions to measured cat ANF data. Simulation results show the algorithm's adherence to in vivo stochastic fibre characteristics such as an exponential relationship between the membrane noise and transmembrane voltage, a negative linear relationship between the log of the relative spread of the discharge probability and the log of the fibre diameter, and a decrease in latency with an increase in stimulus intensity.
DEFF Research Database (Denmark)
Ni, Ronggang; Xu, Dianguo; Blaabjerg, Frede
2017-01-01
relationship with the magnetic field distortion. Position estimation errors caused by higher order harmonic inductances and voltage harmonics generated by the SVPWM are also discussed. Both simulations and experiments are carried out based on a commercial PMSM to verify the superiority of the proposed method...
A capacity scaling algorithm for convex cost submodular flows
Energy Technology Data Exchange (ETDEWEB)
Iwata, Satoru [Kyoto Univ. (Japan)
1996-12-31
This paper presents a scaling scheme for submodular functions. A small but strictly submodular function is added before scaling so that the resulting functions remain submodular. This scaling scheme leads to a weakly polynomial algorithm to solve minimum-cost integral submodular flow problems with separable convex cost functions, provided that an oracle for exchange capacities is available.
Solar Cell Parameters Extraction from a Current-Voltage Characteristic Using Genetic Algorithm
Directory of Open Access Journals (Sweden)
Sanjaykumar J. Patel
2013-05-01
Full Text Available The determination of solar cell parameters is very important for the evaluation of cell performance as well as for extracting the maximum possible output power from the cell. In this paper, we propose a computational binary-coded genetic algorithm (GA) to extract the parameters (I0, Iph and n) of a single-diode model of a solar cell from its current-voltage (I-V) characteristic. The algorithm was implemented using LabVIEW as a programming tool and validated by applying it to an I-V curve synthesized using values reported in the literature. The values of parameters obtained by the GA are in good agreement with the reported values for silicon and plastic solar cells. After the validation of the program, it was used to extract parameters from an experimental I-V characteristic of a 4 × 4 cm2 polycrystalline silicon solar cell measured under 900 W/m2. The I-V characteristic obtained using the GA shows an excellent match with the experimental one.
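A sketch of the parameter-extraction idea follows, hedged in two ways: the paper uses a binary-coded GA in LabVIEW, while this is a real-coded GA in Python; and the data here are synthesized from assumed parameters (Iph = 3 A, I0 = 1e-9 A, n = 1.5), not the paper's measurements:

```python
import math
import random

VT = 0.0257  # thermal voltage near 25 C, in volts (assumed)

def diode_current(v, iph, log_i0, n):
    """Single-diode model: I = Iph - I0 * (exp(V / (n * Vt)) - 1)."""
    return iph - (10.0 ** log_i0) * (math.exp(v / (n * VT)) - 1.0)

def fit_ga(volts, amps, pop=80, gens=150, seed=7):
    """Real-coded GA (tournament selection, blend crossover, Gaussian
    mutation, elitism) minimising squared I-V fitting error."""
    rng = random.Random(seed)
    bounds = [(0.0, 5.0), (-12.0, -6.0), (1.0, 2.0)]  # Iph, log10(I0), n

    def err(ind):
        return sum((diode_current(v, *ind) - i) ** 2 for v, i in zip(volts, amps))

    popn = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=err)
        nxt = popn[:2]                                # elitism: keep the two best
        while len(nxt) < pop:
            a = min(rng.sample(popn, 3), key=err)     # tournament selection
            b = min(rng.sample(popn, 3), key=err)
            w = rng.random()
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]  # blend crossover
            for d, (lo, hi) in enumerate(bounds):
                if rng.random() < 0.2:                # Gaussian mutation
                    child[d] += rng.gauss(0.0, 0.05 * (hi - lo))
                child[d] = min(max(child[d], lo), hi)
            nxt.append(child)
        popn = nxt
    popn.sort(key=err)
    return popn[0], err(popn[0])
```

Searching over log10(I0) rather than I0 itself keeps the saturation current, which spans several decades, well-scaled for the crossover and mutation operators.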
A Review of a 32×32 Bit Multiprecision Dynamic Voltage Scaling Multiplier with Operands Scheduler
Directory of Open Access Journals (Sweden)
Mrs.S.N.Rawat
2016-02-01
Full Text Available In this paper, we present a Multiprecision (MP) reconfigurable multiplier that incorporates variable precision, parallel processing (PP), razor-based dynamic voltage scaling (DVS), and dedicated MP operands scheduling to provide optimum performance for a variety of operating conditions. All of the building blocks of the proposed reconfigurable multiplier can either work as independent smaller-precision multipliers or work in parallel to perform higher-precision multiplications. Given the user's requirements (e.g., throughput), a dynamic voltage/frequency scaling management unit configures the multiplier to operate at the proper precision and frequency. Adapting to the run-time workload of the targeted application, razor flip-flops together with a dithering voltage unit then configure the multiplier to achieve the lowest power consumption. The single-switch dithering voltage unit and razor flip-flops help to reduce the voltage safety margins and the overhead typically associated with DVS to the lowest level. The large silicon area and power overhead typically associated with reconfigurability features are removed. Finally, the proposed novel MP multiplier can further benefit from an operands scheduler that rearranges the input data so as to determine the optimum voltage and frequency operating conditions for minimum power consumption. This low-power MP multiplier is fabricated in AMIS 0.35-μm technology. Experimental results show that the proposed MP design features a 28.2% and 15.8% reduction in circuit area and power consumption compared with a conventional fixed-width multiplier. When combining this MP design with error-tolerant razor-based DVS, PP, and the proposed novel operands scheduler, a 77.7%–86.3% total power reduction is achieved with a total silicon area overhead as low as 11.1%. This paper successfully demonstrates that an MP architecture can allow more aggressive frequency/supply voltage scaling for improved power efficiency.
Efficient implementation of the adaptive scale pixel decomposition algorithm
Zhang, L.; Bhatnagar, S.; Rau, U.; Zhang, M.
2016-08-01
Context. Most popular algorithms in use to remove the effects of a telescope's point spread function (PSF) in radio astronomy are variants of the CLEAN algorithm. Most of these algorithms model the sky brightness using the delta-function basis, which results in undesired artefacts when used to image extended emission. The adaptive scale pixel decomposition (Asp-Clean) algorithm models the sky brightness on a scale-sensitive basis and thus gives a significantly better imaging performance when imaging fields that contain both resolved and unresolved emission. Aims: However, the runtime cost of Asp-Clean is higher than that of scale-insensitive algorithms. In this paper, we identify the most expensive step in the original Asp-Clean algorithm and present an efficient implementation of it, which significantly reduces the computational cost while keeping the imaging performance comparable to the original algorithm. The PSF sidelobe levels of modern wide-band telescopes are significantly reduced, allowing us to make approximations to reduce the computational cost, which in turn allows for the deconvolution of larger images on reasonable timescales. Methods: As in the original algorithm, scales in the image are estimated through function fitting. Here we introduce an analytical method to model extended emission, and a modified method for estimating the initial values used for the fitting procedure, which ultimately leads to a lower computational cost. Results: The new implementation was tested with simulated EVLA data and the imaging performance compared well with the original Asp-Clean algorithm. Tests show that the current algorithm can recover features at different scales with lower computational cost.
A Multi-Scale Gradient Algorithm Based on Morphological Operators
Institute of Scientific and Technical Information of China (English)
Anonymous
2000-01-01
Watershed transformation is a powerful morphological tool for image segmentation. However, the performance of the image segmentation methods based on watershed transformation depends largely on the algorithm for computing the gradient of the image to be segmented. In this paper, we present a multi-scale gradient algorithm based on morphological operators for watershed-based image segmentation, with effective handling of both step and blurred edges. We also present an algorithm to eliminate the local minima produced by noise and quantization errors. Experimental results indicate that watershed transformation with the algorithms proposed in this paper produces meaningful segmentations, even without a region-merging step.
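A 1-D toy version of a multi-scale morphological gradient conveys the idea: the gradient (dilation minus erosion) is computed with structuring elements of several sizes, each result is smoothed by an erosion at the next smaller size to suppress noise, and the results are averaged. The exact operator chain and 2-D structuring elements in the paper may differ; this is only a sketch:

```python
def dilate(f, r):
    """Flat grayscale dilation of 1-D signal f with half-width r."""
    return [max(f[max(0, i - r): i + r + 1]) for i in range(len(f))]

def erode(f, r):
    """Flat grayscale erosion of 1-D signal f with half-width r."""
    return [min(f[max(0, i - r): i + r + 1]) for i in range(len(f))]

def multiscale_gradient(f, scales=(1, 2, 3)):
    """Average of erosion-smoothed morphological gradients over several scales."""
    acc = [0.0] * len(f)
    for s in scales:
        grad = [d - e for d, e in zip(dilate(f, s), erode(f, s))]
        smoothed = erode(grad, s - 1)   # smooth at the next smaller scale
        acc = [a + g for a, g in zip(acc, smoothed)]
    return [a / len(scales) for a in acc]
```

On a step edge the averaged gradient stays sharply localized at the transition, which is what makes the subsequent watershed segmentation meaningful; a single large-scale gradient alone would blur the edge position.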
Directory of Open Access Journals (Sweden)
Meron Gurkiewicz
2007-08-01
Full Text Available The activity of trans-membrane proteins such as ion channels is the essence of neuronal transmission. The currently most accurate method for determining ion channel kinetic mechanisms is single-channel recording and analysis. Yet, the limitations and complexities in interpreting single-channel recordings discourage many physiologists from using them. Here we show that a genetic search algorithm in combination with a gradient descent algorithm can be used to fit whole-cell voltage-clamp data to kinetic models with a high degree of accuracy. Previously, ion channel stimulation traces were analyzed one at a time, the results of these analyses being combined to produce a picture of channel kinetics. Here the entire set of traces from all stimulation protocols is analyzed simultaneously. The algorithm was initially tested on simulated current traces produced by several Hodgkin-Huxley-like and Markov chain models of voltage-gated potassium and sodium channels. Currents were also produced by simulating levels of noise expected from actual patch recordings. Finally, the algorithm was used for finding the kinetic parameters of several voltage-gated sodium and potassium channel models by matching their results to data recorded from layer 5 pyramidal neurons of the rat cortex in the nucleated outside-out patch configuration. The minimization scheme gives electrophysiologists a tool for reproducing and simulating voltage-gated ion channel kinetics at the cellular level.
Panwar, Ramesh; Rennels, David; Alkalaj, Leon
1993-01-01
A technique is presented for minimizing the power dissipated in a Very Large Scale Integration (VLSI) chip by lowering the operating voltage without any significant penalty in chip throughput, even though low-voltage operation results in slower circuits. Since the overall throughput of a VLSI chip depends on the speed of the critical path(s) in the chip, it may be possible to sustain the throughput rates attained at higher voltages by operating the circuits in the critical path(s) at a high voltage while operating the other circuits at a lower voltage to minimize power dissipation. The interface between gates that operate at different voltages is crucial for low power dissipation, since the interface may have high static current dissipation, negating the gains of low-voltage operation. The design of a voltage level translator that interfaces the low-voltage and high-voltage circuits without any significant static dissipation is presented. Then, the results of the mixed-voltage design using a greedy algorithm on three chips for various operating voltages are presented.
Aranza, M. F.; Kustija, J.; Trisno, B.; Hakim, D. L.
2016-04-01
The PID (Proportional Integral Derivative) controller was invented around 1910, but it is still used in industry today, even though many kinds of modern controllers, such as fuzzy controllers and neural network controllers, are being developed. The performance of a PID controller depends on its proportional gain (Kp), integral gain (Ki) and derivative gain (Kd). These gains can be obtained using methods such as Ziegler-Nichols (ZN), gain-phase margin, root locus, minimum variance and gain scheduling; however, these methods are not optimal for controlling systems that are nonlinear and high-order, and some of them are relatively hard to apply. To overcome these obstacles, a particle swarm optimization (PSO) algorithm is proposed to obtain optimal Kp, Ki and Kd. PSO is proposed because it converges reliably and does not require many iterations. In this research, the PID controller is applied to an AVR (Automatic Voltage Regulator). Based on analysis of the transient response, root-locus stability and frequency response, the performance of the PSO-tuned PID controller is better than that obtained with Ziegler-Nichols.
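To make the roles of Kp, Ki and Kd concrete, here is a toy discrete PID loop closed around a first-order lag plant; the plant is a stand-in for the AVR (whose real model, and the PSO tuning loop, are not reproduced here), and all numbers are illustrative:

```python
def simulate_pid(kp, ki, kd, setpoint=1.0, tau=0.5, dt=0.01, steps=1000):
    """Step response of a toy first-order plant (gain 1, time constant tau)
    under a discrete PID controller; returns the output trajectory."""
    x = 0.0                       # plant output, e.g. per-unit terminal voltage
    integral = 0.0
    prev_err = setpoint - x       # avoids a derivative kick at the first step
    ys = []
    for _ in range(steps):
        err = setpoint - x
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv   # PID control law
        prev_err = err
        x += dt * (-x + u) / tau                    # forward-Euler plant update
        ys.append(x)
    return ys
```

A PSO tuner in the spirit of the paper would simply wrap `simulate_pid` in a scalar objective (for instance integral absolute error of the step response) and search over the (Kp, Ki, Kd) box.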
Directory of Open Access Journals (Sweden)
R.KALAIVANI
2016-10-01
Full Text Available Due to a huge increase in power demand, the power system network faces major problems such as voltage instability and voltage collapse. To overcome these problems, Flexible AC Transmission System (FACTS) devices have been implemented in power systems. By placing these devices in suitable locations, the power system can be operated far away from the instability point. In this paper, the optimal locations and ratings of FACTS devices such as the Thyristor Controlled Series Capacitor (TCSC), Static VAR Compensator (SVC) and Unified Power Flow Controller (UPFC) are determined using a Genetic Algorithm (GA). A multi-objective optimization problem is formulated with the consideration of minimizing the voltage stability index, real power loss and generator cost. An evolutionary algorithm such as GA is a population-based search method capable of searching for multiple solutions concurrently in a single run and providing an optimal solution. It is observed from the results that the voltage stability index, real power loss and generator cost are reduced by optimally locating the FACTS devices in the power system. IEEE 14-bus and IEEE 57-bus systems are used to demonstrate the effectiveness of the proposed algorithm.
DEFF Research Database (Denmark)
Gloos, K.; Utko, P.; Aagesen, M.;
2006-01-01
We investigate the I(V) characteristics (current versus bias voltage) of side-gated quantum-point contacts, defined in GaAs/AlxGa1-xAs heterostructures. These point contacts are operated in the closed-channel regime, that is, at fixed gate voltages below zero-bias pinch-off for conductance. Our...... analysis is based on a single scaling factor, extracted from the experimental I(V) characteristics. For both polarities, this scaling factor transforms the change of bias voltage into a change of electron energy. The latter is determined with respect to the top of the potential barrier of the contact....... Such a built-in energy-voltage calibration allows us to distinguish between the different contributions to the electron transport across the pinched-off contact due to thermal activation or quantum tunneling. The first involves the height of the barrier, and the latter also its length. In the model that we...
Modified Frequency Scaling Algorithm for FMCW SAR Data Processing
Institute of Scientific and Technical Information of China (English)
Jiang Zhihong; Huang Fukan; Wan Jianwei; Cheng Zhu
2007-01-01
This paper presents a modified frequency scaling algorithm for frequency-modulated continuous-wave synthetic aperture radar (FMCW SAR) data processing. The relative motion between radar and target in FMCW SAR, both during reception and between transmission and reception, introduces serious dilation in the received signal. The dilation can cause serious distortions in images reconstructed using conventional signal processing methods. The received signal is derived and its form in the range-Doppler domain is given. The relation between the phase resulting from antenna motion and the azimuth frequency is analyzed. The modified frequency scaling algorithm is proposed to process the received signal with serious dilation, and it can effectively eliminate the impact of the dilation. The algorithm's performance is shown by simulation results.
Linear-scaling and parallelizable algorithms for stochastic quantum chemistry
Booth, George H; Alavi, Ali
2013-01-01
For many decades, quantum chemical method development has been dominated by algorithms which involve increasingly complex series of tensor contractions over one-electron orbital spaces. Procedures for their derivation and implementation have evolved to require the minimum amount of logic and rely heavily on computationally efficient library-based matrix algebra and optimized paging schemes. In this regard, the recent development of exact stochastic quantum chemical algorithms to reduce computational scaling and memory overhead requires a contrasting algorithmic philosophy, but one which when implemented efficiently can often achieve higher accuracy/cost ratios with small random errors. Additionally, they can exploit the continuing trend for massive parallelization which hinders the progress of deterministic high-level quantum chemical algorithms. In the Quantum Monte Carlo community, stochastic algorithms are ubiquitous but the discrete Fock space of quantum chemical methods is often unfamiliar, and the metho...
Scale invariance of entanglement dynamics in Grover's quantum search algorithm
Rossi, M; Macchiavello, C
2012-01-01
We calculate the amount of entanglement of the multiqubit quantum states employed in the Grover algorithm, by following its dynamics at each step of the computation. We show that genuine multipartite entanglement is always present. Remarkably, the dynamics of any type of entanglement as well as of genuine multipartite entanglement is independent of the number $n$ of qubits for large $n$, thus exhibiting a scale invariance property. We also investigate criteria for efficient simulatability in the context of Grover's algorithm.
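Because a single marked item keeps the Grover state in a two-dimensional subspace, the algorithm's dynamics can be tracked with just two amplitudes. The sketch below reproduces the textbook amplitude recurrence (oracle sign flip, then inversion about the mean); it computes the success probability only, not the entanglement measures studied in the paper:

```python
import math

def grover_success(n_qubits, iterations):
    """Probability of measuring the single marked state after a number of
    Grover iterations, tracked via the two distinct amplitudes."""
    N = 2 ** n_qubits
    a = 1 / math.sqrt(N)        # amplitude of the marked state
    b = 1 / math.sqrt(N)        # common amplitude of each unmarked state
    for _ in range(iterations):
        a = -a                                  # oracle: flip marked amplitude
        mean = (a + (N - 1) * b) / N
        a, b = 2 * mean - a, 2 * mean - b       # diffusion: invert about mean
    return a * a
```

Running it with roughly (pi / 4) * sqrt(N) iterations drives the success probability close to 1, the well-known optimal iteration count for a single marked item.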
AN ADVANCED SCALE INVARIANT FEATURE TRANSFORM ALGORITHM FOR FACE RECOGNITION
Mohammad Mohsen Ahmadinejad; Elizabeth Sherly
2016-01-01
In computer vision, the Scale-Invariant Feature Transform (SIFT) algorithm is widely used to describe and detect local features in images due to its excellent performance. For face recognition, however, the implementation of SIFT is complicated by false key-points detected in irrelevant portions of the face image, such as hair style and other background details. This paper proposes an algorithm for face recognition to improve recognition accuracy by selecting relevant SIFT key-points only th...
Wong, Ling Ai; Shareef, Hussain; Mohamed, Azah; Ibrahim, Ahmad Asrul
2014-01-01
This paper presents the application of enhanced opposition-based firefly algorithm in obtaining the optimal battery energy storage systems (BESS) sizing in photovoltaic generation integrated radial distribution network in order to mitigate the voltage rise problem. Initially, the performance of the original firefly algorithm is enhanced by utilizing the opposition-based learning and introducing inertia weight. After evaluating the performance of the enhanced opposition-based firefly algorithm (EOFA) with fifteen benchmark functions, it is then adopted to determine the optimal size for BESS. Two optimization processes are conducted where the first optimization aims to obtain the optimal battery output power on hourly basis and the second optimization aims to obtain the optimal BESS capacity by considering the state of charge constraint of BESS. The effectiveness of the proposed method is validated by applying the algorithm to the 69-bus distribution system and by comparing the performance of EOFA with conventional firefly algorithm and gravitational search algorithm. Results show that EOFA has the best performance comparatively in terms of mitigating the voltage rise problem.
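For orientation, a minimal firefly-algorithm minimiser, without the opposition-based learning and inertia-weight enhancements that distinguish EOFA, looks like the sketch below; the sphere test function stands in for the BESS sizing objective, and all coefficients are generic textbook choices rather than the paper's:

```python
import math
import random

def firefly(objective, bounds, n=15, iters=60, beta0=1.0, gamma=0.1, alpha=0.1, seed=3):
    """Minimal firefly-algorithm minimiser; brightness is the negative objective,
    so each firefly moves towards every brighter (lower-objective) firefly."""
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    F = [objective(x) for x in X]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if F[j] < F[i]:                       # j is brighter: attract i
                    r2 = sum((X[i][d] - X[j][d]) ** 2 for d in range(dim))
                    beta = beta0 * math.exp(-gamma * r2)   # attractiveness decays
                    for d in range(dim):
                        lo, hi = bounds[d]
                        step = alpha * (rng.random() - 0.5) * (hi - lo)
                        X[i][d] += beta * (X[j][d] - X[i][d]) + step
                        X[i][d] = min(max(X[i][d], lo), hi)
                    F[i] = objective(X[i])
    best = min(range(n), key=lambda i: F[i])
    return X[best], F[best]
```

Note that the current best firefly never moves (no brighter neighbour exists), so the incumbent is implicitly preserved; EOFA's opposition-based initial sampling and adaptive inertia weight are refinements on top of exactly this loop.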
Kernel Projection Algorithm for Large-Scale SVM Problems
Institute of Scientific and Technical Information of China (English)
王家琦; 陶卿; 王珏
2002-01-01
Support Vector Machine (SVM) has become a very effective method in statistical machine learning, and it has been proved that training an SVM amounts to solving the Nearest Point pair Problem (NPP) between two disjoint closed convex sets. Later Keerthi pointed out that it is difficult to apply classical geometric algorithms directly to SVM and so designed a new geometric algorithm for SVM. In this article, a new algorithm for geometrically solving SVM, the Kernel Projection Algorithm, is presented based on the theorem on fixed points of projection mappings. This new algorithm makes it easy to apply classical geometric algorithms to solving SVM and is more understandable than Keerthi's. Experiments show that the new algorithm can also handle large-scale SVM problems. Geometric algorithms for SVM, such as Keerthi's algorithm, require that the two closed convex sets be disjoint, and otherwise the algorithms are meaningless. In this article, this requirement is guaranteed in theory by using the theoretical result on universal kernel functions.
DEFF Research Database (Denmark)
Tafti, Hossein Dehghani; Maswood, Ali Iftekhar; Pou, Josep
2016-01-01
Due to the high penetration of the installed distributed generation units in the power system, the injection of reactive power is required for the medium-scale and large-scale grid-connected photovoltaic power plants (PVPPs). Because of the current limitation of the grid-connected inverter......, the injected active power should be reduced during voltage sags. In order to obtain a constant dc-link voltage in a multi-string PVPP, the extracted power from PV strings should be equal to the injected power to the grid in all operating conditions (excluding power losses). Therefore, the extracted power of PV...
Low-power operation using self-timed circuits and adaptive scaling of the supply voltage
DEFF Research Database (Denmark)
Nielsen, Lars Skovby; Niessen, C.; Sparsø, Jens
1994-01-01
Recent research has demonstrated that for certain types of applications like sampled audio systems, self-timed circuits can achieve very low power consumption, because unused circuit parts automatically turn into a stand-by mode. Additional savings may be obtained by combining the self-timed circuits with a mechanism that adaptively adjusts the supply voltage to the smallest possible, while maintaining the performance requirements. This paper describes such a mechanism, analyzes the possible power savings, and presents a demonstrator chip that has been fabricated and tested. The idea of voltage scaling has been used previously in synchronous circuits, and the contributions of the present paper are: 1) the combination of supply scaling and self-timed circuitry which has some unique advantages, and 2) the thorough analysis of the power savings that are possible using this technique.
Shonin, O. B.; Kryltcov, S. B.; Novozhilov, N. G.
2017-02-01
The paper considers a fast method for extracting the symmetrical components of unbalanced voltages caused by faults in the electric grids of mechanical engineering facilities. The proposed approach is based on an iterative algorithm that checks whether a set of at least three discrete voltage measurements belongs to a specific elliptical trajectory of the voltage space vector. Using a classification of unbalanced faults in the grid and the results of decomposing the voltages into symmetrical components, the algorithm is capable of discriminating between one-phase, two-phase and three-phase voltage sags. The paper concludes that simulation results in the Simulink environment have proved the correctness of the proposed algorithm for detecting and identifying unbalanced voltage sags in an electrical grid, under the condition that the grid is free from high-order harmonics.
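The decomposition the algorithm relies on is the standard Fortescue transform of three phase phasors into zero-, positive- and negative-sequence components; the iterative space-vector ellipse fitting in the paper is a way of obtaining these phasors from raw samples, which is not reproduced here:

```python
import cmath

ALPHA = cmath.exp(2j * cmath.pi / 3)   # the 120-degree rotation operator "a"

def symmetrical_components(va, vb, vc):
    """Zero-, positive- and negative-sequence phasors (Fortescue transform)
    of three complex phase-voltage phasors."""
    v0 = (va + vb + vc) / 3
    v1 = (va + ALPHA * vb + ALPHA**2 * vc) / 3
    v2 = (va + ALPHA**2 * vb + ALPHA * vc) / 3
    return v0, v1, v2
```

A balanced set maps entirely onto the positive sequence, while a single-phase sag (say phase a dropping to 0.5 pu) produces nonzero zero- and negative-sequence components; it is exactly this signature that lets the algorithm distinguish one-phase, two-phase and three-phase sags.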
A miniature high-efficiency fully digital adaptive voltage scaling buck converter
Li, Hangbiao; Zhang, Bo; Luo, Ping; Zhen, Shaowei; Liao, Pengfei; He, Yajuan; Li, Zhaoji
2015-09-01
A miniature high-efficiency fully digital adaptive voltage scaling (AVS) buck converter is proposed in this paper. Pulse skip modulation with flexible duty cycle (FD-PSM) is used in the AVS controller, which simplifies the circuit architecture (<170 gates) and greatly saves die area and power consumption. The converter is implemented in a 0.13-μm one-poly-eight-metal (1P8M) complementary metal oxide semiconductor process, and the active on-chip area of the controller is only 0.003 mm2. The measurement results show that when the operating frequency of the digital load scales dynamically from 25.6 MHz to 112.6 MHz, its supply voltage can be scaled adaptively from 0.84 V to 1.95 V. The controller dissipates only 17.2 μW while the supply voltage of the load is 1 V and the operating frequency is 40 MHz.
Scalable Virtual Network Mapping Algorithm for Internet-Scale Networks
Yang, Qiang; Wu, Chunming; Zhang, Min
The proper allocation of network resources from a common physical substrate to a set of virtual networks (VNs) is one of the key technical challenges of network virtualization. While a variety of state-of-the-art algorithms have been proposed in an attempt to address this issue from different facets, the challenge still remains in the context of large-scale networks, as the existing solutions mainly perform in a centralized manner which requires maintaining the overall and up-to-date information of the underlying substrate network. This implies restricted scalability and computational efficiency when the network scale becomes large. This paper tackles the virtual network mapping problem and proposes a novel hierarchical algorithm in conjunction with a substrate network decomposition approach. By appropriately transforming the underlying substrate network into a collection of sub-networks, the hierarchical virtual network mapping algorithm can be carried out through a global virtual network mapping algorithm (GVNMA) and a local virtual network mapping algorithm (LVNMA) operated in the network central server and within individual sub-networks respectively, with their cooperation and coordination as necessary. The proposed algorithm is assessed against the centralized approaches through a set of numerical simulation experiments for a range of network scenarios. The results show that the proposed hierarchical approach can be about 5-20 times faster for VN mapping tasks than conventional centralized approaches, with acceptable communication overhead between GVNMA and LVNMA for all examined networks, whilst performing almost as well as the centralized solutions.
Efficient algorithms for collaborative decision making for large scale settings
DEFF Research Database (Denmark)
Assent, Ira
2011-01-01
Collaborative decision making is a successful approach in settings where data analysis and querying can be done interactively. In large scale systems with huge data volumes or many users, collaboration is often hindered by impractical runtimes. Existing work on improving collaboration focuses...... to bring about more effective and more efficient retrieval systems that support the users' decision making process. We sketch promising research directions for more efficient algorithms for collaborative decision making, especially for large scale systems....
HMC algorithm with multiple time scale integration and mass preconditioning
Urbach, C.; Jansen, K.; Shindler, A.; Wenger, U.
2006-01-01
We present a variant of the HMC algorithm with mass preconditioning (Hasenbusch acceleration) and multiple time scale integration. We have tested this variant for standard Wilson fermions at β=5.6 and at pion masses ranging from 380 to 680 MeV. We show that in this situation its performance is comparable to the recently proposed HMC variant with domain decomposition as preconditioner. We give an update of the "Berlin Wall" figure, comparing the performance of our variant of the HMC algorithm to other published performance data. Advantages of the HMC algorithm with mass preconditioning and multiple time scale integration are that it is straightforward to implement and can be used in combination with a wide variety of lattice Dirac operators.
Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales
Energy Technology Data Exchange (ETDEWEB)
Xiu, Dongbin [Univ. of Utah, Salt Lake City, UT (United States)
2017-03-03
The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations on extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately, in high dimensional spaces, resolve stochastic problems with limited smoothness, even containing discontinuities.
Non-linear Frequency Scaling Algorithm for FMCW SAR Data
Meta, A.; Hoogeboom, P.; Ligthart, L.P.
2006-01-01
This paper presents a novel approach for processing data acquired with Frequency Modulated Continuous Wave (FMCW) dechirp-on-receive systems by using a non-linear frequency scaling algorithm. The range frequency non-linearity correction, the Doppler shift induced by the continuous motion and the ran
Comparison of in-situ delay monitors for use in Adaptive Voltage Scaling
Pour Aryan, N.; Heiß, L.; Schmitt-Landsiedel, D.; Georgakos, G.; Wirnshofer, M.
2012-09-01
In Adaptive Voltage Scaling (AVS) the supply voltage of digital circuits is tuned according to the circuit's actual operating condition, which enables dynamic compensation of PVTA variations. By exploiting the excessive safety margins added in state-of-the-art worst-case designs, considerable power savings are achieved. In our approach, the operating condition of the circuit is monitored by in-situ delay monitors. This paper presents different designs implementing in-situ delay monitors capable of detecting late but still non-erroneous transitions, called Pre-Errors. The developed Pre-Error monitors are integrated into a 16-bit multiplier test circuit, and the resulting Pre-Error AVS system is modeled by a Markov chain in order to determine the power saving potential of each Pre-Error detection approach.
Lam, Simon K. H.
2017-09-01
A promising direction for improving the sensitivity of a SQUID is to increase its junction's normal resistance value, Rn, as the SQUID modulation voltage scales linearly with Rn. As a first step towards developing a highly sensitive single-layer SQUID, submicron-scale YBCO grain boundary step-edge junctions and SQUIDs with large Rn were fabricated and studied. The step-edge junctions were reduced to submicron scale to increase their Rn values using a focused ion beam (FIB), and measurements of transport properties were performed from 4.3 to 77 K. The FIB-induced deposition layer proves effective in minimizing Ga ion contamination during the FIB milling process. The critical current-normal resistance product of the submicron junctions at 4.3 K was found to be 1-3 mV, comparable to the value of the same type of junction at micron scale. The submicron junction Rn value is in the range of 35-100 Ω, resulting in a large SQUID modulation voltage over a wide temperature range. This performance motivates further investigation of cryogen-free, high field sensitivity SQUID applications at medium-low temperatures, e.g. at 40-60 K.
Reactive power and voltage control based on general quantum genetic algorithms
DEFF Research Database (Denmark)
Vlachogiannis, Ioannis (John); Østergaard, Jacob
2009-01-01
This paper presents an improved evolutionary algorithm based on quantum computing for optimal steady-state performance of power systems. The proposed general quantum genetic algorithm (GQ-GA) can, however, be applied to various combinatorial optimization problems. In this study the GQ-GA determines...... techniques such as the enhanced GA, multi-objective evolutionary algorithms and particle swarm optimization algorithms, as well as the classical primal-dual interior-point optimal power flow algorithm. The comparison demonstrates the ability of the GQ-GA to reach more optimal solutions.
Self Avoiding Paths Routing Algorithm in Scale-Free Networks
Rachadi, Abdeljalil; Zahid, Noureddine
2013-01-01
In this paper, we present a new routing algorithm called the "Self Avoiding Paths Routing Algorithm". Its application to traffic flow in scale-free networks shows a great improvement over the so-called "efficient routing" protocol, while at the same time maintaining a relatively low average packet travel time. It has the advantage of minimizing path overlapping throughout the network in a self-consistent manner, with a relatively small number of iterations, by maintaining an equilibrated path distribution, especially among the hubs. This results in a significant shift of the critical packet generation rate above which traffic congestion occurs, thus permitting the network to sustain more information packets in the free-flow state. The performance of the algorithm is discussed both on a Barabási-Albert (BA) network and on real autonomous system (AS) network data.
Directory of Open Access Journals (Sweden)
Mahidur R. Sarker
2016-09-01
This paper presents a new method for a vibration-based piezoelectric energy harvesting system using a backtracking search algorithm (BSA)-based proportional-integral (PI) voltage controller. This technique eliminates the exhaustive conventional trial-and-error procedure for obtaining optimized values of the proportional gain (Kp) and integral gain (Ki) of PI voltage controllers. The estimated values of Kp and Ki are used in the PI voltage controller developed through the BSA optimization technique. In this study, the mean absolute error (MAE) is used as the objective function to minimize the output error of the piezoelectric energy harvesting system (PEHS). The model of the PEHS is designed and analyzed using the BSA optimization technique. The BSA-based PI voltage controller of the PEHS produces a significant improvement in minimizing the output error of the converter and a robust, regulated pulse-width modulation (PWM) signal to control a MOSFET switch, with the best response in terms of rise time and settling time under various load conditions.
Improved chirp scaling algorithm for parallel-track bistatic SAR
Institute of Scientific and Technical Information of China (English)
Li Feng; Li Shu; Zhao Yigong
2009-01-01
The curvature factor of parallel-track bistatic SAR is range dependent, even without variation of the effective velocity. Accounting for this new characteristic, a parallel-track chirp scaling algorithm (CSA) is derived by introducing removal of range walk (RRW) in the time domain. Applying the RRW before the CSA reduces the range of variation of the curvature factor without noticeably increasing the computational load. The azimuth dependence of the azimuth FM rate resulting from the RRW is compensated by the nonlinear chirp scaling factor; the algorithm is thereby extended to stripmap imaging. The implementation of the method is presented and verified by simulation results.
Voltage and Pressure Scaling of Streamer Dynamics in a Helium Plasma Jet With N2 CO-Flow (Postprint)
2014-08-14
... increase quadratically with increased applied voltage. Differences in streamer propagation in helium versus air are responsible for the observed differences in the 2-D scaling properties of ionization wave sustained, cathode-directed streamer propagation.
Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales
Energy Technology Data Exchange (ETDEWEB)
Xiu, Dongbin [Purdue Univ., West Lafayette, IN (United States)
2016-06-21
The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations on extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately, in high dimensional spaces, resolve stochastic problems with limited smoothness, even containing discontinuities.
El-Zoghby, Helmy M.; Bendary, Ahmed F.
2016-10-01
In this paper a Static Synchronous Compensator (STATCOM) is used to improve the performance of a power grid with a wind turbine driving a synchronous generator. The main feature of the STATCOM is its ability to rapidly absorb or inject reactive power into the grid, through which voltage regulation of the power grid is achieved. The STATCOM also improves the stability of the power system after severe disturbances such as faults or sudden step changes in wind speed. The proposed STATCOM controller is a Proportional-Integral (PI) controller tuned by a Genetic Algorithm (GA). An experimental model of the proposed system was built at Helwan University and tested under different operating conditions. The experimental results prove the effectiveness of the proposed STATCOM controller in damping power system oscillations and restoring the power system voltage and stability.
Directory of Open Access Journals (Sweden)
Hamed Piarehzadeh
2012-08-01
This study addresses optimal distributed generation (DG) allocation for stability improvement in radial distribution systems. Voltage instability implies an uncontrolled decrease in voltage triggered by a disturbance, leading to voltage collapse, and is primarily caused by dynamics connected with the load. Based on the time frame of the phenomena, the instability is divided into steady-state and transient voltage instability. The analysis is accomplished using a steady-state voltage stability index that can be evaluated at each node of the distribution system. Several optimal capacities and locations are used to check the results; the location of the DG has the main effect on the voltage stability of the system. The effects of location and capacity on increasing steady-state voltage stability in radial distribution systems are examined through the Harmony Search Algorithm (HSA), and the results are finally compared to Particle Swarm Optimization (PSO) in terms of speed, convergence and accuracy.
Control and Protection in Low Voltage Grid with Large Scale Renewable Electricity Generation
DEFF Research Database (Denmark)
Mustafa, Ghullam
of renewable energy based DGs are reduced CO2 emission, reduced operational cost as almost no fuel is used for their operation, and lower transmission and distribution losses as these units are normally built near the load centers. This has also resulted in some operational challenges due to the unpredictable...... of the wind speed and solar irradiation fluctuations are tackled. The CIGRE Low Voltage (LV) network comprising two solar PV generating units of 3 kW and 4 kW, one 5.5 kW fixed-pitch fixed-speed WTG and two battery units of 30 kWh and 21 kWh has been chosen for the study. The study...... the distribution system and the transmission grid has been proposed here. The algorithms, models and methodologies developed during this research study have been tested in a CIGRE low voltage distribution network. The simulation results show that they are able to correctly identify the states of the distribution...
DEFF Research Database (Denmark)
Meng, Lexuan; Zhao, Xin; Tang, Fen;
2016-01-01
In islanded microgrids (MGs), distributed generators (DGs) can be employed as distributed compensators for improving power quality on the consumer side. Two-level hierarchical control can be used for voltage unbalance compensation. The primary level, consisting of droop control and virtual...... impedance, can be applied to help share the positive-sequence active and reactive power. The secondary level is used to assist voltage unbalance compensation. However, if distribution line differences are considered, the negative sequence current cannot be well shared among DGs. In order to overcome......
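The droop mechanism named at the primary level of the record above can be sketched generically; the gains and setpoints below are illustrative assumptions, not values from this work.

```python
def droop_reference(p, q, f0=50.0, v0=1.0, m_p=0.01, n_q=0.05):
    """Conventional P-f / Q-V droop: a DG lowers its frequency and
    voltage references as its active (p) and reactive (q) output rise,
    so parallel units share load without communication."""
    return f0 - m_p * p, v0 - n_q * q

# two DGs with equal droop gains settle at the same frequency
f1, v1 = droop_reference(1.0, 0.5)
f2, v2 = droop_reference(1.0, 0.2)
```

With equal active power output the frequency references coincide, while the unit injecting less reactive power holds a higher voltage reference; the virtual impedance and secondary compensation of the record refine this basic behavior.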
Directory of Open Access Journals (Sweden)
K. Lenin
2014-04-01
This paper presents a hybrid biogeography algorithm for solving the multi-objective reactive power dispatch problem in a power system. Real power loss minimization and maximization of the voltage stability margin are taken as the objectives. Artificial bee colony optimization (ABC) is a quick and powerful algorithm for global optimization. Biogeography-Based Optimization (BBO) is a recent biogeography-inspired algorithm; it mainly utilizes the biogeography-based migration operator to share information among solutions. In this work, a hybrid of BBO and ABC, named HBBABC (Hybrid Biogeography-based Artificial Bee Colony Optimization), is proposed for general numerical optimization problems. HBBABC merges the searching behavior of ABC with that of BBO: ABC has a good exploration tendency while BBO has a good exploitation tendency. HBBABC is used to solve the reactive power dispatch problem, and the proposed technique has been tested on the standard IEEE 30-bus test system.
A Block-Based Multi-Scale Background Extraction Algorithm
Directory of Open Access Journals (Sweden)
Seyed H. Davarpanah
2010-01-01
Problem statement: To extract moving objects, vision-based surveillance systems subtract the current image from a predefined background image. The efficiency of these systems mainly depends on the accuracy of the extracted background image, which should adapt to changes continuously; in addition, especially in real-time applications, the time complexity of this adaptation is critical. Approach: In this study, a combination of blocking and multi-scale methods is presented for extracting an adaptive background. Being less sensitive to local movements, block-based techniques are well suited to handling non-stationary object movements, especially in outdoor applications, and can reduce the effect of these objects on the extracted background. We also used the blocking method to intelligently select the regions to which temporal filtering is applied. In addition, an amended multi-scale algorithm is introduced. This hybrid algorithm combines nonparametric and parametric filters: it uses a nonparametric filter in the spatial domain to initialize two primary backgrounds, and then two adapted two-dimensional filters to extract the final background. Results: The qualitative and quantitative results of our experiments certify not only that the quality of the final extracted background is acceptable, but also that its runtime is approximately half that of similar methods. Conclusion: Multi-scale filtering, with the filters applied only to selected non-overlapping blocks, reduces the runtime of the background extraction algorithm.
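As a hedged sketch of the block-based idea (not the paper's exact filters), the following adapts the background only in blocks judged free of moving objects; the block size, threshold and learning rate are assumed values.

```python
def update_background(bg, frame, block, thresh, alpha):
    """Blend each block of `frame` into the background `bg` only when
    the block's mean absolute difference is small, i.e. no moving
    object is present there; object blocks are left untouched."""
    h, w = len(frame), len(frame[0])
    for by in range(0, h, block):
        for bx in range(0, w, block):
            diff = count = 0
            for y in range(by, min(by + block, h)):
                for x in range(bx, min(bx + block, w)):
                    diff += abs(frame[y][x] - bg[y][x])
                    count += 1
            if diff / count < thresh:          # static block: adapt
                for y in range(by, min(by + block, h)):
                    for x in range(bx, min(bx + block, w)):
                        bg[y][x] = (1 - alpha) * bg[y][x] + alpha * frame[y][x]
    return bg

# toy 4x4 scene: a bright "object" covers the top-left 2x2 block
bg = [[0.0] * 4 for _ in range(4)]
frame = [[100.0, 100.0, 2.0, 2.0],
         [100.0, 100.0, 2.0, 2.0],
         [2.0, 2.0, 2.0, 2.0],
         [2.0, 2.0, 2.0, 2.0]]
update_background(bg, frame, block=2, thresh=10.0, alpha=0.5)
```

The object block keeps its old background values while the static blocks drift toward the new frame, which is the property that makes block-wise selection robust to local movement.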
Algorithm of simulation time synchronization over large-scale nodes
Institute of Scientific and Technical Information of China (English)
ZHAO QinPing; ZHOU Zhong; Lü Fang
2008-01-01
In distributed simulation there is no uniform physical clock, and delay cannot be estimated because of jitter, so simulation time synchronization is essential for event consistency among nodes. This paper investigates time synchronization algorithms over large-scale distributed nodes, analyzes the LBTS (lower bound time stamp) computation model described in the IEEE HLA standard, and then presents a grouped LBTS model. Existing algorithms carry a default premise that control packets must be delivered via reliable transport. We propose a theorem on the reliability of time synchronization messages, which proves that only those control messages that constrain time advance need reliable delivery, breaking this default premise. Multicast is then introduced for the transmission of control messages, and the algorithm MCTS (multi-node coordination time synchronization) is proposed on this basis. MCTS not only improves time advance efficiency but also reduces the occupied network bandwidth. Experimental results demonstrate that the algorithm outperforms others in both time advance speed and occupied network bandwidth: its time advance speed is about 50 advances per second with 1000 nodes, approximately equal to that of similar systems with 100 nodes.
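The LBTS model the record analyzes follows the standard HLA time-management bound. As a hedged illustration (node names and values are invented, and the grouped-LBTS and multicast extensions are not shown), the basic computation is:

```python
def lbts_for(node, advance_request, lookahead):
    """Standard HLA-style Lower Bound Time Stamp: the earliest event
    timestamp any *other* node could still send to `node`, i.e. the
    minimum over peers of (requested advance time + peer lookahead).
    `node` may safely advance its simulation time up to this bound."""
    return min(advance_request[p] + lookahead[p]
               for p in advance_request if p != node)

advance_request = {"A": 10.0, "B": 12.0, "C": 9.0}
lookahead = {"A": 1.0, "B": 0.5, "C": 2.0}
bound_a = lbts_for("A", advance_request, lookahead)  # min(12.5, 11.0)
```

Raising any peer's lookahead raises the bound, which is why lookahead is the main lever for time advance speed in such algorithms.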
Energy Technology Data Exchange (ETDEWEB)
Garzillo, A.; Innorta, M.; Marannino, P.; Mognetti, F.; Cova, B.
1988-09-01
This paper presents some criteria applied to the optimization of voltage profiles and of the distribution of reactive power generation among various resources in daily scheduling and VAR planning. The mathematical models employed in the representation of the two problems are quite similar, in spite of the different objective functions and control variable sets. The solution is based upon the implementation of two optimal reactive power flow (ORPF) programs. The first ORPF determines a feasible operating point in the daily scheduling application, or the minimum investment in installations required by system security in the VAR planning application. It utilizes a linear algorithm (gradient projection) suggested by Rosen, which has been found to be a favourable alternative to the commonly used simplex method. The second ORPF determines the minimum-loss operating point in reactive power dispatch, or the most beneficial installation of reactive compensation in VAR planning. The economy problems are solved by the Han-Powell algorithm, which essentially solves a sequence of quadratic sub-problems. In the adopted procedure, the quadratic sub-problems are solved by exploiting an active constraint strategy in the QUADRI subroutine, used as an alternative to the well-known Beale method.
A new non-monotone fitness scaling for genetic algorithm
Institute of Scientific and Technical Information of China (English)
[No author listed]
2001-01-01
The properties of selection operators in the genetic algorithm (GA) are studied in detail. It is shown that selection operations are significant both for improving the general fitness of a population and for causing schema deception. The stochastic searching characteristics of the GA are compared with those of heuristic methods. The influence of selection operators on the GA's exploration and exploitation is discussed, and the performance of selection operators is evaluated, taking the premature convergence of the GA as an example based on the One-Max function. In order to overcome the schema deceptiveness of the GA, a new type of fitness scaling, non-monotone scaling, is proposed to enhance the evolutionary ability of a population. The effectiveness of the new scaling method is tested on a trap function and a needle-in-a-haystack (NiH) function.
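The record contrasts the proposed non-monotone scaling with conventional schemes, but the abstract gives no formula for the new scaling. As background, here is a minimal sketch of classic linear fitness scaling, the kind of monotone baseline such work modifies; the target ratio `c` is an assumed parameter.

```python
def linear_scaling(fitness, c=2.0):
    """Classic linear fitness scaling f' = a*f + b, with a and b chosen
    so that the population mean is preserved and the best individual's
    scaled fitness becomes c times the mean. This caps the selection
    pressure exerted by roulette-wheel selection."""
    n = len(fitness)
    mean = sum(fitness) / n
    best = max(fitness)
    if best == mean:                 # flat population: nothing to scale
        return list(fitness)
    a = (c - 1.0) * mean / (best - mean)
    b = mean * (1.0 - a)
    return [a * f + b for f in fitness]

scaled = linear_scaling([1.0, 2.0, 3.0, 4.0])
```

Because the map is monotone, the fitness ranking is unchanged; a non-monotone scaling such as the one the record proposes deliberately breaks this property to escape schema deception.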
Multi-Scale Parameter Identification of Lithium-Ion Battery Electric Models Using a PSO-LM Algorithm
Directory of Open Access Journals (Sweden)
Wen-Jing Shen
2017-03-01
This paper proposes a multi-scale parameter identification algorithm for the lithium-ion battery (LIB) electric model using a combination of particle swarm optimization (PSO) and Levenberg-Marquardt (LM) algorithms. Two-dimensional Poisson equations with unknown parameters are used to describe the potential and current density distribution (PDD) of the positive and negative electrodes in the LIB electric model. The model parameters are difficult to determine in simulation due to the nonlinear complexity of the model. In the proposed identification algorithm, PSO is used for coarse-scale parameter identification and the LM algorithm is applied for fine-scale parameter identification. The experimental results show that the multi-scale identification not only improves the convergence rate and effectively escapes the stagnation of PSO, but also overcomes the local minimum entrapment drawback of the LM algorithm. The terminal voltage curves from the PDD model with the identified parameter values are in good agreement with those from experiments at different discharge/charge rates.
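The coarse-PSO-plus-fine-LM idea can be sketched on a toy curve-fitting problem. This is not the paper's PDD model; the exponential model, bounds and hyper-parameters below are illustrative assumptions.

```python
import math
import random

random.seed(1)

def cost(params, ts, ys):
    """Sum-of-squares error of the toy model y = a*exp(-b*t)."""
    a, b = params
    return sum((a * math.exp(-b * t) - y) ** 2 for t, y in zip(ts, ys))

def pso(f, bounds, n=20, iters=40, w=0.7, c1=1.5, c2=1.5):
    """Coarse-scale global search: plain PSO over box bounds."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest, pcost = [p[:] for p in pos], [f(p) for p in pos]
    g = min(range(n), key=pcost.__getitem__)
    gbest, gcost = pbest[g][:], pcost[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d], lo), hi)  # clamp to bounds
            c = f(pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
                if c < gcost:
                    gbest, gcost = pos[i][:], c
    return gbest

def lm_refine(params, ts, ys, iters=60, lam=1e-3):
    """Fine-scale local refinement: Levenberg-Marquardt for the
    2-parameter residuals r_i = a*exp(-b*t_i) - y_i, with the damped
    normal equations solved in closed form for the 2x2 case."""
    a, b = params
    c0 = cost((a, b), ts, ys)
    for _ in range(iters):
        g1 = g2 = h11 = h12 = h22 = 0.0
        for t, y in zip(ts, ys):
            e = math.exp(-b * t)
            r = a * e - y
            j1, j2 = e, -a * t * e              # dr/da, dr/db
            g1 += j1 * r; g2 += j2 * r
            h11 += j1 * j1; h12 += j1 * j2; h22 += j2 * j2
        m11, m22 = h11 + lam, h22 + lam
        det = m11 * m22 - h12 * h12
        da = (-g1 * m22 + g2 * h12) / det       # step = -(JtJ+lam*I)^-1 Jt r
        db = (g1 * h12 - g2 * m11) / det
        c1 = cost((a + da, b + db), ts, ys)
        if c1 < c0:                              # accept, trust model more
            a, b, c0, lam = a + da, b + db, c1, lam * 0.5
        else:                                    # reject, damp harder
            lam *= 10.0
    return a, b

ts = [0.2 * i for i in range(25)]
ys = [2.0 * math.exp(-0.5 * t) for t in ts]      # synthetic truth: a=2, b=0.5
coarse = pso(lambda p: cost(p, ts, ys), [(0.0, 5.0), (0.0, 2.0)])
a_fit, b_fit = lm_refine(coarse, ts, ys)
```

PSO's swarm avoids the bad local starts that would trap LM, while LM sharpens the swarm's rough estimate far faster than continued swarm iterations would, which mirrors the division of labor the record describes.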
Micron-scale voltage and [Ca2+]i imaging in the intact heart
Directory of Open Access Journals (Sweden)
Xiao-long Lu
2014-12-01
Studies in isolated cardiomyocytes have provided tremendous information at the cellular and molecular level concerning regulation of transmembrane voltage (Vm) and intracellular calcium ([Ca2+]i). The ability to use this information to gain insight into the function of ion channels and Ca2+ handling proteins in a more complex system, e.g. the intact heart, has remained a challenge. We have developed laser scanning fluorescence microscopy-based approaches to monitor, at the sub-cellular to multi-cellular level in the immobilized, Langendorff-perfused mouse heart, dynamic changes in [Ca2+]i and Vm. This article will review the use of single- or dual-photon laser scanning microscopy [Ca2+]i imaging in conjunction with transgenic reporter technology to (a) interrogate the extent to which transplanted, donor-derived myocytes or cardiac stem cell-derived de novo myocytes are capable of forming a functional syncytium with the pre-existing myocardium, using entrainment of [Ca2+]i transients by the electrical activity of the recipient heart as a surrogate for electrical coupling, and (b) characterize the Ca2+ handling phenotypes of cellular implants. Further, we will review the ability of laser scanning fluorescence microscopy, in conjunction with a fast-response voltage-sensitive dye, to resolve, on a subcellular level in Langendorff-perfused mouse hearts, Vm dynamics that typically occur during the course of a cardiac action potential. Specifically, the utility of this technique for measuring microscopic-scale voltage gradients in the normal and diseased heart is discussed.
An energy-efficient, dynamic voltage scaling neural stimulator for a proprioceptive prosthesis.
Williams, Ian; Constandinou, Timothy G
2013-04-01
This paper presents an 8-channel energy-efficient neural stimulator for generating charge-balanced asymmetric pulses. Power consumption is reduced by implementing a fully-integrated DC-DC converter that uses a reconfigurable switched capacitor topology to provide 4 output voltages for Dynamic Voltage Scaling (DVS). DC conversion efficiencies of up to 82% are achieved using integrated capacitances of under 1 nF, and the DVS approach offers power savings of up to 50% compared to the front end of a typical current-controlled neural stimulator. A novel charge balancing method is implemented which has a low level of accuracy on a single pulse and a much higher accuracy over a series of pulses. The method is robust to process and component variation and does not require any initial or ongoing calibration. Measured results indicate that the charge imbalance is typically between 0.05% and 0.15% of the charge injected for a series of pulses. Ex-vivo experiments demonstrate the viability of using this circuit for neural activation. The circuit has been implemented in a commercially-available 0.18 μm HV CMOS technology and occupies a core die area of approximately 2.8 mm² for an 8-channel implementation.
Large-Scale Electrochemical Energy Storage in High Voltage Grids: Overview of the Italian Experience
Directory of Open Access Journals (Sweden)
Roberto Benato
2017-01-01
This paper offers a wide overview of the large-scale electrochemical energy storage projects installed in the high voltage Italian grid. Detailed descriptions of energy-intensive (charge/discharge times of about 8 h) and power-intensive (charge/discharge times ranging from 0.5 h to 4 h) installations are presented, with some insights into the authorization procedures, safety features, and ancillary services. These different charge/discharge times reflect the different operational uses inside the electric grid. Energy-intensive storage aims at decoupling generation and utilization: in the southern part of Italy there has been great growth of wind farms, and these areas are characterized by a surplus of generation with respect to load absorption and to the net transport capacity of the 150 kV high voltage backbones. Power-intensive storage aims at providing ancillary services inside the electric grid, such as primary and secondary frequency regulation, synthetic rotational inertia, and further functionalities. The experience gained from the Italian installations can also play a key role for other countries and other transmission system operators.
An augmented Lagrangian multi-scale dictionary learning algorithm
Directory of Open Access Journals (Sweden)
Ye Meng
2011-01-01
Learning overcomplete dictionaries for sparse signal representation has become a hot topic that has fascinated many researchers in recent years, yet most existing approaches suffer from a serious problem: they tend to get stuck in local minima. In this article, we present a novel augmented Lagrangian multi-scale dictionary learning algorithm (ALM-DL), which first recasts the constrained dictionary learning problem into an AL scheme and then updates the dictionary after each inner iteration of the scheme, during which a majorization-minimization technique is employed to solve the inner subproblem. Refining the dictionary from low scale to high makes the proposed method less dependent on the initial dictionary, hence avoiding local optima. Numerical tests on synthetic data and on denoising applications for real images demonstrate the superior performance of the proposed approach.
Design of optimal input–output scaling factors based fuzzy PSS using bat algorithm
Directory of Open Access Journals (Sweden)
D.K. Sambariya
2016-06-01
In this article, a fuzzy logic based power system stabilizer (FPSS) is designed by tuning its input-output scaling factors. The two input signals to the FPSS are the change of speed and the change in power, and the output signal is a correcting voltage signal. Determining the normalizing factors of these signals is posed as an optimization problem minimizing the integral of square error in single-machine and multi-machine power systems. These factors are optimally determined with the bat algorithm (BA) and used as the scaling factors of the FPSS. The performance of a power system with such a BA-based FPSS (BA-FPSS) is compared to that with a conventional FPSS, a Harmony Search Algorithm based FPSS (HSA-FPSS) and a Particle Swarm Optimization based FPSS (PSO-FPSS). The systems considered are a single machine connected to an infinite bus, the two-area 4-machine 10-bus system and the IEEE New England 10-machine 39-bus power system. The comparison is carried out in terms of the integral of time-weighted absolute error (ITAE), integral of absolute error (IAE) and integral of square error (ISE) of the speed response for systems with FPSS, HSA-FPSS and BA-FPSS. The superior performance of the systems with BA-FPSS is established over eight plant conditions for each system, representing a wide range of operating conditions.
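The three error indices used in the comparison above have standard definitions; a minimal discrete approximation (uniform sampling assumed, and the toy error signal invented for illustration) is:

```python
def error_indices(t, e):
    """Discrete (rectangle-rule) approximations of the classic error
    integrals used to score a controller's speed response:
    IAE = integral of |e| dt, ISE = integral of e^2 dt,
    ITAE = integral of t*|e| dt.  Assumes uniform sampling."""
    dt = t[1] - t[0]
    iae = sum(abs(x) for x in e) * dt
    ise = sum(x * x for x in e) * dt
    itae = sum(ti * abs(x) for ti, x in zip(t, e)) * dt
    return iae, ise, itae

t = [0.0, 0.1, 0.2, 0.3]
e = [1.0, 0.5, -0.25, 0.0]      # toy speed-deviation signal
iae, ise, itae = error_indices(t, e)
```

ITAE weights late errors more heavily, which is why it is the preferred index when penalizing slow settling; ISE emphasizes large early transients instead.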
Cotes-Ruiz, Iván Tomás; Prado, Rocío P; García-Galán, Sebastián; Muñoz-Expósito, José Enrique; Ruiz-Reyes, Nicolás
2017-01-01
Nowadays, the growing computational capabilities of Cloud systems rely on reducing the power consumed by their data centers to make them sustainable and economically profitable. The efficient management of computing resources is at the heart of any energy-aware data center, and of special relevance is the adaptation of its performance to the workload. Intensive computing applications in diverse areas of science generate complex workloads called workflows, whose successful management in terms of energy saving is still in its beginnings. WorkflowSim is currently one of the most advanced simulators for research on workflow processing, offering advanced features such as task clustering and failure policies. In this work, an expected power-aware extension of WorkflowSim is presented. This new tool integrates a power model based on a computing-plus-communication design to allow the optimization of new energy-saving management strategies considering computing, reconfiguration and network costs as well as quality of service, and it incorporates the preeminent strategy for on-host energy saving: Dynamic Voltage Frequency Scaling (DVFS). The simulator is designed to be consistent across different real scenarios and to include a wide repertory of DVFS governors. Results showing the validity of the simulator in terms of resource utilization, frequency and voltage scaling, power, energy and time saving are presented. Results achieved by the intra-host DVFS strategy with different governors are also compared to those of a data center using a recent and successful DVFS-based inter-host scheduling strategy as a mechanism overlapping the intra-host DVFS technique.
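The abstract does not give WorkflowSim's power formulas, so the following is a generic sketch of the DVFS trade-off such simulators model: dynamic power scales with V²f, so a lower operating point can cost less total energy despite running longer. The operating points and constants (`c_eff`, `p_static`) are assumptions, not values from the tool.

```python
def dvfs_energy(cycles, freq_hz, volt, c_eff=1e-9, p_static=0.5):
    """Energy to execute `cycles` at a DVFS operating point: dynamic
    power C_eff * V^2 * f plus a constant static (leakage) power,
    multiplied by the execution time cycles / f."""
    time_s = cycles / freq_hz
    p_dyn = c_eff * volt ** 2 * freq_hz
    return (p_dyn + p_static) * time_s

# hypothetical governor operating points: (frequency [Hz], voltage [V])
points = {"performance": (2.0e9, 1.2), "powersave": (1.0e9, 0.9)}
work = 2.0e9  # cycles for the task
e_perf = dvfs_energy(work, *points["performance"])
e_save = dvfs_energy(work, *points["powersave"])
```

Here the powersave point wins because dynamic savings outweigh the extra leakage paid over the doubled runtime; with a larger `p_static` the comparison can flip, which is the "race-to-idle" effect DVFS governors must weigh.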
An Implementation and Parallelization of the Scale Space Meshing Algorithm
Directory of Open Access Journals (Sweden)
Julie Digne
2015-11-01
Creating an interpolating mesh from an unorganized set of oriented points is a difficult problem which is often overlooked. Most methods focus indeed on building a watertight smoothed mesh by defining some function whose zero level set is the surface of the object. However in some cases it is crucial to build a mesh that interpolates the points and does not fill the acquisition holes: either because the data are sparse and trying to fill the holes would create spurious artifacts, or because the goal is to explore visually the data exactly as they were acquired without any smoothing process. In this paper we detail a parallel implementation of the Scale-Space Meshing algorithm, which builds on the scale-space framework for reconstructing a high precision mesh from an input oriented point set. This algorithm first smoothes the point set, producing a singularity free shape. It then uses a standard mesh reconstruction technique, the Ball Pivoting Algorithm, to build a mesh from the smoothed point set. The final step consists in back-projecting the mesh built on the smoothed positions onto the original point set. The result of this process is an interpolating, hole-preserving surface mesh reconstruction.
DEFF Research Database (Denmark)
Rather, Zakir Hussain; Chen, Zhe; Thøgersen, Paul
2013-01-01
Grid integration of Renewable Energy (RE) at large scale poses major challenges to the secure and stable operation of the power system. This paper presents the challenge of short-circuit power and primary voltage control of a wind-integrated power system where the majority of conventional generators...... are replaced by wind generators. The impact of large-scale wind integration on fast reactive power support is studied in this paper. Considering both technical and economic aspects, alternatives to address the challenge of dynamic voltage support have also been demonstrated in this paper. A case study...
Large scale petroleum reservoir simulation and parallel preconditioning algorithms research
Institute of Scientific and Technical Information of China (English)
SUN Jiachang; CAO Jianwen
2004-01-01
Solving large-scale linear systems efficiently plays an important role in a petroleum reservoir simulator, and the key is how to choose an effective parallel preconditioner. Properly choosing a good preconditioner goes beyond the purely algebraic field. An integrated preconditioner should include such components as the physical background, the characteristics of the PDE mathematical model, the nonlinear solution method, the linear solution algorithm, domain decomposition and parallel computation. We first discuss some parallel preconditioning techniques, and then construct an integrated preconditioner that is based on large-scale distributed parallel processing and oriented to reservoir simulation. The infrastructure of this preconditioner contains such well-known preconditioning construction techniques as coarse-grid correction, constraint residual correction and subspace projection correction. We use a multi-step approach to integrate a total of eight types of preconditioning components into the final preconditioner. Industrial reservoir data at the million-grid-cell scale were tested on domestic high-performance computers. Numerical statistics and analyses show that this preconditioner achieves satisfactory parallel efficiency and acceleration.
Bonus algorithm for large scale stochastic nonlinear programming problems
Diwekar, Urmila
2015-01-01
This book presents the details of the BONUS algorithm and its real-world applications in areas like sensor placement in large-scale drinking water networks, sensor placement in advanced power systems, water management in power systems, and capacity expansion of energy systems. A generalized method for stochastic nonlinear programming, based on a sampling approach for uncertainty analysis and statistical reweighting to obtain probability information, is demonstrated in this book. Stochastic optimization problems are difficult to solve since they involve nested optimization and uncertainty loops. There are two fundamental approaches used to solve such problems: the first is decomposition techniques, and the second identifies problem-specific structures and transforms the problem into a deterministic nonlinear programming problem. These techniques have significant limitations on either the objective function type or the underlying distributions for the uncertain variables. Moreover, these ...
Scale-Aware Pansharpening Algorithm for Agricultural Fragmented Landscapes
Directory of Open Access Journals (Sweden)
Mario Lillo-Saavedra
2016-10-01
Full Text Available Remote sensing (RS has played an important role in extensive agricultural monitoring and management for several decades. However, the current spatial resolution of satellite imagery does not have enough definition to generalize its use in highly-fragmented agricultural landscapes, which represents a significant percentage of the world’s total cultivated surface. To characterize and analyze this type of landscape, multispectral (MS images with high and very high spatial resolutions are required. Multi-source image fusion algorithms are normally used to improve the spatial resolution of images with a medium spatial resolution. In particular, pansharpening (PS methods allow one to produce high-resolution MS images through a coherent integration of spatial details from a panchromatic (PAN image with spectral information from an MS. The spectral and spatial quality of source images must be preserved to be useful in RS tasks. Different PS strategies provide different trade-offs between the spectral and the spatial quality of the fused images. Considering that agricultural landscape images contain many levels of significant structures and edges, the PS algorithms based on filtering processes must be scale-aware and able to remove different levels of detail in any input images. In this work, a new PS methodology based on a rolling guidance filter (RGF is proposed. The main contribution of this new methodology is to produce artifact-free pansharpened images, improving the MS edges with a scale-aware approach. Three images have been used, and more than 150 experiments were carried out. An objective comparison with widely-used methodologies shows the capability of the proposed method as a powerful tool to obtain pansharpened images preserving the spatial and spectral information.
Takayasu, M; Minervini, J V; 10.1109/TASC.2003.812854
2003-01-01
Voltage-current characteristics of mechanical pressure contact junctions between superconducting wires are investigated using a voltage-driving method. It is found that the switching regions at low voltages result from negative resistance of the contact junction. The current transport of the contact junctions is discussed from the perspective of two existing models: the multiple Andreev reflections at the two SN interfaces of a SNS (Superconductor/Normal metal /Superconductor) junction and the inhomogeneous multiple Josephson weak-link array. (13 refs).
Systems and methods for process and user driven dynamic voltage and frequency scaling
Mallik, Arindam; Lin, Bin; Memik, Gokhan; Dinda, Peter; Dick, Robert
2011-03-22
Certain embodiments of the present invention provide a method for power management including determining at least one of an operating frequency and an operating voltage for a processor and configuring the processor based on the determined operating frequency and/or operating voltage. The operating frequency is determined based at least in part on direct user input. The operating voltage is determined based at least in part on an individual profile for the processor.
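A minimal sketch of the scheme the claim describes: frequency comes from direct user input, and voltage is looked up from a per-processor profile. The table values and the nearest-operating-point snapping below are illustrative assumptions, not the patent's method:

```python
# Hypothetical per-processor profile mapping operating frequency (Hz) to the
# minimum stable supply voltage (V) at that frequency.
VOLTAGE_PROFILE = {800e6: 0.9, 1200e6: 1.0, 1600e6: 1.1, 2000e6: 1.25}

def configure(user_freq_hz):
    """Snap a user-requested frequency to the nearest supported operating
    point and return (frequency, voltage) from the individual profile."""
    freq = min(VOLTAGE_PROFILE, key=lambda f: abs(f - user_freq_hz))
    return freq, VOLTAGE_PROFILE[freq]

freq, vdd = configure(1.5e9)  # snaps to the 1.6 GHz / 1.1 V point
```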
Magdǎu, I.-B.; Liu, X.-H.; Kuroda, M. A.; Shaw, T. M.; Crain, J.; Solomon, P. M.; Newns, D. M.; Martyna, G. J.
2015-08-01
The piezoelectronic transduction switch is a device with potential as a post-CMOS transistor due to its predicted multi-GHz, low-voltage performance at the VLSI scale. However, the operating principle of the switch has wider applicability. We use theory and simulation to optimize the device across a wide range of length scales and application spaces and to understand the physics underlying its behavior. We show that the four-terminal VLSI-scale switch can operate at a line voltage of 115 mV, while as a low-voltage, large-area device, ≈200 mV operation at clock speeds of ≈2 GHz can be achieved with a desirable 10⁴ On/Off ratio, ideal for on-board computing in sensors. At yet larger scales, the device is predicted to operate as a fast (≈250 ps) radio frequency (RF) switch exhibiting high cyclability, low On resistance and low Off capacitance, resulting in a robust switch with an RF figure of merit of ≈4 fs. These performance benchmarks cannot be approached with CMOS, which has reached fundamental limits. In detail, a combination of finite element modeling and ab initio calculations enables prediction of switching voltages for a given design. A multivariate search method then establishes a set of physics-based design rules, discovering the key factors for each application. The results demonstrate that the piezoelectronic transduction switch can offer fast, low-power applications spanning several domains of the information technology infrastructure.
Directory of Open Access Journals (Sweden)
A. Daniyel Raj
2015-03-01
Full Text Available For many decades, continuous device performance improvement has been made possible only through device scaling. At present, however, due to aggressive scaling in the sub-micron and nanometer regimes, conventional planar silicon technology is suffering from fundamental physical limits. Such limits on further downscaling of planar silicon technology have led to alternative device technologies like Silicon-On-Insulator (SOI) technology. Due to some of its inherent advantages, SOI technology has reduced short-channel effects (SCEs) and thus increased transistor scalability. Until now, intense research interest has been devoted to the practical fabrication and theoretical modeling of SOI MOSFETs, but little attention has been paid to understanding the circuit-level performance improvement attainable with nano-scale SOI MOSFETs. Circuit-level performance analysis of SOI MOSFETs is highly essential to understand the impact of SOI technology on next-generation VLSI circuit and chip design, and for doing so compact device models are in high demand. In this scenario, in the present research, a physics-based compact device model of the SOI MOSFET has been developed. In the first phase of the compact model development, a physics-based threshold voltage model was developed by solving the 2-D Poisson's equation in the channel region, and in the second phase, a current-voltage model was developed with drift-diffusion analysis. Different SCEs, valid at the nano-scale, are effectively incorporated in the threshold voltage and current-voltage models. In the third phase, using the compact model, the Voltage Transfer Characteristic (VTC) of a nano-scale SOI CMOS inverter was derived with graphical analysis. The impacts of different device parameters, e.g., channel length and channel doping concentration, on the VTC have been investigated through simulation and the results analyzed.
Nonomura, Yoshihiko
2014-11-01
Nonequilibrium relaxation behaviors in the Ising model on a square lattice based on the Wolff algorithm are totally different from those based on local-update algorithms. In particular, the critical relaxation is described by a stretched-exponential decay. We propose a novel scaling procedure to connect nonequilibrium and equilibrium behaviors continuously, and find that the stretched-exponential scaling region in the Wolff algorithm is as wide as the power-law scaling region in local-update algorithms. We also find that relaxation to the spontaneous magnetization in the ordered phase is characterized by an exponential decay, rather than the stretched-exponential decay observed with local-update algorithms.
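A stretched-exponential critical decay m(t) = exp(-(t/τ)^σ) can be identified on a log(-log m) versus log t plot, where it becomes a straight line of slope σ. A small self-contained check on synthetic data (τ and σ here are illustrative, not the paper's fitted values):

```python
import math

def stretched_exp(t, tau, sigma):
    """Stretched-exponential relaxation m(t) = exp(-(t/tau)**sigma)."""
    return math.exp(-((t / tau) ** sigma))

# On the scaling plot, log(-log m) vs log t is linear with slope sigma.
tau, sigma = 10.0, 0.5
ts = [1.0, 10.0, 100.0]
ys = [math.log(-math.log(stretched_exp(t, tau, sigma))) for t in ts]
slope = (ys[2] - ys[0]) / (math.log(ts[2]) - math.log(ts[0]))
```

The recovered slope equals σ exactly for noiseless data, which is what distinguishes this form from the power-law decay of local-update dynamics on the same plot.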
Lee, Byung-Hyun; Moon, Dong-Il; Jang, HyunJae; Kim, Chang-Hoon; Seol, Myeong-Lok; Choi, Ji-Min; Lee, Dong-Il; Kim, Min-Wu; Yoon, Jun-Bo; Choi, Yang-Kyu
2014-07-21
A mechanical and electrical transistor structure (METS) is proposed for effective voltage scaling. A sub-2 nm nanogap formed by atomic layer deposition (ALD) without stiction, together with a high-permittivity dielectric, allows a pull-in voltage below 2 V, demonstrating a strength of mechanical actuation that is hard to realize in a typical complementary metal-oxide-semiconductor (CMOS) transistor. The results are verified by simulation and interpreted with a numerical equation. The METS can therefore pave a new way to overcome the limits of CMOS technology.
Directory of Open Access Journals (Sweden)
Byung Eun Lee
2014-09-01
Full Text Available This paper proposes an algorithm for fault detection and faulted phase and winding identification of a three-winding power transformer based on the induced voltages in the electrical power system. The ratio of the induced voltages of the primary-secondary, primary-tertiary and secondary-tertiary windings is the same as the corresponding turns ratio during normal operating conditions, magnetizing inrush, and over-excitation; it differs from the turns ratio during an internal fault. For a single-phase and a three-phase power transformer with wye-connected windings, the induced voltages of each pair of windings are estimated. For a three-phase power transformer with delta-connected windings, the induced voltage differences are estimated using the line currents, because the delta winding currents are practically unavailable. Six detectors are suggested for fault detection. An additional three detectors and a rule for faulted phase and winding identification are presented as well. The proposed algorithm can not only detect an internal fault, but also identify the faulted phase and winding of a three-winding power transformer. The various test results with Electromagnetic Transients Program (EMTP)-generated data show that the proposed algorithm successfully discriminates internal faults from normal operating conditions including magnetizing inrush and over-excitation. This paper concludes by implementing the algorithm in a prototype relay based on a digital signal processor.
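The core detection rule, comparing an induced-voltage ratio of a winding pair against its turns ratio with a deviation threshold, can be sketched as follows. The 5% threshold and the function name are assumptions; the paper's six detectors refine this idea per winding pair and per phase:

```python
def turns_ratio_detector(v_ind_a, v_ind_b, turns_a, turns_b, tol=0.05):
    """Flag an internal fault when the measured induced-voltage ratio of a
    winding pair deviates from the nominal turns ratio by more than `tol`
    (relative). During normal operation, inrush and over-excitation the
    ratio tracks the turns ratio, so the detector stays quiet."""
    measured = v_ind_a / v_ind_b
    nominal = turns_a / turns_b
    return abs(measured - nominal) / nominal > tol

# Normal operation: 400 V / 100 V matches a 400:100 turns ratio -> no trip.
# Internal fault: the effective ratio collapses (e.g. shorted turns) -> trip.
healthy = turns_ratio_detector(400.0, 100.0, 400, 100)
faulted = turns_ratio_detector(280.0, 100.0, 400, 100)
```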
Thermal instability and current-voltage scaling in superconducting fault current limiters
Energy Technology Data Exchange (ETDEWEB)
Zeimetz, B [Department of Materials Science and Metallurgy, Cambridge University, Pembroke Street, Cambridge CB1 3QZ (United Kingdom); Tadinada, K [Department of Engineering, Cambridge University, Trumpington Road, Cambridge CB2 1PZ (United Kingdom); Eves, D E [Department of Engineering, Cambridge University, Trumpington Road, Cambridge CB2 1PZ (United Kingdom); Coombs, T A [Department of Engineering, Cambridge University, Trumpington Road, Cambridge CB2 1PZ (United Kingdom); Evetts, J E [Department of Materials Science and Metallurgy, Cambridge University, Pembroke Street, Cambridge CB1 3QZ (United Kingdom); Campbell, A M [Department of Engineering, Cambridge University, Trumpington Road, Cambridge CB2 1PZ (United Kingdom)
2004-04-01
We have developed a computer model for the simulation of resistive superconducting fault current limiters in three dimensions. The program calculates the electromagnetic and thermal response of a superconductor to a time-dependent overload voltage, with different possible cooling conditions for the surfaces, and locally variable superconducting and thermal properties. We find that the cryogen boil-off parameters critically influence the stability of a limiter. The recovery time after a fault increases strongly with thickness. Above a critical thickness, the temperature is unstable even for a small applied AC voltage. The maximum voltage and maximum current during a short fault are correlated by a simple exponential law.
Voltage stability issues for a benchmark grid model including large scale wind power
DEFF Research Database (Denmark)
Eek, Jarle; Lund, Torsten; Di Marzio, Guiseppe
2006-01-01
The objective of the paper is to investigate how the voltage stability of a relatively weak network after a grid fault is affected by the connection of a large wind park. A theoretical discussion of the stationary and dynamic characteristics of the Short Circuit Induction Generator and the Doubly...... of the network and saturation of the external reactive power compensation units provides a good basis for evaluation of the voltage stability. For the DFIG it is concluded that the speed stability limit is mainly determined by the voltage limitation of the rotor converter...
Analysis of Community Detection Algorithms for Large Scale Cyber Networks
Energy Technology Data Exchange (ETDEWEB)
Mane, Prachita; Shanbhag, Sunanda; Kamath, Tanmayee; Mackey, Patrick S.; Springer, John
2016-09-30
The aim of this project is to use existing community detection algorithms on an IP network dataset to create supernodes within the network. This study compares the performance of different algorithms on the network in terms of running time. The paper begins with an introduction to the concept of clustering and community detection followed by the research question that the team aimed to address. Further the paper describes the graph metrics that were considered in order to shortlist algorithms followed by a brief explanation of each algorithm with respect to the graph metric on which it is based. The next section in the paper describes the methodology used by the team in order to run the algorithms and determine which algorithm is most efficient with respect to running time. Finally, the last section of the paper includes the results obtained by the team and a conclusion based on those results as well as future work.
Automatic Cloud Resource Scaling Algorithm based on Long Short-Term Memory Recurrent Neural Network
National Research Council Canada - National Science Library
Ashraf A. Shahin
2016-01-01
.... This paper has proposed dynamic threshold based auto-scaling algorithms that predict required resources using Long Short-Term Memory Recurrent Neural Network and auto-scale virtual resources based on predicted values...
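A dynamic-threshold auto-scaler of the kind described can be sketched independently of the forecaster: the LSTM's role is only to supply the predicted utilization. The thresholds and the capacity-sizing rule below are illustrative assumptions, not the paper's formulas:

```python
import math

def autoscale(current_vms, predicted_util, upper=0.8, lower=0.3,
              min_vms=1, max_vms=100):
    """Scale the VM count so that the *predicted* utilization falls back
    inside the [lower, upper] band; total work is assumed proportional to
    current_vms * predicted_util."""
    target = current_vms
    if predicted_util > upper:
        # Scale out to bring utilization under the upper threshold.
        target = math.ceil(current_vms * predicted_util / upper)
    elif predicted_util < lower:
        # Scale in, but never below what the lower threshold permits.
        target = math.ceil(current_vms * predicted_util / lower)
    return max(min_vms, min(max_vms, target))
```

For example, 4 VMs with a predicted utilization of 0.9 scale out to 5, while 10 VMs predicted at 0.1 scale in to 4.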
Algorithm of search and track of static and moving large-scale objects
Directory of Open Access Journals (Sweden)
Kalyaev Anatoly
2017-01-01
Full Text Available We suggest an algorithm for processing an image sequence for the search and tracking of static and moving large-scale objects. A possible software implementation of the algorithm, based on multithreaded CUDA processing, is suggested. An experimental analysis of the suggested algorithm implementation is performed.
Directory of Open Access Journals (Sweden)
M. Christobel
2015-01-01
Full Text Available One of the most significant and topmost parameters in the real-world computing environment is energy. Minimizing energy brings benefits like reduction in power consumption, lower cooling requirements for the computing processors, provision of a green environment, and so forth. In fact, computation time and energy are directly proportional to each other, and minimizing computation time may yield cost-effective energy consumption. Proficient scheduling of Bag-of-Tasks in the grid environment results in minimum computation time. In this paper, a novel discrete particle swarm optimization (DPSO) algorithm based on the particle's best position (pbDPSO) and global best position (gbDPSO) is adopted to find the global optimal solution for higher dimensions. This novel DPSO yields a better schedule with minimum computation time compared to the Earliest Deadline First (EDF) and First Come First Serve (FCFS) algorithms, which comparably reduces energy. Other scheduling parameters, such as job completion ratio and lateness, are also calculated and compared with EDF and FCFS. An energy improvement of up to 28% was obtained when Makespan Conservative Energy Reduction (MCER) and Dynamic Voltage Scaling (DVS) were used in the proposed DPSO algorithm.
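A minimal discrete PSO for Bag-of-Tasks mapping, in the spirit of the pbDPSO/gbDPSO update: each particle is a task-to-machine assignment vector that moves toward its personal best and the global best by copying entries. The copy probabilities and exploration rate are illustrative, not the paper's exact operators:

```python
import random

def makespan(assign, task_len, speed):
    """Completion time of the most loaded machine for a task->machine map."""
    load = [0.0] * len(speed)
    for t, m in enumerate(assign):
        load[m] += task_len[t] / speed[m]
    return max(load)

def dpso(task_len, speed, particles=20, iters=100, c1=0.4, c2=0.4, seed=1):
    """Discrete PSO minimizing makespan (a proxy for computation time and,
    by the abstract's argument, energy)."""
    rng = random.Random(seed)
    n, m = len(task_len), len(speed)
    pos = [[rng.randrange(m) for _ in range(n)] for _ in range(particles)]
    pbest = [p[:] for p in pos]
    pfit = [makespan(p, task_len, speed) for p in pos]
    g = min(range(particles), key=lambda i: pfit[i])
    gbest, gfit = pbest[g][:], pfit[g]
    for _ in range(iters):
        for i in range(particles):
            for t in range(n):
                roll = rng.random()
                if roll < c1:
                    pos[i][t] = pbest[i][t]       # pull toward personal best
                elif roll < c1 + c2:
                    pos[i][t] = gbest[t]          # pull toward global best
                elif rng.random() < 0.1:
                    pos[i][t] = rng.randrange(m)  # random exploration
            f = makespan(pos[i], task_len, speed)
            if f < pfit[i]:
                pbest[i], pfit[i] = pos[i][:], f
                if f < gfit:
                    gbest, gfit = pos[i][:], f
    return gbest, gfit

# Four tasks on two identical machines; the optimal makespan is 5.0.
best, span = dpso([4.0, 3.0, 2.0, 1.0], [1.0, 1.0])
```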
Detection of Voltage Disturbance Based on Kalman Filter Algorithm
Institute of Scientific and Technical Information of China (English)
任文琳; 赵庆生; 何志方
2012-01-01
In order to detect voltage disturbances in the power system in real time, an algorithm based on the Kalman filter is presented. It applies a new Kalman model to calculate the RMS value of the voltage signal, achieving real-time tracking of voltage sags and swells by setting a threshold on the RMS voltage deviation, and is compared against a sliding-window RMS algorithm. The simulation results demonstrate the real-time performance and reliability of the Kalman filter algorithm.
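A compact sketch of Kalman-filter RMS tracking for sag/swell detection: the state is the in-phase/quadrature amplitude of the fundamental, and the RMS estimate follows the state norm. The noise covariances, the 50 Hz fundamental and the random-walk state model are assumptions, not necessarily the paper's model:

```python
import math

def kalman_rms_track(samples, fs, f0=50.0, q=1e-4, r=0.01):
    """Track the RMS of a (possibly sagging/swelling) sinusoid with a
    2-state Kalman filter; measurement model z = a*cos(w t) + b*sin(w t)."""
    x = [0.0, 0.0]                        # amplitude state estimate [a, b]
    P = [[1.0, 0.0], [0.0, 1.0]]          # state covariance
    rms = []
    for k, z in enumerate(samples):
        wt = 2.0 * math.pi * f0 * k / fs
        H = [math.cos(wt), math.sin(wt)]  # measurement row vector
        P[0][0] += q; P[1][1] += q        # predict (random-walk model)
        PHt = [P[0][0] * H[0] + P[0][1] * H[1],
               P[1][0] * H[0] + P[1][1] * H[1]]
        S = H[0] * PHt[0] + H[1] * PHt[1] + r
        K = [PHt[0] / S, PHt[1] / S]      # Kalman gain
        innov = z - (H[0] * x[0] + H[1] * x[1])
        x[0] += K[0] * innov; x[1] += K[1] * innov
        p = [row[:] for row in P]         # covariance update P = (I - KH) P
        P[0][0] = (1 - K[0] * H[0]) * p[0][0] - K[0] * H[1] * p[1][0]
        P[0][1] = (1 - K[0] * H[0]) * p[0][1] - K[0] * H[1] * p[1][1]
        P[1][0] = -K[1] * H[0] * p[0][0] + (1 - K[1] * H[1]) * p[1][0]
        P[1][1] = -K[1] * H[0] * p[0][1] + (1 - K[1] * H[1]) * p[1][1]
        rms.append(math.hypot(x[0], x[1]) / math.sqrt(2.0))
    return rms

# A clean unit-amplitude 50 Hz wave sampled at 1 kHz: the RMS estimate
# settles near 1/sqrt(2); a sag/swell detector then thresholds this value.
samples = [math.sin(2.0 * math.pi * 50.0 * k / 1000.0) for k in range(400)]
est = kalman_rms_track(samples, fs=1000.0)
```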
Directory of Open Access Journals (Sweden)
Abbas Y. Al-Bayati
2010-01-01
Full Text Available Problem statement: A scaled hybrid Conjugate Gradient (CG) algorithm, of the kind usually used for solving non-linear functions, was presented and compared with two standard well-known NAG routines, yielding a new fast comparable algorithm. Approach: We proposed a new hybrid technique based on the combination of two well-known scaled CG formulas for the quadratic model in unconstrained optimization using exact line searches. A global convergence result for the new technique was proved when the Wolfe line search conditions were used. Results: Computational results for a set consisting of 1915 combinations of unconstrained optimization test problems/dimensions were obtained in this research, making a comparison between the new proposed algorithm and two other similar algorithms in this field. Conclusion: Our numerical results showed that this new scaled hybrid CG algorithm substantially outperforms the Andrei sufficient descent condition (CGSD) algorithm and the well-known Andrei standard sufficient descent condition (ACGA) algorithm.
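The quadratic-model setting with exact line searches can be illustrated with a generic hybrid nonlinear CG. The β-formula below is the classic hybrid max(0, min(β_PR, β_FR)), standing in for the authors' scaled formulas, which are not reproduced here:

```python
def hybrid_cg(A, b, x0, iters=50, tol=1e-10):
    """Hybrid CG on the quadratic f(x) = 0.5 x'Ax - b'x (A symmetric
    positive definite) with exact line search."""
    n = len(b)
    matvec = lambda M, v: [sum(M[i][j] * v[j] for j in range(n))
                           for i in range(n)]
    dot = lambda u, v: sum(a * c for a, c in zip(u, v))
    x = x0[:]
    g = [gi - bi for gi, bi in zip(matvec(A, x), b)]   # gradient Ax - b
    d = [-gi for gi in g]                              # initial direction
    for _ in range(iters):
        if dot(g, g) < tol:
            break
        Ad = matvec(A, d)
        alpha = -dot(g, d) / dot(d, Ad)                # exact line search
        x = [xi + alpha * di for xi, di in zip(x, d)]
        g_new = [gi + alpha * adi for gi, adi in zip(g, Ad)]
        beta_fr = dot(g_new, g_new) / dot(g, g)        # Fletcher-Reeves
        beta_pr = dot(g_new, [gn - gi for gn, gi in zip(g_new, g)]) / dot(g, g)
        beta = max(0.0, min(beta_pr, beta_fr))         # hybrid choice
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x

# 2x2 example: solution of [[4,1],[1,3]] x = [1,2] is (1/11, 7/11).
x = hybrid_cg([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0], [0.0, 0.0])
```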
Institute of Scientific and Technical Information of China (English)
赵晋利; 胡红萍; 李权
2015-01-01
Full Text Available To solve the problem of incomplete projection data reconstruction under variable voltage, a CT reconstruction algorithm based on linear fusion of variable-voltage projection data is proposed. The algorithm first extracts the effective information of the varying-voltage projection sequence according to the optimal gray band; using linear fusion, the projection data at a lower voltage are fused through a linear relation into the adjacent higher-voltage projection data, and so on up to the highest voltage, yielding a complete, high-dynamic-range set of projection data, which is then reconstructed with the TV-ART algorithm. Experiments show that the linear fusion algorithm not only achieves complete reconstruction of the variable-voltage image information of a complicated structural component, but also produces more stable pixel values.
Distributed algorithm for controlling scaled-free polygonal formations
Garcia de Marina Peinado, Hector; Jayawardhana, Bayu; Cao, Ming
2017-01-01
This paper presents a distributed algorithm for controlling the deployment of a team of agents in order to form a broad class of polygons, including regular ones, where each agent occupies a corner of the polygon. The algorithm shares the properties of the popular distance-based rigid formation c
DEFF Research Database (Denmark)
Seyyed Sakha, Masoud; Shaker, Hamid Reza
2017-01-01
expensive. The computational burden is significant in particular for large-scale systems. In this paper, we develop a new technique for placing sensor and actuator in large-scale systems by using Restricted Genetic Algorithm (RGA). The RGA is a kind of genetic algorithm which is developed specifically...
Martín, Víctor; Vale, Carmen; Rubiolo, Juan A; Roel, Maria; Hirama, Masahiro; Yamashita, Shuji; Vieytes, Mercedes R; Botana, Luís M
2015-06-15
Ciguatoxins are sodium channels activators that cause ciguatera, one of the most widespread nonbacterial forms of food poisoning, which presents with long-term neurological alterations. In central neurons, chronic perturbations in activity induce homeostatic synaptic mechanisms that adjust the strength of excitatory synapses and modulate glutamate receptor expression in order to stabilize the overall activity. Immediate early genes, such as Arc and Egr1, are induced in response to activity changes and underlie the trafficking of glutamate receptors during neuronal homeostasis. To better understand the long lasting neurological consequences of ciguatera, it is important to establish the role that chronic changes in activity produced by ciguatoxins represent to central neurons. Here, the effect of a 30 min exposure of 10-13 days in vitro (DIV) cortical neurons to the synthetic ciguatoxin CTX 3C on Arc and Egr1 expression was evaluated using real-time polymerase chain reaction approaches. Since the toxin increased the mRNA levels of both Arc and Egr1, the effect of CTX 3C in NaV channels, membrane potential, firing activity, miniature excitatory postsynaptic currents (mEPSCs), and glutamate receptors expression in cortical neurons after a 24 h exposure was evaluated using electrophysiological and western blot approaches. The data presented here show that CTX 3C induced an upregulation of Arc and Egr1 that was prevented by previous coincubation of the neurons with the NaV channel blocker tetrodotoxin. In addition, chronic CTX 3C caused a concentration-dependent shift in the activation voltage of NaV channels to more negative potentials and produced membrane potential depolarization. Moreover, 24 h treatment of cortical neurons with 5 nM CTX 3C decreased neuronal firing and induced synaptic scaling mechanisms, as evidenced by a decrease in the amplitude of mEPSCs and downregulation in the protein level of glutamate receptors that was also prevented by tetrodotoxin
High voltage stability of LiCoO2 particles with a nano-scale Lipon coating
Energy Technology Data Exchange (ETDEWEB)
Kim, Yoongu [ORNL; Veith, Gabriel M [ORNL; Nanda, Jagjit [ORNL; Unocic, Raymond R [ORNL; Dudney, Nancy J [ORNL
2011-01-01
For high-voltage cycling of rechargeable Li batteries, a nano-scale amorphous Li-ion conductor, lithium phosphorus oxynitride (Lipon), has been coated on surfaces of LiCoO{sub 2} particles by combining a RF-magnetron sputtering technique and mechanical agitation of LiCoO{sub 2} powders. LiCoO{sub 2} particles coated with 0.36 wt% ({approx}1 nm thick) of the amorphous Lipon, retain 90% of their original capacity compared to non-coated cathode materials that retain only 65% of their original capacity after more than 40 cycles in the 3.0-4.4 V range with a standard carbonate electrolyte. The reason for the better high-voltage cycling behavior is attributed to reduction in the side reactions that cause increase of the cell resistance during cycling. Further, Lipon coated particles are not damaged, whereas uncoated particles are badly cracked after cycling. Extending the charge of Lipon-coated LiCoO{sub 2} to higher voltage enhances the specific capacity, but more importantly the Lipon-coated material is also more stable and tolerant of high voltage excursions. A drawback of Lipon coating, particularly as thicker films are applied to cathode powders, is the increased electronic resistance that reduces the power performance.
Samadi, A.
2014-01-01
Long term supporting schemes for photovoltaic (PV) system installation have led to accommodating large numbers of PV systems within load pockets in distribution grids. High penetrations of PV systems can cause new technical challenges, such as voltage rise due to reverse power flow during light load
DEFF Research Database (Denmark)
Qu, Hao; Yang, Xijun; Guo, Yougui
2014-01-01
-sequence component injection, in order to reduce power loss and increase overall efficiency. Then, by reconstructing the other two phase input voltages and currents, the transformation from the stationary frame (abc) to the rotating frame (dq) is designed. Finally, a PI-regulator-based controller for single......Single-phase voltage source converter (VSC) is an important power electronic converter (PEC), including the single-phase voltage source inverter (VSI), single-phase voltage source rectifier (VSR), single-phase active power filter (APF) and single-phase grid-connected inverter (GCI). Single-phase VSC......
DEPAS: A Decentralized Probabilistic Algorithm for Auto-Scaling
Calcavecchia, Nicolo M; Di Nitto, Elisabetta; Dubois, Daniel J; Petcu, Dana
2012-01-01
The dynamic provisioning of virtualized resources offered by cloud computing infrastructures allows applications deployed in a cloud environment to automatically increase and decrease the amount of used resources. This capability is called auto-scaling and its main purpose is to automatically adjust the scale of the system that is running the application to satisfy the varying workload with minimum resource utilization. The need for auto-scaling is particularly important during workload peaks, in which applications may need to scale up to extremely large-scale systems. Both the research community and the main cloud providers have already developed auto-scaling solutions. However, most research solutions are centralized and not suitable for managing large-scale systems, moreover cloud providers' solutions are bound to the limitations of a specific provider in terms of resource prices, availability, reliability, and connectivity. In this paper we propose DEPAS, a decentralized probabilistic auto-scaling algorit...
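The decentralized idea is that each node decides probabilistically from its own load, so that the expected global capacity change matches the aggregate overload or underload without any coordinator. One scaling round can be sketched as follows (the target band and probability rule are illustrative assumptions, not DEPAS's exact formulas):

```python
import random

def depas_step(loads, target=0.6, band=0.1, rng=random.random):
    """One decentralized auto-scaling round: every node inspects only its
    own utilization and votes to add a node (if overloaded) or remove
    itself (if underloaded) with probability proportional to the excess or
    slack relative to the target utilization."""
    add, remove = 0, 0
    for load in loads:
        if load > target + band:
            # Overloaded: request an extra node with probability ~ excess.
            if rng() < min(1.0, (load - target) / target):
                add += 1
        elif load < target - band:
            # Underloaded: remove self with probability ~ slack.
            if rng() < min(1.0, (target - load) / target):
                remove += 1
    return add, remove
```

With a deterministic `rng` for testing, every out-of-band node acts; with the real `random.random`, only the expected counts match, which is what keeps the scheme stable at large scale.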
An Improved Genetic Algorithm for the Large-Scale Rural Highway Network Layout
Directory of Open Access Journals (Sweden)
Changxi Ma
2014-01-01
Full Text Available For the layout problem of rural highway networks, which is often characterized by a cluster of geographically dispersed nodes, neither the Prim algorithm nor the Kruskal algorithm can be readily applied, because their calculating speed and accuracy are by no means satisfactory. Rather than these two polynomial algorithms or the traditional genetic algorithm, this paper proposes an improved genetic algorithm. It encodes the minimum spanning trees of the large-scale rural highway network layout with the Prufer array, a method which can reduce the length of the chromosome; it decodes the Prufer array using an efficient algorithm with time complexity O(n); and it adopts the single transposition method and the orthoposition exchange method as substitutes for traditional crossover and mutation operations, which can effectively overcome the prematurity of the genetic algorithm. Computer simulation tests and a case study confirm that the improved genetic algorithm is better than the traditional one.
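Prufer decoding, the step the abstract highlights, maps a length-(n-2) array back to a labeled spanning tree. The sketch below uses the standard heap-based O(n log n) variant rather than the paper's O(n) decoder:

```python
import heapq

def prufer_decode(prufer, n):
    """Rebuild a labeled tree (as an edge list) on nodes 0..n-1 from its
    Prufer array; requires len(prufer) == n - 2."""
    degree = [1] * n
    for v in prufer:
        degree[v] += 1            # each appearance adds one tree edge
    leaves = [v for v in range(n) if degree[v] == 1]
    heapq.heapify(leaves)
    edges = []
    for v in prufer:
        leaf = heapq.heappop(leaves)   # smallest current leaf
        edges.append((leaf, v))
        degree[v] -= 1
        if degree[v] == 1:             # v has become a leaf itself
            heapq.heappush(leaves, v)
    # Exactly two leaves remain; they form the final edge.
    edges.append((heapq.heappop(leaves), heapq.heappop(leaves)))
    return edges
```

The array [3, 3] on four nodes, for instance, decodes to the star centered at node 3; the short fixed-length chromosome is exactly why the encoding suits a GA over spanning trees.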
A hybrid genetic algorithm based on mutative scale chaos optimization strategy
Institute of Scientific and Technical Information of China (English)
无
2002-01-01
In order to avoid such problems as low convergence speed and local optimal solutions in simple genetic algorithms, a new hybrid genetic algorithm is proposed. In this algorithm, a mutative-scale chaos optimization strategy is applied to the population after each genetic operation. As the search proceeds, the search space of the optimization variables is gradually diminished and the regulating coefficient of the secondary search process is gradually changed, which leads to quick evolution of the population. The algorithm has such advantages as fast search, precise results and convenient use. The simulation results show that the performance of the method is better than that of simple genetic algorithms.
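A minimal mutative-scale chaos search on one variable illustrates the strategy: a logistic-map sequence serves as the chaotic carrier, and the search interval contracts around the incumbent best after each round. All constants (map parameter, shrink factor, round counts) are illustrative:

```python
def chaos_search(f, lo, hi, rounds=6, steps=200, shrink=0.5, cx=0.345):
    """Mutative-scale chaos optimization sketch: chaotic probing of the
    interval, then contraction of the interval around the best point."""
    best_x, best_f = None, float("inf")
    for _ in range(rounds):
        for _ in range(steps):
            cx = 4.0 * cx * (1.0 - cx)    # logistic map, mu = 4 (chaotic)
            x = lo + cx * (hi - lo)       # carrier: map chaos into [lo, hi]
            fx = f(x)
            if fx < best_f:
                best_x, best_f = x, fx
        # Mutative scale: contract the search interval around the incumbent.
        span = (hi - lo) * shrink
        lo = max(lo, best_x - span / 2.0)
        hi = min(hi, best_x + span / 2.0)
    return best_x, best_f

# Minimize (x - 2)^2 over [0, 10]; the shrinking interval refines the guess.
x_min, f_min = chaos_search(lambda x: (x - 2.0) ** 2, 0.0, 10.0)
```

In the hybrid GA this refinement would run on the population between genetic operations rather than on a single variable, but the shrinking-interval mechanism is the same.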
Scaling algorithms for the calculation of solar radiative fluxes
Energy Technology Data Exchange (ETDEWEB)
Suzuki, Tsuneaki [Frontier Research Center for Global Change, Japan Agency for Marine-Earth Science and Technology, 3173-25 Showa-machi, Kanazawa-ku, Yokohama, Kanagawa 236-0001 (Japan)], E-mail: tsuneaki@jamstec.go.jp; Nakajima, Teruyuki [Center for Climate System Research, University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa, Chiba 277-8568 (Japan); Tanaka, Masayuki [Department of Environmental Information Engineering, Tohoku Institute of Technology, 35-1 Kasumi-cho, Yagiyama, Taihaku-ku, Sendai, Miyagi 982-8577 (Japan)
2007-10-15
We derived new scaling formulae based on the method of successive orders of scattering to calculate solar radiative flux. In this report, we demonstrate a multiple scaling method, in which we introduce scaling factors for each scattering order independently. The formula of radiative transfer by the method of successive orders of scattering cannot be solved rapidly except in the case of optically thin atmospheres. Then we further derived a double scaling method, which scales the ordinary radiative transfer equation by two scaling factors. We applied the double scaling method to two-stream and four-stream approximations of the discrete ordinates method. Comparing the results of the double scaling method with those of the delta-M method, we found that the double scaling method improved the accuracy of radiative fluxes at large solar zenith angles, especially in the optically thin region, and that in the region where multiple scattering dominates, its accuracy was comparable to that of the delta-M method. Once we determined the scaling factors appropriately, the double scaling method calculated radiative fluxes as rapidly as the delta-M method in the two-stream and four-stream approximations. This method, therefore, is useful for accurate computation of solar radiative fluxes in general circulation models.
DEFF Research Database (Denmark)
Zhao, Zhuoli; Yang, Ping; Guerrero, Josep M.
2016-01-01
In this paper, an islanded medium-voltage (MV) microgrid located on Dongao Island is presented, which integrates renewable-energy-based distributed generations (DGs), an energy storage system (ESS), and local loads. In an isolated microgrid without a connection to the main grid to support the frequency...... of Zone B. Theoretical analysis, time-domain simulation and field test results under various conditions and scenarios in the Dongao Island microgrid are presented to prove the validity of the introduced control strategy....
Decomposition of Large Scale Semantic Graphs via an Efficient Communities Algorithm
Energy Technology Data Exchange (ETDEWEB)
Yao, Y
2008-02-08
Semantic graphs have become key components in analyzing complex systems such as the Internet, or biological and social networks. These types of graphs generally consist of sparsely connected clusters or 'communities' whose nodes are more densely connected to each other than to other nodes in the graph. The identification of these communities is invaluable in facilitating the visualization, understanding, and analysis of large graphs by producing subgraphs of related data whose interrelationships can be readily characterized. Unfortunately, the ability of LLNL to effectively analyze the terabytes of multi-source data at its disposal has remained elusive, since existing decomposition algorithms become computationally prohibitive for graphs of this size. We have addressed this limitation by developing more efficient algorithms for discerning community structure that can effectively process massive graphs. Current algorithms for detecting community structure, such as the high-quality algorithm developed by Girvan and Newman [1], are only capable of processing relatively small graphs. The cubic complexity of the Girvan-Newman algorithm, for example, makes it impractical for graphs with more than approximately 10{sup 4} nodes. Our goal for this project was to develop methodologies and corresponding algorithms capable of effectively processing graphs with up to 10{sup 9} nodes. From a practical standpoint, we expect the developed scalable algorithms to help resolve a variety of operational issues associated with the productive use of semantic graphs at LLNL. During FY07, we completed a graph clustering implementation that leverages a dynamic graph transformation to more efficiently decompose large graphs. In essence, our approach dynamically transforms the graph (or subgraphs) into a tree structure consisting of biconnected components interconnected by bridge links. This isomorphism allows us to compute edge betweenness, the chief source of inefficiency in Girvan and
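The edge-betweenness computation at the heart of Girvan-Newman style decomposition can be sketched with Brandes' BFS accumulation; this is a generic illustration, not the biconnected-component-accelerated implementation described above:

```python
from collections import deque, defaultdict

def edge_betweenness(adj):
    """Brandes-style edge betweenness for an unweighted, undirected
    graph given as {node: [neighbors]}."""
    eb = defaultdict(float)
    for s in adj:
        # BFS from s: distances, shortest-path counts, predecessor lists
        dist = {s: 0}
        sigma = defaultdict(float)
        sigma[s] = 1.0
        preds = defaultdict(list)
        order = []
        q = deque([s])
        while q:
            v = q.popleft()
            order.append(v)
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # accumulate path dependencies from the leaves back to s
        delta = defaultdict(float)
        for w in reversed(order):
            for v in preds[w]:
                c = sigma[v] / sigma[w] * (1.0 + delta[w])
                eb[frozenset((v, w))] += c
                delta[v] += c
    # undirected: every (s, t) pair was counted from both endpoints
    return {e: b / 2.0 for e, b in eb.items()}

# two triangles joined by the bridge (2, 3)
g = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
     3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
eb = edge_betweenness(g)
bridge = max(eb, key=eb.get)   # the bridge carries the most shortest paths
```

Removing the highest-betweenness edge (here the bridge, crossed by all 3 x 3 inter-triangle shortest paths) is exactly the splitting step that exposes the two communities.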
Reduced Complexity Divide and Conquer Algorithm for Large Scale TSPs
Directory of Open Access Journals (Sweden)
Hoda A. Darwish
2014-01-01
Full Text Available The Traveling Salesman Problem (TSP) is the problem of finding the shortest path that passes through all given cities, visiting each city exactly once and finishing at the starting city. The problem is NP-hard, making it impractical to compute the optimal path even for instances as small as 20 cities, since the number of permutations becomes too large. Many heuristic methods have been devised to reach "good" solutions in reasonable time. In this paper, we present the idea of utilizing a spatial "geographical" divide-and-conquer technique in conjunction with heuristic TSP algorithms, specifically the Nearest Neighbor 2-opt algorithm. We have found that the proposed algorithm has lower complexity than algorithms published in the literature, at an accuracy expense of around 9%. It is our belief that the presented approach will be welcomed by the community, especially for large problems where a reasonable solution can be reached in a fraction of the time.
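A minimal sketch of the geographical divide-and-conquer idea (split at the median x-coordinate, solve each half with nearest-neighbor plus 2-opt, stitch with a final 2-opt pass) might look like this; the paper's exact partitioning and merging rules may differ:

```python
import math
import random

def tour_len(pts, tour):
    return sum(math.dist(pts[tour[i]], pts[tour[i - 1]]) for i in range(len(tour)))

def nearest_neighbor(pts, idxs):
    # greedy tour over the given subset of city indices
    unvisited = set(idxs[1:])
    tour = [idxs[0]]
    while unvisited:
        nxt = min(unvisited, key=lambda j: math.dist(pts[tour[-1]], pts[j]))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

def two_opt(pts, tour):
    # repeatedly reverse segments while that shortens the tour
    improved = True
    while improved:
        improved = False
        n = len(tour)
        for i in range(1, n - 1):
            for j in range(i + 1, n):
                a, b = tour[i - 1], tour[i]
                c, d = tour[j], tour[(j + 1) % n]
                if (math.dist(pts[a], pts[c]) + math.dist(pts[b], pts[d])
                        < math.dist(pts[a], pts[b]) + math.dist(pts[c], pts[d]) - 1e-12):
                    tour[i:j + 1] = reversed(tour[i:j + 1])
                    improved = True
    return tour

def divide_and_conquer_tsp(pts):
    # split cities at the median x-coordinate, solve each half, then stitch
    idxs = sorted(range(len(pts)), key=lambda i: pts[i][0])
    half = len(idxs) // 2
    left = two_opt(pts, nearest_neighbor(pts, idxs[:half]))
    right = two_opt(pts, nearest_neighbor(pts, idxs[half:]))
    return two_opt(pts, left + right)   # final 2-opt pass repairs the seam

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(40)]
tour = divide_and_conquer_tsp(pts)
```

Because 2-opt is quadratic per pass, running it on two halves of size n/2 plus a cheap stitch is roughly half the work of running it on the whole instance, which is the complexity saving the abstract trades against accuracy.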
Optimization algorithm for capacitor voltage balance of multilevel inverters%多电平逆变器通用电容电压平衡优化算法
Institute of Scientific and Technical Information of China (English)
李俊杰; 姜建国; 戴鹏; 乔树通
2015-01-01
针对二极管箝位型多电平逆变器电容电压难以控制的问题，分析了一种简单的多电平逆变器等效模型，用于预测结点电压偏差，提出了一种多电平逆变器电容电压平衡优化SVPWM ( space vector pulse width modulation)算法。该算法通过预测不同开关状态下直流侧结点电压偏差，建立目标函数并对其寻优，在每个开关周期选取最优的开关组合达到结点电压平衡。理论分析和试验结果表明，该算法适用于任意电平逆变器电容电压平衡的控制，解决了三电平逆变器电容电压平衡的问题，但在三电平以上的逆变器受调制度限制。针对三电平以上高调制度下电容电压失衡的原因进行了分析，并给出了解决方法。仿真和实验验证了算法的正确性。%Capacitor voltages of diode-clamped multilevel inverters are hard to control. To solve this problem, a simple equivalent model of multilevel inverters for predicting the deviation of node voltages was analyzed, and an optimization algorithm of space vector pulse width modulation (SVPWM) for capacitor voltage balance of diode-clamped multilevel inverters was presented. The algorithm predicts the deviation of the node voltages for different switch states, builds an objective function, and optimizes it; in each switching cycle the optimal switch combination is selected to reach node-voltage balance. Theoretical analysis and experimental results show that the algorithm is applicable to inverters with an arbitrary number of levels. It solves the capacitor voltage balance problem of three-level inverters, but for inverters with more than three levels it is restricted by the modulation index. The cause of capacitor voltage imbalance at high modulation index for inverters with more than three levels is analyzed, and a solution is given. Simulation and experimental results verify the correctness of the algorithm.
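The selection step described above can be sketched as follows, using a deliberately simplified, hypothetical midpoint-current model of a three-level NPC inverter rather than the paper's full SVPWM model; all numbers are illustrative:

```python
# Hypothetical simplified model: in a three-level NPC inverter each phase is
# connected to rail -1, midpoint 0, or rail +1; phases clamped to the midpoint
# inject their current into the neutral point and move its voltage.
def predict_midpoint(v_mid, state, phase_currents, c_dc=1e-3, t_s=1e-4):
    i_np = sum(i for s, i in zip(state, phase_currents) if s == 0)
    return v_mid + t_s * i_np / c_dc        # forward-Euler node-voltage update

def pick_state(v_mid, v_ref, candidates, phase_currents):
    # objective: squared predicted deviation of the midpoint from its reference
    def cost(state):
        return (predict_midpoint(v_mid, state, phase_currents) - v_ref) ** 2
    return min(candidates, key=cost)

# two redundant small-vector states that produce the same output voltage
redundant = [(1, 0, 0), (0, -1, -1)]
best = pick_state(v_mid=305.0, v_ref=300.0, candidates=redundant,
                  phase_currents=(8.0, -3.0, -5.0))
```

With the midpoint above its reference, the state whose predicted neutral-point current discharges the midpoint wins, which is the per-switching-cycle optimization the abstract describes.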
Directory of Open Access Journals (Sweden)
Hui He
2013-01-01
Full Text Available It is of great significance to research early warning systems for large-scale network security incidents. Such a system can improve the network system's emergency response capabilities, alleviate the damage of cyber attacks, and strengthen the system's counterattack ability. A comprehensive early warning system is presented in this paper, which combines active measurement and anomaly detection. The key visualization algorithms and technology of the system are mainly discussed. Plane visualization of the large-scale network system is realized based on the divide-and-conquer approach. First, the topology of the large-scale network is divided into small-scale networks by the MLkP/CR algorithm. Second, the subgraph plane visualization algorithm is applied to each small-scale network. Finally, the small-scale networks' topologies are combined into the full topology by the automatic distribution algorithm based on force analysis. As the algorithm transforms the large-scale network topology plane visualization problem into a series of small-scale network topology plane visualization and distribution problems, it has higher parallelism and is able to handle the display of ultra-large-scale network topologies.
He, Hui; Fan, Guotao; Ye, Jianwei; Zhang, Weizhe
2013-01-01
It is of great significance to research early warning systems for large-scale network security incidents. Such a system can improve the network system's emergency response capabilities, alleviate the damage of cyber attacks, and strengthen the system's counterattack ability. A comprehensive early warning system is presented in this paper, which combines active measurement and anomaly detection. The key visualization algorithms and technology of the system are mainly discussed. Plane visualization of the large-scale network system is realized based on the divide-and-conquer approach. First, the topology of the large-scale network is divided into small-scale networks by the MLkP/CR algorithm. Second, the subgraph plane visualization algorithm is applied to each small-scale network. Finally, the small-scale networks' topologies are combined into the full topology by the automatic distribution algorithm based on force analysis. As the algorithm transforms the large-scale network topology plane visualization problem into a series of small-scale network topology plane visualization and distribution problems, it has higher parallelism and is able to handle the display of ultra-large-scale network topologies.
A Scale and Pose Invariant Algorithm for Fast Detecting Human Faces in a Complex Background
Institute of Scientific and Technical Information of China (English)
XING Xin; SHEN Lansun; JIA Kebin
2001-01-01
Human face detection is an interesting and challenging task in computer vision. A scale- and pose-invariant algorithm is proposed in this paper. The algorithm is able to detect human faces in a complex background in about 400 ms with a detection rate of 92%. The algorithm can be used in a wide range of applications such as human-computer interfaces, video coding, etc.
Multidimensional Scaling and Genetic Algorithms : A Solution Approach to Avoid Local Minima
Etschberger, Stefan; Hilbert, Andreas
2002-01-01
Multidimensional scaling is very common in exploratory data analysis. It is mainly used to represent sets of objects with respect to their proximities in a low-dimensional Euclidean space. Widely used optimization algorithms try to improve the representation by shifting its coordinates in the direction of the negative gradient of a corresponding fit function. Depending on the initial configuration, the chosen algorithm and its parameter settings, there is a possibility for the algorithm to termin...
Efficient algorithms for large-scale quantum transport calculations
Brück, Sascha; Calderara, Mauro; Bani-Hashemian, Mohammad Hossein; VandeVondele, Joost; Luisier, Mathieu
2017-08-01
Massively parallel algorithms are presented in this paper to reduce the computational burden associated with quantum transport simulations from first-principles. The power of modern hybrid computer architectures is harvested in order to determine the open boundary conditions that connect the simulation domain with its environment and to solve the resulting Schrödinger equation. While the former operation takes the form of an eigenvalue problem that is solved by a contour integration technique on the available central processing units (CPUs), the latter can be cast into a linear system of equations that is simultaneously processed by SplitSolve, a two-step algorithm, on general-purpose graphics processing units (GPUs). A significant decrease of the computational time by up to two orders of magnitude is obtained as compared to standard solution methods.
Localized density matrix minimization and linear scaling algorithms
Lai, Rongjie
2015-01-01
We propose a convex variational approach to compute localized density matrices for both the zero temperature and finite temperature cases, by adding an entry-wise $\ell_1$ regularization to the free energy of the quantum system. Based on the fact that the density matrix decays exponentially away from the diagonal for insulating systems or systems at finite temperature, the proposed $\ell_1$-regularized variational method provides a good way to approximate the original quantum system. We provide theoretical analysis of the approximation behavior and also design convergence-guaranteed numerical algorithms based on Bregman iteration. More importantly, the $\ell_1$-regularized system naturally leads to localized density matrices with banded structure, which enables us to develop approximation algorithms that find the localized density matrices with computational cost linearly dependent on the problem size.
AN ADAPTIVE DIGITAL IMAGE WATERMARK ALGORITHM BASED ON GRAY-SCALE MORPHOLOGY
Institute of Scientific and Technical Information of China (English)
Tong Ming; Hu Jia; Ji Hongbing
2009-01-01
An adaptive digital image watermark algorithm with strong robustness based on gray-scale morphology is proposed in this paper. The embedding strategy is as follows: the algorithm adaptively seeks and extracts the strong-texture regions of the image, maps the strong-texture regions to wavelet tree structures, and adaptively embeds the watermark into the wavelet coefficients corresponding to those regions. According to visual masking features, the algorithm adaptively adjusts the watermark-embedding intensity. Experimental results show the algorithm is robust to compression, filtering and noise, as well as to strong shear attacks. The algorithm is a blind watermarking scheme. The morphology-based extraction of strong-texture regions used in this algorithm is simple, effective and adaptive to various images.
A trust-region and affine scaling algorithm for linearly constrained optimization
Institute of Scientific and Technical Information of China (English)
陈中文; 章祥荪
2002-01-01
A new trust-region and affine scaling algorithm for linearly constrained optimization is presented in this paper. Without any nondegeneracy assumption, we prove that any limit point of the sequence generated by the new algorithm satisfies the first-order necessary condition, and that there exists at least one limit point of the sequence which satisfies the second-order necessary condition. Some preliminary numerical experiments are reported.
ALGORITHM FOR DYNAMIC SCALING RELATIONAL DATABASE IN CLOUDS
Alexander V. Boichenko; Dminry K. Rogojin; Dmitry G. Korneev
2014-01-01
This article analyzes the main methods of scaling databases (replication, sharding) and their support in popular relational databases and NoSQL solutions with different data models: document-oriented, key-value, column-oriented and graph. The article presents an algorithm for the dynamic scaling of a relational database (DB) that takes into account the specifics of the different types of logical database model. This article was prepared with the support of RFBR (grant № 13-07-00749).
ALGORITHM FOR DYNAMIC SCALING RELATIONAL DATABASE IN CLOUDS
Directory of Open Access Journals (Sweden)
Alexander V. Boichenko
2014-01-01
Full Text Available This article analyzes the main methods of scaling databases (replication, sharding) and their support in popular relational databases and NoSQL solutions with different data models: document-oriented, key-value, column-oriented and graph. The article presents an algorithm for the dynamic scaling of a relational database (DB) that takes into account the specifics of the different types of logical database model. This article was prepared with the support of RFBR (grant № 13-07-00749).
Energy Technology Data Exchange (ETDEWEB)
Lytvynenko, Ia.M. [Sumy State University, 2, Rimskogo-Korsakova Str., 40007 Sumy (Ukraine); Hauet, T., E-mail: thomas.hauet@univ-lorraine.fr [Institut Jean Lamour, UMR CNRS 7198, Université de Lorraine, 54506 Vandoeuvre les Nancy (France); Montaigne, F. [Institut Jean Lamour, UMR CNRS 7198, Université de Lorraine, 54506 Vandoeuvre les Nancy (France); Bibyk, V.V. [Sumy State University, 2, Rimskogo-Korsakova Str., 40007 Sumy (Ukraine); Andrieu, S. [Institut Jean Lamour, UMR CNRS 7198, Université de Lorraine, 54506 Vandoeuvre les Nancy (France)
2015-12-15
Interplay between voltage-induced magnetic anisotropy transition and voltage-induced atomic diffusion is studied in epitaxial V/Fe (0.7 nm)/ MgO/ Fe(5 nm)/Co/Au magnetic tunnel junction where thin Fe soft electrode has in-plane or out-of-plane anisotropy depending on the sign of the bias voltage. We investigate the origin of the slow resistance variation occurring when switching bias voltage in opposite polarity. We demonstrate that the time to reach resistance stability after voltage switching is reduced when increasing the voltage amplitude or the temperature. A single energy barrier of about 0.2 eV height is deduced from temperature dependence. Finally, we demonstrate that the resistance change is not correlated to a change in soft electrode anisotropy. This conclusion contrasts with observations recently reported on analogous systems. - Highlights: • Voltage-induced time dependence of resistance is studied in epitaxial Fe/MgO/Fe. • Resistance change is not related to the bottom Fe/MgO interface. • The effect is thermally activated with an energy barrier of the order of 0.2 eV height.
Topology Management Algorithms for Large Scale Aerial High Capacity Directional Networks
2016-11-01
• Introduction of classes of topology management algorithms and example implementations of each
• Performance evaluation of the algorithms in 2 example relevant...
Wang, Joy; Shake, Thomas; Deutsch, Patricia; Coyle, Andrea; Bow...
A key challenge in an airborne backbone network is large-scale topology management of directional links in a dynamic environment. In this paper, we present several...
Gray Weighted CT Reconstruction Algorithm Based on Variable Voltage%基于灰度加权的变电压CT重建算法
Institute of Scientific and Technical Information of China (English)
李权; 陈平; 潘晋孝
2014-01-01
In conventional CT reconstruction based on a fixed voltage, the projective data often appear overexposed or underexposed, so the reconstruction results are poor. To solve this problem, variable-voltage CT reconstruction is proposed. Effective projection sequences of a structural component are obtained at the variable voltages. The total variation is adjusted and minimized to optimize the reconstruction on the basis of the iterative image obtained with the ART algorithm. In the reconstruction process, according to the gray weighted algorithm, the reconstructed image at a low voltage is used as the initial value for the effective projection reconstruction at the adjacent higher voltage, and so on up to the highest voltage; in this way the complete structural information is reconstructed. Experiments show that the proposed algorithm can completely reflect the information of a complicated structural component, and the pixel values are more stable.%常规固定电压CT重建，由于过曝光和欠曝光导致的不完全投影信息，成像质量差，为此提出变电压CT重建。通过变电压获得跟工件有效厚度相匹配的有效投影序列，在ART迭代图像的基础上，调整全变差使其最小化，来优化重建。在重建过程中，依据灰度加权，把低电压的重建图像作为初值，应用在相邻高电压有效投影重建中，得到相邻高电压的重建图像，依次类推直至最高电压，工件的全部结构信息重建完毕。实验表明，灰度加权算法不仅实现了变电压图像信息的完整重建，像素值也更加稳定。
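A minimal sketch of the ART (Kaczmarz) iteration with a warm start, the core ingredient of the scheme above in which the lower-voltage reconstruction seeds the next one, might look like this on a toy 2x2 image; the matrix and data are illustrative only:

```python
def art(A, b, x0, sweeps=50, relax=1.0):
    """Kaczmarz/ART: cyclically project the image onto each ray equation.
    A: list of rows of projection weights, b: measured ray sums,
    x0: initial image (here playing the role of the lower-voltage result)."""
    x = list(x0)
    for _ in range(sweeps):
        for row, bi in zip(A, b):
            dot = sum(r * xi for r, xi in zip(row, x))
            nrm = sum(r * r for r in row)
            if nrm > 0.0:
                lam = relax * (bi - dot) / nrm
                x = [xi + lam * r for xi, r in zip(x, row)]
    return x

# 2x2 image scanned by row sums and column sums; a consistent toy system
A = [[1, 1, 0, 0], [0, 0, 1, 1], [1, 0, 1, 0], [0, 1, 0, 1]]
b = [3.0, 7.0, 4.0, 6.0]
# warm start from a (hypothetical) lower-voltage reconstruction, not zeros
x = art(A, b, x0=[1.0, 1.5, 2.5, 4.0])
```

For a consistent underdetermined system, Kaczmarz converges to the solution closest to the starting point, which is why seeding with the previous voltage's image steers the result toward the already-recovered structure.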
HMC algorithm with multiple time scale integration and mass preconditioning
Jansen, K; Urbach, C; Wenger, U
2005-01-01
We describe a new HMC algorithm variant that we recently introduced and extend the published results with preliminary results of a simulation at a pseudoscalar mass of about 300 MeV. This new run confirms our expectation that simulations at such pseudoscalar mass values become feasible and affordable with our HMC variant. In addition, we discuss simulations from hot and cold starts at a pseudoscalar mass of about 300 MeV, which we performed in order to test for possible metastabilities.
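A generic two-time-scale (Sexton-Weingarten style) leapfrog step of the kind used in such HMC variants can be sketched on a toy split harmonic oscillator; this illustrates the nested integrator only, not the lattice force computation or mass preconditioning:

```python
def nested_leapfrog(q, p, f_fast, f_slow, dt, n_inner):
    """One outer step of a two-time-scale scheme: half kick from the slow
    force, n_inner leapfrog steps under the fast force, then the second
    slow half kick."""
    p += 0.5 * dt * f_slow(q)
    h = dt / n_inner
    for _ in range(n_inner):
        p += 0.5 * h * f_fast(q)
        q += h * p
        p += 0.5 * h * f_fast(q)
    p += 0.5 * dt * f_slow(q)
    return q, p

# toy split system: V(q) = 0.5*(k_fast + k_slow)*q^2, forces f = -k*q
k_fast, k_slow = 100.0, 1.0
q, p = 1.0, 0.0
energy0 = 0.5 * p * p + 0.5 * (k_fast + k_slow) * q * q
for _ in range(1000):
    q, p = nested_leapfrog(q, p,
                           lambda x: -k_fast * x, lambda x: -k_slow * x,
                           dt=0.05, n_inner=10)
energy = 0.5 * p * p + 0.5 * (k_fast + k_slow) * q * q
```

The expensive (slow) force is evaluated only once per outer step while the cheap stiff force takes the small steps, which is what makes simulations at light pseudoscalar masses affordable.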
Fernández, Michael; Fernández, Leyden; Abreu, Jose Ignacio; Garriga, Miguel
2008-06-01
Voltage-gated K(+) ion channels (VKCs) are membrane proteins that regulate the passage of potassium ions through membranes. This work reports a classification scheme for VKCs according to the signs of three electrophysiological variables: activation threshold voltage (V(t)), half-activation voltage (V(a50)) and half-inactivation voltage (V(h50)). A novel 3D pseudo-folding graph representation of protein sequences encoded the VKC sequences. Amino acid pseudo-folding 3D distances count (AAp3DC) descriptors, calculated from the Euclidean distance matrices (EDMs), were tested for building the classifiers. Genetic algorithm (GA)-optimized support vector machines (SVMs) with a radial basis function (RBF) kernel discriminated well between VKCs having negative and positive/zero V(t), V(a50) and V(h50) values, with overall accuracies of about 80, 90 and 86%, respectively, in a cross-validation test. We found contributions of the "pseudo-core" and "pseudo-surface" of the 3D pseudo-folded proteins to the discrimination between VKCs according to the three electrophysiological variables.
Focusing of Spotlight Tandem-Configuration Bistatic Data with Frequency Scaling Algorithm
Directory of Open Access Journals (Sweden)
Shichao Chen
2016-01-01
Full Text Available A frequency scaling (FS) imaging algorithm is proposed for spotlight bistatic SAR data processing. Range cell migration correction (RCMC) is realized through phase multiplication. The proposed algorithm is insensitive to the length of the baseline owing to the high precision of the point target (PT) spectrum on which it is based, and it is capable of handling bistatic SAR data with a large baseline-to-range ratio. Algorithms suitable for small and high squint angles are both discussed, according to whether the range dependence of the second range compression (SRC) can be neglected or not. Simulated experiments validate the effectiveness of the proposed algorithm.
Institute of Scientific and Technical Information of China (English)
Chunxia Jia; Detong Zhu
2008-01-01
In this paper we propose an affine scaling interior algorithm via a conjugate gradient path for solving nonlinear equality systems subject to bounds on the variables. By employing the affine scaling conjugate gradient path search strategy, we obtain an iterative direction by solving the linearized model. Using a line search technique, we find an acceptable trial step length along this direction which is strictly feasible and makes the objective function nonmonotonically decreasing. The global convergence and fast local convergence rate of the proposed algorithm are established under some reasonable conditions. Furthermore, numerical results indicate that the proposed algorithm is effective.
A deficit scaling algorithm for the minimum flow problem
Indian Academy of Sciences (India)
Laura Ciupală
2006-06-01
In this paper, we develop a new preflow algorithm for the minimum flow problem, called the deficit scaling algorithm. This is a special implementation of the generic preflow algorithm for the minimum flow problem developed earlier by Ciurea and Ciupală. The bottleneck operation in the generic preflow algorithm is the number of noncancelling pulls. Using the scaling technique (i.e. selecting the active nodes with sufficiently large deficits), we reduce the number of noncancelling pulls to $O(n^2 \log \bar{c})$ and obtain an $O(nm+n^2 \log \bar{c})$ algorithm.
Wafer-scale solution-derived molecular gate dielectrics for low-voltage graphene electronics
Energy Technology Data Exchange (ETDEWEB)
Sangwan, Vinod K.; Jariwala, Deep; McMorrow, Julian J.; He, Jianting; Lauhon, Lincoln J. [Department of Materials Science and Engineering, Northwestern University, Evanston, Illinois 60208 (United States); Everaerts, Ken [Department of Chemistry, Northwestern University, Evanston, Illinois 60208 (United States); Grayson, Matthew [Department of Electrical Engineering and Computer Science, Northwestern University, Evanston, Illinois 60208 (United States); Marks, Tobin J., E-mail: t-marks@northwestern.edu, E-mail: m-hersam@northwestern.edu; Hersam, Mark C., E-mail: t-marks@northwestern.edu, E-mail: m-hersam@northwestern.edu [Department of Materials Science and Engineering, Northwestern University, Evanston, Illinois 60208 (United States); Department of Chemistry, Northwestern University, Evanston, Illinois 60208 (United States)
2014-02-24
Graphene field-effect transistors are integrated with solution-processed multilayer hybrid organic-inorganic self-assembled nanodielectrics (SANDs). The resulting devices exhibit low-operating voltage (2 V), negligible hysteresis, current saturation with intrinsic gain >1.0 in vacuum (pressure < 2 × 10{sup −5} Torr), and overall improved performance compared to control devices on conventional SiO{sub 2} gate dielectrics. Statistical analysis of the field-effect mobility and residual carrier concentration demonstrate high spatial uniformity of the dielectric interfacial properties and graphene transistor characteristics over full 3 in. wafers. This work thus establishes SANDs as an effective platform for large-area, high-performance graphene electronics.
Wafer-scale solution-derived molecular gate dielectrics for low-voltage graphene electronics
Sangwan, Vinod K.; Jariwala, Deep; Everaerts, Ken; McMorrow, Julian J.; He, Jianting; Grayson, Matthew; Lauhon, Lincoln J.; Marks, Tobin J.; Hersam, Mark C.
2014-02-01
Graphene field-effect transistors are integrated with solution-processed multilayer hybrid organic-inorganic self-assembled nanodielectrics (SANDs). The resulting devices exhibit low-operating voltage (2 V), negligible hysteresis, current saturation with intrinsic gain >1.0 in vacuum (pressure < 2 × 10-5 Torr), and overall improved performance compared to control devices on conventional SiO2 gate dielectrics. Statistical analysis of the field-effect mobility and residual carrier concentration demonstrate high spatial uniformity of the dielectric interfacial properties and graphene transistor characteristics over full 3 in. wafers. This work thus establishes SANDs as an effective platform for large-area, high-performance graphene electronics.
A Fast and Simple Algorithm for Detecting Large Scale Structures
Pillastrini, Giovanni C Baiesi
2013-01-01
Aims: we propose a gravitational potential method (GPM) as a supercluster finder, based on the analysis of the local gravitational potential distribution measured by a fast and simple algorithm applied to a spatial distribution of mass tracers. Methodology: the GPM performs a two-step exploratory data analysis. First, it measures the comoving local gravitational potential generated by neighboring mass tracers at the position of a test point-like mass tracer. This computation, extended to all mass tracers of the sample, provides a detailed map of the negative potential fluctuations. The most negative gravitational potential corresponds to the highest mass density; in other words, the deeper a potential fluctuation is in a certain region of space, the denser the mass tracers are in that region. Therefore, from a smoothed potential distribution, the deepest potential well unambiguously detects a high concentration in the mass tracer distribution. Second, applying a density contrast criterion to that mass concentrat...
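The first step of the GPM, summing pairwise potential contributions at each tracer, can be sketched directly from the description above; the unit system and the brute-force O(N^2) loop are illustrative simplifications:

```python
import math

def local_potentials(tracers, G=1.0):
    """Phi_i = -G * sum_{j != i} m_j / r_ij for point-mass tracers
    given as (x, y, z, m) tuples; brute force O(N^2)."""
    phi = []
    for i, (xi, yi, zi, _) in enumerate(tracers):
        s = 0.0
        for j, (xj, yj, zj, mj) in enumerate(tracers):
            if i != j:
                s -= G * mj / math.dist((xi, yi, zi), (xj, yj, zj))
        phi.append(s)
    return phi

# a tight clump of four tracers plus one isolated tracer far away
pts = [(0, 0, 0, 1.0), (1, 0, 0, 1.0), (0, 1, 0, 1.0), (0, 0, 1, 1.0),
       (50, 50, 50, 1.0)]
phi = local_potentials(pts)
deepest = min(range(len(pts)), key=lambda i: phi[i])  # deepest well -> clump
```

The tracer at the center of the clump sits in the deepest potential well, while the isolated tracer's potential stays shallow, which is exactly the signal the GPM thresholds in its second step.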
DEFF Research Database (Denmark)
Soliman, Hammam Abdelaal Hammam; Davari, Pooya; Wang, Huai
2017-01-01
monitoring methodologies are rarely adopted by industry due to shortcomings such as low estimation accuracy, extra hardware, and increased cost. Therefore, the development of new condition monitoring methodologies that are based on advanced software and require no extra hardware could be more attractive......-link voltage are used as training data for the Artificial Neural Network. The Fast Fourier Transform (FFT) of the dc-link voltage is analysed in order to study the impact of capacitance variation on the harmonic orders. Laboratory experiments are conducted to validate the proposed methodology and the error analysis...
Robust Mokken Scale Analysis by Means of the Forward Search Algorithm for Outlier Detection
Zijlstra, Wobbe P.; van der Ark, L. Andries; Sijtsma, Klaas
2011-01-01
Exploratory Mokken scale analysis (MSA) is a popular method for identifying scales from larger sets of items. As with any statistical method, in MSA the presence of outliers in the data may result in biased results and wrong conclusions. The forward search algorithm is a robust diagnostic method for outlier detection, which we adapt here to…
变电压 CT 重建的灰度加权算法%Gray weighted algorithm for variable voltage CT reconstruction
Institute of Scientific and Technical Information of China (English)
李权; 陈平; 潘晋孝
2014-01-01
In conventional computed tomography (CT) reconstruction based on a fixed voltage, the projective data often appear overexposed or underexposed; as a result, the reconstruction results are poor. To solve this problem, variable-voltage CT reconstruction has been proposed. Effective projection sequences of a structural component are obtained through the variable voltage. The total variation is adjusted and minimized to optimize the reconstruction on the basis of the iterative image obtained with the algebraic reconstruction technique (ART). In the reconstruction process, according to the gray weighted algorithm, the reconstructed image at a low voltage is used as the initial value for the effective projection reconstruction at the adjacent higher voltage, and so on up to the highest voltage, whereby the complete structural information is reconstructed. Simulation results show that the proposed algorithm can completely reflect the information of a complicated structural component, and the pixel values are more stable than those of the conventional method.%常规固定电压的 CT 重建，因成像系统动态范围受限，投影数据易出现过曝光和欠曝光共存现象，造成信息缺失多，成像质量差，为此提出变电压 CT 重建。通过变电压获得跟工件有效厚度相匹配的有效投影序列，在 ART 迭代图像的基础上，调整全变差使其最小化，从而优化重建。在重建过程中，依据灰度加权，把低电压的重建图像作为初值，应用在相邻高电压有效投影重建中，得到相邻高电压的重建图像，依次类推直至最高电压。至此，工件的全部结构信息重建完毕。仿真结果表明，灰度加权算法不仅实现了变电压图像信息的完整重建，而且像素值更加稳定。
DEFF Research Database (Denmark)
Padmanaban, Sanjeevikumar; Grandi, Gabriele; Blaabjerg, Frede
2015-01-01
This paper considers a six-phase (asymmetrical) induction motor with a 30° phase displacement between two sets of three-phase open-end stator windings. The drive system consists of four classical three-phase voltage source inverters (VSIs), and all four dc sources are deliberately kept isolated......) by the nearest three vectors (NTVs) approach is adopted to regulate each couple of VSIs. The proposed power sharing algorithm is verified by complete numerical simulation modeling (Matlab/Simulink-PLECS software) of the whole ac drive system by observing the dynamic behaviors in different designed conditions. Set...
Raskovic, Dejan; Giessel, David
2009-11-01
The goal of the study presented in this paper is to develop an embedded biomedical system capable of delivering maximum performance on demand, while maintaining optimal energy efficiency whenever possible. Several hardware and software solutions are presented that allow the system to intelligently change the power supply voltage and frequency at runtime. The resulting system allows the use of more energy-efficient components, operates most of the time in its most battery-efficient mode, and provides means to quickly change the operation mode while maintaining reliable performance. While all of these techniques extend battery life, the main benefit is on-demand availability of computational performance using a system that is not excessive. Biomedical applications, perhaps more than any other, require battery operation, favor infrequent battery replacements, and can benefit from increased performance under certain conditions (e.g., when an anomaly is detected), which makes them ideal candidates for this approach. In addition, if the system is part of a body area network, it needs to be light, inexpensive, and adaptable enough to satisfy the changing requirements of the other nodes in the network.
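The energy argument behind voltage/frequency switching can be illustrated with the standard CMOS dynamic-power model P = C_eff * V^2 * f; the operating points below are hypothetical, not taken from the paper:

```python
def dynamic_energy(c_eff, v, f, seconds):
    # dynamic CMOS power P = C_eff * V^2 * f, energy E = P * t
    return c_eff * v * v * f * seconds

def energy_for_work(cycles, c_eff, v, f):
    # finish `cycles` of work at frequency f, then report the energy spent
    return dynamic_energy(c_eff, v, f, cycles / f)

# hypothetical operating points (voltage, frequency) of a small MCU
high = (3.3, 8e6)    # full-performance mode
low = (1.8, 2e6)     # battery-efficient mode
work = 4e6           # cycles for one processing burst
e_high = energy_for_work(work, 1e-9, *high)
e_low = energy_for_work(work, 1e-9, *low)
ratio = e_high / e_low   # f cancels: energy per task scales with V^2
```

Because the frequency cancels for a fixed amount of work, the per-task energy scales with V^2, which is why lowering the supply voltage whenever the deadline allows is the dominant saving, with the high-voltage mode reserved for on-demand bursts.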
Betweenness-based algorithm for a partition scale-free graph
Institute of Scientific and Technical Information of China (English)
Zhang Bai-Da; Wu Jun-Jie; Tang Yu-Hua; Zhou Jing
2011-01-01
Many real-world networks are found to be scale-free. However, graph partitioning technology, as a technology capable of parallel computing, performs poorly when scale-free graphs are provided. The reason for this is that traditional partitioning algorithms are designed for random networks and regular networks, rather than for scale-free networks. Multilevel graph-partitioning algorithms are currently considered to be the state of the art and are used extensively. In this paper, we analyse the reasons why traditional multilevel graph-partitioning algorithms perform poorly and present a new multilevel graph-partitioning paradigm, top-down partitioning, which derives its name from the comparison with traditional bottom-up partitioning. A new multilevel partitioning algorithm, named the betweenness-based partitioning algorithm, is also presented as an implementation of the top-down partitioning paradigm. An experimental evaluation of seven different real-world scale-free networks shows that the betweenness-based partitioning algorithm significantly outperforms the existing state-of-the-art approaches.
Directory of Open Access Journals (Sweden)
Sanjeevikumar Padmanaban
2015-09-01
Full Text Available This paper considers a six-phase (asymmetrical) induction motor with a 30° phase displacement between two sets of three-phase open-end stator windings. The drive system consists of four classical three-phase voltage source inverters (VSIs), and all four dc sources are deliberately kept isolated. Therefore, zero-sequence/homopolar current components cannot flow. An original and effective power sharing algorithm is proposed in this paper with three variables (degrees of freedom) based on synchronous field oriented control (FOC). A standard three-level space vector pulse width modulation (SVPWM) by the nearest three vectors (NTVs) approach is adopted to regulate each couple of VSIs. The proposed power sharing algorithm is verified by complete numerical simulation modeling (Matlab/Simulink-PLECS software) of the whole ac drive system by observing the dynamic behaviors in different designed conditions. A set of results is provided in this paper, which confirms good agreement with the theoretical development.
BFL: a node and edge betweenness based fast layout algorithm for large scale networks
Directory of Open Access Journals (Sweden)
Kojima Kaname
2009-01-01
Background: Network visualization would serve as a useful first step for analysis. However, current graph layout algorithms for biological pathways are insensitive to biologically important information, e.g. subcellular localization, biological node and graph attributes, and/or are not available for large-scale networks, e.g. more than 10,000 elements. Results: To overcome these problems, we propose the use of a biologically important graph metric, betweenness, a measure of network flow. This metric is highly correlated with many biological phenomena such as lethality and clusters. We devise a new fast parallel algorithm calculating betweenness to minimize the preprocessing cost. Using this metric, we also invent a node and edge betweenness based fast layout algorithm (BFL). BFL places the high-betweenness nodes at optimal positions and allows the low-betweenness nodes to reach suboptimal positions. Furthermore, BFL reduces the runtime by combining a sequential insertion algorithm with betweenness. For a graph with n nodes, this approach reduces the expected runtime of the algorithm to O(n²) when considering edge crossings, and to O(n log n) when considering only density and edge lengths. Conclusion: Our BFL algorithm is compared against fast graph layout algorithms and approaches requiring intensive optimizations. For gene networks, we show that our algorithm is faster than all layout algorithms tested while providing readability on par with intensive optimization algorithms. We achieve a 1.4 second runtime for a graph with 4000 nodes and 12000 edges on a standard desktop computer.
A new chirp scaling algorithm of bistatic SAR with parallel flight paths
Li, Ning; Wang, Luping
2011-10-01
The precise point target reference spectrum of bistatic SAR has been a difficult problem for a long time. Many of the currently available algorithms involve approximations in their derivation. This paper derives the precise expression in the Doppler-frequency domain for the configuration of parallel flight paths and constant velocity of each platform. A new chirp scaling algorithm is then put forward. Finally, simulations are given to demonstrate the good focusing performance.
Ergul, Ozgur
2014-01-01
The Multilevel Fast Multipole Algorithm (MLFMA) for Solving Large-Scale Computational Electromagnetic Problems provides a detailed and instructional overview of implementing MLFMA. The book: Presents a comprehensive treatment of the MLFMA algorithm, including basic linear algebra concepts, recent developments on the parallel computation, and a number of application examplesCovers solutions of electromagnetic problems involving dielectric objects and perfectly-conducting objectsDiscusses applications including scattering from airborne targets, scattering from red
Neelov, Alexey; Ghasemi, S Alireza; Goedecker, Stefan
2007-07-14
An algorithm for fast calculation of the Coulombic forces and energies of point particles with free boundary conditions is proposed. Its calculation time scales as N log N for N particles. This novel method has a lower crossover point with the full O(N²) direct summation than the fast multipole method. The forces obtained by our algorithm are analytical derivatives of the energy, which guarantees energy conservation during a molecular dynamics simulation. Our algorithm is very simple. A version of the code parallelized with the Message Passing Interface can be downloaded under the GNU General Public License from the website of our group.
Tsukahara, Hiroshi; Iwano, Kaoru; Mitsumata, Chiharu; Ishikawa, Tadashi; Ono, Kanta
2016-10-01
We implement low-communication-frequency three-dimensional fast Fourier transform algorithms in a micromagnetics simulator for calculation of the magnetostatic field, which occupies a significant portion of large-scale micromagnetics simulation. This fast Fourier transform algorithm reduces the number of all-to-all communications from six to two. Simulation times with our simulator show high scalability in parallelization, even when we perform the micromagnetics simulation using 32,768 physical computing cores. This low-communication-frequency fast Fourier transform algorithm enables world-largest-class micromagnetics simulations, with over one billion calculation cells, to be carried out.
Nascov, Victor; Logofătu, Petre Cătălin
2009-08-01
We describe a fast computational algorithm able to evaluate the Rayleigh-Sommerfeld diffraction formula, based on a special formulation of the convolution theorem and the fast Fourier transform. What is new in our approach compared to other algorithms is the use of a more general type of convolution with a scale parameter, which allows for independent sampling intervals in the input and output computation windows. Comparisons between the calculations made using our algorithm and direct numerical integration show very good agreement, while the computation speed is increased by orders of magnitude.
Scaling up the DBSCAN Algorithm for Clustering Large Spatial Databases Based on Sampling Technique
Institute of Scientific and Technical Information of China (English)
(no author listed)
2001-01-01
Clustering, in data mining, is a useful technique for discovering interesting data distributions and patterns in the underlying data, and has many application fields, such as statistical data analysis, pattern recognition, image processing, etc. We combine a sampling technique with the DBSCAN algorithm to cluster large spatial databases, and two sampling-based DBSCAN (SDBSCAN) algorithms are developed. One algorithm introduces the sampling technique inside DBSCAN, and the other uses a sampling procedure outside DBSCAN. Experimental results demonstrate that our algorithms are effective and efficient in clustering large-scale spatial databases.
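A minimal plain-Python sketch of DBSCAN with an "outside" sampling wrapper conveys the SDBSCAN idea (parameters, names, and the attachment rule are our own assumptions, not the paper's algorithms):

```python
import math
import random

def region(points, i, eps):
    # indices of all points within eps of points[i] (including itself)
    return [j for j, q in enumerate(points) if math.dist(points[i], q) <= eps]

def dbscan(points, eps, min_pts):
    labels = [None] * len(points)        # None = unvisited, -1 = noise
    cid = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        neigh = region(points, i, eps)
        if len(neigh) < min_pts:
            labels[i] = -1               # provisionally noise
            continue
        labels[i] = cid
        seeds = list(neigh)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cid          # reachable noise becomes border
            if labels[j] is not None:
                continue
            labels[j] = cid
            jn = region(points, j, eps)
            if len(jn) >= min_pts:       # j is a core point: keep expanding
                seeds.extend(jn)
        cid += 1
    return labels

def sdbscan(points, eps, min_pts, sample_frac=0.5, seed=0):
    # "sampling outside DBSCAN": cluster a random sample, then attach
    # each point to the nearest clustered sample member within eps
    rng = random.Random(seed)
    k = max(min_pts, int(sample_frac * len(points)))
    sample = [points[i] for i in rng.sample(range(len(points)), k)]
    slabels = dbscan(sample, eps, min_pts)
    labels = []
    for p in points:
        best, bestd = -1, eps
        for sp, sl in zip(sample, slabels):
            if sl >= 0 and math.dist(p, sp) <= bestd:
                best, bestd = sl, math.dist(p, sp)
        labels.append(best)
    return labels

pts = [(0, 0), (0.5, 0), (0, 0.5), (0.5, 0.5),
       (10, 10), (10.5, 10), (10, 10.5), (10.5, 10.5)]
labels = dbscan(pts, 1.0, 3)
```

The sampling wrapper trades a little boundary accuracy for far fewer neighborhood queries on the full dataset, which is the point of the SDBSCAN variants.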
National Research Council Canada - National Science Library
Liu, XiangShao; Zhou, Shangbo; Li, Hua; Li, Kun
2016-01-01
In this article, a bidirectional feature matching algorithm and two extended algorithms based on the priority k-d tree search are presented for the image registration using scale-invariant feature transform features...
Localization Algorithm Based on a Spring Model (LASM) for Large Scale Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Shuai Li
2008-03-01
A navigation method for a lunar rover based on large-scale wireless sensor networks is proposed. To obtain high navigation accuracy and a large exploration area, high node localization accuracy and a large network scale are required. However, the computational and communication complexity and the time consumption increase greatly with the network scale. A localization algorithm based on a spring model (LASM) is proposed to reduce the computational complexity while maintaining localization accuracy in large-scale sensor networks. The algorithm simulates the dynamics of a physical spring system to estimate the positions of nodes. The sensor nodes are set as particles with masses and are connected to neighbor nodes by virtual springs. The virtual springs force the particles to move from randomly set initial positions to their original positions, which correspond to the node positions. Therefore, a blind node position can be determined by the LASM algorithm by calculating the related forces with the neighbor nodes. The computational and communication complexity is O(1) for each node, since the number of neighbor nodes does not increase proportionally with the network size. Three patches are proposed to avoid local optimization, kick out bad nodes, and deal with node variation. Simulation results show that the computational and communication complexity remain almost constant despite the increase of the network size. The time consumption has also been proven to remain almost constant, since the number of calculation steps is almost unrelated to the network size.
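For a single blind node, the spring relaxation can be sketched in a few lines (anchor layout, gains, and iteration counts are illustrative choices, not the paper's):

```python
import math

def lasm_locate(anchors, dists, start=(2.0, 2.0), k=0.1, steps=2000):
    # Relax virtual springs whose rest lengths are the measured
    # distances to the anchors; the node settles at the position that
    # balances all spring forces.
    x, y = start
    for _ in range(steps):
        fx = fy = 0.0
        for (ax, ay), d in zip(anchors, dists):
            dx, dy = x - ax, y - ay
            cur = math.hypot(dx, dy) or 1e-9   # avoid divide-by-zero
            f = k * (d - cur)                  # Hooke's law along the spring
            fx += f * dx / cur
            fy += f * dy / cur
        x, y = x + fx, y + fy
    return x, y

# three anchors with exact distance measurements to a node at (1, 1)
anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
true_pos = (1.0, 1.0)
dists = [math.dist(true_pos, a) for a in anchors]
est = lasm_locate(anchors, dists)
```

Per update, the cost is proportional to the (small, roughly constant) number of neighbors, which is what makes the per-node complexity independent of the network size.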
DEFF Research Database (Denmark)
Farhang, Peyman; Drimus, Alin; Mátéfi-Tempfli, Stefan
2015-01-01
In this paper, a new technique is proposed to design a Modified PID (MPID) controller for a boost converter. An interface between LTspice and MATLAB is carried out to implement the Particle Swarm Optimization (PSO) algorithm. The PSO algorithm, which has the appropriate capability to find the optimal solutions, is run in MATLAB while it is interfaced with LTspice for simulation of the circuit using actual component models obtained from manufacturers. The PSO is utilized to solve the optimization problem in order to find the optimal parameters of the MPID and PID controllers. The performances ...
An adaptive scale factor based MPPT algorithm for changing solar irradiation levels in outer space
Kwan, Trevor Hocksun; Wu, Xiaofeng
2017-03-01
Maximum power point tracking (MPPT) techniques are popularly used for maximizing the output of solar panels by continuously tracking the maximum power point (MPP) of their P-V curves, which depends both on the panel temperature and the input insolation. Various MPPT algorithms have been studied in the literature, including perturb and observe (P&O), hill climbing, incremental conductance, fuzzy logic control and neural networks. This paper presents an algorithm which improves the MPP tracking performance by adaptively scaling the DC-DC converter duty cycle. The principle of the proposed algorithm is to detect oscillation by checking the sign (i.e., direction) of the duty cycle perturbation between the current and previous time steps. If the signs differ, an oscillation is clearly present, and the DC-DC converter duty cycle perturbation is subsequently scaled down by a constant factor. By repeating this process, the steady-state oscillations become negligibly small, which allows for a smooth steady-state MPP response. To verify the proposed MPPT algorithm, a simulation involving irradiance levels typically encountered in outer space is conducted. Simulation and experimental results prove that the proposed algorithm is fast and stable in comparison not only to the conventional fixed-step counterparts, but also to previous variable step size algorithms.
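The adaptive scaling described above, shrinking the duty-cycle perturbation whenever its sign flips, can be sketched against a toy P-V curve (the curve shape, gains, and step sizes are illustrative assumptions, not the paper's values):

```python
def panel_power(duty):
    # toy P-V curve with its maximum power point at duty = 0.6
    return 100.0 - 400.0 * (duty - 0.6) ** 2

def adaptive_po_mppt(duty=0.3, step=0.05, scale=0.5, iters=60):
    # Perturb & observe; halve the perturbation whenever its direction
    # flips, i.e. when the operating point starts oscillating around
    # the MPP, so steady-state ripple shrinks toward zero.
    prev_p = panel_power(duty)
    direction = prev_dir = 1.0
    for _ in range(iters):
        duty += direction * step
        p = panel_power(duty)
        if p < prev_p:                 # power fell: reverse perturbation
            direction = -direction
        if direction != prev_dir:      # sign flip detected: scale down
            step *= scale
        prev_dir, prev_p = direction, p
    return duty, step

duty, step = adaptive_po_mppt()
```

With a fixed step, the duty cycle would oscillate around 0.6 forever with amplitude `step`; here the step decays geometrically after each detected flip, which is the smooth steady-state behavior the abstract claims.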
A Simple Algorithm of Voltage Space Vector PWM
Institute of Scientific and Technical Information of China (English)
袁帆
2015-01-01
Based on the principle of space vector pulse width modulation (SVPWM) control, a simple and fast algorithm is proposed. The algorithm needs only the four basic arithmetic operations, so the computation becomes very simple; it eliminates the calculation errors caused by the approximate evaluation of trigonometric functions and irrational numbers in conventional space vector control algorithms, which impair calculation accuracy and speed, and thus produces more accurate results. It is therefore better suited to DSP-based digital control. Theoretical analysis and extensive MATLAB simulations show that the algorithm features low harmonic content and high utilization of the DC voltage.
Improved dq Transform Algorithm for Dynamic Voltage Restorer Detection
Institute of Scientific and Technical Information of China (English)
黄永红; 施慧; 徐俊俊; 张云帅
2016-01-01
To meet the real-time and accuracy requirements of voltage sag detection in a dynamic voltage restorer, an improved dq transform algorithm based on an adaptive least mean square (LMS) filter and a software phase-locked loop is proposed. The adaptive LMS algorithm with time-delayed feedback forms a digital filter, which is applied in the control process of the software phase-locked loop. The filtering stage is moved forward, and a derivative method is used instead of the traditional low-pass filter to isolate the DC component in the dq coordinate system instantaneously. The proposed method improves the accuracy of the voltage samples, achieves effective phase locking, and improves the accuracy and response speed of voltage sag detection. A simulation model built in PSCAD/EMTDC validates the effectiveness of the proposed method.
A new asynchronous parallel algorithm for inferring large-scale gene regulatory networks.
Directory of Open Access Journals (Sweden)
Xiangyun Xiao
The reconstruction of gene regulatory networks (GRNs) from high-throughput experimental data has been considered one of the most important issues in systems biology research. With the development of high-throughput technology and the complexity of biological problems, we need to reconstruct GRNs that contain thousands of genes. However, when many existing algorithms are used to handle these large-scale problems, they encounter two important issues: low accuracy and high computational cost. To overcome these difficulties, the main goal of this study is to design an effective parallel algorithm to infer large-scale GRNs based on high-performance parallel computing environments. In this study, we propose a novel asynchronous parallel framework to improve the accuracy and lower the time complexity of large-scale GRN inference by combining splitting technology and ordinary differential equation (ODE)-based optimization. The presented algorithm uses the sparsity and modularity of GRNs to split whole large-scale GRNs into many small-scale modular subnetworks. Through the ODE-based optimization of all subnetworks in parallel and their asynchronous communications, we can easily obtain the parameters of the whole network. To test the performance of the proposed approach, we used well-known benchmark datasets from the Dialogue for Reverse Engineering Assessments and Methods challenge (DREAM), the experimentally determined GRN of Escherichia coli, and one published dataset that contains more than 10,000 genes to compare the proposed approach with several popular algorithms in the same high-performance computing environments in terms of both accuracy and time complexity. The numerical results demonstrate that our parallel algorithm exhibits obvious superiority in inferring large-scale GRNs.
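The splitting idea, decomposing a sparse, modular GRN into subnetworks and fitting each one's ODE parameters independently, can be caricatured in a few lines (the graph, the one-parameter decay model, and all names are our own toy assumptions, not the paper's method):

```python
import math

def modules(adj):
    # split the gene graph into connected components (modular subnetworks)
    seen, out = set(), []
    for s in adj:
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            comp.add(v)
            stack.extend(adj[v])
        out.append(sorted(comp))
    return out

def fit_decay(x0, x1, dt):
    # per-gene toy ODE x' = -k x, fitted from two time points
    return -math.log(x1 / x0) / dt

# genes 0-2 form one module, 3-4 another (hypothetical sparse GRN)
adj = {0: [1], 1: [0, 2], 2: [1], 3: [4], 4: [3]}
mods = modules(adj)

# synthetic expression at t=0 and t=1, generated with decay rate k = 0.5
x0 = [1.0] * 5
x1 = [math.exp(-0.5)] * 5
# each module can be fitted independently (and hence in parallel)
ks = {g: fit_decay(x0[g], x1[g], 1.0) for m in mods for g in m}
```

Because the modules share no edges, nothing couples their fits; in the real algorithm each subnetwork's (much richer) ODE optimization runs on its own processor, with asynchronous exchanges stitching the whole network back together.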
Dynamic Voltage Scaling Policy Based on Markov Model
Institute of Scientific and Technical Information of China (English)
卜爱国
2011-01-01
Based on a Markov model, this paper presents a voltage scaling policy, called the Markov-based voltage scaling policy (MKBVSP), for processors with discrete voltage levels. The supply voltage of the processor is switched dynamically by MKBVSP according to the variation of the system workload, thereby balancing system performance against power consumption. Experimental results demonstrate that power consumption can be further reduced with the implementation of MKBVSP, by up to 58%.
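A toy rendering of the idea, predicting the next workload from a Markov chain and picking the lowest adequate voltage level, might look like this (the transition matrix, the two levels, and the 0.5 threshold are all hypothetical, not taken from the paper):

```python
# Hypothetical two-state workload Markov chain: 0 = light, 1 = heavy.
P = [[0.8, 0.2],   # P[i][j] = prob. next workload is j given current i
     [0.3, 0.7]]

# Candidate (voltage, relative power) pairs; the higher level is only
# needed when heavy workload is expected.
LEVELS = [(0.9, 1.0), (1.2, 2.5)]

def pick_level(state, threshold=0.5):
    # switch to the high level only if heavy workload is likely next
    p_heavy = P[state][1]
    return 1 if p_heavy >= threshold else 0

def stationary(P, steps=200):
    # power-iterate the chain to its stationary distribution
    pi = [0.5, 0.5]
    for _ in range(steps):
        pi = [sum(pi[i] * P[i][j] for i in range(2)) for j in range(2)]
    return pi

pi = stationary(P)
avg_power = sum(pi[s] * LEVELS[pick_level(s)][1] for s in range(2))
saving = 1 - avg_power / LEVELS[1][1]   # vs. always running at max level
```

With these made-up numbers the chain spends 60% of its time in the light state, the policy runs at the low level there, and the long-run power drops 36% relative to always using the maximum voltage, the same kind of saving the abstract reports (up to 58% in their experiments).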
Gharsalli, Leila; Mohammad-Djafari, Ali; Fraysse, Aurélia; Rodet, Thomas
2013-08-01
Our aim is to solve a linear inverse problem using various methods based on the Variational Bayesian Approximation (VBA). We choose to take sparsity into account via a scale mixture prior, more precisely a Student-t model. The joint posterior of the unknown and the hidden variables of the mixtures is approximated via the VBA. Classically, the alternating algorithm is used for this approximation, but this method is not the most efficient. Recently, other optimization algorithms have been proposed; indeed, classical iterative optimization algorithms such as the steepest descent method and the conjugate gradient have been studied in the space of the probability densities involved in the Bayesian methodology to treat this problem. The main object of this work is to present these three algorithms and a numerical comparison of their performances.
Directory of Open Access Journals (Sweden)
Li Ding
2015-01-01
This paper is devoted to developing a chaotic artificial bee colony algorithm (CABC) for the system identification of a small-scale unmanned helicopter state-space model in hover condition. In order to avoid the premature convergence of the traditional artificial bee colony algorithm (ABC), which gets stuck in local optima and cannot reach the global optimum, a novel chaotic operator with the characteristics of ergodicity and irregularity is introduced to enhance its performance. With input-output data collected from actual flight experiments, the identification results showed the superiority of CABC over ABC and the genetic algorithm (GA). Simulations are presented to demonstrate the effectiveness of our proposed algorithm and the accuracy of the identified helicopter model.
Novel Zooming Scale Hough Transform Pattern Recognition Algorithm for the PHENIX Detector
Koblesky, Theodore
2012-03-01
Single ultra-relativistic heavy ion collisions at RHIC and the LHC and multiple overlapping proton-proton collisions at the LHC present challenges to pattern recognition algorithms for tracking in these high multiplicity environments. One must satisfy many constraints including high track finding efficiency, ghost track rejection, and CPU time and memory constraints. A novel algorithm based on a zooming scale Hough Transform is now available in Ref. [1] that is optimized for efficient high speed caching and flexible in terms of its implementation. In this presentation, we detail the application of this algorithm to the PHENIX Experiment silicon vertex tracker (VTX) and show initial results from Au+Au at √sNN = 200 GeV collision data taken in 2011. We demonstrate the current algorithmic performance and also show first results for the proposed sPHENIX detector. Ref. [1]: A. Dion, "Helix Hough", http://code.google.com/p/helixhough/
Scale-space point spread function based framework to boost infrared target detection algorithms
Moradi, Saed; Moallem, Payman; Sabahi, Mohamad Farzan
2016-07-01
Small target detection is one of the major concerns in the development of infrared surveillance systems. Detection algorithms based on Gaussian target modeling have attracted the most attention from researchers in this field. However, the lack of accurate target modeling limits the performance of this type of infrared small target detection algorithm. In this paper, the signal-to-clutter ratio (SCR) improvement mechanism based on the matched filter is described in detail, and the effect of the point spread function (PSF) on the intensity and spatial distribution of the target pixels is clarified comprehensively. A new parametric model for small infrared targets is then developed based on the PSF of the imaging system, which can be considered a matched filter. Based on this model, a new framework to boost model-based infrared target detection algorithms is presented. To show the performance of this new framework, the proposed model is adopted in the Laplacian scale-space algorithm, a well-known algorithm in the small infrared target detection field. Simulation results show that the proposed framework has better detection performance in comparison with the Gaussian one and improves the overall performance of the IRST system. Analyzing the performance of the algorithm quantitatively, the new framework shows at least 20% improvement in the output SCR values in comparison with the Laplacian of Gaussian (LoG) algorithm.
2015-02-04
A novel cutting-plane algorithm called BLADE is presented to scale up SSGs with complex adversary models, with a realization called COCOMO.
Directory of Open Access Journals (Sweden)
Tiannan Ma
2016-12-01
Accurate forecasting of icing thickness is of great significance for ensuring the security and stability of the power grid. In order to improve forecasting accuracy, this paper proposes an icing forecasting system based on the fireworks algorithm and the weighted least squares support vector machine (W-LSSVM). The fireworks algorithm is employed to select the proper input features with the purpose of eliminating redundant influence. The W-LSSVM model then trains and tests the historical dataset with the selected features. The capability of the proposed icing forecasting model and framework is tested through simulation experiments using real-world icing data from the monitoring center of the key laboratory of anti-ice disaster, Hunan, South China. The results show that the proposed W-LSSVM-FA method has higher prediction accuracy and may be a promising alternative for icing thickness forecasting.
Investigating Darcy-scale assumptions by means of a multiphysics algorithm
Tomin, Pavel; Lunati, Ivan
2016-09-01
Multiphysics (or hybrid) algorithms, which couple Darcy and pore-scale descriptions of flow through porous media in a single numerical framework, are usually employed to decrease the computational cost of full pore-scale simulations or to increase the accuracy of pure Darcy-scale simulations when a simple macroscopic description breaks down. Despite the massive increase in available computational power, the application of these techniques remains limited to core-size problems and upscaling remains crucial for practical large-scale applications. In this context, the Hybrid Multiscale Finite Volume (HMsFV) method, which constructs the macroscopic (Darcy-scale) problem directly by numerical averaging of pore-scale flow, offers not only a flexible framework to efficiently deal with multiphysics problems, but also a tool to investigate the assumptions used to derive macroscopic models and to better understand the relationship between pore-scale quantities and the corresponding macroscale variables. Indeed, by direct comparison of the multiphysics solution with a reference pore-scale simulation, we can assess the validity of the closure assumptions inherent to the multiphysics algorithm and infer the consequences for macroscopic models at the Darcy scale. We show that the definition of the scale ratio based on the geometric properties of the porous medium is well justified only for single-phase flow, whereas in case of unstable multiphase flow the nonlinear interplay between different forces creates complex fluid patterns characterized by new spatial scales, which emerge dynamically and weaken the scale-separation assumption. In general, the multiphysics solution proves very robust even when the characteristic size of the fluid-distribution patterns is comparable with the observation length, provided that all relevant physical processes affecting the fluid distribution are considered. This suggests that macroscopic constitutive relationships (e.g., the relative
Low-power task scheduling algorithm for large-scale cloud data centers
Institute of Scientific and Technical Information of China (English)
Xiaolong Xu; Jiaxing Wu; Geng Yang; Ruchuan Wang
2013-01-01
How to effectively reduce the energy consumption of large-scale data centers is a key issue in cloud computing. This paper presents a novel low-power task scheduling algorithm (LTSA) for large-scale cloud data centers. The winner tree is introduced to make the data nodes the leaf nodes of the tree, and the final winner is selected for the purpose of reducing energy consumption. The complexity of large-scale cloud data centers is fully considered, and a task comparison coefficient is defined to make the task scheduling strategy more reasonable. Experiments and performance analysis show that the proposed algorithm can effectively improve node utilization and reduce the overall power consumption of the cloud data center.
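The winner-tree selection the abstract describes can be sketched as a tournament over node costs (the cost model and the `schedule` wrapper are our own toy stand-ins, not LTSA itself):

```python
def build_winner_tree(costs):
    # Complete binary winner tree: leaves hold node indices, each
    # internal node holds the index of the cheaper child; tree[1] wins.
    n = len(costs)
    size = 1
    while size < n:
        size *= 2
    tree = [None] * (2 * size)
    for i in range(size):
        tree[size + i] = i if i < n else None
    for i in range(size - 1, 0, -1):
        l, r = tree[2 * i], tree[2 * i + 1]
        if l is None:
            tree[i] = r
        elif r is None:
            tree[i] = l
        else:
            tree[i] = l if costs[l] <= costs[r] else r
    return tree

def schedule(tasks, costs):
    # Greedy toy scheduler: give each task to the current winner
    # (lowest-cost node), then charge that node for it.
    assign = []
    costs = list(costs)
    for t in tasks:
        tree = build_winner_tree(costs)   # rebuilt each round for clarity
        w = tree[1]
        assign.append(w)
        costs[w] += t                     # node's load/energy grows
    return assign
```

A production version would replay only the winner's path up the tree in O(log n) per task instead of rebuilding; the rebuild keeps the sketch short.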
Newns, Dennis M; Elmegreen, Bruce G; Liu, Xiao-Hu; Martyna, Glenn J
2012-07-17
Field effect transistors are reaching the limits imposed by the scaling of materials and the electrostatic gating physics underlying the device. In this Communication, a new type of switch based on different physics, which combines known piezoelectric and piezoresistive materials, is described and is shown by theory and simulation to achieve gigahertz digital switching at low voltage (0.1 V).
Directory of Open Access Journals (Sweden)
Guo Jiao
2014-08-01
A system impulse response with low sidelobes is critical in synthetic aperture radar (SAR) images because sidelobes contribute to noise and interfere with nearby scatterers. However, the conventional tricks of sidelobe suppression cannot be applied exactly to the case of spaceborne sliding spotlight SAR due to great azimuth shifts in both the time and frequency domains. In this paper, an extended chirp scaling algorithm is presented for spaceborne sliding spotlight SAR data imaging. The proposed algorithm first uses the spectral analysis (SPECAN) technique to avoid the azimuth spectrum folding effect and then employs the chirp scaling (CS) algorithm to achieve data focusing, i.e., the so-called two-step approach. To suppress the sidelobe level, an efficient strategy for azimuth spectral weighting which only involves matrix multiplications and short fast Fourier transforms (FFTs) is proposed; it is a post-process executed on the focused SAR image and particularly simple to implement. The SAR image processed by the proposed extended CS algorithm is very precise and perfectly phase-preserving. In the end, computer simulation results verify the analysis and confirm the validity of the proposed algorithm.
Institute of Scientific and Technical Information of China (English)
Guo Jiao; Xu Youshuan; Fu Longsheng
2014-01-01
A system impulse response with low sidelobes is critical in synthetic aperture radar (SAR) images because sidelobes contribute to noise and interfere with nearby scatterers. However, the conventional tricks of sidelobe suppression cannot be applied exactly to the case of spaceborne sliding spotlight SAR due to great azimuth shifts in both the time and frequency domains. In this paper, an extended chirp scaling algorithm is presented for spaceborne sliding spotlight SAR data imaging. The proposed algorithm first uses the spectral analysis (SPECAN) technique to avoid the azimuth spectrum folding effect and then employs the chirp scaling (CS) algorithm to achieve data focusing, i.e., the so-called two-step approach. To suppress the sidelobe level, an efficient strategy for azimuth spectral weighting which only involves matrix multiplications and short fast Fourier transforms (FFTs) is proposed; it is a post-process executed on the focused SAR image and particularly simple to implement. The SAR image processed by the proposed extended CS algorithm is very precise and perfectly phase-preserving. In the end, computer simulation results verify the analysis and confirm the validity of the proposed algorithm.
Power law scaling for the adiabatic algorithm for search engine ranking
Frees, Adam; Rudinger, Kenneth; Bach, Eric; Friesen, Mark; Joynt, Robert; Coppersmith, S N
2012-01-01
An important method for search engine result ranking works by finding the principal eigenvector of the "Google matrix." Recently, a quantum algorithm for this problem and evidence of an exponential speedup for some scale-free networks were presented. Here, we show that the run-time depends on features of the graphs other than the degree distribution, and can be altered sufficiently to rule out a general exponential speedup. For a sample of graphs with degree distributions that more closely resemble the Web than in previous work, the proposed algorithm does not appear to run exponentially faster than the classical case.
Control Algorithms for Large-scale Single-axis Photovoltaic Trackers
Directory of Open Access Journals (Sweden)
Dorian Schneider
2012-01-01
The electrical yield of large-scale photovoltaic power plants can be greatly improved by employing solar trackers. While fixed-tilt superstructures are stationary and immobile, trackers move the PV-module plane in order to optimize its alignment to the sun. This paper introduces control algorithms for single-axis trackers (SATs), including a discussion of optimal alignment and backtracking. The results are used to simulate and compare the electrical yield of fixed-tilt and SAT systems. The proposed algorithms have been field tested and are in operation in solar parks worldwide.
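For flat terrain, the standard single-axis backtracking geometry (not necessarily the authors' exact formulation) reduces to one trigonometric correction driven by the ground coverage ratio, GCR = panel width / row pitch:

```python
import math

def tracker_angle(sun_angle_deg, gcr):
    # Tracker rotation in degrees from horizontal.  sun_angle_deg is
    # the sun's direction projected into the tracking plane (0 = sun
    # directly over the axis), which is also the true-tracking rotation.
    wid = sun_angle_deg
    c = math.cos(math.radians(wid))
    if abs(c) >= gcr:
        return wid                      # rows cannot shade each other
    # back off just enough that a row's shadow grazes its neighbor
    wc = math.degrees(math.acos(max(-1.0, min(1.0, c / gcr))))
    return wid - math.copysign(wc, wid)
```

At cos(rotation) = GCR the correction vanishes, so the controller transitions smoothly between true tracking at midday and backtracking near sunrise/sunset; for GCR = 0.5 a 75° true-tracking demand backs off to roughly 16°.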
Noguchi, Toshihiko; Imoto, Masaru; Sato, Yoshikazu
This paper proposes a novel three-phase power distribution system feeding trapezoidal voltages to various power electronic loads with diode rectifier front-ends. The network distributes trapezoidal voltages generated by synchronous superposition of wave-shaping voltages onto sinusoidal voltages available from a utility power grid. The power distribution by the trapezoidal voltages allows reducing harmonics of the line currents without electronic switching devices because of a spontaneously widened conduction period of the current waveforms. The reduction of the harmonic currents also contributes to improve total power factor at the load input terminals and efficiency of the power transmission cables. Since the diodes of the rectifiers successively commutate the trapezoidal waves during periods of their flat parts, not only total harmonic distortion of the currents is improved, but also voltage ripple across the dc-buses of the rectifiers can effectively be reduced with less filter capacitors. In addition, the system offers an uninterruptible power supply function by immediately changing its outputs from the wave-shaping voltages to the trapezoidal voltages when interruption occurs in the power grid. In this paper, a prototype of the system is experimentally examined from various angles of operating characteristics and test results are presented to prove feasibility of the proposed system.
Scalable fault tolerant algorithms for linear-scaling coupled-cluster electronic structure methods.
Energy Technology Data Exchange (ETDEWEB)
Leininger, Matthew L.; Nielsen, Ida Marie B.; Janssen, Curtis L.
2004-10-01
By means of coupled-cluster theory, molecular properties can be computed with an accuracy often exceeding that of experiment. The high-degree polynomial scaling of the coupled-cluster method, however, remains a major obstacle in the accurate theoretical treatment of mainstream chemical problems, despite tremendous progress in computer architectures. Although it has long been recognized that this super-linear scaling is non-physical, the development of efficient reduced-scaling algorithms for massively parallel computers has not been realized. We here present a locally correlated, reduced-scaling, massively parallel coupled-cluster algorithm. A sparse data representation for handling distributed, sparse multidimensional arrays has been implemented along with a set of generalized contraction routines capable of handling such arrays. The parallel implementation entails a coarse-grained parallelization, reducing interprocessor communication and distributing the largest data arrays but replicating as many arrays as possible without introducing memory bottlenecks. The performance of the algorithm is illustrated by several series of runs for glycine chains using a Linux cluster with an InfiniBand interconnect.
Directory of Open Access Journals (Sweden)
Urošev Nataša
2010-01-01
Full Text Available Besides the main subject, this paper presents the principles of the average graphical weight of a thematic map. The algorithm is based on this subject and on the basics of symbol-scaled mapping. Maps are very often overloaded with symbols. The main goal of thematic mapping is to represent the quantitative and qualitative characteristics of occurrences in geospace, and these two factors are combined in this algorithm. Modern GIS software can create thematic maps by the methods of signs, cartograms, and cartodiagrams. Such maps are very precise and clearly present a distinctive occurrence, but they are also graphically chaotic, with many overlapping areas that overload the map. For these reasons it is necessary to create an algorithm that brings balance within the boundaries of the thematic map area.
A Fast Algorithm for Large-Scale MDP-Based Systems in Smart Grid
Directory of Open Access Journals (Sweden)
Hua Xiao
2013-01-01
Full Text Available In this study, we investigate fast algorithms for the Large-Scale Markov Decision Process (LSMDP) problem in the smart grid. The Markov decision process is one of the most efficient mathematical tools for solving control and optimization problems in wireless smart grid systems. However, the complexity and the memory requirements increase exponentially as the number of system states grows. Moreover, the limited on-board computational ability and memory of wireless smart grid devices constrain their application. As a result, it is impractical to implement LSMDP-based approaches directly in such systems. Therefore, we propose a fast algorithm with low computational overhead and good performance. We first derive a factored MDP representation, which substitutes for the LSMDP in a compact way. Based on the factored MDP, we propose a fast algorithm that considerably reduces the size of the state space while retaining reasonable performance compared to the optimal solution.
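To make the state-explosion argument concrete, here is a minimal value iteration on a hypothetical three-state battery MDP; the explicit tables `P` and `R` are exactly what grows exponentially with the number of state variables and what a factored representation compresses. All names and numbers are invented for illustration.

```python
def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """P[s][a] -> list of (probability, next_state); R[s][a] -> reward.
    These explicit tables are what blows up for large state spaces."""
    n = len(P)
    V = [0.0] * n
    while True:
        V_new = [max(R[s][a] + gamma * sum(p * V[t] for p, t in P[s][a])
                     for a in range(len(P[s])))
                 for s in range(n)]
        if max(abs(u - v) for u, v in zip(V, V_new)) < tol:
            return V_new
        V = V_new

# Invented 3-state battery model; states 0/1/2 = low/mid/high charge,
# actions 0/1 = idle/charge (charging costs 1, a full battery earns 2).
P = [[[(1.0, 0)], [(1.0, 1)]],
     [[(0.5, 0), (0.5, 1)], [(1.0, 2)]],
     [[(1.0, 2)], [(1.0, 2)]]]
R = [[0.0, -1.0], [0.0, -1.0], [2.0, 2.0]]
V = value_iteration(P, R)
print([round(v, 2) for v in V])  # [14.3, 17.0, 20.0]
```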
Focusing of tandem bistatic SAR data using the chirp-scaling algorithm
Chen, Shichao; Xing, Mengdao; Zhou, Song; Zhang, Lei; Bao, Zheng
2013-12-01
Based on an exact analytical bistatic point-target spectrum, an efficient chirp-scaling algorithm is proposed to correct the range cell migration of different range gates to that of the reference range for tandem bistatic synthetic aperture radar data processing. The length of the baseline (the baseline-to-range ratio) has no direct influence on the proposed algorithm, which can therefore be applied to tandem bistatic data with a large baseline, even when the baseline is equal to the range. No interpolation is needed during the entire processing; only fast Fourier transforms and phase multiplications are required, which makes the algorithm efficient. The validity of the proposed algorithm has been verified by simulated experiments.
Puri, Akshat; The ATLAS collaboration
2017-01-01
ATLAS uses a jet reconstruction algorithm in heavy ion collisions that takes as input calorimeter towers of size $0.1 \times \pi/32$ in $\Delta\eta \times \Delta\phi$ and iteratively determines the underlying event background. This algorithm, which differs from the standard jet reconstruction used in ATLAS, is also applied to the proton-proton collisions used as reference data for the Pb+Pb and p+Pb measurements. This poster provides details of the heavy ion jet reconstruction algorithm and its performance in pp collisions. The calibration procedure is described in detail, and cross-checks using photon-jet balance are shown. The uncertainties on the jet energy scale and the jet energy resolution are described.
MREG V1.1 : a multi-scale image registration algorithm for SAR applications.
Energy Technology Data Exchange (ETDEWEB)
Eichel, Paul H.
2013-08-01
MREG V1.1 is the sixth-generation SAR image registration algorithm developed by the Signal Processing & Technology Department for synthetic aperture radar applications. Like its predecessor algorithm REGI, it employs a powerful iterative multi-scale paradigm to achieve the competing goals of sub-pixel registration accuracy and the ability to handle large initial offsets. Since it is not model based, it allows for high-fidelity tracking of spatially varying terrain-induced misregistration. Since it does not rely on image-domain phase, it is equally adept at coherent and noncoherent image registration. This document provides a brief history of the registration processors developed by Dept. 5962 leading up to MREG V1.1, a full description of the signal processing steps involved in the algorithm, and a user's manual with application-specific recommendations for CCD, TwoColor MultiView, and SAR stereoscopy.
Effective Hierarchical Routing Algorithm for Large-scale Wireless Mobile Networks
Directory of Open Access Journals (Sweden)
Guofeng Yan
2014-02-01
Full Text Available The growing interest in wireless mobile network techniques has resulted in many routing protocol proposals. The unpredictable motion and unreliable behavior of mobile nodes are among the key issues in wireless mobile networks. A virtual mobile node (VMN) layer consists of robust virtual nodes that are both predictable and reliable. Based on VMNs, in this paper we present a hierarchical routing algorithm, EHRA-WAVE, for large-scale wireless mobile networks. By using mobile WAVE technology, a routing path can be found rapidly between VMNs without accurate topology information. We introduce the routing algorithm and the implementation issues of the proposed EHRA-WAVE routing algorithm. Finally, we evaluate the performance of EHRA-WAVE through experiments, and compare the performance on VMN failure and message delivery ratio using hierarchical and non-hierarchical routing methods. However, due to the large amount of WAVE flooding, EHRA-WAVE can incur an excessive load, which would impede its application. Future work on the routing protocol therefore focuses on minimizing the number of WAVEs using hierarchical structures in large-scale wireless mobile networks.
Power-law scaling for the adiabatic algorithm for search-engine ranking
Frees, Adam; Gamble, John King; Rudinger, Kenneth; Bach, Eric; Friesen, Mark; Joynt, Robert; Coppersmith, S. N.
2013-09-01
An important method for search engine result ranking works by finding the principal eigenvector of the “Google matrix.” Recently, a quantum algorithm for generating this eigenvector as a quantum state was presented, with evidence of an exponential speedup of this process for some scale-free networks. Here we show that the run time depends on features of the graphs other than the degree distribution, and can be altered sufficiently to rule out a general exponential speedup. According to our simulations, for a sample of graphs with degree distributions that are scale-free, with parameters thought to closely resemble the Web, the proposed algorithm for eigenvector preparation does not appear to run exponentially faster than the classical case.
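The classical baseline referred to here is ordinary power iteration on the Google matrix. A minimal sketch on an invented three-page web (damping factor 0.85, no dangling pages):

```python
def pagerank(links, n, d=0.85, iters=200):
    """Power iteration for the principal eigenvector of the Google matrix.
    links: {page: [pages it links to]}; every page must have outlinks here."""
    rank = [1.0 / n] * n
    for _ in range(iters):
        new = [(1.0 - d) / n] * n                 # teleportation term
        for page, outs in links.items():
            share = d * rank[page] / len(outs)
            for q in outs:
                new[q] += share
        rank = new
    return rank

# Tiny 3-page web forming a cycle, so all ranks equalize:
r = pagerank({0: [1], 1: [2], 2: [0]}, 3)
print([round(x, 3) for x in r])  # [0.333, 0.333, 0.333]
```

The quantum proposal prepares this same eigenvector as a quantum state; the paper's point is that its advantage over this classical iteration depends on graph structure beyond the degree distribution.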
Roverso, Davide
2003-08-01
Many-class learning is the problem of training a classifier to discriminate among a large number of target classes. Together with the problem of dealing with high-dimensional patterns (i.e. a high-dimensional input space), the many-class problem (i.e. a high-dimensional output space) is a major obstacle to be faced when scaling up classifier systems and algorithms from small pilot applications to large full-scale applications. The Autonomous Recursive Task Decomposition (ARTD) algorithm is here proposed as a solution to the problem of many-class learning. Example applications of ARTD to neural classifier training are also presented. In these examples, improvements in training time are shown to range from 4-fold to more than 30-fold in pattern classification tasks of both static and dynamic character.
Lee, Dong-Hoon; Lee, Do-Wan; Han, Bong-Soo
2016-01-01
The purpose of this study is to apply the scale-invariant feature transform (SIFT) algorithm to stitch cervical-thoracic-lumbar (C-T-L) spine magnetic resonance (MR) images into a single view of the entire spine. All MR images were acquired with a fast spin echo (FSE) pulse sequence using two MR scanners (1.5 T and 3.0 T). The stitching procedures for each part of the spine MR image were performed and implemented in a graphical user interface (GUI) configuration. The stitching process is performed in two modes: manual point-to-point (mPTP) selection, in which the user specifies corresponding matching points, and automated point-to-point (aPTP) selection, performed by the SIFT algorithm. The images stitched using the SIFT algorithm showed well-registered results, and the quantitative measurements indicated small errors compared with the stitching algorithms commercially provided in MRI systems. Our study presents a preliminary validation of applying the SIFT algorithm to spine MR images, and the results indicate that the proposed approach can improve diagnosis. We believe our approach can be helpful for clinical application and can be extended to image stitching in other medical imaging modalities.
Road network selection for small-scale maps using an improved centrality-based algorithm
Directory of Open Access Journals (Sweden)
Roy Weiss
2014-12-01
Full Text Available The road network is one of the key feature classes in topographic maps and databases. In the task of deriving road networks for products at smaller scales, road network selection forms a prerequisite for all other generalization operators, and is thus a fundamental operation in the overall process of topographic map and database production. The objective of this work was to develop an algorithm for automated road network selection from a large-scale (1:10,000) to a small-scale database (1:200,000). The project was pursued in collaboration with swisstopo, the national mapping agency of Switzerland, with generic mapping requirements in mind. Preliminary experiments suggested that a selection algorithm based on betweenness centrality performed best for this purpose, yet also exposed problems. The main contribution of this paper thus consists of four extensions that address deficiencies of the basic centrality-based algorithm and lead to a significant improvement of the results. The first two extensions improve the formation of strokes concatenating the road segments, which is crucial since strokes provide the foundation upon which the network centrality measure is computed. The first extension ensures that roundabouts are detected and collapsed, avoiding the interruption of strokes by roundabouts, while the second introduces additional semantics into the process of stroke formation, allowing longer and more plausible strokes to be built. The third extension detects areas of high road density (i.e., urban areas) using density-based clustering and then locally increases the threshold of the centrality measure used to select road segments, such that more thinning takes place in those areas. Finally, since the basic algorithm tends to create dead-ends, which are not tolerated in small-scale maps, the fourth extension reconnects these dead-ends to the main network, searching for the best path in the main heading of the dead-end.
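The basic centrality-based selection that the four extensions build on can be sketched as edge betweenness centrality with a Brandes-style accumulation; stroke building, roundabout collapsing, and the density-based thresholds are omitted, and the toy graph is invented.

```python
from collections import deque

def edge_betweenness(adj):
    """adj: {node: [neighbours]} (undirected). Returns {frozenset({u, v}): score};
    each unordered source-target pair contributes twice (once per direction)."""
    bc = {}
    for s in adj:
        dist = {s: 0}
        sigma = {v: 0 for v in adj}; sigma[s] = 1   # shortest-path counts
        preds = {v: [] for v in adj}
        order = []
        q = deque([s])
        while q:                                    # BFS from s
            v = q.popleft(); order.append(v)
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):                   # Brandes back-propagation
            for v in preds[w]:
                c = sigma[v] / sigma[w] * (1 + delta[w])
                e = frozenset((v, w))
                bc[e] = bc.get(e, 0.0) + c
                delta[v] += c
    return bc

# Toy road graph: a simple path a-b-c-d; the middle segment carries most paths,
# so a centrality-based selection keeps it first.
adj = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd'], 'd': ['c']}
scores = edge_betweenness(adj)
best = max(scores, key=scores.get)
print(sorted(best), round(scores[best], 1))  # ['b', 'c'] 8.0
```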
Eight-Scale Image Contrast Enhancement Based on Adaptive Inverse Hyperbolic Tangent Algorithm
Directory of Open Access Journals (Sweden)
Cheng-Yi Yu
2014-10-01
Full Text Available The eight-scale parameter adjustment is a natural extension of the Adaptive Inverse Hyperbolic Tangent (AIHT) algorithm. It has long been known that the human vision system (HVS) depends heavily on detail and edges in the understanding and perception of scenes. The main goal of this study is to produce a contrast enhancement technique that recovers an image from blurring and darkness while at the same time improving visual quality. Eight-scale coefficient adjustment provides further local refinement of detail under the AIHT algorithm. The proposed Eight-Scale Adaptive Inverse Hyperbolic Tangent (8SAIHT) method uses sub-bands to calculate the local mean and local variance before the AIHT algorithm is applied. This study also shows that this approach is convenient and effective in the enhancement process for various types of images. The 8SAIHT is also capable of adaptively enhancing the local contrast of the original image while simultaneously bringing out more object detail.
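A sketch of the underlying inverse-hyperbolic-tangent mapping, assuming one common normalized form; the paper's adaptive, per-sub-band parameters are not reproduced, and the gain `a` is an invented fixed value.

```python
import math

def iht_enhance(x, a=0.8):
    """Map a normalized intensity x in [0, 1] through a scaled inverse
    hyperbolic tangent curve, renormalized back into [0, 1]."""
    lim = math.atanh(a)                     # curve value at the endpoints
    return (math.atanh(a * (2.0 * x - 1.0)) + lim) / (2.0 * lim)

# The midpoint and the endpoints are preserved, while values in between
# are redistributed along the atanh curve:
print(round(iht_enhance(0.5), 3))   # 0.5
print(round(iht_enhance(0.25), 3))  # 0.307
```

In the 8SAIHT scheme the gain would be chosen adaptively per sub-band from the local mean and variance rather than fixed as here.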
Gündüç, Semra; Dilaver, Mehmet; Aydın, Meral; Gündüç, Yiğit
2005-02-01
In this work we have studied the dynamic scaling behavior of two scaling functions, and we have shown that the scaling functions obey the dynamic finite-size scaling rules. Dynamic finite-size scaling of scaling functions opens possibilities for a wide range of applications. As an application we have calculated the dynamic critical exponent (z) of Wolff's cluster algorithm for the 2-, 3- and 4-dimensional Ising models. Configurations with vanishing initial magnetization are chosen in order to avoid complications due to initial magnetization. The dynamic finite-size scaling behavior observed during the early stages of the Monte Carlo simulation yields vanishing values of z for Wolff's cluster algorithm in all three cases, consistent with the values obtained from the autocorrelations. In particular, the vanishing dynamic critical exponent obtained for d=3 implies that the Wolff algorithm is more efficient at eliminating critical slowing down in Monte Carlo simulations than previously reported.
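For readers unfamiliar with the method whose exponent is being measured, a minimal Wolff single-cluster update for the 2D Ising model looks roughly as follows; the lattice size, coupling, and seed are illustrative, and no exponent is estimated here.

```python
import math, random

def wolff_step(spin, L, beta):
    """Grow one cluster from a random seed site and flip it in place."""
    p_add = 1.0 - math.exp(-2.0 * beta)            # bond-activation probability
    seed = (random.randrange(L), random.randrange(L))
    s0 = spin[seed]
    cluster, stack = {seed}, [seed]
    while stack:
        x, y = stack.pop()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            n = (nx % L, ny % L)                   # periodic boundaries
            if n not in cluster and spin[n] == s0 and random.random() < p_add:
                cluster.add(n)
                stack.append(n)
    for site in cluster:                           # flip the whole cluster at once
        spin[site] = -s0
    return len(cluster)

L, beta = 16, 0.4407                               # near the 2D critical coupling
random.seed(1)
spin = {(x, y): random.choice((-1, 1)) for x in range(L) for y in range(L)}
sizes = [wolff_step(spin, L, beta) for _ in range(200)]
print(sum(sizes) / len(sizes) > 1.0)               # clusters grow beyond single sites
```

Flipping whole correlated clusters at once is what suppresses critical slowing down, hence the near-zero z reported above.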
Directory of Open Access Journals (Sweden)
Lorenzo L. Pesce
2013-01-01
Full Text Available Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons and processor pool sizes (1 to 256 processors. Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.
Meanie3D - a mean-shift based, multivariate, multi-scale clustering and tracking algorithm
Simon, Jürgen-Lorenz; Malte, Diederich; Silke, Troemel
2014-05-01
Project OASE is one of five working groups at the HErZ (Hans Ertel Centre for Weather Research), an ongoing effort by the German weather service (DWD) to further university research on weather prediction. The goal of project OASE is to gain an object-based perspective on convective events by identifying them early in the onset of convective initiation and following them through their entire lifecycle. The ability to follow objects in this fashion requires new ways of object definition and tracking, which incorporate all the available data sets of interest, such as satellite imagery, weather radar, or lightning counts. The Meanie3D algorithm provides the necessary tool for this purpose. Core features of this new approach to clustering (object identification) and tracking are the ability to identify objects by applying the mean-shift algorithm to a multitude of variables (multivariate), as well as the ability to detect objects on various scales (multi-scale) using elements of scale-space theory. The algorithm works in 2D as well as 3D without modification. It is an extension of a method well known in the fields of computer vision and image processing, which has been tailored to serve the needs of the meteorological community. Despite the specific application demonstrated here (convective initiation), the algorithm is easily tailored to provide clustering and tracking for a wide class of data sets and problems. In this talk, the demonstration is carried out on two of the OASE group's own composite data sets: a 2D nationwide composite of Germany including C-band radar and satellite information, and a 3D local composite of the Bonn/Jülich area containing a high-resolution 3D X-band radar composite.
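The core mean-shift step can be illustrated in one dimension with a flat kernel; Meanie3D's multivariate, multi-scale machinery is omitted and the data are invented.

```python
def mean_shift(points, bandwidth=1.0, iters=50):
    """Shift copies of the points toward the mean of their neighbours
    (flat kernel) until they collapse onto the modes of the data."""
    modes = list(points)
    for _ in range(iters):
        new = []
        for p in modes:
            # Neighbours are always taken from the original data set;
            # every shifted point keeps at least one neighbour (its origin).
            neigh = [q for q in points if abs(q - p) <= bandwidth]
            new.append(sum(neigh) / len(neigh))
        modes = new
    centers = []                                  # merge coincident modes
    for m in modes:
        if not any(abs(m - c) < 1e-3 for c in centers):
            centers.append(m)
    return sorted(centers)

data = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]             # two well-separated 1-D clusters
print(mean_shift(data))                           # two centers, near 0.1 and 5.1
```

In Meanie3D the same shift runs over several physical variables at once and is repeated across scales, so clusters at different sizes emerge from one mechanism.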
Large-scale prediction of microRNA-disease associations by combinatorial prioritization algorithm
Yu, Hua; Chen, Xiaojun; Lu, Lu
2017-03-01
Identifying the associations between microRNA molecules and human diseases from large-scale heterogeneous biological data is an important step toward understanding the pathogenesis of diseases at the microRNA level. However, experimental verification of microRNA-disease associations is expensive and time-consuming. To overcome the drawbacks of conventional experimental methods, we present a combinatorial prioritization algorithm to predict microRNA-disease associations. Importantly, our method can be used to predict microRNAs (diseases) associated with diseases (microRNAs) that have no known associated microRNAs (diseases). The predictive performance of our proposed approach was evaluated and verified by internal cross-validations and external independent validations based on standard association datasets. The results demonstrate that our proposed method achieves impressive performance in predicting microRNA-disease associations, with an area under the receiver operating characteristic curve (AUC) of 86.93%, which indeed outperforms previous prediction methods. In particular, we observed that an ensemble-based method integrating the predictions of multiple algorithms gives more reliable and robust predictions than any single algorithm, with the AUC score improved to 92.26%. We applied our combinatorial prioritization algorithm to lung neoplasms and breast neoplasms and revealed their top 30 microRNA candidates, which are consistent with the published literature and databases.
Directory of Open Access Journals (Sweden)
B. Y. Qu
2017-01-01
Full Text Available Portfolio optimization problems involve selecting different assets to invest in so as to maximize the overall return and minimize the overall risk simultaneously. The complexity of the optimal asset allocation problem increases with the number of assets available to choose from, and the optimization becomes computationally challenging when there are more than a few hundred assets to select from. To reduce the complexity of large-scale portfolio optimization, this paper proposes two asset preselection procedures that consider the return and risk of individual assets and their pairwise correlations to remove assets that are unlikely to be selected into any portfolio. With these asset preselection methods, the number of assets considered for inclusion in a portfolio can be increased to thousands. To test the effectiveness of the proposed methods, a Normalized Multiobjective Evolutionary Algorithm based on Decomposition (NMOEA/D) and several other commonly used multiobjective evolutionary algorithms are applied and compared. Six experiments with different settings are carried out. The experimental results show that with the proposed methods the simulation time is reduced while the return-risk trade-off performance is significantly improved. Meanwhile, the NMOEA/D outperforms the other compared algorithms on all experiments according to the comparative analysis.
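A toy sketch of the correlation-based preselection idea, assuming a simple return-to-risk score and a greedy keep rule; the threshold and data are invented, and the paper's exact procedures may differ.

```python
from statistics import mean, pstdev

def score(r):                                     # simple return-to-risk ratio
    return mean(r) / (pstdev(r) or 1e-12)

def corr(a, b):                                   # Pearson correlation
    ma, mb = mean(a), mean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)
    return cov / ((pstdev(a) * pstdev(b)) or 1e-12)

def preselect(returns, corr_limit=0.95):
    """returns: {asset: list of period returns}. Greedily keep the
    better-scoring asset of every highly correlated pair."""
    names = sorted(returns, key=lambda n: score(returns[n]), reverse=True)
    kept = []
    for n in names:
        if all(corr(returns[n], returns[k]) < corr_limit for k in kept):
            kept.append(n)
    return kept

rets = {
    "A": [0.02, 0.01, 0.03],
    "B": [0.02, 0.01, 0.03],   # perfect clone of A: redundant
    "C": [0.01, 0.03, -0.01],
}
print(preselect(rets))         # the clone "B" is dropped, "A" and "C" survive
```

Removing near-duplicates before optimization is what lets the evolutionary search scale to thousands of candidate assets.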
Sun, Pengfei; Sun, Changku; Li, Wenqiang; Wang, Peng
2015-01-01
Pose estimation aims at measuring the position and orientation of a calibrated camera using known image features. The pinhole model is the dominant camera model in this field. However, the imaging precision of this model is not accurate enough for advanced pose estimation algorithms. In this paper, a new camera model, called the incident ray tracking model, is introduced. More importantly, an advanced pose estimation algorithm based on the perspective ray in the new camera model is proposed. The perspective ray, determined by two positioning points, is an abstract mathematical equivalent of the incident ray. In the proposed pose estimation algorithm, called perspective-ray-based scaled orthographic projection with iteration (PRSOI), an approximate ray-based projection is calculated by a linear system and refined by iteration. Experiments on the PRSOI have been conducted, and the results demonstrate that it achieves high accuracy over six-degree-of-freedom (DOF) motion and outperforms three other state-of-the-art algorithms in accuracy in comparison experiments.
On the rejection-based algorithm for simulation and analysis of large-scale reaction networks
Energy Technology Data Exchange (ETDEWEB)
Thanh, Vo Hong, E-mail: vo@cosbi.eu [The Microsoft Research-University of Trento Centre for Computational and Systems Biology, Piazza Manifattura 1, Rovereto 38068 (Italy); Zunino, Roberto, E-mail: roberto.zunino@unitn.it [Department of Mathematics, University of Trento, Trento (Italy); Priami, Corrado, E-mail: priami@cosbi.eu [The Microsoft Research-University of Trento Centre for Computational and Systems Biology, Piazza Manifattura 1, Rovereto 38068 (Italy); Department of Mathematics, University of Trento, Trento (Italy)
2015-06-28
Stochastic simulation for in silico studies of large biochemical networks requires a great amount of computational time. We recently proposed a new exact simulation algorithm, called the rejection-based stochastic simulation algorithm (RSSA) [Thanh et al., J. Chem. Phys. 141(13), 134116 (2014)], to improve simulation performance by postponing and collapsing as much as possible the propensity updates. In this paper, we analyze the performance of this algorithm in detail, and improve it for simulating large-scale biochemical reaction networks. We also present a new algorithm, called simultaneous RSSA (SRSSA), which generates many independent trajectories simultaneously for the analysis of the biochemical behavior. SRSSA improves simulation performance by utilizing a single data structure across simulations to select reaction firings and forming trajectories. The memory requirement for building and storing the data structure is thus independent of the number of trajectories. The updating of the data structure when needed is performed collectively in a single operation across the simulations. The trajectories generated by SRSSA are exact and independent of each other by exploiting the rejection-based mechanism. We test our new improvement on real biological systems with a wide range of reaction networks to demonstrate its applicability and efficiency.
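The rejection mechanism at the heart of RSSA can be sketched on a toy birth-death process: candidate firings are drawn from propensity upper bounds over a fluctuation interval, and the exact propensity is consulted only in the accept/reject test. This is a simplified illustration in the spirit of the algorithm, not the authors' implementation; the interval width and rates are invented.

```python
import random

def rssa_like(x0, k_birth, k_death, t_end, seed=0):
    """Birth-death process (0 -> X at rate k_birth, X -> 0 at rate k_death*x)
    simulated with rejection against propensity upper bounds."""
    rng = random.Random(seed)
    x, t = x0, 0.0
    lo, hi = max(0, x0 - 10), x0 + 10          # fluctuation interval for x
    a_hi = [k_birth, k_death * hi]             # propensity upper bounds
    while t < t_end:
        if not lo <= x <= hi:                  # bounds violated: recompute them
            lo, hi = max(0, x - 10), x + 10
            a_hi = [k_birth, k_death * hi]
        total_hi = a_hi[0] + a_hi[1]
        t += rng.expovariate(total_hi)         # candidate firing time (thinning)
        r = rng.uniform(0.0, total_hi)
        j = 0 if r < a_hi[0] else 1            # candidate reaction
        a_exact = k_birth if j == 0 else k_death * x
        if rng.uniform(0.0, a_hi[j]) < a_exact:  # exact propensity only here
            x += 1 if j == 0 else -1
    return x

x_final = rssa_like(50, k_birth=10.0, k_death=0.2, t_end=100.0)
print(x_final >= 0)  # population stays non-negative; mean level is k_birth/k_death = 50
```

As long as the state stays inside its fluctuation interval, no propensity updates are needed, which is the performance gain RSSA and SRSSA exploit.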
Modeling wall effects in a micro-scale shock tube using hybrid MD-DSMC algorithm
Watvisave, D. S.; Puranik, B. P.; Bhandarkar, U. V.
2016-07-01
Wall effects in a micro-scale shock tube are investigated using the Direct Simulation Monte Carlo method as well as a hybrid Molecular Dynamics-Direct Simulation Monte Carlo algorithm. In the Direct Simulation Monte Carlo simulations, the Cercignani-Lampis-Lord model of gas-surface interactions is employed to incorporate the wall effects, and it is shown that the shock attenuation is significantly affected by the choice of the values of tangential momentum accommodation coefficient. A loosely coupled Molecular Dynamics-Direct Simulation Monte Carlo approach is then employed to demonstrate incomplete accommodation in micro-scale shock tube flows. This approach uses fixed values of the accommodation coefficients in the gas-surface interaction model, with their values determined from a separate dynamically similar Molecular Dynamics simulation. Finally, a completely coupled Molecular Dynamics-Direct Simulation Monte Carlo algorithm is used, wherein the bulk of the flow is modeled using Direct Simulation Monte Carlo, while the interaction of gas molecules with the shock tube walls is modeled using Molecular Dynamics. The two regions are separate and coupled both ways using buffer zones and a bootstrap coupling algorithm that accounts for the mismatch of the number of molecules in both regions. It is shown that the hybrid method captures the effect of local properties that cannot be captured using a single value of accommodation coefficient for the entire domain.
Lin, Y.; O'Malley, D.; Vesselinov, V. V.
2015-12-01
Inverse modeling seeks model parameters given a set of observed state variables. However, for many practical problems, because the observed data sets are often large and the model parameters are often numerous, conventional inverse modeling methods can be computationally expensive. We have developed a new, computationally efficient Levenberg-Marquardt method for large-scale inverse modeling. Levenberg-Marquardt methods require the solution of a dense linear system of equations, which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving for the first damping parameter and recycle it for all the following damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Compared with a Levenberg-Marquardt method using standard linear inversion techniques, our method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment. Our new inverse modeling method is therefore well suited to large-scale inverse problems.
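For orientation, here is the dense-solve Levenberg-Marquardt iteration that the Krylov projection replaces, on a two-parameter toy fit; the paper's subspace recycling is not shown, and the model and data are invented.

```python
import math

def model(t, a, b):
    return a * math.exp(b * t)

def cost(ts, ys, a, b):
    return sum((y - model(t, a, b)) ** 2 for t, y in zip(ts, ys))

def lm_fit(ts, ys, a, b, lam=1e-2, iters=100):
    """Levenberg-Marquardt with adaptive damping for y = a*exp(b*t)."""
    c = cost(ts, ys, a, b)
    for _ in range(iters):
        r = [y - model(t, a, b) for t, y in zip(ts, ys)]
        J = [(math.exp(b * t), a * t * math.exp(b * t)) for t in ts]  # d/da, d/db
        g11 = sum(j1 * j1 for j1, _ in J)
        g22 = sum(j2 * j2 for _, j2 in J)
        g12 = sum(j1 * j2 for j1, j2 in J)
        h1 = sum(j1 * ri for (j1, _), ri in zip(J, r))
        h2 = sum(j2 * ri for (_, j2), ri in zip(J, r))
        while True:  # solve the damped 2x2 normal equations, adapting lam
            det = (g11 + lam) * (g22 + lam) - g12 * g12
            da = ((g22 + lam) * h1 - g12 * h2) / det
            db = ((g11 + lam) * h2 - g12 * h1) / det
            c_new = cost(ts, ys, a + da, b + db)
            if c_new < c:            # step accepted: relax the damping
                a, b, c, lam = a + da, b + db, c_new, lam / 10
                break
            lam *= 10                # step rejected: increase the damping
            if lam > 1e12:
                return a, b
    return a, b

ts = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2.0 * math.exp(0.7 * t) for t in ts]   # noise-free synthetic data
a, b = lm_fit(ts, ys, a=1.0, b=0.1)
print(round(a, 3), round(b, 3))  # converges to roughly 2.0 0.7
```

With thousands of parameters the damped solve above becomes the bottleneck, and each new `lam` normally forces a fresh solve; projecting onto a Krylov subspace and recycling it across damping parameters is the paper's remedy.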
He, Qiang; Hu, Xiangtao; Ren, Hong; Zhang, Hongqi
2015-11-01
A novel artificial fish swarm algorithm (NAFSA) is proposed for solving the large-scale reliability-redundancy allocation problem (RAP). In NAFSA, the social behaviors of the fish swarm are classified in three ways: foraging behavior, reproductive behavior, and random behavior. The foraging behavior employs two position-updating strategies, and selection and crossover operators are applied to define the reproductive ability of an artificial fish. For the random behavior, which is essentially a mutation strategy, the basic cloud generator is used as the mutation operator. Finally, numerical results for four benchmark problems and a large-scale RAP are reported and compared. NAFSA shows good performance in terms of computational accuracy and computational efficiency for the large-scale RAP.
Ramamoorthy, Ambika; Ramachandran, Rajeswari
2016-01-01
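The first step above relies on a standard particle swarm optimizer. The following is a generic PSO sketch only: the quadratic "loss" function stands in for the network power-loss evaluation (a real run would evaluate a power-flow model on the IEEE 30-bus system), and the swarm parameters are common textbook defaults, not the authors' settings.

```python
# Illustrative generic PSO of the kind used for DG sizing. The objective below
# is a hypothetical stand-in with its minimum placed at (2, 3).
import random

def loss_proxy(x):
    """Stand-in cost: pretend the optimal DG size lies at x = (2, 3)."""
    return (x[0] - 2.0) ** 2 + (x[1] - 3.0) ** 2

def pso(f, dim=2, n=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=7):
    rng = random.Random(seed)
    pos = [[rng.uniform(-10, 10) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                  # personal bests
    gbest = min(pbest, key=f)                    # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=f)
    return gbest

best = pso(loss_proxy)
print(best)  # converges near [2, 3]
```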
Institute of Scientific and Technical Information of China (English)
Hu Xiaobo; Du Juanli
2016-01-01
To achieve accurate identification of voltage flicker parameters, and to address the large fluctuation of the amplitude-modulation wave and inter-harmonic parameters obtained by conventional algorithms, the local mean decomposition (LMD) algorithm is applied to voltage flicker parameter identification in power systems for the first time. Typical voltage flicker signals are selected and the simulation results of the LMD and HHT algorithms are compared. The results show that, because LMD obtains a product function (PF) by dividing out the envelope function, fewer "sifting" iterations are needed and the end effect is smaller; the identified flicker parameters remain essentially constant in the steady state. The method is simple, fast, and accurate, demonstrating the feasibility and accuracy of the LMD algorithm.
Power law scaling for the adiabatic algorithm for search engine ranking
Frees, Adam; King Gamble, John; Rudinger, Kenneth; Bach, Eric; Friesen, Mark; Joynt, Robert; Coppersmith, S. N.
2013-03-01
An important method for search engine result ranking works by finding the principal eigenvector of the "Google matrix." Recently, a quantum algorithm for this problem and evidence of an exponential speedup for some scale-free networks were presented. Here, we show that the run-time depends on features of the graphs other than the degree distribution, and can be altered sufficiently to rule out a general exponential speedup. For a sample of graphs with degree distributions that more closely resemble the Web than in the previous work, the proposed algorithm does not appear to run exponentially faster than the classical one. This work was supported in part by ARO, DOD (W911NF-09-1-0439) and NSF (CCR-0635355, DMR 0906951). A.F. acknowledges support from the NSF REU program (PHY-PIF-1104660)
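The classical baseline against which the quantum algorithm is compared is power iteration on the Google matrix. A minimal sketch, with a made-up 4-node link graph and the conventional damping factor 0.85:

```python
# Classical PageRank by power iteration: repeatedly apply the Google matrix
# until the rank vector converges to its principal eigenvector.
def pagerank(links, n, damping=0.85, iters=100):
    """links: dict node -> list of outgoing link targets."""
    rank = [1.0 / n] * n
    for _ in range(iters):
        new = [(1.0 - damping) / n] * n      # teleportation term
        for src, outs in links.items():
            if outs:
                share = damping * rank[src] / len(outs)
                for dst in outs:
                    new[dst] += share
            else:                            # dangling node: spread uniformly
                for dst in range(n):
                    new[dst] += damping * rank[src] / n
        rank = new
    return rank

links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}  # arbitrary example web graph
r = pagerank(links, 4)
print(r)  # ranks sum to 1; node 2, with the most inlinks, ranks highest
```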
Lu, Jianfeng
2016-01-01
The particle-particle random phase approximation (pp-RPA) has been shown to be capable of describing double, Rydberg, and charge transfer excitations, for which the conventional time-dependent density functional theory (TDDFT) might not be suitable. It is thus desirable to reduce the computational cost of pp-RPA so that it can be efficiently applied to larger molecules and even solids. This paper introduces an $O(N^3)$ algorithm, where $N$ is the number of orbitals, based on an interpolative separable density fitting technique and the Jacobi-Davidson eigensolver to calculate a few low-lying excitations in the pp-RPA framework. The size of the pp-RPA matrix can also be reduced by keeping only a small portion of orbitals with orbital energy close to the Fermi energy. This reduced system leads to a smaller prefactor of the cubic scaling algorithm, while keeping the accuracy for the low-lying excitation energies.
A heuristic path-estimating algorithm for large-scale real-time traffic information calculating
Institute of Scientific and Technical Information of China (English)
2008-01-01
As the original Global Positioning System (GPS) data in floating car data suffer from accuracy problems, this paper proposes a heuristic path-estimating algorithm for large-scale real-time traffic information calculation. It uses a heuristic search method, imposes restrictions with geometric operations, and compares the vectors composed of the vehicular GPS points against a special road network model to search the set of candidate vehicular travel routes, finally choosing the most optimal one according to weight. Experimental results indicate that the algorithm achieves considerable accuracy (over 92.7%) and computational speed (up to 8000 GPS records per second) when handling GPS tracking data with a sampling interval longer than 1 min, even under complex road network conditions.
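The geometric core of such map matching is snapping a noisy GPS fix to the nearest candidate road segment. A minimal sketch, with made-up segment coordinates; the paper's algorithm adds heading vectors and route-connectivity heuristics on top of this:

```python
# Snap a noisy GPS point to the closest road segment by point-to-segment distance.
import math

def point_segment_distance(p, a, b):
    """Distance from point p to segment a-b (all 2-D tuples)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0.0:                       # degenerate segment
        return math.hypot(px - ax, py - ay)
    # Projection parameter clamped to lie on the segment
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def match(point, segments):
    """Return the index of the road segment closest to the GPS fix."""
    return min(range(len(segments)),
               key=lambda i: point_segment_distance(point, *segments[i]))

roads = [((0, 0), (10, 0)),   # east-west street
         ((0, 0), (0, 10)),   # north-south street
         ((0, 10), (10, 10))]
gps_fix = (4.0, 0.7)          # noisy point near the first street
print(match(gps_fix, roads))  # -> 0
```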
Application of Large-Scale Inversion Algorithms to Hydraulic Tomography in an Alluvial Aquifer.
Fischer, P; Jardani, A; Soueid Ahmed, A; Abbas, M; Wang, X; Jourde, H; Lecoq, N
2017-03-01
Large-scale inversion methods have recently been developed and now permit considerable reductions in the computation time and memory needed for inversions of models with a large number of parameters and data. In this work, we have applied a deterministic geostatistical inversion algorithm to a hydraulic tomography investigation conducted at an experimental field site situated within an alluvial aquifer in Southern France. This application aims to achieve a 2-D large-scale model of the spatial transmissivity distribution of the site. The inversion algorithm uses a quasi-Newton iterative process based on a Bayesian approach. We compared the results obtained by using three different methodologies for sensitivity analysis: an adjoint-state method, a finite-difference method, and a principal component geostatistical approach (PCGA). The PCGA is a large-scale-adapted method which was developed for inversions with a large number of parameters by using an approximation of the covariance matrix and by avoiding the calculation of the full Jacobian sensitivity matrix. We reconstructed high-resolution transmissivity fields (composed of up to 25,600 cells) which generated good correlations between the measured and computed hydraulic heads. In particular, we show that, by combining the PCGA inversion method and the hydraulic tomography method, we are able to substantially reduce the computation time of the inversions while still producing inversion results of a quality comparable to those obtained with the other sensitivity analysis methodologies.
Institute of Scientific and Technical Information of China (English)
Hong Xia YIN; Dong Lei DU
2007-01-01
The self-scaling quasi-Newton method solves an unconstrained optimization problem by scaling the Hessian approximation matrix before it is updated at each iteration, to avoid possibly large eigenvalues in the Hessian approximations of the objective function. It has been proved in the literature that this method has global and superlinear convergence when the objective function is convex (or even uniformly convex). We propose to solve unconstrained nonconvex optimization problems by a self-scaling BFGS algorithm with nonmonotone line search. Nonmonotone line search has been recognized in numerical practice as a competitive approach for solving large-scale nonlinear problems. We consider two different nonmonotone line search forms and study the global convergence of these nonmonotone self-scaling BFGS algorithms. We prove that, under a condition weaker than that in the literature, both forms of the self-scaling BFGS algorithm are globally convergent for unconstrained nonconvex optimization problems.
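The key ingredient, a nonmonotone acceptance test, can be sketched in isolation: a step is accepted if it improves on the maximum of the last M function values rather than the most recent one. This is an illustrative Grippo-style sketch on steepest descent with a toy quadratic, not the paper's self-scaling BFGS; all parameters are assumptions.

```python
# Nonmonotone (Grippo-style) backtracking line search: the Armijo reference
# value is the max of the last M objective values, allowing temporary increases.
def grad_descent_nonmonotone(f, grad, x0, M=5, c=1e-4, iters=100):
    x = list(x0)
    history = [f(x)]                      # recent f-values for the reference max
    for _ in range(iters):
        g = grad(x)
        d = [-gi for gi in g]             # steepest-descent direction
        gTd = sum(gi * di for gi, di in zip(g, d))
        alpha = 1.0
        fref = max(history[-M:])          # nonmonotone reference value
        while True:
            trial = [xi + alpha * di for xi, di in zip(x, d)]
            if f(trial) <= fref + c * alpha * gTd or alpha < 1e-12:
                break
            alpha *= 0.5                  # backtrack
        x = trial
        history.append(f(x))
    return x

f = lambda x: 0.5 * (x[0] ** 2 + 10.0 * x[1] ** 2)   # ill-conditioned quadratic
grad = lambda x: [x[0], 10.0 * x[1]]
x = grad_descent_nonmonotone(f, grad, [5.0, 1.0])
print(f(x))
```

Because the reference is a windowed maximum, early iterations may accept steps that temporarily increase f, which is exactly what helps on ill-conditioned or nonconvex problems.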
Base Station Placement Algorithm for Large-Scale LTE Heterogeneous Networks.
Lee, Seungseob; Lee, SuKyoung; Kim, Kyungsoo; Kim, Yoon Hyuk
2015-01-01
Data traffic demands in cellular networks today are increasing at an exponential rate, giving rise to the development of heterogeneous networks (HetNets), in which small cells complement traditional macro cells by extending coverage to indoor areas. However, the deployment of small cells as part of HetNets creates a key challenge for operators' careful network planning. In particular, massive and unplanned deployment of base stations can cause high interference, severely degrading network performance. Although different mathematical modeling and optimization methods have been used to approach various problems related to this issue, most traditional network planning models are ill-equipped to deal with HetNet-specific characteristics due to their focus on classical cellular network designs. Furthermore, increased wireless data demands have driven mobile operators to roll out large-scale networks of small Long Term Evolution (LTE) cells. Therefore, in this paper, we aim to derive an optimum network planning algorithm for large-scale LTE HetNets. Recently, attempts have been made to apply evolutionary algorithms (EAs) to the field of radio network planning, since they are characterized as global optimization methods. Yet, EA performance often deteriorates rapidly with the growth of search space dimensionality. To overcome this limitation when designing optimum network deployments for large-scale LTE HetNets, we attempt to decompose the problem and tackle its subcomponents individually. Particularly noting that some HetNet cells have strong correlations due to inter-cell interference, we propose a correlation grouping approach in which cells are grouped together according to their mutual interference. Both the simulation and analytical results indicate that the proposed solution outperforms the random-grouping based EA as well as an EA that detects interacting variables by monitoring the changes in the objective function algorithm in terms of system
GOSIM: A multi-scale iterative multiple-point statistics algorithm with global optimization
Yang, Liang; Hou, Weisheng; Cui, Chanjie; Cui, Jie
2016-04-01
Most current multiple-point statistics (MPS) algorithms are based on a sequential simulation procedure, during which grid values are updated according to the local data events. Because the realization is updated only once during the sequential process, errors that occur while updating data events cannot be corrected. Error accumulation during simulations decreases the realization quality. Aimed at improving simulation quality, this study presents an MPS algorithm based on global optimization, called GOSIM. An objective function is defined in GOSIM for representing the dissimilarity between a realization and the training image (TI), which is minimized by a multi-scale EM-like iterative method that contains an E-step and an M-step in each iteration. The E-step searches for TI patterns that are most similar to the realization and match the conditioning data. A modified PatchMatch algorithm is used to accelerate the search process in the E-step. The M-step updates the realization based on the most similar patterns found in the E-step and matches the global statistics of the TI. During categorical data simulation, k-means clustering is used for transforming the obtained continuous realization into a categorical realization. The qualitative and quantitative comparison results of GOSIM, MS-CCSIM and SNESIM suggest that GOSIM has a better pattern reproduction ability for both unconditional and conditional simulations. A sensitivity analysis illustrates that pattern size significantly impacts the time costs and simulation quality. In conditional simulations, the weights of conditioning data should be as small as possible to maintain a good simulation quality. The study shows that large iteration numbers at coarser scales increase simulation quality and small iteration numbers at finer scales significantly save simulation time.
Directory of Open Access Journals (Sweden)
Jian Wang
2014-01-01
Full Text Available A large-scale parallel-unit seawater reverse osmosis desalination plant contains many reverse osmosis (RO) units. If the operating conditions change, these RO units will no longer work at the optimal design points computed before the plant was built. The operational optimization problem (OOP) of the plant is to find an operating schedule that minimizes the total running cost when such a change happens. In this paper, the OOP is modelled as a mixed-integer nonlinear programming problem, and a two-stage differential evolution algorithm is proposed to solve it. Experimental results show that the proposed method delivers satisfactory solution quality.
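The optimizer at the core of the proposed method is differential evolution. A minimal single-stage rand/1/bin sketch is shown below; the quadratic objective is a hypothetical stand-in for the plant's running-cost model, and the integer variables and two-stage structure of the paper's method are omitted.

```python
# Basic differential evolution (rand/1/bin): mutate with scaled difference
# vectors, recombine by binomial crossover, select greedily.
import random

def de(f, bounds, np_=20, F=0.5, CR=0.9, iters=150, seed=3):
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    for _ in range(iters):
        for i in range(np_):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            jrand = rng.randrange(dim)
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:    # binomial crossover
                    v = a[j] + F * (b[j] - c[j])       # rand/1 mutation
                else:
                    v = pop[i][j]
                lo, hi = bounds[j]
                trial.append(min(max(v, lo), hi))      # clip to bounds
            if f(trial) <= f(pop[i]):                  # greedy selection
                pop[i] = trial
    return min(pop, key=f)

cost = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2   # stand-in running cost
best = de(cost, [(-5, 5), (-5, 5)])
print(best)
```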
A Temporal Domain Decomposition Algorithmic Scheme for Large-Scale Dynamic Traffic Assignment
Directory of Open Access Journals (Sweden)
Eric J. Nava
2012-03-01
This paper presents a temporal decomposition scheme for large spatial- and temporal-scale dynamic traffic assignment, in which the entire analysis period is divided into Epochs. Vehicle assignment is performed sequentially in each Epoch, thus improving the model scalability and confining the peak run-time memory requirement regardless of the total analysis period. A proposed self-tuning scheme adaptively searches for the run-time-optimal Epoch setting during iterations, regardless of the characteristics of the modeled network. Extensive numerical experiments confirm the promising performance of the proposed algorithmic schemes.
Blackwell, William C., Jr.
2004-01-01
In this paper space is modeled as a lattice of Compton wave oscillators (CWOs) of near-Planck size. It is shown that gravitation and special relativity emerge from the interaction between particles' Compton waves. To develop this CWO model an algorithmic approach was taken, incorporating simple rules of interaction at the Planck scale developed using well-known physical laws. This technique naturally leads to Newton's law of gravitation and a new form of doubly special relativity. The model is in apparent agreement with the holographic principle, and it predicts a cutoff energy for ultrahigh-energy cosmic rays that is consistent with observational data.
Solving for Micro- and Macro- Scale Electrostatic Configurations Using the Robin Hood Algorithm
Formaggio, J A; Corona, T J; Stefancic, H; Abraham, H; Gluck, F
2011-01-01
We present a novel technique by which highly segmented electrostatic configurations can be solved. The Robin Hood method is a matrix-inversion algorithm optimized for solving high-density boundary element method (BEM) problems. We illustrate the capabilities of this solver by studying two distinct geometry scales: (a) the electrostatic potential of a large-volume beta detector and (b) the field enhancement present at the surface of electrode nanostructures. Geometries with on the order of 10^5 elements are easily modeled and solved without loss of accuracy. The technique has recently been expanded to include dielectrics and magnetic materials.
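The core idea behind Robin Hood-style iteration, repeatedly relaxing the element that is currently worst served (largest residual), can be illustrated on a toy linear system. This is a maximal-residual relaxation sketch on a made-up diagonally dominant matrix, not the authors' optimized BEM solver, which works on surface charge elements and adds much more machinery.

```python
# Maximal-residual relaxation on A x = b: at each step, find the equation with
# the largest residual and correct only that unknown.
def robin_hood_solve(A, b, tol=1e-10, max_steps=10000):
    n = len(b)
    x = [0.0] * n
    for _ in range(max_steps):
        # Residual r = b - A x
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        k = max(range(n), key=lambda i: abs(r[i]))   # worst-served element
        if abs(r[k]) < tol:
            break
        x[k] += r[k] / A[k][k]    # relax only that element
    return x

# Small diagonally dominant stand-in for a BEM influence matrix
A = [[4.0, 1.0, 0.0],
     [1.0, 5.0, 2.0],
     [0.0, 2.0, 6.0]]
b = [5.0, 8.0, 8.0]
x = robin_hood_solve(A, b)
print(x)  # converges to [1, 1, 1]
```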
Mélin, Régis; Caputo, Jean-Guy; Yang, Kang; Douçot, Benoît
2017-02-01
A three-terminal Josephson junction consists of three superconductors coupled coherently to a small nonsuperconducting island, such as a diffusive metal, a single or double quantum dot. A specific resonant single quantum dot three-terminal Josephson junction (Sa,Sb,Sc) biased with voltages (V ,-V ,0 ) is considered, but the conclusions hold more generally for resonant semiconducting quantum wire setups. A simple physical picture of the steady state is developed, using Floquet theory. It is shown that the equilibrium Andreev bound states (for V =0 ) evolve into nonequilibrium Floquet-Wannier-Stark-Andreev (FWS-Andreev) ladders of resonances (for V ≠0 ). These resonances acquire a finite width due to multiple Andreev reflection (MAR) processes. We also consider the effect of an extrinsic linewidth broadening on the quantum dot, introduced through a Dynes phenomenological parameter. The dc-quartet current manifests a crossover between the extrinsic relaxation dominated regime at low voltage to an intrinsic relaxation due to MAR processes at higher voltage. Finally, we study the coupling between the two FWS-Andreev ladders due to Landau-Zener-Stückelberg transitions, and its effect on the crossover in the relaxation mechanism. Three important low-energy scales are identified, and a perspective is to relate those low-energy scales to a recent noise cross-correlation experiment (Y. Cohen et al., arXiv:1606.08436).
Conceptual design based on scale laws and algorithms for sub-critical transmutation reactors
Energy Technology Data Exchange (ETDEWEB)
Lee, Kwang Gu; Chang, Soon Heung [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)
1997-12-31
In order to achieve effective computer-aided conceptual design of an integrated nuclear power reactor, not only is a smooth information flow required, but decision making for both the conceptual design and the construction process design must also be synthesized. In addition, the relations between successive design steps and the methodologies for optimizing the decision variables, in particular scaling laws and scaling criteria, are examined in this paper. With respect to running the system, an integrated optimization process is proposed in which decisions concerning the conceptual design are made simultaneously. According to the proposed reactor types and power levels, integrated optimization problems are formulated and expressed as a multi-objective optimization problem. The algorithm for solving the problem is also presented, and the proposed method is applied to designing integrated sub-critical reactors. 6 refs., 5 figs., 1 tab. (Author)
An inertia-free filter line-search algorithm for large-scale nonlinear programming
Energy Technology Data Exchange (ETDEWEB)
Chiang, Nai-Yuan; Zavala, Victor M.
2016-02-15
We present a filter line-search algorithm that does not require inertia information of the linear system. This feature enables the use of a wide range of linear algebra strategies and libraries, which is essential to tackle large-scale problems on modern computing architectures. The proposed approach performs curvature tests along the search step to detect negative curvature and to trigger convexification. We prove that the approach is globally convergent and we implement the approach within a parallel interior-point framework to solve large-scale and highly nonlinear problems. Our numerical tests demonstrate that the inertia-free approach is as efficient as inertia detection via symmetric indefinite factorizations. We also demonstrate that the inertia-free approach can lead to reductions in solution time because it reduces the amount of convexification needed.
CANDIDATE TREE-IN-BUD PATTERN SELECTION AND CLASSIFICATION USING BALL SCALE ENCODING ALGORITHM
Directory of Open Access Journals (Sweden)
T. Akilandeswari
2013-10-01
Full Text Available Asthma, chronic obstructive pulmonary disease, influenza, pneumonia, tuberculosis, lung cancer and many other breathing problems are leading causes of death and disability all over the world. These diseases affect the lung. Radiology is a primary assessment method, but has low specificity in predicting the presence of these diseases. Computer Assisted Detection (CAD) will help specialists detect these diseases at an early stage. A method has been proposed by Ulas Bagci to detect lung abnormalities using fuzzy connected object estimation and ball scale encoding, comparing various features extracted from local patches of lung CT images. In this paper, the Tree-in-Bud patterns are selected after segmentation by using the ball scale encoding algorithm.
ShearLab: A Rational Design of a Digital Parabolic Scaling Algorithm
Kutyniok, Gitta; Zhuang, Xiaosheng
2011-01-01
Multivariate problems are typically governed by anisotropic features such as edges in images. A common thread among most of the various directional representation systems which have been proposed to deliver sparse approximations of such features is the utilization of parabolic scaling. One prominent example is the shearlet system. Our objective in this paper is three-fold: We firstly develop a digital shearlet theory which is rationally designed in the sense that it is the digitization of the existing shearlet theory for continuous data. This implies that shearlet theory provides a unified treatment of both the continuum and digital realms. Secondly, we analyze the utilization of pseudo-polar grids and the pseudo-polar Fourier transform for digital implementations of parabolic scaling algorithms. We derive an isometric pseudo-polar Fourier transform by careful weighting of the pseudo-polar grid, allowing exploitation of its adjoint for the inverse transform. This leads to a digital implementation of the shear...
Scales of Time Where the Quantum Discord Allows an Efficient Execution of the DQC1 Algorithm
Directory of Open Access Journals (Sweden)
M. Ávila
2014-01-01
Full Text Available The power-of-one-qubit deterministic quantum processor (DQC1) (Knill and Laflamme, 1998) generates a nonclassical correlation known as quantum discord. The DQC1 algorithm executes efficiently, with a characteristic quantity τ = Tr[U_n]/2^n, where U_n is an n-qubit unitary gate. For pure states, quantum discord means entanglement, while for mixed states it is more than entanglement. Quantum discord can be thought of as the mutual information between two systems. Within the quantum discord approach, the role of time in an efficient evaluation of τ is discussed. It is found that the smaller the value of t/T, where t is the execution time of the DQC1 algorithm and T is the time scale over which the nonclassical correlations prevail, the more efficient the calculation of τ. A Mössbauer nucleus might be a good processor for the DQC1 algorithm, while a nuclear spin chain would not be efficient for the calculation of τ.
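The quantity τ = Tr[U_n]/2^n is just the normalized trace of the unitary. For a small diagonal unitary it can be computed directly, which is what DQC1 would estimate from measurements on its single clean qubit. The 3-qubit phases below are arbitrary example values.

```python
# Normalized trace tau = Tr[U_n] / 2^n for a diagonal n-qubit unitary
# U = diag(exp(i*phase_k)). Since U is unitary, |tau| <= 1 always holds.
import cmath

def normalized_trace(phases):
    """tau for U = diag(exp(i*phase_k)); len(phases) should be 2**n."""
    dim = len(phases)
    return sum(cmath.exp(1j * p) for p in phases) / dim

# 3-qubit example (2**3 = 8 diagonal phases, chosen arbitrarily)
phases = [0.0, 0.3, 0.6, 0.9, 1.2, 1.5, 1.8, 2.1]
tau = normalized_trace(phases)
print(abs(tau))   # magnitude of the normalized trace, at most 1
```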
An Efficient Addressing Scheme and Its Routing Algorithm for a Large-Scale Wireless Sensor Network
Directory of Open Access Journals (Sweden)
Choi Jeonghee
2008-01-01
Full Text Available So far, various addressing and routing algorithms have been extensively studied for wireless sensor networks (WSNs), but many of them were limited to covering fewer than hundreds of sensor nodes. This is largely due to stringent requirements for fully distributed coordination among sensor nodes, leading to wasteful use of the available address space. As there is a growing need for large-scale WSNs, it will be extremely challenging to support more than thousands of nodes using existing standards. Moreover, it is highly unlikely that the existing standards will change, primarily due to backward compatibility issues. In response, we propose an elegant addressing scheme and its routing algorithm. While maintaining the existing address scheme, it tackles the wastage problem and requires no additional memory storage during routing. We also present an adaptive routing algorithm for location-aware applications, using our addressing scheme. Through a series of simulations, we show that our approach achieves half the routing time of the existing standard in a ZigBee network.
Khatchikian, C.; Sangermano, F.; Kendell, D.; Livdahl, T.
2010-01-01
The present work evaluates the use of species distribution model (SDM) algorithms to classify areas of high density of small-container Aedes mosquitoes at a fine scale in the Bermuda islands. Weekly ovitrap data collected by the Health Department of Bermuda (UK) for the years 2006 and 2007 were used for the models. The algorithms evaluated were Bioclim, Domain, GARP, logistic regression, and MaxEnt. Models were evaluated according to performance and robustness. The area under the Receiver Operating Characteristic (ROC) curve was used to evaluate each model's performance, and robustness was assessed by considering the spatial correlation between classification risks for the two datasets. Relative to the other algorithms, logistic regression was the best model for classifying high-risk areas, and the maximum entropy approach (MaxEnt) presented the second best performance. We report the importance of covariables for these two models, and discuss the utility of SDMs for vector control efforts and the potential for the development of scripts that automate the task of creating risk assessment maps. PMID:21198711
Performance Characterization of a Rover Navigation Algorithm Using Large-Scale Simulation
Directory of Open Access Journals (Sweden)
Richard Madison
2007-01-01
Full Text Available Autonomous rover navigation is a critical technology for robotic exploration of Mars. Simulation allows more extensive testing of such technologies than would be possible with hardware test beds alone. A large number of simulations, running in parallel, can test an algorithm under many different operating conditions to quickly identify the operational envelope of the technology and identify failure modes that were not discovered in more limited testing. GESTALT is the autonomous navigation algorithm developed for NASA's Mars rovers. ROAMS is a rover simulator developed to support the Mars program. We have integrated GESTALT into ROAMS to test closed-loop, autonomous navigation in simulation. We have developed a prototype capability to run many copies of ROAMS in parallel on a supercomputer, varying input parameters to rapidly explore GESTALT's performance across a parameter space. Using these tools, we have demonstrated that large scale simulation can identify performance limits and unexpected behaviors in an algorithm. Such parallel simulation was able to test approximately 500 parameter combinations in the time required for a single test on a hardware test bed.
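The pattern described, running many independent simulations over a parameter grid in parallel and collecting pass/fail outcomes, can be sketched compactly. Everything here is a stand-in: the "simulation", its failure envelope, and the parameter ranges are invented for illustration, and a thread pool replaces the supercomputer used for the real ROAMS/GESTALT campaign.

```python
# Parallel parameter-space sweep: map a stand-in simulation over a grid of
# (slope, rock density) combinations and collect the failure region.
import itertools
from concurrent.futures import ThreadPoolExecutor

def fake_simulation(params):
    """Stand-in for one navigation run; returns (params, success)."""
    slope, rock_density = params
    # Hypothetical failure envelope: steep slopes plus many rocks => failure
    success = slope * rock_density < 6.0
    return params, success

slopes = [0, 5, 10, 15, 20]            # degrees
rock_densities = [0.1, 0.3, 0.5]       # rocks per square meter
grid = list(itertools.product(slopes, rock_densities))

with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(fake_simulation, grid))

failures = [p for p, ok in results.items() if not ok]
print(f"{len(failures)} of {len(grid)} parameter combinations failed")
```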
Energy Technology Data Exchange (ETDEWEB)
Örnek, Ahmet, E-mail: ahmetornek0302@hotmail.com [Kafkas University, Atatürk Vocational School of Healthcare, 36100 Kars (Turkey); Bulut, Emrah [Sakarya University, Department of Chemistry, 54187 Sakarya (Turkey); Can, Mustafa [Sakarya University, Arifiye Vocational School, 54580 Sakarya (Turkey)
2015-08-15
The carbon-free LiNiPO4 and cobalt-doped LiNi1-xCoxPO4/C (x = 0.0-1.0) were synthesized and investigated for high-voltage applications (> 4 V) in Li-ion batteries. Nano-scale composites were prepared by a handy sol-gel approach using citric acid under a slightly reductive gas atmosphere (Ar-H2, 85:15%). Structural and morphological characteristics of the powders were revealed by X-ray powder diffraction (XRD), field-emission scanning electron microscopy (FE-SEM), high-resolution transmission electron microscopy (HR-TEM) and inductively coupled plasma (ICP). Except for a small impurity phase (Ni3P), phase-pure samples crystallized in the olivine lattice structure with a linear relationship between the lattice parameters (a, b and c) and the chemical composition. The FE-SEM images showed that the LiNiPO4/C particles (50-80 nm) did not agglomerate, and that agglomeration increased with cobalt content. The electrochemical properties of all electrodes were investigated by galvanostatic charge-discharge measurements. Substitution of Ni2+ by Co2+ led to higher electronic conductivity and more effective Li+ ion mobility. At 100% cobalt content, the capacity reached a higher level (146.2 mA h g-1) and good capacity retention of 85.1% was observed at the end of 60 cycles. Cyclic voltammetry (CV) revealed that the LiCoPO4/C electrode had improved electrochemical properties. The Ni3+/Ni2+ redox couple was not observed for carbon-free LiNiPO4; nevertheless, the carbon-coated LiNiPO4 sample exhibited significant oxidation (5.26 V) and reduction (5.08 V) peaks. With this study, the characteristics of the LiNi1-xCoxPO4/C series were evaluated and discussed in depth. - Highlights: • Structural, morphological and electrochemical effects of Co doped LiNi1-x
Quartic scaling MP2 for solids: A highly parallelized algorithm in the plane-wave basis
Schäfer, Tobias; Kresse, Georg
2016-01-01
We present a low-complexity algorithm to calculate the correlation energy of periodic systems in second-order Møller-Plesset perturbation theory (MP2). In contrast to previous approximation-free MP2 codes, our implementation possesses quartic scaling, O(N^4), with respect to the system size N and offers an almost ideal parallelization efficiency. The general issue that the correlation energy converges slowly with the number of basis functions is solved by an internal basis set extrapolation. The key concept for reducing the scaling of the algorithm is to eliminate all summations over virtual bands, which can be elegantly achieved in the Laplace-transformed MP2 (LTMP2) formulation using plane-wave basis sets. Analogously, this approach could allow the calculation of second-order screened exchange (SOSEX) as well as particle-hole ladder diagrams with similarly low complexity. Hence, the presented method can be considered a step towards systematically improved correlation energies.
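The Laplace-transform trick that makes this decoupling possible rewrites the orbital-energy denominator as an integral of exponentials, 1/Δ = ∫₀^∞ exp(-Δt) dt for Δ > 0, so the orbital indices factorize and summations over virtual bands can be eliminated. A numerical check of the identity with a simple quadrature (nodes and Δ value chosen arbitrarily):

```python
# Verify 1/Delta = integral_0^inf exp(-Delta*t) dt by trapezoidal quadrature.
# Production LTMP2 codes use a handful of optimized quadrature points instead.
import math

def laplace_denominator(delta, n_points=2000, t_max=40.0):
    """Approximate 1/delta via trapezoidal quadrature of exp(-delta*t)."""
    dt = t_max / n_points
    total = 0.5 * (1.0 + math.exp(-delta * t_max))  # endpoint terms at t=0, t_max
    for k in range(1, n_points):
        total += math.exp(-delta * k * dt)
    return total * dt

delta = 0.8   # stand-in for eps_a + eps_b - eps_i - eps_j (positive gap)
approx = laplace_denominator(delta)
print(approx, 1.0 / delta)  # the two values agree closely
```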
EvArnoldi: A New Algorithm for Large-Scale Eigenvalue Problems.
Tal-Ezer, Hillel
2016-05-19
Eigenvalues and eigenvectors are an essential theme in numerical linear algebra. Their study is mainly motivated by their high importance in a wide range of applications. Knowledge of eigenvalues is essential in quantum molecular science: solutions of the Schrödinger equation for the electrons composing the molecule are the basis of electronic structure theory, and electronic eigenvalues compose the potential energy surfaces for nuclear motion. The eigenvectors allow calculation of dipole transition matrix elements, the core of spectroscopy. The vibrational dynamics of a molecule also requires knowledge of the eigenvalues of the vibrational Hamiltonian. Typically in these problems the dimension of the Hilbert space is huge, while practically only a small subset of eigenvalues is required. In this paper, we present a highly efficient algorithm, named EvArnoldi, for solving large-scale eigenvalue problems. The algorithm, in its basic formulation, is mathematically equivalent to ARPACK (Sorensen, D. C. Implicitly Restarted Arnoldi/Lanczos Methods for Large Scale Eigenvalue Calculations; Springer, 1997; Lehoucq, R. B.; Sorensen, D. C. SIAM Journal on Matrix Analysis and Applications 1996, 17, 789; Calvetti, D.; Reichel, L.; Sorensen, D. C. Electronic Transactions on Numerical Analysis 1994, 2, 21) (or eigs of Matlab) but significantly simpler.
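The Arnoldi idea underlying both ARPACK and EvArnoldi is to build an orthonormal Krylov basis and a small Hessenberg matrix H whose eigenvalues (Ritz values) approximate extremal eigenvalues of the large matrix A. A bare-bones sketch on a made-up 4x4 matrix; real codes add restarting, deflation, and robust reorthogonalization:

```python
# Minimal Arnoldi iteration plus power iteration on the small Hessenberg matrix
# to extract the dominant Ritz value.
import math

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def arnoldi(A, m, v0):
    norm = math.sqrt(sum(x * x for x in v0))
    V = [[x / norm for x in v0]]                     # orthonormal Krylov basis
    H = [[0.0] * m for _ in range(m + 1)]
    for j in range(m):
        w = matvec(A, V[j])
        for i in range(j + 1):                       # Gram-Schmidt against basis
            H[i][j] = sum(a * b for a, b in zip(w, V[i]))
            w = [a - H[i][j] * b for a, b in zip(w, V[i])]
        H[j + 1][j] = math.sqrt(sum(x * x for x in w))
        if H[j + 1][j] < 1e-12:                      # happy breakdown
            break
        V.append([x / H[j + 1][j] for x in w])
    return [row[:m] for row in H[:m]]                # square part of H

def dominant_eig(M, iters=500):
    """Power iteration followed by a Rayleigh quotient."""
    v = [1.0] * len(M)
    for _ in range(iters):
        v = matvec(M, v)
        s = math.sqrt(sum(x * x for x in v))
        v = [x / s for x in v]
    Mv = matvec(M, v)
    return sum(a * b for a, b in zip(v, Mv))

A = [[9.0, 1.0, 0.0, 0.0],      # arbitrary symmetric tridiagonal test matrix
     [1.0, 5.0, 1.0, 0.0],
     [0.0, 1.0, 3.0, 1.0],
     [0.0, 0.0, 1.0, 1.0]]
H = arnoldi(A, 4, [1.0, 1.0, 1.0, 1.0])   # full Krylov space: Ritz = exact
ritz = dominant_eig(H)
direct = dominant_eig(A)                  # same extraction applied to A itself
print(ritz, direct)
```

With m equal to the matrix dimension the Krylov space is the whole space, so the dominant Ritz value matches the true dominant eigenvalue; in practice m is far smaller than the dimension of A.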
Analysis of Network Clustering Algorithms and Cluster Quality Metrics at Scale.
Emmons, Scott; Kobourov, Stephen; Gallant, Mike; Börner, Katy
2016-01-01
Notions of community quality underlie the clustering of networks. While studies surrounding network clustering are increasingly common, a precise understanding of the relationship between different cluster quality metrics is unknown. In this paper, we examine the relationship between stand-alone cluster quality metrics and information recovery metrics through a rigorous analysis of four widely used network clustering algorithms: Louvain, Infomap, label propagation, and smart local moving. We consider the stand-alone quality metrics of modularity, conductance, and coverage, and the information recovery metrics of adjusted Rand score, normalized mutual information, and a variant of normalized mutual information used in previous work. Our study includes both synthetic graphs and empirical data sets of sizes varying from 1,000 to 1,000,000 nodes. We find significant differences among the results of the different cluster quality metrics; for example, clustering algorithms can return a value of 0.4 out of 1 on modularity but score 0 out of 1 on information recovery. We find conductance, though imperfect, to be the stand-alone quality metric that best indicates performance on the information recovery metrics. Additionally, our study shows that the variant of normalized mutual information used in previous work cannot be assumed to differ only slightly from traditional normalized mutual information. Smart local moving is the overall best-performing algorithm in our study, but discrepancies between cluster evaluation metrics prevent us from declaring it an absolutely superior algorithm. Interestingly, Louvain performed better than Infomap in nearly all the tests in our study, contradicting the results of previous work in which Infomap was superior to Louvain. We find that although label propagation performs poorly when clusters are less clearly defined, it scales efficiently and accurately to large graphs with well-defined clusters.
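One of the stand-alone metrics compared above, Newman's modularity Q, is simple to compute for a hard partition. A sketch on a made-up toy graph (two triangles joined by a bridge edge):

```python
# Modularity Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j),
# computed as (fraction of intra-community edges) - (expected fraction under
# the configuration model).
def modularity(edges, community):
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    # Fraction of edges falling inside communities
    e_in = sum(1 for u, v in edges if community[u] == community[v]) / m
    # Expected fraction given the community degree sums
    comm_deg = {}
    for node, k in degree.items():
        c = community[node]
        comm_deg[c] = comm_deg.get(c, 0) + k
    expected = sum((d / (2 * m)) ** 2 for d in comm_deg.values())
    return e_in - expected

# Two triangles joined by one bridge edge
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
partition = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"}
print(modularity(edges, partition))  # -> approximately 0.357
```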
A novel graph-based partitioning algorithm for large-scale dynamical systems
Kamelian, Saeed; Salahshoor, Karim
2015-01-01
In this paper, a novel graph-based system partitioning approach is proposed to facilitate the design of distributed or decentralised control in large-scale dynamical systems. In large-scale dynamical systems, a decomposition method is required to determine a suitable set of distributed subsystems and their relevant variables. In the proposed approach, a decomposition algorithm starts to generate an overall graph representation of the system model in the form of a new weighted digraph on the basis of a sensitivity analysis concept to quantify the coupling strengths among the system variables in terms of graph edge weights. The produced weighted digraph and its structural information are then used to partition the system model. All the potential system control inputs are first characterised as the main graph vertices, representing fixed subsystem centres. Then, the remaining vertices, representing system states or outputs, are assigned to the created subgraphs. Once the initial grouping is accordingly formed, a merging routine is automatically conducted to merge the small subgraphs into other subgraphs, iteratively searching for smaller cut sizes. Each time a merging occurs, the total cost of the merged configuration, being defined in terms of an averaged linear quadratic regulator (LQR) metric, is used as a novel dynamic performance metric versus total group number reduction to terminate the algorithm at the best grouping result. A chemical industrial process plant is used as a benchmark to assess the performance of the proposed methodology in fulfilling the system partitioning objective. The output result of the algorithm is then comparatively used for a decentralised non-linear model-based predictive control methodology to demonstrate its ultimate merits.
Energy Technology Data Exchange (ETDEWEB)
Westerly, David C. [Department of Radiation Oncology, University of Colorado School of Medicine, Aurora, Colorado 80045 (United States); Mo Xiaohu; DeLuca, Paul M. Jr. [Department of Medical Physics, School of Medicine and Public Health, University of Wisconsin, Madison, Wisconsin 53705 (United States); Tome, Wolfgang A. [Department of Medical Physics, School of Medicine and Public Health, University of Wisconsin, Madison, Wisconsin 53705 and Institute of Onco-Physics, Albert Einstein College of Medicine and Division of Medical Physics, Department of Radiation Oncology, Montefiore Medical Center, Bronx, New York 10461 (United States); Mackie, Thomas R. [Department of Medical Physics, School of Medicine and Public Health, University of Wisconsin, Madison, Wisconsin 53705 and Department of Human Oncology, School of Medicine and Public Health, University of Wisconsin, Madison, Wisconsin 53792 (United States)
2013-06-15
Purpose: Pencil beam algorithms are commonly used for proton therapy dose calculations. Szymanowski and Oelfke ['Two-dimensional pencil beam scaling: An improved proton dose algorithm for heterogeneous media,' Phys. Med. Biol. 47, 3313-3330 (2002)] developed a two-dimensional (2D) scaling algorithm which accurately models the radial pencil beam width as a function of depth in heterogeneous slab geometries using a scaled expression for the radial kernel width in water as a function of depth and kinetic energy. However, an assumption made in the derivation of the technique limits its range of validity to cases where the input expression for the radial kernel width in water is derived from a local scattering power model. The goal of this work is to derive a generalized form of 2D pencil beam scaling that is independent of the scattering power model and appropriate for use with any expression for the radial kernel width in water as a function of depth. Methods: Using Fermi-Eyges transport theory, the authors derive an expression for the radial pencil beam width in heterogeneous slab geometries which is independent of the proton scattering power and related quantities. The authors then perform test calculations in homogeneous and heterogeneous slab phantoms using both the original 2D scaling model and the new model with expressions for the radial kernel width in water computed from both local and nonlocal scattering power models, as well as a nonlocal parameterization of Moliere scattering theory. In addition to kernel width calculations, dose calculations are also performed for a narrow Gaussian proton beam. Results: Pencil beam width calculations indicate that both 2D scaling formalisms perform well when the radial kernel width in water is derived from a local scattering power model. Computing the radial kernel width from a nonlocal scattering model results in the local 2D scaling formula under-predicting the pencil beam width by as much as 1.4 mm (21%) at
Institute of Scientific and Technical Information of China (English)
Shan Hu; Wenzhen Zhu; Daoyu Hu; XiaoYan Meng; Jinhua Zhang; Weijia Wan; Li Zhou
2015-01-01
Objective To evaluate the feasibility of using a low concentration of contrast medium (Visipaque 270 mgI/mL), low tube voltage, and an advanced image reconstruction algorithm in head and neck computed tomography angiography (CTA). Methods Forty patients (22 men and 18 women; average age 48.7 ± 14.25 years; average body mass index 23.9 ± 3.7 kg/m2) undergoing CTA for suspected vascular diseases were randomly assigned into two groups. Group A (n = 20) was administered 370 mgI/mL contrast medium, and group B (n = 20) was administered 270 mgI/mL contrast medium. In both groups, contrast was administered at a rate of 4.8 mL/s with an injection volume of 0.8 mL/kg. Images of group A were obtained with 120 kVp and filtered back projection (FBP) reconstruction, whereas images of group B were obtained with 80 kVp and an 80% adaptive statistical iterative reconstruction algorithm (ASiR). The CT values and standard deviations of intracranial arteries and image noise on the corona radiata were measured to calculate the contrast-to-noise ratio (CNR) and signal-to-noise ratio (SNR). The beam-hardening artifacts (BHAs) around the skull base were calculated. Two readers evaluated the image quality with volume rendered (VR) images using scores from 1 to 5. The values between the two groups were statistically compared. Results The mean CT value of the intracranial arteries in group B was significantly higher than that in group A (P < 0.001). The CNR and SNR values in group B were also statistically higher than those in group A (P < 0.001). Image noise and BHAs were not significantly different between the two groups. The image quality score of VR images in group B was significantly higher than that in group A (P = 0.001). However, the quality scores of axial enhancement images in group B were significantly smaller than those in group A (P < 0.001). The CT dose index volume and dose-length product were decreased by 63.8% and 64%, respectively, in group B (P < 0.001 for both). Conclusion Visipaque
Ippolito, Alessandro; Scotto, Carlo; Altadill, David; Blanch, Estefania
2017-04-01
The OIASA algorithm (Oblique Ionograms Automatic Scaling Algorithm) for the identification of the trace in oblique ionograms has been applied to the oblique ionograms produced at Ebro Observatory (Spain) and related to the radio link between the ionospheric stations of Dourbes (50.1 N, 4.6 E) and Roquetes (40.8 N, 0.5 E). Four different periods of 2015 have been analysed, each of them characterised by the occurrence of geomagnetic storms. The algorithm allows the determination of the Maximum Usable Frequency (MUF) for communication between the transmitter and receiver, and shows a very good capacity to automatically reject poor quality ionograms. The behaviour and performance of the autoscaling programs under geomagnetically disturbed conditions have been evaluated. The results show a good agreement between MUF values provided by the automatic scaling algorithm and the MUF values manually scaled by an expert operator. Furthermore, the results show the good capabilities of OIASA in discarding ionograms that lack sufficient information.
Voltage Unbalance Compensation with Smart Three-phase Loads
DEFF Research Database (Denmark)
Douglass, Philip; Trintis, Ionut; Munk-Nielsen, Stig
2016-01-01
voltage, but it does not reduce the negative sequence voltage. The controller that uses phase-phase voltage as input eliminates negative sequence voltage, and reduces voltage deviations from the average to approximately half their initial value. Current unbalance is reduced when the voltage unbalance...... is caused by asymmetrical loads. These results suggest that the optimal algorithm to reduce system unbalance depends on which system parameter is most important: phase-neutral voltage unbalance, phase-phase voltage unbalance, or current unbalance....
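The quantities the abstract's controllers act on, negative sequence voltage and voltage unbalance, come from the standard Fortescue (symmetrical component) decomposition. The sketch below illustrates that decomposition on invented phasors; it is not the paper's controller.

```python
# Hedged sketch: symmetrical components of three phase voltage phasors.
# Example values are illustrative, not measurements from the paper.
import cmath, math

A = cmath.exp(2j * math.pi / 3)   # 120-degree rotation operator

def sequence_components(va, vb, vc):
    """Return (zero, positive, negative) sequence phasors."""
    v0 = (va + vb + vc) / 3
    v1 = (va + A * vb + A * A * vc) / 3
    v2 = (va + A * A * vb + A * vc) / 3
    return v0, v1, v2

def unbalance_factor(va, vb, vc):
    """Negative- to positive-sequence magnitude ratio (VUF), in percent."""
    _, v1, v2 = sequence_components(va, vb, vc)
    return 100.0 * abs(v2) / abs(v1)

# A slightly unbalanced set: phase a sags to 220 V while b and c stay at 230 V.
va = 220 + 0j
vb = 230 * cmath.exp(-2j * math.pi / 3)
vc = 230 * cmath.exp(2j * math.pi / 3)
print(round(unbalance_factor(va, vb, vc), 2))
```

A controller fed phase-neutral voltages sees only its own phase's deviation, which is why, as the abstract notes, it cannot drive the negative sequence component to zero on its own.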
Scale-up of nature’s tissue weaving algorithms to engineer advanced functional materials
Ng, Joanna L.; Knothe, Lillian E.; Whan, Renee M.; Knothe, Ulf; Tate, Melissa L. Knothe
2017-01-01
We are literally the stuff from which our tissue fabrics and their fibers are woven and spun. The arrangement of collagen, elastin and other structural proteins in space and time embodies our tissues and organs with amazing resilience and multifunctional smart properties. For example, the periosteum, a soft tissue sleeve that envelops all nonarticular bony surfaces of the body, comprises an inherently “smart” material that gives hard bones added strength under high impact loads. Yet a paucity of scalable bottom-up approaches stymies the harnessing of smart tissues’ biological, mechanical and organizational detail to create advanced functional materials. Here, a novel approach is established to scale up the multidimensional fiber patterns of natural soft tissue weaves for rapid prototyping of advanced functional materials. First second harmonic generation and two-photon excitation microscopy is used to map the microscopic three-dimensional (3D) alignment, composition and distribution of the collagen and elastin fibers of periosteum, the soft tissue sheath bounding all nonarticular bone surfaces in our bodies. Then, using engineering rendering software to scale up this natural tissue fabric, as well as multidimensional weaving algorithms, macroscopic tissue prototypes are created using a computer-controlled jacquard loom. The capacity to prototype scaled up architectures of natural fabrics provides a new avenue to create advanced functional materials.
Multi-Scale Probability Mapping: groups, clusters and an algorithmic search for filaments in SDSS
Smith, Anthony G; Hunstead, Richard W; Pimbblet, Kevin A
2012-01-01
We have developed a multi-scale structure identification algorithm for the detection of overdensities in galaxy data that identifies structures having radii within a user-defined range. Our "multi-scale probability mapping" technique combines density estimation with a shape statistic to identify local peaks in the density field. This technique takes advantage of a user-defined range of scale sizes, which are used in constructing a coarse-grained map of the underlying fine-grained galaxy distribution, from which overdense structures are then identified. In this study we have compiled a catalogue of groups and clusters at 0.025 < z < 0.24 based on the Sloan Digital Sky Survey, Data Release 7, quantifying their significance and comparing with other catalogues. Most measured velocity dispersions for these structures lie between 50 and 400 km/s. A clear trend of increasing velocity dispersion with radius from 0.2 to 1 Mpc/h is detected, confirming the lack of a sharp division between groups and clusters. A m...
Concepts for benchmarking of homogenisation algorithm performance on the global scale
Directory of Open Access Journals (Sweden)
K. Willett
2014-06-01
Full Text Available The International Surface Temperature Initiative (ISTI) is striving towards substantively improving our ability to robustly understand historical land surface air temperature change at all scales. A key recently completed first step has been collating all available records into a comprehensive open access, traceable and version-controlled databank. The crucial next step is to maximise the value of the collated data through a robust international framework of benchmarking and assessment for product intercomparison and uncertainty estimation. We focus on uncertainties arising from the presence of inhomogeneities in monthly surface temperature data and the varied methodological choices made by various groups in building homogeneous temperature products. The central facet of the benchmarking process is the creation of global scale synthetic analogs to the real-world database where both the "true" series and inhomogeneities are known (a luxury the real-world data do not afford us). Hence algorithmic strengths and weaknesses can be meaningfully quantified and conditional inferences made about the real-world climate system. Here we discuss the necessary framework for developing an international homogenisation benchmarking system on the global scale for monthly mean temperatures. The value of this framework is critically dependent upon the number of groups taking part and so we strongly advocate involvement in the benchmarking exercise from as many data analyst groups as possible to make the best use of this substantial effort.
Energy Technology Data Exchange (ETDEWEB)
Demmel, James W. [Univ. of California, Berkeley, CA (United States)
2017-09-14
This project addresses both communication-avoiding algorithms, and reproducible floating-point computation. Communication, i.e. moving data, either between levels of memory or processors over a network, is much more expensive per operation than arithmetic (measured in time or energy), so we seek algorithms that greatly reduce communication. We developed many new algorithms for both dense and sparse, and both direct and iterative linear algebra, attaining new communication lower bounds, and getting large speedups in many cases. We also extended this work in several ways: (1) We minimize writes separately from reads, since writes may be much more expensive than reads on emerging memory technologies, like Flash, sometimes doing asymptotically fewer writes than reads. (2) We extend the lower bounds and optimal algorithms to arbitrary algorithms that may be expressed as perfectly nested loops accessing arrays, where the array subscripts may be arbitrary affine functions of the loop indices (eg A(i), B(i,j+k, k+3*m-7, …) etc.). (3) We extend our communication-avoiding approach to some machine learning algorithms, such as support vector machines. This work has won a number of awards. We also address reproducible floating-point computation. We define reproducibility to mean getting bitwise identical results from multiple runs of the same program, perhaps with different hardware resources or other changes that should ideally not change the answer. Many users depend on reproducibility for debugging or correctness. However, dynamic scheduling of parallel computing resources, combined with nonassociativity of floating point addition, makes attaining reproducibility a challenge even for simple operations like summing a vector of numbers, or more complicated operations like the Basic Linear Algebra Subprograms (BLAS). We describe an algorithm that computes a reproducible sum of floating point numbers, independent of the order of summation. The algorithm depends only on a
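The reproducibility problem described above, that floating-point addition is not associative, so reordered sums differ bitwise, can be illustrated with exact summation. `math.fsum` computes the correctly rounded sum and is therefore order-independent; it is a stand-in to demonstrate the goal, not the project's own algorithm.

```python
# Hedged illustration: naive left-to-right sums depend on ordering, while an
# exactly rounded sum (math.fsum) is reproducible across orderings.
import math
import random

xs = [1e16, 1.0, -1e16, 1.0] * 1000
perm = xs[:]
random.seed(0)
random.shuffle(perm)

naive_a = sum(xs)            # accumulates rounding error; the 1.0s get absorbed
naive_b = sum(perm)          # a different ordering can give a different result
exact_a = math.fsum(xs)      # correctly rounded, so independent of order
exact_b = math.fsum(perm)

print(naive_a, naive_b)
print(exact_a, exact_b)      # identical: 2000.0 either way
```

The true sum is 2000.0 (two thousand 1.0 terms; the large values cancel), which `fsum` recovers for any permutation, while the naive accumulator loses the small terms whenever the running total sits near 1e16.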
Management Of Large Scale Osmotic Dehydration Solution Using The Pearsons Square Algorithm
Directory of Open Access Journals (Sweden)
Oladejo Duduyemi
2015-01-01
Full Text Available ABSTRACT Osmotic dehydration is a widely researched and advantageous pre-treatment process in food preservation but has not enjoyed industrial acceptance because of its highly concentrated and voluminous effluent. The Pearson's square algorithm was employed to give a focussed attack on the problem by developing a user-friendly template for reconstituting effluents for recycling purposes, using a JavaScript programme. Outflow from a pilot scale plant was reactivated and introduced into a scheme of operation for continuous OD of fruits and vegetables. Screened and re-concentrated effluents were subjected to statistical analysis in comparison to the initial concentration solution at a confidence limit of p < 0.05. The template proved to be an adequate representation of the Pearson's square algorithm; it is sufficiently good in reconstituting used osmotic solutions for repetitive usage. This protocol, if adopted in the industry, is not only environmentally friendly but also promises significant economic improvement of the OD process. Application: Recycling of non-reacting media and as a template for automation in continuous OD processing.
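The Pearson's square rule the template is built on is simple enough to sketch: to hit a target concentration C by blending a strong stock (A) with a weak one (B), mix them in the ratio (C - B) : (A - C). The concentrations below are illustrative, not values from the paper.

```python
# Hedged sketch of the Pearson's square mixing rule.

def pearson_square(conc_strong, conc_weak, conc_target):
    """Return (parts_strong, parts_weak) to blend to the target concentration."""
    if not (conc_weak <= conc_target <= conc_strong):
        raise ValueError("target must lie between the two stock concentrations")
    return conc_target - conc_weak, conc_strong - conc_target

def reconstitute(volume_target, conc_strong, conc_weak, conc_target):
    """Volumes of each stock needed for a given total volume."""
    p_strong, p_weak = pearson_square(conc_strong, conc_weak, conc_target)
    total = p_strong + p_weak
    return (volume_target * p_strong / total, volume_target * p_weak / total)

# Reconstitute 100 L of 50 Brix osmotic solution from 70 Brix fresh stock
# and 20 Brix spent effluent (illustrative numbers).
v_stock, v_effluent = reconstitute(100.0, 70.0, 20.0, 50.0)
print(v_stock, v_effluent)  # 60.0 L of stock, 40.0 L of effluent
```

Checking the blend: 60 L at 70 plus 40 L at 20 gives (60*70 + 40*20) / 100 = 50, the target, which is the mass-balance identity the square encodes.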
Cardinality Estimation Algorithm in Large-Scale Anonymous Wireless Sensor Networks
Douik, Ahmed
2017-08-30
Consider a large-scale anonymous wireless sensor network with unknown cardinality. In such graphs, each node has no information about the network topology and only possesses a unique identifier. This paper introduces a novel distributed algorithm for cardinality estimation and topology discovery, i.e., estimating the number of nodes and the structure of the graph, by querying a small number of nodes and performing statistical inference methods. While the cardinality estimation allows the design of more efficient coding schemes for the network, the topology discovery provides a reliable way for routing packets. The proposed algorithm is shown to produce a cardinality estimate proportional to the best linear unbiased estimator for dense graphs and specific running times. Simulation results attest to the theoretical results and reveal that, for a reasonable running time, querying a small group of nodes is sufficient to perform an estimation of 95% of the whole network. Applications of this work include estimating the number of Internet of Things (IoT) sensor devices, online social users, active protein cells, etc.
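One classical way to estimate an unknown cardinality from a small number of queries is birthday-collision counting: sample node identifiers with replacement and infer n from how often the same node is drawn twice. This sketch illustrates that style of statistical inference; it is not claimed to be the paper's estimator, and all numbers are illustrative.

```python
# Hedged sketch: collision-based cardinality estimation.
# E[collisions] = k(k-1)/(2n) for k uniform draws from n items,
# so n_hat = k(k-1)/(2C) once C collisions are observed.
import random
from collections import Counter

def estimate_cardinality(sample_ids):
    """Estimate population size from a with-replacement sample of IDs."""
    k = len(sample_ids)
    counts = Counter(sample_ids)
    collisions = sum(c * (c - 1) // 2 for c in counts.values())
    if collisions == 0:
        raise ValueError("no collisions observed; sample more nodes")
    return k * (k - 1) / (2 * collisions)

random.seed(42)
true_n = 100_000
ids = range(true_n)                                 # unique identifiers only
sample = [random.choice(ids) for _ in range(3000)]  # query a small set of nodes
print(estimate_cardinality(sample))                 # lands near 100,000
```

With 3,000 queries against 100,000 nodes, about 45 collisions are expected, enough for an estimate within roughly 15 percent, which mirrors the abstract's point that querying a small group of nodes suffices.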
A Framing Link Based Tabu Search Algorithm for Large-Scale Multidepot Vehicle Routing Problems
Directory of Open Access Journals (Sweden)
Xuhao Zhang
2014-01-01
Full Text Available A framing link (FL)-based tabu search algorithm is proposed in this paper for a large-scale multidepot vehicle routing problem (LSMDVRP). Framing links are generated during successive optimization of current solutions and then taken as skeletons so as to improve optimal seeking ability, speed up the process of optimization, and obtain better results. Based on the comparison between pre- and postmutation routes in the current solution, different parts are extracted. In the current optimization period, links involved in the optimal solution are regarded as candidates for the FL base. Multiple optimization periods exist in the whole algorithm, and there are several potential FLs in each period. If the update condition is satisfied, the FL base is updated, new FLs are added into the current route, and the next period starts. Through adjusting the borderline of the multidepot sharing area with dynamic parameters, the authors define candidate selection principles for three kinds of customer connections, respectively. Link split and the roulette approach are employed to choose FLs. 18 LSMDVRP instances in three groups are studied and new optimal solution values for nine of them are obtained, with higher computation speed and reliability.
Quantum adiabatic algorithm and scaling of gaps at first-order quantum phase transitions.
Laumann, C R; Moessner, R; Scardicchio, A; Sondhi, S L
2012-07-20
Motivated by the quantum adiabatic algorithm (QAA), we consider the scaling of the Hamiltonian gap at quantum first-order transitions, generally expected to be exponentially small in the size of the system. However, we show that a quantum antiferromagnetic Ising chain in a staggered field can exhibit a first-order transition with only an algebraically small gap. In addition, we construct a simple classical translationally invariant one-dimensional Hamiltonian containing nearest-neighbor interactions only, which exhibits an exponential gap at a thermodynamic quantum first-order transition of essentially topological origin. This establishes that (i) the QAA can be successful even across first-order transitions but also that (ii) it can fail on exceedingly simple problems readily solved by inspection, or by classical annealing.
The Iterative Signature Algorithm for the analysis of large scale gene expression data
Bergmann, S R; Barkai, N; Bergmann, Sven; Ihmels, Jan; Barkai, Naama
2003-01-01
We present a new approach for the analysis of genome-wide expression data. Our method is designed to overcome the limitations of traditional techniques when applied to large-scale data. Rather than allotting each gene to a single cluster, we assign both genes and conditions to context-dependent and potentially overlapping transcription modules. We provide a rigorous definition of a transcription module as the object to be retrieved from the expression data. An efficient algorithm that searches for the modules encoded in the data by iteratively refining sets of genes and conditions until they match this definition is established. Each iteration involves a linear map, induced by the normalized expression matrix, followed by the application of a threshold function. We argue that our method is in fact a generalization of Singular Value Decomposition, which corresponds to the special case where no threshold is applied. We show analytically that for noisy expression data our approach leads to better classificatio...
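The iterate-until-fixed-point loop described above can be sketched on a tiny matrix. This is a deliberately simplified toy (plain means and fixed thresholds instead of the paper's normalized matrix and z-score thresholds; the data and parameter names are invented), meant only to show the alternating refinement of gene and condition sets.

```python
# Hedged toy sketch of one Iterative Signature Algorithm style refinement.
# Rows = genes, columns = conditions.

def refine(matrix, genes, t_gene=0.5, t_cond=0.5, iters=20):
    """Alternate: score conditions over the current gene set, threshold;
    score genes over the selected conditions, threshold; stop at a fixed point."""
    n_genes, n_conds = len(matrix), len(matrix[0])
    conds = set()
    for _ in range(iters):
        # condition scores: mean expression over the current gene set
        cond_scores = [sum(matrix[g][c] for g in genes) / len(genes)
                       for c in range(n_conds)]
        conds = {c for c, s in enumerate(cond_scores) if s > t_cond}
        if not conds:
            break
        # gene scores: mean expression over the selected conditions
        gene_scores = [sum(matrix[g][c] for c in conds) / len(conds)
                       for g in range(n_genes)]
        new_genes = {g for g, s in enumerate(gene_scores) if s > t_gene}
        if new_genes == genes:
            break          # converged to a transcription module
        genes = new_genes
    return genes, conds

# A planted module: genes 0-2 co-expressed in conditions 0-1.
M = [[1.0, 0.9, 0.0],
     [0.9, 1.0, 0.1],
     [1.0, 0.8, 0.0],
     [0.0, 0.1, 0.9],
     [0.1, 0.0, 1.0]]
print(refine(M, genes={0, 4}))   # seed with a noisy gene set
```

Even from the noisy seed {0, 4}, the loop converges to the planted module (genes {0, 1, 2}, conditions {0, 1}), which is the self-consistency property the paper's module definition formalizes.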
Topology-oblivious optimization of MPI broadcast algorithms on extreme-scale platforms
Hasanov, Khalid
2015-11-01
© 2015 Elsevier B.V. All rights reserved. Significant research has been conducted in collective communication operations, in particular in MPI broadcast, on distributed memory platforms. Most of the research efforts aim to optimize the collective operations for particular architectures by taking into account either their topology or platform parameters. In this work we propose a simple but general approach to optimization of the legacy MPI broadcast algorithms, which are widely used in MPICH and Open MPI. The proposed optimization technique is designed to address the challenge of extreme scale of future HPC platforms. It is based on hierarchical transformation of the traditionally flat logical arrangement of communicating processors. Theoretical analysis and experimental results on IBM BlueGene/P and a cluster of the Grid'5000 platform are presented.
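The benefit of the hierarchical transformation can be sketched with a deliberately crude cost model (unit cost per send, groups proceeding concurrently; these assumptions are mine, not the paper's): a flat linear broadcast among P processes takes P-1 sequential sends, while splitting into G groups broadcasts first among G leaders and then inside each group in parallel.

```python
# Hedged back-of-envelope model of flat vs. two-level broadcast cost.
import math

def flat_linear_steps(p):
    """Flat linear broadcast: root sends to everyone, one by one."""
    return p - 1

def hierarchical_steps(p, g):
    """Two-level: g group leaders receive sequentially, then each group of
    ceil(p/g) members fills in, with groups working concurrently."""
    group_size = math.ceil(p / g)
    return (g - 1) + (group_size - 1)

p = 1024
best_g = min(range(1, p + 1), key=lambda g: hierarchical_steps(p, g))
print(flat_linear_steps(p), hierarchical_steps(p, best_g), best_g)
```

Under this model the cost drops from P-1 to roughly 2*sqrt(P) at the optimal group count G = sqrt(P), which conveys why a hierarchical rearrangement helps at extreme scale even before any topology awareness is added.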
The Brera Multi-scale Wavelet (BMW) ROSAT HRI source catalog; 1, the algorithm
Lazzati, D; Rosati, P; Panzera, M R; Tagliaferri, G; Lazzati, Davide; Campana, Sergio; Rosati, Piero; Panzera, Maria Rosa; Tagliaferri, Gianpiero
1999-01-01
We present a new detection algorithm based on the wavelet transform for the analysis of high energy astronomical images. The wavelet transform, due to its multi-scale structure, is suited for the optimal detection of point-like as well as extended sources, regardless of any loss of resolution with the off-axis angle. Sources are detected as significant enhancements in the wavelet space, after the subtraction of the non-flat components of the background. Detection thresholds are computed through Monte Carlo simulations in order to establish the expected number of spurious sources per field. The source characterization is performed through a multi-source fitting in the wavelet space. The procedure is designed to correctly deal with very crowded fields, allowing for the simultaneous characterization of nearby sources. To obtain a fast and reliable estimate of the source parameters and related errors, we apply a novel decimation technique which, taking into account the correlation properties of the wavelet transf...
A quantum mechanical NMR simulation algorithm for protein-scale spin systems
Edwards, Luke J; Welderufael, Z T; Lee, Donghan; Kuprov, Ilya
2014-01-01
Nuclear magnetic resonance spectroscopy is one of the few remaining areas of physical chemistry for which polynomially scaling simulation methods have not so far been available. Here, we report such a method and illustrate its performance by simulating common 2D and 3D liquid state NMR experiments (including accurate description of spin relaxation processes) on isotopically enriched human ubiquitin - a protein containing over a thousand nuclear spins forming an irregular polycyclic three-dimensional coupling lattice. The algorithm uses careful tailoring of the density operator space to only include nuclear spin states that are populated to a significant extent. The reduced state space is generated by analyzing spin connectivity and decoherence properties: rapidly relaxing states as well as correlations between topologically remote spins are dropped from the basis set. In the examples provided, the resulting reduction in the quantum mechanical simulation time is by many orders of magnitude.
Li, Xiangyu; Xie, Nijie; Tian, Xinyue
2017-01-01
This paper proposes a scheduling and power management solution for an energy harvesting heterogeneous multi-core WSN node SoC such that the system continues to operate perennially and uses the harvested energy efficiently. The solution consists of a heterogeneous multi-core system oriented task scheduling algorithm and a low-complexity dynamic workload scaling and configuration optimization algorithm suitable for light-weight platforms. Moreover, considering that the power consumption of most WSN applications has the characteristic of data-dependent behavior, we introduce a branch handling mechanism into the solution as well. The experimental result shows that the proposed algorithm can operate in real-time on a lightweight embedded processor (MSP430), and that it can make a system do more valuable work and make use of more than 99.9% of the power budget. PMID:28208730
Shenvi, Neil; Yang, Yang; Yang, Weitao; Schwerdtfeger, Christine; Mazziotti, David
2013-01-01
Tensor hypercontraction is a method that allows the representation of a high-rank tensor as a product of lower-rank tensors. In this paper, we show how tensor hypercontraction can be applied to both the electron repulsion integral (ERI) tensor and the two-particle excitation amplitudes used in the parametric reduced density matrix (pRDM) algorithm. Because only O(r) auxiliary functions are needed in both of these approximations, our overall algorithm can be shown to scale as O(r^4), where r is the number of single-particle basis functions. We apply our algorithm to several small molecules, hydrogen chains, and alkanes to demonstrate its low formal scaling and practical utility. Provided we use enough auxiliary functions, we obtain accuracy similar to that of the traditional pRDM algorithm, somewhere between that of CCSD and CCSD(T).
Shenvi, Neil; van Aggelen, Helen; Yang, Yang; Yang, Weitao; Schwerdtfeger, Christine; Mazziotti, David
2013-08-07
Tensor hypercontraction is a method that allows the representation of a high-rank tensor as a product of lower-rank tensors. In this paper, we show how tensor hypercontraction can be applied to both the electron repulsion integral tensor and the two-particle excitation amplitudes used in the parametric 2-electron reduced density matrix (p2RDM) algorithm. Because only O(r) auxiliary functions are needed in both of these approximations, our overall algorithm can be shown to scale as O(r^4), where r is the number of single-particle basis functions. We apply our algorithm to several small molecules, hydrogen chains, and alkanes to demonstrate its low formal scaling and practical utility. Provided we use enough auxiliary functions, we obtain accuracy similar to that of the standard p2RDM algorithm, somewhere between that of CCSD and CCSD(T).
Institute of Scientific and Technical Information of China (English)
(Anonymous)
2007-01-01
Let G = (V, E) be a complete undirected graph with vertex set V, edge set E, and edge weights l(e) satisfying the triangle inequality. The vertex set V is partitioned into clusters V1, V2, ..., Vk. The clustered traveling salesman problem (CTSP) seeks to compute the shortest Hamiltonian tour that visits all the vertices, in which the vertices of each cluster are visited consecutively. A two-level genetic algorithm (TLGA) was developed for the problem, which favors neither intra-cluster paths nor inter-cluster paths, thus realizing integrated evolutionary optimization for both levels of the CTSP. Results show that the algorithm is more effective than known algorithms. A large-scale traveling salesman problem (TSP) can be converted into a CTSP by clustering so that it can then be solved by the algorithm. Test results demonstrate that the clustering TLGA for large TSPs is more effective and efficient than the classical genetic algorithm.
Directory of Open Access Journals (Sweden)
Ü. Niinemets
2010-06-01
In models of plant volatile isoprenoid emissions, the instantaneous compound emission rate typically scales with the plant's emission potential under specified environmental conditions, also called the emission factor, E_{S}. In the most widely employed plant isoprenoid emission models, the algorithms developed by Guenther and colleagues (1991, 1993), instantaneous variation of the steady-state emission rate is described as the product of E_{S} and light and temperature response functions. When these models are employed in the atmospheric chemistry modeling community, species-specific E_{S} values and parameter values defining the instantaneous response curves are often taken as initially defined. In the current review, we argue that E_{S} as a characteristic used in the models importantly depends on our understanding of which environmental factors affect isoprenoid emissions, and consequently needs standardization during experimental E_{S} determinations. In particular, there is now increasing consensus that in addition to variations in light and temperature, alterations in atmospheric and/or within-leaf CO_{2} concentrations may need to be included in the emission models. Furthermore, we demonstrate that for less volatile isoprenoids, mono- and sesquiterpenes, the emissions are often jointly controlled by compound synthesis and volatility. Because of these combined biochemical and physico-chemical drivers, specification of E_{S} as a constant value is incapable of describing instantaneous emissions under the sole assumptions of fluctuating light and temperature as used in the standard algorithms. The definition of E_{S} also varies depending on the degree of aggregation of E_{S} values in different parameterization schemes (leaf- vs. canopy- or region-scale, species vs. plant functional type levels and various
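The product form E = E_S · C_L · C_T referred to above can be sketched with the commonly cited Guenther et al. (1993) response functions. The parameter values below are the usual published defaults and are assumptions of this sketch, not values taken from the review:

```python
import math

# Guenther et al. (1993) response-function constants (commonly cited defaults)
ALPHA, C_L1 = 0.0027, 1.066          # light-response constants
C_T1, C_T2 = 95000.0, 230000.0       # J/mol
T_S, T_M = 303.0, 314.0              # K: standard and optimum temperatures
R = 8.314                            # J/(mol K)

def light_response(par):
    """C_L: dimensionless light scaling; par is PAR in umol m-2 s-1."""
    return ALPHA * C_L1 * par / math.sqrt(1.0 + ALPHA ** 2 * par ** 2)

def temperature_response(t):
    """C_T: dimensionless temperature scaling; t is leaf temperature in K."""
    num = math.exp(C_T1 * (t - T_S) / (R * T_S * t))
    den = 1.0 + math.exp(C_T2 * (t - T_M) / (R * T_S * t))
    return num / den

def emission_rate(e_s, par, t):
    """Instantaneous emission as E_S times the two response functions."""
    return e_s * light_response(par) * temperature_response(t)
```

Both scalings are close to 1 at the standard conditions (PAR = 1000 umol m-2 s-1, T = 303 K), which is what makes E_S interpretable as the emission under those conditions; the review's point is that this constant-E_S picture breaks down once CO2 effects and compound volatility also matter.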
A robust and fast generic voltage sag detection technique
DEFF Research Database (Denmark)
L. Dantas, Joacillo; Lima, Francisco Kleber A.; Branco, Carlos Gustavo C.;
2015-01-01
In this paper, a fast and robust voltage sag detection algorithm, named VPS2D, is introduced. Using the DSOGI, the algorithm creates a virtual positive-sequence voltage and monitors the fundamental voltage component of each phase. After calculating the aggregate value in the αβ-reference frame, the algorithm can rapidly identify the start and the end of symmetric and asymmetric voltage sags, even if there are harmonics on the grid. Simulation and experimental results are given to validate the proposed algorithm.
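The aggregate-value idea can be illustrated with a plain Clarke transform: for a balanced sinusoidal three-phase set, the magnitude of the αβ voltage vector is constant, so a drop in that magnitude flags a sag. This is a generic sketch under that assumption; VPS2D's DSOGI-based positive-sequence extraction is not reproduced here:

```python
import math

SQRT3 = math.sqrt(3.0)

def clarke(va, vb, vc):
    """Amplitude-invariant Clarke transform of instantaneous phase voltages."""
    v_alpha = (2.0 * va - vb - vc) / 3.0
    v_beta = (vb - vc) / SQRT3
    return v_alpha, v_beta

def aggregate(va, vb, vc):
    """Aggregate voltage magnitude in the alpha-beta frame; constant and
    equal to the phase amplitude for a balanced sinusoidal set."""
    v_alpha, v_beta = clarke(va, vb, vc)
    return math.hypot(v_alpha, v_beta)

def sag_detected(va, vb, vc, nominal=1.0, threshold=0.9):
    """Flag a sag whenever the aggregate value drops below threshold pu."""
    return aggregate(va, vb, vc) < threshold * nominal
```

Because the magnitude is available sample by sample, both the start and the end of a sag can be located within a fraction of a cycle, which is the property the paper's detector exploits.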
Bilevel Traffic Evacuation Model and Algorithm Design for Large-Scale Activities
Directory of Open Access Journals (Sweden)
Danwen Bao
2017-01-01
This paper establishes a bilevel planning model with one master and multiple slaves to solve traffic evacuation problems. The minimum evacuation network saturation and the shortest evacuation time are used as the objective functions for the upper- and lower-level models, respectively. The optimality conditions of this model are also analyzed. An improved particle swarm optimization (PSO) method is proposed by introducing an electromagnetism-like mechanism to solve the bilevel model and enhance its convergence efficiency. A case study is carried out using the Nanjing Olympic Sports Center. The results indicate that, for large-scale activities, the average evacuation time of the classic model is shorter but the road saturation distribution is more uneven, so the overall evacuation efficiency of the network is not high. For induced emergencies, the evacuation time of the bilevel planning model is shortened. When the audience arrival rate is increased from 50% to 100%, the evacuation time is shortened by 22% to 35%, indicating that the optimization effect of the bilevel planning model is stronger than that of the classic model. Therefore, the model and algorithm presented in this paper can provide a theoretical basis for traffic evacuation decision making for large-scale activities.
Tang, Yu-Hang; Karniadakis, George; Crunch Team
2014-03-01
We present a scalable dissipative particle dynamics simulation code, fully implemented on Graphics Processing Units (GPUs) using a hybrid CUDA/MPI programming model, which achieves 10-30 times speedup on a single GPU over 16 CPU cores and almost linear weak scaling across a thousand nodes. A unified framework is developed within which the efficient generation of the neighbor list and the maintenance of particle data locality are addressed. Our algorithm generates strictly ordered neighbor lists in parallel, while the construction is deterministic and makes no use of atomic operations or sorting. Such a neighbor list leads to optimal data-loading efficiency when combined with a two-level particle reordering scheme. A faster in situ generation scheme for Gaussian random numbers is proposed using precomputed binary signatures. We designed custom transcendental functions that are fast and accurate for evaluating the pairwise interaction. Computer benchmarks demonstrate the speedup of our implementation over the CPU implementation as well as strong and weak scalability. A large-scale simulation of spontaneous vesicle formation consisting of 128 million particles was conducted to illustrate the practicality of our code in real-world applications. This work was supported by the new Department of Energy Collaboratory on Mathematics for Mesoscopic Modeling of Materials (CM4). Simulations were carried out at the Oak Ridge Leadership Computing Facility through the INCITE program under project BIP017.
Energy Technology Data Exchange (ETDEWEB)
Baldwin, C; Eliassi-Rad, T; Abdulla, G; Critchlow, T
2003-04-16
As scientific data sets grow exponentially in size, the need for scalable algorithms that heuristically partition the data increases. In this paper, we describe the three-step evolution of a hierarchical partitioning algorithm for large-scale spatio-temporal scientific data sets generated by massive simulations. The first version of our algorithm uses a simple top-down partitioning technique, which divides the data by using a four-way bisection of the spatio-temporal space. The shortcomings of this algorithm led to the second version of our partitioning algorithm, which uses a bottom-up approach. In this version, a partition hierarchy is constructed by systematically agglomerating the underlying Cartesian grid that is placed on the data. Finally, the third version of our algorithm utilizes the intrinsic topology of the data given in the original scientific problem to build the partition hierarchy in a bottom-up fashion. Specifically, the topology is used to heuristically agglomerate the data at each level of the partition hierarchy. Despite the growing complexity of our algorithms, the third version builds partition hierarchies in less time and can build trees for larger data sets than the previous two versions.
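The bottom-up agglomeration of a Cartesian grid (the second version above) can be sketched as repeatedly merging 2x2 blocks of cells into parent cells until a single root remains. Merging by summation is an assumed simplification for illustration; the paper's heuristic merge criterion is not reproduced:

```python
def agglomerate(grid):
    """One bottom-up level: merge each 2x2 block of cell values into a
    parent cell (here simply by summation)."""
    n = len(grid)
    assert n % 2 == 0 and all(len(row) == n for row in grid)
    return [[grid[2 * i][2 * j] + grid[2 * i][2 * j + 1] +
             grid[2 * i + 1][2 * j] + grid[2 * i + 1][2 * j + 1]
             for j in range(n // 2)] for i in range(n // 2)]

def build_hierarchy(grid):
    """Full partition hierarchy from the leaf grid up to a single root cell."""
    levels = [grid]
    while len(levels[-1]) > 1:
        levels.append(agglomerate(levels[-1]))
    return levels
```

Each level halves the grid resolution, so a 2^k x 2^k leaf grid yields k+1 levels; the third version of the paper's algorithm replaces the fixed 2x2 pattern with topology-driven agglomeration.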
Directory of Open Access Journals (Sweden)
Yap Hoon
2017-02-01
In this paper, a refined reference current generation algorithm based on instantaneous power (pq) theory is proposed for operation of an indirect-current-controlled (ICC) three-level neutral-point diode-clamped (NPC) inverter-based shunt active power filter (SAPF) under non-sinusoidal source voltage conditions. SAPF is recognized as one of the most effective solutions to current harmonics due to its flexibility in dealing with various power system conditions. As for its controller, pq theory has widely been applied to generate the desired reference current due to its simple implementation. However, the conventional dependency on a self-tuning filter (STF) in generating the reference current has significantly limited the mitigation performance of the SAPF. Besides, the conventional STF-based pq theory algorithm still contains needless features that increase computational complexity. Furthermore, the conventional algorithm is mostly designed to suit operation of direct-current-controlled (DCC) SAPFs, which cannot handle switching-ripple problems, leading to inefficient mitigation performance. Therefore, three main improvements are made: replacement of the STF with a mathematical fundamental real power identifier, removal of redundant features, and generation of a sinusoidal reference current. To validate the effectiveness and feasibility of the proposed algorithm, simulation work in MATLAB-Simulink and laboratory tests utilizing a TMS320F28335 digital signal processor (DSP) are performed. Both simulation and experimental findings demonstrate the superiority of the proposed algorithm over the conventional algorithm.
Scaling Up Coordinate Descent Algorithms for Large ℓ1 Regularization Problems
Energy Technology Data Exchange (ETDEWEB)
Scherrer, Chad; Halappanavar, Mahantesh; Tewari, Ambuj; Haglin, David J.
2012-07-03
We present a generic framework for parallel coordinate descent (CD) algorithms that has as special cases the original sequential algorithms of Cyclic CD and Stochastic CD, as well as the recent parallel Shotgun algorithm of Bradley et al. We introduce two novel parallel algorithms that are also special cases---Thread-Greedy CD and Coloring-Based CD---and give performance measurements for an OpenMP implementation of these.
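The sequential Cyclic CD baseline that the framework generalizes can be sketched for the lasso objective min_w 0.5||Xw - y||^2 + lam*||w||_1, where each coordinate update is a soft-thresholding step. This is a minimal serial sketch; the parallel Shotgun, Thread-Greedy, and Coloring-Based variants are not shown:

```python
def soft_threshold(z, t):
    """Shrinkage operator: the closed-form solution of the 1-D lasso step."""
    return (z - t) if z > t else (z + t) if z < -t else 0.0

def lasso_cd(X, y, lam, n_iter=200):
    """Cyclic coordinate descent for 0.5*||Xw - y||^2 + lam*||w||_1.
    X is a list of rows; columns are assumed not identically zero."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    # maintain the residual r = y - Xw incrementally
    r = [y[i] - sum(X[i][j] * w[j] for j in range(d)) for i in range(n)]
    for _ in range(n_iter):
        for j in range(d):
            col = [X[i][j] for i in range(n)]
            sq = sum(c * c for c in col)
            # correlation of column j with the partial residual
            rho = sum(col[i] * (r[i] + col[i] * w[j]) for i in range(n))
            w_new = soft_threshold(rho, lam) / sq
            delta = w_new - w[j]
            for i in range(n):
                r[i] -= col[i] * delta
            w[j] = w_new
    return w
```

Stochastic CD replaces the inner `for j` loop with random coordinate picks, and the parallel variants in the paper update several coordinates concurrently.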
J.F. Sturm; J. Zhang (Shuzhong)
1996-01-01
In this paper we introduce a primal-dual affine scaling method. The method uses a search direction obtained by minimizing the duality gap over a linearly transformed conic section. This direction neither coincides with known primal-dual affine scaling directions (Jansen et al., 1993; Mon
Directory of Open Access Journals (Sweden)
John M. Seiner
2009-03-01
An image pattern tracking algorithm is described in this paper for time-resolved measurements of mini- and micro-scale movements of complex objects. This algorithm works with a high-speed digital imaging system, which records thousands of successive image frames in a short time period. The image pattern of the observed object is tracked among successively recorded image frames with a correlation-based algorithm, so that the time histories of the position and displacement of the investigated object in the camera focus plane are determined with high accuracy. The speed, acceleration, and harmonic content of the investigated motion are obtained by post-processing the position and displacement time histories. The described image pattern tracking algorithm is tested with synthetic image patterns and verified with tests on live insects.
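The correlation-based tracking step can be sketched in one dimension: slide the template over the signal and take the position that maximizes the zero-mean normalized cross-correlation. A real tracker does this in 2-D with sub-pixel interpolation; this is a minimal illustrative sketch:

```python
import math

def ncc(patch, template):
    """Zero-mean normalized cross-correlation of two equal-size patches."""
    n = len(patch)
    mp = sum(patch) / n
    mt = sum(template) / n
    num = sum((p - mp) * (t - mt) for p, t in zip(patch, template))
    dp = math.sqrt(sum((p - mp) ** 2 for p in patch))
    dt = math.sqrt(sum((t - mt) ** 2 for t in template))
    return num / (dp * dt) if dp and dt else 0.0

def track_1d(frame, template):
    """Position of the template in a 1-D frame by maximizing NCC."""
    w = len(template)
    scores = [ncc(frame[i:i + w], template) for i in range(len(frame) - w + 1)]
    return max(range(len(scores)), key=scores.__getitem__)
```

Running `track_1d` on every recorded frame yields the position time history, from which displacement, speed, and acceleration follow by differencing, as the abstract describes.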
A Non-linear Scaling Algorithm Based on chirp-z Transform for Squint Mode FMCW-SAR
Directory of Open Access Journals (Sweden)
Yu Bin-bin
2012-03-01
A non-linear scaling chirp-z imaging algorithm for squint-mode Frequency Modulated Continuous Wave Synthetic Aperture Radar (FMCW-SAR) is presented to address the decline in focusing accuracy. Based on the non-linear characteristics in the range direction of the echo signal in the Doppler domain, a non-linear modulated signal is introduced to perform non-linear scaling based on the chirp-z transform. The error due to range compression and range migration correction is thereby reduced, improving the range resolution of the radar image. With the proposed imaging algorithm, the imaging performance for point targets, compared with that of the original chirp-z algorithm, is improved in range resolution and image contrast while azimuth resolution is maintained.
Bitter, Ingmar; Brown, John E.; Brickman, Daniel; Summers, Ronald M.
2004-04-01
The presented method significantly reduces the time necessary to validate a computed tomographic colonography (CTC) computer-aided detection (CAD) algorithm for colonic polyps applied to a large patient database. As the algorithm is being developed on Windows PCs and our target, a Beowulf cluster, is running on Linux PCs, we made the application dual-platform compatible using a single source-code tree. To maintain, share, and deploy source code, we used CVS (concurrent versions system) software. We built the libraries from their sources for each operating system. Next, we made the CTC CAD algorithm dual-platform compatible and validated that both Windows and Linux produced the same results. Eliminating system dependencies was mostly achieved using the Qt programming library, which encapsulates most of the system-dependent functionality in order to present the same interface on either platform. Finally, we wrote scripts to execute the CTC CAD algorithm in parallel. Running hundreds of simultaneous copies of the CTC CAD algorithm on a Beowulf cluster computing network enables execution in less than four hours on our entire collection of over 2400 CT scans, as compared to a month on a single PC. As a consequence, our complete patient database can be processed daily, boosting research productivity. Large-scale validation of a computer-aided polyp detection algorithm for CT colonography using cluster computing significantly improves the round-trip time of algorithm improvement and revalidation.
Prior Structural Information CT Reconstruction Algorithm Based on Variable Voltage
Institute of Scientific and Technical Information of China (English)
张雪英; 陈平; 潘晋孝
2015-01-01
Variable voltage CT reconstruction acquires projection sequences matched to the effective thickness of the workpiece under a varying tube voltage and reconstructs from them. To obtain higher-quality reconstructed CT images, a variable voltage CT method based on structural priors is established. For each pixel, the effective projection data that fall within the optimal gray-level band at the different voltages are accumulated and reconstructed iteratively, giving the reconstruction under the varying voltage. By setting a threshold, the reconstruction from the low-voltage projection data is divided into an edge portion and a non-edge portion, and the edge structure is applied as a prior to the reconstruction from the high-voltage projection data, compensating for the information lost during variable voltage reconstruction. Simulation experiments show that the method recovers complete workpiece information, yields higher-quality reconstructed images, and produces more stable pixel values.
Efstratiadis, Andreas; Tsoukalas, Ioannis; Kossieris, Panayiotis; Karavokiros, George; Christofides, Antonis; Siskos, Alexandros; Mamassis, Nikos; Koutsoyiannis, Demetris
2015-04-01
Modelling of large-scale hybrid renewable energy systems (HRES) is a challenging task, for which several open computational issues exist. HRES comprise typical components of hydrosystems (reservoirs, boreholes, conveyance networks, hydropower stations, pumps, water demand nodes, etc.), which are dynamically linked with renewables (e.g., wind turbines, solar parks) and energy demand nodes. In such systems, apart from the well-known shortcomings of water resources modelling (nonlinear dynamics, unknown future inflows, large numbers of variables and constraints, conflicting criteria, etc.), additional complexities and uncertainties arise due to the introduction of energy components and associated fluxes. A major difficulty is the need to couple two different temporal scales, given that in hydrosystem modelling monthly simulation steps are typically adopted, yet a faithful representation of the energy balance (i.e., energy production vs. demand) requires a much finer resolution (e.g., hourly). Another drawback is the increase in control variables, constraints, and objectives due to the simultaneous modelling of the two parallel fluxes (i.e., water and energy) and their interactions. Finally, since the driving hydrometeorological processes of the integrated system are inherently uncertain, it is often essential to use synthetically generated input time series of large length in order to assess the system performance in terms of reliability and risk with satisfactory accuracy. To address these issues, we propose an effective and efficient modelling framework, key objectives of which are: (a) the substantial reduction of control variables, through parsimonious yet consistent parameterizations; (b) the substantial decrease of the computational burden of simulation, by linearizing the combined water and energy allocation problem of each individual time step and solving each local sub-problem through very fast linear network programming algorithms; and (c) the substantial
Wang, J.; Hastings, D. E.
1992-01-01
The paper presents the theory and particle simulation results for the ionospheric plasma flow over a large high-voltage space platform at a zero angle of attack and at a large angle of attack. Emphasis is placed on the structures in the large, high-voltage regime and the transient plasma response on the ion-plasma time scale. Special consideration is given to the transient formation of the space-charge wake and its steady-state structure.
Improving Genetic Algorithm with Fine-Tuned Crossover and Scaled Architecture
Directory of Open Access Journals (Sweden)
Ajay Shrestha
2016-01-01
Genetic Algorithm (GA) is a metaheuristic used in solving combinatorial optimization problems. Inspired by evolutionary biology, GA uses selection, crossover, and mutation operators to efficiently traverse the solution search space. This paper proposes nature-inspired fine-tuning of the crossover operator using the untapped idea of mitochondrial DNA (mtDNA). mtDNA is a small subset of the overall DNA. It differentiates itself by being inherited entirely from the female, while the rest of the DNA is inherited equally from both parents. This unique characteristic of mtDNA can be an effective mechanism to identify members with similar genes and restrict crossover between them. It can reduce the rate of dilution of diversity and result in delayed convergence. In addition, we scale the well-known Island Model, where instances of GA are run independently and population members exchanged periodically, to a Continental Model. In this model, multiple web services are executed, with each web service running an island model. We applied the concept of mtDNA in solving the Traveling Salesman Problem and to train a neural network for function approximation. Our implementation tests show that leveraging these new concepts of mtDNA and the Continental Model yields a relative improvement in the optimization quality of GA.
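The mtDNA-restricted crossover idea can be sketched by tagging each individual with a maternally inherited marker and refusing to recombine members that share it. The representation (dict individuals, one-point crossover, tag from the first parent) is an assumed simplification for illustration:

```python
import random

def make_individual(genes, mtdna):
    """An individual: a gene list plus a maternally inherited mtDNA tag."""
    return {"genes": genes, "mtdna": mtdna}

def crossover_allowed(a, b):
    """mtDNA-style restriction (illustrative): members sharing the same
    maternally inherited tag are assumed too similar to recombine."""
    return a["mtdna"] != b["mtdna"]

def crossover(a, b, rng):
    """One-point crossover; the mtDNA tag is inherited whole from parent a,
    mirroring the uniparental inheritance of real mtDNA."""
    assert crossover_allowed(a, b)
    cut = rng.randrange(1, len(a["genes"]))
    child_genes = a["genes"][:cut] + b["genes"][cut:]
    return make_individual(child_genes, a["mtdna"])
```

Blocking same-tag pairings keeps lineages from repeatedly recombining with near copies of themselves, which is the diversity-preserving effect the paper attributes to mtDNA.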
Tavakoli, Ruhollah
2010-01-01
The structure of many real-world optimization problems includes minimization of a nonlinear (or quadratic) functional subject to bound and singly linear constraints (in the form of either equality or bilateral inequality), commonly called continuous knapsack problems. Since there are efficient methods to solve large-scale bound-constrained nonlinear programs, it is desirable to adapt these methods to solve knapsack problems while preserving their efficiency and convergence theories. The goal of this paper is to introduce a general framework to extend a box-constrained optimization solver to solve knapsack problems. This framework includes two main ingredients, both O(n) methods in computational cost and required memory: the projection onto the knapsack constraints and the null-space manipulation of the related linear constraint. The main focus of this work is on the extension of the Hager-Zhang active set algorithm (SIAM J. Optim. 2006, pp. 526--557). The main reasons for this ch...
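The projection ingredient can be illustrated for the set {lo <= y <= hi, sum(y) = b}: the projection of x has the form y_i = clip(x_i - lam, lo_i, hi_i) for a scalar multiplier lam, which can be found by bisection (each evaluation is O(n)). This is a generic sketch under the feasibility assumption sum(lo) <= b <= sum(hi); the paper's exact O(n) method may differ:

```python
def project_knapsack(x, lo, hi, b, iters=100):
    """Euclidean projection of x onto {lo <= y <= hi, sum(y) = b} by
    bisection on the Lagrange multiplier of the linear constraint."""
    clip = lambda v, a, c: max(a, min(c, v))

    def total(lam):
        # sum of the clipped coordinates; nonincreasing in lam
        return sum(clip(xi - lam, l, h) for xi, l, h in zip(x, lo, hi))

    # bracket the multiplier: total(lam_lo) >= b >= total(lam_hi)
    lam_lo = min(xi - h for xi, h in zip(x, hi))
    lam_hi = max(xi - l for xi, l in zip(x, lo))
    for _ in range(iters):
        mid = 0.5 * (lam_lo + lam_hi)
        if total(mid) > b:
            lam_lo = mid
        else:
            lam_hi = mid
    lam = 0.5 * (lam_lo + lam_hi)
    return [clip(xi - lam, l, h) for xi, l, h in zip(x, lo, hi)]
```

Exact pivoting schemes find lam in expected O(n) time rather than by bisection, but the clipped-shift structure of the solution is the same.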
Institute of Scientific and Technical Information of China (English)
Lihui CEN; Yugeng XI
2008-01-01
By considering the flow control of urban sewer networks to minimize the electricity consumption of pumping stations, a decomposition-coordination strategy for energy savings based on network community division is developed in this paper. A mathematical model characterizing the steady-state flow of urban sewer networks is first constructed, consisting of a set of algebraic equations with the structural transportation capacities captured as constraints. Since sewer networks generally have no apparent natural hierarchical structure, it is very difficult to identify the clustered groups. A fast network division approach that calculates the betweenness of each edge is successfully applied to identify the groups, so that a sewer network with arbitrary configuration can be decomposed into subnetworks. By integrating the coupling constraints of the subnetworks, the original problem is separated into N optimization subproblems in accordance with the network decomposition. Each subproblem is solved locally, and the solutions to the subproblems are coordinated to form an appropriate global solution. Finally, an application to a specified large-scale sewer network is investigated to demonstrate the validity of the proposed algorithm.
Energy Technology Data Exchange (ETDEWEB)
Vasek, P. [Institute of Physics ASCR, Cukrovarnicka 10, 162 53 Prague 6 (Czech Republic)]. E-mail: vasek@fzu.cz; Shimakage, H. [KARC, National Institute of Information and Communication Technology, 588-2 Iwaoka, Kobe, 651-2492 (Japan); Wang, Z. [KARC, National Institute of Information and Communication Technology, 588-2 Iwaoka, Kobe, 651-2492 (Japan)
2004-09-15
The longitudinal and transverse voltages (resistances) have been measured for MgB2 in zero external magnetic field. Samples were prepared in the form of thin films and patterned into the usual Hall bar shape. In the close vicinity of the critical temperature Tc, a non-zero transverse resistance has been observed. Its dependence on the transport current has also been studied. A new scaling between the transverse and longitudinal resistivities has been observed, in the form ρxy ∼ dρxx/dT. Several models explaining the observed transverse resistance and the breaking of the reciprocity theorem are discussed. One of the most promising explanations is based on the idea of time-reversal symmetry violation.
Alawasa, Khaled Mohammad
Voltage-source converters (VSCs) have gained widespread acceptance in modern power systems. The stability and dynamics of power systems involving these devices have recently become salient issues. In the small-signal sense, the dynamics of a VSC-based system is dictated by its incremental output impedance, which is formed by a combination of 'passive' circuit components and 'active' control elements. Control elements such as control parameters, control loops, and control topologies play a significant role in shaping the impedance profile. Depending on the control schemes and strategies used, VSC-based systems can exhibit different incremental impedance dynamics. As the control elements and dynamics are involved in the impedance structure, the frequency-dependent output impedance might have a negative real part (i.e., a negative resistance). In the grid-connected mode, the negative resistance degrades the system damping and negatively impacts the stability. In high-voltage networks, where high-power VSC-based systems are usually employed and where sub-synchronous dynamics usually exist, integrating large VSC-based systems might reduce the overall damping and result in unstable dynamics. The objectives of this thesis are to (1) investigate and analyze the output impedance properties under different control strategies and control functions, (2) identify and characterize the key contributors to the impedance and sub-synchronous damping profiles, and (3) propose mitigation techniques to minimize and eliminate the negative impacts associated with integrating VSC-based systems into power systems. Different VSC configurations are considered in this thesis; in particular, the full-scale and partial-scale (doubly-fed induction generator) topologies are addressed. Additionally, the impedance and system damping profiles are studied under two different control strategies: the standard vector control strategy and the recently developed power synchronization control strategy.
National Research Council Canada - National Science Library
Sayed Mohammad Ebrahim Sahraeian; Byung-Jun Yoon
2013-01-01
.... We demonstrate that the proposed algorithm, called SMETANA, outperforms many state-of-the-art network alignment techniques, in terms of computational efficiency, alignment accuracy, and scalability...
Event-chain algorithm for the Heisenberg model: Evidence for z ≃1 dynamic scaling
Nishikawa, Yoshihiko; Michel, Manon; Krauth, Werner; Hukushima, Koji
2015-12-01
We apply the event-chain Monte Carlo algorithm to the three-dimensional ferromagnetic Heisenberg model. The algorithm is rejection-free and also realizes an irreversible Markov chain that satisfies global balance. The autocorrelation functions of the magnetic susceptibility and the energy indicate a dynamical critical exponent z ≈1 at the critical temperature, while that of the magnetization does not measure the performance of the algorithm. We show that the event-chain Monte Carlo algorithm substantially reduces the dynamical critical exponent from the conventional value of z ≃2 .
DEFF Research Database (Denmark)
Riaz, M. Tahir; Gutierrez Lopez, Jose Manuel; Pedersen, Jens Myrup
2011-01-01
The paper presents a hybrid Genetic and Simulated Annealing algorithm for implementing the Chordal Ring structure in an optical backbone network. In recent years, topologies based on regular graph structures have gained a lot of interest due to their good communication properties for the physical topology of the networks. Evolutionary algorithms have often been used to solve problems of a combinatorial nature that are extremely hard to solve by exact approaches. Both the Genetic and the Simulated Annealing algorithms use a controlled stochastic method to search for solutions. The paper combines the two algorithms in order to analyze the impact on implementation performance.
Liu, Rengli; Wang, Yanfei
2016-04-01
An extended nonlinear chirp scaling (NLCS) algorithm is proposed to process data from a highly squinted, high-resolution, missile-borne synthetic aperture radar (SAR) diving with a constant acceleration. Due to the complex diving movement, the traditional signal model and focusing algorithms are no longer suited for missile-borne SAR signal processing. Therefore, an accurate range equation is presented, named the equivalent hyperbolic range model (EHRM), which is more accurate and concise than the conventional fourth-order polynomial range equation. Based on the EHRM, a two-dimensional point target reference spectrum is derived, and an extended NLCS algorithm for missile-borne SAR image formation is developed. In the algorithm, a linear range walk correction is used to largely remove the range-azimuth cross-coupling, and an azimuth NLCS processing step is adopted to solve the azimuth space-variant focusing problem. Moreover, the operations of the proposed algorithm are carried out without any interpolation, resulting in a small computational load. Finally, simulation results and real-data processing results validate the proposed focusing algorithm.
Institute of Scientific and Technical Information of China (English)
谢开贵; 周平; 周家启; 孙渝江; 龙小平
2001-01-01
A reliability evaluation algorithm for medium-voltage radial distribution networks is proposed. The algorithm can handle relatively complex medium-voltage distribution systems that contain sub-feeders. It employs a forward-searching method to determine the area influenced by breaker action and a fault-spreading method to determine the fault isolation area, from which the failure type of each node can be determined. Given the failure types, the reliability indices of nodes, feeders, and the system can be calculated. The RBTS-Bus6, RBTS-Bus2, and other medium-voltage radial distribution networks, as well as many networks in actual operation, have been evaluated with the algorithm, verifying its effectiveness and practicality.
Serious injury prediction algorithm based on large-scale data and under-triage control.
Nishimoto, Tetsuya; Mukaigawa, Kosuke; Tominaga, Shigeru; Lubbe, Nils; Kiuchi, Toru; Motomura, Tomokazu; Matsumoto, Hisashi
2017-01-01
The present study was undertaken to construct an algorithm for an advanced automatic collision notification system based on national traffic accident data compiled by the Japanese police. While US research into the development of serious-injury prediction algorithms is based on a logistic regression algorithm using the National Automotive Sampling System/Crashworthiness Data System, the present injury prediction algorithm was based on comprehensive police data covering all accidents that occurred across Japan. The particular focus of this research is to improve the rescue of injured vehicle occupants in traffic accidents, and the present algorithm assumes the use of an onboard event data recorder, from which risk factors such as pseudo delta-V, vehicle impact location, seatbelt wearing or non-wearing, involvement in a single-impact or multiple-impact crash, and the occupant's age can be derived. As a result, a simple and handy algorithm suited for onboard vehicle installation was constructed from a sample of half of the available police data. The other half of the police data was used for validation testing of the new algorithm using receiver operating characteristic analysis. An additional validation was conducted using in-depth investigation of accident injuries in collaboration with prospective host emergency care institutes. The validated algorithm, named the TOYOTA-Nihon University algorithm, proved to be as useful as the US URGENCY and other existing algorithms. Furthermore, an under-triage control analysis found that the present algorithm could achieve an under-triage rate of less than 10% by setting a threshold of 8.3%.
Summary on Voltage Stability of Power Grid in Large-scale Wind Power Areas
Institute of Scientific and Technical Information of China (English)
林泽坤; 彭显刚; 武小梅; 欧英龙
2014-01-01
The rapid development of wind power increasingly threatens the voltage stability of power grids in wind power areas. This paper therefore surveys the voltage stability problem of power grids in large-scale wind power areas. It introduces wind power generation models, voltage change calculations for wind power integration, the low-voltage ride-through capability of wind turbine generators, and China's low-voltage ride-through requirements for wind turbine generators. It analyzes the factors affecting voltage stability in wind power areas and methods for assessing voltage stability, and discusses measures for maintaining the voltage stability of power grids with large-scale wind power. It suggests strengthening research on grid voltage stability in large-scale wind power areas, covering wind power forecasting, transient voltage waveforms at the wind power integration point under faults, countermeasures after wind turbine generators trip off the grid, and coordinated voltage control strategies between thermal and wind power, in order to ensure grid security in wind power areas.
Ren, Hongwu; Dekany, Richard; Britton, Matthew
2005-05-01
We propose a new recursive filtering algorithm for wave-front reconstruction in a large-scale adaptive optics system. An embedding step is used in this recursive filtering algorithm to permit fast methods to be used for wave-front reconstruction on an annular aperture. This embedding step can be used alone with a direct residual error updating procedure or used with the preconditioned conjugate-gradient method as a preconditioning step. We derive the Hudgin and Fried filters for spectral-domain filtering, using the eigenvalue decomposition method. Using Monte Carlo simulations, we compare the performance of discrete Fourier transform domain filtering, discrete cosine transform domain filtering, multigrid, and alternating-direction-implicit methods in the embedding step of the recursive filtering algorithm. We also simulate the performance of this recursive filtering in a closed-loop adaptive optics system.
Voltage Fluctuation Detection and Tracking Based on Adaptive Filtering Algorithm
Institute of Scientific and Technical Information of China (English)
王伟; 张广明; 王祥华
2011-01-01
With the development of industry, the problem of grid voltage fluctuation has become increasingly serious, and detecting and tracking voltage fluctuations has become an important task for power utilities. Based on adaptive filter theory, a new QR-decomposition recursive least squares algorithm is proposed. This algorithm not only improves numerical stability, but also retains the fast convergence of the traditional recursive least squares algorithm. Then, with the introduction of a systolic array, the Givens rotation steps in the QR decomposition are parallelized in a highly pipelined fashion, improving the execution efficiency and real-time performance of the algorithm. Finally, a simulation of power system voltage fluctuation detection and tracking is completed in Matlab, and the simulation results verify the feasibility and effectiveness of the presented method.
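The estimation task can be sketched with the textbook covariance-form RLS recursion. The paper's QR-decomposition (Givens rotation, systolic array) variant computes the same least-squares estimate with better numerical stability and parallelism; this simplified sketch does not reproduce those properties.

```python
import numpy as np

def rls(regressors, measurements, lam=0.99, delta=100.0):
    """Exponentially weighted recursive least squares (covariance form).

    Note: this is the classic textbook recursion; the QR-decomposition
    variant described above computes the same estimate with better
    numerical stability."""
    n = regressors.shape[1]
    w = np.zeros(n)
    P = delta * np.eye(n)
    for x, y in zip(regressors, measurements):
        Px = P @ x
        k = Px / (lam + x @ Px)          # gain vector
        w = w + k * (y - x @ w)          # update the estimate
        P = (P - np.outer(k, Px)) / lam  # covariance update
    return w

# Track the in-phase/quadrature components of a fluctuating 50 Hz voltage
fs, f = 5000.0, 50.0
t = np.arange(2000) / fs
X = np.column_stack([np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t)])
y = 1.5 * X[:, 0] - 0.7 * X[:, 1] \
    + 0.01 * np.random.default_rng(0).normal(size=t.size)
w = rls(X, y)   # should converge near [1.5, -0.7]
```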
Directory of Open Access Journals (Sweden)
R. Rajaram
2015-11-01
Full Text Available Network reconfiguration, a constrained nonlinear optimization problem, has been solved for loss minimization, load balancing, etc. over the last two decades using various heuristic evolutionary search algorithms such as binary particle swarm optimization and neuro-fuzzy techniques. The contribution of this paper lies in considering distributed generation: smaller power sources, such as solar photovoltaic cells or wind turbines, connected at the customer's rooftop. This new connection in the radial network has made the formerly unidirectional current flow bidirectional, increasing efficiency but sometimes reducing the stability of the system. The modified plant growth simulation algorithm is applied here successfully to minimize real power loss; it requires no barrier factors or crossover rates because the objectives and constraints are dealt with separately. The main advantage of this algorithm is its continuously guided search with a changing objective function: because the power from distributed generation varies continuously, the method can be applied to real-time applications with the required modifications. The algorithm is tested on a standard 33-bus radial distribution system for loss minimization, and the test results show that it is efficient and suitable for real-time applications.
A Scalable O(N) Algorithm for Large-Scale Parallel First-Principles Molecular Dynamics Simulations
Energy Technology Data Exchange (ETDEWEB)
Osei-Kuffuor, Daniel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Fattebert, Jean-Luc [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2014-01-01
Traditional algorithms for first-principles molecular dynamics (FPMD) simulations only gain a modest capability increase from current petascale computers, due to their O(N^{3}) complexity and their heavy use of global communications. To address this issue, we are developing a truly scalable O(N) complexity FPMD algorithm, based on density functional theory (DFT), which avoids global communications. The computational model uses a general nonorthogonal orbital formulation for the DFT energy functional, which requires knowledge of selected elements of the inverse of the associated overlap matrix. We present a scalable algorithm for approximately computing selected entries of the inverse of the overlap matrix, based on an approximate inverse technique, by inverting local blocks corresponding to principal submatrices of the global overlap matrix. The new FPMD algorithm exploits sparsity and uses nearest neighbor communication to provide a computational scheme capable of extreme scalability. Accuracy is controlled by the mesh spacing of the finite difference discretization, the size of the localization regions in which the electronic orbitals are confined, and a cutoff beyond which the entries of the overlap matrix can be omitted when computing selected entries of its inverse. We demonstrate the algorithm's excellent parallel scaling for up to O(100K) atoms on O(100K) processors, with a wall-clock time of O(1) minute per molecular dynamics time step.
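The selected-inverse idea, approximating diagonal entries of the inverse overlap matrix by inverting local principal submatrices, can be sketched in serial form with NumPy. The actual algorithm distributes these local inversions with nearest-neighbor communication; this toy version only shows why the approximation works when the inverse decays away from the diagonal.

```python
import numpy as np

def approx_inverse_diagonal(S, half_width=6):
    """Approximate diag(S^{-1}) by inverting, for each index i, the
    principal submatrix of S restricted to a local window around i.
    Accurate when S is banded and diagonally dominant, so that S^{-1}
    decays away from the diagonal (the regime the O(N) FPMD
    algorithm exploits via localized orbitals)."""
    n = S.shape[0]
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - half_width), min(n, i + half_width + 1)
        block = S[lo:hi, lo:hi]          # local principal submatrix
        out[i] = np.linalg.inv(block)[i - lo, i - lo]
    return out

# Banded, diagonally dominant toy "overlap matrix"
n = 40
S = 2.0 * np.eye(n) - 0.5 * np.eye(n, k=1) - 0.5 * np.eye(n, k=-1)
approx = approx_inverse_diagonal(S)
exact = np.diag(np.linalg.inv(S))
```

Each window inversion is O(1) work independent of N, so the whole sweep is O(N) and embarrassingly parallel.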
Unbalanced Voltage Compensation in Low Voltage Residential AC Grids
DEFF Research Database (Denmark)
Trintis, Ionut; Douglass, Philip; Munk-Nielsen, Stig
2016-01-01
This paper describes the design and test of a control algorithm for active front-end rectifiers that draw power from a residential AC grid to feed heat pump loads. The control algorithm is able to control the phase-to-neutral or phase-to-phase RMS voltages at the point of common coupling. The voltage control was evaluated with either active or reactive independent phase load current control. The control performance in field operation in a residential grid situated in Bornholm, Denmark was investigated for different use cases.
Chong Fan; Xushuai Chen; Lei Zhong; Min Zhou; Yun Shi; Yulin Duan
2017-01-01
A sub-block algorithm is usually applied in the super-resolution (SR) reconstruction of images because of limitations in computer memory. However, the sub-block SR images can hardly achieve seamless image mosaicking because of the uneven distribution of brightness and contrast among these sub-blocks. An effectively improved weighted Wallis dodging algorithm is proposed, exploiting the characteristic that the SR reconstructed images are grayscale images of the same size with overlapping regions. This ...
Energy Technology Data Exchange (ETDEWEB)
Jimenez, Edward Steven
2013-09-01
The goal of this work is to develop a fast computed tomography (CT) reconstruction algorithm based on graphics processing units (GPU) that achieves significant improvement over traditional central processing unit (CPU) based implementations. The main challenge in developing a CT algorithm that is capable of handling very large datasets is parallelizing the algorithm in such a way that data transfer does not hinder performance of the reconstruction algorithm. General-purpose GPU computing (GPGPU) is a new technology that the Science and Technology (S&T) community is starting to adopt in many fields where CPU-based computing is the norm. GPGPU programming requires a new approach to algorithm development that utilizes massively multi-threaded environments. Multi-threaded algorithms in general are difficult to optimize since performance bottlenecks occur that are non-existent in single-threaded algorithms, such as memory latencies. If an efficient GPU-based CT reconstruction algorithm can be developed, computational times could be improved by a factor of 20. Additionally, cost benefits will be realized as commodity graphics hardware could potentially replace expensive supercomputers and high-end workstations. This project will take advantage of the CUDA programming environment and attempt to parallelize the task in such a way that multiple slices of the reconstruction volume are computed simultaneously. This work will also take advantage of the GPU memory by utilizing asynchronous memory transfers, GPU texture memory, and (when possible) pinned host memory so that the memory transfer bottleneck inherent to GPGPU is amortized. Additionally, this work will take advantage of GPU-specific hardware (i.e. fast texture memory, pixel-pipelines, hardware interpolators, and varying memory hierarchy) that will allow for additional performance improvements.
Bandyopadhyay, Saptarshi
Multi-agent systems are widely used for constructing a desired formation shape, exploring an area, surveillance, coverage, and other cooperative tasks. This dissertation introduces novel algorithms in the three main areas of shape formation, distributed estimation, and attitude control of large-scale multi-agent systems. In the first part of this dissertation, we address the problem of shape formation for thousands to millions of agents. Here, we present two novel algorithms for guiding a large-scale swarm of robotic systems into a desired formation shape in a distributed and scalable manner. These probabilistic swarm guidance algorithms adopt an Eulerian framework, where the physical space is partitioned into bins and the swarm's density distribution over each bin is controlled using tunable Markov chains. In the first algorithm - Probabilistic Swarm Guidance using Inhomogeneous Markov Chains (PSG-IMC) - each agent determines its bin transition probabilities using a time-inhomogeneous Markov chain that is constructed in real-time using feedback from the current swarm distribution. This PSG-IMC algorithm minimizes the expected cost of the transitions required to achieve and maintain the desired formation shape, even when agents are added to or removed from the swarm. The algorithm scales well with a large number of agents and complex formation shapes, and can also be adapted for area exploration applications. In the second algorithm - Probabilistic Swarm Guidance using Optimal Transport (PSG-OT) - each agent determines its bin transition probabilities by solving an optimal transport problem, which is recast as a linear program. In the presence of perfect feedback of the current swarm distribution, this algorithm minimizes the given cost function, guarantees faster convergence, reduces the number of transitions for achieving the desired formation, and is robust to disturbances or damages to the formation. We demonstrate the effectiveness of these two proposed swarm
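The Eulerian guidance mechanism can be illustrated with a fixed Metropolis-Hastings chain whose stationary distribution is the desired bin occupancy. PSG-IMC additionally rebuilds the chain online from swarm feedback to minimize expected transition cost; this sketch omits that and only shows how a tunable Markov chain drives the swarm density toward a target distribution.

```python
import numpy as np

def metropolis_chain(target):
    """Column-stochastic transition matrix whose stationary
    distribution is `target`, via Metropolis-Hastings with a uniform
    proposal over all bins.  P[i, j] is the probability that an agent
    currently in bin j moves to bin i."""
    n = len(target)
    P = np.zeros((n, n))
    for j in range(n):
        for i in range(n):
            if i != j:
                P[i, j] = (1.0 / n) * min(1.0, target[i] / target[j])
        P[j, j] = 1.0 - P[:, j].sum()   # stay put with the leftover mass
    return P

target = np.array([0.4, 0.3, 0.2, 0.1])   # desired formation density
P = metropolis_chain(target)
dist = np.full(4, 0.25)                   # swarm starts uniformly spread
for _ in range(300):                      # each agent samples from column j
    dist = P @ dist
```

Because each agent only needs its own column of P and the target density, the scheme is distributed and independent of the number of agents.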
Institute of Scientific and Technical Information of China (English)
Liang Zhu
2014-01-01
Objective To investigate the image quality, radiation dose and diagnostic value of low-tube-voltage, high-pitch dual-source computed tomography (DSCT) with sinogram affirmed iterative reconstruction (SAFIRE) for non-enhanced abdominal and pelvic scans. Methods This institutional review board-approved prospective study included 64 patients who gave written informed consent for an additional abdominal and pelvic scan with DSCT in the period from November to December 2012. The patients underwent standard non-enhanced CT scans (protocol 1) [tube voltage of 120 kVp/pitch of 0.9/filtered back-projection (FBP) reconstruction] followed by high-pitch non-enhanced CT scans (protocol 2) (100 kVp/3.0/SAFIRE). The total scan time, mean CT number, signal-to-noise ratio (SNR), image quality, lesion detectability and radiation dose were compared between the two protocols. Results The total scan time of protocol 2 was significantly shorter than that of protocol 1 (1.4±0.1 seconds vs. 7.6±0.6 seconds, P …). Conclusion The high-pitch DSCT with SAFIRE can shorten scan time and reduce radiation dose while preserving image quality in non-enhanced abdominal and pelvic scans.
Institute of Scientific and Technical Information of China (English)
姜喜瑞; 贺之渊; 汤广福; 谢敏华; 刘栋
2013-01-01
This paper addresses the valve base control strategy for high-voltage, large-capacity modular multilevel converters (MMC), and a sub-module capacitor voltage balance control strategy using a tabu search optimization algorithm. In the context of high-voltage, large-capacity VSC-HVDC transmission systems, the paper analyzes the MMC topology and the related valve base control techniques. Criteria for the sub-module capacitor voltage balance algorithm are put forward, and the switching principle of sub-module pulse distribution is studied in depth. Based on a distributed system structure, a novel sub-module voltage balance model using the tabu search optimization algorithm is presented, applicable to VSC-HVDC valve base control. To set up the objective function and constraints, establish the state decision optimization model, and formulate the optimization procedure, this model considers various factors, such as sub-module switch-on/off state, energy fluctuations, and a quintuple information tree of key parameters. Simulation tests and off-line tests were carried out on a dynamic simulation platform employing PSCAD/EMTDC software. The test results showed that this approach was as good as nearest-level approximation in terms of functionality and performance reliability, thus providing theoretical support and an engineering basis.
Energy Technology Data Exchange (ETDEWEB)
Adams, Mark F [Department of Applied Physics and Applied Mathematics, Columbia University (United States); Ku, Seung-Hoe; Chang, C-S [Courant Institute of Mathematical Sciences, New York University (United States); Worley, Patrick; D'Azevedo, Ed [Computer Science and Mathematics Division, Oak Ridge National Laboratory (United States); Cummings, Julian C, E-mail: mark.adams@columbia.edu, E-mail: sku@cims.nyu.edu, E-mail: worleyph@ornl.gov, E-mail: dazevedoef@ornl.gov, E-mail: cummings@cacr.caltech.edu, E-mail: cschang@cims.nyu.edu [Center for Advanced Computing Research, California Institute of Technology (United States)
2009-07-01
Particle-in-cell (PIC) methods have proven to be effective in discretizing the Vlasov-Maxwell system of equations describing the core of toroidal burning plasmas for many decades. Recent physical understanding of the importance of edge physics for stability and transport in tokamaks has led to the development of the first fully toroidal edge PIC code, XGC1. The edge region poses special problems in meshing for PIC methods due to the lack of closed flux surfaces, which makes field-line following meshes and coordinate systems problematic. We present a solution to this problem with a semi-field-line-following mesh method in a cylindrical coordinate system. Additionally, modern supercomputers require highly concurrent algorithms and implementations, with all levels of the memory hierarchy being efficiently utilized to realize optimal code performance. This paper presents a mesh and particle partitioning method, suitable to our meshing strategy, for use on highly concurrent cache-based computing platforms.
Robust multi-scale clustering of large DNA microarray datasets with the consensus algorithm
DEFF Research Database (Denmark)
Grotkjær, Thomas; Winther, Ole; Regenberg, Birgitte
2006-01-01
Motivation: Hierarchical and relocation clustering (e.g. K-means and self-organizing maps) have been successful tools in the display and analysis of whole genome DNA microarray expression data. However, the results of hierarchical clustering are sensitive to outliers, and most relocation methods … analysis by collecting re-occurring clustering patterns in a co-occurrence matrix. The results show that consensus clustering obtained from clustering multiple times with Variational Bayes Mixtures of Gaussians or K-means significantly reduces the classification error rate for a simulated dataset. … The method is flexible and it is possible to find consensus clusters from different clustering algorithms. Thus, the algorithm can be used as a framework to test in a quantitative manner the homogeneity of different clustering algorithms. We compare the method with a number of state-of-the-art clustering …
Spanning tree-based algorithm for hydraulic simulation of large-scale water supply networks
Directory of Open Access Journals (Sweden)
Huan-feng DUAN
2010-03-01
Full Text Available With the purpose of making calculation more efficient in practical hydraulic simulations, an improved algorithm was proposed and applied in the practical water distribution field. The methodology was developed by extending the traditional loop-equation theory with the efficiency advantages of graph theory. The use of the spanning tree technique from graph theory makes the proposed algorithm efficient in calculation and simple to implement in computer code. The algorithms for topological generation and their practical implementation are presented in detail in this paper. In an application to a practical urban system, CPU time and memory consumption were reduced while accuracy was greatly enhanced compared with existing methods.
[An adaptive scaling hybrid algorithm for reduction of CT artifacts caused by metal objects].
Chen, Yu; Luo, Hai; Zhou, He-qin
2009-03-01
A new adaptive hybrid filtering algorithm is proposed to reduce the artifacts caused by metal in CT images. First, the projection data in the metal region are preprocessed and reconstructed by the filtered back projection (FBP) method. Then the expectation maximization (EM) algorithm is performed iteratively on the original metal projection data. Finally, a compensation procedure is applied to the reconstructed metal region. Simulation results demonstrate that the proposed algorithm can remove metal artifacts while effectively preserving the structural information of metal objects, ensuring that the tissues around the metal are not distorted. The method is also computationally efficient and effective for CT images containing several metal objects.
Directory of Open Access Journals (Sweden)
Angely Cárcamo-Gallardo
2007-04-01
Full Text Available This paper presents a novel algorithm to reconfigure an electric power distribution network (EPDN), minimizing its non-supplied energy (NSE). The EPDN is modeled using graph theory and the NSE is recursively formulated in terms of the reliability parameters of the EPDN. Based on this mathematical model, we transform the original optimization problem into the graph theory problem of finding the minimum spanning tree (MST) of a given graph, which models the EPDN. The distance metric employed by the searching algorithm is the NSE. In order to efficiently find the MST, Prim's algorithm is employed due to its greedy search behavior. In addition, a backtracking algorithm is used to check the MST obtained. The backtracking algorithm analyzes all the candidate topologies that were randomly discarded during the decision process. The performance of the optimization algorithm is evaluated using testing systems and two actual EPDNs.
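The MST search at the core of this reconfiguration approach can be sketched with Prim's algorithm. The numeric edge weights below are hypothetical stand-ins for the NSE-based distance metric; the backtracking check of discarded topologies is omitted.

```python
import heapq

def prim_mst(n, edges):
    """Prim's algorithm: greedily grow a minimum spanning tree from
    node 0.  `edges` maps (u, v) pairs to weights; in the paper the
    weight of a branch encodes its contribution to non-supplied
    energy."""
    adj = {u: [] for u in range(n)}
    for (u, v), w in edges.items():
        adj[u].append((w, u, v))
        adj[v].append((w, v, u))
    visited = {0}
    heap = adj[0][:]
    heapq.heapify(heap)
    tree, total = [], 0.0
    while heap and len(visited) < n:
        w, u, v = heapq.heappop(heap)   # cheapest edge leaving the tree
        if v in visited:
            continue
        visited.add(v)
        tree.append((u, v))
        total += w
        for e in adj[v]:
            if e[2] not in visited:
                heapq.heappush(heap, e)
    return tree, total

# Toy 5-node feeder: candidate switches with hypothetical NSE costs
edges = {(0, 1): 2.0, (0, 2): 4.0, (1, 2): 1.0, (1, 3): 7.0,
         (2, 4): 3.0, (3, 4): 5.0}
tree, cost = prim_mst(5, edges)
```

The resulting tree is the radial configuration of minimum total weight, which is exactly the structure a distribution feeder must keep after reconfiguration.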
Institute of Scientific and Technical Information of China (English)
Xin He; Gui-Hai Yan; Yin-He Han; Xiao-Wei Li
2016-01-01
The load power range of modern processors is greatly enlarged because many advanced power management techniques are employed, such as dynamic voltage frequency scaling, Turbo Boosting, and near-threshold voltage (NTV) technologies. However, because the efficiency of power delivery varies greatly with different load conditions, conventional power delivery designs cannot maintain high efficiency over the entire voltage spectrum, and the gained power saving may be offset by power loss in power delivery. We propose SuperRange, a wide operational range power delivery unit. SuperRange complements the power delivery capability of on-chip and off-chip voltage regulators. On top of SuperRange, we analyze its power conversion characteristics and propose a voltage regulator (VR) aware power management algorithm. Moreover, as more and more cores are integrated on a single chip, multiple SuperRange units can serve as basic building blocks to build, in a highly scalable way, a more powerful power delivery subsystem with larger power capacity. Experimental results show the SuperRange unit offers 1x and 1.3x higher power conversion efficiency (PCE) than two other conventional power delivery schemes at the NTV region and exhibits an average 70% PCE over the entire operational range. It also exhibits superior resilience for power-constrained systems.
Institute of Scientific and Technical Information of China (English)
DING Yingqiang; DU Liufeng; YANG Ting; SUN Yugeng
2009-01-01
Sensor localization is crucial for the configuration and applications of wireless sensor networks (WSN). A novel distributed localization algorithm, MDS-DC, is proposed for wireless sensor networks based on multidimensional scaling (MDS) and shortest-path distance correction. In MDS-DC, several local positioning regions with reasonable distribution are first constructed by an adaptive search algorithm, which ensures the merging of the local relative maps of adjacent local positioning regions and reduces the number of common nodes in the network. Then, based on the relationships between the estimated and actual distances of anchors, the distance estimation vectors of sensors around anchors are corrected in each local positioning region. During the computation of the local relative coordinates, an iterative process combining the classical MDS algorithm and the SMACOF algorithm is applied. Finally, the global relative positions or absolute positions of sensors are obtained by merging the relative maps of all local positioning regions. Simulation results show that MDS-DC has better performance in positioning precision, energy efficiency and robustness to range error, which can meet the requirements of applications for sensor localization in WSN.
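The classical MDS step used inside such localization schemes, recovering relative coordinates from a matrix of pairwise distances by double centering and eigendecomposition, can be sketched as follows; the SMACOF refinement and the merging of local maps are omitted.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical MDS: squared distances -> double-centred Gram matrix
    -> top eigenvectors give relative coordinates (up to rotation,
    reflection and translation)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centring matrix
    B = -0.5 * J @ (D ** 2) @ J               # Gram matrix of centred points
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dim]        # largest eigenvalues first
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

# Sensors at known planar positions; noiseless pairwise ranges as input
pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 2.0], [2.0, 1.5]])
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
rel = classical_mds(D)
D_rec = np.linalg.norm(rel[:, None, :] - rel[None, :, :], axis=2)
```

With noiseless Euclidean distances the recovered relative map reproduces every pairwise range exactly; anchors then pin down the absolute positions.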
Dong, Hao; Nie, Yu-Feng; Cui, Jun-Zhi; Wu, Ya-Tao
2015-09-01
We study the hyperbolic-parabolic equations with rapidly oscillating coefficients. The formal second-order two-scale asymptotic expansion solutions are constructed by the multiscale asymptotic analysis. In addition, we theoretically explain the importance of the second-order two-scale solution by the error analysis in the pointwise sense. The associated explicit convergence rates are also obtained. Then a second-order two-scale numerical method based on the Newmark scheme is presented to solve the equations. Finally, some numerical examples are used to verify the effectiveness and efficiency of the multiscale numerical algorithm we proposed. Project supported by the National Natural Science Foundation of China (Grant No. 11471262), the National Basic Research Program of China (Grant No. 2012CB025904), and the State Key Laboratory of Science and Engineering Computing and the Center for High Performance Computing of Northwestern Polytechnical University, China.
CSIR Research Space (South Africa)
Du Plessis, WP
2011-09-01
Full Text Available The use of the density-taper approach to initialise a genetic algorithm is shown to give excellent results in the synthesis of thinned arrays. This approach is shown to give better SLL values more consistently than using random values and difference...
Adaptive Variance Scaling in Continuous Multi-Objective Estimation-of-Distribution Algorithms
Bosman, P.A.N.; Thierens, D.; Thierens, D.
2007-01-01
Recent research into single-objective continuous Estimation-of-Distribution Algorithms (EDAs) has shown that when maximum-likelihood estimations are used for parametric distributions such as the normal distribution, the EDA can easily suffer from premature convergence. In this paper we argue that th
Adaptive Variance Scaling in Continuous Multi-Objective Estimation-of-Distribution Algorithms
P.A.N. Bosman (Peter); D. Thierens (Dirk); D. Thierens (Dirk)
2007-01-01
Recent research into single-objective continuous Estimation-of-Distribution Algorithms (EDAs) has shown that when maximum-likelihood estimations are used for parametric distributions such as the normal distribution, the EDA can easily suffer from premature convergence. In this paper we
Institute of Scientific and Technical Information of China (English)
崔凤仙; 刘阳
2011-01-01
Considering the uncertainty of the capacities and locations of medium-voltage distribution stations in distribution network planning, this paper proposes a planning method that takes the capacities and positions of the distribution stations, together with the network structure, conductor types and number of circuit lines, as variables, using a dual encoding scheme that combines integer encoding and real-valued matrix encoding. The integer encoding determines the network structure, conductor types and number of circuit lines, while the real-valued matrix encoding adjusts the loads carried by the virtual load points. The immune genetic algorithm operators for this planning task are designed, and the effectiveness of the algorithm is verified through a numerical example.
Zimoń, M. J.; Prosser, R.; Emerson, D. R.; Borg, M. K.; Bray, D. J.; Grinberg, L.; Reese, J. M.
2016-11-01
Filtering of particle-based simulation data can lead to reduced computational costs and enable more efficient information transfer in multi-scale modelling. This paper compares the effectiveness of various signal processing methods to reduce numerical noise and capture the structures of nano-flow systems. In addition, a novel combination of these algorithms is introduced, showing the potential of hybrid strategies to improve further the de-noising performance for time-dependent measurements. The methods were tested on velocity and density fields, obtained from simulations performed with molecular dynamics and dissipative particle dynamics. Comparisons between the algorithms are given in terms of performance, quality of the results and sensitivity to the choice of input parameters. The results provide useful insights on strategies for the analysis of particle-based data and the reduction of computational costs in obtaining ensemble solutions.
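As a minimal illustration of the kind of de-noising compared above, even a centred moving-average filter applied to a noisy synthetic field reduces the error relative to the underlying signal. The signal-processing methods the paper studies (and their hybrid combinations) are considerably more sophisticated; this is only a baseline.

```python
import numpy as np

def moving_average(signal, window=11):
    """Centred moving-average filter with edge padding: the simplest
    de-noising baseline one might compare against on particle data."""
    pad = window // 2
    padded = np.pad(signal, pad, mode="edge")
    kernel = np.ones(window) / window
    return np.convolve(padded, kernel, mode="valid")

# Synthetic "velocity field" sample with additive thermal-like noise
rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 500)
clean = np.sin(t)
noisy = clean + 0.2 * rng.normal(size=t.size)
smooth = moving_average(noisy)
```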
Institute of Scientific and Technical Information of China (English)
Jian Zhao; Na Zhang; Jian Jia; Huanwei Wang
2015-01-01
To address the need for robust digital watermarking in the field of copyright protection, a new digital watermarking algorithm in the non-subsampled contourlet transform (NSCT) domain is proposed. The sub-band with the largest energy after NSCT is selected for watermark embedding. The watermark is embedded into scale-invariant feature transform (SIFT) regions: during embedding, the initial region is divided into several annular sub-regions of equal area, and each watermark bit is embedded into one sub-region. Extensive simulation results and comparisons show that the algorithm achieves a good trade-off among invisibility, robustness and capacity, preserving image quality while effectively resisting common image processing, geometric and combined attacks, with a normalized similarity close to 1 in almost all cases.
Mandal, Sudip; Khan, Abhinandan; Saha, Goutam; Pal, Rajat K
2016-01-01
The accurate prediction of genetic networks using computational tools is one of the greatest challenges in the postgenomic era. The Recurrent Neural Network is one of the most popular but simple approaches to model network dynamics from time-series microarray data. To date, it has been successfully applied to computationally derive small-scale artificial and real-world genetic networks with high accuracy. However, it underperforms on large-scale genetic networks. Here, a new methodology is proposed in which a hybrid Cuckoo Search-Flower Pollination Algorithm is implemented with a Recurrent Neural Network. Cuckoo Search is used to search for the best combination of regulators, while the Flower Pollination Algorithm optimizes the model parameters of the Recurrent Neural Network formalism. Initially, the proposed method is tested on a benchmark large-scale artificial network for both noiseless and noisy data. The results show that the proposed methodology is capable of increasing the inference of correct regulations and decreasing false regulations to a high degree. Secondly, the proposed methodology has been validated against the real-world dataset of the DNA SOS repair network of Escherichia coli. However, the proposed method sacrifices computational time complexity in both cases due to the hybrid optimization process.
Analysis of Network Clustering Algorithms and Cluster Quality Metrics at Scale
Emmons, Scott; Gallant, Mike; Börner, Katy
2016-01-01
Notions of community quality underlie network clustering. While studies surrounding network clustering are increasingly common, a precise understanding of the relationship between different cluster quality metrics is unknown. In this paper, we examine the relationship between stand-alone cluster quality metrics and information recovery metrics through a rigorous analysis of four widely-used network clustering algorithms -- Blondel, Infomap, label propagation, and smart local moving. We consider the stand-alone quality metrics of modularity, conductance, and coverage, and we consider the information recovery metrics of adjusted Rand score, normalized mutual information, and a variant of normalized mutual information used in previous work. Our study includes both synthetic graphs and empirical data sets of sizes varying from 1,000 to 1,000,000 nodes. We find significant differences among the results of the different cluster quality metrics. For example, clustering algorithms can return a value of 0.4 out of 1 o...
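One of the information recovery metrics named above, the adjusted Rand score, can be computed directly from the pair-counting contingency of two labelings:

```python
from collections import Counter
from math import comb

def adjusted_rand_index(labels_a, labels_b):
    """Adjusted Rand score between two clusterings of the same nodes:
    pair-counting agreement, corrected for chance."""
    n = len(labels_a)
    pairs = Counter(zip(labels_a, labels_b))   # contingency-table cells
    a = Counter(labels_a)
    b = Counter(labels_b)
    index = sum(comb(c, 2) for c in pairs.values())
    sum_a = sum(comb(c, 2) for c in a.values())
    sum_b = sum(comb(c, 2) for c in b.values())
    expected = sum_a * sum_b / comb(n, 2)      # chance agreement
    max_index = (sum_a + sum_b) / 2
    return (index - expected) / (max_index - expected)
```

Identical clusterings score 1 regardless of how the cluster labels are named, while clusterings that disagree on most pairs can score at or below 0, which is what makes the metric comparable across algorithms.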
The DOHA algorithm: a new recipe for cotrending large-scale transiting exoplanet survey light curves
Mislis, D.; Pyrzas, S.; Alsubai, K. A.; Tsvetanov, Z. I.; Vilchez, N. P. E.
2017-03-01
We present DOHA, a new algorithm for cotrending photometric light curves obtained by transiting exoplanet surveys. The algorithm employs a novel approach to the traditional 'differential photometry' technique, by selecting the most suitable comparison star for each target light curve, using a two-step correlation search. Extensive tests on real data reveal that DOHA corrects both intra-night variations and long-term systematics affecting the data. Statistical studies conducted on a sample of ∼9500 light curves from the Qatar Exoplanet Survey reveal that DOHA-corrected light curves show an rms improvement of a factor of ∼2, compared to the raw light curves. In addition, we show that the transit detection probability in our sample can increase considerably, even up to a factor of 7, after applying DOHA.
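The two-step idea, selecting the comparison light curve most correlated with the target and then dividing it out, can be sketched on synthetic data. The single-pass correlation search and flux-ratio correction below are simplified stand-ins for DOHA's actual two-step procedure.

```python
import numpy as np

def best_comparison(target, candidates):
    """Step 1 (simplified): pick the candidate light curve with the
    highest Pearson correlation to the target."""
    corrs = [np.corrcoef(target, c)[0, 1] for c in candidates]
    return int(np.argmax(corrs))

# Synthetic light curves: the target and one candidate share a slow
# instrumental trend; a second candidate is uncorrelated.
rng = np.random.default_rng(2)
trend = 1.0 + 0.05 * np.sin(np.linspace(0.0, 3.0, 400))
target = trend * (1.0 + 0.002 * rng.normal(size=400))
comps = [trend * (1.0 + 0.002 * rng.normal(size=400)),
         1.0 + 0.002 * rng.normal(size=400)]
k = best_comparison(target, comps)
corrected = target / comps[k]   # step 2: differential photometry
```

Dividing by the well-matched comparison removes the shared systematics, so the corrected light curve has a much lower rms than the raw one, the effect quantified in the abstract.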
Cabaret, S; Coppier, H; Rachid, A; Barillère, R; CERN. Geneva. IT Department
2007-01-01
The GCS (Gas Control System) project team at CERN uses a Model Driven Approach with a framework -- UNICOS (UNified Industrial COntrol System) -- based on PLC (Programmable Logic Controller) and SCADA (Supervisory Control And Data Acquisition) technologies. The first UNICOS versions were able to provide a PID (Proportional Integral Derivative) controller, whereas the gas systems required more advanced control strategies. The MultiController is a new UNICOS object which provides the following advanced control algorithms: Smith Predictor, PFC (Predictive Function Control), RST* and GPC (Global Predictive Control). Its design is based on a monolithic entity with a global structure definition which is able to capture the desired set of parameters of any specific control algorithm supported by the object. The SCADA system -- PVSS -- supervises the MultiController operation. The PVSS interface provides users with a supervision faceplate; in particular, it links any MultiController with recipes: the GCS experts are ab...
Indian Academy of Sciences (India)
Sangeetha S; S Jeevananthan
2015-12-01
Genetic Algorithms (GAs) have long served the art of optimization. One such endeavor employs GA to determine the switching moments of a cascaded H-bridge seven-level inverter with equal DC sources. Evolutionary techniques have proved efficient at solving such problems, and GA achieves the objective through biological mimicry. The crossover property is exploited using Random 3-Point Neighbourhood Crossover (RPNC) and Multi Midpoint Selective Bit Neighbourhood Crossover (MMSBNC). This paper deals with solving the selective harmonic elimination (SHE) equations using a binary-coded GA with a knowledge-based neighbourhood multipoint crossover technique; the solutions directly give the switching moments of the multilevel inverter under consideration. Although previous root-finding techniques such as Newton-Raphson or resultant-based methods pursue the same goal, the proposed approach offers faster convergence, better program reliability and a wider range of solutions. With the algorithm implemented in Turbo C, the switching moments are calculated offline. The simulation results closely agree with the hardware results.
FANSe: an accurate algorithm for quantitative mapping of large scale sequencing reads.
Zhang, Gong; Fedyunin, Ivan; Kirchner, Sebastian; Xiao, Chuanle; Valleriani, Angelo; Ignatova, Zoya
2012-06-01
The most crucial step in data processing from high-throughput sequencing applications is the accurate and sensitive alignment of the sequencing reads to reference genomes or transcriptomes. The accurate detection of insertions and deletions (indels) and of errors introduced by the sequencing platform or by misreading of modified nucleotides is essential for the quantitative processing of RNA-based sequencing (RNA-Seq) datasets and for the identification of genetic variations and modification patterns. We developed a new, fast and accurate algorithm for nucleic acid sequence analysis, FANSe, with adjustable mismatch allowance settings and the ability to handle indels, to accurately and quantitatively map millions of reads to small or large reference genomes. It is a seed-based algorithm which uses the whole read information for mapping; high sensitivity and low ambiguity are achieved by using short and non-overlapping seeds. Furthermore, FANSe uses a hotspot score to prioritize the processing of highly probable matches, and implements a modified Smith-Waterman refinement with a reduced scoring matrix to accelerate the calculation without compromising sensitivity. The FANSe algorithm stably processes datasets from various sequencing platforms, masked or unmasked, and small or large genomes. It shows a remarkable coverage of low-abundance mRNAs, which is important for quantitative processing of RNA-Seq datasets.
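A minimal seed-based mapper illustrates the general strategy the abstract describes. The k-mer index, non-overlapping seeds, and whole-read mismatch verification below are a generic sketch, not FANSe's actual hotspot scoring or modified Smith-Waterman refinement; the reference string and the planted sequencing error are invented for the example.

```python
def build_index(ref, k=8):
    """Index every k-mer of the reference by its start positions."""
    idx = {}
    for i in range(len(ref) - k + 1):
        idx.setdefault(ref[i:i + k], []).append(i)
    return idx

def map_read(read, ref, idx, k=8, max_mismatch=2):
    """Map one read: look up non-overlapping seeds, then verify each
    candidate alignment over the whole read, keeping the best one."""
    candidates = set()
    for s in range(0, len(read) - k + 1, k):        # non-overlapping seeds
        for pos in idx.get(read[s:s + k], []):
            candidates.add(pos - s)
    best = None
    for start in candidates:
        if start < 0 or start + len(read) > len(ref):
            continue
        mm = sum(a != b for a, b in zip(read, ref[start:start + len(read)]))
        if mm <= max_mismatch and (best is None or mm < best[1]):
            best = (start, mm)
    return best                                     # (position, mismatches)

ref = "TTGACCAGTCAGGCATTCGGAACTGTCCATGGCAT"
read = ref[5:25]
read = read[:12] + "A" + read[13:]                  # one sequencing error
print(map_read(read, ref, build_index(ref)))        # -> (5, 1)
```

Because one seed stays error-free, the read is still anchored at the right position, and the full-length comparison then counts the single mismatch.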
AN EFFECTIVE CONTINUOUS ALGORITHM FOR APPROXIMATE SOLUTIONS OF LARGE SCALE MAX-CUT PROBLEMS
Institute of Scientific and Technical Information of China (English)
Cheng-xian Xu; Xiao-liang He; Feng-min Xu
2006-01-01
An effective continuous algorithm is proposed to find approximate solutions of NP-hard max-cut problems. The algorithm relaxes the max-cut problem into a continuous nonlinear programming problem by replacing the n discrete constraints in the original problem with one single continuous constraint. A feasible direction method is designed to solve the resulting nonlinear programming problem. The method employs only gradient evaluations of the objective function; no matrix calculations and no line searches are required. This greatly reduces the calculation cost of the method and makes it suitable for solving large max-cut problems. The convergence properties of the proposed method to KKT points of the nonlinear program are analyzed. If the solution obtained by the proposed method is a global solution of the nonlinear programming problem, it provides an upper bound on the max-cut value. An approximate solution to the max-cut problem is then generated from the solution of the nonlinear program and provides a lower bound on the max-cut value. Numerical experiments and comparisons on max-cut test problems (small and large) show that the proposed algorithm obtains exact solutions for all small test problems and well-satisfying solutions for most of the large test problems at lower calculation cost.
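The core idea, replacing the n discrete constraints with one sphere constraint and moving along gradient directions only, can be sketched as follows. The plain projected-gradient loop below is a stand-in for the paper's feasible direction method, and the 4-cycle test graph is our own toy example.

```python
import numpy as np

def relaxed_maxcut(W, iters=300, step=0.1, seed=0):
    """Sketch of the single-constraint relaxation: the n binary constraints
    x_i in {-1, +1} are replaced by one sphere constraint ||x||^2 = n, and
    the cut objective is improved by a gradient step followed by projection
    back onto the sphere. Only gradient evaluations (W @ x) are used."""
    n = W.shape[0]
    x = np.random.default_rng(seed).normal(size=n)
    x *= np.sqrt(n) / np.linalg.norm(x)
    for _ in range(iters):
        x = x - step * (W @ x)          # descend on x^T W x (raises the cut)
        x *= np.sqrt(n) / np.linalg.norm(x)
    return np.sign(x)                   # round to a feasible +/-1 cut

def cut_value(W, x):
    return 0.25 * np.sum(W * (1.0 - np.outer(x, x)))

# 4-cycle: the optimal cut separates alternating vertices and cuts all 4 edges
W = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    W[i, j] = W[j, i] = 1.0
x = relaxed_maxcut(W)
print(cut_value(W, x))                  # 4.0 for the optimal cut
```

The rounded cut supplies the lower bound mentioned in the abstract, while the continuous optimum (before rounding) supplies the upper bound.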
Energy Technology Data Exchange (ETDEWEB)
Buls, Nico; Gompel, Gert van; Nieboer, Koenraad; Willekens, Inneke; Mey, Johan de [Universitair Ziekenhuis Brussel (UZ Brussel), Department of Radiology, Brussels (Belgium); Vrije Universiteit Brussel (VUB), Research group LABO, Brussel (Belgium); Cauteren, Toon van [Vrije Universiteit Brussel (VUB), Research group LABO, Brussel (Belgium); Verfaillie, Guy [Universitair Ziekenhuis Brussel (UZ Brussel), Department of Radiology, Brussels (Belgium); Evans, Paul; Macholl, Sven; Newton, Ben [GE Healthcare, Department of Medical Diagnostics, Amersham, Buckinghamshire (United Kingdom)
2015-04-01
To assess image quality in abdominal CT at low tube voltage combined with two types of iterative reconstruction (IR) at four reduced contrast agent dose levels. Minipigs were scanned with standard 320 mg I/mL contrast concentration at 120 kVp, and with reduced formulations of 120, 170, 220 and 270 mg I/mL at 80 kVp with IR. Image quality was assessed by CT value and by dose-normalized contrast-to-noise and signal-to-noise ratios (CNRD and SNRD) in the arterial and venous phases. Qualitative analysis was included by expert reading. Protocols with 170 mg I/mL or higher showed equal or superior CT values: aorta (278-468 HU versus 314 HU); portal vein (205-273 HU versus 208 HU); liver parenchyma (122-146 HU versus 115 HU). In the aorta, all protocols with 170 mg I/mL or higher yielded equal or superior CNRD (15.0-28.0 versus 13.7). In liver parenchyma, all study protocols resulted in higher SNRDs. Radiation dose could be reduced from the standard CTDI_vol = 7.8 mGy (6.2 mSv) to 7.6 mGy (5.2 mSv) with 170 mg I/mL. Combining 80 kVp with IR allows at least a 47 % contrast agent dose reduction and a 16 % radiation dose reduction for images of comparable quality. (orig.)
Institute of Scientific and Technical Information of China (English)
杨银国; 林舜江; 欧阳逸风; 刘明波; 温柏坚; 辛拓
2013-01-01
In the Matlab environment, dynamic time-domain simulation of a large-scale power grid containing a three-level voltage control system is implemented using the Power System Analysis Toolbox (PSAT), and the influence of the three-level voltage control system on the transient voltage security of the Guangdong power grid is analyzed. The results show that the emergency action mode of secondary voltage control within the three-level scheme contributes to the post-fault transient voltage recovery of load buses in the Guangdong grid, but cannot prevent the occurrence of transient voltage insecurity. An optimal emergency load-shedding control model is therefore established for faults causing transient voltage insecurity; the trajectory sensitivity method is used to convert the dynamic optimization model into a linear programming model from which the emergency load-shedding control strategy is obtained. Dynamic simulation verifies that the obtained strategy restores the transient voltage security of the Guangdong power grid after severe faults.
Directory of Open Access Journals (Sweden)
TRIFINA, L.
2011-02-01
Full Text Available This paper analyzes the influence of the extrinsic information scaling coefficient on a double-iterative decoding algorithm for space-time turbo codes with a large number of antennas. The max-log-APP algorithm is used, scaling both the extrinsic information in the turbo decoder and that used at the input of the interference-cancelling block. Scaling coefficients of 0.7 or 0.75 yield a 0.5 dB coding gain compared to the no-scaling case, for one or more iterations of spatial interference cancellation.
Accurate Switched-Voltage voltage averaging circuit
金光, 一幸; 松本, 寛樹
2006-01-01
This paper proposes an accurate Switched-Voltage (SV) voltage averaging circuit, presented to compensate for NMOS mismatch error in a MOS differential-type voltage averaging circuit. The proposed circuit consists of a voltage averaging circuit and an SV sample/hold (S/H) circuit, and operates using non-overlapping three-phase clocks. The performance of the circuit is verified by PSpice simulations.
Implementation Issues for Algorithmic VLSI (Very Large Scale Integration) Processor Arrays.
1984-10-01
Analyses of the various algorithms are described in Appendices 5.A, 5.B and 5.C. A note on notation: following Ottmann et al. [40], the variable n is used... [Only fragments of a comparison table of dictionary-machine designs survive: the Ottmann design (redundant operations OK, up to n wasted processors, X-tree topology) and the Atallah design (log n levels, redundant operations OK, up to n wasted processors).] ... Journal of the Association for Computing Machinery 14(2):203-241, April 1967. [40] Thomas A. Ottmann, Arnold L. Rosenberg and Larry J. Stockmeyer. A dictionary machine (for VLSI
High-Level Topology-Oblivious Optimization of MPI Broadcast Algorithms on Extreme-Scale Platforms
Hasanov, Khalid
2014-01-01
There has been significant research on collective communication operations, in particular MPI broadcast, on distributed-memory platforms. Most existing work optimizes the collective operations for particular architectures by taking into account either their topology or platform parameters. In this work we propose a very simple and at the same time general approach to optimizing legacy MPI broadcast algorithms, which are widely used in MPICH and OpenMPI. Theoretical analysis and experimental results on an IBM BlueGene/P and a cluster of the Grid'5000 platform are presented.
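A toy cost model illustrates why a two-level (hierarchical) reorganization of a legacy broadcast can pay off. The latency counts below are our own simplification, not the paper's analysis: the p ranks are split into groups of g, a legacy broadcast runs among the group leaders, and then one runs concurrently inside each group.

```python
import math

def hierarchical_cost(p, g, cost):
    """Two-level broadcast cost sketch: a legacy broadcast among the p//g
    group leaders, then, concurrently, a legacy broadcast inside each
    group of g ranks (so only one in-group cost term is paid)."""
    assert p % g == 0
    return cost(p // g) + cost(g)

linear = lambda p: p - 1                        # linear-tree broadcast rounds
binomial = lambda p: math.ceil(math.log2(p))    # binomial-tree broadcast rounds

p = 64
print(linear(p), hierarchical_cost(p, 8, linear))      # 63 -> 14 rounds
print(binomial(p), hierarchical_cost(p, 8, binomial))  # 6 -> 6 (no gain here)
```

The comparison shows the gain depends on the underlying legacy algorithm: a linear-cost broadcast improves dramatically, while an already-logarithmic one does not, which is why the approach targets legacy implementations.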
An Online Scheduling Algorithm with Advance Reservation for Large-Scale Data Transfers
Energy Technology Data Exchange (ETDEWEB)
Balman, Mehmet; Kosar, Tevfik
2010-05-20
Scientific applications and experimental facilities generate massive data sets that need to be transferred to remote collaborating sites for sharing, processing, and long-term storage. In order to support increasingly data-intensive science, next-generation research networks have been deployed to provide high-speed on-demand data access between collaborating institutions. In this paper, we present a practical model for online data scheduling in which data movement operations are scheduled in advance for end-to-end high-performance transfers. In our model, the data scheduler interacts with reservation managers and data transfer nodes in order to reserve available bandwidth, guaranteeing completion of accepted jobs that are confirmed to satisfy the preferred time constraint given by the user. Our methodology improves current systems by allowing researchers and higher-level meta-schedulers to use data placement as a service, where they can plan ahead and reserve scheduler time in advance for their data movement operations. We have implemented our algorithm and examined possible techniques for incorporation into current reservation frameworks. Performance measurements confirm that the proposed algorithm is efficient and scalable.
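The advance-reservation idea can be sketched with a discretized bandwidth timeline. The slot model and admission rule below are our simplification of the scheduler described in the abstract: a job is admitted only if a window of spare bandwidth exists between its start time and deadline, and the booked bandwidth is then held for it.

```python
def try_reserve(timeline, capacity, size, start, deadline, rate):
    """Advance-reservation sketch: find the earliest window in
    [start, deadline) with `rate` spare bandwidth for long enough to move
    `size` units, book it, and return its start slot (None = reject)."""
    slots_needed = -(-size // rate)            # ceil division
    for t in range(start, deadline - slots_needed + 1):
        window = timeline[t:t + slots_needed]
        if all(used + rate <= capacity for used in window):
            for u in range(t, t + slots_needed):
                timeline[u] += rate            # commit the reservation
            return t
    return None                                # cannot meet the deadline

timeline = [0] * 10                            # bandwidth in use per slot
print(try_reserve(timeline, capacity=10, size=40, start=0, deadline=10, rate=10))
print(try_reserve(timeline, capacity=10, size=20, start=0, deadline=10, rate=10))
print(try_reserve(timeline, capacity=10, size=20, start=0, deadline=6, rate=5))
```

The first job books slots 0-3, the second is pushed to slot 4 by the existing reservation, and the third is rejected because its deadline cannot be met, which is exactly the admit-or-reject contract the abstract's scheduler offers to users.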
Directory of Open Access Journals (Sweden)
Wei Yang
2016-12-01
Full Text Available Based on the terrain observation by progressive scans (TOPS) mode, an efficient full-aperture image formation algorithm for focusing wide-swath spaceborne TOPS data is proposed. First, to overcome the Doppler frequency spectrum aliasing caused by azimuth antenna steering, the range-independent derotation operation is adopted, and the signal properties after derotation are derived in detail. Then, the azimuth deramp operation is performed to resolve image folding in azimuth. The traditional deramp function introduces a time shift, resulting in the appearance of ghost targets and reduced azimuth resolution at the scene edge, especially in the wide-swath coverage case. To avoid this, a novel solution is provided using a modified range-dependent deramp function combined with the chirp-z transform. Moreover, range scaling and azimuth scaling are performed to provide the same azimuth and range sampling intervals for all sub-swaths, instead of an interpolation operation for the sub-swath image mosaic. Simulation results are provided to validate the proposed algorithm.
How Small Can Impact Craters Be Detected at Large Scale by Automated Algorithms?
Bandeira, L.; Machado, M.; Pina, P.; Marques, J. S.
2013-12-01
The last decade has seen a widespread publication of crater detection algorithms (CDA) with increasing detection performance. The adaptive nature of some of the algorithms [1] has permitted their use in the construction or update of global catalogues for Mars and the Moon. Nevertheless, the smallest craters detected in these situations by CDA are 10 pixels in diameter (about 2 km in MOC-WA images) [2], or down to 16 pixels or 200 m in HRSC imagery [3]. The availability of Martian images with metric (HRSC and CTX) and centimetric (HiRISE) resolutions is unveiling craters not perceived before, so automated approaches seem a natural way of detecting the myriad of these structures. In this study we present our efforts, based on our previous algorithms [2-3] and new training strategies, to push the automated detection of craters to a dimensional threshold as close as possible to the detail that can be perceived in the images, something that has not yet been addressed in a systematic way. The approach is based on the selection of candidate regions of the images (portions that contain crescent highlight and shadow shapes indicating the possible presence of a crater) using mathematical morphology operators (connected operators of different sizes), and on the extraction of texture features (Haar-like) and classification by AdaBoost into crater and non-crater. This is a supervised approach, meaning that a training phase, in which manually labelled samples are provided, is necessary so the classifier can learn what crater and non-crater structures are. The algorithm is intensively tested on Martian HiRISE images from different locations on the planet, in order to cover the largest range of surface types from the geological point of view (different ages and crater densities) and also from the imaging or textural perspective (different degrees of smoothness/roughness). The quality of the detections obtained is clearly dependent on the dimension of the craters
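The Haar-like texture features fed to AdaBoost are cheap to evaluate from an integral image, which is what makes scanning millions of candidate regions tractable. The two-rectangle left-right feature below is an illustrative geometry (roughly matching the crescent highlight/shadow cue), not the paper's exact feature set.

```python
import numpy as np

def integral_image(img):
    """Summed-area table: any rectangle sum then costs four lookups."""
    return np.cumsum(np.cumsum(img, axis=0), axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Inclusive rectangle sum [r0..r1] x [c0..c1] from the integral image."""
    total = ii[r1, c1]
    if r0 > 0: total -= ii[r0 - 1, c1]
    if c0 > 0: total -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0: total += ii[r0 - 1, c0 - 1]
    return total

def haar_left_right(ii, r0, c0, r1, c1):
    """Two-rectangle Haar-like feature: bright-left minus dark-right, a
    crude proxy for the highlight/shadow contrast across a crater rim."""
    cm = (c0 + c1) // 2
    return rect_sum(ii, r0, c0, r1, cm) - rect_sum(ii, r0, cm + 1, r1, c1)

img = np.zeros((4, 4))
img[:, :2] = 1.0                          # bright left half, dark right half
ii = integral_image(img)
print(haar_left_right(ii, 0, 0, 3, 3))    # strong left-right contrast: 8.0
```

A boosted classifier then thresholds many such feature responses; a uniform patch scores 0 on this feature, while an illuminated rim produces a large response.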
Michel, D.
2015-10-20
The WACMOS-ET project has compiled a forcing data set covering the period 2005–2007 that aims to maximize the exploitation of European Earth Observations data sets for evapotranspiration (ET) estimation. The data set was used to run 4 established ET algorithms: the Priestley–Taylor Jet Propulsion Laboratory model (PT-JPL), the Penman–Monteith algorithm from the MODIS evaporation product (PM-MOD), the Surface Energy Balance System (SEBS) and the Global Land Evaporation Amsterdam Model (GLEAM). In addition, in-situ meteorological data from 24 FLUXNET towers was used to force the models, with results from both forcing sets compared to tower-based flux observations. Model performance was assessed across several time scales using both sub-daily and daily forcings. The PT-JPL model and GLEAM provide the best performance for both satellite- and tower-based forcing as well as for the considered temporal resolutions. Simulations using the PM-MOD were mostly underestimated, while the SEBS performance was characterized by a systematic overestimation. In general, all four algorithms produce the best results in wet and moderately wet climate regimes. In dry regimes, the correlation and the absolute agreement to the reference tower ET observations were consistently lower. While ET derived with in situ forcing data agrees best with the tower measurements (R^{2} = 0.67), the agreement of the satellite-based ET estimates is only marginally lower (R^{2} = 0.58). Results also show similar model performance at daily and sub-daily (3-hourly) resolutions. Overall, our validation experiments against in situ measurements indicate that there is no single best-performing algorithm across all biome and forcing types. An extension of the evaluation to a larger selection of 85 towers (model inputs re-sampled to a common grid to facilitate global estimates) confirmed the original findings.
Sayadi, Taraneh; Schmid, Peter J.
2016-10-01
Many fluid flows of engineering interest, though very complex in appearance, can be approximated by low-order models governed by a few modes able to capture the dominant behavior (dynamics) of the system. This feature has fueled the development of various methodologies aimed at extracting dominant coherent structures from the flow. Some of the more general techniques are based on data-driven decompositions, most of which rely on performing a singular value decomposition (SVD) of a formulated snapshot (data) matrix. The amount of experimentally or numerically generated data expands as more detailed experimental measurements and increased computational resources become readily available. Consequently, the data matrix to be processed will consist of far more rows than columns, resulting in a so-called tall-and-skinny (TS) matrix. Ultimately, the SVD of such a TS data matrix can no longer be performed on a single processor, and parallel algorithms are necessary. The present study employs the parallel TSQR algorithm of Demmel et al. (SIAM J Sci Comput 34(1):206-239, 2012), which is further used as the basis of the underlying parallel SVD. This algorithm is shown to scale well on machines with a large number of processors and, therefore, allows the decomposition of very large datasets. In addition, the simplicity of its implementation and the minimal required communication make it suitable for integration in existing numerical solvers and data decomposition techniques. Examples that demonstrate the capabilities of highly parallel data decomposition algorithms include transitional processes in compressible boundary layers without and with induced flow separation.
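The TSQR idea reduces a tall-and-skinny QR to independent block QRs plus one small QR of the stacked R factors. Below is a one-level serial sketch (the real algorithm arranges this reduction as a parallel tree over processors, and the SVD is then obtained from the small R).

```python
import numpy as np

def tsqr_r(A, blocks=4):
    """One-level TSQR sketch: QR-factor row blocks independently (these
    would run in parallel), stack the small R factors, and QR-factor the
    stack. Returns the n x n R factor of the tall matrix A."""
    Rs = [np.linalg.qr(blk)[1] for blk in np.array_split(A, blocks)]
    return np.linalg.qr(np.vstack(Rs))[1]

rng = np.random.default_rng(1)
A = rng.normal(size=(1000, 5))        # tall-and-skinny snapshot matrix
R = tsqr_r(A)
# R is unique only up to row signs, so compare via the Gram identity
# A^T A = R^T R rather than entrywise.
print(np.allclose(R.T @ R, A.T @ A))
```

Each block QR touches only local rows, and only the tiny n x n R factors are communicated, which is the "minimal required communication" property the abstract highlights; the singular values of A are then recoverable as the singular values of R.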
Directory of Open Access Journals (Sweden)
S. Jess
2011-03-01
Full Text Available Cloud properties are usually assumed to be homogeneous within the cloudy part of the grid-box, i.e. subgrid-scale inhomogeneities in cloud cover and/or microphysical properties are often neglected. However, precipitation formation is initiated by large particles. Thus mean values are not representative and could lead to a delayed onset of precipitation.
For a more physical description of the subgrid-scale structure of clouds we introduce a new statistical sub-column algorithm to study the impact of cloud inhomogeneities on stratiform precipitation. Each model column is divided into N independent sub-columns with sub-boxes in each layer, which are completely clear or cloudy. The cloud cover is distributed over the sub-columns depending on the diagnosed cloud fraction. Mass and number concentrations of cloud droplets and ice crystals are distributed randomly over the cloudy sub-columns according to prescribed probability distributions. Shapes and standard deviations of the distributions are obtained from aircraft observations.
We have implemented this sub-column algorithm into the ECHAM5 global climate model to take subgrid variability of cloud cover and microphysical properties into account. Simulations with the Single Column Model version of ECHAM5 were carried out for one period of the Mixed-Phase Polar Arctic Cloud Experiment (MPACE) campaign as well as for the Eastern Pacific Investigation of climate Processes (EPIC) campaign. Results with the new algorithm show an earlier onset of precipitation for the EPIC campaign and a higher conversion of liquid to ice for the MPACE campaign, which reduces the liquid water path, in better agreement with the observations than the original version of the ECHAM5 model.
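The sub-column generator described above can be sketched as follows. The lognormal in-cloud distribution, its parameters, and the random placement are illustrative stand-ins for the aircraft-fitted distributions used in the paper; the point is that each sub-box is either fully clear or fully cloudy, with the cloudy count matching the diagnosed cloud fraction.

```python
import numpy as np

def make_subcolumns(cloud_fraction, n_sub, lwc_median, lwc_sigma, rng):
    """Statistical sub-column sketch for one model layer: n_sub sub-boxes,
    each completely clear or cloudy; cloudy boxes get in-cloud liquid water
    drawn from a prescribed lognormal (median lwc_median, log-std lwc_sigma)."""
    n_cloudy = int(round(cloud_fraction * n_sub))
    cloudy = rng.permutation(n_sub) < n_cloudy        # random placement
    in_cloud = rng.lognormal(np.log(lwc_median), lwc_sigma, n_sub)
    return cloudy, np.where(cloudy, in_cloud, 0.0)

rng = np.random.default_rng(0)
cloudy, lwc = make_subcolumns(0.3, 100, lwc_median=0.2, lwc_sigma=0.5, rng=rng)
print(cloudy.sum(), (lwc[cloudy] > 0).all())
```

Because precipitation formation is nonlinear in the water content, feeding a process scheme the full distribution of sub-column values rather than the grid-box mean is what produces the earlier precipitation onset the abstract reports.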
Directory of Open Access Journals (Sweden)
Ye. V. Dmitriev
2006-01-01
Full Text Available On the basis of the developed device for protection against ferro-resonant and high-frequency cumulative over-voltages, an algorithm for obtaining a voltage imitating ferro-resonant over-voltages is proposed in the paper. The algorithm presupposes applying a voltage to the secondary transformer side from an extraneous source.
Combining soft decision algorithms and scale-sequential hypotheses pruning for object recognition
Energy Technology Data Exchange (ETDEWEB)
Kumar, V.P.; Manolakos, E.S. [Northeastern Univ., Boston, MA (United States)
1996-12-31
This paper describes a system that exploits the synergy of Hierarchical Mixture Density (HMD) estimation with multiresolution-decomposition-based hypothesis pruning to efficiently perform joint segmentation and labeling of partially occluded objects in images. First we present the overall structure of the HMD estimation algorithm in the form of a recurrent neural network which generates the posterior probabilities of the various hypotheses associated with the image. Then, in order to reduce the large memory and computation requirements, we propose a hypothesis pruning scheme making use of the orthonormal discrete wavelet transform for dimensionality reduction. We provide an intuitive justification for the validity of this scheme and present experimental results and performance analysis on real and synthetic images to verify our claims.
Puzzle Imaging: Using Large-Scale Dimensionality Reduction Algorithms for Localization.
Directory of Open Access Journals (Sweden)
Joshua I Glaser
Full Text Available Current high-resolution imaging techniques require an intact sample that preserves spatial relationships. We here present a novel approach, "puzzle imaging," that allows imaging a spatially scrambled sample. This technique takes many spatially disordered samples, and then pieces them back together using local properties embedded within the sample. We show that puzzle imaging can efficiently produce high-resolution images using dimensionality reduction algorithms. We demonstrate the theoretical capabilities of puzzle imaging in three biological scenarios, showing that (1) relatively precise 3-dimensional brain imaging is possible; (2) the physical structure of a neural network can often be recovered based only on the neural connectivity matrix; and (3) a chemical map could be reproduced using bacteria with chemosensitive DNA and conjugative transfer. The ability to reconstruct scrambled images promises to enable imaging based on DNA sequencing of homogenized tissue samples.
Directory of Open Access Journals (Sweden)
Jihoon Oh
2017-09-01
Full Text Available Classification and prediction of suicide attempts in high-risk groups is important for preventing suicide. The purpose of this study was to investigate whether the information from multiple clinical scales has classification power for identifying actual suicide attempts. Patients with depression and anxiety disorders (N = 573) were included, and each participant completed 31 self-report psychiatric scales and questionnaires about their history of suicide attempts. We then trained an artificial neural network classifier with 41 variables (31 psychiatric scales and 10 sociodemographic elements) and ranked the contribution of each variable for the classification of suicide attempts. To evaluate the clinical applicability of our model, we measured classification performance with top-ranked predictors. Our model had an overall accuracy of 93.7% in 1-month, 90.8% in 1-year, and 87.4% in lifetime suicide attempts detection. The area under the receiver operating characteristic curve (AUROC) was the highest for 1-month suicide attempts detection (0.93), followed by lifetime (0.89) and 1-year detection (0.87). Among all variables, the Emotion Regulation Questionnaire had the highest contribution, and the positive and negative characteristics of the scales similarly contributed to classification performance. Performance on suicide attempts classification was largely maintained when we only used the top five ranked variables for training (AUROC: 1-month, 0.75; 1-year, 0.85; lifetime suicide attempts detection, 0.87). Our findings indicate that information from self-report clinical scales can be useful for the classification of suicide attempts. Based on the reliable performance of the top five predictors alone, this machine learning approach could help clinicians identify high-risk patients in clinical settings.
Institute of Scientific and Technical Information of China (English)
何智鹏; 许建中; 苑宾; 赵成勇; 彭茂兰
2015-01-01
Equalization of sub-module capacitor voltages is an important guarantee for the normal operation of the modular multilevel converter (MMC). For MMC-based high-voltage direct current (MMC-HVDC) systems with a large number of submodules per arm, cutting down the complexity of the sorting process is of great significance for reducing both the controller design difficulty and the hardware requirements of MMC-HVDC projects. An optimized mixed sorting method is presented based on the prime-factorization method: by introducing the Shell sorting algorithm, the sort count is substantially reduced, which lowers the simulation time and the hardware requirements of the system. The time complexity of the Shell sort gap sequence as applied to MMC-HVDC is derived, a formula for the sort count of the mixed sorting method is proposed, and the impact of the number of grouping layers on the frequency reduction is analyzed, leading to the conclusion that the optimization efficiency of the mixed sorting method is inversely proportional to the number of grouping layers. Finally, a two-terminal 401-level MMC-HVDC model is built and simulated in PSCAD/EMTDC; the simulation results verify the effectiveness and correctness of the mixed sorting method and of the analysis of the grouping layers' impact on optimization efficiency.
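The Shell sort at the core of the mixed sorting method, together with the usual MMC balancing rule for choosing which submodules to insert, can be sketched as follows. The grouping and prime-factorization layers of the full method are omitted, and the classic N//2^k gap sequence below is our choice for illustration.

```python
def shell_sort_indices(voltages):
    """Shell sort of submodule indices by capacitor voltage (ascending),
    using the classic halving gap sequence. Shell sort is what cuts the
    comparison count in the large-N balancing loop."""
    idx = list(range(len(voltages)))
    gap = len(idx) // 2
    while gap > 0:
        for i in range(gap, len(idx)):
            j, cur = i, idx[i]
            while j >= gap and voltages[idx[j - gap]] > voltages[cur]:
                idx[j] = idx[j - gap]
                j -= gap
            idx[j] = cur
        gap //= 2
    return idx

def pick_submodules(voltages, n_on, charging):
    """Standard balancing rule: when the arm current charges the capacitors,
    insert the n_on lowest-voltage submodules; otherwise the highest."""
    order = shell_sort_indices(voltages)
    return order[:n_on] if charging else order[-n_on:]

v = [2.02, 1.97, 2.10, 1.99, 2.05]            # per-unit capacitor voltages
print(pick_submodules(v, 2, charging=True))   # two lowest-voltage modules
```

In a real arm the sort runs every control cycle over hundreds of submodules, which is why reducing the sort count, rather than the per-comparison cost, dominates the hardware saving.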
AN IMPROVED ALGORITHM OF GMM VOICE CONVERSION SYSTEM BASED ON CHANGING THE TIME-SCALE
Institute of Scientific and Technical Information of China (English)
Zhou Ying; Zhang Linghua
2011-01-01
This paper improves and presents an advanced method for a voice conversion system based on Gaussian Mixture Model (GMM) models by changing the time-scale of speech. The Speech Transformation and Representation using Adaptive Interpolation of weiGHTed spectrum (STRAIGHT) model is adopted to extract the spectral features, and the GMM models are trained to generate the conversion function. The spectral features of a source speech are converted by the conversion function, and the time-scale of the speech is changed by extracting the converted features and adding them to the spectrum. The converted voice was evaluated by subjective and objective measurements. The results confirm that the transformed speech not only approximates the characteristics of the target speaker, but is also more natural and more intelligible.
Institute of Scientific and Technical Information of China (English)
陶兴华; 李永东; 孙敏
2011-01-01
A power-feedback-based control algorithm was developed to balance the DC-link voltages in a cascaded H-bridge (CHB) pulse-width modulation (PWM) rectifier. The averaged mathematical model of the CHB rectifier is built, and the relationship between the modulation index and the active power of the cells is deduced by mathematical analysis, from which the voltage-balancing control method is derived. The feasibility conditions of the proposed method are also discussed. Simulations on a four-cell CHB rectifier and tests on a two-cell prototype indicate that, within its applicable range, the CHB rectifier achieves good performance in both DC-link voltage balancing and grid-current regulation.
Revealing small-scale diffracting discontinuities by an optimization inversion algorithm
Yu, Caixia; Zhao, Jingtao; Wang, Yanfei
2017-02-01
Small-scale diffracting geologic discontinuities play a significant role in studying carbonate reservoirs. Their seismic responses are coded in diffracted/scattered waves. However, compared with reflections, the energy of these valuable diffractions is generally one or even two orders of magnitude weaker. This means that the information of diffractions is strongly masked by reflections in the seismic images. Detecting small-scale cavities and tiny faults in deep carbonate reservoirs, mostly deeper than 6 km, poses an even bigger challenge for seismic diffractions, as the surveyed seismic signals are weak and have a low signal-to-noise ratio (SNR). After analyzing the mechanism of the Kirchhoff migration method, the residual of prestack diffractions located in the neighborhood of the first Fresnel aperture is found to remain in the image space. Therefore, a strategy for extracting diffractions in the image space is proposed, and a regularized L2-norm model with a smoothness constraint on the local slopes is suggested for predicting reflections. According to the focusing conditions of residual diffractions in the image space, two approaches are provided for extracting diffractions. Diffraction extraction can be directly accomplished by subtracting the predicted reflections from seismic imaging data if the residual diffractions are focused. Otherwise, a diffraction velocity analysis is performed to refocus the residual diffractions. Two synthetic examples and one field application demonstrate the feasibility and efficiency of the two proposed methods in detecting small-scale geologic scatterers, tiny faults and cavities.
Directory of Open Access Journals (Sweden)
Heng-Yi Su
2016-01-01
This paper proposes an optimal control scheme based on a synchronized phasor (synchrophasor) for power system secondary voltage control. The framework covers voltage stability monitoring and control. Specifically, a voltage stability margin estimation algorithm is developed and built in the newly designed adaptive secondary voltage control (ASVC) method to achieve more reliable and efficient voltage regulation in power systems. This new approach is applied to improve voltage profile across the entire power grid by an optimized plan for VAR (reactive power) sources allocation; therefore, voltage stability margin of a power system can be increased to reduce the risk of voltage collapse. An extensive simulation study on the IEEE 30-bus test system is carried out to demonstrate the feasibility and effectiveness of the proposed scheme.
Voltage Control Scheme with Distributed Generation and Grid Connected Converter in a DC Microgrid
Directory of Open Access Journals (Sweden)
Jong-Chan Choi
2014-10-01
Direct Current (DC) microgrids are expected to become larger due to the rapid growth of DC energy sources and power loads. As the scale of the system expands, the importance of voltage control increases for stable operation of power systems. Many studies have been performed on voltage control methods in DC microgrids, but most of them focus only on a small-scale microgrid, such as a building microgrid. Therefore, a new control method is needed for a middle- or large-scale DC microgrid. This paper analyzes voltage drop problems in a large DC microgrid and proposes a cooperative voltage control scheme with a distributed generator (DG) and a grid connected converter (GCC). For voltage control with DGs, their location and capacity should be considered for economic operation of the system. Accordingly, an optimal DG allocation algorithm is proposed to minimize the capacity of a DG for voltage control in DC microgrids. The proposed methods are verified with typical load types by a simulation using MATLAB and PSCAD/EMTDC.
Coordinate Descent Algorithms for Large-Scale SVDD
Institute of Scientific and Technical Information of China (English)
陶卿; 罗强; 朱烨雷; 储德军
2012-01-01
Support vector data description (SVDD) is an unsupervised learning method with significant applications in image recognition and information security. Coordinate descent is an effective method for large-scale classification problems, with simple operations and a high convergence speed. In this paper, an efficient coordinate descent algorithm for solving large-scale SVDD is presented. The solution of the sub-problem concerned at each iteration is derived in closed form, and the computational cost is decreased through an accelerating strategy and cheap computation. Three methods for selecting the sub-problem are developed, and their advantages and disadvantages are analyzed and compared. Experiments on simulated and real large-scale databases validate the performance of the proposed algorithm. Compared with LibSVDD, the proposed algorithm has great superiority, taking less than 1.4 seconds to solve a text database from ijcnn with 10^5 training examples.
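A hedged reconstruction of the closed-form sub-problem update for a linear kernel (the paper's actual selection strategies and acceleration tricks are not reproduced here): because the SVDD dual constrains the multipliers to sum to one, the natural coordinate move is a pairwise, SMO-style step with a clipped closed-form optimum.

```python
import random

def svdd_pairwise_cd(X, C=1.0, sweeps=500, seed=0):
    """Dual coordinate descent sketch for linear-kernel SVDD.
    The dual carries sum(alpha) = 1, so coordinates are updated in pairs
    (i, j): alpha_i += t, alpha_j -= t, with t available in closed form."""
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    alpha = [1.0 / n] * n
    # hypersphere centre c = sum_i alpha_i * x_i, maintained incrementally
    c = [sum(alpha[i] * X[i][k] for i in range(n)) for k in range(d)]
    kdiag = [sum(v * v for v in x) for x in X]  # K_ii for the linear kernel
    for _ in range(sweeps):
        i, j = rng.randrange(n), rng.randrange(n)
        if i == j:
            continue
        diff = [X[i][k] - X[j][k] for k in range(d)]
        denom = 2.0 * sum(v * v for v in diff)
        if denom == 0.0:
            continue
        # maximise the dual in t (closed form), then clip to the box [0, C]
        t = (kdiag[i] - kdiag[j]
             - 2.0 * sum(c[k] * diff[k] for k in range(d))) / denom
        t = max(max(-alpha[i], alpha[j] - C),
                min(t, min(C - alpha[i], alpha[j])))
        alpha[i] += t
        alpha[j] -= t
        for k in range(d):
            c[k] += t * diff[k]
    return alpha, c
```

On the four corners of the unit square the optimal centre is the middle of the square, which this update leaves fixed; the invariants (multipliers in the box, summing to one) hold throughout.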
Inductive voltage divider modeling in Matlab
Andreev, S. A.; Kim, V. L.
2017-01-01
Inductive voltage dividers have the most appropriate metrological characteristics for alternating current and are widely used for converting physical signals. A model of a double-decade inductive voltage divider was designed with the help of Matlab/Simulink. The first decade is an inductive voltage divider with balanced winding; the second decade is a single-stage inductive voltage divider. In the paper, a new transfer function algorithm is given. The study shows the errors and differences that appear between the reduced third-degree model and the unreduced twentieth-degree model. The amplitude errors of the reduced and unreduced models differ by no more than 7%.
Institute of Scientific and Technical Information of China (English)
Hossein AGHABABA; Behjat FOROUZANDEH; Ali AFZALI-KUSHA
2012-01-01
We propose a modeling methodology for both leakage power consumption and delay of basic CMOS digital gates in the presence of threshold voltage and mobility variations. The key parameters determining the leakage and delay are the OFF and ON currents, respectively, which are both affected by variation of the threshold voltage. Additionally, the current is a strong function of mobility. The proposed methodology relies on a proper modeling of the threshold voltage and mobility variations, which may be induced by any source. Using this model, in the plane of threshold voltage and mobility, we determine regions for different combinations of performance (speed) and leakage. Based on these regions, we discuss the trade-off between leakage and delay where the leakage-delay product is the optimization objective. To assess the accuracy of the proposed model, we compare its predictions with those of HSPICE simulations for both basic digital gates and ISCAS85 benchmark circuits in 45-, 65-, and 90-nm technologies.
Directory of Open Access Journals (Sweden)
Supriya Aggarwal
2012-01-01
One of the most important steps in spectral analysis is filtering, where window functions are generally used to design filters. In this paper, we modify the existing architecture for realizing window functions using a CORDIC processor. Firstly, we modify the conventional CORDIC algorithm to reduce its latency and area. The proposed CORDIC algorithm is completely scale-free for a range of convergence that spans the entire coordinate space. Secondly, we realize the window functions using a single CORDIC processor, as against two serially connected CORDIC processors in the existing technique, thus optimizing for area and latency. The linear CORDIC processor is replaced by a shift-add network, which drastically reduces the number of pipelining stages required in the existing design. The proposed design on average requires approximately 64% fewer pipeline stages and saves up to 44.2% area. Currently, the processor is designed to implement the Blackman windowing architecture, which with slight modifications can be extended to other window functions as well. The details of the proposed architecture are discussed in the paper.
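For orientation, here is the conventional rotation-mode CORDIC that scale-free variants such as the one above improve upon. This floating-point sketch compensates the constant CORDIC gain K explicitly at the end; removing the need for that compensation is precisely what a scale-free formulation buys.

```python
import math

def cordic_sin_cos(theta, iterations=32):
    """Approximate (cos(theta), sin(theta)) with classic rotation-mode
    CORDIC: only shifts, adds and a precomputed arctan table are needed.
    Valid for |theta| <= sum(atan(2**-i)) ~ 1.743 rad."""
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    k = 1.0
    for i in range(iterations):
        # accumulated gain of the elementary pseudo-rotations
        k *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, theta  # start on the x-axis, residual angle z
    for i in range(iterations):
        d = 1.0 if z >= 0.0 else -1.0  # rotate toward the residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x * k, y * k  # undo the accumulated CORDIC gain

c, s = cordic_sin_cos(0.7)
```

In hardware, the `2.0 ** -i` multiplications become barrel shifts and the gain correction is either a constant multiply or, in scale-free schemes, eliminated altogether.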
Knudsen, Steven; Golubovic, Leonardo
Prospects to build Space Elevator (SE) systems have become realistic with ultra-strong materials such as carbon nanotubes and diamond nanothreads. At cosmic length-scales, space elevators can be modeled as polymer-like floppy strings of tethered mass beads. A new venue in SE science has emerged with the introduction of the Rotating Space Elevator (RSE) concept supported by novel algorithms discussed in this presentation. An RSE is a loopy string reaching into outer space. Unlike the classical geostationary SE concepts of Tsiolkovsky, Artsutanov, and Pearson, our RSE exhibits an internal rotation. Thanks to this, objects sliding along the RSE loop spontaneously oscillate between two turning points, one of which is close to the Earth whereas the other one is in outer space. The RSE concept thus solves a major problem in SE technology, which is how to supply energy to the climbers moving along space elevator strings. The investigation of the classical and statistical mechanics of a floppy string interacting with objects sliding along it required the development of subtle computational algorithms described in this presentation.
Institute of Scientific and Technical Information of China (English)
赵宏博; 姚良忠; 王伟胜; 张文亮; 迟永宁; 李琰
2015-01-01
Outages of large-scale wind power caused by high voltage during the voltage recovery period after fault clearing have occurred frequently in recent years. Based on these events, the main causes of the post-disturbance high-voltage phenomenon are analyzed from two aspects: the dynamic reactive power control strategy of wind turbines during low voltage ride-through (LVRT), and the control characteristics of the additional reactive power compensation devices in wind farms. Field test data on wind turbines verify that the dynamic reactive power control strategy affects the turbines' terminal voltage. On this basis, a coordinated prevention and control strategy is proposed to avoid large-scale wind power outages resulting from high voltage: wind turbines that fulfill the high voltage ride-through (HVRT) requirements regulate the reactive power of the system according to terminal voltage fluctuations, while the reactive power compensation devices in wind farms provide fast regulation and withdrawal according to the point-of-connection voltage and the outage situation of turbines in the farm. Finally, the effectiveness of the coordinated prevention and control strategy is validated by simulation.
Walker, Joel W.
2014-08-01
The MT2, or "s-transverse mass", statistic was developed to associate a parent mass scale to a missing transverse energy signature, given that escaping particles are generally expected in pairs, while collider experiments are sensitive to just a single transverse momentum vector sum. This document focuses on the generalized extension of that statistic to asymmetric one- and two-step decay chains, with arbitrary child particle masses and upstream missing transverse momentum. It provides a unified theoretical formulation, complete solution classification, taxonomy of critical points, and a technical algorithmic prescription for treatment of the event scale. An implementation of the described algorithm is available for download, and is also a deployable component of the author's selection cut software package AEACuS (Algorithmic Event Arbiter and Cut Selector). Appendices address combinatoric event assembly, algorithm validation, and complete pseudocode.
Heuristic algorithm for determination of local properties of scale-free networks
Mitrovic, M
2006-01-01
Complex networks are everywhere. Many phenomena in nature can be modeled as networks: brain structures, protein-protein interaction networks, social interactions, and the Internet and WWW. They can be represented in terms of nodes and the edges connecting them. An important characteristic is that these networks are not random; they have a structured architecture, and the structures of different networks are similar: all have a power-law degree distribution (the scale-free property), and despite their large size there is usually a relatively short path between any two nodes (the small-world property). Global characteristics include the degree distribution, the clustering coefficient and the diameter. Local structure is captured by the frequency of subgraphs of a given type, where a subgraph of order k is a part of the network consisting of k nodes and the edges between them; there are different types of subgraphs of the same order.
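The simplest instance of counting order-k subgraphs is k = 3, i.e. triangles. A minimal sketch over an edge list, using the common-neighbour intersection trick so each triangle is counted exactly once:

```python
def count_triangles(edges):
    """Count order-3 complete subgraphs (triangles) from an edge list,
    a minimal instance of local subgraph-frequency analysis."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    count = 0
    for u in adj:
        for v in adj[u]:
            if v > u:
                # common neighbours w > v close triangle u-v-w exactly once
                count += sum(1 for w in adj[u] & adj[v] if w > v)
    return count

print(count_triangles([(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]))
```

For the complete graph on four nodes this returns 4, matching the C(4,3) possible triangles; heuristic algorithms for large scale-free networks replace the exhaustive intersection with sampling.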
Near-Threshold Computing and Minimum Supply Voltage of Single-Rail MCML Circuits
Directory of Open Access Journals (Sweden)
Ruiping Cao
2014-01-01
In high-speed applications, MOS current mode logic (MCML) is a good alternative. Scaling down the supply voltage of MCML circuits can achieve a low power-delay product (PDP). However, at present almost all MCML circuits are realized with a dual-rail scheme, where the NMOS configuration in series limits the minimum supply voltage. In this paper, single-rail MCML (SRMCML) circuits are described, which avoid devices configured in series, since their logic evaluation block can be realized using only MOS devices in parallel. The relationship between the minimum supply voltage of SRMCML circuits and the model parameters of MOS transistors is derived, so that the minimum supply voltage can be estimated before circuit design. An MCML dynamic flip-flop based on SRMCML is also proposed. An optimization algorithm for near-threshold sequential circuits is presented, and a near-threshold SRMCML mod-10 counter based on this algorithm is verified. Scaling down the supply voltage of the SRMCML circuits is also investigated. The power dissipation, delay, and power-delay products of these circuits are evaluated. The results show that near-threshold SRMCML circuits can obtain low delay and a small power-delay product.
1976-02-01
All transistors are 2N43A or equivalent germanium alloy PNP, powered from an AA alkaline battery. The input signal, regardless of polarity, is full-wave rectified by the diode-connected germanium transistor bridge T1, T2, T3, and T4. Transistor T5 acts as a second current limiter. Resistor R2 was selected to give 90% of full-scale meter deflection with an input signal of 115 volts.
Institute of Scientific and Technical Information of China (English)
傅军栋; 喻勇; 刘晶
2014-01-01
Unbalanced three-phase load has negative effects on the power supply and on electrical appliances such as transformers, resulting in poor low-voltage grid reliability and stability and higher line losses. Combined with an engineering example, this paper uses a genetic algorithm in the Matlab environment to establish an optimal load distribution scheme that keeps the unbalanced three-phase circuit as close as possible to the three-phase equilibrium state. As the zero-sequence current and the distribution network losses decrease, the system can run in a more economical state.
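A toy version of the GA formulation (the chromosome encoding, fitness function and parameters below are illustrative assumptions, not those of the cited work): each gene assigns one single-phase load to phase A, B or C, and fitness is the squared deviation of the three phase totals from their mean.

```python
import random

def balance_loads(loads, generations=200, pop_size=30, seed=1):
    """Toy genetic algorithm assigning single-phase loads (kW) to phases
    A/B/C so that the three phase totals are as equal as possible."""
    rng = random.Random(seed)
    n = len(loads)

    def imbalance(assign):
        totals = [0.0, 0.0, 0.0]
        for load, phase in zip(loads, assign):
            totals[phase] += load
        mean = sum(totals) / 3.0
        return sum((t - mean) ** 2 for t in totals)

    pop = [[rng.randrange(3) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=imbalance)            # elitist selection
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]      # one-point crossover
            k = rng.randrange(n)
            child[k] = rng.randrange(3)    # single-gene mutation
            children.append(child)
        pop = survivors + children
    best = min(pop, key=imbalance)
    return best, imbalance(best)

best, imb = balance_loads([5.0, 5.0, 5.0, 3.0, 3.0, 3.0, 1.0, 1.0, 1.0])
```

For this load set a perfect split (9 kW per phase) exists, and the GA drives the imbalance toward zero; a real deployment would add reconnection costs and phase-swap constraints to the fitness.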
Institute of Scientific and Technical Information of China (English)
郭小磊
2014-01-01
The reliability of the distribution network is an important assessment index for power supply enterprises. A fast sectioning algorithm for reliability evaluation of complex medium-voltage distribution networks is analyzed and improved. Based on failure mode and effect analysis, a partition method for fault types is given, and principles for constructing the adjacency matrix used in network division are adopted. Methods for equivalent calculation of reliability parameters and for constructing a simplified distribution network model are also put forward.
Xiao, Li; Cai, Qin; Li, Zhilin; Zhao, Hongkai; Luo, Ray
2014-01-01
A multi-scale framework is proposed for more realistic molecular dynamics simulations in continuum solvent models by coupling a molecular mechanics treatment of solute with a fluid mechanics treatment of solvent. This article reports our initial efforts to formulate the physical concepts necessary for coupling the two mechanics and develop a 3D numerical algorithm to simulate the solvent fluid via the Navier-Stokes equation. The numerical algorithm was validated with multiple test cases. The validation shows that the algorithm is effective and stable, with observed accuracy consistent with our design.
LEIR RF Voltage Calibration using Phase Space Tomography
Hancock, S; Findlay, A
2010-01-01
The influence on convergence of the rf voltage input into the iterative algorithm of the Tomoscope has been used to confirm that the voltage calibration used in the digital cavity servo at LEIR is valid to better than 10%. Under the right conditions, this novel beam-based determination of rf voltage using tomography can be extraordinarily precise.
Cavity Voltage Phase Modulation MD
Mastoridis, Themistoklis; Molendijk, John; Timko, Helga; CERN. Geneva. ATS Department
2016-01-01
The LHC RF/LLRF system is currently configured for extremely stable RF voltage to minimize transient beam loading effects. The present scheme cannot be extended beyond nominal beam current since the demanded power would exceed the peak klystron power and lead to saturation. A new scheme has therefore been proposed: for beam currents above nominal (and possibly earlier), the cavity phase modulation by the beam will not be corrected (transient beam loading), but the strong RF feedback and One-Turn Delay feedback will still be active for loop and beam stability in physics. To achieve this, the voltage set point will be adapted for each bunch. The goal of this MD was to test a new algorithm that would adjust the voltage set point to achieve the cavity phase modulation that would minimize klystron forward power.
Voltage Unbalance Compensation with Smart Three-phase Loads
DEFF Research Database (Denmark)
Douglass, Philip; Trintis, Ionut; Munk-Nielsen, Stig
2016-01-01
This paper describes the design, proof-of-concept simulations and laboratory test of an algorithm for controlling active front-end rectifiers to reduce voltage unbalance. Using inputs of RMS voltage, the rectifier controller allocates load unevenly on its 3 phases to compensate for voltage unbala...... is caused by asymmetrical loads. These results suggest that the optimal algorithm to reduce system unbalance depends on which system parameter is most important: phase-neutral voltage unbalance, phase-phase voltage unbalance, or current unbalance....
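The load-allocation idea in the abstract above can be sketched as follows (the gain and the renormalization step are illustrative assumptions): the rectifier shifts load toward phases whose RMS voltage sits above the mean, unloading the depressed phases while preserving the total demanded power.

```python
import statistics

def allocate_phase_loads(v_rms, p_total, gain=0.1):
    """Sketch of an active-front-end unbalance compensator: start from an
    even three-phase split, then shift load toward high-voltage phases."""
    mu = statistics.mean(v_rms)
    shares = [p_total / 3.0 * (1.0 + gain * (v - mu)) for v in v_rms]
    scale = p_total / sum(shares)   # renormalise to the demanded total
    return [s * scale for s in shares]

loads = allocate_phase_loads([228.0, 230.0, 233.0], 3000.0)
```

With phase C running high and phase A depressed, phase C takes the largest share; a normalized variant would divide the deviation by the standard deviation of recent voltage samples, as in the droop comparison study cited above.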
Stochastic characterization of small-scale algorithms for human sensory processing.
Neri, Peter
2010-12-01
Human sensory processing can be viewed as a functional H mapping a stimulus vector s into a decisional variable r. We currently have no direct access to r; rather, the human makes a decision based on r in order to drive subsequent behavior. It is this (typically binary) decision that we can measure. For example, there may be two external stimuli s(0) and s(1), mapped onto r(0) and r(1) by the sensory apparatus H; the human chooses the stimulus associated with the largest r. This kind of decisional transduction poses a major challenge for an accurate characterization of H. In this article, we explore a specific approach based on a behavioral variant of reverse correlation techniques, where the input s contains a target signal corrupted by a controlled noisy perturbation. The presence of the target signal poses an additional challenge because it distorts the otherwise unbiased nature of the noise source. We consider issues arising from both the decisional transducer and the target signal, their impact on system identification, and ways to handle them effectively for system characterizations that extend to second-order functional approximations with associated small-scale cascade models.
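A minimal simulation of the reverse-correlation recipe (a linear-template observer in a yes/no detection task; all parameters are illustrative): the first-order kernel estimate is the mean noise field on "yes" trials minus that on "no" trials, and it recovers the observer's internal template up to scale.

```python
import random

def classification_image(n_trials=5000, n=16, amplitude=0.25, seed=0):
    """Reverse-correlation sketch: average the noise fields conditioned on
    the simulated observer's binary response to estimate its template."""
    rng = random.Random(seed)
    template = [1.0 if i < n // 2 else -1.0 for i in range(n)]
    yes_sum, no_sum = [0.0] * n, [0.0] * n
    n_yes = n_no = 0
    for trial in range(n_trials):
        noise = [rng.gauss(0.0, 1.0) for _ in range(n)]
        present = trial % 2 == 0            # target on half the trials
        stim = [noise[i] + (amplitude * template[i] if present else 0.0)
                for i in range(n)]
        # the observer's decisional variable r = H(s): an inner product
        says_yes = sum(template[i] * stim[i] for i in range(n)) > 0.0
        if says_yes:
            n_yes += 1
            for i in range(n):
                yes_sum[i] += noise[i]
        else:
            n_no += 1
            for i in range(n):
                no_sum[i] += noise[i]
    kernel = [yes_sum[i] / max(n_yes, 1) - no_sum[i] / max(n_no, 1)
              for i in range(n)]
    return template, kernel
```

Note the bias the article warns about: averaging the noise rather than the full stimulus is what keeps the target signal from contaminating the kernel estimate.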
Stop-and-Stare: Optimal Sampling Algorithms for Viral Marketing in Billion-scale Networks
Nguyen, Hung T; Dinh, Thang N
2016-01-01
Influence Maximization (IM), which seeks a small set of key users who spread influence widely into the network, is a core problem in multiple domains. It finds applications in viral marketing, epidemic control, and assessing cascading failures within complex systems. Despite a huge amount of effort, IM in billion-scale networks such as Facebook, Twitter, and the World Wide Web has not been satisfactorily solved. Even state-of-the-art methods such as TIM+ and IMM may take days on those networks. In this paper, we propose SSA and D-SSA, two novel sampling frameworks for IM-based viral marketing problems. SSA and D-SSA are up to 1200 times faster than the SIGMOD'15 best method, IMM, while providing the same (1 − 1/e − ε) approximation guarantee. Underlying our frameworks is an innovative Stop-and-Stare strategy in which they stop at exponential check points to verify (stare) if there is adequate statistical evidence on the solution quality. Theoretically, we prove that SSA and D-SSA are the first appr...
Evaluating machine learning algorithms estimating tremor severity ratings on the Bain-Findley scale
Yohanandan, Shivanthan A. C.; Jones, Mary; Peppard, Richard; Tan, Joy L.; McDermott, Hugh J.; Perera, Thushara
2016-12-01
Tremor is a debilitating symptom of some movement disorders. Effective treatment, such as deep brain stimulation (DBS), is contingent upon frequent clinical assessments using instruments such as the Bain-Findley tremor rating scale (BTRS). Many patients, however, do not have access to frequent clinical assessments. Wearable devices have been developed to provide patients with access to frequent objective assessments outside the clinic via telemedicine. Nevertheless, the information they report is not in the form of BTRS ratings. One way to transform this information into BTRS ratings is through linear regression models (LRMs). Another, potentially more accurate method is through machine learning classifiers (MLCs). This study aims to compare MLCs and LRMs, and identify the most accurate model that can transform objective tremor information into tremor severity ratings on the BTRS. Nine participants with upper limb tremor had their DBS stimulation amplitude varied while they performed clinical upper-extremity exercises. Tremor features were acquired using the tremor biomechanics analysis laboratory (TREMBAL). Movement disorder specialists rated tremor severity on the BTRS from video recordings. Seven MLCs and 6 LRMs transformed TREMBAL features into tremor severity ratings on the BTRS using the specialists' ratings as training data. The weighted Cohen's kappa (κ_w) defined the models' rating accuracy. This study shows that the Random Forest MLC was the most accurate model (κ_w = 0.81) at transforming tremor information into BTRS ratings, thereby improving the clinical interpretation of tremor information obtained from wearable devices.
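The evaluation metric used above, weighted Cohen's kappa (κ_w), has a compact definition that is easy to state precisely. A self-contained sketch with linear weights (the scale size and the sample ratings below are illustrative, not the study's data):

```python
def weighted_kappa(rater_a, rater_b, n_levels):
    """Linear-weighted Cohen's kappa between two integer rating series on
    a 0..(n_levels-1) ordinal scale: 1 - observed/expected disagreement."""
    n = len(rater_a)
    obs = [[0.0] * n_levels for _ in range(n_levels)]
    for a, b in zip(rater_a, rater_b):
        obs[a][b] += 1.0 / n                       # joint rating frequencies
    pa = [sum(obs[i][j] for j in range(n_levels)) for i in range(n_levels)]
    pb = [sum(obs[i][j] for i in range(n_levels)) for j in range(n_levels)]
    # linear disagreement weights: 0 on the diagonal, 1 at maximum distance
    w = [[abs(i - j) / (n_levels - 1) for j in range(n_levels)]
         for i in range(n_levels)]
    d_obs = sum(w[i][j] * obs[i][j]
                for i in range(n_levels) for j in range(n_levels))
    d_exp = sum(w[i][j] * pa[i] * pb[j]
                for i in range(n_levels) for j in range(n_levels))
    return 1.0 - d_obs / d_exp
```

Perfect agreement gives κ_w = 1, chance-level agreement gives κ_w ≈ 0; quadratic weights (squared distance) are an equally common choice when large rating errors should be penalized more heavily.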
Directory of Open Access Journals (Sweden)
Othman M. K. Alsmadi
2015-01-01
A robust computational technique for model order reduction (MOR) of multi-time-scale discrete systems (single-input single-output (SISO) and multi-input multi-output (MIMO)) is presented in this paper. This work is motivated by the singular perturbation of multi-time-scale systems, where some specific dynamics may not have significant influence on the overall system behavior. The new approach is proposed using genetic algorithms (GA), with the advantages of obtaining a reduced order model, maintaining the exact dominant dynamics in the reduced order, and minimizing the steady state error. The reduction process is performed by obtaining an upper triangular transformed matrix of the system state matrix defined in state space representation, along with the elements of the B, C, and D matrices. The GA computational procedure is based on maximizing the fitness function corresponding to the response deviation between the full and reduced order models. The proposed computational intelligence MOR method is compared to recently published work on MOR techniques, where simulation results show the potential and advantages of the new approach.
Energy Technology Data Exchange (ETDEWEB)
Alexewicz, A., E-mail: alexander.alexewicz@tuwien.ac.at [Vienna University of Technology, Floragasse 7, 1040 Vienna (Austria); Ostermaier, C.; Henkel, C.; Bethge, O. [Vienna University of Technology, Floragasse 7, 1040 Vienna (Austria); Carlin, J.-F.; Lugani, L.; Grandjean, N. [Ecole Polytechnique Federale de Lausanne, Station 3, 1015 Lausanne (Switzerland); Bertagnolli, E.; Pogany, D.; Strasser, G. [Vienna University of Technology, Floragasse 7, 1040 Vienna (Austria)
2012-07-31
We present enhancement-mode GaN high electron mobility transistors on Si substrates with ZrO2 gate dielectrics of thicknesses t_ox between 10 and 24 nm. The oxide interlayers between the InAlN/AlN barrier and the gate metal allow raising the device threshold voltage up to +2.3 V and reduce the gate leakage current to less than 100 nA/mm with a high drain current on/off ratio of 4 orders of magnitude. We use a model that explains the observed linear dependence of the threshold voltage on t_ox and allows determining fixed charges at the oxide/barrier interface. Highlights: enhancement-mode InAlN/AlN-GaN high electron mobility transistor (HEMT); metal oxide semiconductor HEMT with ZrO2 gate oxide; linear decrease of threshold voltage with increasing gate oxide thickness; a model explaining that dependence is presented and allows determining fixed charges at the InAlN/ZrO2 interface.
Energy Technology Data Exchange (ETDEWEB)
Yamin, H.Y. [Yarmouk Univ., Irbid (Jordan). Dept. of Power Engineering; Shahidehpour, S.M. [Illinois Inst. of Technology, Chicago, IL (United States). Dept. of Electrical and Computer Engineering
2003-12-01
This paper describes a generalized active/reactive iterative coordination process between GENCOs and the Independent System Operator (ISO) for active (transmission congestion) and reactive (voltage profile) management in the day-ahead market. GENCOs apply price-based unit commitment without transmission and voltage security constraints, schedule their units and submit their initial bids to the ISO. The ISO executes congestion and voltage profile management to eliminate transmission and voltage profile violations. If violations are not eliminated, the ISO minimizes the transmission and voltage profile violations and sends a signal via the Internet to the GENCOs. The GENCOs reschedule their units taking the ISO signals into account and submit modified bids to the ISO. The voltage problem is addressed with a linear model formulated in the proposed method: it is posed as a linear program with a block-angular structure, and Dantzig-Wolfe decomposition is applied to generate several smaller problems for a faster and easier solution of large-scale power systems. Two 36-unit GENCOs are used to demonstrate the performance of the proposed generalized active/reactive coordination algorithm. (author)
Lp norm approaches for estimating voltage flicker
Energy Technology Data Exchange (ETDEWEB)
Inan, Aslan [Department of Electrical Engineering, Faculty of Electrical-Electronics, Yildiz Technical University, Istanbul (Turkey); Bakroun, Maher [Antrim Crescent, Toronto, Ontario (Canada); Heydt, Gerald T. [Fulton School of Engineering, Arizona State University, Tempe, AZ (United States)
2010-12-15
It is important to accurately estimate instantaneous voltage flicker magnitudes and frequencies in order to correctly evaluate voltage fluctuations. Voltage flicker is a problem in electric power quality. Different approaches to determining the magnitude of voltage flicker have been presented: measurement methods generally use a flickermeter device; simulation methods require a computer model of the disturbing load and the flickermeter; calculation methods rely on a simplified empirical formula; and estimation algorithms are based on estimating the voltage flicker components. In this paper, two models of voltage flicker are discussed: Lp estimation algorithms utilizing the L1, L2 and L∞ norms are used to estimate the voltage magnitudes of the flicker signals as well as the fundamental voltage magnitude. The main result is that it is possible to design an Lp estimator to identify flicker frequency and amplitude from time series measurements. (author)
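For the L2 case the estimator has a closed form: fitting a single flicker tone plus a DC term to an RMS-envelope series reduces to projecting the samples onto sine and cosine regressors at the flicker frequency (the signal parameters below are illustrative, and over an integer number of flicker periods the normal equations decouple).

```python
import math

def estimate_flicker(samples, dt, f_flicker):
    """Closed-form L2 (least-squares) fit of a single flicker tone
    A*sin + B*cos + V0 to a uniformly sampled RMS-envelope series.
    Assumes the record spans an integer number of flicker periods."""
    n = len(samples)
    s = [math.sin(2.0 * math.pi * f_flicker * k * dt) for k in range(n)]
    c = [math.cos(2.0 * math.pi * f_flicker * k * dt) for k in range(n)]
    # with orthogonal regressors the least-squares coefficients decouple
    a = 2.0 / n * sum(x * si for x, si in zip(samples, s))
    b = 2.0 / n * sum(x * ci for x, ci in zip(samples, c))
    v0 = sum(samples) / n
    return math.hypot(a, b), v0
```

For an 8.8 Hz flicker tone of amplitude 2.3 V riding on a 230 V envelope, sampled at 1 kHz over 5 s (exactly 44 periods), the fit recovers both the flicker amplitude and the fundamental magnitude to machine precision; L1 and L∞ estimators require iterative linear programming instead.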
New Control Technique Applied in Dynamic Voltage Restorer for Voltage Sag Mitigation
Directory of Open Access Journals (Sweden)
Rosli Omar
2010-01-01
The dynamic voltage restorer (DVR) is a power electronics device able to compensate voltage sags on critical loads dynamically. The DVR consists of a VSC, injection transformers, passive filters and energy storage (lead-acid battery). By injecting an appropriate voltage, the DVR restores the voltage waveform and ensures a constant load voltage. Many types of control techniques are used in DVRs for mitigating voltage sags, and the efficiency of the DVR depends on the efficiency of the control technique involved in switching the inverter. Problem statement: simulation and experimental investigation toward the development of new algorithms based on SVPWM, understanding the nature of the DVR, and performance comparison of the various available controller technologies. The proposed controller using space vector modulation techniques obtains higher amplitude modulation indexes than conventional SPWM techniques; moreover, space vector modulation techniques can be easily implemented using digital processors, and space vector PWM can produce about 15% higher output voltage than standard sinusoidal PWM. Approach: the purpose of this research was to study the implementation of SVPWM in a DVR; the proposed control algorithm was investigated through computer simulation using PSCAD/EMTDC software. Results: simulation and experimental results showed the effectiveness and efficiency of the proposed SVPWM-based controller in mitigating voltage sags in low-voltage distribution systems, and the controller works well under both balanced and unbalanced voltage conditions. Conclusion/Recommendations: the simulation and experimental results of a DVR using PSCAD/EMTDC software based on the SVPWM technique showed clearly the performance of the DVR in mitigating voltage sags. The DVR operates without difficulty to inject the appropriate voltage component to correct rapidly any anomaly in the supply voltage to keep the
Moussa, Jonathan E
2014-01-07
The random-phase approximation with second-order screened exchange (RPA+SOSEX) is a model of electron correlation energy with two caveats: its accuracy depends on an arbitrary choice of mean field, and it scales as O(n^5) operations and O(n^3) memory for n electrons. We derive a new algorithm that reduces its scaling to O(n^3) operations and O(n^2) memory using controlled approximations and a new self-consistent field that approximates Brueckner coupled-cluster doubles theory with RPA+SOSEX, referred to as Brueckner RPA theory. The algorithm comparably reduces the scaling of second-order Møller-Plesset perturbation theory with smaller cost prefactors than RPA+SOSEX. Within a semiempirical model, we study H2 dissociation to test accuracy and Hn rings to verify scaling.
Voltage scheduling for low power/energy
Manzak, Ali
2001-07-01
Power considerations have become an increasingly dominant factor in the design of both portable and desktop systems. An effective way to reduce power consumption is to lower the supply voltage, since voltage is quadratically related to power. This dissertation considers the problem of lowering the supply voltage at (i) the system level and at (ii) the behavioral level. At the system level, the voltage of the variable voltage processor is dynamically changed with the workload. Processors with limited-sized buffers as well as those with very large buffers are considered. Given the task arrival times, deadlines, execution times, periods and switching activities, task scheduling algorithms that minimize energy or peak power are developed for processors equipped with very large buffers. A relation between the operating voltages of the tasks for minimum energy/power is determined using the Lagrange multiplier method, and an iterative algorithm that utilizes this relation is developed. Experimental results show that the voltage assignment obtained by the proposed algorithm is very close to the optimal energy assignment (0.1% error) and the optimal peak power assignment (1% error). Next, on-line and off-line minimum-energy task scheduling algorithms are developed for processors with limited-sized buffers. These algorithms have polynomial time complexity and provide optimal (off-line) and close-to-optimal (on-line) solutions. A procedure to calculate the minimum buffer size given information about the size of the task (maximum, minimum), execution time (best case, worst case) and deadlines is also presented. At the behavioral level, resources operating at multiple voltages are used to minimize power while maintaining throughput. Such a scheme has the advantage of allowing modules on the critical paths to be assigned to the highest voltage levels (thus meeting the required timing constraints) while allowing modules on non-critical paths to be assigned
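The convexity argument behind such minimum-energy schedules can be sketched with a toy model. The cubic power law (power ~ speed^3, so energy per cycle ~ V^2 with frequency ~ V), the function names, and the numbers are illustrative assumptions, not taken from the dissertation:

```python
# Hedged sketch: with convex power P(s) ~ k*s^3, running all jobs at the
# lowest constant speed that meets the deadline minimizes energy (Jensen's
# inequality); energy per cycle then scales as speed^2 (~ V^2).

def optimal_constant_speed(cycles, arrival, deadline):
    """Lowest constant speed (cycles/second) finishing all work by the deadline."""
    return sum(cycles) / (deadline - arrival)

def energy(cycles, speed, k=1.0):
    # time = cycles/speed, power ~ k*speed^3  ->  energy ~ k*cycles*speed^2
    return sum(k * c * speed ** 2 for c in cycles)
```

Any schedule that runs part of the work faster and then idles (or slows down) spends strictly more energy than the constant-speed schedule, which is the intuition the Lagrange-multiplier relation formalizes.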
Walker, Joel W
2014-01-01
The MT2, or "s-transverse mass", statistic was developed to cope with the difficulty of associating a parent mass scale with a missing transverse energy signature, given that models of new physics generally predict production of escaping particles in pairs, while collider experiments are sensitive to just a single vector sum over all sources of missing transverse momentum. This document focuses on the generalized extension of that statistic to asymmetric one- and two-step decay chains, with arbitrary child particle masses and upstream missing transverse momentum. It provides a unified theoretical formulation, complete solution classification, taxonomy of critical points, and technical algorithmic prescription for treatment of the MT2 event scale. An implementation of the described algorithm is available for download, and is also a deployable component of the author's fully-featured selection cut software package AEACuS (Algorithmic Event Arbiter and Cut Selector).
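For reference, the symmetric form of the statistic (of which the record above treats the generalized asymmetric extension) is conventionally written as:

```latex
% Symmetric MT2: minimize over all splittings of the missing transverse
% momentum between the two invisible children (common child mass m_chi
% assumed here for simplicity).
M_{T2}^2 \;=\; \min_{\vec{q}_{T1} + \vec{q}_{T2} \,=\, \vec{p}_T^{\,\mathrm{miss}}}
\max\Bigl\{\, m_T^2\bigl(\vec{p}_{T1}, \vec{q}_{T1}\bigr),\;
              m_T^2\bigl(\vec{p}_{T2}, \vec{q}_{T2}\bigr) \Bigr\},
\qquad
m_T^2(\vec{p}_T, \vec{q}_T) \;=\; m_v^2 + m_\chi^2
  + 2\bigl( E_T^{\,v} E_T^{\,\chi} - \vec{p}_T \cdot \vec{q}_T \bigr),
```

where each transverse energy is $E_T = \sqrt{m^2 + |\vec{p}_T|^2}$; the asymmetric extension allows the two decay chains to carry different child masses.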
Directory of Open Access Journals (Sweden)
Sepehr Sadighi
2015-07-01
Full Text Available In this paper, a hybrid model for estimating the activity of a commercial Pt-Re/Al2O3 catalyst in an industrial-scale heavy naphtha catalytic reforming unit (CRU) is presented. This model is also capable of predicting the research octane number (RON) and the yield of gasoline. In the proposed model, called DANN, the decay function of heterogeneous catalysts is combined with a recurrent-layer artificial neural network. From a life cycle (919 days), fifty-eight points are selected for building and training the DANN (60%), nineteen data points for testing (20%), and the remaining ones for the validation step. Results show that DANN can acceptably estimate the activity of the catalyst during its life in consideration of all process variables. Moreover, it is confirmed that the proposed model is capable of predicting RON and gasoline yield for unseen (validation) data with AAD% (average absolute deviation) of 0.272% and 0.755%, respectively. After validating the model, the octane barrel level (OCB) of the plant is maximized by manipulating the inlet temperature of the reactors and the hydrogen-to-hydrocarbon molar ratio whilst all process limitations are taken into account. Over a complete life cycle, results show that the decision variables generated by the optimization program can increase the RON, process yield and OCB of the CRU by about 1.15%, 3.21%, and 4.56%, respectively. © 2015 BCREC UNDIP. All rights reserved. Received: 27th July 2014; Revised: 31st May 2015; Accepted: 31st May 2015. How to Cite: Sadighi, S., Mohaddecy, R.S., Norouzian, A. (2015). Optimizing an Industrial Scale Naphtha Catalytic Reforming Plant Using a Hybrid Artificial Neural Network and Genetic Algorithm Technique. Bulletin of Chemical Reaction Engineering & Catalysis, 10(2): 210-220. doi:10.9767/bcrec.10.2.7171.210-220. Permalink/DOI: http://dx.doi.org/10.9767/bcrec.10.2.7171.210-220
3D Object Visual Tracking for the 220 kV/330 kV High-Voltage Live-Line Insulator Cleaning Robot
Institute of Scientific and Technical Information of China (English)
ZHANG Jian; YANG Ru-qing
2009-01-01
The 3D object visual tracking problem is studied for the robot vision system of the 220 kV/330 kV high-voltage live-line insulator cleaning robot. SUSAN Edge based Scale Invariant Feature (SESIF) based 3D object visual tracking is achieved in three stages: the first-frame stage, the tracking stage, and the recovering stage. An SESIF-based object recognition algorithm is proposed to find the initial location at both the first-frame stage and the recovering stage. An SESIF and Lie group based visual tracking algorithm is used to track the 3D object. Experiments verify the algorithm's robustness. This algorithm will be used in the second generation of the 220 kV/330 kV high-voltage live-line insulator cleaning robot.
Zhou, Mingxing; Liu, Jing
2017-02-01
Designing robust networks has attracted increasing attention in recent years. Most existing work focuses on improving the robustness of networks against a specific type of attack. However, networks which are robust against one type of attack may not be robust against another. In real-world situations, different types of attacks may happen simultaneously. Therefore, we use Pearson's correlation coefficient to analyze the correlation between different types of attacks, model the robustness measures against attack types which are negatively correlated as objectives, and formulate the problem of optimizing the robustness of networks against multiple malicious attacks as a multiobjective optimization problem. Furthermore, to effectively solve this problem, we propose a two-phase multiobjective evolutionary algorithm, labeled MOEA-RSFMMA. In MOEA-RSFMMA, a single-objective sampling phase is first used to generate a good initial population for the later two-objective optimization phase. Such a two-phase optimization pattern balances the computational cost of the two objectives and improves search efficiency. In the experiments, both synthetic scale-free networks and real-world networks are used to validate the performance of MOEA-RSFMMA. Moreover, both local and global characteristics of networks in different parts of the obtained Pareto fronts are studied. The results show that the networks in different parts of the Pareto fronts reflect different properties, and provide various choices for decision makers.
Directory of Open Access Journals (Sweden)
Tarek H. M. Abou-El-Enien
2015-04-01
Full Text Available This paper extends the TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) method to solving two-level large-scale linear multiobjective optimization problems with stochastic parameters in the right-hand side of the constraints (TL-LSLMOP-SPrhs) of block angular structure. In order to obtain a compromise (satisfactory) solution to the TL-LSLMOP-SPrhs of block angular structure using the proposed TOPSIS method, modified formulas for the distance function from the positive ideal solution (PIS) and the distance function from the negative ideal solution (NIS) are proposed and modeled to include all the objective functions of the two levels. In each level, the dp-metric is used as the measure of "closeness", and a k-dimensional objective space is reduced to a two-dimensional objective space by a first-order compromise procedure. The membership functions of fuzzy set theory are used to represent the satisfaction level for both criteria. A single-objective programming problem is obtained by using the max-min operator for the second-order compromise operation. A decomposition algorithm for generating a compromise (satisfactory) solution through the TOPSIS approach is provided, in which the first-level decision maker (FLDM) is asked to specify the relative importance of the objectives. Finally, an illustrative numerical example is given to clarify the main results developed in the paper.
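The core TOPSIS ranking step (single-level, deterministic, p = 2 metric) can be sketched as follows; the two-level stochastic machinery of the paper is not reproduced, and the vector-normalization choice is an illustrative assumption:

```python
import numpy as np

# Hedged sketch of plain TOPSIS: normalize, weight, measure distances to the
# positive and negative ideal solutions, and rank by relative closeness.
def topsis(scores, weights):
    """scores: (alternatives x criteria), larger = better for every criterion."""
    norm = scores / np.linalg.norm(scores, axis=0)   # vector-normalize columns
    v = norm * weights
    pis, nis = v.max(axis=0), v.min(axis=0)          # positive/negative ideals
    d_pos = np.linalg.norm(v - pis, axis=1)          # distance to PIS (p = 2)
    d_neg = np.linalg.norm(v - nis, axis=1)          # distance to NIS
    return d_neg / (d_pos + d_neg)                   # closeness in [0, 1]
```

An alternative that dominates on every criterion gets closeness 1, the one dominated on every criterion gets 0; the paper replaces this single ranking by the two-level compromise procedure described above.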
Anderson, Eric C
2012-11-08
Advances in genotyping that allow tens of thousands of individuals to be genotyped at a moderate number of single nucleotide polymorphisms (SNPs) permit parentage inference to be pursued on a very large scale. The intergenerational tagging this capacity allows is revolutionizing the management of cultured organisms (cows, salmon, etc.) and is poised to do the same for scientific studies of natural populations. Currently, however, there are no likelihood-based methods of parentage inference which are implemented in a manner that allows them to quickly handle a very large number of potential parents or parent pairs. Here we introduce an efficient likelihood-based method applicable to the specialized case of cultured organisms in which both parents can be reliably sampled. We develop a Markov chain representation for the cumulative number of Mendelian incompatibilities between an offspring and its putative parents and we exploit it to develop a fast algorithm for simulation-based estimates of statistical confidence in SNP-based assignments of offspring to pairs of parents. The method is implemented in the freely available software SNPPIT. We describe the method in detail, then assess its performance in a large simulation study using known allele frequencies at 96 SNPs from ten hatchery salmon populations. The simulations verify that the method is fast and accurate and that 96 well-chosen SNPs can provide sufficient power to identify the correct pair of parents from amongst millions of candidate pairs.
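The basic quantity the Markov-chain machinery is built on, the count of Mendelian incompatibilities in a trio of biallelic SNP genotypes, can be sketched as follows; the 0/1/2 genotype coding and the missing-data convention are illustrative assumptions, and SNPPIT's simulation-based confidence estimation is not reproduced:

```python
# Hedged sketch: count loci at which an offspring genotype is impossible
# given a putative parent pair. Genotypes are coded as 0/1/2 copies of the
# reference allele; -1 marks missing data.
def mendelian_incompatibilities(offspring, father, mother):
    def gametes(g):  # possible single-allele contributions of one parent
        return {0: {0}, 1: {0, 1}, 2: {1}}[g]
    count = 0
    for o, f, m in zip(offspring, father, mother):
        if -1 in (o, f, m):
            continue  # skip loci with missing data
        possible = {a + b for a in gametes(f) for b in gametes(m)}
        if o not in possible:  # offspring cannot receive one allele from each
            count += 1
    return count
```

For example, an offspring homozygous for the alternate allele (0) is incompatible with two parents homozygous for the reference allele (2, 2) at that locus.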
Semisupervised Community Detection by Voltage Drops
Directory of Open Access Journals (Sweden)
Min Ji
2016-01-01
Full Text Available Many applications show that semisupervised community detection is an important topic that has attracted considerable attention in the study of complex networks. In this paper, based on the notion of voltage drops and discrete potential theory, a simple and fast semisupervised community detection algorithm is proposed. The label propagation through discrete potential transmission is accomplished by using voltage drops. The complexity of the proposed algorithm is O(V+E) for a sparse network with V vertices and E edges. The obtained voltage value of a vertex clearly reflects the relationship between the vertex and the community. The experimental results on four real networks and three benchmarks indicate that the proposed algorithm is effective and flexible. Furthermore, this algorithm is easily applied to graph-based machine learning methods.
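The voltage-drop idea for two communities can be sketched as follows: clamp one labeled node per community at potential 1 and 0, treat edges as unit resistors, and solve the discrete Laplace equation for the free nodes. The paper's O(V+E) propagation is replaced here by a direct dense solve for clarity, so this is an illustrative sketch rather than the published algorithm:

```python
import numpy as np

# Hedged sketch: semisupervised two-community split via electrical voltages.
def voltages(adj, source, sink):
    n = len(adj)
    L = np.diag(adj.sum(axis=1)) - adj            # graph Laplacian
    v = np.zeros(n)
    v[source] = 1.0                               # boundary conditions: 1 and 0
    free = [i for i in range(n) if i not in (source, sink)]
    # Kirchhoff: L[free, free] @ v[free] = -L[free, fixed] @ v[fixed]
    b = -(L[np.ix_(free, [source, sink])] @ np.array([1.0, 0.0]))
    v[free] = np.linalg.solve(L[np.ix_(free, free)], b)
    return v  # threshold (e.g., at 0.5) to assign unlabeled vertices
```

On a path graph 0-1-2 with the endpoints labeled, the middle vertex sits at exactly half the potential, reflecting its equal affinity to both communities.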
Medelius, Pedro J. (Inventor); Simpson, Howard J. (Inventor)
2002-01-01
A voltage transient recorder can detect lightning induced transient voltages. The recorder detects a lightning induced transient voltage and adjusts input amplifiers to accurately record transient voltage magnitudes. The recorder stores voltage data from numerous monitored channels, or devices. The data is time stamped and can be output in real time, or stored for later retrieval. The transient recorder, in one embodiment, includes an analog-to-digital converter and a voltage threshold detector. When an input voltage exceeds a pre-determined voltage threshold, the recorder stores the incoming voltage magnitude and time of arrival. The recorder also determines if its input amplifier circuits clip the incoming signal or if the incoming signal is too low. If the input data is clipped or too low, the recorder adjusts the gain of the amplifier circuits to accurately acquire subsequent components of the lightning induced transients.
Nocita, M.; Stevens, A.; Toth, G.; van Wesemael, B.; Montanarella, L.
2012-12-01
under grassland, with a root mean square error (RMSE) of 3.6 and 7.2 g C kg-1 respectively, while predictions for mineral soils under woodland and for organic soils were less accurate (RMSE of 11.9 and 51.1 g C kg-1). The RMSE was lower (except for organic soils) when sand content was used as a covariate in the selection of the l-PLS predicting neighbours. The obtained results proved that: (i) despite the enormous spatial variability of European soils, the modified l-PLS algorithm was able to produce stable calibrations and accurate predictions; (ii) it is essential to invest in spectral libraries built according to sampling strategies based on soil types, and in a standardized laboratory protocol; (iii) Vis-NIR DRS spectroscopy is a powerful and cost-effective tool to predict SOC content at regional/continental scales, and should be converted from a pure research discipline into a reference operational method, decreasing the uncertainties of SOC monitoring and terrestrial-ecosystem carbon fluxes at all scales.
Institute of Scientific and Technical Information of China (English)
葛虎; 毕锐; 徐志成; 丁明; 任轲轲
2014-01-01
To satisfy the reactive power and voltage control requirements of grid-connected photovoltaic power stations, taking qualified voltage and power factor at the point of common coupling (PCC) as the optimal control objective, control strategies based on the nine-zone diagram are proposed for PQ-type and PV-type photovoltaic power stations. Equivalent models of PQ-type and PV-type photovoltaic power stations are built, and flowcharts of the reactive power and voltage control strategies are presented. Based on typical dynamic changes of photovoltaic power station output and load, a 110 kV system with a large-scale photovoltaic power station is simulated for reactive power and voltage control, and the results show that the proposed strategies are effective and practical. This work is supported by the National High-tech R & D Program of China (863 Program) (No. 2011AA05A107).
An Improved Algorithm for Harris Multi-Scale Corner Detection
Institute of Scientific and Technical Information of China (English)
温文雅
2012-01-01
Image feature point extraction is an important step in image feature matching. To address the Harris corner algorithm's sensitivity to scale changes and its manually specified threshold, the scale-space idea and an adaptive threshold method are combined, and an improved multi-scale Harris corner detection method is proposed. Experimental results show that the corners extracted by the algorithm are not only highly accurate but also include fewer pseudo-corners.
Shunt PWM advanced var compensators based on voltage source inverters for Facts applications
Energy Technology Data Exchange (ETDEWEB)
Barbosa, Pedro G.; Misaka, Isamu; Watanabe, Edson H. [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia
1994-12-31
Increased attention has been given to improving power system operation. This paper presents the modeling, analysis and design of reactive shunt power compensators based on PWM voltage source inverters (Pulse Width Modulation - Voltage Source Inverters). The control algorithm is based on new concepts of instantaneous active and reactive power theory. The objective is to show that with a small capacitor on the DC side of a 3-phase PWM-VSI it is possible to synthesize a variable reactive (capacitive or inductive) device. Design procedures and experimental results are presented. The feasibility of this method was verified by digital simulations and measurements on a small-scale model. (author) 9 refs., 12 figs.
Experimental validation of a high voltage pulse measurement method.
Energy Technology Data Exchange (ETDEWEB)
Cular, Stefan; Patel, Nishant Bhupendra; Branch, Darren W.
2013-09-01
This report describes the utilization of X-cut lithium niobate (LiNbO3) for voltage sensing by monitoring changes in acoustic wave propagation through LiNbO3 resulting from an applied voltage. Direct current (DC), alternating current (AC) and pulsed voltage signals were applied to the crystal. The voltage-induced shift in acoustic wave propagation time scaled quadratically for DC and AC voltages and linearly for pulsed voltages. The measured values ranged from 10 to 273 ps and from 189 ps to 2 ns for DC and non-DC voltages, respectively. The data suggest LiNbO3 has a frequency-sensitive response to voltage. If voltage-source error is eliminated from the uncertainty budget through physical modeling, the sensor's U95 estimated combined uncertainty could decrease to ~0.025% for DC, AC, and pulsed voltage measurements.
Chern, J.; Tao, W.; Mohr, K. I.; Matsui, T.; Lang, S. E.
2013-12-01
With the recent rapid advancement in computational technology, the multi-scale modeling framework (MMF), which replaces conventional cloud parameterizations with a cloud-resolving model (CRM) in each grid column of a GCM, has been developed and improved at NASA Goddard. The Goddard MMF is based on the coupling of the Goddard Cumulus Ensemble (GCE), a CRM, and the Goddard GEOS global model. In recent years, a few new and improved microphysical schemes have been developed and implemented in the GCE based on observations from field campaigns. These schemes have been incorporated into the MMF. The MMF has global coverage and can provide detailed cloud properties such as cloud amount, hydrometeor types, and vertical profiles of water content at the high spatial and temporal resolution of a cloud-resolving model. When coupled with the Goddard Satellite Data Simulation Unit (GSDSU), a system with multi-satellite, multi-sensor, and multi-spectrum satellite simulators, the MMF system can provide radiances and backscattering similar to what satellites directly observe. In this study, a one-year (2007) MMF simulation has been performed with the new 4-ice (cloud ice, snow, graupel and hail) microphysical scheme. The GEOS global model is run at 2° x 2.5° resolution and the embedded two-dimensional GCEs each have 64 columns at 4 km horizontal resolution. The large-scale forcing from the GCM is nudged to the EC-Interim analysis to reduce the influence of MMF model biases on the cloud-resolving model results. The simulation provides more than 300 million vertical cloud profiles across different seasons, geographic locations, and climate regimes. This cloud dataset is used to supplement observations over data-sparse areas to support GPM algorithm development. The model-simulated mean and variability of surface rainfall and snowfall, cloud and precipitation types, cloud properties, radiances and backscattering are evaluated against satellite observations. We will assess the strengths
Automatic voltage imbalance detector
Bobbett, Ronald E.; McCormick, J. Byron; Kerwin, William J.
1984-01-01
A device for indicating and preventing damage to voltage cells such as galvanic cells and fuel cells connected in series by detecting sequential voltages and comparing these voltages to adjacent voltage cells. The device is implemented by using operational amplifiers and switching circuitry is provided by transistors. The device can be utilized in battery powered electric vehicles to prevent galvanic cell damage and also in series connected fuel cells to prevent fuel cell damage.
Measurement of a power system nominal voltage, frequency and voltage flicker parameters
Energy Technology Data Exchange (ETDEWEB)
Alkandari, A.M. [College of Technological Studies, Electrical Engineering Technology Department, Shwiekh (Kuwait); Soliman, S.A. [Electrical Power and Machines Department, Misr University for Science and Technology, Cairo (Egypt)
2009-09-15
We present, in this paper, an approach for identifying the frequency and amplitude of a voltage flicker signal that is imposed on the nominal voltage signal, as well as the amplitude and frequency of the nominal signal itself. The proposed algorithm performs the estimation in two steps. In the first step, the original voltage signal is shifted forward and backward by an integer number of samples (one sample in this paper). The new signals generated by this shift, together with the original one, are used to estimate the amplitude of the original signal, which is composed of the nominal voltage and the flicker voltage. The average of this amplitude gives the amplitude of the nominal voltage; this amplitude is subtracted from the identified signal amplitude to obtain the samples of the flicker voltage. In the second step, the argument of the signal is calculated by dividing each signal sample by the amplitude estimated in the first step. Taking the arccosine of the argument, the frequency of the nominal signal as well as the phase angle can be computed using the least error squares estimation algorithm. Simulation examples are given within the text to show the features of the proposed approach. (author)
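The first (amplitude) step can be sketched from the one-sample shift identity: for x[k] = A*cos(w*k + phi), x[k+1] - x[k-1] = -2*A*sin(w*k + phi)*sin(w), so A^2 = x[k]^2 + (x[k+1] - x[k-1])^2 / (4*sin(w)^2). The known nominal angular frequency w (in radians per sample) and the helper name are assumptions for illustration; the nominal/flicker separation and the second (phase/frequency) step are not reproduced:

```python
import math

# Hedged sketch: per-sample amplitude estimate from forward/backward shifts.
def sample_amplitudes(x, w):
    s2 = 4.0 * math.sin(w) ** 2
    # combine cos^2 and sin^2 contributions to recover A at each interior sample
    return [math.sqrt(x[k] ** 2 + (x[k + 1] - x[k - 1]) ** 2 / s2)
            for k in range(1, len(x) - 1)]
```

For a pure sinusoid the per-sample estimates are constant; under flicker they trace the slowly varying envelope, whose mean gives the nominal amplitude.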
Voltage stability and controllability indices for multimachine power systems
Energy Technology Data Exchange (ETDEWEB)
Vournas, C.D. [National Technical Univ., Athens (Greece). Electrical Energy Systems Lab.
1995-08-01
This paper presents a voltage stability index (VSI) and a voltage controllability index (VCI), related to the eigenvalues of mxm matrices in a multimachine power system made up of m synchronous generators and a number of infinite buses. System loads can have an arbitrary voltage sensitivity described by generalized voltage exponents. These nonlinear loads are linearized around an operating point and incorporated into a modified admittance matrix, which is subsequently reduced to the generator terminals using an efficient algorithm and sparsity techniques. The indices proposed are tested in a practical system and it is demonstrated that they provide a timely warning during a sequence of events leading to voltage collapse.
Institute of Scientific and Technical Information of China (English)
李阳洋; 方覃绍阳; 李子君
2016-01-01
Combining reactive power optimization of the power distribution network with voltage control on the inverter and based on improved cuckoo search (CS)algorithm,this paper proposes a reactive power voltage control method for the power distribution network with photovoltaic power generation (PV)which is able to reduce network loss and improve electric en-ergy quality. It also analyzes influence of PV grid-connection on voltage of the power distribution network. Considering re-active power output capacity of the PV inverter and on the basis of reactive power optimization with the goal of minimum network loss,the optimal distribution of node voltage is seek for implementing voltage control on the PV inverter. Inspired by group information sharing and individual experience of particle swarm algorithm,the basic CS algorithm is improved for strengthening global convergence capacity. Taking the modified IEEE 33 node system for an example and making simulation on PSCAD,it draws a conclusion that this reactive power voltage control method can reduce network loss and improve elec-tric energy quality under the condition of no increase of extra investment.%将配电网无功优化与逆变器的电压控制相结合,基于改进的布谷鸟搜索(cuckoo search,CS)算法,提出了既能降低网损又能提高电能质量的含光伏发电(photovoltaic power generation,PV)配电网无功电压控制方法。分析了 PV并网对配电网电压的影响；考虑 PV 逆变器的无功输出能力,基于以网损最小为目标的无功优化,求取节点电压最优分布,对PV逆变器实施电压控制；受粒子群算法群体信息共享和个体经验总结思想的启发,改进了基本CS算法,使全局收敛能力更强。以修改的 IEEE 33节点系统为算例,并在 PSCAD 上仿真,结果表明,在不增加额外投资条件下,该无功电压控制方法降低了网损,提高了电能质量。
Cai, Li
2015-06-01
Lord and Wingersky's (Appl Psychol Meas 8:453-461, 1984) recursive algorithm for creating summed score based likelihoods and posteriors has a proven track record in unidimensional item response theory (IRT) applications. Extending the recursive algorithm to handle multidimensionality is relatively simple, especially with fixed quadrature because the recursions can be defined on a grid formed by direct products of quadrature points. However, the increase in computational burden remains exponential in the number of dimensions, making the implementation of the recursive algorithm cumbersome for truly high-dimensional models. In this paper, a dimension reduction method that is specific to the Lord-Wingersky recursions is developed. This method can take advantage of the restrictions implied by hierarchical item factor models, e.g., the bifactor model, the testlet model, or the two-tier model, such that a version of the Lord-Wingersky recursive algorithm can operate on a dramatically reduced set of quadrature points. For instance, in a bifactor model, the dimension of integration is always equal to 2, regardless of the number of factors. The new algorithm not only provides an effective mechanism to produce summed score to IRT scaled score translation tables properly adjusted for residual dependence, but leads to new applications in test scoring, linking, and model fit checking as well. Simulated and empirical examples are used to illustrate the new applications.
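The unidimensional recursion at a single quadrature point can be sketched as follows; the 2PL item model and the parameter values are illustrative assumptions, and the dimension-reduction machinery of the paper is not reproduced:

```python
import math

# Hedged sketch of the Lord-Wingersky recursion: build the conditional
# distribution of the summed score one dichotomous item at a time.
def item_prob(theta, a, b):
    # 2PL correct-response probability (illustrative item model)
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def summed_score_dist(theta, items):
    dist = [1.0]                      # P(score = 0) before any item
    for a, b in items:
        p = item_prob(theta, a, b)
        new = [0.0] * (len(dist) + 1)
        for s, mass in enumerate(dist):
            new[s] += mass * (1 - p)  # item answered incorrectly
            new[s + 1] += mass * p    # item answered correctly
        dist = new
    return dist                       # dist[s] = P(summed score = s | theta)
```

Repeating this at each quadrature point and weighting by the prior yields summed-score likelihoods and posteriors; the paper's contribution is keeping the grid tractable when the latent space is multidimensional.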
Directory of Open Access Journals (Sweden)
Guillermo Cabrera G.
2012-01-01
Full Text Available We present a hybridization of two different approaches applied to the well-known Capacitated Facility Location Problem (CFLP). The artificial bee algorithm (BA) is used to select a promising subset of locations (warehouses) which are solely included in the Mixed Integer Programming (MIP) model. Next, the algorithm solves the subproblem by considering the entire set of customers. The hybrid implementation allows us to bypass certain inherited weaknesses of each algorithm, which means that we are able to find an optimal solution in an acceptable computational time. In this paper we demonstrate that BA can be significantly improved by use of the MIP algorithm. At the same time, our hybrid implementation allows the MIP algorithm to reach the optimal solution in a considerably shorter time than is needed to solve the model using the entire dataset directly within the model. Our hybrid approach outperforms the results obtained by each technique separately. It is able to find the optimal solution in a shorter time than each technique on its own, and the results are highly competitive with the state-of-the-art in large-scale optimization. Furthermore, according to our results, combining the BA with a mathematical programming approach appears to be an interesting research area in combinatorial optimization.
Super-Resolution and De-convolution for Single/Multi Gray Scale Images Using SIFT Algorithm
Ritu Soni; Siddharth Singh Chouhan
2014-01-01
This paper presents a blind algorithm that restores blurred images for single-image and multi-image blur deconvolution and multi-image super-resolution, applied to low-resolution images degraded by additive white Gaussian noise, aliasing, and linear space-invariant blur. Image deblurring is a field of image processing in which an original, sharp image is recovered from a corrupted image. The proposed method is based on an alternating minimization algorithm with respect to unidentifie...
Institute of Scientific and Technical Information of China (English)
胡鹏飞; 林志勇; 周月宾; 江道灼; 梁一桥
2014-01-01
In high-voltage and large-power applications, the number of sub-modules in large-scale modular multilevel converters (MMCs) is huge, which makes the control system hardware design complicated and coordinated control difficult. In order to solve this problem, a control system consisting of one central control unit and several arm control units is proposed. Each arm control unit, in turn, consists of several valve-group control units, each of which just controls a few sub-modules. With the tasks of the central control unit and arm control units effectively reduced, the distributed control system is easily implemented and extended. In view of the proposed distributed control system with multiple control units, a new voltage-balancing method is divided into two parts, i.e. the inter-valve-group voltage balance and the intra-valve-group voltage balance. Finally, a three-phase 41-level MMC prototype is designed to verify the proposed control system and the voltage-balancing method. It is shown by experimental results that the proposed control system is feasible and the proposed voltage-balancing method is valid.
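Within a valve group, capacitor voltage balancing is often realized with a sorting rule; the sketch below shows that generic technique under that assumption, not the paper's specific two-layer inter/intra-group strategy:

```python
# Hedged sketch: sorting-based sub-module selection inside one valve group.
# Given the number n_on of sub-modules to insert and the arm current direction,
# insert the capacitors that most need charging (positive arm current charges
# inserted capacitors) or discharging.
def select_submodules(cap_voltages, n_on, arm_current):
    order = sorted(range(len(cap_voltages)), key=lambda i: cap_voltages[i])
    if arm_current > 0:
        return sorted(order[:n_on])   # insert the lowest-voltage sub-modules
    return sorted(order[-n_on:])      # else insert the highest-voltage ones
```

Repeating this selection every switching period keeps the capacitor voltages clustered around their common reference.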
Experimental evaluation of envelope tracking techniques for voltage disturbances
Energy Technology Data Exchange (ETDEWEB)
Marei, Mostafa I. [Electrical Power and Machines Dept., Faculty of Engineering, Ain Shams University, 1 El-Sarayat St., Abbasia, 11517 Cairo (Egypt); El-Saadany, Ehab F.; Salama, Magdy M.A. [Electrical and Computer Engineering Dept., University of Waterloo, 200 University Avenue West, Waterloo, ON (Canada)
2010-03-15
In this paper a digital signal processor (DSP) based real-time voltage envelope tracking system is developed and examined. The ADAptive LINEar neuron (ADALINE) and Recursive Least Squares (RLS) algorithms are adopted for envelope tracking. The proposed ADALINE and RLS algorithms give accurate results even under rapid dynamic changes. The paper investigates the effects of different parameters on the performance of the ADALINE algorithm and that of the RLS algorithm. The experimental system is centered on a Texas Instruments 16-bit fixed-point arithmetic (TMS320LF2407A) evaluation board. Both the ADALINE and the RLS tracking algorithms are developed in DSP assembly language. A simple voltage flicker generator is implemented to produce various voltage disturbances. Extensive tests of the proposed envelope tracking algorithms are conducted to evaluate their dynamic performance. (author)
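A minimal ADALINE envelope tracker of the kind described can be sketched as follows; the in-phase/quadrature regressors at the nominal frequency and the learning rate are illustrative assumptions, not the paper's tuned implementation:

```python
import math

# Hedged sketch: two-weight ADALINE with the Widrow-Hoff LMS rule; the
# envelope estimate is the norm of the weight vector.
def adaline_envelope(samples, w0, mu=0.05):
    W = [0.0, 0.0]
    env = []
    for k, x in enumerate(samples):
        u = (math.cos(w0 * k), math.sin(w0 * k))   # regressors at nominal freq
        e = x - (W[0] * u[0] + W[1] * u[1])        # prediction error
        W[0] += 2 * mu * e * u[0]                  # LMS weight update
        W[1] += 2 * mu * e * u[1]
        env.append(math.hypot(W[0], W[1]))         # tracked amplitude
    return env
```

Because the signal lies in the span of the two regressors, the weights converge and the tracked amplitude settles at the true envelope; the learning rate trades tracking speed against noise sensitivity.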
Institute of Scientific and Technical Information of China (English)
于心宇; 魏应冬; 姜齐荣
2014-01-01
The traditional algorithm for two-level space vector pulse width modulation (SVPWM) is complicated and computationally intensive. In this paper, starting from the volt-second balance equation of the traditional algorithm, the linear equations for the pulse times of the switching devices in the three-phase upper arms are set up and solved, yielding a simplified two-level SVPWM algorithm that needs no coordinate transformation or sector judgment. The algorithm is simple, reduces computation considerably, and remains equivalent to the traditional algorithm even when the reference voltage contains asymmetric components. Based on the simplified algorithm, a voltage harmonic analysis method for synchronous SVPWM is proposed, which gives generic expressions for the voltage harmonic spectrum under different zero-vector distributions and qualitatively explains the spectral distribution characteristics of the voltage harmonics. The accuracy of the simplified algorithm and the harmonic analysis method for two-level SVPWM is verified by simulation results based on PSCAD/EMTDC.
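One well-known sector-free formulation of two-level SVPWM, in the same spirit as the simplified algorithm described above (though not necessarily the paper's exact equations), computes the three upper-switch duty ratios directly from the phase references by min/max zero-sequence injection:

```python
def svpwm_duties(va, vb, vc, vdc):
    """Sector-free SVPWM duty ratios via min/max common-mode injection.

    Equivalent to conventional SVPWM with symmetrically split zero vectors;
    no coordinate transform or sector lookup is needed.
    """
    # Zero-sequence (common-mode) component centers the references in the bus
    vcm = -0.5 * (max(va, vb, vc) + min(va, vb, vc))
    duties = [0.5 + (v + vcm) / vdc for v in (va, vb, vc)]
    if not all(0.0 <= d <= 1.0 for d in duties):
        raise ValueError("reference voltage exceeds the linear modulation range")
    return duties
```

By construction the line-to-line volt-seconds match the references, e.g. `(d_a - d_b) * vdc == va - vb`, which is the volt-second balance the simplified algorithm starts from.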
Improved Phasor Estimation Method for Dynamic Voltage Restorer Applications
DEFF Research Database (Denmark)
Ebrahimzadeh, Esmaeil; Farhangi, Shahrokh; Iman-Eini, Hossein;
2015-01-01
The dynamic voltage restorer (DVR) is a series compensator for distribution system applications, which protects sensitive loads against voltage sags by fast voltage injection. The DVR must estimate the magnitude and phase of the measured voltages to achieve the desired performance. This paper...... proposes a phasor parameter estimation algorithm based on a recursive variable and fixed data window least error squares (LES) method for the DVR control system. The proposed algorithm, in addition to decreasing the computational burden, improves the frequency response of the control scheme based...... on the fixed data window LES method. The DVR control system based on the proposed algorithm provides a better compromise between the estimation speed and accuracy of the voltage and current signals and can be implemented using a simple and low-cost processor. The results of the studies indicate...
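As background for the abstract above, a minimal fixed-window least-error-squares phasor estimate (a sketch only; the paper's recursive variable- and fixed-window version is more elaborate) fits a sine/cosine pair to the samples by solving the normal equations and reads off magnitude and phase:

```python
import cmath
import math

def les_phasor(window, f0, fs):
    """Least-error-squares phasor estimate over a fixed data window.

    Fits v[n] ~= a*cos(w n) + b*sin(w n), then returns (magnitude, phase)
    of the fundamental phasor, with v[n] = Re{(a - jb) e^{jwn}}.
    """
    w = 2 * math.pi * f0 / fs
    # Accumulate normal-equation sums for the 2-parameter least-squares fit
    scc = sss = ssc = sc = ss = 0.0
    for n, v in enumerate(window):
        c, s = math.cos(w * n), math.sin(w * n)
        scc += c * c; sss += s * s; ssc += s * c
        sc += v * c; ss += v * s
    det = scc * sss - ssc * ssc
    a = (sss * sc - ssc * ss) / det
    b = (scc * ss - ssc * sc) / det
    return math.hypot(a, b), cmath.phase(complex(a, -b))
```

With a window spanning an integer number of cycles the fit is exact for a clean sinusoid; shorter windows estimate faster at the cost of accuracy, which is the speed/accuracy compromise the abstract refers to.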
Institute of Scientific and Technical Information of China (English)
颜伟; 高强; 余娟; 杜跃明
2011-01-01
According to the principles of hierarchical and partitioned balance and local compensation of reactive power, as well as the principle of contrary regulation of voltage, a method for voltage and reactive power regulation in transmission networks is proposed. Firstly, concepts such as the partition in the same hierarchy and its load factor, reactive power regulation ability, and reactive power unbalance degree are defined and used to determine ideal targets for reactive power balance and contrary voltage regulation and to evaluate the reactive power balance level. On this basis, a hierarchical and partitioned regulation strategy for voltage and reactive power is put forward, consisting of three stages: global voltage and reactive power regulation, hierarchical and partitioned regulation of voltage and reactive power, and local regulation of voltage and reactive power at terminal substations. The power flow features of each stage are analyzed, and regulation rules for each stage are drafted with the aims of keeping the voltage of the whole network within the guidelines and implementing hierarchical and partitioned reactive power balance as far as possible. The availability of the proposed algorithm is verified by the results of a calculation example.
Teng, Yun; Li, Lee; Liu, Yun-Long; Liu, Lun; Liu, Minghai
2014-10-01
This paper introduces a method to generate large-scale diffuse plasmas by using a repetition nanosecond pulse generator and a parallel array wire-electrode configuration. We investigated barrier-free diffuse plasmas produced in the open air in parallel and cross-parallel array line-line electrode configurations. We found that, when the distance between the wire-electrode pair is small, the discharges were almost extinguished. Also, glow-like diffuse plasmas with little discharge weakening were obtained in an appropriate range of line-line distances and with a cathode-grounding cross-electrode configuration. As an example, we produced a large-scale, stable diffuse plasma with volumes as large as 18 × 15 × 15 cm3, and this discharge region can be further expanded. Additionally, using optical and electrical measurements, we showed that the electron temperature was higher than the gas temperature, which was almost the same as room temperature. Also, an array of electrode configuration with more wire electrodes had helped to prevent the transition from diffuse discharge to arc discharge. Comparing the current waveforms of configurations with 1 cell and 9 cells, we found that adding cells significantly increased the conduction current and the electrical energy delivered in the electrode gaps.
Technological Aspects: High Voltage
Faircloth, D C
2013-01-01
This paper covers the theory and technological aspects of high-voltage design for ion sources. Electric field strengths are critical to understanding high-voltage breakdown. The equations governing electric fields and the techniques to solve them are discussed. The fundamental physics of high-voltage breakdown and electrical discharges are outlined. Different types of electrical discharges are catalogued and their behaviour in environments ranging from air to vacuum are detailed. The importance of surfaces is discussed. The principles of designing electrodes and insulators are introduced. The use of high-voltage platforms and their relation to system design are discussed. The use of commercially available high-voltage technology such as connectors, feedthroughs and cables are considered. Different power supply technologies and their procurement are briefly outlined. High-voltage safety, electric shocks and system design rules are covered.
A Direct Voltage Unbalance Compensation Strategy for Islanded Microgrids
DEFF Research Database (Denmark)
Zhao, Xin; Wu, Xiaohua; Meng, Lexuan
2015-01-01
In this paper, a control strategy with low bandwidth communications for paralleled three-phase inverters is proposed to achieve satisfactory voltage unbalance compensation. The proposed control algorithm mainly consists of voltage/current inner loop controllers, a droop controller, a selective......-leg inverters was tested in order to validate the proposed control strategy....
Energy Technology Data Exchange (ETDEWEB)
Jamali, B.; Piercy, R.; Dick, P. [Kinetrics Inc., Toronto, ON (Canada). Transmission and Distribution Technologies
2008-04-09
This report discussed issues related to farm stray voltage and evaluated mitigation strategies and costs for limiting voltage to farms. A 3-phase, 3-wire system with no neutral ground was used throughout North America before the 1930s. Transformers were connected phase to phase without any electrical connection between the primary and secondary sides of the transformers. Distribution voltage levels were then increased and multi-grounded neutral wires were added. The earth now forms a parallel return path for the neutral current that allows part of the neutral current to flow continuously through the earth. This arrangement is responsible for causing stray voltage. Stray voltage causes uneven milk production and increased incidence of mastitis, and can create a reluctance among cows to drink water. Off-farm sources of stray voltage include phase unbalance, an undersized neutral wire, and high-resistance splices on the neutral wire. Mitigation strategies for reducing stray voltage include phase balancing, conversion from single-phase to 3-phase, increasing distribution voltage levels, and changing pole configurations. 22 refs., 5 tabs., 13 figs.
Rizk, Farouk AM
2014-01-01
Inspired by a new revival of worldwide interest in extra-high-voltage (EHV) and ultra-high-voltage (UHV) transmission, High Voltage Engineering merges the latest research with the extensive experience of the best in the field to deliver a comprehensive treatment of electrical insulation systems for the next generation of utility engineers and electric power professionals. The book offers extensive coverage of the physical basis of high-voltage engineering, from insulation stress and strength to lightning attachment and protection and beyond. Presenting information critical to the design, selec
Kind, Dieter
2001-01-01
The second edition of High Voltage Test Techniques has been completely revised. The present revision takes into account the latest international developments in High Voltage and Measurement technology, making it an essential reference for engineers in the testing field.High Voltage Technology belongs to the traditional area of Electrical Engineering. However, this is not to say that the area has stood still. New insulating materials, computing methods and voltage levels repeatedly pose new problems or open up methods of solution; electromagnetic compatibility (EMC) or components and systems al
Mustafa, Ghullam; Bak-Jensen, Birgitte; Mahat, Pukar; Cecati, Carlo
2013-01-01
Any problem with voltage in a power network is undesirable, as it aggravates the quality of the power. Power electronic devices such as the Voltage Source Converter (VSC) based Static Synchronous Compensator (STATCOM) can be used to mitigate voltage problems in the distribution system. The voltage problems dealt with in this paper show how to mitigate unbalanced voltage sags and voltage unbalance in the CIGRE Low Voltage (LV) test network and networks like it. The voltage unbala...
Institute of Scientific and Technical Information of China (English)
Min YUAN; Bing-xin YANG; Yi-de MA‡; Jiu-wen ZHANG; Fu-xiang LU; Tong-feng ZHANG
2015-01-01
Recently, dictionary learning (DL) based methods have been introduced to compressed sensing magnetic resonance imaging (CS-MRI), outperforming pre-defined analytic sparse priors. However, a single-scale dictionary trained directly from image patches is incapable of representing image features from a multi-scale, multi-directional perspective, which limits reconstruction performance. In this paper, incorporating the superior multi-scale properties of the uniform discrete curvelet transform (UDCT) with the data-matching adaptability of trained dictionaries, we propose a flexible sparsity framework that allows sparser representation and captures prominent hierarchical essential features of magnetic resonance (MR) images. Multi-scale decomposition is implemented using UDCT due to its prominent properties of lower redundancy ratio, hierarchical data structure, and ease of implementation. Each sub-dictionary of the different sub-bands is trained independently to form the multi-scale dictionaries. Corresponding to this brand-new sparsity model, we modify the constraint splitting augmented Lagrangian shrinkage algorithm (C-SALSA) as patch-based C-SALSA (PB C-SALSA) to solve the constrained optimization problem of regularized image reconstruction. Experimental results demonstrate that the trained sub-dictionaries at different scales, enforcing sparsity at multiple scales, can be efficiently used for MRI reconstruction to obtain satisfactory results with a further reduced undersampling rate. Multi-scale UDCT dictionaries potentially outperform both single-scale trained dictionaries and multi-scale analytic transforms. Our proposed sparsity model achieves sparser representation of the reconstructed data, which results in fast convergence of the reconstruction exploiting PB C-SALSA. Simulation results demonstrate that the proposed method outperforms conventional CS-MRI methods in maintaining intrinsic properties, eliminating aliasing, reducing unexpected artifacts, and removing...
Small-scale and multi-population glowworm swarm optimization algorithm
Institute of Scientific and Technical Information of China (English)
祝华正; 何登旭
2011-01-01
When the basic Glowworm Swarm Optimization (GSO) algorithm is used to solve multi-extremum function problems, its convergence speed and accuracy degrade as the number of extreme points increases. Aiming at these shortcomings, this paper proposes an improved small-scale and Multi-Population Glowworm Swarm Optimization (MPGSO) algorithm. Simulations show that, compared with GSO, the improved algorithm for solving multi-modal function optimization problems not only obviously reduces the computing time but also improves the computing accuracy.
Institute of Scientific and Technical Information of China (English)
OU Xiaojuan; ZHOU Wei
2007-01-01
Global positioning system (GPS) common-view observation data were processed using a multi-scale Kalman algorithm based on a correlative structure of the discrete wavelet coefficients. Supposing that the GPS common-view observation data have a 1/f fractal characteristic, the wavelet transform was used to estimate the Hurst parameter H of the GPS clock difference data. When 0 < H < 1, the 1/f fractal component of the GPS clock difference data is a Gaussian, zero-mean, non-stationary stochastic process. Thus, the discrete wavelet coefficients can be used in estimating the multi-scale Kalman coefficients, and the discrete clock difference can then be estimated. The single-channel and multi-channel common-view observation data were processed respectively, and comparisons were made between the results obtained and the Circular T data. Simulation results show that the algorithm discussed in this paper is both feasible and effective.
Directory of Open Access Journals (Sweden)
Yuanchang Zhong
2014-01-01
The typical application background of large-scale wireless sensor networks (WSNs) for water environment monitoring in the Three Gorges Reservoir is a large coverage area and wide distribution. To maximally prolong the lifetime of a large-scale WSN, a new energy-saving routing algorithm is proposed, using the method of maximum energy-welfare optimization clustering. Firstly, temporary clusters are formed based on two main parameters: the remaining energy of the nodes and the distance between a node and the base station. Secondly, the algorithm adjusts cluster heads and optimizes the clustering according to the maximum energy-welfare of the cluster via a cluster-head shifting mechanism. Finally, in order to save node energy efficiently, cluster heads transmit data to the base station in a single-hop or multi-hop manner. Theoretical analysis and simulation results show that the proposed algorithm is feasible and advanced. It can efficiently save node energy, balance the energy dissipation of all nodes, and prolong the network lifetime.
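A toy cluster-head selection rule in the spirit of the first step described above can be sketched as follows. This is illustrative only: the paper's actual energy-welfare function is not given here, so the scoring function, weights, and function name are assumptions.

```python
import math

def pick_cluster_head(nodes, base, w_energy=0.7, w_dist=0.3):
    """Illustrative cluster-head selection for a WSN cluster.

    Favors nodes with high remaining energy and small distance to the
    base station. `nodes` is a list of (x, y, energy) tuples; `base` is (x, y).
    The linear score is a stand-in for the paper's energy-welfare measure.
    """
    def score(node):
        x, y, e = node
        d = math.hypot(x - base[0], y - base[1])   # distance to base station
        return w_energy * e - w_dist * d           # higher is better
    return max(nodes, key=score)
```

A real implementation would then re-run such a scoring step during the cluster-head shifting phase as node energies drain, which is how the algorithm balances energy dissipation across nodes.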
Capacitor Voltage Increment Control of TCSC Triggering Algorithms
Institute of Scientific and Technical Information of China (English)
周孝信; 李亚健; 武守远; 曾昭华
2001-01-01
A universal conceptual description of the vernier control of TCSC is given in this paper. Based on the mathematical model of TCSC, a new vernier control scheme for the TCSC controller, capacitor voltage increment control, is obtained, which is independent of the parameters of the transmission system and the generator-turbines. The triggering instant is chosen based on measured capacitor voltage and line current signals. By predicting an upcoming firing instant, capacitor voltage increment control can both improve the response speed of reactance control and significantly modulate the capacitor voltage under SSR conditions. Therefore, this scheme can inherently mitigate SSR to some degree.
Directory of Open Access Journals (Sweden)
Qiuyu Wang
2014-01-01
descent method at first finite number of steps and then by conjugate gradient method subsequently. Under some appropriate conditions, we show that the algorithm converges globally. Numerical experiments and comparisons by using some box-constrained problems from CUTEr library are reported. Numerical comparisons illustrate that the proposed method is promising and competitive with the well-known method—L-BFGS-B.
DEFF Research Database (Denmark)
Das, Bhagwan; Abdullah, M.F.L.; Hussain, Dil muhammed Akbar
2017-01-01
Port (7.0) need be reduced. In this paper, a power-efficient design for DisplayPort (7.0) is proposed using the LVDS IO Standard. The proposed design is tested for different frequencies: 500 MHz, 700 MHz, 1.0 GHz, and 1.6 GHz. The design is implemented in VHDL on an UltraScale FPGA. It is determined...... the VHDL-based design of DisplayPort (7.0) can be reduced by 92% using the LVDS IO Standard for all frequencies (500 MHz, 700 MHz, 1.0 GHz, and 1.6 GHz), compared to the VHDL-based design of DisplayPort (7.0) without using an IO Standard. The proposed VHDL-based design of DisplayPort (7.0) using the LVDS IO...... Standard offers no power consumption for DisplayPort (7.0) in standby mode. The VHDL-based design of DisplayPort (7.0) using the LVDS IO Standard will be helpful for processing high-resolution video at low power consumption...
Fast voltage stability assessment and reinforcement in an interconnected power system
Hsiao, Wen-Ta
1998-12-01
It is believed that voltage stability analysis will become more difficult due to the full utilization of transmission systems and the growth of inter-utility power transfer. An online voltage stability analysis system which can be incorporated into the EMS to deal with the threat of suddenly arising voltage collapses is presented. Operating margin prediction, voltage stability assessment, and reinforcement are its three major functions. Two prediction methods are proposed to calculate the operating margin according to the current operating condition and the anticipated system state. A fast risk indicator based on saddle-node bifurcation theory is designed to predict the proximity of a system to voltage collapse. A novel CPF method which can trace the power flow solution path through the nose point without notorious numerical difficulties is presented. Speed is the advantage of the former method, while accuracy is the important feature of the latter. Voltage stability assessment is required to predict steady-state conditions of a system following a large number of anticipated transmission branch or generator outages. An efficient and simple method based on voltage sensitivity changing rates is proposed to quickly identify the weak buses in a large-scale system. An effective contingency selection function relying on search algorithms built into power flow solutions is designed to filter out most of the harmless contingencies for system operators who are working with rapidly changing load/generation patterns and a wide variety of operating conditions. A contingency evaluation function able to deal with numerous real-time contingencies in a very short period of time is utilized to find high-severity contingencies. Var compensation and load shedding are the two remedial measures of the reinforcement function. A suitable var compensation scheme has three contributions: extending the operating margin to avoid voltage collapses, fully utilizing the transmission infrastructure to earn...
Improved Multi-scale Retinex Algorithm and Its Application
Institute of Scientific and Technical Information of China (English)
赵晓霞; 王汝琳
2011-01-01
In the standard multi-scale Retinex algorithm, a constant gain is applied to the Retinex output, which leads to over-enhancement in smooth regions and at high-contrast edges, where noise amplification and ringing artifacts take place, respectively. An improved Multi-Scale Retinex (MSR) algorithm is proposed that applies an adaptive, spatially varying gain: a smaller gain is applied to pixels in smooth and edge regions, while a larger gain is applied to pixels in detail regions. Meanwhile, the gain difference between pixels is large for the Retinex output associated with a small Gaussian surround space constant, and small for the output associated with a large Gaussian surround space constant. This makes image details clearer while rendering scene contours and colors more naturally. When the proposed algorithm is applied to images severely degraded by fog, experiments show that it can effectively remove fog degradation from color images.
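The core multi-scale Retinex computation (before any gain is applied) can be sketched on a 1-D intensity signal. This is a schematic simplification: box surrounds stand in for the Gaussian surrounds of the full 2-D algorithm, and the scales and offset are assumed values, not the paper's.

```python
import math

def retinex_1d(signal, scales=(3, 9, 27)):
    """Schematic multi-scale Retinex on a 1-D intensity signal.

    MSR output = average over scales of log(I) - log(surround(I)),
    where the surround is approximated here by a box blur of each scale.
    """
    def box_blur(x, radius):
        n = len(x)
        return [sum(x[max(0, i - radius):min(n, i + radius + 1)]) /
                (min(n, i + radius + 1) - max(0, i - radius)) for i in range(n)]

    out = [0.0] * len(signal)
    for s in scales:
        blurred = box_blur(signal, s)
        for i, (v, b) in enumerate(zip(signal, blurred)):
            # +1 offset avoids log(0) on dark pixels
            out[i] += (math.log(v + 1.0) - math.log(b + 1.0)) / len(scales)
    return out
```

On a uniform signal the output is zero everywhere, which is exactly why a constant post-gain over-enhances smooth regions: any small noise there is amplified relative to nothing, motivating the adaptive gain described in the abstract.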
Martin, Edward J.
2008-01-15
A voltage verification unit and method for determining the absence of potentially dangerous potentials within a power supply enclosure without Mode 2 work is disclosed. With this device and method, a qualified worker follows a relatively simple protocol that involves a function test (hot, cold, hot) of the voltage verification unit before Lock Out/Tag Out and, once the Lock Out/Tag Out is completed, testing or "trying" by simply reading a display on the voltage verification unit, without exposure of the operator to the interior of the voltage supply enclosure. According to a preferred embodiment, the voltage verification unit includes test leads to allow diagnostics with other meters, without the necessity of accessing potentially dangerous bus bars or the like.
Motamedian, Ehsan; Mohammadi, Maryam; Shojaosadati, Seyed Abbas; Heydari, Mona
2017-04-01
Integration of different biological networks and data-types has been a major challenge in systems biology. The present study introduces the transcriptional regulated flux balance analysis (TRFBA) algorithm that integrates transcriptional regulatory and metabolic models using a set of expression data for various perturbations. TRFBA considers the expression levels of genes as a new continuous variable and introduces two new linear constraints. The first constraint limits the rate of reaction(s) supported by a metabolic gene using a constant parameter (C) that converts the expression levels to the upper bounds of the reactions. Considering the concept of constraint-based modeling, the second set of constraints correlates the expression level of each target gene with that of its regulating genes. A set of constraints and binary variables was also added to prevent the second set of constraints from overlapping. TRFBA was implemented on Escherichia coli and Saccharomyces cerevisiae models to estimate growth rates under various environmental and genetic perturbations. The error sensitivity to the algorithm parameter was evaluated to find the best value of C. The results indicate a significant improvement in the quantitative prediction of growth in comparison with previously presented algorithms. The robustness of the algorithm to change in the expression data and the regulatory network was tested to evaluate the effect of noisy and incomplete data. Furthermore, the use of added constraints for perturbations without their gene expression profile demonstrates that these constraints can be applied to improve the growth prediction of FBA. TRFBA is implemented in Matlab software and requires the COBRA toolbox. Source code is freely available at http://sbme.modares.ac.ir. Contact: motamedian@modares.ac.ir. Supplementary data are available at Bioinformatics online.
Mahoney, Michael W; Carlsson, Gunnar E
2008-01-01
The 2008 Workshop on Algorithms for Modern Massive Data Sets (MMDS 2008), sponsored by the NSF, DARPA, LinkedIn, and Yahoo!, was held at Stanford University, June 25--28. The goals of MMDS 2008 were (1) to explore novel techniques for modeling and analyzing massive, high-dimensional, and nonlinearly-structured scientific and internet data sets; and (2) to bring together computer scientists, statisticians, mathematicians, and data analysis practitioners to promote cross-fertilization of ideas.
Tamascelli, D; Plenio, M B
2015-01-01
When the amount of entanglement in a quantum system is limited, the relevant dynamics of the system is restricted to a very small part of the state space. When restricted to this subspace, the description of the system becomes efficient in the system size. A class of algorithms, exemplified by the Time-Evolving Block-Decimation (TEBD) algorithm, make use of this observation by selecting the relevant subspace through a decimation technique relying on the Singular Value Decomposition (SVD). In these algorithms, the complexity of each time-evolution step is dominated by the SVD. Here we show that, by applying a randomized version of the SVD routine (RRSVD), the power law governing the computational complexity of TEBD is lowered by one degree, resulting in a considerable speed-up. We exemplify the potential gains in efficiency by means of some real-world examples to which TEBD can be successfully applied, and demonstrate that for those systems RRSVD delivers results as accurate as state-of-the-art deterministic...
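The basic randomized SVD idea behind such speed-ups (a generic range-finder sketch, not the authors' exact RRSVD routine; the oversampling parameter and seed are assumptions) projects the matrix onto a random low-dimensional subspace and then takes an exact SVD of the small projected matrix:

```python
import numpy as np

def rsvd(A, k, oversample=10, seed=0):
    """Basic randomized SVD: random range finder + small exact SVD.

    Returns the leading-k factors (U, s, Vt). Accurate when the spectrum
    of A decays quickly, i.e. when A is close to rank k.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Sample the range of A with a Gaussian test matrix
    Y = A @ rng.standard_normal((n, k + oversample))
    Q, _ = np.linalg.qr(Y)                 # orthonormal basis for the range
    B = Q.T @ A                            # small (k+p) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]
```

Because the expensive full SVD is replaced by a QR factorization plus an SVD of a much smaller matrix, the per-step cost drops by roughly one power of the matrix dimension, which is the kind of gain the abstract reports for TEBD.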
Super-resolution and de-convolution for single/multi gray-scale images using SIFT algorithm
Directory of Open Access Journals (Sweden)
Ritu Soni
2015-10-01
This paper presents a blind algorithm that restores blurred images for single-image and multi-image blur de-convolution and multi-image super-resolution of low-resolution images degraded by additive white Gaussian noise, aliasing, and a linear space-invariant blur. Image de-blurring is a field of image processing in which an original sharp image is recovered from a corrupted image. The proposed method is based on an alternating minimization algorithm with respect to the unknown blurs and the high-resolution image, and on the Huber-Markov random field (HMRF) model, used for regularization because of its ability to preserve the discontinuities of an image while exploiting the piecewise-smooth nature of the HR image. The SIFT algorithm is used for feature extraction and produces matching features based on the Euclidean distance of their feature vectors, which help in calculating the point spread function (PSF). For blur estimation, an edge-emphasizing smoothing operation is used to improve the quality of the blur estimate by enhancing strong soft edges. Blur estimation is carried out in the filter domain rather than the pixel domain, using the gradients of the HR and LR images, for better performance.
Directory of Open Access Journals (Sweden)
Duka Adrian-Vasile
2011-12-01
This paper examines the development of a genetic adaptive fuzzy control system for the inverted pendulum. The inverted pendulum is a classical problem in control engineering, used for testing different control algorithms. The goal is to balance the inverted pendulum in the upright position by controlling the horizontal force applied to its cart. Because it is unstable and has complicated nonlinear dynamics, the inverted pendulum is a good testbed for the development of nonconventional advanced control techniques. Fuzzy logic has been successfully applied to control this type of system; however, most of the time the design of the fuzzy controller is done in an ad-hoc manner, and choosing certain parameters (controller gains, membership functions) proves difficult. This paper examines the implementation of an adaptive control method based on genetic algorithms (GA), which can be used online to adapt the fuzzy controller's gains in order to stabilize the pendulum. The performances of the proposed control algorithms are evaluated and shown by means of digital simulation.
Model selection for SVM using mutative scale chaos optimization algorithm%变尺度混沌优化支持向量机模型选择
Institute of Scientific and Technical Information of China (English)
刘清坤; 阙沛文; 费春国; 宋寿鹏
2006-01-01
This paper proposes a new search strategy using a mutative scale chaos optimization algorithm (MSCO) for model selection of the support vector machine (SVM). It searches the parameter space of the SVM with very high efficiency and finds the optimum parameter setting for a practical classification problem at very low time cost. To demonstrate the performance of the proposed method, it is applied to SVM model selection in ultrasonic flaw classification and compared with grid search. Experimental results show that MSCO is a very powerful tool for SVM model selection, outperforming grid search in both search speed and precision in ultrasonic flaw classification.
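The mutative-scale chaos optimization idea can be sketched for a one-dimensional objective: a logistic-map chaotic sequence explores the search interval, which is then repeatedly shrunk around the best point found (the "mutative scale" step). This is a generic sketch under assumed round counts, shrink factor, and seed, not the paper's SVM-specific procedure:

```python
def mscoa(f, lo, hi, rounds=6, iters=200, shrink=0.5, z0=0.345):
    """Mutative-scale chaos optimization sketch for minimizing f on [lo, hi].

    The logistic map generates a chaotic carrier in (0, 1) that is mapped
    onto the current interval; after each round the interval contracts
    around the incumbent best point.
    """
    best_x, best_f = None, float("inf")
    z = z0                                  # chaotic variable in (0, 1)
    for _ in range(rounds):
        for _ in range(iters):
            z = 4.0 * z * (1.0 - z)         # logistic map, fully chaotic at r=4
            x = lo + (hi - lo) * z          # carrier mapping into search range
            fx = f(x)
            if fx < best_f:
                best_x, best_f = x, fx
        # Mutative scale: shrink the interval around the current best
        half = shrink * 0.5 * (hi - lo)
        lo, hi = max(lo, best_x - half), min(hi, best_x + half)
    return best_x, best_f
```

For SVM model selection the objective `f` would be cross-validation error as a function of a hyperparameter (in two dimensions, one chaotic variable per parameter); the same explore-then-shrink pattern applies.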
Design of New Single-phase Multilevel Voltage Source Inverter
Directory of Open Access Journals (Sweden)
Rasoul Shalchi Alishah
2014-07-01
Multilevel inverters with a larger number of levels can produce high-quality voltage waveforms. In this paper, a new single-phase structure for a multilevel voltage source inverter is proposed which can generate a large number of levels with a reduced number of IGBTs, gate driver circuits, and diodes. Three algorithms for determining the magnitudes of the dc voltage sources are presented, which provide odd and even numbers of levels in the output voltage waveform. A comparison is presented between the proposed multilevel inverter and the conventional cascade topology. The proposed topology is analyzed by experimental and simulation results.
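The effect of the dc-source magnitude choice on the level count can be illustrated with a generic cascaded-cell model in which each cell contributes -V, 0, or +V (a sketch only, not the paper's topology or its three specific algorithms):

```python
from itertools import product

def output_levels(magnitudes):
    """Distinct output-voltage levels of a generic cascaded-cell inverter.

    Each cell with source magnitude V contributes -V, 0, or +V, so the
    output is the set of all signed sums of the source magnitudes.
    """
    levels = {sum(c * v for c, v in zip(combo, magnitudes))
              for combo in product((-1, 0, 1), repeat=len(magnitudes))}
    return sorted(levels)
```

In this model, three equal sources give 2k+1 = 7 levels, binary magnitudes (1, 2, 4) give 15, and trinary magnitudes (1, 3, 9) reach the maximum 3^k = 27, which is why asymmetric source assignments yield many levels from few switches.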
The Control Unit of a Single Phase Voltage Regulator
Colak, Ilknur
2010-01-01
Supplying regulated voltage to critical loads has been an important topic for several years. This paper presents a single-phase electronic voltage regulator based on high-frequency switching of an isolated transformer, whose primary-side voltage is controlled by two full-bridge converters sharing a common DC bus and operating at 50 Hz and 20 kHz switching frequencies. This allows a 50 Hz induced voltage on the primary side of the transformer, regulated by high-frequency switching. Depending on the input voltage, the voltage at the secondary side of the transformer adds to (boost mode) or subtracts from (buck mode) the supply voltage, thereby maintaining a regulated voltage across the load. The regulator is controlled by a digital controller allowing fast dynamic response. A 5 kVA single-phase voltage regulator is realized to verify the operation of the proposed algorithm. The experimental results show that the regulator maintains constant voltage across the load both in step-up (low supply voltage) and step-down (high supply voltage) operation.
Roohi, Ehsan; Stefanov, Stefan
2016-11-01
This paper reviews the accuracy of the Simplified Bernoulli Trial (SBT) algorithm and its variants, i.e., SBT-TAS (SBT on transient adaptive subcells) and ISBT (intelligence SBT) in the simulation of a wide spectrum of rarefied flow problems, including collision frequency ratio evaluation in the equilibrium condition, comparison of the Sonine-polynomial coefficients prediction in the Fourier flow with the theoretical prediction of the Chapman-Enskog expansion, accurate wall heat flux solution for the Fourier flow in the early slip regime, and hypersonic flows over cylinder and biconic geometries. We summarize advantages and requirements that utilization of the SBT collision families brings to a typical DSMC solver.
Institute of Scientific and Technical Information of China (English)
徐杨军; 许建平; 王金平; 何圣仲
2014-01-01
The constant frequency turn-on time (CFOT) control technique for dynamic voltage scaling (DVS) switching converters is studied. Building on constant on-time (COT) control, an input-voltage feedforward path and an output reference-voltage feedback loop are introduced so that the DVS converter maintains a constant switching frequency under different input and output voltage conditions. The results show that CFOT control not only inherits the advantages of conventional COT control, namely a simple loop design with no error amplifier or corresponding compensation network and fast transient response, but also eliminates the influence of input- and output-voltage variations on the switching frequency.
Liu, Jinxing
2013-04-24
When a brittle heterogeneous material is simulated via lattice models, the quasi-static failure depends on the relative magnitudes of Telem, the characteristic releasing time of the internal forces of the broken elements, and Tlattice, the characteristic relaxation time of the lattice, both of which are infinitesimal compared with Tload, the characteristic loading period. The load-unload (L-U) method is used for one extreme, Telem << Tlattice, whereas the force-release (F-R) method is used for the other, Telem >> Tlattice. For cases between the above two extremes, we develop a new algorithm by combining the L-U and the F-R trial displacement fields to construct the new trial field. As a result, our algorithm includes both L-U and F-R failure characteristics, which allows us to observe the influence of the ratio of Telem to Tlattice by adjusting their contributions in the trial displacement field. The material dependence of the snap-back instabilities is thus implemented by introducing one snap-back parameter γ. Although in principle catastrophic failures can hardly be predicted accurately without knowing all microstructural information, the effects of γ can be captured by numerical simulations conducted on samples with exactly the same microstructure but different γs. Such a same-specimen-based study shows how the lattice behaves along with the changing ratio of the L-U and F-R components. © 2013 The Author(s).
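The combination of the two trial displacement fields can be pictured as a convex blend controlled by the snap-back parameter γ. The linear blending rule and the variable names below are illustrative assumptions, since the abstract does not give the exact combination used in the paper:

```python
import numpy as np

def blended_trial_field(u_lu, u_fr, gamma):
    """Blend the load-unload (L-U) and force-release (F-R) trial
    displacement fields.  gamma = 0 recovers pure L-U behaviour,
    gamma = 1 pure F-R; intermediate values mix the two failure
    characteristics.  (Linear blending is an illustrative assumption.)"""
    if not 0.0 <= gamma <= 1.0:
        raise ValueError("gamma must lie in [0, 1]")
    return (1.0 - gamma) * np.asarray(u_lu) + gamma * np.asarray(u_fr)
```

Running the same specimen with different γ values, as the paper describes, would then isolate the effect of the Telem/Tlattice ratio from microstructural variation.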
Norbeck, Edwin; Miller, Michael; Onel, Yasar
2010-11-01
For detector arrays that require 5 to 10 kV at a few microamps each for hundreds of detectors, using hundreds of HV power supplies is unreasonable. Bundles of hundreds of HV cables take up space that should be filled with detectors. A typical HV module can supply 1 mA, enough current for hundreds of detectors. It is better to use a single HV module and distribute the current as needed. We show a circuit that, for each detector, measures the current, cuts off the voltage if the current exceeds a set maximum, and allows the HV to be turned on or off from a control computer. The entire array requires a single HV cable and 2 or 3 control lines. This design provides the same voltage to all of the detectors, the voltage set by the single HV module. Some additional circuitry would allow a computer-controlled voltage drop between the HV and each individual detector.
High voltage engineering fundamentals
Kuffel, E; Hammond, P
1984-01-01
Provides a comprehensive treatment of high voltage engineering fundamentals at the introductory and intermediate levels. It covers: techniques used for generation and measurement of high direct, alternating and surge voltages for general application in industrial testing and selected special examples found in basic research; analytical and numerical calculation of electrostatic fields in simple practical insulation system; basic ionisation and decay processes in gases and breakdown mechanisms of gaseous, liquid and solid dielectrics; partial discharges and modern discharge detectors; and over
Glyavin, M. Yu.; Zavolskiy, N. A.; Sedov, A. S.; Nusinovich, G. S.
2013-03-01
For a long time, gyrotrons were primarily developed for electron cyclotron heating and current drive of plasmas in controlled fusion reactors, where multi-megawatt, quasi-continuous millimeter-wave power is required. In addition to this important application, there are others (and their number increases with time) which do not require a very high power level, but for which the ability to operate at low voltages and the compactness of the devices are very important. For example, gyrotrons are of interest for dynamic nuclear polarization, which improves the sensitivity of nuclear magnetic resonance spectroscopy. In this paper, some issues important for the operation of gyrotrons driven by low-voltage electron beams are analyzed. An emphasis is placed on the efficiency of low-voltage gyrotron operation at the fundamental and higher cyclotron harmonics. These efficiencies, calculated taking ohmic losses into account, were first determined in the framework of the generalized gyrotron theory based on the cold-cavity approximation. Then, more accurate, self-consistent calculations for the fundamental and second harmonic low-voltage sub-THz gyrotron designs were carried out. Results of these calculations are presented and discussed. It is shown that operation of fundamental and second harmonic gyrotrons with noticeable efficiencies is possible even at voltages as low as 5-10 kV. Even third harmonic gyrotrons can operate at voltages of about 15 kV, albeit with rather low efficiency (1%-2% in the submillimeter wavelength region).
Optimal coordinated voltage control of power systems
Institute of Scientific and Technical Information of China (English)
LI Yan-jun; HILL David J.; WU Tie-jun
2006-01-01
An immune algorithm is proposed in this paper to deal with the problem of optimal coordination of local physically based controllers in order to preserve mid- and long-term voltage stability. This is in fact a global coordination control problem, which involves not only sequencing and timing different control devices but also tuning the parameters of the controllers. A multi-stage coordinated control scheme is presented, aiming at retaining good voltage levels with minimal control efforts and costs after severe disturbances in power systems. A self-pattern-recognized vaccination procedure is developed to transfer effective heuristic information into the new generation of solution candidates, speeding up the convergence of the search procedure to global optima. A four-bus power system case study is investigated to show the effectiveness and efficiency of the proposed algorithm, compared with several existing approaches such as differential dynamic programming and tree search.
Device for monitoring cell voltage
Doepke, Matthias [Garbsen, DE]; Eisermann, Henning [Edermissen, DE]
2012-08-21
A device for monitoring a rechargeable battery having a number of electrically connected cells includes at least one current interruption switch for interrupting current flowing through at least one associated cell and a plurality of monitoring units for detecting cell voltage. Each monitoring unit is associated with a single cell and includes a reference voltage unit for producing a defined reference threshold voltage and a voltage comparison unit for comparing the reference threshold voltage with a partial cell voltage of the associated cell. The reference voltage unit is electrically supplied from the cell voltage of the associated cell. The voltage comparison unit is coupled to the at least one current interruption switch for interrupting the current of at least the current flowing through the associated cell, with a defined minimum difference between the reference threshold voltage and the partial cell voltage.
Fasching, George E.
1977-03-08
An improved high-voltage pulse generator has been provided which is especially useful in ultrasonic testing of rock core samples. N capacitors are charged in parallel to V volts and at the proper instant are coupled in series to produce a high-voltage pulse of N times V volts. Rapid switching of the capacitors from the paralleled charging configuration to the series discharging configuration is accomplished by using silicon-controlled rectifiers which are chain self-triggered following the initial triggering of a first one of the rectifiers connected between the first and second of the plurality of charging capacitors. A timing and triggering circuit is provided to properly synchronize triggering pulses to the first SCR at a time when the charging voltage is not being applied to the parallel-connected charging capacitors. Alternate circuits are provided for controlling the application of the charging voltage from a charging circuit to the parallel capacitors, which provides a selection of at least two different intervals in which the charging voltage is turned "off" to allow the SCRs connecting the capacitors in series to turn "off" before recharging begins. The high-voltage pulse-generating circuit, including the N capacitors and the corresponding SCRs which connect the capacitors in series when triggered "on", further includes diodes and series-connected inductors between the parallel-connected charging capacitors which allow sufficiently fast charging of the capacitors for a high pulse repetition rate and yet allow considerable control of the decay time of the high-voltage pulses from the pulse-generating circuit.
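The parallel-charge/series-discharge arithmetic is simple: N capacitors charged to V volts in parallel deliver roughly N × V when switched into series, ignoring drops across the SCRs, diodes and inductors. A tiny sketch (the loss term is an illustrative assumption):

```python
def marx_pulse_voltage(n_caps, charge_voltage, loss_fraction=0.0):
    """Ideal output of N series-switched capacitors charged to V in
    parallel; loss_fraction crudely lumps switch/diode drops."""
    return n_caps * charge_voltage * (1.0 - loss_fraction)

print(marx_pulse_voltage(10, 2000))  # 10 caps at 2 kV -> 20000.0 V ideal pulse
```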
Directory of Open Access Journals (Sweden)
LI Hui
2015-07-01
As the basis of object-oriented information extraction from remote sensing imagery, image segmentation using multiple image features, exploiting spatial context information, and following a multi-scale approach are current research focuses. Using an optimization approach from graph theory, an improved multi-scale image segmentation method is proposed. In this method, the image is first processed with a coherence-enhancing anisotropic diffusion filter, followed by a minimum spanning tree segmentation approach; the resulting segments are then merged with reference to a minimum heterogeneity criterion. The heterogeneity criterion is defined as a function of the spectral characteristics and shape parameters of the segments. The purpose of the merging step is to realize multi-scale image segmentation. Tested on two images, the proposed method was visually and quantitatively compared with the segmentation method employed in the eCognition software. The results show that the proposed method is effective and outperforms the latter on areas with subtle spectral differences.
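A merge criterion of the kind described, combining spectral and shape heterogeneity, can be sketched as follows. The particular weights and the compactness measure are illustrative assumptions; the paper defines its own function of spectral characteristics and shape parameters.

```python
import numpy as np

def heterogeneity(values, perimeter, area, w_spectral=0.7):
    """Weighted spectral + shape heterogeneity of one segment.

    Spectral term: pixel-value standard deviation weighted by area.
    Shape term: compactness, perimeter / sqrt(area) (assumed form).
    """
    spectral = area * float(np.std(values))
    shape = perimeter / np.sqrt(area)
    return w_spectral * spectral + (1.0 - w_spectral) * shape

def merge_cost(seg_a, seg_b, merged):
    """Increase in heterogeneity caused by merging two segments
    (each given as (values, perimeter, area)); the pair with the
    smallest cost is merged first, yielding coarser scales."""
    return heterogeneity(*merged) - heterogeneity(*seg_a) - heterogeneity(*seg_b)
```

Raising the cost threshold at which merging stops produces coarser segmentation levels, which is how the merging step realizes the multi-scale behavior.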
Hibert, Clement; Malet, Jean-Philippe; Provost, Floriane; Michéa, David; Geertsema, Marten
2017-04-01
Detection of landslide occurrences and measurement of their dynamic properties during run-out is a high research priority but a logistical and technical challenge. Seismology has started to help in several important ways. Taking advantage of the densification of global, regional and local networks of broadband seismic stations, recent advances now permit the seismic detection and location of landslides in near-real-time. This seismic detection could greatly increase the spatio-temporal resolution at which we study landslide triggering, which is critical to better understand the influence of external forcings such as rainfall and earthquakes. However, automatically detecting the seismic signals generated by landslides still represents a challenge, especially for events with volumes below one million cubic meters. The low signal-to-noise ratio classically observed for landslide-generated seismic signals and the difficulty of discriminating these signals from those generated by regional earthquakes or by anthropogenic and natural noise are some of the obstacles that have to be circumvented. We present a new method for automatically constructing instrumental landslide catalogues from continuous seismic data. We developed a robust and versatile solution, which can be implemented in any context where seismic detection of landslides or other mass movements is relevant. The method is based on a spectral detection of the seismic signals and the identification of the sources with a Random Forest algorithm. The spectral detection allows detecting signals with low signal-to-noise ratio, while the Random Forest algorithm achieves a high rate of positive identification of the seismic signals generated by landslides and other seismic sources. We present here the preliminary results of the application of this processing chain in two contexts: i) in the Himalaya, with the data acquired between 2002 and 2005 by the Hi-Climb network; ii) in Alaska, using data recorded by the
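The spectral detection stage, flagging time windows whose broadband energy rises above the noise floor, can be sketched with a short-time Fourier energy detector. The window length and the median-plus-MAD threshold below are illustrative assumptions, and the Random Forest classification stage is omitted.

```python
import numpy as np

def spectral_detect(signal, win=256, k=8.0):
    """Flag windows whose summed spectral energy exceeds a robust
    threshold (median + k * MAD over all windows)."""
    n_win = len(signal) // win
    energies = np.array([
        np.sum(np.abs(np.fft.rfft(signal[i*win:(i+1)*win]))**2)
        for i in range(n_win)
    ])
    med = np.median(energies)
    mad = np.median(np.abs(energies - med)) + 1e-12
    return np.flatnonzero(energies > med + k * mad)

# Synthetic check: low-level noise with one emergent transient in window 10.
rng = np.random.default_rng(0)
x = 0.1 * rng.standard_normal(256 * 20)
x[256*10:256*11] += np.sin(2 * np.pi * 5 * np.arange(256) / 256.0)
print(spectral_detect(x))
```

A robust (median/MAD) threshold matters here because the detected events themselves would otherwise inflate a mean-based noise estimate.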
Institute of Scientific and Technical Information of China (English)
彭茂兰; 赵成勇; 刘兴华; 郭春义
2014-01-01
The capacitor voltage balancing of the modular multilevel converter (MMC) determines the stable operation of MMC-based high voltage direct current (HVDC) transmission systems. When the number of sub-modules (SMs) is large, sorting all capacitor voltages requires a large amount of computation time, which poses a challenge to the design of the physical controller. To solve this problem, this paper proposes an improved capacitor voltage balancing method. The improved method groups the SMs to reduce the computational cost of sorting the capacitor voltages, and maintains the voltage balance between groups by adopting an inter-group voltage balancing algorithm. An optimization method inspired by the prime factorization principle is then put forward. An MMC-HVDC model was developed in RT-Lab. The simulation results show that the improved method and the optimization method both reduce the amount of computation and improve the simulation speed significantly while keeping the capacitor voltages balanced. These results verify the effectiveness and feasibility of the improved method and the optimization method.
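The grouped sorting idea can be sketched as follows: instead of sorting all N submodule voltages at once, the submodules are split into groups, each group is sorted separately, and insertions are allocated to groups by their average voltage. The group-allocation rule below is an illustrative assumption, not the paper's exact inter-group algorithm.

```python
def select_submodules(voltages, n_on, n_groups, charging):
    """Pick n_on submodules to insert (len(voltages) divisible by n_groups).

    charging : if True the arm current charges inserted capacitors,
               so the lowest-voltage submodules are chosen (and vice versa).
    Sorting per group cuts the cost of one global sort over all SMs.
    """
    size = len(voltages) // n_groups
    groups = [list(range(g * size, (g + 1) * size)) for g in range(n_groups)]
    # Inter-group balancing: groups with lower (higher) mean voltage
    # absorb insertions first when charging (discharging).
    means = [sum(voltages[i] for i in g) / size for g in groups]
    order = sorted(range(n_groups), key=lambda g: means[g], reverse=not charging)
    selected, remaining = [], n_on
    for g in order:
        take = min(remaining, size)
        idx = sorted(groups[g], key=lambda i: voltages[i], reverse=not charging)
        selected += idx[:take]
        remaining -= take
        if remaining == 0:
            break
    return sorted(selected)
```

With G groups of N/G submodules each, the per-group sorts cost roughly G·(N/G)·log(N/G) comparisons versus N·log N for a global sort, which is where the claimed computation savings come from.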
Directory of Open Access Journals (Sweden)
M. Subhashini,
2014-06-01
To achieve the most energy-efficient operation, this brief presents a circuit design technique for separating the power supply voltage (VDD) of flip-flops (FFs) from that of combinational circuits, called the higher voltage FF (HVFF). Although VDD scaling can reduce the energy, the minimum operating voltage (VDDmin) of FFs prevents operation at the optimum supply voltage that minimizes the energy, because the VDDmin of FFs is higher than the optimum supply voltage. In HVFF, the VDD of combinational logic gates is reduced below the VDDmin of FFs while keeping the VDD of FFs at their VDDmin. This makes it possible to minimize the energy without power and delay penalties at the nominal supply voltage (1.2 V), as well as without FF topological modifications. A four-bit ALU is designed in this paper using dual supply voltages in DSCH.
Experimental validation of prototype high voltage bushing
Shah, Sejal; Tyagi, H.; Sharma, D.; Parmar, D.; M. N., Vishnudev; Joshi, K.; Patel, K.; Yadav, A.; Patel, R.; Bandyopadhyay, M.; Rotti, C.; Chakraborty, A.
2017-08-01
Prototype High voltage bushing (PHVB) is a scaled down configuration of DNB High Voltage Bushing (HVB) of ITER. It is designed for operation at 50 kV DC to ensure operational performance and thereby confirming the design configuration of DNB HVB. Two concentric insulators viz. Ceramic and Fiber reinforced polymer (FRP) rings are used as double layered vacuum boundary for 50 kV isolation between grounded and high voltage flanges. Stress shields are designed for smooth electric field distribution. During ceramic to Kovar brazing, spilling cannot be controlled which may lead to high localized electrostatic stress. To understand spilling phenomenon and precise stress calculation, quantitative analysis was performed using Scanning Electron Microscopy (SEM) of brazed sample and similar configuration modeled while performing the Finite Element (FE) analysis. FE analysis of PHVB is performed to find out electrical stresses on different areas of PHVB and are maintained similar to DNB HV Bushing. With this configuration, the experiment is performed considering ITER like vacuum and electrical parameters. Initial HV test is performed by temporary vacuum sealing arrangements using gaskets/O-rings at both ends in order to achieve desired vacuum and keep the system maintainable. During validation test, 50 kV voltage withstand is performed for one hour. Voltage withstand test for 60 kV DC (20% higher rated voltage) have also been performed without any breakdown. Successful operation of PHVB confirms the design of DNB HV Bushing. In this paper, configuration of PHVB with experimental validation data is presented.
Improved chirp scaling algorithm for bistatic squint SAR imaging
Institute of Scientific and Technical Information of China (English)
谭源泉; 张柯; 李捷
2011-01-01
Due to stealth, counter-reconnaissance and other advantages, increasing attention has been paid to bistatic squint synthetic aperture radar (SAR), whereas research on bistatic translationally variant squint SAR imaging remains limited. This paper proposes an improved chirp scaling (CS) algorithm for bistatic translationally variant squint SAR imaging based on the parallel flight mode. The proposed CS algorithm employs a simplified method to make the bistatic echo equivalent to a monostatic echo, and introduces a transmitter-to-receiver speed ratio factor, so that good imaging is obtained when the speed ratio is no more than 1.5. The validity of the algorithm is verified through simulation and experimental data.
Voltage Regulators for Photovoltaic Systems
Delombard, R.
1986-01-01
Two simple circuits were developed to provide voltage regulation for high-voltage (greater than 75 volts) and low-voltage (less than 36 volts) photovoltaic/battery power systems. Use of these circuits results in a voltage regulator that is small, low-cost, and reliable, with very low power dissipation. A simple oscillator circuit controls the photovoltaic-array current to regulate the system voltage and control battery charging. The circuit senses the battery (and system) voltage and adjusts the array current to keep the battery voltage from exceeding a maximum voltage.
Directory of Open Access Journals (Sweden)
Markowski Marcin
2017-09-01
In recent years elastic optical networks have been perceived as a prospective choice for future optical networks, due to better adjustment and utilization of optical resources than in traditional wavelength division multiplexing networks. In this paper we investigate the elastic architecture as the communication network for distributed data centers. We address the problem of optimizing routing and spectrum assignment for large-scale computing systems based on an elastic optical architecture; in particular, we concentrate on anycast user-to-data-center traffic optimization. We assume that the computational resources of the data centers are limited. For this offline problem we formulate an integer linear programming model and propose a few heuristics, including a meta-heuristic algorithm based on a tabu search method. We report computational results, presenting the quality of approximate solutions and the efficiency of the proposed heuristics, and we also analyze and compare some data center allocation scenarios.
Directory of Open Access Journals (Sweden)
Abdallah Bengueddoudj
2017-05-01
In this paper, we propose a new image fusion algorithm based on the two-dimensional Scale-Mixing Complex Wavelet Transform (2D-SMCWT). The fusion of the detail 2D-SMCWT coefficients is performed via a Bayesian maximum a posteriori (MAP) approach, considering a trivariate statistical model for the local neighborhood of 2D-SMCWT coefficients. For the approximation coefficients, a new fusion rule based on principal component analysis (PCA) is applied. We conduct several experiments using three different groups of multimodal medical images to evaluate the performance of the proposed method. The obtained results prove the superiority of the proposed method over state-of-the-art fusion methods in terms of visual quality and several commonly used metrics. Robustness of the proposed method is further tested against different types of noise. The plots of fusion metrics establish the accuracy of the proposed fusion method.
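PCA-based fusion of approximation coefficients is commonly implemented by weighting the two source bands with the components of the dominant eigenvector of their covariance matrix. A sketch under that common-practice assumption (the paper's exact rule may differ):

```python
import numpy as np

def pca_fuse(a, b):
    """Fuse two approximation-coefficient arrays by PCA weighting:
    weights are the normalized entries of the principal eigenvector
    of the 2x2 covariance matrix of the flattened inputs."""
    data = np.vstack([np.ravel(a), np.ravel(b)])
    cov = np.cov(data)
    vals, vecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    v = np.abs(vecs[:, -1])            # principal component direction
    w = v / v.sum()
    return w[0] * a + w[1] * b
```

The effect is that the band carrying more variance (more structure) dominates the fused approximation, while identical inputs pass through unchanged.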
Studying Voltage Transformer Ferroresonance
Directory of Open Access Journals (Sweden)
Hamid Radmanesh
2012-09-01
This study examines the effect of circuit breaker shunt resistance (CBSR), a metal oxide varistor (MOV) and neutral earth resistance (NR) on the control of ferroresonance in a voltage transformer. It is expected that NR can control ferroresonance better than the MOV and CBSR. The study has been carried out on a single-phase voltage transformer rated 100 VA, 275 kV. The simulation results reveal that the CBSR and MOV have a strong mitigating effect on ferroresonance overvoltages, but these resistances cannot control the phenomenon over the full range of parameters. By applying NR to the system structure, ferroresonance has been controlled and its amplitude damped for all parameter values.
Directory of Open Access Journals (Sweden)
Alfredo Cuzzocrea
2016-01-01
Nowadays, a leading instance of big data is represented by Web data, which lead to the definition of so-called big Web data. Indeed, extending to a large number of critical applications (e.g., Web advertisement), these data expose several characteristics that clearly adhere to the well-known 3V properties (i.e., volume, velocity, variety). The Resource Description Framework (RDF) is a significant formalism and language for the so-called Semantic Web, due to the fact that a very wide family of Web entities can be naturally modeled in a graph-shaped manner. In this context, RDF graphs play a first-class role, because they are widely used in modern Web applications and systems, including the emerging context of social networks. When RDF graphs are defined on top of big (Web) data, they lead to so-called large-scale RDF graphs, which reasonably populate the next-generation Semantic Web. In order to process this kind of big data, MapReduce, an open-source computational framework specifically tailored to big data processing, has emerged during recent years as the reference implementation for this critical setting. In line with this trend, in this paper we present an approach for efficiently implementing traversals of large-scale RDF graphs over MapReduce, based on the breadth-first search (BFS) strategy for visiting (RDF) graphs, to be decomposed and processed according to the MapReduce framework. We demonstrate how this implementation speeds up the analysis of RDF graphs with respect to competitor approaches. Experimental results clearly support our contributions.
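The iterative BFS-over-MapReduce pattern can be sketched in plain Python, with one map step and one reduce step per BFS level; the dictionary-based graph representation is an illustrative assumption standing in for the distributed storage of RDF triples.

```python
def mapreduce_bfs(adj, source):
    """Iterative BFS in MapReduce style: each round, a map step emits
    tentative distances to the neighbours of the current frontier and
    a reduce step keeps the minimum candidate distance per node."""
    dist = {source: 0}
    frontier = {source}
    while frontier:
        # Map: emit (neighbour, candidate distance) pairs.
        emitted = [(v, dist[u] + 1) for u in frontier for v in adj.get(u, [])]
        # Reduce: keep the smallest candidate per node.
        best = {}
        for v, d in emitted:
            if d < best.get(v, float("inf")):
                best[v] = d
        # Next frontier: nodes whose distance improved this round.
        frontier = {v for v, d in best.items()
                    if d < dist.get(v, float("inf"))}
        for v in frontier:
            dist[v] = best[v]
    return dist
```

In an actual MapReduce deployment each round is one job, so the number of jobs equals the graph diameter; that is the main cost the paper's decomposition aims to keep manageable.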
Sleeman, J.; Halem, M.; Finin, T.; Cane, M. A.
2016-12-01
topics, we establish which chapter-citation pairs are most similar. We will perform posterior inferences based on a Metropolis-Hastings simulated annealing MCMC algorithm to infer, from the evolution of topics from AR1 to AR4, assertions of topics for AR5 and potentially AR6.
Enhancement Algorithm for Fog Images Based on Improved Multi-scale Retinex
Institute of Scientific and Technical Information of China (English)
李菊霞; 余雪丽
2013-01-01
When images are shot in foggy conditions, atmospheric scattering degrades their color and contrast. In order to improve the quality of fog images, this paper proposes a fog image enhancement method based on an improved multi-scale Retinex algorithm. First, the dynamic range of the image is compressed by a power transform; then a nonlinear transform is used to suppress the highlight areas of the image; finally, unsharp mask filtering is used to remove blur and enhance the detail information of the fog image. Several fog images were used to test the performance of the proposed algorithm. The simulation results show that the proposed algorithm overcomes the shortcomings of traditional Retinex algorithms, accelerates fog image enhancement, and produces clearer images with better visual quality.
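The three-stage pipeline (power transform, highlight suppression, unsharp masking) can be sketched on a grayscale image scaled to [0, 1]. The exponent, the highlight curve and the box-blur radius below are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

def box_blur(img, r=2):
    """Box blur via a summed-area table (stand-in for the mask's blur)."""
    p = np.pad(img, r, mode="edge")
    s = np.zeros((p.shape[0] + 1, p.shape[1] + 1))
    s[1:, 1:] = p.cumsum(0).cumsum(1)
    n = 2 * r + 1
    return (s[n:, n:] - s[:-n, n:] - s[n:, :-n] + s[:-n, :-n]) / n**2

def enhance_fog(img, gamma=0.8, k_highlight=0.3, amount=1.0):
    """Power transform -> highlight suppression -> unsharp mask."""
    x = np.clip(img, 0.0, 1.0) ** gamma        # compress dynamic range
    x = x / (1.0 + k_highlight * x)            # suppress bright (hazy) areas
    x = x / max(x.max(), 1e-12)                # renormalize to [0, 1]
    sharp = x + amount * (x - box_blur(x))     # unsharp mask: boost details
    return np.clip(sharp, 0.0, 1.0)
```

The unsharp mask adds back the difference between the image and its blurred copy, which is what restores the edge detail that fog washes out.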
Voltage Regulators for Asynchronous Generators
Directory of Open Access Journals (Sweden)
Grigorash O. V.
2015-06-01
The use of asynchronous generators with capacitive excitation as a source of electricity in stand-alone power systems is currently promising. Asynchronous generators may be driven by a thermal engine, by the wind wheel of a wind power plant, or by the turbines of small hydropower plants. The article discusses structural and circuit schematics of voltage and frequency stabilizers for asynchronous generators with improved operational and technical specifications. The technical novelty of the design solutions for the magnetic system and for the stabilizers of the asynchronous generator's electricity parameters is confirmed by patents for inventions of the Russian Federation. The proposed voltage stabilizer for asynchronous generators can reduce the weight of the capacitor blocks for excitation and reactive power compensation, as well as simplify the control system of the power circuit, which contains fewer power electronic devices. For wind power plants it is important to stabilize not only the generator voltage but also the current frequency. Stabilizer schemes based on direct frequency converters with artificial and natural switching of the power electronic devices are recommended. It is also proposed, as part of the stabilization systems, to use single-phase to three-phase transformers with a rotating magnetic field, to reduce the level of electromagnetic interference generated by the switching of power electronic devices, and to enhance the efficiency and reliability of the stabilizer.
Geomagnetism and Induced Voltage
Abdul-Razzaq, W.; Biller, R. D.
2010-01-01
Introductory physics laboratories have seen an influx of "conceptual integrated science" over time in their classrooms with elements of other sciences such as chemistry, biology, Earth science, and astronomy. We describe a laboratory to introduce this development, as it attracts attention to the voltage induced in the human brain as it…
Energy Technology Data Exchange (ETDEWEB)
Bugl, Andrea; Ball, Markus; Boehmer, Michael; Doerheim, Sverre; Hoenle, Andreas; Konorov, Igor [Technische Universitaet Muenchen, Garching (Germany); Ketzer, Bernhard [Technische Universitaet Muenchen, Garching (Germany); Helmholtz-Institut fuer Strahlen- und Kernphysik, Bonn (Germany)
2014-07-01
Current measurements in the nano- and picoampere region at high voltage are an important tool for understanding charge transfer processes in micropattern gas detectors like the Gas Electron Multiplier (GEM). They are currently used, e.g., to optimize the field configuration in a multi-GEM stack to be used in the ALICE TPC after the upgrade of the experiment during the 2nd long shutdown of the LHC. Devices which allow measurements down to 1 pA at high voltages up to 6 kV have been developed at TU Muenchen. They are based on analog current measurements via the voltage drop over a switchable shunt. A microcontroller collects 128 digital ADC values and calculates their mean and standard deviation. This information is sent via a wireless transmitting unit to a computer and stored in a ROOT file. A nearly unlimited number of devices can be operated simultaneously and read out by a single receiver. The results can also be displayed on an LCD directly at the device. Battery operation and the wireless readout are important to protect the user from any contact with high voltage. The principle of the device is explained, and systematic studies of its properties are shown.
Wavelet-based classification of voltage sag, swell and transients
Directory of Open Access Journals (Sweden)
Vijay Gajanan Neve
2013-05-01
When the time localization of the spectral components is needed, the wavelet transform (WT) can be used to obtain an optimal time-frequency representation of the signal. This paper deals with the use of a wavelet transform to detect and analyze voltage sags, voltage swells and transients. It introduces a voltage disturbance detection approach based on the wavelet transform, identifies voltage disturbances, and discriminates the type of event that caused the disturbance, e.g. either a fault or a capacitor-switching incident. Feasibility of the proposed disturbance detection approach is demonstrated based on digital time-domain simulation of a distribution power system using the PSCAD software package, and is implemented using MATLAB. The developed algorithm has been applied to the 14-bus IEEE system to illustrate its application. Results are analyzed.
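The detection idea, abrupt waveform changes showing up as large first-level wavelet detail coefficients, can be sketched with a single-level Haar transform in NumPy (PyWavelets or MATLAB's Wavelet Toolbox would normally be used, and the median-based threshold is an illustrative assumption):

```python
import numpy as np

def haar_detail(x):
    """Level-1 Haar detail coefficients of an even-length signal."""
    x = np.asarray(x, dtype=float)
    return (x[0::2] - x[1::2]) / np.sqrt(2.0)

def detect_disturbance(v, k=5.0):
    """Indices (in coefficient domain) where |detail| exceeds
    k * median(|detail|): candidate sag/swell/transient boundaries."""
    d = np.abs(haar_detail(v))
    thr = k * (np.median(d) + 1e-12)
    return np.flatnonzero(d > thr)

# 50 Hz waveform sampled at 6.4 kHz with a 50% sag over samples 501-750.
t = np.arange(1024) / 6400.0
v = np.sin(2 * np.pi * 50 * t)
v[501:751] *= 0.5
print(detect_disturbance(v))
```

The two flagged coefficients bracket the sag onset and recovery; classifying the event type (fault vs. capacitor switching) would then use the pattern of the detail coefficients around those instants.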
Electric Vehicle IM Controller Based on Voltage-Fed Inverter
Institute of Scientific and Technical Information of China (English)
宋建国; 张承宁; 袁学; 谭建
2004-01-01
A novel electric vehicle (EV) induction motor (IM) controller based on a voltage-fed inverter is presented. It is shown that the proposed adaptive control algorithm both simplifies the structure and expands the capacity of the controller. The relationship between the stator voltage and current in rotor-flux-oriented coordinates is first introduced, and then the structure of the vector control is analyzed, in which voltage compensation forms the core feedback procedure. Experiments prove that, together with ease of realization, a smooth transition, a prompt torque response and small oscillation are obtained. Extensive research conducted by varying the parameters that cause ripple in practice is proposed in conclusion.
Institute of Scientific and Technical Information of China (English)
杨涛; 张生虎; 高洪明; 吴林; 许可望; 刘永贞
2013-01-01
Plasma-MIG hybrid arc welding places high demands on the output characteristics of the power supply and on welding process control. Using VC++ as the development platform, an incremental PID control algorithm suitable for plasma-MIG hybrid arc welding was derived, satisfying the requirements for process control and for the external characteristics of the power supply. The results show that incremental PID constant-current/constant-voltage control meets the output requirements of plasma-MIG welding. The plasma arc and the MIG arc are not independent of each other; they are coupled through the shared electromagnetic space, conductive atmosphere and wire. During plasma-MIG hybrid arc welding, the plasma arc under incremental PID control spontaneously adjusts its own electrical parameters to stabilize the current density in the arc space, so that the welding process is free of spatter. With this control, the molten metal spreads well, the welding process is stable, and the weld bead is well formed.
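The incremental (velocity-form) PID law referenced above computes a change in controller output rather than the absolute output: du_k = Kp*(e_k - e_{k-1}) + Ki*e_k + Kd*(e_k - 2*e_{k-1} + e_{k-2}). A minimal sketch with illustrative gains and a toy first-order plant (not the welding power supply model):

```python
class IncrementalPID:
    """Incremental (velocity-form) PID:
    du_k = Kp*(e_k - e_km1) + Ki*e_k + Kd*(e_k - 2*e_km1 + e_km2)."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.e_km1 = 0.0   # error one step back
        self.e_km2 = 0.0   # error two steps back

    def step(self, error):
        du = (self.kp * (error - self.e_km1)
              + self.ki * error
              + self.kd * (error - 2 * self.e_km1 + self.e_km2))
        self.e_km2, self.e_km1 = self.e_km1, error
        return du

# Regulating a first-order plant y_{k+1} = 0.9*y_k + 0.1*u_k to setpoint 1.0.
pid, y, u = IncrementalPID(0.5, 0.1, 0.05), 0.0, 0.0
for _ in range(200):
    u += pid.step(1.0 - y)   # accumulate the increments into the output
    y = 0.9 * y + 0.1 * u
print(round(y, 3))  # settles at 1.0
```

Because only the increment is computed, the form is naturally bumpless on mode switches and avoids integral windup accumulating in the controller itself, which is why it suits constant-current/constant-voltage switching supplies.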
De Siena, L.; Thomas, C.; Aster, R.
2014-05-01
The attenuation of body-wave amplitudes with propagation distance can be used to provide detailed tomographic images of seismic interfaces, fluid reservoirs, and melt batches in the crust. The high sensitivity of body-wave energies to high-scattering structures becomes an obstacle when we try to apply attenuation tomography to small-scale volcanic media, where we must take into account the complexities induced by strong heterogeneous scattering, topography, and uncertain source modeling in the recorded wave-fields. The MuRAT code uses a source- and site-independent coda-normalization method to obtain frequency-dependent measurements of P-to-coda and S-to-coda energy ratios. The code inverts these data for both the geometrical spreading factor and the spatially-dependent quality factors (Q), providing additional attenuation information in the regions where velocity tomography is available. The high sensitivity of coda-waves to highly heterogeneous structures highlights zones of anomalous scattering, which may corrupt amplitude-dependent attenuation measurements, and where basal assumptions of linear optics may go unfulfilled. A multi-step tomographic inversion increases the stability of the results obtained in regions of high heterogeneity (e.g., the volcanic edifice) by the inclusion of data corresponding to either sources or stations located in regions of lower heterogeneity. On the other hand, a mere increase in the number of rays entirely contained in the heterogeneous structures affects both the stability and the effective resolution of the results. We apply the code to two small waveform datasets recorded at an active (Mount St. Helens) and at a quiescent (Mount Vesuvius) volcano. The results show that the seismicity located inside or under the volcanic edifice produces an increase of the low-frequency energy ratios with travel time in both areas. In our interpretation, the anomalous concentration of energy which affects any waveform recorded on the cone
Institute of Scientific and Technical Information of China (English)
余汉华; 胡雁辉; 张晓津
2013-01-01
To address the low precision of partial discharge detection on equipment using traditional high-frequency current transformers (HFCT) or capacitively coupled sensors, the artificial fish swarm algorithm was used to optimize the architecture of the online monitoring system. Interactive simulation between Matlab and LabVIEW verified that the precision of the partial discharge detection system was improved.
Marine High-voltage Motor Winding Dielectric Loss Angle Correction Algorithm
Institute of Scientific and Technical Information of China (English)
李阳勤; 刘敬彪; 蔡文郁; 霍洪强
2015-01-01
This paper describes a correction algorithm for the dielectric loss angle of marine high-voltage motor windings, and outlines China's marine expedition equipment and the necessity of preventing insulation failures across the entire power system. It introduces the dielectric loss angle calculation based on harmonic analysis and analyzes the errors this method introduces. A correction method addressing both spectral leakage and the picket-fence effect is then proposed: a Hamming-window weighting function combined with a corrected fast Fourier transform, which corrects the frequency, amplitude, and phase angle respectively. Field measurements show that the dielectric loss value obtained from the improved harmonic analysis method varies very little with frequency fluctuation.
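The windowed-FFT phase correction described above can be illustrated with a minimal sketch (not the paper's implementation): the loss angle δ is the deviation of the voltage-to-current phase shift from 90°, estimated from the phase difference of the Hamming-windowed spectra at the fundamental bin. All signal parameters below are illustrative.

```python
import numpy as np

def loss_angle_deg(u, i, fs, f0):
    """Estimate the dielectric loss angle delta (degrees) as the deviation
    of the u->i phase shift from 90 deg, using Hamming-windowed FFTs."""
    w = np.hamming(len(u))                 # suppress spectral leakage
    U = np.fft.rfft(u * w)
    I = np.fft.rfft(i * w)
    k = int(round(f0 * len(u) / fs))       # fundamental bin
    dphi = np.angle(I[k] * np.conj(U[k]))  # phase of i relative to u
    return 90.0 - np.degrees(dphi)

fs, n, f0, delta = 10000.0, 4096, 50.2, 0.5   # delta: true loss angle (deg)
t = np.arange(n) / fs
u = np.cos(2 * np.pi * f0 * t)                           # applied voltage
i = np.cos(2 * np.pi * f0 * t + np.radians(90 - delta))  # capacitive current
print(round(loss_angle_deg(u, i, fs, f0), 2))
```

Because both signals share the same frequency and the same window, the window's own phase contribution cancels in the difference, so a useful δ estimate survives the non-integer number of cycles.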
Modified Great Deluge Algorithm for the Large-scale Travelling Salesman Problem
Institute of Scientific and Technical Information of China (English)
盛虹平; 马良
2012-01-01
The great deluge algorithm is a heuristic that searches for a global optimum by simulating the process of a rising flood; r-opt algorithms are commonly used for path improvement. For the travelling salesman problem, this paper presents a modified great deluge algorithm that combines the two, which can be used to solve large-scale and super-large-scale instances quickly. The algorithm was implemented in Delphi 7 and tested on a series of standard instances from TSPLIB. The errors between the algorithm's results and the best results published in TSPLIB are almost all below 1%. The algorithm thus offers a new approach to the difficult large-scale travelling salesman problem.
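A minimal sketch of the great-deluge/r-opt combination (here with r = 2, i.e. 2-opt reversals; the authors' Delphi implementation and tuning are not reproduced, and the decay rate is an illustrative choice):

```python
import math, random

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def great_deluge_tsp(pts, iters=20000, rain=None, seed=0):
    """Great deluge search with 2-opt moves: accept a candidate tour
    whenever its length stays below the falling 'water level'."""
    rng = random.Random(seed)
    n = len(pts)
    tour = list(range(n))
    rng.shuffle(tour)
    best = cur = tour_length(tour, pts)
    level = cur                          # initial water level
    rain = rain or best / (2 * iters)    # level decay per iteration
    best_tour = tour[:]
    for _ in range(iters):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # 2-opt reversal
        cl = tour_length(cand, pts)
        if cl <= level:                  # accept anything under the level
            tour, cur = cand, cl
            if cl < best:
                best, best_tour = cl, cand[:]
        level -= rain                    # the flood keeps rising
    return best_tour, best

# small square instance: optimum is the perimeter, length 4
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
tour, length = great_deluge_tsp(pts)
print(round(length, 6))  # → 4.0
```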
Power flow analysis for DC voltage droop controlled DC microgrids
DEFF Research Database (Denmark)
Li, Chendan; Chaudhary, Sanjay; Dragicevic, Tomislav
2014-01-01
This paper proposes a new algorithm for power flow analysis in droop-controlled DC microgrids. By considering the droop control in the power flow analysis for the DC microgrid, more accurate results can be obtained than with traditional methods. The algorithm is verified by comparing its results with detailed time-domain simulations. With the droop parameters as variables in the power flow analysis, their effects on power sharing and secondary voltage regulation can now be studied analytically, and specialized optimization in the upper-level control can be made accordingly. Case studies on power sharing and secondary voltage regulation are carried out using the proposed power flow analysis.
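The idea of embedding the droop characteristic directly in the power flow can be sketched on a minimal one-source, one-load DC network (illustrative parameters; this is not the paper's general-network algorithm):

```python
def droop_dc_power_flow(v_ref, k_droop, r_line, p_load, tol=1e-10,
                        max_iter=100):
    """Fixed-point power flow for one droop-controlled DC source feeding a
    constant-power load: V_src = v_ref - k_droop*I, V_load = V_src -
    r_line*I, and P_load = V_load * I at the load bus."""
    i = p_load / v_ref                    # flat start
    for _ in range(max_iter):
        v_load = v_ref - (k_droop + r_line) * i   # droop + line drop
        i_new = p_load / v_load                   # constant-power injection
        if abs(i_new - i) < tol:
            break
        i = i_new
    return v_load, i

v, i = droop_dc_power_flow(v_ref=48.0, k_droop=0.5, r_line=0.1, p_load=200.0)
print(round(v, 3), round(i, 3))
```

Treating `k_droop` as a variable here is exactly what lets one study its effect on voltage regulation analytically: a larger droop gain deepens the load-bus voltage sag for the same power.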
Directory of Open Access Journals (Sweden)
Hiroshi Kikusato
2016-01-01
Many photovoltaic (PV) systems have been installed in distribution systems. This complicates keeping all voltages within the appropriate range in every low-voltage distribution system (LVDS), because the voltage fluctuation trends differ in each LVDS. The installation of a low-voltage regulator (LVR) that can control the voltage in each LVDS accordingly has been studied as a solution to this problem. Voltage control in the medium-voltage distribution system must also be considered when studying the deployment of LVRs. In this study, we installed LVRs in the LVDSs in which the existing voltage-control scheme cannot prevent voltage deviation, and performed a numerical simulation using a distribution system model with PV to evaluate the deployment of the LVRs.
Analysis of the Dynamic Voltage Restorer in Series Voltage Compensation
Directory of Open Access Journals (Sweden)
Naser Parhizgar
2012-02-01
The Dynamic Voltage Restorer (DVR) is a series-connected compensator that generates a controllable voltage to counter short-term voltage disturbances. The DVR technique is an effective and cost-competitive approach to improving voltage quality at the load side. This study presents single-phase and three-phase DVR systems with a reduced-switch-count topology to protect sensitive loads against abnormal voltage conditions. In its most basic configuration, the DVR consists of a two-level Voltage Source Converter (VSC), a DC energy storage device, and a coupling transformer connected in series with the AC system. The study presents the application of the DVR on power distribution systems for mitigation of voltage sags at critical loads. The DVR is one of the compensating types of custom power devices; based on a forced-commutated VSC, it has been proved suitable for compensating voltage sags and swells. Simulation results are presented to illustrate the performance of the DVR in supporting load voltages under voltage sag/swell conditions.
Emira, Ahmed A.
2014-10-09
Various embodiments of a high voltage charge pump are described. One embodiment is a charge pump circuit that comprises a plurality of switching stages each including a clock input, a clock input inverse, a clock output, and a clock output inverse. The circuit further comprises a plurality of pumping capacitors, wherein one or more pumping capacitors are coupled to a corresponding switching stage. The circuit also comprises a maximum selection circuit coupled to a last switching stage among the plurality of switching stages, the maximum selection circuit configured to filter noise on the output clock and the output clock inverse of the last switching stage, the maximum selection circuit further configured to generate a DC output voltage based on the output clock and the output clock inverse of the last switching stage.
Directory of Open Access Journals (Sweden)
Yang Lei
2015-05-01
Full Text Available This paper describes an automatic mosaicking algorithm for creating large-scale mosaic maps of forest height. In contrast to existing mosaicking approaches through using SAR backscatter power and/or InSAR phase, this paper utilizes the forest height estimates that are inverted from spaceborne repeat-pass cross-pol InSAR correlation magnitude. By using repeat-pass InSAR correlation measurements that are dominated by temporal decorrelation, it has been shown that a simplified inversion approach can be utilized to create a height-sensitive measure over the whole interferometric scene, where two scene-wide fitting parameters are able to characterize the mean behavior of the random motion and dielectric changes of the volume scatterers within the scene. In order to combine these single-scene results into a mosaic, a matrix formulation is used with nonlinear least squares and observations in adjacent-scene overlap areas to create a self-consistent estimate of forest height over the larger region. This automated mosaicking method has the benefit of suppressing the global fitting error and, thus, mitigating the “wallpapering” problem in the manual mosaicking process. The algorithm is validated over the U.S. state of Maine by using InSAR correlation magnitude data from ALOS/PALSAR and comparing the inverted forest height with Laser Vegetation Imaging Sensor (LVIS height and National Biomass and Carbon Dataset (NBCD basal area weighted (BAW height. This paper serves as a companion work to previously demonstrated results, the combination of which is meant to be an observational prototype for NASA’s DESDynI-R (now called NISAR and JAXA’s ALOS-2 satellite missions.
Increased voltage photovoltaic cell
Ross, B.; Bickler, D. B.; Gallagher, B. D. (Inventor)
1985-01-01
A photovoltaic cell, such as a solar cell, is provided which has a higher output voltage than prior cells. The improved cell includes a substrate of doped silicon, a first layer of silicon disposed on the substrate and having opposite doping, and a second layer of silicon carbide disposed on the first layer. The silicon carbide preferably has the same type of doping as the first layer.
High Voltage Seismic Generator
Bogacz, Adrian; Pala, Damian; Knafel, Marcin
2015-04-01
This contribution describes the preliminary results of a year of cooperation among three student research groups from AGH UST in Krakow, Poland. The aim of this cooperation was to develop and construct a high-voltage seismic wave generator. The constructed device uses a high-energy electrical discharge to generate seismic waves in the ground. This type of device can be applied in several different seismic measurement methods, but because of its limited power it is mainly intended for engineering geophysics. The source operates on basic physical principles: energy is stored in a capacitor bank, which is charged by a two-stage low-to-high-voltage converter, and the stored energy is then released in a very short time through a high-voltage thyristor into a spark gap. The whole appliance is powered from a Li-ion battery and controlled by an ATmega microcontroller. It is possible to construct a larger and more powerful device. In this contribution the structure of the device is presented along with its technical specifications. As part of the investigation a prototype was built and a series of experiments conducted; system parameters were measured, and on this basis the elements for the final device were selected. The first stage of the project was successful: it was possible to generate seismic waves efficiently with the constructed device. A field test was then conducted. The spark gap was placed in a shallow borehole (0.5 m) filled with salt water, and geophones were placed on the ground in a straight line. Signals registered from a hammer source and from the sparker source were compared. The results of the test measurements are presented and discussed. Analysis of the collected data shows that the characteristics of the generated seismic signal are very promising, confirming the practical applicability of the new high-voltage generator. Besides its signal characteristics, the biggest advantage of the presented device is its size of 0.5 x 0.25 x 0.2 m and weight of approximately 7 kg; these features, together with the small Li-ion battery, make the device easy to transport in the field.
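The stored energy of such a sparker source follows directly from E = ½CV²; the bank size and charging voltage below are illustrative, not the authors' specifications:

```python
def sparker_energy(c_bank, v_charge):
    """Energy (J) stored in the capacitor bank of a spark-gap seismic
    source: E = 1/2 * C * V^2."""
    return 0.5 * c_bank * v_charge ** 2

# e.g. a (hypothetical) 100 uF bank charged to 4 kV stores 800 J
print(f"{sparker_energy(100e-6, 4000.0):.1f} J")
```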
Institute of Scientific and Technical Information of China (English)
周红婷; 宋玮
2016-01-01
To analyze voltage stability in areas where large-scale wind farms are integrated, this paper proposes that the dynamic reactive power control of wind farms should be taken into account. Dynamic reactive power compensation devices in wind farms currently rely on constant reactive power control. Using the voltage/reactive-power sensitivity method, the paper first shows that this control mode causes a voltage rise problem in the collection area. Using small-signal stability analysis, it then shows that there is a strong interaction between dynamic reactive power compensation devices operating under high-voltage-side constant voltage control, which can lead to unstable voltage oscillations. Finally, taking a wind power collection area in North China as an example, the voltage responses to a small disturbance are compared in PSS/E when all the dynamic reactive power compensation devices in the area adopt constant reactive power control, high-voltage-side constant voltage control, and low-voltage-side constant voltage control, respectively. The simulation results verify the analysis and show that considering the dynamic reactive power control of wind farms is necessary when studying voltage stability in wind power collection areas.
DEFF Research Database (Denmark)
Soleimani, Hamed; Kannan, Govindan
2015-01-01
Meta-heuristic algorithms are considered in developing a new hybrid algorithm from the genetic algorithm (GA) and particle swarm optimization (PSO). Analyzing the strengths and weaknesses of these algorithms leads us to improve the GA using aspects of PSO. A new hybrid algorithm is therefore proposed, and a complete validation process is undertaken using CPLEX and MATLAB software. In small instances, the global optimum points from CPLEX are compared against the proposed hybrid algorithm, the genetic algorithm, and particle swarm optimization. Then, in small, mid-size, and large instances, the performances of the proposed meta-heuristics are analyzed and evaluated. Finally, a case study involving an Iranian hospital furniture manufacturer is used to evaluate the proposed solution approach. The results reveal the superiority of the proposed hybrid algorithm over the GA and PSO.
Mitigation of Voltage Sags in CIGRE Low Voltage Distribution Network
DEFF Research Database (Denmark)
Mustafa, Ghullam; Bak-Jensen, Birgitte; Mahat, Pukar;
2013-01-01
This paper shows how to mitigate voltage sags in the CIGRE Low Voltage (LV) test network and networks like it. The voltage sags in the tested cases of the CIGRE LV test network are mainly due to three-phase faults. Compensation of voltage sags in the different parts of the CIGRE distribution network is done using the four STATCOM compensators already existing in the test grid. The simulations are carried out in DIgSILENT PowerFactory version 15.0.
Optimal planning of high voltage distribution substations
Institute of Scientific and Technical Information of China (English)
YU Yixin; YAN Xuefei; ZHANG Yongwu
2007-01-01
Aimed at solving the problem of optimal planning of high-voltage distribution substations, an efficient method is put forward. The method divides the problem into two sub-problems: source locating and combinatorial optimization. The algorithm of allocating and locating alternately (ALA) is widely used for the source locating problem, but it depends on the initial location to a large degree; thus, some modifications were made to the ALA algorithm, which greatly improve the quality of its solutions. In addition, considering the non-convex and non-concave nature of the combinatorial optimization sub-problem, the branch-and-bound technique was adopted to obtain or approximate a global optimal solution. To improve the efficiency of the branch-and-bound technique, heuristic principles were proposed to cut branches unlikely to contain a global optimal solution. Examples show that the proposed algorithm meets engineering requirements and is an effective approach to rapidly solving the optimal planning problem for high-voltage distribution substations.
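The ALA step the abstract refers to can be sketched as a load-weighted location-allocation loop (illustrative data; the authors' modifications and the branch-and-bound stage are not reproduced):

```python
import random

def ala(loads, k, iters=50, seed=1):
    """Allocating-and-locating-alternately: assign each load point to its
    nearest substation site, then move every site to the load-weighted
    centroid of its cluster; repeat until the sites are stable."""
    rng = random.Random(seed)
    sites = [list(p) for p, _ in rng.sample(loads, k)]  # initial locations
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for (x, y), w in loads:                         # allocate step
            j = min(range(k),
                    key=lambda j: (x - sites[j][0]) ** 2
                                  + (y - sites[j][1]) ** 2)
            clusters[j].append(((x, y), w))
        new = []
        for j, cl in enumerate(clusters):               # locate step
            if not cl:
                new.append(sites[j])                    # keep empty site
                continue
            tw = sum(w for _, w in cl)
            new.append([sum(x * w for (x, _), w in cl) / tw,
                        sum(y * w for (_, y), w in cl) / tw])
        if new == sites:
            break
        sites = new
    return sites

# two tight load clusters around (0,0) and (10,10), with weights
loads = [((0, 0), 2), ((1, 0), 1), ((0, 1), 1),
         ((10, 10), 2), ((11, 10), 1), ((10, 11), 1)]
sites = sorted(ala(loads, k=2))
print([[round(c, 2) for c in s] for s in sites])
# → [[0.25, 0.25], [10.25, 10.25]]
```

The loop's dependence on the initial `rng.sample` choice is exactly the initial-location sensitivity the paper's modifications target.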
Voltage Controlled Dynamic Demand Response
DEFF Research Database (Denmark)
Bhattarai, Bishnu Prasad; Bak-Jensen, Birgitte; Mahat, Pukar
2013-01-01
An adaptive dynamic model has been developed to determine the composite voltage dependency of an aggregated load at feeder level. Following the demand dispatch or control signal, the optimum voltage setting at the LV substation is determined based on the voltage dependency of the load. Furthermore, a new technique...
Brainard, John P.; Christenson, Todd R.
2009-11-03
A charge-pump voltage converter for converting a low voltage provided by a low-voltage source to a higher voltage. Charge is inductively generated on a transfer rotor electrode during its transit past an inductor stator electrode and subsequently transferred by the rotating rotor to a collector stator electrode for storage or use. Repetition of the charge transfer process leads to a build-up of voltage on a charge-receiving device. Connection of multiple charge-pump voltage converters in series can generate higher voltages, and connection of multiple charge-pump voltage converters in parallel can generate higher currents. Microelectromechanical (MEMS) embodiments of this invention provide a small and compact high-voltage (several hundred V) voltage source starting with a few-V initial voltage source. The microscale size of many embodiments of this invention make it ideally suited for MEMS- and other micro-applications where integration of the voltage or charge source in a small package is highly desirable.
Transient voltage sharing in series-coupled high voltage switches
Directory of Open Access Journals (Sweden)
Editorial Office
1992-07-01
For switching voltages in excess of the maximum blocking voltage of a switching element (for example, a thyristor, MOSFET or bipolar transistor), such elements are often coupled in series, and additional circuitry has to be provided to ensure equal voltage sharing. Between each such series element and system ground there is a certain parasitic capacitance that may draw a significant current during high-speed voltage transients. The "open" switch is modelled as a ladder network. Analysis reveals an exponential progression in the distribution of the applied voltage across the elements. Overstressing thus occurs in some of the elements at levels of the total voltage significantly below the design value. This difficulty is overcome by grading the voltage-sharing circuitry, coupled in parallel with each element, in a prescribed manner, as set out here.
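The ladder-network analysis can be reproduced numerically: with series (element) capacitance Cs and parasitic capacitance Cg at each junction, nodal charge balance gives the exponential (sinh-type) voltage distribution the abstract describes. The component values below are illustrative.

```python
import numpy as np

def ladder_node_voltages(n_elem, c_series, c_ground, v_total):
    """Transient (capacitive) voltage distribution across n_elem series
    switch elements with parasitic capacitance c_ground at each junction.
    Node 0 is grounded, node n_elem is at v_total; returns node voltages."""
    n = n_elem - 1                         # interior junction nodes
    A = np.zeros((n, n))
    b = np.zeros(n)
    for k in range(n):                     # charge balance at junction k+1
        A[k, k] = 2 * c_series + c_ground
        if k > 0:
            A[k, k - 1] = -c_series
        if k < n - 1:
            A[k, k + 1] = -c_series
    b[-1] = c_series * v_total             # coupling to the HV terminal
    v = np.linalg.solve(A, b)
    return np.concatenate(([0.0], v, [v_total]))

v = ladder_node_voltages(n_elem=8, c_series=100e-12, c_ground=20e-12,
                         v_total=10e3)
stress = np.diff(v)                        # per-element voltage
print(f"top element: {stress[-1]:.0f} V, equal share would be "
      f"{10e3 / 8:.0f} V")
```

The element nearest the high-voltage terminal carries far more than its equal share, which is why graded parallel circuitry is needed.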
Directory of Open Access Journals (Sweden)
Hadi Nazem-Bokaee
2015-09-01
The Total Membrane Influx constrained Flux Balance Analysis (ToMI-FBA) algorithm was developed in this research as a new tool to help researchers decide which microbial host and medium formulation are optimal for expressing a new metabolic pathway. ToMI-FBA relies on genome-scale metabolic flux modeling and a novel in silico cell membrane influx constraint that specifies the flux of atoms (not molecules) into the cell through all possible membrane transporters. The ToMI constraint is constructed by adding an extra row and column to the stoichiometric matrix of a genome-scale metabolic flux model. In this research, the mathematical formulation of the ToMI constraint is given along with four case studies that demonstrate its usefulness. In Case Study 1, ToMI-FBA returned an optimal culture medium formulation for the production of isobutanol from Bacillus subtilis; significant levels of L-valine were recommended to optimize production, a result that has been observed experimentally. In Case Study 2, it is demonstrated how the carbon-to-nitrogen uptake ratio can be specified as an additional ToMI-FBA constraint, investigated for maximizing medium-chain-length polyhydroxyalkanoate (mcl-PHA) production from Pseudomonas putida KT2440. In Case Study 3, ToMI-FBA revealed a strategy of adding cellobiose to increase ethanol selectivity during the stationary growth phase of Clostridium acetobutylicum ATCC 824; this strategy was also validated experimentally. Finally, in Case Study 4, B. subtilis was identified as a superior host to Escherichia coli, Saccharomyces cerevisiae, and Synechocystis PCC6803 for the production of artemisinate.
Redesigning linear algebra algorithms
Energy Technology Data Exchange (ETDEWEB)
Dongarra, J.J.
1983-01-01
Many of the standard algorithms in linear algebra as implemented in FORTRAN do not achieve maximum performance on today's large-scale vector computers. The author examines the problem and constructs alternative formulations of algorithms that do not lose the clarity of the original algorithm or sacrifice the FORTRAN portable environment, but do gain the performance attainable on these supercomputers. The resulting implementation not only performs well on vector computers but also increases performance on conventional sequential computers. 13 references.
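The kind of reformulation discussed, e.g. switching a matrix-vector product from inner-product (row) order to SAXPY (column) order so the machine sees one long vector operation per column, can be sketched as follows (a NumPy illustration of the idea, not Dongarra's FORTRAN code):

```python
import numpy as np

def matvec_ij(a, x):
    """Row-oriented (inner-product) form: y[i] = sum_j a[i,j] * x[j]."""
    y = np.zeros(a.shape[0])
    for i in range(a.shape[0]):
        y[i] = a[i, :] @ x
    return y

def matvec_ji(a, x):
    """Column-oriented (SAXPY) form: accumulate x[j] * column_j, the
    reformulation that keeps vector pipelines full on vector machines."""
    y = np.zeros(a.shape[0])
    for j in range(a.shape[1]):
        y += x[j] * a[:, j]        # one long vector operation per column
    return y

rng = np.random.default_rng(0)
a, x = rng.standard_normal((50, 40)), rng.standard_normal(40)
print(np.allclose(matvec_ij(a, x), matvec_ji(a, x)))  # → True
```

Both orderings compute the same result; they differ only in which loop is expressed as a vector operation, which is exactly the clarity-preserving restructuring the paper advocates.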
A comparative study for detection and measurement of voltage disturbance in online condition
Directory of Open Access Journals (Sweden)
Vijay Gajanan Neve
2015-05-01
Voltage disturbance is the most important power quality problem faced by many industrial customers; it includes voltage sags, swells, spikes, and harmonics. Real-time detection of these voltage disturbances poses various problems. This paper compares methods for detecting voltage sags and swells in real time on the basis of detection time, magnitude, the effect of windowing, and the effect of sampling frequency. The RMS, peak, Fourier transform, and missing-voltage algorithms are introduced and discussed for real-time implementation. Comparative analysis reveals that quantification of voltage sags and swells is possible using these measurements. All the sag and swell detection techniques were tested online with the help of an Advantech data acquisition card, with the sag and swell events generated experimentally in the laboratory.
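The RMS method, the simplest of the compared detectors, can be sketched as a one-cycle sliding window (the 0.9/1.1 p.u. thresholds are the common convention, used here illustratively):

```python
import numpy as np

def rms_sag_swell(v, fs, f0=50.0, sag=0.9, swell=1.1, v_nom=1.0):
    """One-cycle sliding-RMS detector: flags samples whose RMS (over the
    preceding fundamental cycle) leaves the [sag, swell] band."""
    n_cyc = int(fs / f0)                            # samples per cycle
    sq = np.convolve(v ** 2, np.ones(n_cyc) / n_cyc, mode="valid")
    rms = np.sqrt(sq)
    return rms, (rms < sag * v_nom), (rms > swell * v_nom)

fs, f0 = 5000.0, 50.0
t = np.arange(0, 0.2, 1 / fs)
v = np.sqrt(2) * np.sin(2 * np.pi * f0 * t)         # 1.0 p.u. RMS
v[(t >= 0.08) & (t < 0.14)] *= 0.5                  # 50% sag for 3 cycles
rms, is_sag, is_swell = rms_sag_swell(v, fs)
print(is_sag.any(), is_swell.any())  # → True False
```

The one-cycle window is what limits the method's detection time: the RMS only fully reflects the sag one cycle after its onset, one of the trade-offs the paper compares.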
Alaraj, Muhannad; Radenkovic, Miloje; Park, Jae-Do
2017-02-01
Microbial fuel cells (MFCs) are renewable and sustainable energy sources that can be used for various applications. The MFC output power depends on its biochemical conditions as well as the terminal operating point in terms of output voltage and current. For a given operating condition, there exists one operating point, the maximum power point (MPP), that yields the maximum possible power from the MFC. However, this MPP may vary and needs to be tracked in order to maintain maximum power extraction. Furthermore, MFC reactors often develop voltage overshoots that cause drastic drops in the terminal voltage, current, and output power. When a voltage overshoot happens, an additional control measure is necessary, as conventional MPPT algorithms fail because of the change in the voltage-current relationship. In this paper, the extremum seeking (ES) algorithm is used to track the varying MPP, and a voltage overshoot avoidance (VOA) algorithm is developed to manage voltage overshoot conditions. The proposed ES-MPPT with VOA algorithm extracted 197.2 mJ during a 10-min operation while avoiding voltage overshoot, whereas the ES-MPPT-only scheme stopped harvesting after only 18.75 mJ because of the voltage overshoot that occurred at 0.4 min.
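A minimal extremum-seeking MPPT loop (sinusoidal perturbation, demodulation, gradient integration) on a toy quadratic power curve; the gains, the curve, and the parameters are all illustrative, and the paper's voltage-overshoot avoidance logic is not reproduced:

```python
import math

def es_mppt(power, v0, a=0.05, k=50.0, lp=0.002,
            omega=2 * math.pi * 5, dt=1e-3, steps=40000):
    """Sinusoidal-perturbation extremum seeking: modulate the operating
    voltage, correlate the measured power with the perturbation, and
    climb the resulting gradient estimate toward the MPP."""
    v_hat = v0
    p_filt = power(v0)
    for n in range(steps):
        s = math.sin(omega * n * dt)
        p = power(v_hat + a * s)           # perturbed power measurement
        p_filt += lp * (p - p_filt)        # slow low-pass (DC estimate)
        g = (p - p_filt) * s               # demodulated gradient estimate
        v_hat += k * g * dt                # integrate toward the extremum
    return v_hat

# toy MFC-like curve: 0.8 V open circuit, 10 ohm internal resistance
power = lambda v: v * (0.8 - v) / 10.0     # MPP at 0.4 V
print(round(es_mppt(power, v0=0.2), 2))  # → 0.4
```

Because ES only assumes the power map has a single extremum, it keeps tracking as the MPP drifts; the failure mode the paper addresses is precisely when a voltage overshoot breaks that assumption.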
Multi-bottleneck Scheduling Algorithm for Large-scale Job Shops
Institute of Scientific and Technical Information of China (English)
翟颖妮; 孙树栋; 杨宏安; 牛刚刚; 袁宗寅
2011-01-01
To solve large-scale job shop scheduling problems (LSJSSP), a multi-bottleneck scheduling algorithm based on rolling-horizon decomposition is proposed. The algorithm uses the critical path method to detect bottlenecks, and decomposes the LSJSSP into a series of sub-problems along the time axis according to the process routings of the jobs. In constructing the sub-problems, a load-balanced distribution rule spreads the load of each job evenly across the sub-problems, stabilizing the solution process. In solving the sub-problems, following the principle of "bottleneck machines lead non-bottleneck machines" from the Theory of Constraints (TOC), bottleneck operations are scheduled by a genetic algorithm while non-bottleneck operations are scheduled quickly by dispatching rules, improving solving efficiency. Re-optimization of the overlapping operations in adjacent sub-problems, together with a strategy of evaluating chromosome fitness on the global solution, avoids the limitations of the decomposition and solving process and improves solution quality. Simulation results show that the proposed algorithm solves the LSJSSP with satisfactory efficiency and quality.
Institute of Scientific and Technical Information of China (English)
金涛; 蔡超; 王俊; 阮玲
2013-01-01
The current waveforms of the DC circuit breaker oscillation loop are analyzed by a new online parameter estimation method based on a genetic algorithm (GA). The method estimates the capacitance, inductance, and damping resistance of the DC circuit breaker oscillation circuit, allowing the performance of the oscillation loop to be evaluated accurately. Using online monitoring data and off-line test data from the Gezhouba converter station, the accuracy and reliability of the new method are validated by comparison with traditional least-squares curve fitting in MATLAB. Field trials show that the GA outperforms the traditional method in parameter calculation and condition assessment of HVDC circuit breakers.
Voltage stability analysis in the new deregulated environment
Zhu, Tong
Nowadays, a significant portion of the power industry is under deregulation. Under this new circumstance, network security analysis is more critical and more difficult. One of the most important issues in network security analysis is voltage stability analysis. Due to the expected higher utilization of equipment induced by competition in a power market that covers bigger power systems, this issue is increasingly acute after deregulation. In this dissertation, some selected topics of voltage stability analysis are covered. In the first part, after a brief review of general concepts of continuation power flow (CPF), investigations on various matrix analysis techniques to improve the speed of CPF calculation for large systems are reported. Based on these improvements, a new CPF algorithm is proposed. This new method is then tested by an inter-area transaction in a large inter-connected power system. In the second part, the Arnoldi algorithm, the best method to find a few minimum singular values for a large sparse matrix, is introduced into the modal analysis for the first time. This new modal analysis is applied to the estimation of the point of voltage collapse and contingency evaluation in voltage security assessment. Simulations show that the new method is very efficient. In the third part, after transient voltage stability component models are investigated systematically, a novel system model for transient voltage stability analysis, which is a logical-algebraic-differential-difference equation (LADDE), is offered. As an example, TCSC (Thyristor controlled series capacitors) is addressed as a transient voltage stabilizing controller. After a TCSC transient voltage stability model is outlined, a new TCSC controller is proposed to enhance both fault related and load increasing related transient voltage stability. Its ability is proven by the simulation.
Fuss, Franz Konstantin
2013-01-01
Standard methods for computing the fractal dimensions of time series are usually tested with continuous nowhere differentiable functions, but not benchmarked with actual signals. Therefore they can produce opposite results in extreme signals. These methods also use different scaling methods, that is, different amplitude multipliers, which makes it difficult to compare fractal dimensions obtained from different methods. The purpose of this research was to develop an optimisation method that computes the fractal dimension of a normalised (dimensionless) and modified time series signal with a robust algorithm and a running average method, and that maximises the difference between two fractal dimensions, for example, a minimum and a maximum one. The signal is modified by transforming its amplitude by a multiplier, which has a non-linear effect on the signal's time derivative. The optimisation method identifies the optimal multiplier of the normalised amplitude for targeted decision making based on fractal dimensions. The optimisation method provides an additional filter effect and makes the fractal dimensions less noisy. The method is exemplified by, and explained with, different signals, such as human movement, EEG, and acoustic signals.
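The fractal-dimension machinery such an optimisation builds on can be illustrated with the standard Higuchi estimator (one of the common methods the paper benchmarks against; this is not the authors' optimisation method itself, and `kmax` is an illustrative choice):

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi estimate of the fractal dimension of a time series:
    slope of log(mean curve length L(k)) versus log(1/k)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    lk = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):                     # k decimated sub-series
            idx = np.arange(m, n, k)
            d = np.abs(np.diff(x[idx])).sum()  # sub-series curve length
            norm = (n - 1) / ((len(idx) - 1) * k)
            lengths.append(d * norm / k)
        lk.append(np.mean(lengths))
    k = np.arange(1, kmax + 1)
    slope, _ = np.polyfit(np.log(1.0 / k), np.log(lk), 1)
    return slope

line = np.linspace(0.0, 1.0, 1000)      # smooth monotone signal: FD ~ 1
noise = np.random.default_rng(0).standard_normal(1000)  # white noise: FD ~ 2
print(round(higuchi_fd(line), 2))  # → 1.0
print(round(higuchi_fd(noise), 2))
```

Rescaling the amplitude of `x` by a multiplier changes `higuchi_fd` for most estimators only through the non-linear interaction with the time axis, which is the sensitivity the paper's multiplier optimisation exploits.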