WorldWideScience

Sample records for incremental sampling-based algorithms

  1. Performance Evaluation of Incremental K-means Clustering Algorithm

    Chakraborty, Sanjay; Nagwani, N. K.

    2014-01-01

    The incremental K-means clustering algorithm was proposed and analysed in [Chakraborty and Nagwani, 2011]. It is an innovative approach applicable in periodically incremental environments that must deal with bulk updates. In this paper the performance of this incremental K-means clustering algorithm is evaluated using an air pollution database. The paper also compares the performance evaluations of existing K-means clustering and i...
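
    As an illustration of the general idea (not the authors' exact procedure), the sketch below folds a new batch of points into an existing K-means model by keeping per-cluster counts and updating each centroid as a running mean, so old points never need to be revisited. Names and parameter values are illustrative.

```python
import numpy as np

def incremental_kmeans_update(centroids, counts, new_batch):
    """Fold a new batch of points into existing clusters.

    centroids : (k, d) current cluster means
    counts    : (k,)  number of points already assigned to each cluster
    new_batch : (n, d) newly arrived points
    """
    for x in new_batch:
        # assign the point to its nearest existing centroid
        j = np.argmin(np.linalg.norm(centroids - x, axis=1))
        counts[j] += 1
        # running-mean update: no need to revisit previously clustered points
        centroids[j] += (x - centroids[j]) / counts[j]
    return centroids, counts

# usage: start from a converged K-means model, then absorb a periodic update
rng = np.random.default_rng(0)
centroids = rng.normal(size=(3, 2))
counts = np.ones(3)
centroids, counts = incremental_kmeans_update(centroids, counts,
                                              rng.normal(size=(100, 2)))
```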

  2. Parallel Algorithm for Incremental Betweenness Centrality on Large Graphs

    Jamour, Fuad Tarek; Skiadopoulos, Spiros; Kalnis, Panos

    2017-01-01

    Existing approaches cannot scale to large graphs: they either require excessive memory (i.e., quadratic in the size of the input graph) or perform unnecessary computations, rendering them prohibitively slow. We propose iCentral, a novel incremental algorithm for computing betweenness centrality in evolving

  3. Generation of Referring Expressions: Assessing the Incremental Algorithm

    van Deemter, Kees; Gatt, Albert; van der Sluis, Ielka; Power, Richard

    2012-01-01

    A substantial amount of recent work in natural language generation has focused on the generation of "one-shot" referring expressions whose only aim is to identify a target referent. Dale and Reiter's Incremental Algorithm (IA) is often thought to be the best algorithm for maximizing the similarity to referring expressions produced by people. We…
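
    For context, Dale and Reiter's Incremental Algorithm greedily walks a fixed preference order of attributes and keeps every attribute value that rules out at least one remaining distractor. A minimal sketch, with a hypothetical furniture domain standing in for the experimental stimuli:

```python
def incremental_algorithm(target, distractors, preference_order):
    """Dale & Reiter-style IA (simplified): add attributes in a fixed
    preference order until all distractors are ruled out."""
    description = {}
    remaining = list(distractors)
    for attr in preference_order:
        value = target[attr]
        ruled_out = [d for d in remaining if d.get(attr) != value]
        if ruled_out:                      # attribute has discriminatory power
            description[attr] = value
            remaining = [d for d in remaining if d.get(attr) == value]
        if not remaining:                  # target uniquely identified
            return description
    return description                     # may still be ambiguous

# hypothetical domain: describe the target among two pieces of furniture
target = {"type": "chair", "colour": "red", "size": "large"}
distractors = [{"type": "chair", "colour": "blue", "size": "large"},
               {"type": "table", "colour": "red", "size": "small"}]
print(incremental_algorithm(target, distractors, ["type", "colour", "size"]))
# -> {'type': 'chair', 'colour': 'red'}
```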

  4. A parallel ILP algorithm that incorporates incremental batch learning

    Nuno Fonseca; Rui Camacho; Fernando Silva

    2003-01-01

    In this paper we tackle the problems of efficiency and scalability faced by Inductive Logic Programming (ILP) systems. We propose the use of parallelism to improve efficiency and the use of incremental batch learning to address the scalability problem. We describe a novel parallel algorithm that incorporates into ILP the method of incremental batch learning. The theoretical complexity of the algorithm indicates that a linear speedup can be achieved.

  5. Phase retrieval via incremental truncated amplitude flow algorithm

    Zhang, Quanbing; Wang, Zhifa; Wang, Linjie; Cheng, Shichao

    2017-10-01

    This paper considers the phase retrieval problem of recovering an unknown signal from given quadratic measurements. A phase retrieval algorithm based on Incremental Truncated Amplitude Flow (ITAF), which combines the ITWF algorithm and the TAF algorithm, is proposed. The proposed ITAF algorithm enhances the initialization by performing both truncation methods used in ITWF and TAF, and improves performance in the gradient stage by applying the incremental method proposed in ITWF to the loop stage of TAF. Moreover, the original sampling vectors and measurements are preprocessed before initialization according to the variance of the sensing matrix. Simulation experiments verified the feasibility and validity of the proposed ITAF algorithm. The experimental results show that it obtains a higher success rate and faster convergence compared with other algorithms. In particular, for noiseless random Gaussian signals, ITAF can accurately recover any real-valued signal from a number of magnitude measurements about 2.5 times the signal length, which is close to the theoretical limit (about 2 times the signal length). It usually converges to the optimal solution within 20 iterations, far fewer than the state-of-the-art algorithms require.
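
    As a rough illustration of the incremental gradient (loop) stage only, the sketch below cycles through the amplitude measurements one at a time and applies Kaczmarz-style steps to the amplitude-flow objective; the truncation rules and enhanced initialization of ITAF/ITWF/TAF are deliberately omitted, and the demo starts near the true signal to show refinement only.

```python
import numpy as np

def incremental_amplitude_flow(A, psi, z0, mu=0.5, n_epochs=30, seed=0):
    """Minimise sum_i (|a_i^T z| - psi_i)^2 by visiting measurements one
    at a time (incremental / Kaczmarz-style steps); no truncation."""
    rng = np.random.default_rng(seed)
    m, _ = A.shape
    z = z0.astype(float).copy()
    for _ in range(n_epochs):
        for i in rng.permutation(m):
            ai = A[i]
            inner = ai @ z
            residual = np.abs(inner) - psi[i]
            # normalised single-measurement gradient step
            z -= mu * residual * np.sign(inner) * ai / (ai @ ai)
    return z

# toy refinement experiment with m ~ 2.5n real Gaussian measurements,
# initialised near the true signal (stand-in for a spectral initialisation)
rng = np.random.default_rng(1)
n, m = 50, 125
x = rng.standard_normal(n)
A = rng.standard_normal((m, n))
psi = np.abs(A @ x)
z = incremental_amplitude_flow(A, psi, z0=x + 0.1 * rng.standard_normal(n))
print(min(np.linalg.norm(z - x), np.linalg.norm(z + x)))  # small residual error
```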

  6. Parallel Algorithm for Incremental Betweenness Centrality on Large Graphs

    Jamour, Fuad Tarek

    2017-10-17

    Betweenness centrality quantifies the importance of nodes in a graph in many applications, including network analysis, community detection and identification of influential users. Typically, graphs in such applications evolve over time. Thus, the computation of betweenness centrality should be performed incrementally. This is challenging because updating even a single edge may trigger the computation of all-pairs shortest paths in the entire graph. Existing approaches cannot scale to large graphs: they either require excessive memory (i.e., quadratic to the size of the input graph) or perform unnecessary computations rendering them prohibitively slow. We propose iCentral, a novel incremental algorithm for computing betweenness centrality in evolving graphs. We decompose the graph into biconnected components and prove that processing can be localized within the affected components. iCentral is the first algorithm to support incremental betweenness centrality computation within a graph component. This is done efficiently, in linear space; consequently, iCentral scales to large graphs. We demonstrate with real datasets that the serial implementation of iCentral is up to 3.7 times faster than existing serial methods. Our parallel implementation, which scales to large graphs, is an order of magnitude faster than the state-of-the-art parallel algorithm, while using an order of magnitude less computational resources.

  7. Adaptive Incremental Genetic Algorithm for Task Scheduling in Cloud Environments

    Kairong Duan

    2018-05-01

    Cloud computing is a new commercial model that enables customers to acquire large amounts of virtual resources on demand. Resources including hardware and software can be delivered as services and measured by specific usage of storage, processing, bandwidth, etc. In Cloud computing, task scheduling is the process of mapping cloud tasks to Virtual Machines (VMs). When binding tasks to VMs, the scheduling strategy has an important influence on the efficiency of the datacenter and the related energy consumption. Although many traditional scheduling algorithms have been applied on various platforms, they may not work efficiently due to the large number of user requests, the variety of computation resources and the complexity of the Cloud environment. In this paper, we tackle the task scheduling problem, which aims to minimize makespan, with a Genetic Algorithm (GA). We propose an incremental GA with adaptive probabilities of crossover and mutation. The mutation and crossover rates change across generations and also vary between individuals. Large numbers of tasks are randomly generated to simulate various scales of the task scheduling problem in a Cloud environment. Based on the instance types of Amazon EC2, we implemented virtual machines with different computing capacities on CloudSim. We compared the performance of the adaptive incremental GA with that of Standard GA, Min-Min, Max-Min, Simulated Annealing and the Artificial Bee Colony Algorithm in finding the optimal scheme. Experimental results show that the proposed algorithm can achieve feasible solutions with acceptable makespan in less computation time.
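
    A hedged sketch of the adaptive-rate idea: crossover and mutation probabilities that shrink both with the generation index and with an individual's closeness to the best fitness. The schedule and numbers below are illustrative, not the paper's exact formulas.

```python
import random

def adaptive_rates(generation, max_generations, fitness, best, avg,
                   pc_range=(0.6, 0.9), pm_range=(0.01, 0.1)):
    """Return (crossover_prob, mutation_prob) for one individual.

    Rates shrink as generations progress and as an individual approaches
    the best fitness (illustrative schedule; fitness = makespan, minimised).
    """
    progress = generation / max_generations
    if fitness <= avg:  # fitter-than-average individuals get gentler operators
        closeness = (avg - fitness) / max(avg - best, 1e-9)
    else:
        closeness = 0.0
    pc = pc_range[1] - (pc_range[1] - pc_range[0]) * max(progress, closeness)
    pm = pm_range[1] - (pm_range[1] - pm_range[0]) * max(progress, closeness)
    return pc, pm

# usage inside a GA loop (fitness = makespan of a task-to-VM assignment)
pc, pm = adaptive_rates(generation=20, max_generations=100,
                        fitness=118.0, best=95.0, avg=130.0)
if random.random() < pc:
    pass  # apply crossover to this individual
```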

  8. An Incremental High-Utility Mining Algorithm with Transaction Insertion

    Gan, Wensheng; Zhang, Binbin

    2015-01-01

    Association-rule mining is commonly used to discover useful and meaningful patterns from a very large database. It only considers the occurrence frequencies of items to reveal the relationships among itemsets. Traditional association-rule mining is, however, not suitable in real-world applications since the items purchased by a customer may have various associated factors, such as profit or quantity. High-utility mining was designed to address the limitations of association-rule mining by considering both the quantity and profit measures. Most high-utility mining algorithms are designed to handle static databases. Few studies handle dynamic high-utility mining with transaction insertion, which otherwise requires database rescans and suffers from the combinational explosion of the pattern-growth mechanism. In this paper, an efficient incremental algorithm with transaction insertion is designed to reduce computations without candidate generation, based on utility-list structures. The enumeration tree and the relationships between 2-itemsets are also adopted in the proposed algorithm to speed up the computations. Several experiments are conducted to show the performance of the proposed algorithm in terms of runtime, memory consumption, and number of generated patterns. PMID:25811038

  9. An Incremental High-Utility Mining Algorithm with Transaction Insertion

    Jerry Chun-Wei Lin

    2015-01-01

    Association-rule mining is commonly used to discover useful and meaningful patterns from a very large database. It only considers the occurrence frequencies of items to reveal the relationships among itemsets. Traditional association-rule mining is, however, not suitable in real-world applications since the items purchased by a customer may have various associated factors, such as profit or quantity. High-utility mining was designed to address the limitations of association-rule mining by considering both the quantity and profit measures. Most high-utility mining algorithms are designed to handle static databases. Few studies handle dynamic high-utility mining with transaction insertion, which otherwise requires database rescans and suffers from the combinational explosion of the pattern-growth mechanism. In this paper, an efficient incremental algorithm with transaction insertion is designed to reduce computations without candidate generation, based on utility-list structures. The enumeration tree and the relationships between 2-itemsets are also adopted in the proposed algorithm to speed up the computations. Several experiments are conducted to show the performance of the proposed algorithm in terms of runtime, memory consumption, and number of generated patterns.

  10. Using Load Balancing to Scalably Parallelize Sampling-Based Motion Planning Algorithms

    Fidel, Adam; Jacobs, Sam Ade; Sharma, Shishir; Amato, Nancy M.; Rauchwerger, Lawrence

    2014-01-01

    Motion planning, which is the problem of computing feasible paths in an environment for a movable object, has applications in many domains ranging from robotics, to intelligent CAD, to protein folding. The best methods for solving this PSPACE-hard problem are so-called sampling-based planners. Recent work introduced uniform spatial subdivision techniques for parallelizing sampling-based motion planning algorithms that scaled well. However, such methods are prone to load imbalance, as planning time depends on region characteristics and, for most problems, the heterogeneity of the sub problems increases as the number of processors increases. In this work, we introduce two techniques to address load imbalance in the parallelization of sampling-based motion planning algorithms: an adaptive work stealing approach and bulk-synchronous redistribution. We show that applying these techniques to representatives of the two major classes of parallel sampling-based motion planning algorithms, probabilistic roadmaps and rapidly-exploring random trees, results in a more scalable and load-balanced computation on more than 3,000 cores. © 2014 IEEE.

  11. Using Load Balancing to Scalably Parallelize Sampling-Based Motion Planning Algorithms

    Fidel, Adam

    2014-05-01

    Motion planning, which is the problem of computing feasible paths in an environment for a movable object, has applications in many domains ranging from robotics, to intelligent CAD, to protein folding. The best methods for solving this PSPACE-hard problem are so-called sampling-based planners. Recent work introduced uniform spatial subdivision techniques for parallelizing sampling-based motion planning algorithms that scaled well. However, such methods are prone to load imbalance, as planning time depends on region characteristics and, for most problems, the heterogeneity of the sub problems increases as the number of processors increases. In this work, we introduce two techniques to address load imbalance in the parallelization of sampling-based motion planning algorithms: an adaptive work stealing approach and bulk-synchronous redistribution. We show that applying these techniques to representatives of the two major classes of parallel sampling-based motion planning algorithms, probabilistic roadmaps and rapidly-exploring random trees, results in a more scalable and load-balanced computation on more than 3,000 cores. © 2014 IEEE.

  12. Sampling-Based Motion Planning Algorithms for Replanning and Spatial Load Balancing

    Boardman, Beth Leigh [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]

    2017-10-12

    The common theme of this dissertation is sampling-based motion planning with the two key contributions being in the area of replanning and spatial load balancing for robotic systems. Here, we begin by recalling two sampling-based motion planners: the asymptotically optimal rapidly-exploring random tree (RRT*), and the asymptotically optimal probabilistic roadmap (PRM*). We also provide a brief background on collision cones and the Distributed Reactive Collision Avoidance (DRCA) algorithm. The next four chapters detail novel contributions for motion replanning in environments with unexpected static obstacles, for multi-agent collision avoidance, and spatial load balancing. First, we show improved performance of the RRT* when using the proposed Grandparent-Connection (GP) or Focused-Refinement (FR) algorithms. Next, the Goal Tree algorithm for replanning with unexpected static obstacles is detailed and proven to be asymptotically optimal. A multi-agent collision avoidance problem in obstacle environments is approached via the RRT*, leading to the novel Sampling-Based Collision Avoidance (SBCA) algorithm. The SBCA algorithm is proven to guarantee collision free trajectories for all of the agents, even when subject to uncertainties in the knowledge of the other agents’ positions and velocities. Given that a solution exists, we prove that livelocks and deadlock will lead to the cost to the goal being decreased. We introduce a new deconfliction maneuver that decreases the cost-to-come at each step. This new maneuver removes the possibility of livelocks and allows a result to be formed that proves convergence to the goal configurations. Finally, we present a limited range Graph-based Spatial Load Balancing (GSLB) algorithm which fairly divides a non-convex space among multiple agents that are subject to differential constraints and have a limited travel distance. The GSLB is proven to converge to a solution when maximizing the area covered by the agents. The analysis

  13. Final Report: Sampling-Based Algorithms for Estimating Structure in Big Data.

    Matulef, Kevin Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]

    2017-02-01

    The purpose of this project was to develop sampling-based algorithms to discover hidden structure in massive data sets. Inferring structure in large data sets is an increasingly common task in many critical national security applications. These data sets come from myriad sources, such as network traffic, sensor data, and data generated by large-scale simulations. They are often so large that traditional data mining techniques are time consuming or even infeasible. To address this problem, we focus on a class of algorithms that do not compute an exact answer, but instead use sampling to compute an approximate answer using fewer resources. The particular class of algorithms that we focus on are streaming algorithms, so called because they are designed to handle high-throughput streams of data. Streaming algorithms have only a small amount of working storage - much less than the size of the full data stream - so they must necessarily use sampling to approximate the correct answer. We present two results: * A streaming algorithm called HyperHeadTail, which estimates the degree distribution of a graph (i.e., the distribution of the number of connections for each node in a network). The degree distribution is a fundamental graph property, but prior work on estimating the degree distribution in a streaming setting was impractical for many real-world applications. We improve upon prior work by developing an algorithm that can handle streams with repeated edges, and graph structures that evolve over time. * An algorithm for the task of maintaining a weighted subsample of items in a stream, when the items must be sampled according to their weight, and the weights are dynamically changing. To our knowledge, this is the first such algorithm designed for dynamically evolving weights. We expect it may be useful as a building block for other streaming algorithms on dynamic data sets.
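
    The second result concerns maintaining a weighted subsample of a stream. As background, here is a sketch of classic weighted reservoir sampling with static weights (the A-Res method of Efraimidis and Spirtes); the report's contribution, handling weights that change after insertion, is not reproduced here.

```python
import heapq
import random

def weighted_reservoir_sample(stream, k, seed=0):
    """Keep k items from (item, weight) pairs, each item retained with
    probability proportional to its weight (Efraimidis & Spirtes A-Res).
    Weights are assumed fixed once seen; the cited report addresses the
    harder case of weights that change after insertion."""
    random.seed(seed)
    heap = []                                    # min-heap of (key, item)
    for item, weight in stream:
        key = random.random() ** (1.0 / weight)  # larger weight -> larger key
        if len(heap) < k:
            heapq.heappush(heap, (key, item))
        elif key > heap[0][0]:
            heapq.heapreplace(heap, (key, item))
    return [item for _, item in heap]

# toy stream of edges with small integer weights
stream = [(f"edge{i}", 1.0 + (i % 5)) for i in range(10_000)]
print(weighted_reservoir_sample(stream, k=10))
```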

  14. A scalable method for parallelizing sampling-based motion planning algorithms

    Jacobs, Sam Ade; Manavi, Kasra; Burgos, Juan; Denny, Jory; Thomas, Shawna; Amato, Nancy M.

    2012-01-01

    This paper describes a scalable method for parallelizing sampling-based motion planning algorithms. It subdivides configuration space (C-space) into (possibly overlapping) regions and independently, in parallel, uses standard (sequential) sampling-based planners to construct roadmaps in each region. Next, in parallel, regional roadmaps in adjacent regions are connected to form a global roadmap. By subdividing the space and restricting the locality of connection attempts, we reduce the work and inter-processor communication associated with nearest neighbor calculation, a critical bottleneck for scalability in existing parallel motion planning methods. We show that our method is general enough to handle a variety of planning schemes, including the widely used Probabilistic Roadmap (PRM) and Rapidly-exploring Random Trees (RRT) algorithms. We compare our approach to two other existing parallel algorithms and demonstrate that our approach achieves better and more scalable performance. Our approach achieves almost linear scalability on a 2400 core LINUX cluster and on a 153,216 core Cray XE6 petascale machine. © 2012 IEEE.

  15. A scalable method for parallelizing sampling-based motion planning algorithms

    Jacobs, Sam Ade

    2012-05-01

    This paper describes a scalable method for parallelizing sampling-based motion planning algorithms. It subdivides configuration space (C-space) into (possibly overlapping) regions and independently, in parallel, uses standard (sequential) sampling-based planners to construct roadmaps in each region. Next, in parallel, regional roadmaps in adjacent regions are connected to form a global roadmap. By subdividing the space and restricting the locality of connection attempts, we reduce the work and inter-processor communication associated with nearest neighbor calculation, a critical bottleneck for scalability in existing parallel motion planning methods. We show that our method is general enough to handle a variety of planning schemes, including the widely used Probabilistic Roadmap (PRM) and Rapidly-exploring Random Trees (RRT) algorithms. We compare our approach to two other existing parallel algorithms and demonstrate that our approach achieves better and more scalable performance. Our approach achieves almost linear scalability on a 2400 core LINUX cluster and on a 153,216 core Cray XE6 petascale machine. © 2012 IEEE.

  16. A new recursive incremental algorithm for building minimal acyclic deterministic finite automata

    Watson, B.W.; Martin-Vide, C.; Mitrana, V.

    2003-01-01

    This chapter presents a new algorithm for incrementally building minimal acyclic deterministic finite automata. Such minimal automata are a compact representation of a finite set of words (e.g. in a spell checker). The incremental aspect of such algorithms (where the intermediate automaton is

  17. An algorithm to improve sampling efficiency for uncertainty propagation using sampling based method

    Campolina, Daniel; Lima, Paulo Rubens I.; Pereira, Claubia; Veloso, Maria Auxiliadora F.

    2015-01-01

    Sample size and computational uncertainty were varied in order to investigate the sampling efficiency and convergence of the sampling-based method for uncertainty propagation. The transport code MCNPX was used to simulate a LWR model and allow the mapping from uncertain inputs of the benchmark experiment to uncertain outputs. Random sampling efficiency was improved through the use of an algorithm for selecting distributions. Mean range, standard deviation range and skewness were verified in order to obtain a better representation of the uncertainty figures. A standard deviation of 5 pcm in the propagated uncertainties over 10 n-sample replicates was adopted as the convergence criterion for the method. An estimate of 75 pcm uncertainty on the reactor k_eff was obtained by using a sample of size 93 and a computational uncertainty of 28 pcm to propagate the 1σ uncertainty of the burnable poison radius. For a fixed computational time, in order to reduce the variance of the propagated uncertainty, it was found, for the example under investigation, that it is preferable to double the sample size rather than to double the number of particles followed in the Monte Carlo process of the MCNPX code. (author)
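
    The generic sampling-based propagation loop behind such a study looks roughly as follows: draw the uncertain inputs, run the model once per sample, and summarize the spread of the output. A toy algebraic function stands in for the MCNPX transport calculation, and all numbers are illustrative.

```python
import numpy as np

def propagate_uncertainty(model, input_dists, n_samples, seed=0):
    """Monte Carlo propagation: draw inputs, evaluate the model per sample,
    return mean and standard deviation of the output."""
    rng = np.random.default_rng(seed)
    samples = {name: dist(rng, n_samples) for name, dist in input_dists.items()}
    outputs = np.array([model({k: v[i] for k, v in samples.items()})
                        for i in range(n_samples)])
    return outputs.mean(), outputs.std(ddof=1)

# stand-in for the transport calculation: k_eff as a toy function of the
# burnable-poison radius (the real study runs MCNPX for each sample)
def toy_keff(inputs):
    return 1.0 - 0.02 * (inputs["poison_radius"] - 0.5)

mean, std = propagate_uncertainty(
    toy_keff,
    {"poison_radius": lambda rng, n: rng.normal(0.5, 0.05, n)},  # 1-sigma input
    n_samples=93)
print(f"k_eff = {mean:.5f} +/- {std * 1e5:.0f} pcm")
```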

  18. Is It that Difficult to Find a Good Preference Order for the Incremental Algorithm?

    Krahmer, Emiel; Koolen, Ruud; Theune, Mariet

    2012-01-01

    In a recent article published in this journal (van Deemter, Gatt, van der Sluis, & Power, 2012), the authors criticize the Incremental Algorithm (a well-known algorithm for the generation of referring expressions due to Dale & Reiter, 1995, also in this journal) because of its strong reliance on a pre-determined, domain-dependent Preference Order.…

  19. A fast implementation of the incremental backprojection algorithms for parallel beam geometries

    Chen, C.M.; Wang, C.Y.; Cho, Z.H.

    1996-01-01

    Filtered-backprojection algorithms are the most widely used approaches for reconstruction of computed tomographic (CT) images, such as X-ray CT and positron emission tomographic (PET) images. The Incremental backprojection algorithm is a fast backprojection approach based on restructuring the Shepp and Logan algorithm. By exploiting the interdependency (position and values) of adjacent pixels, the Incremental algorithm requires only O(N) and O(N^2) multiplications, in contrast to O(N^2) and O(N^3) multiplications for the Shepp and Logan algorithm, in two-dimensional (2-D) and three-dimensional (3-D) backprojections, respectively, for each view, where N is the size of the image in each dimension. In addition, it may reduce the number of additions for each pixel computation. The improvement achieved by the Incremental algorithm in practice was not, however, as significant as expected. One of the main reasons is the inevitable visiting of pixels outside the beam in the searching flow scheme originally developed for the Incremental algorithm. To optimize the implementation of the Incremental algorithm, an efficient scheme, namely a coded searching flow scheme, is proposed in this paper to minimize the overhead caused by searching for all pixels in a beam. The key idea of this scheme is to encode the searching flow for all pixels inside each beam. While backprojecting, all pixels may be visited without any overhead, since the coded searching flow is used as a priori information. The proposed coded searching flow scheme has been implemented on Sun Sparc 10 and Sun Sparc 20 workstations. The implementation results show that the proposed scheme is 1.45--2.0 times faster than the original searching flow scheme for most cases tested.

  20. Clustering for Binary Data Sets by Using Genetic Algorithm-Incremental K-means

    Saharan, S.; Baragona, R.; Nor, M. E.; Salleh, R. M.; Asrah, N. M.

    2018-04-01

    This research was initially driven by the lack of clustering algorithms that specifically focus on binary data. To overcome this gap in knowledge, a promising technique for analysing this type of data became the main subject of this research, namely Genetic Algorithms (GA). For the purpose of this research, GA was combined with the Incremental K-means (IKM) algorithm to cluster binary data streams. In GAIKM, the objective function is based on a few sufficient statistics that may be easily and quickly calculated on binary numbers. The implementation of IKM gives an advantage in terms of fast convergence. The results show that GAIKM is an efficient and effective new clustering algorithm compared to other clustering algorithms and to the IKM itself. In conclusion, GAIKM outperformed other clustering algorithms such as GCUK, IKM, Scalable K-means (SKM) and K-means clustering, and paves the way for future research involving missing data and outliers.

  1. Multiobjective Memetic Estimation of Distribution Algorithm Based on an Incremental Tournament Local Searcher

    Kaifeng Yang

    2014-01-01

    A novel hybrid multiobjective algorithm is presented in this paper, which combines a new multiobjective estimation of distribution algorithm, an efficient local searcher and ε-dominance. In addition, two multiobjective problems with variable linkages strictly based on manifold distribution are proposed. The Pareto set of a continuous multiobjective optimization problem is, in the decision space, a piecewise low-dimensional continuous manifold. Regularity-based modelling of these manifold features builds the probability distribution model only from global statistical information of the population, so the information carried by promising individuals is not well exploited, which is not beneficial to the search and optimization process. Hence, an incremental tournament local searcher is designed to exploit local information efficiently and accelerate convergence to the true Pareto-optimal front. Moreover, since ε-dominance is a strategy that helps a multiobjective algorithm obtain well-distributed solutions and has low computational complexity, ε-dominance and the incremental tournament local searcher are combined here. The resulting memetic multiobjective estimation of distribution algorithm, MMEDA, is proposed accordingly. The algorithm is validated by experiments on twenty-two test problems, with and without variable linkages, of diverse complexities. Compared with three state-of-the-art multiobjective optimization algorithms, our algorithm achieves comparable results in terms of convergence and diversity metrics.

  2. Multiobjective memetic estimation of distribution algorithm based on an incremental tournament local searcher.

    Yang, Kaifeng; Mu, Li; Yang, Dongdong; Zou, Feng; Wang, Lei; Jiang, Qiaoyong

    2014-01-01

    A novel hybrid multiobjective algorithm is presented in this paper, which combines a new multiobjective estimation of distribution algorithm, an efficient local searcher and ε-dominance. In addition, two multiobjective problems with variable linkages strictly based on manifold distribution are proposed. The Pareto set of a continuous multiobjective optimization problem is, in the decision space, a piecewise low-dimensional continuous manifold. Regularity-based modelling of these manifold features builds the probability distribution model only from global statistical information of the population, so the information carried by promising individuals is not well exploited, which is not beneficial to the search and optimization process. Hence, an incremental tournament local searcher is designed to exploit local information efficiently and accelerate convergence to the true Pareto-optimal front. Moreover, since ε-dominance is a strategy that helps a multiobjective algorithm obtain well-distributed solutions and has low computational complexity, ε-dominance and the incremental tournament local searcher are combined here. The resulting memetic multiobjective estimation of distribution algorithm, MMEDA, is proposed accordingly. The algorithm is validated by experiments on twenty-two test problems, with and without variable linkages, of diverse complexities. Compared with three state-of-the-art multiobjective optimization algorithms, our algorithm achieves comparable results in terms of convergence and diversity metrics.

  3. Preconditioned dynamic mode decomposition and mode selection algorithms for large datasets using incremental proper orthogonal decomposition

    Ohmichi, Yuya

    2017-07-01

    In this letter, we propose a simple and efficient framework of dynamic mode decomposition (DMD) and mode selection for large datasets. The proposed framework explicitly introduces a preconditioning step using an incremental proper orthogonal decomposition (POD) to DMD and mode selection algorithms. By performing the preconditioning step, the DMD and mode selection can be performed with low memory consumption and therefore can be applied to large datasets. Additionally, we propose a simple mode selection algorithm based on a greedy method. The proposed framework is applied to the analysis of three-dimensional flow around a circular cylinder.
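
    A minimal sketch of the preconditioning idea: maintain a low-dimensional orthonormal basis incrementally as snapshots arrive, so that DMD and mode selection can later operate on small projected coordinates. The Gram-Schmidt-style update below is a simplification of incremental POD/SVD methods, not the paper's exact procedure.

```python
import numpy as np

def incremental_basis_update(Q, snapshot, tol=1e-10, max_modes=50):
    """Grow an orthonormal basis Q (n x r) with one new snapshot (n,),
    keeping only the component not already captured by Q."""
    coeffs = Q.T @ snapshot
    residual = snapshot - Q @ coeffs
    norm = np.linalg.norm(residual)
    if norm > tol and Q.shape[1] < max_modes:
        Q = np.column_stack([Q, residual / norm])
    return Q

# stream snapshots that lie (by construction) in a 3-dimensional subspace
rng = np.random.default_rng(0)
n, r_true = 5000, 3
modes = np.linalg.qr(rng.standard_normal((n, r_true)))[0]
Q = np.zeros((n, 0))
for t in range(30):
    snapshot = modes @ rng.standard_normal(r_true)
    Q = incremental_basis_update(Q, snapshot)
print(Q.shape)   # -> (5000, 3): basis captured without storing all snapshots
```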

  4. PCIU: Hardware Implementations of an Efficient Packet Classification Algorithm with an Incremental Update Capability

    O. Ahmed

    2011-01-01

    Packet classification plays a crucial role in a number of network services such as policy-based routing, firewalls, and traffic billing, to name a few. However, classification can be a bottleneck in the above-mentioned applications if not implemented properly and efficiently. In this paper, we propose PCIU, a novel classification algorithm which improves upon previously published work. PCIU provides lower preprocessing time, lower memory consumption, ease of incremental rule update, and reasonable classification time compared to state-of-the-art algorithms. The proposed algorithm was evaluated and compared to RFC and HiCut using several benchmarks. The results obtained indicate that PCIU outperforms these algorithms in terms of speed, memory usage, incremental update capability, and preprocessing time. The algorithm, furthermore, was improved and made more accessible for a variety of applications through implementation in hardware. Two such implementations are detailed and discussed in this paper. The results indicate that a hardware/software codesign approach yields a PCIU solution that is slower, but easier to optimize and improve within time constraints. A hardware accelerator based on an ESL approach using Handel-C, on the other hand, resulted in a 31x speed-up over a pure software implementation running on a state-of-the-art Xeon processor.

  5. A Novel Classification Algorithm Based on Incremental Semi-Supervised Support Vector Machine.

    Fei Gao

    For current computational intelligence techniques, a major challenge is how to learn new concepts in a changing environment. Traditional learning schemes could not adequately address this problem due to a lack of a dynamic data selection mechanism. In this paper, inspired by the human learning process, a novel classification algorithm based on an incremental semi-supervised support vector machine (SVM) is proposed. Through the analysis of the prediction confidence of samples and the data distribution in a changing environment, a "soft-start" approach, a data selection mechanism and a data cleaning mechanism are designed, which complete the construction of our incremental semi-supervised learning system. Noticeably, with the ingenious design procedure of our proposed algorithm, the computational complexity is reduced effectively. In addition, the possible appearance of new labeled samples in the learning process is analysed in detail. The results show that our algorithm does not rely on a model of the sample distribution, has an extremely low rate of introducing wrong semi-labeled samples, and can effectively make use of the unlabeled samples to enrich the knowledge system of the classifier and improve the accuracy rate. Moreover, our method also has outstanding generalization performance and the ability to overcome concept drift in a changing environment.

  6. Modeling of Photovoltaic System with Modified Incremental Conductance Algorithm for Fast Changes of Irradiance

    Saad Motahhir

    2018-01-01

    The first objective of this work is to determine some of the performance parameters characterizing the behavior of particular photovoltaic (PV) panels that are not normally provided in the manufacturers' specifications. These provide the basis for developing a simple model of the electrical behavior of the PV panel. Next, using this model, the effects of varying solar irradiation, temperature, series and shunt resistances, and partial shading on the output of the PV panel are presented. In addition, the PV panel model is used to configure a large photovoltaic array. Next, a boost converter for the PV panel is designed. This converter is placed between the panel and the load in order to control it by means of a maximum power point tracking (MPPT) controller. The MPPT used is based on incremental conductance (INC), and it is demonstrated here that this technique does not respond accurately when solar irradiation is increased. To investigate this, a modified incremental conductance technique is presented in this paper. It is shown that this system does respond accurately and reduces the steady-state oscillations when solar irradiation is increased. Finally, simulations of the conventional and modified algorithms are compared, and the results show that the modified algorithm provides an accurate response to a sudden increase in solar irradiation.
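
    For reference, the decision rule of the conventional INC method compares the incremental conductance dI/dV with the instantaneous conductance -I/V to decide which way to move the reference voltage; the paper's modification for fast irradiance changes is not shown here. Step sizes and readings are illustrative.

```python
def inc_mppt_step(v, i, v_prev, i_prev, v_ref, step=0.5, eps=1e-3):
    """One iteration of the conventional Incremental Conductance MPPT.
    Returns the updated reference voltage for the converter controller."""
    dv, di = v - v_prev, i - i_prev
    if abs(dv) < eps:                 # voltage unchanged
        if abs(di) >= eps:            # irradiance changed: follow the current
            v_ref += step if di > 0 else -step
    else:
        dI_dV = di / dv
        if abs(dI_dV + i / v) < eps:  # dP/dV = 0: at the maximum power point
            pass
        elif dI_dV > -i / v:          # left of the MPP: raise the voltage
            v_ref += step
        else:                         # right of the MPP: lower the voltage
            v_ref -= step
    return v_ref

# usage: called once per control period with panel voltage/current samples
v_ref = 30.0
v_ref = inc_mppt_step(v=30.4, i=7.7, v_prev=30.0, i_prev=7.8, v_ref=v_ref)
```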

  7. An Incremental Classification Algorithm for Mining Data with Feature Space Heterogeneity

    Yu Wang

    2014-01-01

    Feature space heterogeneity often exists in many real-world data sets, so that some features are of different importance for classification over different subsets. Moreover, the pattern of feature space heterogeneity might dynamically change over time as more and more data are accumulated. In this paper, we develop an incremental classification algorithm, Supervised Clustering for Classification with Feature Space Heterogeneity (SCCFSH), to address this problem. In our approach, supervised clustering is implemented to obtain a number of clusters such that samples in each cluster are from the same class. After the removal of outliers, the relevance of features in each cluster is calculated based on their variations in that cluster. The feature relevance is incorporated into distance calculation for classification. The main advantage of SCCFSH lies in the fact that it is capable of solving a classification problem with feature space heterogeneity in an incremental way, which is favorable for online classification tasks with continuously changing data. Experimental results on a series of data sets and an application to a database marketing problem show the efficiency and effectiveness of the proposed approach.

  8. USING THE POPULATION-BASED INCREMENTAL LEARNING ALGORITHM WITH COMPUTER SIMULATION: SOME APPLICATIONS

    J. Bekker

    2012-01-01

    The integration of the population-based incremental learning (PBIL) algorithm with computer simulation shows how this particular combination can be applied to find good solutions to combinatorial optimisation problems. Two illustrative examples are used: the classical inventory problem of finding a reorder point and reorder quantity that minimise costs while achieving a required service level (a stochastic problem), and the signal timing of a complex traffic intersection. Any traffic control system must be designed to minimise the duration of interruptions at intersections while maximising traffic throughput. The duration of the phases of traffic lights is of primary importance in this regard.

  9. Attitude Determination Algorithm based on Relative Quaternion Geometry of Velocity Incremental Vectors for Cost Efficient AHRS Design

    Lee, Byungjin; Lee, Young Jae; Sung, Sangkyung

    2018-05-01

    A novel attitude determination method is investigated that is computationally efficient and implementable in low cost sensor and embedded platform. Recent result on attitude reference system design is adapted to further develop a three-dimensional attitude determination algorithm through the relative velocity incremental measurements. For this, velocity incremental vectors, computed respectively from INS and GPS with different update rate, are compared to generate filter measurement for attitude estimation. In the quaternion-based Kalman filter configuration, an Euler-like attitude perturbation angle is uniquely introduced for reducing filter states and simplifying propagation processes. Furthermore, assuming a small angle approximation between attitude update periods, it is shown that the reduced order filter greatly simplifies the propagation processes. For performance verification, both simulation and experimental studies are completed. A low cost MEMS IMU and GPS receiver are employed for system integration, and comparison with the true trajectory or a high-grade navigation system demonstrates the performance of the proposed algorithm.

  10. The island model for parallel implementation of evolutionary algorithm of Population-Based Incremental Learning (PBIL) optimization

    Lima, Alan M.M. de; Schirru, Roberto

    2000-01-01

    Genetic algorithms are biologically motivated adaptive systems which have been used, with good results, for function optimization. The purpose of this work is to introduce a new parallelization method to be applied to the Population-Based Incremental Learning (PBIL) algorithm. PBIL combines standard genetic algorithm mechanisms with simple competitive learning and has been successfully used in combinatorial optimization problems. The development of this algorithm aims at its application to the reload optimization of PWR nuclear reactors. Tests have been performed with combinatorial optimization problems similar to the reload problem. Results are compared to those of the serial PBIL, showing the new method's superiority and its viability as a tool for solving the nuclear core reload problem. (author)
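
    A hedged sketch of the core PBIL loop (probability vector, sampled population, learning toward the best individual, light mutation); the island-model parallelization and the reactor-reload encoding are not reproduced, and the toy objective is a stand-in.

```python
import numpy as np

def pbil(fitness, n_bits, pop_size=50, lr=0.1, mut_p=0.02, mut_shift=0.05,
         generations=200, seed=0):
    """Population-Based Incremental Learning for bit-string maximisation."""
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)                       # probability vector
    best, best_fit = None, -np.inf
    for _ in range(generations):
        pop = (rng.random((pop_size, n_bits)) < p).astype(int)
        fits = np.array([fitness(ind) for ind in pop])
        elite = pop[np.argmax(fits)]
        if fits.max() > best_fit:
            best, best_fit = elite.copy(), fits.max()
        p = (1 - lr) * p + lr * elite              # learn toward the best sample
        mutate = rng.random(n_bits) < mut_p        # keep some exploration
        p[mutate] = ((1 - mut_shift) * p[mutate]
                     + mut_shift * rng.integers(0, 2, mutate.sum()))
        p = np.clip(p, 0.05, 0.95)                 # keep probabilities off 0/1
    return best, best_fit

# toy combinatorial problem standing in for the core-reload objective
best, best_fit = pbil(lambda ind: ind.sum(), n_bits=40)
print(best_fit)   # approaches 40 (all ones)
```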

  11. Development of the Nonstationary Incremental Analysis Update Algorithm for Sequential Data Assimilation System

    Yoo-Geun Ham

    2016-01-01

    This study introduces a modified version of the incremental analysis updates (IAU), called the nonstationary IAU (NIAU) method, to improve the assimilation accuracy of the IAU while keeping the continuity of the analysis. Similar to the IAU, the NIAU is designed to add analysis increments at every model time step to improve continuity in intermittent data assimilation. However, unlike the IAU, the NIAU procedure uses time-evolved forcing, obtained with the forward operator, as corrections to the model. In terms of the accuracy of the analysis field, the solution of the NIAU is superior to that of the forward IAU, in which the analysis is performed at the beginning of the time window for adding the IAU forcing. This is because, in linear systems, the NIAU solution equals that of an intermittent data assimilation method at the end of the assimilation interval. To obtain the filtering property in the NIAU, a forward operator to propagate the increment is reconstructed with only the dominant singular vectors. An illustration of these advantages of the NIAU is given using the simple 40-variable Lorenz model.
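
    The basic IAU mechanism can be sketched as follows: instead of inserting the full analysis increment at one time, add it in equal parts at every model step over the assimilation window. The model and numbers are toy stand-ins, and the NIAU's time-evolved forcing is not implemented here.

```python
import numpy as np

def iau_window(x, analysis_increment, model_step, n_steps):
    """Integrate over one assimilation window while adding the analysis
    increment in equal parts at every model time step (standard IAU).
    The NIAU variant would instead propagate the increment with a forward
    operator before adding it."""
    forcing = analysis_increment / n_steps
    trajectory = [x.copy()]
    for _ in range(n_steps):
        x = model_step(x) + forcing
        trajectory.append(x.copy())
    return np.array(trajectory)

# toy linear "model": slow decay toward zero
model_step = lambda x: 0.99 * x
x0 = np.array([1.0, -0.5])
increment = np.array([0.2, 0.1])       # analysis minus background
traj = iau_window(x0, increment, model_step, n_steps=10)
print(traj[-1])
```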

  12. TV-constrained incremental algorithms for low-intensity CT image reconstruction

    Rose, Sean D.; Andersen, Martin S.; Sidky, Emil Y.

    2015-01-01

    constraint can be guided by an image reconstructed by filtered backprojection (FBP). We apply our algorithm to low-dose synchrotron X-ray CT data from the Advanced Photon Source (APS) at Argonne National Labs (ANL) to demonstrate its potential utility. We find that the algorithm provides a means of edge-preserving...

  13. Comparison of the genetic algorithm and incremental optimisation routines for a Bayesian inverse modelling based network design

    Nickless, A.; Rayner, P. J.; Erni, B.; Scholes, R. J.

    2018-05-01

    The design of an optimal network of atmospheric monitoring stations for the observation of carbon dioxide (CO2) concentrations can be obtained by applying an optimisation algorithm to a cost function based on minimising posterior uncertainty in the CO2 fluxes obtained from a Bayesian inverse modelling solution. Two candidate optimisation methods assessed were the evolutionary algorithm: the genetic algorithm (GA), and the deterministic algorithm: the incremental optimisation (IO) routine. This paper assessed the ability of the IO routine in comparison to the more computationally demanding GA routine to optimise the placement of a five-member network of CO2 monitoring sites located in South Africa. The comparison considered the reduction in uncertainty of the overall flux estimate, the spatial similarity of solutions, and computational requirements. Although the IO routine failed to find the solution with the global maximum uncertainty reduction, the resulting solution had only fractionally lower uncertainty reduction compared with the GA, and at only a quarter of the computational resources used by the lowest specified GA algorithm. The GA solution set showed more inconsistency if the number of iterations or population size was small, and more so for a complex prior flux covariance matrix. If the GA completed with a sub-optimal solution, these solutions were similar in fitness to the best available solution. Two additional scenarios were considered, with the objective of creating circumstances where the GA may outperform the IO. The first scenario considered an established network, where the optimisation was required to add an additional five stations to an existing five-member network. In the second scenario the optimisation was based only on the uncertainty reduction within a subregion of the domain. The GA was able to find a better solution than the IO under both scenarios, but with only a marginal improvement in the uncertainty reduction. These results suggest

  14. Algoritmo incremental de agrupamiento con traslape para el procesamiento de grandes colecciones de datos (Overlapping clustering incremental algorithm for large data collections processing)

    Lázaro Janier González-Soler

    2015-12-01

    There are several problems in Pattern Recognition and Data Mining that, by their nature, allow objects to belong to more than one class or cluster. DClustR is a dynamic overlapping clustering algorithm that has shown, in document clustering tasks, the best trade-off between cluster quality and efficiency among the dynamic overlapping clustering algorithms reported in the literature. Despite obtaining good results, DClustR can be of limited use in applications that work with large document collections, due to its computational complexity and the amount of memory required to process them. In this paper, a GPU-based parallel version of DClustR, named CUDA-DClus, is proposed to enhance the efficiency of DClustR in applications dealing with large document collections. Experiments conducted over several standard document collections show the good performance of CUDA-DClus in terms of efficiency and memory consumption.

  15. M/T method based incremental encoder velocity measurement error analysis and self-adaptive error elimination algorithm

    Chen, Yangyang; Yang, Ming; Long, Jiang

    2017-01-01

    For motor control applications, speed-loop performance depends largely on the accuracy of the speed feedback signal. The M/T method, owing to its high theoretical accuracy, is the most widely used approach to speed measurement with incremental encoders. However, the inherent encoder optical grating error...
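
    For background, the M/T method gates the measurement window on encoder edges, counting M1 encoder pulses and M2 high-frequency clock pulses, so the window length is known exactly. A small sketch with illustrative encoder and clock parameters:

```python
def mt_method_speed(m1_encoder_pulses, m2_clock_pulses,
                    pulses_per_rev=2048, f_clk=50e6):
    """Angular speed (rev/s) from one M/T measurement window.

    The window is closed on an encoder edge, so it contains exactly
    m1_encoder_pulses pulses and lasts T = m2_clock_pulses / f_clk seconds.
    """
    T = m2_clock_pulses / f_clk                 # exact gate time in seconds
    return m1_encoder_pulses / (pulses_per_rev * T)

# example: 410 encoder pulses counted while 1,000,000 clock ticks elapsed
speed_rps = mt_method_speed(410, 1_000_000)
print(f"{speed_rps:.2f} rev/s = {speed_rps * 60:.0f} rpm")
```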

  16. Optimization of the p-xylene oxidation process by a multi-objective differential evolution algorithm with adaptive parameters co-derived with the population-based incremental learning algorithm

    Guo, Zhan; Yan, Xuefeng

    2018-04-01

    Different operating conditions of p-xylene oxidation have different influences on the product, purified terephthalic acid. It is necessary to obtain the optimal combination of reaction conditions to ensure the quality of the products, cut down on consumption and increase revenues. A multi-objective differential evolution (MODE) algorithm co-evolved with the population-based incremental learning (PBIL) algorithm, called PBMODE, is proposed. The PBMODE algorithm was designed as a co-evolutionary system. Each individual has its own parameter individual, which is co-evolved by PBIL. PBIL uses statistical analysis to build a model based on the corresponding symbiotic individuals of the superior original individuals during the main evolutionary process. The results of simulations and statistical analysis indicate that the overall performance of the PBMODE algorithm is better than that of the compared algorithms and it can be used to optimize the operating conditions of the p-xylene oxidation process effectively and efficiently.

  17. Incremental cost-effectiveness of algorithm-driven genetic testing versus no testing for Maturity Onset Diabetes of the Young (MODY) in Singapore.

    Nguyen, Hai Van; Finkelstein, Eric Andrew; Mital, Shweta; Gardner, Daphne Su-Lyn

    2017-11-01

    Offering genetic testing for Maturity Onset Diabetes of the Young (MODY) to all young patients with type 2 diabetes has been shown to be not cost-effective. This study tests whether a novel algorithm-driven genetic testing strategy for MODY is incrementally cost-effective relative to the setting of no testing. A decision tree was constructed to estimate the costs and effectiveness of the algorithm-driven MODY testing strategy and a strategy of no genetic testing over a 30-year time horizon from a payer's perspective. The algorithm uses glutamic acid decarboxylase (GAD) antibody testing (negative antibodies), age of onset of diabetes (30 years) to stratify the population of patients with diabetes into three subgroups, and testing for MODY only among the subgroup most likely to have the mutation. Singapore-specific costs and prevalence of MODY obtained from local studies and utility values sourced from the literature are used to populate the model. The algorithm-driven MODY testing strategy has an incremental cost-effectiveness ratio of US$93 663 per quality-adjusted life year relative to the no testing strategy. If the price of genetic testing falls from US$1050 to US$530 (a 50% decrease), it will become cost-effective. Our proposed algorithm-driven testing strategy for MODY is not yet cost-effective based on established benchmarks. However, as genetic testing prices continue to fall, this strategy is likely to become cost-effective in the near future. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  18. Performance analysis of ‘Perturb and Observe’ and ‘Incremental Conductance’ MPPT algorithms for PV system

    Lodhi, Ehtisham; Lodhi, Zeeshan; Noman Shafqat, Rana; Chen, Fieda

    2017-07-01

    Photovoltaic (PV) systems usually employ Maximum Power Point Tracking (MPPT) techniques to increase their efficiency. The performance of a PV system can be boosted by operating it at its maximum power point, so that maximal power is delivered to the load. The proficiency of a PV system usually depends upon irradiance, temperature and array architecture. A PV array has a non-linear V-I curve, and the maximum power point on the V-P curve varies with changing environmental conditions. MPPT methods guarantee that a PV module is regulated at the reference voltage so as to make full use of the maximal output power. This paper analyses two widely employed MPPT techniques, Perturb and Observe (P&O) and Incremental Conductance (INC). Their performance is evaluated and compared through theoretical analysis and digital simulation, on the basis of response time and efficiency, under varying irradiance and temperature conditions using Matlab/Simulink.
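
    The P&O counterpart to the INC rule sketched earlier can be summarized in a few lines: perturb the operating voltage and keep moving in whichever direction increased the measured power. Step size and sample values are illustrative.

```python
def po_mppt_step(v, p, v_prev, p_prev, step=0.5):
    """One iteration of Perturb and Observe: compare power before and after
    the last perturbation and decide the next perturbation direction."""
    if p >= p_prev:
        direction = 1 if v >= v_prev else -1   # last move helped: keep going
    else:
        direction = -1 if v >= v_prev else 1   # last move hurt: reverse
    return v + direction * step                # next reference voltage

# usage in a control loop fed by panel voltage and current samples
v_prev, p_prev = 30.0, 234.0
v, i = 30.5, 7.75
v_next = po_mppt_step(v, v * i, v_prev, p_prev)
```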

  19. Incremental Yield of Including Determine-TB LAM Assay in Diagnostic Algorithms for Hospitalized and Ambulatory HIV-Positive Patients in Kenya.

    Huerga, Helena; Ferlazzo, Gabriella; Bevilacqua, Paolo; Kirubi, Beatrice; Ardizzoni, Elisa; Wanjala, Stephen; Sitienei, Joseph; Bonnet, Maryline

    2017-01-01

    Determine-TB LAM assay is a urine point-of-care test useful for TB diagnosis in HIV-positive patients. We assessed the incremental diagnostic yield of adding LAM to algorithms based on clinical signs, sputum smear-microscopy, chest X-ray and Xpert MTB/RIF in HIV-positive patients with symptoms of pulmonary TB (PTB). Prospective observational cohort of ambulatory (either severely ill or CD4<200cells/μl or with Body Mass Index<17Kg/m2) and hospitalized symptomatic HIV-positive adults in Kenya. Incremental diagnostic yield of adding LAM was the difference in the proportion of confirmed TB patients (positive Xpert or MTB culture) diagnosed by the algorithm with LAM compared to the algorithm without LAM. The multivariable mortality model was adjusted for age, sex, clinical severity, BMI, CD4, ART initiation, LAM result and TB confirmation. Among 474 patients included, 44.1% were severely ill, 69.6% had CD4<200cells/μl, 59.9% had initiated ART, 23.2% could not produce sputum. LAM, smear-microscopy, Xpert and culture in sputum were positive in 39.0% (185/474), 21.6% (76/352), 29.1% (102/350) and 39.7% (92/232) of the patients tested, respectively. Of 156 patients with confirmed TB, 65.4% were LAM positive. Of those classified as non-TB, 84.0% were LAM negative. Adding LAM increased the diagnostic yield of the algorithms by 36.6%, from 47.4% (95%CI:39.4-55.6) to 84.0% (95%CI:77.3-89.4%), when using clinical signs and X-ray; by 19.9%, from 62.2% (95%CI:54.1-69.8) to 82.1% (95%CI:75.1-87.7), when using clinical signs and microscopy; and by 13.4%, from 74.4% (95%CI:66.8-81.0) to 87.8% (95%CI:81.6-92.5), when using clinical signs and Xpert. LAM positive patients had an increased risk of 2-months mortality (aOR:2.7; 95%CI:1.5-4.9). LAM should be included in TB diagnostic algorithms in parallel to microscopy or Xpert request for HIV-positive patients either ambulatory (severely ill or CD4<200cells/μl) or hospitalized. LAM allows same day treatment initiation in patients at

  20. Design of a temperature control system using incremental PID algorithm for a special homemade shortwave infrared spatial remote sensor based on FPGA

    Xu, Zhipeng; Wei, Jun; Li, Jianwei; Zhou, Qianting

    2010-11-01

    An image spectrometer on a spatial remote sensing satellite requires the shortwave band from 2.1 μm to 3 μm, which is one of the most important bands in remote sensing. We designed an infrared sub-system of the image spectrometer using a homemade 640x1 InGaAs shortwave infrared focal plane array (FPA) sensor, which requires high uniformity and a low level of dark current. The working temperature should be -15 ± 0.2 degrees Celsius. This paper studies the noise model of the FPA system, investigates the relationship between temperature and dark-current noise, and adopts an incremental PID algorithm to generate a PWM wave in order to control the temperature of the sensor. The FPGA design is composed of four modules. All modules are coded in VHDL and implemented in an APA300 FPGA device. Experiments show that the temperature control system successfully controls the temperature of the sensor.
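
    The incremental (velocity-form) PID computes a change in the actuator command from the last three errors, which maps naturally onto a PWM duty-cycle adjustment in FPGA logic. A Python sketch with illustrative gains and readings, not the flight hardware implementation:

```python
class IncrementalPID:
    """Velocity-form (incremental) PID: each call returns the new command after
    adding du = Kp*(e[k]-e[k-1]) + Ki*e[k] + Kd*(e[k]-2*e[k-1]+e[k-2])."""
    def __init__(self, kp, ki, kd, u_min=0.0, u_max=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.e1 = self.e2 = 0.0      # previous two errors
        self.u = 0.0                 # accumulated command (e.g. PWM duty cycle)
        self.u_min, self.u_max = u_min, u_max

    def update(self, setpoint, measurement):
        # sign convention for a cooler: positive error (too warm) raises the duty cycle
        e = measurement - setpoint
        du = (self.kp * (e - self.e1)
              + self.ki * e
              + self.kd * (e - 2.0 * self.e1 + self.e2))
        self.u = min(max(self.u + du, self.u_min), self.u_max)
        self.e2, self.e1 = self.e1, e
        return self.u

# drive a thermoelectric cooler toward -15 deg C (gains are illustrative)
pid = IncrementalPID(kp=0.05, ki=0.01, kd=0.005)
duty = pid.update(setpoint=-15.0, measurement=-12.3)   # sensor still too warm
```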

  1. Incremental Trust in Grid Computing

    Brinkløv, Michael Hvalsøe; Sharp, Robin

    2007-01-01

    This paper describes a comparative simulation study of some incremental trust and reputation algorithms for handling behavioural trust in large distributed systems. Two types of reputation algorithm (based on discrete and Bayesian evaluation of ratings) and two ways of combining direct trust and ...... of Grid computing systems....

  2. Simulation and comparison of perturb and observe and incremental ...

    Perturb and Observe (P&O) algorithm and Incremental Conductance algorithm. … Keywords: solar array; insolation; MPPT; modelling; P&O; incremental conductance. … Int. J. Advances in Eng. Technol. 133–148.

  3. FDTD Stability: Critical Time Increment

    Z. Skvor; L. Pauk

    2003-01-01

    A new approach suitable for determination of the maximal stable time increment for the Finite-Difference Time-Domain (FDTD) algorithm in common curvilinear coordinates, for general mesh shapes and certain types of boundaries is presented. The maximal time increment corresponds to a characteristic value of a Helmholz equation that is solved by a finite-difference (FD) method. If this method uses exactly the same discretization as the given FDTD method (same mesh, boundary conditions, order of ...
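
    For the standard Cartesian Yee grid, the maximal stable time increment reduces to the familiar Courant limit; the abstract's contribution is the generalization to curvilinear meshes via a Helmholtz eigenproblem, which is not reproduced in this small sketch.

```python
from math import sqrt

def courant_dt_max(dx, dy, dz, c=299_792_458.0):
    """Maximal stable FDTD time step on a uniform Cartesian Yee grid:
    dt <= 1 / (c * sqrt(1/dx^2 + 1/dy^2 + 1/dz^2))."""
    return 1.0 / (c * sqrt(1.0 / dx**2 + 1.0 / dy**2 + 1.0 / dz**2))

print(courant_dt_max(1e-3, 1e-3, 1e-3))   # ~1.9 ps for a 1 mm cubic cell
```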

  4. Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.

    Kwak, Nojun

    2016-05-20

    Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally, because an ever-increasing kernel matrix must be treated as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed, based on the observation that the centerization step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can directly be used in any incremental method to implement a kernel version of that method. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are utilized for problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.

  5. Algorithms

    polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.

  6. Algorithms

    to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...

  7. Algorithms

    ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineering ... numerical calculus are as important. We will ...

  8. Planning Through Incrementalism

    Lasserre, Ph.

    1974-01-01

    An incremental model of decisionmaking is discussed and compared with the Comprehensive Rational Approach. A model of reconciliation between the two approaches is proposed, and examples are given in the field of economic development and educational planning. (Author/DN)

  9. Deep Incremental Boosting

    Mosca, Alan; Magoulas, George D

    2017-01-01

    This paper introduces Deep Incremental Boosting, a new technique derived from AdaBoost, specifically adapted to work with Deep Learning methods, that reduces the required training time and improves generalisation. We draw inspiration from Transfer of Learning approaches to reduce the start-up time to training each incremental Ensemble member. We show a set of experiments that outlines some preliminary results on some common Deep Learning datasets and discuss the potential improvements Deep In...

  10. Incremental Centrality Algorithms for Dynamic Network Analysis

    2013-08-01

    [Extracted fragments] 7.1.3 Small World Networks: In 1998, Watts and Strogatz introduced a model that starts with a regular lattice (ring) of n nodes; D. J. Watts and S. H. Strogatz, "Collective Dynamics of 'Small-World' Networks," Nature, vol. 393, pp. 440–442, 1998; T. Opsahl, "Structure and Evolution of..."; "On Random Graphs," Publicationes Mathematicae, vol. 6, 1959.

  11. Teraflop-scale Incremental Machine Learning

    Özkural, Eray

    2011-01-01

    We propose a long-term memory design for artificial general intelligence based on Solomonoff's incremental machine learning methods. We use R5RS Scheme and its standard library with a few omissions as the reference machine. We introduce a Levin Search variant based on Stochastic Context Free Grammar together with four synergistic update algorithms that use the same grammar as a guiding probability distribution of programs. The update algorithms include adjusting production probabilities, re-u...

  12. Algorithms

    algorithm design technique called 'divide-and-conquer'. One of ... Turtle graphics, September 1996. ... whole list named 'PO' is a pointer to the first element of the list; ... Program for computing matrices X and Y and placing the result in C.

  13. Algorithms

    algorithm that it is implicitly understood that we know how to generate the next natural ... Explicit comparisons are made in line (1) where maximum and minimum is ... It can be shown that the function T(n) = 3n/2 - 2 is the solution to the above ...

  14. Incremental Tensor Principal Component Analysis for Handwritten Digit Recognition

    Chang Liu

    2014-01-01

    Full Text Available To overcome the shortcomings of traditional dimensionality reduction algorithms, incremental tensor principal component analysis (ITPCA) based on an updated-SVD technique is proposed in this paper. The paper proves the relationship between PCA, 2DPCA, MPCA, and the graph embedding framework theoretically and derives the incremental learning procedures for adding a single sample and multiple samples in detail. Experiments on handwritten digit recognition demonstrate that ITPCA achieves better recognition performance than vector-based principal component analysis (PCA), incremental principal component analysis (IPCA), and multilinear principal component analysis (MPCA) algorithms. At the same time, ITPCA also has lower time and space complexity.

  15. Algorithms

    will become clear in the next article when we discuss a simple Logo-like programming language. ... Rod B may be used as an auxiliary store. The problem is to find an algorithm which performs this task. ... N0 disks are moved from A to B using C as auxiliary rod. move_disk(A, C); the (N0 + 1)th disk is moved from A to C directly ...
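
    The recursive scheme this fragment sketches (clear the top disks from A onto B using C, move the exposed disk from A to C directly, then bring the others back) is the classic Towers of Hanoi recursion. A minimal sketch follows; move_disk here simply prints the move and stands in for the text's move_disk operation.

```python
def move_disk(src, dst):
    # Stand-in for the text's move_disk: report one single-disk move.
    print(f"move disk from {src} to {dst}")

def hanoi(n, src, dst, aux):
    """Move n disks from rod src to rod dst, using aux as the auxiliary rod."""
    if n == 0:
        return
    hanoi(n - 1, src, aux, dst)   # park the top n-1 disks on the auxiliary rod
    move_disk(src, dst)           # move the exposed disk directly
    hanoi(n - 1, aux, dst, src)   # bring the n-1 disks back on top of it

hanoi(3, "A", "C", "B")           # 2**3 - 1 = 7 moves from A to C using B
```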

  16. FDTD Stability: Critical Time Increment

    Z. Skvor

    2003-06-01

    Full Text Available A new approach suitable for determination of the maximal stable time increment for the Finite-Difference Time-Domain (FDTD) algorithm in common curvilinear coordinates, for general mesh shapes and certain types of boundaries, is presented. The maximal time increment corresponds to a characteristic value of a Helmholtz equation that is solved by a finite-difference (FD) method. If this method uses exactly the same discretization as the given FDTD method (same mesh, boundary conditions, order of precision etc.), the maximal stable time increment is obtained from the highest characteristic value. The FD system is solved by an iterative method, which uses only slightly altered original FDTD formulae. The Courant condition yields a stable time increment, but in certain cases the maximum increment is slightly greater [2].
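
    For the ordinary Cartesian Yee grid the Courant limit mentioned above has a simple closed form, shown below as a reference point; the paper's contribution is the curvilinear finite-difference eigenvalue procedure, which this sketch does not reproduce.

```python
import math

def courant_dt_max(dx, dy, dz, c=299792458.0):
    """Largest stable FDTD time step on a uniform Cartesian grid (Courant limit)."""
    return 1.0 / (c * math.sqrt(1.0 / dx**2 + 1.0 / dy**2 + 1.0 / dz**2))

# Example: 1 mm cells give a maximal time increment of roughly 1.9 ps.
print(courant_dt_max(1e-3, 1e-3, 1e-3))
```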

  17. Incremental cryptography and security of public hash functions ...

    An investigation of incremental algorithms for cryptographic functions was initiated. The problem, for collision-free hashing, is to design a scheme for which there exists an efficient “update” algorithm: this algorithm is given the hash function H, the hash h = H(M) of message M and the “replacement request” (j, m), and outputs ...

  18. Quantum independent increment processes

    Franz, Uwe

    2005-01-01

    This volume is the first of two volumes containing the revised and completed notes of lectures given at the school "Quantum Independent Increment Processes: Structure and Applications to Physics". This school was held at the Alfried-Krupp-Wissenschaftskolleg in Greifswald during the period March 9 – 22, 2003, and supported by the Volkswagen Foundation. The school gave an introduction to current research on quantum independent increment processes aimed at graduate students and non-specialists working in classical and quantum probability, operator algebras, and mathematical physics. The present first volume contains the following lectures: "Lévy Processes in Euclidean Spaces and Groups" by David Applebaum, "Locally Compact Quantum Groups" by Johan Kustermans, "Quantum Stochastic Analysis" by J. Martin Lindsay, and "Dilations, Cocycles and Product Systems" by B.V. Rajarama Bhat.

  19. Quantum independent increment processes

    Franz, Uwe

    2006-01-01

    This is the second of two volumes containing the revised and completed notes of lectures given at the school "Quantum Independent Increment Processes: Structure and Applications to Physics". This school was held at the Alfried-Krupp-Wissenschaftskolleg in Greifswald in March, 2003, and supported by the Volkswagen Foundation. The school gave an introduction to current research on quantum independent increment processes aimed at graduate students and non-specialists working in classical and quantum probability, operator algebras, and mathematical physics. The present second volume contains the following lectures: "Random Walks on Finite Quantum Groups" by Uwe Franz and Rolf Gohm, "Quantum Markov Processes and Applications in Physics" by Burkhard Kümmerer, "Classical and Free Infinite Divisibility and Lévy Processes" by Ole E. Barndorff-Nielsen and Steen Thorbjornsen, and "Lévy Processes on Quantum Groups and Dual Groups" by Uwe Franz.

  20. Efficient incremental relaying

    Fareed, Muhammad Mehboob

    2013-07-01

    We propose a novel relaying scheme which improves the spectral efficiency of cooperative diversity systems by utilizing limited feedback from the destination. Our scheme capitalizes on the fact that relaying is only required when the direct transmission suffers deep fading. We calculate the packet error rate for the proposed efficient incremental relaying scheme with both amplify-and-forward and decode-and-forward relaying. Numerical results are also presented to verify their analytical counterparts. © 2013 IEEE.

  1. Sample-Based Extreme Learning Machine with Missing Data

    Hang Gao

    2015-01-01

    Full Text Available Extreme learning machine (ELM) has been extensively studied in the machine learning community during the last few decades due to its high efficiency and its unification of classification, regression, and so forth. Despite these merits, existing ELM algorithms cannot efficiently handle the issue of missing data, which is relatively common in practical applications. The problem of missing data is commonly handled by imputation (i.e., replacing missing values with substituted values according to available information). However, imputation methods are not always effective. In this paper, we propose a sample-based learning framework to address this issue. Based on this framework, we develop two sample-based ELM algorithms for classification and regression, respectively. Comprehensive experiments have been conducted on synthetic data sets, UCI benchmark data sets, and a real-world fingerprint image data set. As indicated, without introducing extra computational complexity, the proposed algorithms achieve more accurate and stable learning than other state-of-the-art ones, especially in the case of higher missing ratios.

  2. Average-case analysis of incremental topological ordering

    Ajwani, Deepak; Friedrich, Tobias

    2010-01-01

    Many applications like pointer analysis and incremental compilation require maintaining a topological ordering of the nodes of a directed acyclic graph (DAG) under dynamic updates. All known algorithms for this problem are either only analyzed for worst-case insertion sequences or only evaluated...... experimentally on random DAGs. We present the first average-case analysis of incremental topological ordering algorithms. We prove an expected runtime of under insertion of the edges of a complete DAG in a random order for the algorithms of Alpern et al. (1990) [4], Katriel and Bodlaender (2006) [18], and Pearce...

  3. Design of methodology for incremental compiler construction

    Pavel Haluza

    2011-01-01

    Full Text Available The paper deals with possibilities of incremental compiler construction. It presents compiler construction possibilities for languages with a fixed set of lexical units and for languages with a variable set of lexical units. The methodology for incremental compiler construction is based on known algorithms for standard compiler construction and is derived for both groups of languages. The group of languages with a fixed set of lexical units contains languages where each lexical unit has a constant meaning, e.g., common programming languages. For this group the paper addresses the problem of incremental semantic analysis, which is based on incremental parsing. In the group of languages with a variable set of lexical units (e.g., the professional typographic system TEX), the meaning of each character in the input file can be changed arbitrarily at any time during processing. The change takes effect immediately and its validity may be limited in some way or extend to the end of the input. For this group the paper addresses the case when macros temporarily change the category of arbitrary characters.

  4. Legislative Bargaining and Incremental Budgeting

    Dhammika Dharmapala

    2002-01-01

    The notion of 'incrementalism', formulated by Aaron Wildavsky in the 1960's, has been extremely influential in the public budgeting literature. In essence, it entails the claim that legislators engaged in budgetary policymaking accept past allocations, and decide only on the allocation of increments to revenue. Wildavsky explained incrementalism with reference to the cognitive limitations of lawmakers and their desire to reduce conflict. This paper uses a legislative bargaining framework to u...

  5. Efficient Incremental Data Analysis

    Nikolic, Milos

    2016-01-01

    Many data-intensive applications require real-time analytics over streaming data. In a growing number of domains -- sensor network monitoring, social web applications, clickstream analysis, high-frequency algorithmic trading, and fraud detection, to name a few -- applications continuously monitor stream events to promptly react to certain data conditions. These applications demand responsive analytics even when faced with high volume and velocity of incoming changes, large numbers of users, a...

  6. Efficient Incremental Garbage Collection for Workstation/Server Database Systems

    Amsaleg , Laurent; Gruber , Olivier; Franklin , Michael

    1994-01-01

    Projet RODIN; We describe an efficient server-based algorithm for garbage collecting object-oriented databases in a workstation/server environment. The algorithm is incremental and runs concurrently with client transactions; however, it does not hold any locks on data and does not require callbacks to clients. It is fault tolerant, but performs very little logging. The algorithm has been designed to be integrated into existing OODB systems, and therefore it works with standard implementation ...

  7. Incremental Learning for Place Recognition in Dynamic Environments

    Luo, Jie; Pronobis, Andrzej; Caputo, Barbara; Jensfelt, Patric

    2007-01-01

    Vision-based place recognition is a desirable feature for an autonomous mobile system. In order to work in realistic scenarios, visual recognition algorithms should be adaptive, i.e. they should be able to learn from experience and adapt continuously to changes in the environment. This paper presents a discriminative incremental learning approach to place recognition. We use a recently introduced version of the incremental SVM, which ...

  8. Enabling Incremental Query Re-Optimization.

    Liu, Mengmeng; Ives, Zachary G; Loo, Boon Thau

    2016-01-01

    As declarative query processing techniques expand to the Web, data streams, network routers, and cloud platforms, there is an increasing need to re-plan execution in the presence of unanticipated performance changes. New runtime information may affect which query plan we prefer to run. Adaptive techniques require innovation both in terms of the algorithms used to estimate costs, and in terms of the search algorithm that finds the best plan. We investigate how to build a cost-based optimizer that recomputes the optimal plan incrementally given new cost information, much as a stream engine constantly updates its outputs given new data. Our implementation especially shows benefits for stream processing workloads. It lays the foundations upon which a variety of novel adaptive optimization algorithms can be built. We start by leveraging the recently proposed approach of formulating query plan enumeration as a set of recursive datalog queries; we develop a variety of novel optimization approaches to ensure effective pruning in both static and incremental cases. We further show that the lessons learned in the declarative implementation can be equally applied to more traditional optimizer implementations.

  9. Incremental learning for automated knowledge capture

    Benz, Zachary O. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Basilico, Justin Derrick [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Davis, Warren Leon [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dixon, Kevin R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jones, Brian S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Martin, Nathaniel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wendt, Jeremy Daniel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2013-12-01

    People responding to high-consequence national-security situations need tools to help them make the right decision quickly. The dynamic, time-critical, and ever-changing nature of these situations, especially those involving an adversary, requires models of decision support that can dynamically react as a situation unfolds and changes. Automated knowledge capture is a key part of creating individualized models of decision making in many situations because it has been demonstrated as a very robust way to populate computational models of cognition. However, existing automated knowledge capture techniques only populate a knowledge model with data prior to its use, after which the knowledge model is static and unchanging. In contrast, humans, including our national-security adversaries, continually learn, adapt, and create new knowledge as they make decisions and witness their effect. This artificial dichotomy between creation and use exists because the majority of automated knowledge capture techniques are based on traditional batch machine-learning and statistical algorithms. These algorithms are primarily designed to optimize the accuracy of their predictions and are only secondarily, if at all, concerned with issues such as speed, memory use, or the ability to be incrementally updated. Thus, when new data arrive, batch algorithms used for automated knowledge capture currently require significant recomputation, frequently from scratch, which makes them ill-suited for use in dynamic, time-critical, high-consequence decision-making environments. In this work we seek to explore and expand upon the capabilities of dynamic, incremental models that can adapt to an ever-changing feature space.

  10. FEM Simulation of Incremental Shear

    Rosochowski, Andrzej; Olejnik, Lech

    2007-01-01

    A popular way of producing ultrafine grained metals on a laboratory scale is severe plastic deformation. This paper introduces a new severe plastic deformation process of incremental shear. A finite element method simulation is carried out for various tool geometries and process kinematics. It has been established that for the successful realisation of the process the inner radius of the channel as well as the feeding increment should be approximately 30% of the billet thickness. The angle at which the reciprocating die works the material can be 30 deg. When compared to equal channel angular pressing, incremental shear shows basic similarities in the mode of material flow and a few technological advantages which make it an attractive alternative to the known severe plastic deformation processes. The most promising characteristic of incremental shear is the possibility of processing very long billets in a continuous way which makes the process more industrially relevant

  11. Incremental principal component pursuit for video background modeling

    Rodriquez-Valderrama, Paul A.; Wohlberg, Brendt

    2017-03-14

    An incremental Principal Component Pursuit (PCP) algorithm for video background modeling is presented that is able to process one frame at a time while adapting to changes in the background, with a computational complexity that allows for real-time processing, a low memory footprint, and robustness to translational and rotational jitter.

  12. Incremental Visualizer for Visible Objects

    Bukauskas, Linas; Bøhlen, Michael Hanspeter

    This paper discusses the integration of a database back-end and a visualizer front-end into one tightly coupled system. The main aim, which we achieve, is to reduce the data pipeline from database to visualization by using incremental data extraction of visible objects in fly-through scenarios. We...... also argue that passing only relevant data from the database will substantially reduce the overall load of the visualization system. We propose the system Incremental Visualizer for Visible Objects (IVVO) which considers visible objects and enables incremental visualization along the observer movement...... path. IVVO is the novel solution which allows data to be visualized and loaded on the fly from the database and which takes the visibility of objects into account. We run a set of experiments to show that IVVO is feasible in terms of I/O operations and CPU load. We consider the example of data which uses...

  13. Support vector machine incremental learning triggered by wrongly predicted samples

    Tang, Ting-long; Guan, Qiu; Wu, Yi-rong

    2018-05-01

    According to the classic Karush-Kuhn-Tucker (KKT) theorem, at every step of incremental support vector machine (SVM) learning, a newly added sample that violates the KKT conditions becomes a new support vector (SV) and may migrate old samples between the SV set and the non-support vector (NSV) set; at the same time the learning model should be updated based on the SVs. However, it is not clear in advance which of the old samples will move between the SV and NSV sets. Additionally, the learning model may be updated unnecessarily, which does not greatly increase its accuracy but does decrease the training speed. Therefore, how to choose the new SVs from the old sets during the incremental stages, and when to process incremental steps, greatly influences the accuracy and efficiency of incremental SVM learning. In this work, a new algorithm is proposed to select candidate SVs and to use the wrongly predicted sample to trigger the incremental processing simultaneously. Experimental results show that the proposed algorithm achieves good performance with high efficiency, high speed and good accuracy.
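
    A minimal sketch of the triggering idea follows, with scikit-learn's SVC used as a stand-in learner: each new sample is checked against the current decision function, and only a margin (KKT-style) violation or misclassification triggers an update. The update here is a plain refit rather than the paper's selective SV bookkeeping, and labels are assumed to be +1/-1.

```python
import numpy as np
from sklearn.svm import SVC

class TriggeredSVM:
    """Incremental-style SVM that only updates when a new sample violates the margin."""

    def __init__(self, **svc_kwargs):
        self.model = SVC(kernel="linear", **svc_kwargs)
        self.X = None
        self.y = None

    def fit_initial(self, X, y):
        self.X, self.y = np.asarray(X, float), np.asarray(y)
        self.model.fit(self.X, self.y)

    def observe(self, x, label):
        """Add (x, label); refit only if the sample violates the KKT-style margin."""
        x = np.asarray(x, float).reshape(1, -1)
        margin = label * self.model.decision_function(x)[0]   # labels assumed +/-1
        self.X = np.vstack([self.X, x])
        self.y = np.append(self.y, label)
        if margin < 1.0:                     # inside the margin or misclassified
            self.model.fit(self.X, self.y)   # stand-in for an incremental update
            return True                      # update was triggered
        return False                         # sample satisfied the KKT conditions
```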

  14. Convergent systems vs. incremental stability

    Rüffer, B.S.; Wouw, van de N.; Mueller, M.

    2013-01-01

    Two similar stability notions are considered; one is the long established notion of convergent systems, the other is the younger notion of incremental stability. Both notions require that any two solutions of a system converge to each other. Yet these stability concepts are different, in the sense

  15. Decoupled Simulation Method For Incremental Sheet Metal Forming

    Sebastiani, G.; Brosius, A.; Tekkaya, A. E.; Homberg, W.; Kleiner, M.

    2007-01-01

    Within the scope of this article a decoupling algorithm to reduce computing time in Finite Element Analyses of incremental forming processes is investigated. Based on the given position of the small forming zone, the presented algorithm aims at separating a Finite Element Model into an elastic and an elasto-plastic deformation zone. By including the elastic response of the structure by means of model simplifications, the costly iteration in the elasto-plastic zone can be restricted to the small forming zone and to a few supporting elements in order to reduce computation time. Since the forming zone moves along the specimen, an update of both the forming zone with its elastic boundary and the supporting structure is needed after several increments. The presented paper discusses the algorithmic implementation of the approach and introduces several strategies to implement the denoted elastic boundary condition at the boundary of the plastic forming zone

  16. Using machine learning to accelerate sampling-based inversion

    Valentine, A. P.; Sambridge, M.

    2017-12-01

    In most cases, a complete solution to a geophysical inverse problem (including robust understanding of the uncertainties associated with the result) requires a sampling-based approach. However, the computational burden is high, and proves intractable for many problems of interest. There is therefore considerable value in developing techniques that can accelerate sampling procedures. The main computational cost lies in evaluation of the forward operator (e.g. calculation of synthetic seismograms) for each candidate model. Modern machine learning techniques, such as Gaussian Processes, offer a route for constructing a computationally cheap approximation to this calculation, which can replace the accurate solution during sampling. Importantly, the accuracy of the approximation can be refined as the inversion proceeds, to ensure high-quality results. In this presentation, we describe and demonstrate this approach, which can be seen as an extension of popular current methods such as the Neighbourhood Algorithm, and which bridges the gap between prior- and posterior-sampling frameworks.
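
    The core idea, replacing the expensive forward operator with a cheap surrogate that is refined as sampling proceeds, can be sketched as below. The quadratic "forward model", the uniform prior, the toy acceptance rule and the refresh interval are all placeholder assumptions, not details from the abstract.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def expensive_forward(m):
    # Placeholder for the costly physics (e.g., a synthetic-seismogram misfit).
    return float(np.sum((m - 0.3) ** 2))

rng = np.random.default_rng(0)
M = rng.uniform(-1, 1, size=(20, 2))                  # seed designs for the surrogate
f = np.array([expensive_forward(m) for m in M])
surrogate = GaussianProcessRegressor().fit(M, f)

samples, exact_every = [], 25
for k in range(500):
    m = rng.uniform(-1, 1, size=2)                    # candidate model from the prior
    misfit = surrogate.predict(m.reshape(1, -1))[0]   # cheap approximate evaluation
    if k % exact_every == 0:                          # occasionally pay for the exact solve
        misfit = expensive_forward(m)
        M, f = np.vstack([M, m]), np.append(f, misfit)
        surrogate.fit(M, f)                           # refine the surrogate during sampling
    if rng.random() < np.exp(-misfit):                # toy acceptance rule
        samples.append(m)
```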

  17. ON SAMPLING BASED METHODS FOR THE DUBINS TRAVELING SALESMAN PROBLEM WITH NEIGHBORHOODS

    Petr Váňa

    2015-12-01

    Full Text Available In this paper, we address the problem of path planning to visit a set of regions with a Dubins vehicle, which is also known as the Dubins Traveling Salesman Problem with Neighborhoods (DTSPN). We propose a modification of an existing sampling-based approach that determines an increasing number of samples per goal region and thus improves the solution quality if more computational time is available. The proposed modification of the sampling-based algorithm has been compared with existing approaches for the DTSPN, and results on the quality of the found solutions and the required computational time are presented in the paper.

  18. Unmanned Maritime Systems Incremental Acquisition Approach

    2016-12-01

    MBA professional report. The purpose of this MBA report is to explore and understand the issues surrounding an incremental acquisition approach for unmanned maritime systems. Approved for public release; distribution is unlimited.

  19. Incremental deformation: A literature review

    Nasulea Daniel

    2017-01-01

    Full Text Available Customer requirements are constantly changing, and accordingly the tendency in modern industry is to implement flexible manufacturing processes. In recent decades metal forming has gained the attention of researchers and undergone considerable change. Because conventional metal forming processes are expensive and time-consuming to design and prepare for small numbers of parts, manufacturers and researchers have become interested in flexible processes. One of the most investigated flexible processes in metal forming is incremental sheet forming (ISF). ISF is an advanced flexible manufacturing process which allows complex 3D products to be manufactured without expensive dedicated tools. In most cases an ISF process requires only a simple tool, a fixing device for the sheet metal blank, and a universal CNC machine. Axis-symmetric parts can be manufactured, usually using a CNC lathe, and complex asymmetrical parts can be made using CNC milling machines, robots or dedicated equipment. This paper aims to present the current status of incremental sheet forming technologies in terms of process parameters and their influences, wall thickness distribution, springback effect, formability, surface quality and the current main research directions.

  20. Incremental learning of concept drift in nonstationary environments.

    Elwell, Ryan; Polikar, Robi

    2011-10-01

    We introduce an ensemble-of-classifiers-based approach for incremental learning of concept drift, characterized by nonstationary environments (NSEs), where the underlying data distributions change over time. The proposed algorithm, named Learn++.NSE, learns from consecutive batches of data without making any assumptions on the nature or rate of drift; it can learn from environments that experience constant or variable rates of drift, addition or deletion of concept classes, as well as cyclical drift. The algorithm learns incrementally, as do other members of the Learn++ family of algorithms, that is, without requiring access to previously seen data. Learn++.NSE trains one new classifier for each batch of data it receives, and combines these classifiers using a dynamically weighted majority vote. The novelty of the approach is in determining the voting weights, based on each classifier's time-adjusted accuracy on current and past environments. This approach allows the algorithm to recognize, and act accordingly to, changes in the underlying data distributions, as well as to a possible recurrence of an earlier distribution. We evaluate the algorithm on several synthetic datasets designed to simulate a variety of nonstationary environments, as well as a real-world weather prediction dataset. Comparisons with several other approaches are also included. Results indicate that Learn++.NSE can track the changing environments very closely, regardless of the type of concept drift. To allow future use, comparison and benchmarking by interested researchers, we also release our data used in this paper. © 2011 IEEE
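
    A heavily simplified sketch of the batch-by-batch, weighted-majority idea is given below: one base learner is trained per batch, and every member's error on the newest batch is turned into a log-odds voting weight. The sigmoidal time-averaging of past errors that defines the actual Learn++.NSE weighting is deliberately omitted, so this is only an illustration of the general mechanism.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class SimplifiedNSE:
    """Batch-incremental ensemble with error-based voting weights (simplified)."""

    def __init__(self):
        self.members = []
        self.weights = []

    def partial_fit(self, X, y):
        # Train one new base classifier on the newest batch only.
        self.members.append(DecisionTreeClassifier(max_depth=3).fit(X, y))
        # Re-weight every member from its error on the newest batch.
        self.weights = []
        for member in self.members:
            err = np.clip(np.mean(member.predict(X) != y), 1e-3, 1 - 1e-3)
            self.weights.append(np.log((1.0 - err) / err))   # log-odds voting weight

    def predict(self, X):
        votes = [dict() for _ in range(len(X))]
        for w, member in zip(self.weights, self.members):
            if w <= 0.0:                    # ignore members worse than chance
                continue
            for i, label in enumerate(member.predict(X)):
                votes[i][label] = votes[i].get(label, 0.0) + w
        return np.array([max(v, key=v.get) for v in votes])
```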

  1. Incremental Observer Relative Data Extraction

    Bukauskas, Linas; Bøhlen, Michael Hanspeter

    2004-01-01

    The visual exploration of large databases calls for a tight coupling of database and visualization systems. Current visualization systems typically fetch all the data and organize it in a scene tree that is then used to render the visible data. For immersive data explorations in a Cave...... or a Panorama, where an observer is inside the data space, this approach is far from optimal. A more scalable approach is to make the database system observer-aware and to restrict the communication between the database and visualization systems to the relevant data. In this paper VR-tree, an extension of the R-tree, is used to index visibility ranges of objects. We introduce a new operator for incremental Observer Relative data Extraction (iORDE). We propose the Volatile Access STructure (VAST), a lightweight main memory structure that is created on the fly and is maintained during visual data explorations. VAST...

  2. The Toggle Local Planner for sampling-based motion planning

    Denny, Jory; Amato, Nancy M.

    2012-01-01

    Sampling-based solutions to the motion planning problem, such as the probabilistic roadmap method (PRM), have become commonplace in robotics applications. These solutions are the norm as the dimensionality of the planning space grows, i.e., d > 5

  3. A sampling-based approach to probabilistic pursuit evasion

    Mahadevan, Aditya; Amato, Nancy M.

    2012-01-01

    Probabilistic roadmaps (PRMs) are a sampling-based approach to motion-planning that encodes feasible paths through the environment using a graph created from a subset of valid positions. Prior research has shown that PRMs can be augmented

  4. Incremental support vector machines for fast reliable image recognition

    Makili, L.; Vega, J.; Dormido-Canto, S.

    2013-01-01

    Highlights: ► A conformal predictor using SVM as the underlying algorithm was implemented. ► It was applied to image recognition in the TJ–II's Thomson Scattering Diagnostic. ► To improve time efficiency an approach to incremental SVM training has been used. ► Accuracy is similar to the one reached when standard SVM is used. ► Computational time saving is significant for large training sets. -- Abstract: This paper addresses the reliable classification of images in a 5-class problem. To this end, an automatic recognition system, based on conformal predictors and using Support Vector Machines (SVM) as the underlying algorithm, has been developed and applied to the recognition of images in the Thomson Scattering Diagnostic of the TJ–II fusion device. Using such a conformal-predictor-based classifier is a computationally intensive task since it implies training several SVM models to classify a single example, and performing this training from scratch takes a significant amount of time. In order to improve the classification time efficiency, an approach to the incremental training of SVM has been used as the underlying algorithm. Experimental results show that the overall performance of the new classifier is high, comparable to the one corresponding to the use of standard SVM as the underlying algorithm, and there is a significant improvement in time efficiency

  5. Incremental support vector machines for fast reliable image recognition

    Makili, L., E-mail: makili_le@yahoo.com [Instituto Superior Politécnico da Universidade Katyavala Bwila, Benguela (Angola); Vega, J. [Asociación EURATOM/CIEMAT para Fusión, Madrid (Spain); Dormido-Canto, S. [Dpto. Informática y Automática – UNED, Madrid (Spain)

    2013-10-15

    Highlights: ► A conformal predictor using SVM as the underlying algorithm was implemented. ► It was applied to image recognition in the TJ–II's Thomson Scattering Diagnostic. ► To improve time efficiency an approach to incremental SVM training has been used. ► Accuracy is similar to the one reached when standard SVM is used. ► Computational time saving is significant for large training sets. -- Abstract: This paper addresses the reliable classification of images in a 5-class problem. To this end, an automatic recognition system, based on conformal predictors and using Support Vector Machines (SVM) as the underlying algorithm, has been developed and applied to the recognition of images in the Thomson Scattering Diagnostic of the TJ–II fusion device. Using such a conformal-predictor-based classifier is a computationally intensive task since it implies training several SVM models to classify a single example, and performing this training from scratch takes a significant amount of time. In order to improve the classification time efficiency, an approach to the incremental training of SVM has been used as the underlying algorithm. Experimental results show that the overall performance of the new classifier is high, comparable to the one corresponding to the use of standard SVM as the underlying algorithm, and there is a significant improvement in time efficiency.

  6. Whole arm manipulation planning based on feedback velocity fields and sampling-based techniques.

    Talaei, B; Abdollahi, F; Talebi, H A; Omidi Karkani, E

    2013-09-01

    Changing the configuration of a cooperative whole arm manipulator is not easy while enclosing an object. This difficulty is mainly because of risk of jamming caused by kinematic constraints. To reduce this risk, this paper proposes a feedback manipulation planning algorithm that takes grasp kinematics into account. The idea is based on a vector field that imposes perturbation in object motion inducing directions when the movement is considerably along manipulator redundant directions. Obstacle avoidance problem is then considered by combining the algorithm with sampling-based techniques. As experimental results confirm, the proposed algorithm is effective in avoiding jamming as well as obstacles for a 6-DOF dual arm whole arm manipulator. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  7. On excursion increments in heartbeat dynamics

    Guzmán-Vargas, L.; Reyes-Ramírez, I.; Hernández-Pérez, R.

    2013-01-01

    We study correlation properties of excursion increments of heartbeat time series from healthy subjects and heart failure patients. We construct the excursion time based on the original heartbeat time series, representing the time employed by the walker to return to the local mean value. Next, the detrended fluctuation analysis and the fractal dimension method are applied to the magnitude and sign of the increments in the time excursions between successive excursions for the mentioned groups. Our results show that for magnitude series of excursion increments both groups display long-range correlations with similar correlation exponents, indicating that large (small) increments (decrements) are more likely to be followed by large (small) increments (decrements). For sign sequences and for both groups, we find that increments are short-range anti-correlated, which is noticeable under heart failure conditions

  8. Increment memory module for spectrometric data recording

    Zhuchkov, A.A.; Myagkikh, A.I.

    1988-01-01

    An incremental memory unit designed for recording differential energy spectra of nuclear radiation is described. Using a ROM as the incrementing device has made it possible to reduce the number of elements and to simplify information readout from the unit. The memory unit is organized as 2048 channels of 12 bits each. The device is connected directly to the bus of microprocessor systems similar to the KR 580. The maximal incrementation time is 3 µs. It is possible to use this unit in multichannel counting mode

  9. Small Diameter Bomb Increment II (SDB II)

    2015-12-01

    Selected Acquisition Report (SAR), RCS: DD-A&T(Q&A)823-439, Small Diameter Bomb Increment II (SDB II), as of the FY 2017 President's Budget. DoD Component: Air Force; Joint Participants: Department of the Navy. Mission and Description: Small Diameter Bomb Increment II (SDB II) is a joint interest United States Air Force (USAF) and Department of the Navy ...

  10. Incremental Query Rewriting with Resolution

    Riazanov, Alexandre; Aragão, Marcelo A. T.

    We address the problem of semantic querying of relational databases (RDB) modulo knowledge bases using very expressive knowledge representation formalisms, such as full first-order logic or its various fragments. We propose to use a resolution-based first-order logic (FOL) reasoner for computing schematic answers to deductive queries, with the subsequent translation of these schematic answers to SQL queries which are evaluated using a conventional relational DBMS. We call our method incremental query rewriting, because an original semantic query is rewritten into a (potentially infinite) series of SQL queries. In this chapter, we outline the main idea of our technique: using abstractions of databases and constrained clauses for deriving schematic answers; and we provide completeness and soundness proofs to justify the applicability of this technique to the case of resolution for FOL without equality. The proposed method can be directly used with regular RDBs, including legacy databases. Moreover, we propose it as a potential basis for an efficient Web-scale semantic search technology.

  11. Broad Learning System: An Effective and Efficient Incremental Learning System Without the Need for Deep Architecture.

    Chen, C L Philip; Liu, Zhulin

    2018-01-01

    Broad Learning System (BLS), which aims to offer an alternative way of learning in deep structure, is proposed in this paper. Deep structure and learning suffer from a time-consuming training process because of a large number of connecting parameters in filters and layers. Moreover, they require a complete retraining process if the structure is not sufficient to model the system. The BLS is established in the form of a flat network, where the original inputs are transferred and placed as "mapped features" in feature nodes and the structure is expanded in the wide sense in the "enhancement nodes." Incremental learning algorithms are developed for fast remodeling in broad expansion without a retraining process if the network needs to be expanded. Two incremental learning algorithms are given: one for the increment of the feature nodes (or filters in deep structure) and one for the increment of the enhancement nodes. The designed model and algorithms are very versatile for selecting a model rapidly. In addition, another incremental learning algorithm is developed for the case in which a system that has already been modeled encounters new incoming inputs. Specifically, the system can be remodeled in an incremental way without retraining entirely from the beginning. A satisfactory model reduction using singular value decomposition is conducted to simplify the final structure. Compared with existing deep neural networks, experimental results on the Modified National Institute of Standards and Technology database and the NYU NORB object recognition dataset benchmark data demonstrate the effectiveness of the proposed BLS.

  12. A sampling-based approach to probabilistic pursuit evasion

    Mahadevan, Aditya

    2012-05-01

    Probabilistic roadmaps (PRMs) are a sampling-based approach to motion-planning that encodes feasible paths through the environment using a graph created from a subset of valid positions. Prior research has shown that PRMs can be augmented with useful information to model interesting scenarios related to multi-agent interaction and coordination. © 2012 IEEE.

  13. An incremental anomaly detection model for virtual machines

    Zhang, Hancui; Chen, Shuyu; Liu, Jun; Zhou, Zhen; Wu, Tianshu

    2017-01-01

    Self-Organizing Map (SOM) algorithm as an unsupervised learning method has been applied in anomaly detection due to its capabilities of self-organizing and automatic anomaly prediction. However, because the algorithm is initialized randomly, it takes a long time to train a detection model. Besides, Cloud platforms with large-scale virtual machines are prone to performance anomalies due to their highly dynamic and resource-sharing characteristics, which makes the algorithm exhibit low accuracy and low scalability. To address these problems, an Improved Incremental Self-Organizing Map (IISOM) model is proposed for anomaly detection of virtual machines. In this model, a heuristic-based initialization algorithm and a Weighted Euclidean Distance (WED) algorithm are introduced into SOM to speed up the training process and improve model quality. Meanwhile, a neighborhood-based searching algorithm is presented to accelerate the detection time by taking into account the large-scale and highly dynamic features of virtual machines on cloud platforms. To demonstrate the effectiveness, experiments on a common benchmark KDD Cup dataset and a real dataset have been performed. Results suggest that IISOM has advantages in accuracy and convergence velocity of anomaly detection for virtual machines on cloud platforms. PMID:29117245

  14. An incremental anomaly detection model for virtual machines.

    Hancui Zhang

    Full Text Available Self-Organizing Map (SOM) algorithm as an unsupervised learning method has been applied in anomaly detection due to its capabilities of self-organizing and automatic anomaly prediction. However, because the algorithm is initialized randomly, it takes a long time to train a detection model. Besides, Cloud platforms with large-scale virtual machines are prone to performance anomalies due to their highly dynamic and resource-sharing characteristics, which makes the algorithm exhibit low accuracy and low scalability. To address these problems, an Improved Incremental Self-Organizing Map (IISOM) model is proposed for anomaly detection of virtual machines. In this model, a heuristic-based initialization algorithm and a Weighted Euclidean Distance (WED) algorithm are introduced into SOM to speed up the training process and improve model quality. Meanwhile, a neighborhood-based searching algorithm is presented to accelerate the detection time by taking into account the large-scale and highly dynamic features of virtual machines on cloud platforms. To demonstrate the effectiveness, experiments on a common benchmark KDD Cup dataset and a real dataset have been performed. Results suggest that IISOM has advantages in accuracy and convergence velocity of anomaly detection for virtual machines on cloud platforms.
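
    The weighted Euclidean distance used when searching for the best-matching unit can be sketched in a few lines; the per-feature weight vector below is an illustrative importance vector, not the weighting scheme defined in the paper.

```python
import numpy as np

def best_matching_unit(codebook, x, w):
    """Index of the SOM node closest to x under a weighted Euclidean distance."""
    d = np.sqrt((((codebook - x) ** 2) * w).sum(axis=1))
    return int(np.argmin(d))

rng = np.random.default_rng(0)
codebook = rng.random((100, 8))      # 100 SOM nodes, 8 monitored metrics
x = rng.random(8)                    # one observation from a virtual machine
w = np.ones(8); w[:3] = 2.0          # emphasise the first three metrics (assumed weights)
print(best_matching_unit(codebook, x, w))
```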

  15. Variable screening and ranking using sampling-based sensitivity measures

    Wu, Y-T.; Mohanty, Sitakanta

    2006-01-01

    This paper presents a methodology for screening insignificant random variables and ranking significant important random variables using sensitivity measures including two cumulative distribution function (CDF)-based and two mean-response based measures. The methodology features (1) using random samples to compute sensitivities and (2) using acceptance limits, derived from the test-of-hypothesis, to classify significant and insignificant random variables. Because no approximation is needed in either the form of the performance functions or the type of continuous distribution functions representing input variables, the sampling-based approach can handle highly nonlinear functions with non-normal variables. The main characteristics and effectiveness of the sampling-based sensitivity measures are investigated using both simple and complex examples. Because the number of samples needed does not depend on the number of variables, the methodology appears to be particularly suitable for problems with large, complex models that have large numbers of random variables but relatively few numbers of significant random variables
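
    One common sampling-based, CDF-type sensitivity measure can be illustrated as follows: the same random sample is split according to whether the response falls above or below a threshold, and the two conditional input distributions are compared per variable; inputs whose conditional CDFs barely differ are candidates for screening out. This is a generic illustration using a Kolmogorov-Smirnov distance, not the specific acceptance-limit procedure of the paper.

```python
import numpy as np
from scipy.stats import ks_2samp

def cdf_sensitivity(X, y, threshold):
    """KS distance between input samples conditioned on y <= / > threshold, per variable."""
    low, high = y <= threshold, y > threshold
    return np.array([ks_2samp(X[low, j], X[high, j]).statistic for j in range(X.shape[1])])

# Illustrative model: only the first two of five inputs actually matter.
rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 5))
y = 3.0 * X[:, 0] + X[:, 1] ** 2 + 0.1 * rng.normal(size=5000)
print(cdf_sensitivity(X, y, np.median(y)))   # large values flag influential variables
```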

  16. Sampling Based Trajectory Planning for Robots in Dynamic Human Environments

    Svenstrup, Mikael

    2010-01-01

    Open-ended human environments, such as pedestrian streets, hospital corridors, train stations etc., are places where robots start to emerge. Hence, being able to plan safe and natural trajectories in these dynamic environments is an important skill for future generations of robots. In this work...... the problem is formulated as planning a minimal cost trajectory through a potential field, defined from the perceived position and motion of persons in the environment. A modified Rapidly-exploring Random Tree (RRT) algorithm is proposed as a solution to the planning problem. The algorithm implements a new...... for the uncertainty in the dynamic environment. The planning algorithm is demonstrated in a simulated pedestrian street environment....
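
    A minimal RRT sketch in the plane is shown below: the tree is grown in small steps toward random samples, and a toy cost function around a single "person" stands in for the person-centred potential field described above. The modified extension and cost-handling rules of the actual planner are not reproduced.

```python
import math
import random

def cost(p):
    # Stand-in potential field: cost grows near a "person" standing at (5, 5).
    return 1.0 / (0.1 + math.dist(p, (5.0, 5.0)))

def rrt(start, goal, iters=3000, step=0.3, goal_tol=0.5, cost_limit=3.0):
    nodes, parent = [start], {0: None}
    for _ in range(iters):
        q = (random.uniform(0, 10), random.uniform(0, 10))            # random sample
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], q))
        n, d = nodes[i], math.dist(nodes[i], q)
        if d == 0.0:
            continue
        new = (n[0] + step * (q[0] - n[0]) / d, n[1] + step * (q[1] - n[1]) / d)
        if cost(new) > cost_limit:            # reject extensions into high-potential regions
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) < goal_tol:   # goal reached: walk back to the root
            path, j = [], len(nodes) - 1
            while j is not None:
                path.append(nodes[j])
                j = parent[j]
            return path[::-1]
    return None                               # no admissible path found in time

print(rrt((0.0, 0.0), (9.0, 9.0)))
```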

  17. Efficient Incremental Checkpointing of Java Programs

    Lawall, Julia Laetitia; Muller, Gilles

    2000-01-01

    This paper investigates the optimization of language-level checkpointing of Java programs. First, we describe how to systematically associate incremental checkpoints with Java classes. While being safe, the genericness of this solution induces substantial execution overhead. Second, to solve...

  18. Progressive sampling-based Bayesian optimization for efficient and automatic machine learning model selection.

    Zeng, Xueqiang; Luo, Gang

    2017-12-01

    Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state-of-the-art automatic selection method, our method can significantly reduce search time, classification error rate, and the standard deviation of the error rate due to randomization. This is major progress towards enabling fast turnaround in identifying the high-quality solutions required by many machine learning-based clinical data analysis tasks.
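
    The progressive-sampling part of the idea can be sketched as follows: candidate hyper-parameter values are evaluated on progressively larger subsets of the training data, and the weaker half is discarded at each round. The Bayesian-optimization component of the paper's method is not shown, and the logistic-regression learner, the candidate list and the growth schedule are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def progressive_search(X, y, candidates, start=200, grow=2, keep_frac=0.5):
    """Evaluate hyper-parameter candidates on growing samples, pruning the worst half."""
    n = len(y)
    rng = np.random.default_rng(0)
    size = min(start, n)
    while len(candidates) > 1 and size <= n:
        idx = rng.choice(n, size=size, replace=False)           # progressively larger sample
        scores = [cross_val_score(LogisticRegression(C=c, max_iter=1000),
                                  X[idx], y[idx], cv=3).mean() for c in candidates]
        order = np.argsort(scores)[::-1]                         # best candidates first
        keep = max(1, int(len(candidates) * keep_frac))
        candidates = [candidates[i] for i in order[:keep]]       # drop the weaker half
        size *= grow                                             # enlarge the sample
    return candidates[0]

# Hypothetical usage with a regularization grid:
# best_C = progressive_search(X_train, y_train, candidates=[0.01, 0.1, 1.0, 10.0])
```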

  19. An Incremental Weighted Least Squares Approach to Surface Lights Fields

    Coombe, Greg; Lastra, Anselmo

    An Image-Based Rendering (IBR) approach to appearance modelling enables the capture of a wide variety of real physical surfaces with complex reflectance behaviour. The challenges with this approach are handling the large amount of data, rendering the data efficiently, and previewing the model as it is being constructed. In this paper, we introduce the Incremental Weighted Least Squares approach to the representation and rendering of spatially and directionally varying illumination. Each surface patch consists of a set of Weighted Least Squares (WLS) node centers, which are low-degree polynomial representations of the anisotropic exitant radiance. During rendering, the representations are combined in a non-linear fashion to generate a full reconstruction of the exitant radiance. The rendering algorithm is fast, efficient, and implemented entirely on the GPU. The construction algorithm is incremental, which means that images are processed as they arrive instead of in the traditional batch fashion. This human-in-the-loop process enables the user to preview the model as it is being constructed and to adapt to over-sampling and under-sampling of the surface appearance.

  20. Incremental inverse kinematics based vision servo for autonomous robotic capture of non-cooperative space debris

    Dong, Gangqi; Zhu, Z. H.

    2016-04-01

    This paper proposes a new incremental inverse kinematics based vision servo approach for robotic manipulators to capture a non-cooperative target autonomously. The target's pose and motion are estimated by a vision system using integrated photogrammetry and an EKF algorithm. Based on the estimated pose and motion of the target, the instantaneous desired position of the end-effector is predicted by inverse kinematics and the robotic manipulator is moved incrementally from its current configuration subject to the joint speed limits. This approach effectively eliminates the multiple solutions in the inverse kinematics and increases the robustness of the control algorithm. The proposed approach is validated by a hardware-in-the-loop simulation, where the pose and motion of the non-cooperative target are estimated by a real vision system. The simulation results demonstrate the effectiveness and robustness of the proposed estimation approach for the target and the incremental control strategy for the robotic manipulator.
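
    The incremental idea, moving the manipulator a bounded step toward the predicted end-effector target at every control cycle, can be illustrated with a planar two-link arm. The Jacobian pseudo-inverse step and the joint-speed clipping below are generic stand-ins for the paper's controller, and the link lengths and limits are arbitrary.

```python
import numpy as np

L1, L2 = 1.0, 0.8                          # link lengths of a toy planar arm

def fk(q):
    """End-effector position of the two-link arm for joint angles q."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def incremental_step(q, target, dt=0.02, qdot_max=0.5):
    """One control cycle: pseudo-inverse IK step, clipped to the joint speed limits."""
    dx = target - fk(q)                              # Cartesian error to the predicted pose
    qdot = np.linalg.pinv(jacobian(q)) @ (dx / dt)   # joint velocity that would cancel it
    qdot = np.clip(qdot, -qdot_max, qdot_max)        # respect the joint speed limits
    return q + qdot * dt                             # move incrementally from the current config

q = np.array([0.3, 0.6])
q = incremental_step(q, target=np.array([1.2, 0.9]))  # one cycle toward the predicted target
```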

  1. Storage capacity of the Tilinglike Learning Algorithm

    Buhot, Arnaud; Gordon, Mirta B.

    2001-01-01

    The storage capacity of an incremental learning algorithm for the parity machine, the Tilinglike Learning Algorithm, is analytically determined in the limit of a large number of hidden perceptrons. Different learning rules for the simple perceptron are investigated. The usual Gardner-Derrida rule leads to a storage capacity close to the upper bound, which is independent of the learning algorithm considered

  2. On the Perturb-and-Observe and Incremental Conductance MPPT methods for PV systems

    Sera, Dezso; Mathe, Laszlo; Kerekes, Tamas

    2013-01-01

    This paper presents a detailed analysis of the two most well-known hill-climbing MPPT algorithms, the Perturb-and-Observe (P&O) and Incremental Conductance (INC). The purpose of the analysis is to clarify some common misconceptions in the literature regarding these two trackers, therefore helping...
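
    The incremental-conductance test analysed in the paper reduces to a few lines: at the maximum power point dI/dV = -I/V, and the sign of the mismatch tells the tracker which way to move the operating voltage. The fixed step size and the return convention below are illustrative choices.

```python
def inc_cond_step(v, i, v_prev, i_prev, dv_step=0.5):
    """One incremental-conductance MPPT update; returns the new voltage reference."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0.0:                      # voltage unchanged: fall back to the current change
        if di == 0.0:
            return v                   # operating point has not moved, hold
        return v + dv_step if di > 0.0 else v - dv_step
    inc_cond = di / dv                 # incremental conductance dI/dV
    if inc_cond == -i / v:
        return v                       # dI/dV == -I/V: at the maximum power point
    # dP/dV = I + V*dI/dV, so a positive mismatch means the MPP lies at higher voltage.
    return v + dv_step if inc_cond > -i / v else v - dv_step
```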

  3. Combining Compact Representation and Incremental Generation in Large Games with Sequential Strategies

    Bosansky, Branislav; Xin Jiang, Albert; Tambe, Milind

    2015-01-01

    representation of sequential strategies and linear programming, or by incremental strategy generation of iterative double-oracle methods. In this paper, we present a novel hybrid of these two approaches: the compact-strategy double-oracle (CS-DO) algorithm, which combines the advantages of the compact representation...

  4. Growth increments in teeth of Diictodon (Therapsida

    J. Francis Thackeray

    1991-09-01

    Full Text Available Growth increments circa 0.02 mm in width have been observed in sectioned tusks of Diictodon from the Late Permian lower Beaufort succession of the South African Karoo, dated between about 260 and 245 million years ago. Mean growth increments show a decline from relatively high values in the Tropidostoma/Endothiodon Assemblage Zone, to lower values in the Aulacephalodon/Cistecephalus zone, declining still further in the Dicynodon lacerticeps/Whaitsia zone at the end of the Permian. These changes coincide with gradual changes in carbon isotope ratios measured from Diictodon tooth apatite. It is suggested that the decline in growth increments is related to environmental changes associated with a decline in primary production which contributed to the decline in abundance and ultimate extinction of Diictodon.

  5. Theory of Single Point Incremental Forming

    Martins, P.A.F.; Bay, Niels; Skjødt, Martin

    2008-01-01

    This paper presents a closed-form theoretical analysis modelling the fundamentals of single point incremental forming and explaining the experimental and numerical results available in the literature for the past couple of years. The model is based on membrane analysis with bi-directional in-plane contact friction and is focused on the extreme modes of deformation that are likely to be found in single point incremental forming processes. The overall investigation is supported by experimental work performed by the authors and data retrieved from the literature.

  6. Quantum Ensemble Classification: A Sampling-Based Learning Control Approach.

    Chen, Chunlin; Dong, Daoyi; Qi, Bo; Petersen, Ian R; Rabitz, Herschel

    2017-06-01

    Quantum ensemble classification (QEC) has significant applications in discrimination of atoms (or molecules), separation of isotopes, and quantum information extraction. However, quantum mechanics forbids deterministic discrimination among nonorthogonal states. The classification of inhomogeneous quantum ensembles is very challenging, since there exist variations in the parameters characterizing the members within different classes. In this paper, we recast QEC as a supervised quantum learning problem. A systematic classification methodology is presented by using a sampling-based learning control (SLC) approach for quantum discrimination. The classification task is accomplished via simultaneously steering members belonging to different classes to their corresponding target states (e.g., mutually orthogonal states). First, a new discrimination method is proposed for two similar quantum systems. Then, an SLC method is presented for QEC. Numerical results demonstrate the effectiveness of the proposed approach for the binary classification of two-level quantum ensembles and the multiclass classification of multilevel quantum ensembles.

  7. Automatic incrementalization of Prolog based static analyses

    Eichberg, Michael; Kahl, Matthias; Saha, Diptikalyan

    2007-01-01

    Modern development environments integrate various static analyses into the build process. Analyses that analyze the whole project whenever the project changes are impractical in this context. We present an approach to automatic incrementalization of analyses that are specified as tabled logic programs and evaluated using incremental tabled evaluation, a technique for efficiently updating memo tables in response to changes in facts and rules. The approach has been implemented and integrated into the Eclipse IDE. Our measurements show that this technique is effective for automatically

  8. Incremental Integrity Checking: Limitations and Possibilities

    Christiansen, Henning; Martinenghi, Davide

    2005-01-01

    Integrity checking is an essential means for the preservation of the intended semantics of a deductive database. Incrementality is the only feasible approach to checking and can be obtained with respect to given update patterns by exploiting query optimization techniques. By reducing the problem...... to query containment, we show that no procedure exists that always returns the best incremental test (aka simplification of integrity constraints), and this according to any reasonable criterion measuring the checking effort. In spite of this theoretical limitation, we develop an effective procedure...

  9. History Matters: Incremental Ontology Reasoning Using Modules

    Cuenca Grau, Bernardo; Halaschek-Wiener, Christian; Kazakov, Yevgeny

    The development of ontologies involves continuous but relatively small modifications. Existing ontology reasoners, however, do not take advantage of the similarities between different versions of an ontology. In this paper, we propose a technique for incremental reasoning—that is, reasoning that reuses information obtained from previous versions of an ontology—based on the notion of a module. Our technique does not depend on a particular reasoning calculus and thus can be used in combination with any reasoner. We have applied our results to incremental classification of OWL DL ontologies and found significant improvement over regular classification time on a set of real-world ontologies.

  10. Automatic Motion Generation for Robotic Milling Optimizing Stiffness with Sample-Based Planning

    Julian Ricardo Diaz Posada

    2017-01-01

    Optimal and intuitive robotic machining is still a challenge. One of the main reasons for this is the lack of robot stiffness, which is also dependent on the robot positioning in the Cartesian space. To make up for this deficiency, and with the aim of increasing robot machining accuracy, this contribution describes a solution approach for optimizing the stiffness over a desired milling path using the free degree of freedom of the machining process. The optimal motion is computed based on the semantic and mathematical interpretation of the manufacturing process, modeled on its components: product, process and resource; and by automatically configuring a sample-based motion problem and the transition-based rapidly-exploring random tree algorithm for computing an optimal motion. The approach is simulated in CAM software for a machining path, revealing its functionality and outlining future potential for optimal motion generation for robotic machining processes.

  11. Existing School Buildings: Incremental Seismic Retrofit Opportunities.

    Federal Emergency Management Agency, Washington, DC.

    The intent of this document is to provide technical guidance to school district facility managers for linking specific incremental seismic retrofit opportunities to specific maintenance and capital improvement projects. The linkages are based on logical affinities, such as technical fit, location of the work within the building, cost saving…

  12. Automatic incrementalization of Prolog based static analyses

    Eichberg, Michael; Kahl, Matthias; Saha, Diptikalyan

    2007-01-01

    Modern development environments integrate various static analyses into the build process. Analyses that analyze the whole project whenever the project changes are impractical in this context. We present an approach to automatic incrementalization of analyses that are specified as tabled logic prog...

  13. The Cognitive Underpinnings of Incremental Rehearsal

    Varma, Sashank; Schleisman, Katrina B.

    2014-01-01

    Incremental rehearsal (IR) is a flashcard technique that has been developed and evaluated by school psychologists. We discuss potential learning and memory effects from cognitive psychology that may explain the observed superiority of IR over other flashcard techniques. First, we propose that IR is a form of "spaced practice" that…

  14. A Batch-Incremental Video Background Estimation Model using Weighted Low-Rank Approximation of Matrices

    Dutta, Aritra

    2017-07-02

    Principal component pursuit (PCP) is a state-of-the-art approach for background estimation problems. Due to their higher computational cost, PCP algorithms, such as robust principal component analysis (RPCA) and its variants, are not feasible in processing high definition videos. To avoid the curse of dimensionality in those algorithms, several methods have been proposed to solve the background estimation problem in an incremental manner. We propose a batch-incremental background estimation model using a special weighted low-rank approximation of matrices. Through experiments with real and synthetic video sequences, we demonstrate that our method is superior to the state-of-the-art background estimation algorithms such as GRASTA, ReProCS, incPCP, and GFL.

  15. A Batch-Incremental Video Background Estimation Model using Weighted Low-Rank Approximation of Matrices

    Dutta, Aritra; Li, Xin; Richtarik, Peter

    2017-01-01

    Principal component pursuit (PCP) is a state-of-the-art approach for background estimation problems. Due to their higher computational cost, PCP algorithms, such as robust principal component analysis (RPCA) and its variants, are not feasible in processing high definition videos. To avoid the curse of dimensionality in those algorithms, several methods have been proposed to solve the background estimation problem in an incremental manner. We propose a batch-incremental background estimation model using a special weighted low-rank approximation of matrices. Through experiments with real and synthetic video sequences, we demonstrate that our method is superior to the state-of-the-art background estimation algorithms such as GRASTA, ReProCS, incPCP, and GFL.

  16. Optimal Output of Distributed Generation Based On Complex Power Increment

    Wu, D.; Bao, H.

    2017-12-01

    In order to meet the growing demand for electricity and improve the cleanliness of power generation, new energy generation, represented by wind power generation, photovoltaic power generation, etc., has been widely used. This new energy generation is connected to the distribution network in the form of distributed generation and is consumed by local load. However, as the scale of distributed generation connected to the network increases, the optimization of its power output becomes more and more important and needs further study. Classical optimization methods often use the extended sensitivity method to obtain the relationship between different power generators, but ignoring the coupling parameters between nodes makes the results inaccurate; heuristic algorithms also have defects such as slow calculation speed and uncertain outcomes. This article proposes a method called complex power increment; the essence of this method is the analysis of the power grid under steady power flow. After analyzing the results we can obtain the complex scaling function equation between the power supplies. The coefficients of the equation are based on the impedance parameters of the network, so the description of the relation of the variables to the coefficients is more precise. Thus, the method can accurately describe the power increment relationship, and can obtain the power optimization scheme more accurately and quickly than the extended sensitivity method and heuristic methods.

  17. Chemometric classification of casework arson samples based on gasoline content.

    Sinkov, Nikolai A; Sandercock, P Mark L; Harynuk, James J

    2014-02-01

    Detection and identification of ignitable liquids (ILs) in arson debris is a critical part of arson investigations. The challenge of this task is due to the complex and unpredictable chemical nature of arson debris, which also contains pyrolysis products from the fire. ILs, most commonly gasoline, are complex chemical mixtures containing hundreds of compounds that will be consumed or otherwise weathered by the fire to varying extents depending on factors such as temperature, air flow, the surface on which IL was placed, etc. While methods such as ASTM E-1618 are effective, data interpretation can be a costly bottleneck in the analytical process for some laboratories. In this study, we address this issue through the application of chemometric tools. Prior to the application of chemometric tools such as PLS-DA and SIMCA, issues of chromatographic alignment and variable selection need to be addressed. Here we use an alignment strategy based on a ladder consisting of perdeuterated n-alkanes. Variable selection and model optimization were automated using a hybrid backward elimination (BE) and forward selection (FS) approach guided by the cluster resolution (CR) metric. In this work, we demonstrate the automated construction, optimization, and application of chemometric tools to casework arson data. The resulting PLS-DA and SIMCA classification models, trained with 165 training set samples, have provided classification of 55 validation set samples based on gasoline content with 100% specificity and sensitivity. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
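
    For orientation only, the following is a minimal PLS-DA sketch on synthetic data, using scikit-learn's PLSRegression against one-hot class labels; it is not the authors' pipeline, and all data, dimensions and parameter values are made up for illustration.

```python
# Minimal PLS-DA sketch (not the authors' pipeline): PLS regression against
# one-hot class labels, with the predicted column taken as the class score.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
# Synthetic "chromatographic" features for two classes (gasoline / no gasoline).
X_pos = rng.normal(loc=1.0, size=(40, 50))
X_neg = rng.normal(loc=0.0, size=(40, 50))
X = np.vstack([X_pos, X_neg])
y = np.array([1] * 40 + [0] * 40)

# One-hot encode the labels for the PLS-DA formulation.
Y = np.column_stack([y == 0, y == 1]).astype(float)

pls = PLSRegression(n_components=2)
pls.fit(X, Y)

# Classify by the column with the largest predicted response.
pred = pls.predict(X).argmax(axis=1)
print("training accuracy:", (pred == y).mean())
```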

  18. The Toggle Local Planner for sampling-based motion planning

    Denny, Jory

    2012-05-01

    Sampling-based solutions to the motion planning problem, such as the probabilistic roadmap method (PRM), have become commonplace in robotics applications. These solutions are the norm as the dimensionality of the planning space grows, i.e., d > 5. An important primitive of these methods is the local planner, which is used for validation of simple paths between two configurations. The most common is the straight-line local planner which interpolates along the straight line between the two configurations. In this paper, we introduce a new local planner, Toggle Local Planner (Toggle LP), which extends local planning to a two-dimensional subspace of the overall planning space. If no path exists between the two configurations in the subspace, then Toggle LP is guaranteed to correctly return false. Intuitively, more connections could be found by Toggle LP than by the straight-line planner, resulting in better connected roadmaps. As shown in our results, this is the case, and additionally, the extra cost, in terms of time or storage, for Toggle LP is minimal. Additionally, our experimental analysis of the planner shows the benefit for a wide array of robots, with DOF as high as 70. © 2012 IEEE.
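
    As background for the comparison above, here is a minimal sketch of the straight-line local planner primitive: interpolate between two configurations and accept the edge only if every intermediate sample passes a validity test. The is_valid function is a hypothetical placeholder for a real collision checker.

```python
# Sketch of a straight-line local planner: interpolate between two
# configurations and report whether every intermediate sample is valid.
# `is_valid` stands in for a real collision checker (hypothetical here).
import numpy as np

def is_valid(q):
    # Placeholder validity test: stay inside the unit hypercube.
    return np.all(q >= 0.0) and np.all(q <= 1.0)

def straight_line_local_planner(q1, q2, resolution=0.01):
    q1, q2 = np.asarray(q1, float), np.asarray(q2, float)
    n_steps = max(2, int(np.ceil(np.linalg.norm(q2 - q1) / resolution)))
    for t in np.linspace(0.0, 1.0, n_steps):
        if not is_valid((1.0 - t) * q1 + t * q2):
            return False            # an intermediate configuration is invalid
    return True                     # the simple path q1 -> q2 is accepted

print(straight_line_local_planner([0.1, 0.2], [0.9, 0.8]))
```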

  19. The more you learn, the less you store : Memory-controlled incremental SVM for visual place recognition

    Pronobis, Andrzej; Jie, Luo; Caputo, Barbara

    2010-01-01

    The capability to learn from experience is a key property for autonomous cognitive systems working in realistic settings. To this end, this paper presents an SVM-based algorithm, capable of learning model representations incrementally while keeping under control memory requirements. We combine an incremental extension of SVMs [43] with a method reducing the number of support vectors needed to build the decision function without any loss in performance [15] introducing a parameter which permit...
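
    The exact memory-controlled incremental SVM of this record is not reproduced here; as a rough, simplified stand-in, the sketch below shows batch-wise incremental training of a linear SVM objective with scikit-learn's partial_fit, which illustrates the general idea of updating a discriminative model as new data arrives.

```python
# Simplified stand-in for incremental SVM training (not the memory-controlled
# method of the record): a hinge-loss linear model trained with SGD,
# updated chunk by chunk via partial_fit.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
clf = SGDClassifier(loss="hinge")      # hinge loss ~ linear SVM objective
classes = np.array([0, 1])

for batch in range(5):                 # data arriving in chunks
    X = rng.normal(size=(100, 10))
    y = (X[:, 0] + 0.1 * rng.normal(size=100) > 0).astype(int)
    clf.partial_fit(X, y, classes=classes)   # model refined incrementally

X_test = rng.normal(size=(200, 10))
y_test = (X_test[:, 0] > 0).astype(int)
print("held-out accuracy:", clf.score(X_test, y_test))
```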

  20. Statistics of wind direction and its increments

    Doorn, Eric van; Dhruva, Brindesh; Sreenivasan, Katepalli R.; Cassella, Victor

    2000-01-01

    We study some elementary statistics of wind direction fluctuations in the atmosphere for a wide range of time scales (10^-4 sec to 1 h), and in both vertical and horizontal planes. In the plane parallel to the ground surface, the direction time series consists of two parts: a constant drift due to large weather systems moving with the mean wind speed, and fluctuations about this drift. The statistics of the direction fluctuations show a rough similarity to Brownian motion but depend, in detail, on the wind speed. This dependence manifests itself quite clearly in the statistics of wind-direction increments over various intervals of time. These increments are intermittent during periods of low wind speeds but Gaussian-like during periods of high wind speeds. (c) 2000 American Institute of Physics

  1. Evolving effective incremental SAT solvers with GP

    Bader, Mohamed; Poli, R.

    2008-01-01

    Hyper-heuristics can simply be defined as heuristics that choose other heuristics; they are a way of combining existing heuristics to generate new ones. Here, a hyper-heuristic framework is used for evolving effective incremental (Inc*) solvers for SAT. We test the evolved heuristics (IncHH) against other known local search heuristics on a variety of benchmark SAT problems.

  2. Shakedown analysis by finite element incremental procedures

    Borkowski, A.; Kleiber, M.

    1979-01-01

    It is a common occurrence in many practical problems that external loads are variable and the exact time-dependent history of loading is unknown. Instead, the load is characterized by a given loading domain: a convex polyhedron in the n-dimensional space of load parameters. The problem is then to check whether or not a structure shakes down to a variable loading as defined above, i.e. responds elastically after a few elasto-plastic cycles. Such a check can be performed by an incremental procedure. One should reproduce incrementally a simple cyclic process which consists of proportional load paths that connect the origin of the load space with the corners of the loading domain. It was proved that if a structure shakes down to such a loading history then it is able to adapt itself to an arbitrary load path contained in the loading domain. The main advantage of such an approach is the possibility of using existing incremental finite-element computer codes. (orig.)

  3. Automated Dimension Determination for NMF-based Incremental Collaborative Filtering

    Xiwei Wang

    2015-12-01

    The nonnegative matrix factorization (NMF) based collaborative filtering techniques have achieved great success in product recommendations. It is well known that in NMF, the dimensions of the factor matrices have to be determined in advance. Moreover, data is growing fast; thus in some cases, the dimensions need to be changed to reduce the approximation error. The recommender systems should be capable of updating new data in a timely manner without sacrificing the prediction accuracy. In this paper, we propose an NMF based data update approach with automated dimension determination for collaborative filtering purposes. The approach can determine the dimensions of the factor matrices and update them automatically. It exploits the nearest neighborhood based clustering algorithm to cluster users and items according to their auxiliary information, and uses the clusters as the constraints in NMF. The dimensions of the factor matrices are associated with the cluster quantities. When new data becomes available, the incremental clustering algorithm determines whether to increase the number of clusters or merge the existing clusters. Experiments on three different datasets (MovieLens, Sushi, and LibimSeTi) were conducted to examine the proposed approach. The results show that our approach can update the data quickly and provide encouraging prediction accuracy.

  4. Incremental fuzzy C medoids clustering of time series data using dynamic time warping distance

    Chen, Jingli; Wu, Shuai; Liu, Zhizhong; Chao, Hao

    2018-01-01

    Clustering time series data is of great significance since it could extract meaningful statistics and other characteristics. Especially in biomedical engineering, outstanding clustering algorithms for time series may help improve the health level of people. Considering data scale and time shifts of time series, in this paper, we introduce two incremental fuzzy clustering algorithms based on a Dynamic Time Warping (DTW) distance. For recruiting Single-Pass and Online patterns, our algorithms could handle large-scale time series data by splitting it into a set of chunks which are processed sequentially. Besides, our algorithms select DTW to measure distance of pair-wise time series and encourage higher clustering accuracy because DTW could determine an optimal match between any two time series by stretching or compressing segments of temporal data. Our new algorithms are compared to some existing prominent incremental fuzzy clustering algorithms on 12 benchmark time series datasets. The experimental results show that the proposed approaches could yield high quality clusters and were better than all the competitors in terms of clustering accuracy. PMID:29795600

  5. Incremental fuzzy C medoids clustering of time series data using dynamic time warping distance.

    Liu, Yongli; Chen, Jingli; Wu, Shuai; Liu, Zhizhong; Chao, Hao

    2018-01-01

    Clustering time series data is of great significance since it could extract meaningful statistics and other characteristics. Especially in biomedical engineering, outstanding clustering algorithms for time series may help improve the health level of people. Considering data scale and time shifts of time series, in this paper, we introduce two incremental fuzzy clustering algorithms based on a Dynamic Time Warping (DTW) distance. For recruiting Single-Pass and Online patterns, our algorithms could handle large-scale time series data by splitting it into a set of chunks which are processed sequentially. Besides, our algorithms select DTW to measure distance of pair-wise time series and encourage higher clustering accuracy because DTW could determine an optimal match between any two time series by stretching or compressing segments of temporal data. Our new algorithms are compared to some existing prominent incremental fuzzy clustering algorithms on 12 benchmark time series datasets. The experimental results show that the proposed approaches could yield high quality clusters and were better than all the competitors in terms of clustering accuracy.
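
    The two records above rely on the DTW distance between pairs of time series; the following is a minimal dynamic-programming DTW sketch for 1-D series (the fuzzy C-medoids clustering itself is omitted).

```python
# Minimal dynamic time warping (DTW) distance between two 1-D series,
# as used above to compare pair-wise time series (clustering code omitted).
import numpy as np

def dtw_distance(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # best of insertion, deletion, match
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

print(dtw_distance([0, 1, 2, 3, 2, 1], [0, 0, 1, 2, 3, 2, 1, 0]))
```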

  6. Numerical simulation of pseudoelastic shape memory alloys using the large time increment method

    Gu, Xiaojun; Zhang, Weihong; Zaki, Wael; Moumni, Ziad

    2017-04-01

    The paper presents a numerical implementation of the large time increment (LATIN) method for the simulation of shape memory alloys (SMAs) in the pseudoelastic range. The method was initially proposed as an alternative to the conventional incremental approach for the integration of nonlinear constitutive models. It is adapted here for the simulation of pseudoelastic SMA behavior using the Zaki-Moumni model and is shown to be especially useful in situations where the phase transformation process presents little or no hardening. In these situations, a slight stress variation in a load increment can result in large variations of strain and local state variables, which may lead to difficulties in numerical convergence. In contrast to the conventional incremental method, the LATIN method solves the global equilibrium and local consistency conditions sequentially for the entire loading path. The achieved solution must satisfy the conditions of static and kinematic admissibility and consistency simultaneously after several iterations. The 3D numerical implementation is accomplished using an implicit algorithm and is then used for finite element simulation using the software Abaqus. Computational tests demonstrate the ability of this approach to simulate SMAs presenting flat phase transformation plateaus and subjected to complex loading cases, such as the quasi-static behavior of a stent structure. Some numerical results are contrasted to those obtained using step-by-step incremental integration.

  7. 48 CFR 3432.771 - Provision for incremental funding.

    2010-10-01

    ... 48 Federal Acquisition Regulations System 7 2010-10-01 2010-10-01 false Provision for incremental funding. 3432.771 Section 3432.771 Federal Acquisition Regulations System DEPARTMENT OF EDUCATION..., Incremental Funding, in a solicitation if a cost-reimbursement contract using incremental funding is...

  8. Two models of minimalist, incremental syntactic analysis.

    Stabler, Edward P

    2013-07-01

    Minimalist grammars (MGs) and multiple context-free grammars (MCFGs) are weakly equivalent in the sense that they define the same languages, a large mildly context-sensitive class that properly includes context-free languages. But in addition, for each MG, there is an MCFG which is strongly equivalent in the sense that it defines the same language with isomorphic derivations. However, the structure-building rules of MGs but not MCFGs are defined in a way that generalizes across categories. Consequently, MGs can be exponentially more succinct than their MCFG equivalents, and this difference shows in parsing models too. An incremental, top-down beam parser for MGs is defined here, sound and complete for all MGs, and hence also capable of parsing all MCFG languages. But since the parser represents its grammar transparently, the relative succinctness of MGs is again evident. Although the determinants of MG structure are narrowly and discretely defined, probabilistic influences from a much broader domain can influence even the earliest analytic steps, allowing frequency and context effects to come early and from almost anywhere, as expected in incremental models. Copyright © 2013 Cognitive Science Society, Inc.

  9. Incremental Nonnegative Matrix Factorization for Face Recognition

    Wen-Sheng Chen

    2008-01-01

    Nonnegative matrix factorization (NMF) is a promising approach for local feature extraction in face recognition tasks. However, there are two major drawbacks in almost all existing NMF-based methods. One shortcoming is that the computational cost is expensive for large matrix decomposition. The other is that they must conduct repetitive learning when the training samples or classes are updated. To overcome these two limitations, this paper proposes a novel incremental nonnegative matrix factorization (INMF) for face representation and recognition. The proposed INMF approach is based on a novel constraint criterion and our previous block strategy. It thus has some good properties, such as low computational complexity and a sparse coefficient matrix. Also, the coefficient column vectors between different classes are orthogonal. In particular, it can be applied to incremental learning. Two face databases, namely the FERET and CMU PIE face databases, are selected for evaluation. Compared with PCA and some state-of-the-art NMF-based methods, our INMF approach gives the best performance.
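
    For context, the sketch below shows the standard multiplicative-update NMF baseline on which such methods build; it is not the INMF of this record (no block constraints, no incremental update), and the data are synthetic.

```python
# Standard multiplicative-update NMF (context only; the INMF of the record
# adds block/incremental constraints on top of this kind of baseline).
import numpy as np

def nmf(V, r, n_iter=200, eps=1e-9):
    n, m = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((n, r)) + eps
    H = rng.random((r, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update coefficients
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update basis vectors
    return W, H

# Synthetic nonnegative data standing in for vectorized face images.
V = np.abs(np.random.default_rng(1).normal(size=(100, 40)))
W, H = nmf(V, r=10)
print("relative reconstruction error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```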

  10. [Incremental cost effectiveness of multifocal cataract surgery].

    Pagel, N; Dick, H B; Krummenauer, F

    2007-02-01

    Supplementation of cataract patients with multifocal intraocular lenses involves an additional financial investment when compared to the corresponding monofocal supplementation, which usually is not funded by German health care insurers. In the context of recent resource allocation discussions, however, the cost effectiveness of multifocal cataract surgery could become an important rationale. Therefore an evidence-based estimation of its cost effectiveness was carried out. Three independent meta-analyses were implemented to estimate the gain in uncorrected near visual acuity and best corrected visual acuity (vision lines) as well as the predictability (fraction of patients without need for reading aids) of multifocal supplementation. Study reports published between 1995 and 2004 (English or German language) were screened for appropriate key words. Meta effects in visual gain and predictability were estimated by means and standard deviations of the reported effect measures. Cost data were estimated by German DRG rates and individual lens costs; the cost effectiveness of multifocal cataract surgery was then computed in terms of its marginal cost effectiveness ratio (MCER) for each clinical benefit endpoint; the incremental costs of multifocal versus monofocal cataract surgery were further estimated by means of their respective incremental cost effectiveness ratio (ICER). An independent meta-analysis estimated the complication profiles to be expected after monofocal and multifocal cataract surgery in order to evaluate the expectable complication-associated additional costs of both procedures; the marginal and incremental cost effectiveness estimates were adjusted accordingly. A sensitivity analysis comprised cost variations of +/- 10 % and utility variations alongside the meta effect estimates' 95 % confidence intervals. Total direct costs from the health care insurer's perspective were estimated at 3363 euro, associated with a visual meta benefit in best corrected visual

  11. Incremental Adaptive Fuzzy Control for Sensorless Stroke Control of A Halbach-type Linear Oscillatory Motor

    Lei, Meizhen; Wang, Liqiang

    2018-01-01

    The Halbach-type linear oscillatory motor (HT-LOM) is multi-variable, highly coupled, nonlinear and uncertain, and it is difficult to obtain a satisfactory result with conventional PID control. An incremental adaptive fuzzy controller (IAFC) for stroke tracking is presented, which combines the merits of PID control, the fuzzy inference mechanism and the adaptive algorithm. The integral operation is added to the conventional fuzzy control algorithm. The fuzzy scale factor can be tuned online according to the load force and the stroke command. The simulation results indicate that the proposed control scheme can achieve satisfactory stroke tracking performance and is robust with respect to parameter variations and external disturbance.

  12. Rapid Transfer Alignment of MEMS SINS Based on Adaptive Incremental Kalman Filter.

    Chu, Hairong; Sun, Tingting; Zhang, Baiqiang; Zhang, Hongwei; Chen, Yang

    2017-01-14

    In airborne MEMS SINS transfer alignment, the error of MEMS IMU is highly environment-dependent and the parameters of the system model are also uncertain, which may lead to large error and bad convergence of the Kalman filter. In order to solve this problem, an improved adaptive incremental Kalman filter (AIKF) algorithm is proposed. First, the model of SINS transfer alignment is defined based on the "Velocity and Attitude" matching method. Then the detailed algorithm progress of AIKF and its recurrence formulas are presented. The performance and calculation amount of AKF and AIKF are also compared. Finally, a simulation test is designed to verify the accuracy and the rapidity of the AIKF algorithm by comparing it with KF and AKF. The results show that the AIKF algorithm has better estimation accuracy and shorter convergence time, especially for the bias of the gyroscope and the accelerometer, which can meet the accuracy and rapidity requirement of transfer alignment.

  13. Rapid Transfer Alignment of MEMS SINS Based on Adaptive Incremental Kalman Filter

    Hairong Chu

    2017-01-01

    In airborne MEMS SINS transfer alignment, the error of MEMS IMU is highly environment-dependent and the parameters of the system model are also uncertain, which may lead to large error and bad convergence of the Kalman filter. In order to solve this problem, an improved adaptive incremental Kalman filter (AIKF) algorithm is proposed. First, the model of SINS transfer alignment is defined based on the “Velocity and Attitude” matching method. Then the detailed algorithm progress of AIKF and its recurrence formulas are presented. The performance and calculation amount of AKF and AIKF are also compared. Finally, a simulation test is designed to verify the accuracy and the rapidity of the AIKF algorithm by comparing it with KF and AKF. The results show that the AIKF algorithm has better estimation accuracy and shorter convergence time, especially for the bias of the gyroscope and the accelerometer, which can meet the accuracy and rapidity requirement of transfer alignment.
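
    The adaptive incremental Kalman filter itself is not reproduced here; as background, the following sketch shows the standard linear Kalman filter predict/update step on a toy constant-velocity example, with all matrices chosen purely for illustration.

```python
# Standard linear Kalman filter predict/update step, shown only as the
# baseline on which adaptive/incremental variants such as AIKF build.
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with measurement z
    y = z - H @ x_pred                          # innovation
    S = H @ P_pred @ H.T + R                    # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy 1-D constant-velocity example (all values are illustrative).
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2)
R = np.array([[0.01]])

rng = np.random.default_rng(0)
x, P = np.zeros(2), np.eye(2)
for k in range(50):
    z = np.array([0.5 * k * dt + 0.1 * rng.standard_normal()])  # noisy position
    x, P = kalman_step(x, P, z, F, H, Q, R)
print("estimated position and velocity:", x)
```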

  14. Development of a Sampling-Based Global Sensitivity Analysis Workflow for Multiscale Computational Cancer Models

    Wang, Zhihui; Deisboeck, Thomas S.; Cristini, Vittorio

    2014-01-01

    There are two challenges that researchers face when performing global sensitivity analysis (GSA) on multiscale in silico cancer models. The first is increased computational intensity, since a multiscale cancer model generally takes longer to run than does a scale-specific model. The second problem is the lack of a best GSA method that fits all types of models, which implies that multiple methods and their sequence need to be taken into account. In this article, we therefore propose a sampling-based GSA workflow consisting of three phases – pre-analysis, analysis, and post-analysis – by integrating Monte Carlo and resampling methods with the repeated use of analysis of variance (ANOVA); we then exemplify this workflow using a two-dimensional multiscale lung cancer model. By accounting for all parameter rankings produced by multiple GSA methods, a summarized ranking is created at the end of the workflow based on the weighted mean of the rankings for each input parameter. For the cancer model investigated here, this analysis reveals that ERK, a downstream molecule of the EGFR signaling pathway, has the most important impact on regulating both the tumor volume and expansion rate in the algorithm used. PMID:25257020
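
    As a toy illustration of the sampling phase of such a workflow (not the authors' multi-method, ANOVA-based pipeline), the sketch below draws Monte Carlo samples of the inputs of a hypothetical model and ranks the parameters by the absolute correlation between each input and the output.

```python
# Minimal sampling-based sensitivity screening (illustrative only; the
# workflow in the record combines several GSA methods and ANOVA).
import numpy as np

def toy_model(p):
    # Hypothetical scalar output of a simulation with 4 input parameters.
    return 3.0 * p[0] + 0.5 * p[1] ** 2 + 0.1 * p[2] + 0.0 * p[3]

rng = np.random.default_rng(0)
n_samples, n_params = 2000, 4
P = rng.uniform(0.0, 1.0, size=(n_samples, n_params))   # Monte Carlo sampling
Y = np.array([toy_model(p) for p in P])

# Rank parameters by absolute correlation between each input and the output.
scores = [abs(np.corrcoef(P[:, j], Y)[0, 1]) for j in range(n_params)]
ranking = np.argsort(scores)[::-1]
print("parameter ranking (most to least influential):", ranking)
```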

  15. A Sampling Based Approach to Spacecraft Autonomous Maneuvering with Safety Specifications

    Starek, Joseph A.; Barbee, Brent W.; Pavone, Marco

    2015-01-01

    This paper presents a method for safe spacecraft autonomous maneuvering that applies robotic motion-planning techniques to spacecraft control. Specifically, the scenario we consider is an in-plane rendezvous of a chaser spacecraft in proximity to a target spacecraft at the origin of the Clohessy-Wiltshire-Hill frame. The trajectory for the chaser spacecraft is generated in a receding-horizon fashion by executing a sampling-based robotic motion-planning algorithm named Fast Marching Trees (FMT), which efficiently grows a tree of trajectories over a set of probabilistically drawn samples in the state space. To enforce safety, the tree is only grown over actively safe samples for which there exists a one-burn collision avoidance maneuver that circularizes the spacecraft orbit along a collision-free coasting arc and that can be executed under potential thruster failures. The overall approach establishes a provably correct framework for the systematic encoding of safety specifications into the spacecraft trajectory generation process and appears amenable to real-time implementation on orbit. Simulation results are presented for a two-fault-tolerant spacecraft during autonomous approach to a single client in Low Earth Orbit.

  16. A design of LED adaptive dimming lighting system based on incremental PID controller

    He, Xiangyan; Xiao, Zexin; He, Shaojia

    2010-11-01

    As a new generation energy-saving lighting source, LED is applied widely in various technology and industry fields. The requirements of its adaptive lighting technology are more and more rigorous, especially in automatic on-line detecting systems. In this paper, a closed-loop feedback LED adaptive dimming lighting system based on an incremental PID controller is designed, which consists of a MEGA16 chip as the Micro-controller Unit (MCU), the ambient light sensor BH1750 chip with an Inter-Integrated Circuit (I2C) interface, and a constant-current driving circuit. A given value of the light intensity required for the on-line detecting environment needs to be saved to a register of the MCU. The optical intensity, detected by the BH1750 chip in real time, is converted to a digital signal by the AD converter of the BH1750 chip, and then transmitted to the MEGA16 chip through the I2C serial bus. Since the variation law of light intensity in the on-line detecting environment is usually not easy to establish, the incremental Proportional-Integral-Differential (PID) algorithm is applied in this system. The control variable obtained by the incremental PID determines the duty cycle of the Pulse-Width Modulation (PWM). Consequently, the LED's forward current is adjusted by PWM, and the luminous intensity of the detection environment is stabilized by self-adaptation. The coefficients of the incremental PID are obtained from experiments. Compared with the traditional LED dimming system, it has the advantages of anti-interference, simple construction, fast response, and high stability owing to the use of the incremental PID algorithm and the BH1750 chip with the I2C serial bus. Therefore, it is suitable for adaptive on-line detecting applications.
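
    The incremental (velocity-form) PID law referred to above computes only the change in the control output at each cycle, delta_u = Kp*(e_k - e_{k-1}) + Ki*e_k + Kd*(e_k - 2*e_{k-1} + e_{k-2}), which is then added to the previous PWM duty cycle. A minimal sketch follows; the gains, the scaling factor and the sensor/actuator hooks are illustrative placeholders, not values from the paper.

```python
# Incremental (velocity-form) PID: only the change in control output is
# computed each cycle, then added to the previous PWM duty cycle.
# Gains and the lux values below are illustrative placeholders.
class IncrementalPID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.e1 = 0.0   # e[k-1]
        self.e2 = 0.0   # e[k-2]

    def step(self, setpoint, measurement):
        e = setpoint - measurement
        # du[k] = Kp*(e[k]-e[k-1]) + Ki*e[k] + Kd*(e[k]-2*e[k-1]+e[k-2])
        du = (self.kp * (e - self.e1)
              + self.ki * e
              + self.kd * (e - 2.0 * self.e1 + self.e2))
        self.e2, self.e1 = self.e1, e
        return du

pid = IncrementalPID(kp=0.8, ki=0.2, kd=0.05)
duty = 0.5                                    # current PWM duty cycle
lux_target, lux_measured = 500.0, 430.0       # hypothetical sensor reading
duty += pid.step(lux_target, lux_measured) * 1e-3   # scale increment to duty
duty = min(max(duty, 0.0), 1.0)
print("new duty cycle:", duty)
```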

  17. Algorithming the Algorithm

    Mahnke, Martina; Uprichard, Emma

    2014-01-01

    Imagine sailing across the ocean. The sun is shining, vastness all around you. And suddenly [BOOM] you’ve hit an invisible wall. Welcome to the Truman Show! Ever since Eli Pariser published his thoughts on a potential filter bubble, this movie scenario seems to have become reality, just with slight...... changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...

  18. Seismic noise attenuation using an online subspace tracking algorithm

    Zhou, Yatong; Li, Shuhua; Zhang, D.; Chen, Yangkang

    2018-01-01

    We propose a new low-rank based noise attenuation method using an efficient algorithm for tracking subspaces from highly corrupted seismic observations. The subspace tracking algorithm requires only basic linear algebraic manipulations. The algorithm is derived by analysing incremental gradient

  19. Numerical Simulation of Incremental Sheet Forming by Simplified Approach

    Delamézière, A.; Yu, Y.; Robert, C.; Ayed, L. Ben; Nouari, M.; Batoz, J. L.

    2011-01-01

    The Incremental Sheet Forming (ISF) is a process which can transform a flat metal sheet into a 3D complex part using a hemispherical tool. The final geometry of the product is obtained by the relative movement between this tool and the blank. The main advantage of the process is that the cost of the tool is very low compared to deep drawing with rigid tools. The main disadvantage is the very low velocity of the tool and thus the large amount of time needed to form the part. Classical contact algorithms give good agreement with experimental results, but are time consuming. A Simplified Approach for the contact management between the tool and the blank in ISF is presented here. The general principle of this approach is to impose the displacement of the nodes in contact with the tool at a given position. On a benchmark part, the CPU time of the present Simplified Approach is significantly reduced compared with a classical simulation performed with Abaqus implicit.

  20. Software designs of image processing tasks with incremental refinement of computation.

    Anastasia, Davide; Andreopoulos, Yiannis

    2010-08-01

    Software realizations of computationally-demanding image processing tasks (e.g., image transforms and convolution) do not currently provide graceful degradation when their clock-cycle budgets are reduced, e.g., when delay deadlines are imposed in a multitasking environment to meet throughput requirements. This is an important obstacle in the quest for full utilization of modern programmable platforms' capabilities since worst-case considerations must be in place for reasonable quality of results. In this paper, we propose (and make available online) platform-independent software designs performing bitplane-based computation combined with an incremental packing framework in order to realize block transforms, 2-D convolution and frame-by-frame block matching. The proposed framework realizes incremental computation: progressive processing of input-source increments improves the output quality monotonically. Comparisons with the equivalent nonincremental software realization of each algorithm reveal that, for the same precision of the result, the proposed approach can lead to comparable or faster execution, while it can be arbitrarily terminated and provide the result up to the computed precision. Application examples with region-of-interest based incremental computation, task scheduling per frame, and energy-distortion scalability verify that our proposal provides significant performance scalability with graceful degradation.
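
    To illustrate the general idea of bitplane-based incremental computation (a simplification, not the paper's packing framework), the sketch below computes an integer dot product bitplane by bitplane, most significant bit first; stopping early yields a coarser but usable approximation, and processing all bitplanes recovers the exact result.

```python
# Toy illustration of incremental refinement: a dot product computed
# bitplane by bitplane (MSB first). Stopping early yields a coarser but
# usable result; all bitplanes together reproduce the exact value.
import numpy as np

rng = np.random.default_rng(0)
x = rng.integers(0, 256, size=1000, dtype=np.int64)   # 8-bit input block
w = rng.integers(-8, 9, size=1000, dtype=np.int64)    # small integer kernel

exact = int(np.dot(x, w))
partial = 0
for b in range(7, -1, -1):                 # process bitplanes MSB -> LSB
    plane = (x >> b) & 1                   # extract bitplane b
    partial += (1 << b) * int(np.dot(plane, w))
    err = abs(exact - partial) / max(1, abs(exact))
    print(f"after bitplane {b}: relative error = {err:.4f}")
```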

  1. Incrementally Detecting Change Types of Spatial Area Object: A Hierarchical Matching Method Considering Change Process

    Yanhui Wang

    2018-01-01

    Detecting and extracting the change types of spatial area objects can track area objects' spatiotemporal change patterns and provide a change backtracking mechanism for incrementally updating spatial datasets. To address the problems of the high complexity of detection methods, the high redundancy rate of detection factors, and the low degree of automation during the incremental update process, we take into account the change process of area objects in an integrated way and propose a hierarchical matching method to detect the nine types of changes of area objects, while minimizing the complexity of the algorithm and the redundancy rate of detection factors. We illustrate in detail the identification, extraction, and database entry of change types, and how we achieve a close connection and organic coupling of incremental information extraction and object type-of-change detection so as to characterize the whole change process. The experimental results show that this method can successfully detect incremental information about area objects in practical applications, with the overall accuracy reaching above 90%, which is much higher than the existing weighted matching method, making it quite feasible and applicable. It helps establish the corresponding relation between new-version and old-version objects, and facilitates the linked update processing and quality control of spatial data.

  2. Tracking and recognition face in videos with incremental local sparse representation model

    Wang, Chao; Wang, Yunhong; Zhang, Zhaoxiang

    2013-10-01

    This paper addresses the problem of tracking and recognizing faces via incremental local sparse representation. First, a robust face tracking algorithm is proposed by employing local sparse appearance and a covariance pooling method. In the following face recognition stage, with the employment of a novel template update strategy, which combines incremental subspace learning, our recognition algorithm adapts the template to appearance changes and reduces the influence of occlusion and illumination variation. This leads to robust video-based face tracking and recognition with desirable performance. In the experiments, we test the quality of face recognition in real-world noisy videos on the YouTube database, which includes 47 celebrities. Our proposed method produces a high face recognition rate of 95% across all videos. The proposed face tracking and recognition algorithms are also tested on a set of noisy videos under heavy occlusion and illumination variation. The tracking results on challenging benchmark videos demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods. In the case of the challenging dataset in which faces undergo occlusion and illumination variation, and in tracking and recognition experiments under significant pose variation on the University of California, San Diego (Honda/UCSD) database, our proposed method also consistently demonstrates a high recognition rate.

  3. Incremental fold tests of remagnetized carbonate rocks

    Van Der Voo, R.; van der Pluijm, B.

    2017-12-01

    Many unmetamorphosed carbonates all over the world are demonstrably remagnetized, with the age of the secondary magnetizations typically close to that of the nearest orogeny in space and time. This observation did not become compelling until the mid-1980's, when the incremental fold test revealed the Appalachian carbonates to carry a syn-deformational remanence of likely Permian age (Scotese et al., 1982, Phys. Earth Planet. Int., v. 30, p. 385-395; Cederquist et al., 2006, Tectonophysics v. 422, p. 41-54). Since that time scores of Appalachian and Rocky Mountain carbonate rocks have added results to the growing database of paleopoles representing remagnetizations. Late Paleozoic remagnetizations form a cloud of results surrounding the reference poles of the Laurentian APWP. Remagnetizations in other locales and with inferred ages coeval with regional orogenies (e.g., Taconic, Sevier/Laramide, Variscan, Indosinian) are also ubiquitous. To be able to transform this cornucopia into valuable anchor-points on the APWP would be highly desirable. This may indeed become feasible, as will be explained next. Recent studies of faulted and folded carbonate-shale sequences have shown that this deformation enhances the illitization of smectite (Haines & van der Pluijm, 2008, Jour. Struct. Geol., v. 30, p. 525-538; Fitz-Diaz et al., 2014, International Geol. Review, v. 56, p. 734-755). 39Ar-40Ar dating of the authigenic illite (neutralizing any detrital illite contribution by taking the intercept of a mixing line) yields, therefore, the age of the deformation. We know that this date is also the age of the syndeformational remanence; thus we have the age of the corresponding paleopole. Results so far are obtained for the Canadian and U.S. Rocky Mountains and for the Spanish Cantabrian carbonates (Tohver et al., 2008, Earth Planet. Sci. Lett., v. 274, p. 524-530) and make good sense in accord with geological knowledge. Incremental fold tests are the tools used for this

  4. Incremental View Maintenance for Deductive Graph Databases Using Generalized Discrimination Networks

    Thomas Beyhl

    2016-12-01

    Nowadays, graph databases are employed when relationships between entities are in the scope of database queries, to avoid the performance-critical join operations of relational databases. Graph queries are used to query and modify graphs stored in graph databases. Graph queries employ graph pattern matching, which is NP-complete for subgraph isomorphism. Graph database views can be employed to keep ready answers, in terms of precalculated graph pattern matches, for frequently stated and complex graph queries in order to increase query performance. However, such graph database views must be kept consistent with the graphs stored in the graph database. In this paper, we describe how to use incremental graph pattern matching as a technique for maintaining graph database views. We present an incremental maintenance algorithm for graph database views, which works for imperatively and declaratively specified graph queries. The evaluation shows that our maintenance algorithm scales as the number of nodes and edges stored in the graph database increases. Furthermore, our evaluation shows that our approach can outperform existing approaches for the incremental maintenance of graph query results.
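
    As a toy illustration of incremental graph pattern matching (far simpler than the generalized discrimination networks described above), the sketch below maintains all matches of a fixed two-hop path pattern a -> b -> c and, when an edge is inserted, adds only the matches that involve the new edge.

```python
# Tiny illustration of incremental pattern matching: maintain all matches of
# the two-hop pattern a -> b -> c and update them when a new edge arrives,
# touching only matches that involve the inserted edge.
from collections import defaultdict

out_edges = defaultdict(set)   # adjacency: src -> {dst}
in_edges = defaultdict(set)    # adjacency: dst -> {src}
matches = set()                # cached view: all (a, b, c) with a->b and b->c

def insert_edge(u, v):
    out_edges[u].add(v)
    in_edges[v].add(u)
    # New matches must use the new edge either as a->b or as b->c.
    for c in out_edges[v]:
        matches.add((u, v, c))
    for a in in_edges[u]:
        matches.add((a, u, v))

for edge in [(1, 2), (2, 3), (3, 4), (2, 4)]:
    insert_edge(*edge)
print(sorted(matches))   # [(1, 2, 3), (1, 2, 4), (2, 3, 4)]
```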

  5. Incremental passivity and output regulation for switched nonlinear systems

    Pang, Hongbo; Zhao, Jun

    2017-10-01

    This paper studies incremental passivity and global output regulation for switched nonlinear systems whose subsystems are not required to be incrementally passive. A concept of incremental passivity for switched systems is put forward. First, a switched system is rendered incrementally passive by the design of a state-dependent switching law. Second, feedback incremental passification is achieved by the design of a state-dependent switching law and a set of state feedback controllers. Finally, we show that once incremental passivity for switched nonlinear systems is assured, the output regulation problem is solved by the design of global nonlinear regulator controllers comprising two components: the steady-state control and the linear output feedback stabilising controllers, even though the problem is not solvable for any of the subsystems. Two examples are presented to illustrate the effectiveness of the proposed approach.

  6. Towards a multiconfigurational method of increments

    Fertitta, E.; Koch, D.; Paulus, B.; Barcza, G.; Legeza, Ö.

    2018-06-01

    The method of increments (MoI) allows one to successfully calculate cohesive energies of bulk materials with high accuracy, but it encounters difficulties when calculating dissociation curves. The reason is that its standard formalism is based on a single Hartree-Fock (HF) configuration whose orbitals are localised and used for the many-body expansion. In situations where HF does not allow a size-consistent description of the dissociation, the MoI cannot be guaranteed to yield proper results either. Herein, we address the problem by employing a size-consistent multiconfigurational reference for the MoI formalism. This leads to a matrix equation where a coupling derived from the reference itself is employed. In principle, such an approach allows one to evaluate approximate values for the ground-state as well as excited-state energies. While the latter are accurate close to the avoided crossing only, the ground state results are very promising for the whole dissociation curve, as shown by the comparison with density matrix renormalisation group benchmarks. We tested this two-state constant-coupling MoI on beryllium rings of different sizes and studied the error introduced by the constant coupling.

  7. Natural Gas pipelines: economics of incremental capacity

    Kimber, M.

    2000-01-01

    A number of gas transmission pipeline systems in Australia exhibit capacity constraints, and yet there is little evidence of creative or innovative processes from either the service providers or the regulators which might provide a market-based response to these constraints. There is no provision in the Code in its current form to allow it to accommodate these processes. This aspect is one of many that require review to make the Code work. It is unlikely that the current members of the National Gas Pipeline Advisory Committee (NGPAC) or its advisers have sufficient understanding of the analysis of risk and the consequential commercial drivers to implement the necessary changes. As a result, the Code will increasingly lose touch with the commercial realities of the energy market and will continue to inhibit investment in new and expanded infrastructure where market risk is present. The recent report prepared for the Business Council of Australia indicates a need to re-vitalise the energy reform process. It is important for the Australian energy industry to provide leadership and advice to governments to continue the process of reform, and, in particular, to amend the Code to make it more relevant. These amendments must include a mechanism by which price signals can be generated to provide timely and effective information for existing service providers or new entrants to install incremental pipeline capacity

  8. Evolution of cooperation driven by incremental learning

    Li, Pei; Duan, Haibin

    2015-02-01

    It has been shown that the details of microscopic rules in structured populations can have a crucial impact on the ultimate outcome in evolutionary games. So alternative formulations of strategies and their revision processes exploring how strategies are actually adopted and spread within the interaction network need to be studied. In the present work, we formulate the strategy update rule as an incremental learning process, wherein knowledge is refreshed according to one's own experience learned from the past (self-learning) and that gained from social interaction (social-learning). More precisely, we propose a continuous version of strategy update rules, by introducing the willingness to cooperate W, to better capture the flexibility of decision making behavior. Importantly, the newly gained knowledge including self-learning and social learning is weighted by the parameter ω, establishing a strategy update rule involving innovative element. Moreover, we quantify the macroscopic features of the emerging patterns to inspect the underlying mechanisms of the evolutionary process using six cluster characteristics. In order to further support our results, we examine the time evolution course for these characteristics. Our results might provide insights for understanding cooperative behaviors and have several important implications for understanding how individuals adjust their strategies under real-life conditions.

  9. Power variation for Gaussian processes with stationary increments

    Barndorff-Nielsen, Ole Eiler; Corcuera, J.M.; Podolskij, Mark

    2009-01-01

    We develop the asymptotic theory for the realised power variation of the processes X=•G, where G is a Gaussian process with stationary increments. More specifically, under some mild assumptions on the variance function of the increments of G and certain regularity conditions on the path of the process...... a chaos representation.
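
    For reference, the realised power variation statistic referred to above is usually defined, for n equidistant increments over [0,1], in the following generic (unnormalised) form; the exact normalisation varies between papers, so this is only the standard textbook definition, not necessarily the precise statistic of this record.

```latex
% Unnormalised realised power variation of order p based on
% n equidistant increments of the process X over [0,1].
\[
  V(X, p)_n \;=\; \sum_{i=1}^{n} \bigl| X_{i/n} - X_{(i-1)/n} \bigr|^{p}
\]
```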

  10. Scalable, incremental learning with MapReduce parallelization for cell detection in high-resolution 3D microscopy data

    Sung, Chul

    2013-08-01

    Accurate estimation of neuronal count and distribution is central to the understanding of the organization and layout of cortical maps in the brain, and changes in the cell population induced by brain disorders. High-throughput 3D microscopy techniques such as Knife-Edge Scanning Microscopy (KESM) are enabling whole-brain survey of neuronal distributions. Data from such techniques pose serious challenges to quantitative analysis due to the massive, growing, and sparsely labeled nature of the data. In this paper, we present a scalable, incremental learning algorithm for cell body detection that can address these issues. Our algorithm is computationally efficient (linear mapping, non-iterative) and does not require retraining (unlike gradient-based approaches) or retention of old raw data (unlike instance-based learning). We tested our algorithm on our rat brain Nissl data set, showing superior performance compared to an artificial neural network-based benchmark, and also demonstrated robust performance in a scenario where the data set is rapidly growing in size. Our algorithm is also highly parallelizable due to its incremental nature, and we demonstrated this empirically using a MapReduce-based implementation of the algorithm. We expect our scalable, incremental learning approach to be widely applicable to medical imaging domains where there is a constant flux of new data. © 2013 IEEE.

  11. Sound algorithms

    De Götzen , Amalia; Mion , Luca; Tache , Olivier

    2007-01-01

    We call sound algorithms the categories of algorithms that deal with the digital sound signal. Sound algorithms appeared in the very infancy of computing. Sound algorithms present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  12. Genetic algorithms

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.

  13. Incremental Structured Dictionary Learning for Video Sensor-Based Object Tracking

    Xue, Ming; Yang, Hua; Zheng, Shibao; Zhou, Yi; Yu, Zhenghua

    2014-01-01

    To tackle robust object tracking for video sensor-based applications, an online discriminative algorithm based on incremental discriminative structured dictionary learning (IDSDL-VT) is presented. In our framework, a discriminative dictionary combining positive, negative and trivial patches is designed to sparsely represent the overlapped target patches. Then, a local update (LU) strategy is proposed for sparse coefficient learning. To formulate the training and classification process, a multiple linear classifier group based on a K-combined voting (KCV) function is proposed. As the dictionary evolves, the models are also retrained to adapt to the target appearance variation in a timely manner. Qualitative and quantitative evaluations on challenging image sequences compared with state-of-the-art algorithms demonstrate that the proposed tracking algorithm achieves a more favorable performance. We also illustrate its relay application in visual sensor networks. PMID:24549252

  14. Incremental Structured Dictionary Learning for Video Sensor-Based Object Tracking

    Ming Xue

    2014-02-01

    To tackle robust object tracking for video sensor-based applications, an online discriminative algorithm based on incremental discriminative structured dictionary learning (IDSDL-VT) is presented. In our framework, a discriminative dictionary combining positive, negative and trivial patches is designed to sparsely represent the overlapped target patches. Then, a local update (LU) strategy is proposed for sparse coefficient learning. To formulate the training and classification process, a multiple linear classifier group based on a K-combined voting (KCV) function is proposed. As the dictionary evolves, the models are also retrained to adapt to the target appearance variation in a timely manner. Qualitative and quantitative evaluations on challenging image sequences compared with state-of-the-art algorithms demonstrate that the proposed tracking algorithm achieves a more favorable performance. We also illustrate its relay application in visual sensor networks.

  15. Improved incremental conductance method for maximum power point tracking using cuk converter

    M. Saad Saoud

    2014-03-01

    The Algerian government relies on a strategy focused on the development of inexhaustible resources such as solar energy, used to diversify energy sources and prepare the Algeria of tomorrow: about 40% of the electricity for domestic consumption will be produced from renewable sources by 2030. It is therefore necessary to concentrate efforts on reducing application costs and increasing performance. Performance is evaluated and compared through theoretical analysis and digital simulation. This paper presents a simulation of an improved incremental conductance method for maximum power point tracking (MPPT) using a DC-DC Cuk converter. This improved algorithm is used to track MPPs because it performs precise control under rapidly changing atmospheric conditions. Matlab/Simulink was employed for the simulation studies.
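
    The basic incremental conductance decision rule compares the incremental conductance dI/dV with the instantaneous conductance -I/V: at the maximum power point dP/dV = 0, hence dI/dV = -I/V. The sketch below implements this plain rule only; the duty-cycle step size, the tolerance, and the assumed converter behaviour (raising the duty cycle lowers the panel voltage) are illustrative assumptions and do not reproduce the paper's improved variant.

```python
def inc_cond_step(v, i, v_prev, i_prev, duty, step=0.005, tol=1e-3):
    """One incremental-conductance MPPT update of the converter duty cycle.

    Assumes (illustratively) that increasing the duty cycle lowers the
    panel operating voltage, as is typical for boost/Cuk-type input stages.
    """
    dv, di = v - v_prev, i - i_prev
    if abs(dv) < 1e-9:                       # operating voltage unchanged
        if abs(di) > tol:
            # dI > 0: more power available at a higher voltage -> lower duty.
            duty += -step if di > 0 else step
    else:
        g_inc, g_inst = di / dv, -i / v      # incremental vs. instantaneous conductance
        if g_inc > g_inst + tol:             # left of the MPP: raise the voltage
            duty -= step
        elif g_inc < g_inst - tol:           # right of the MPP: lower the voltage
            duty += step
        # otherwise: at the MPP, keep the current duty cycle
    return min(max(duty, 0.0), 1.0)

# Single illustrative update with made-up panel measurements.
print("updated duty cycle:", inc_cond_step(v=17.8, i=5.1, v_prev=17.5, i_prev=5.2, duty=0.45))
```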

  16. Chinese handwriting recognition an algorithmic perspective

    Su, Tonghua

    2013-01-01

    This book provides an algorithmic perspective on the recent development of Chinese handwriting recognition. Two technically sound strategies, the segmentation-free and integrated segmentation-recognition strategy, are investigated and algorithms that have worked well in practice are primarily focused on. Baseline systems are initially presented for these strategies and are subsequently expanded on and incrementally improved. The sophisticated algorithms covered include: 1) string sample expansion algorithms which synthesize string samples from isolated characters or distort realistic string samples; 2) enhanced feature representation algorithms, e.g. enhanced four-plane features and Delta features; 3) novel learning algorithms, such as Perceptron learning with dynamic margin, MPE training and distributed training; and lastly 4) ensemble algorithms, that is, combining the two strategies using both parallel structure and serial structure. All the while, the book moves from basic to advanced algorithms, helping ...

  17. One Step at a Time: SBM as an Incremental Process.

    Conrad, Mark

    1995-01-01

    Discusses incremental SBM budgeting and answers questions regarding resource equity, bookkeeping requirements, accountability, decision-making processes, and purchasing. Approaching site-based management as an incremental process recognizes that every school system engages in some level of site-based decisions. Implementation can be gradual and…

  18. Defense Agencies Initiative Increment 2 (DAI Inc 2)

    2016-03-01

    Fragments from the 2016 Major Automated Information System Annual Report for the Defense Agencies Initiative Increment 2 (DAI Inc 2), Defense Acquisition Management: "In an ADM dated September 23, 2013, the MDA established Increment 2 as a MAIS program to include budget formulation; grants financial..." Report acronyms: President's Budget; RDT&E - Research, Development, Test, and Evaluation; SAE - Service Acquisition Executive; TBD - To Be Determined; TY - Then Year

  19. Incrementality in naming and reading complex numerals: Evidence from eyetracking

    Korvorst, M.H.W.; Roelofs, A.P.A.; Levelt, W.J.M.

    2006-01-01

    Individuals speak incrementally when they interleave planning and articulation. Eyetracking, along with the measurement of speech onset latencies, can be used to gain more insight into the degree of incrementality adopted by speakers. In the current article, two eyetracking experiments are reported

  20. Lifetime costs of lung transplantation : Estimation of incremental costs

    VanEnckevort, PJ; Koopmanschap, MA; Tenvergert, EM; VanderBij, W; Rutten, FFH

    1997-01-01

    Despite an expanding number of centres which provide lung transplantation, information about the incremental costs of lung transplantation is scarce. From 1991 until 1995, in The Netherlands a technology assessment was performed which provided information about the incremental costs of lung

  1. Finance for incremental housing: current status and prospects for expansion

    Ferguson, B.; Smets, P.G.S.M.

    2010-01-01

    Appropriate finance can greatly increase the speed and lower the cost of incremental housing - the process used by much of the low/moderate-income majority of most developing countries to acquire shelter. Informal finance continues to dominate the funding of incremental housing. However, new sources

  2. Validation of the periodicity of growth increment deposition in ...

    Validation of the periodicity of growth increment deposition in otoliths from the larval and early juvenile stages of two cyprinids from the Orange–Vaal river ... Linear regression models were fitted to the known age post-fertilisation and the age estimated using increment counts to test the correspondence between the two for ...

  3. 76 FR 73475 - Immigration Benefits Business Transformation, Increment I; Correction

    2011-11-29

    ... Benefits Business Transformation, Increment I, 76 FR 53764 (Aug. 29, 2011). The final rule removed form... [CIS No. 2481-09; Docket No. USCIS-2009-0022] RIN 1615-AB83 Immigration Benefits Business Transformation, Increment I; Correction AGENCY: U.S. Citizenship and Immigration Services, DHS. ACTION: Final...

  4. 76 FR 53763 - Immigration Benefits Business Transformation, Increment I

    2011-08-29

    ..., 100, et al. Immigration Benefits Business Transformation, Increment I; Final Rule #0;#0;Federal... Benefits Business Transformation, Increment I AGENCY: U.S. Citizenship and Immigration Services, DHS... USCIS is engaged in an enterprise-wide transformation effort to implement new business processes and to...

  5. The Time Course of Incremental Word Processing during Chinese Reading

    Zhou, Junyi; Ma, Guojie; Li, Xingshan; Taft, Marcus

    2018-01-01

    In the current study, we report two eye movement experiments investigating how Chinese readers process incremental words during reading. These are words where some of the component characters constitute another word (an embedded word). In two experiments, eye movements were monitored while the participants read sentences with incremental words…

  6. Creating Helical Tool Paths for Single Point Incremental Forming

    Skjødt, Martin; Hancock, Michael H.; Bay, Niels

    2007-01-01

    Single point incremental forming (SPIF) is a relatively new sheet forming process. A sheet is clamped in a rig and formed incrementally using a rotating single point tool in the form of a rod with a spherical end. The process is often performed on a CNC milling machine and the tool movement...

  7. On conditional scalar increment and joint velocity-scalar increment statistics

    Zhang Hengbin; Wang Danhong; Tong Chenning

    2004-01-01

    Conditional velocity and scalar increment statistics are usually studied in the context of Kolmogorov's refined similarity hypotheses and are considered universal (quasi-Gaussian) for inertial-range separations. In such analyses the locally averaged energy and scalar dissipation rates are used as conditioning variables. Recent studies have shown that certain local turbulence structures can be captured when the local scalar variance (φ²)_r and the local kinetic energy k_r are used as the conditioning variables. We study the conditional increments using these conditioning variables, which also provide the local turbulence scales. Experimental data obtained in the fully developed region of an axisymmetric turbulent jet are used to compute the statistics. The conditional scalar increment probability density function (PDF) conditional on (φ²)_r is found to be close to Gaussian for (φ²)_r small compared with its mean and is sub-Gaussian and bimodal for large (φ²)_r, and therefore is not universal. We find that the different shapes of the conditional PDFs are related to the instantaneous degree of non-equilibrium (production larger than dissipation) of the local scalar. There is further evidence of this from the conditional PDF conditional on both (φ²)_r and χ_r, which is largely a function of (φ²)_r/χ_r, a measure of the degree of non-equilibrium. The velocity-scalar increment joint PDF is close to joint Gaussian and quad-modal for equilibrium and non-equilibrium local velocity and scalar, respectively. The latter shape is associated with a combination of the ramp-cliff and plane strain structures. Kolmogorov's refined similarity hypotheses also predict a dependence of the conditional PDF on the degree of non-equilibrium. Therefore, the quasi-Gaussian (joint) PDF, previously observed in the context of Kolmogorov's refined similarity hypotheses, is only one of the conditional PDF shapes of inertial range turbulence. The present study suggests that

  8. Two Topics in Data Analysis: Sample-based Optimal Transport and Analysis of Turbulent Spectra from Ship Track Data

    Kuang, Simeng Max

    This thesis contains two topics in data analysis. The first topic consists of the introduction of algorithms for sample-based optimal transport and barycenter problems. In chapter 1, a family of algorithms is introduced to solve both the L2 optimal transport problem and the Wasserstein barycenter problem. Starting from a theoretical perspective, the new algorithms are motivated from a key characterization of the barycenter measure, which suggests an update that reduces the total transportation cost and stops only when the barycenter is reached. A series of general theorems is given to prove the convergence of all the algorithms. We then extend the algorithms to solve sample-based optimal transport and barycenter problems, in which only finite sample sets are available instead of underlying probability distributions. A unique feature of the new approach is that it compares sample sets in terms of the expected values of a set of feature functions, which at the same time induce the function space of optimal maps and can be chosen by users to incorporate their prior knowledge of the data. All the algorithms are implemented and applied to various synthetic example and practical applications. On synthetic examples it is found that both the SOT algorithm and the SCB algorithm are able to find the true solution and often converge in a handful of iterations. On more challenging applications including Gaussian mixture models, color transfer and shape transform problems, the algorithms give very good results throughout despite the very different nature of the corresponding datasets. In chapter 2, a preconditioning procedure is developed for the L2 and more general optimal transport problems. The procedure is based on a family of affine map pairs, which transforms the original measures into two new measures that are closer to each other, while preserving the optimality of solutions. It is proved that the preconditioning procedure minimizes the remaining transportation cost
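
    As a point of reference only (not the thesis's feature-function approach), the discrete L2 optimal transport problem between two equal-size sample sets reduces to a linear assignment problem, which can be solved directly; the sketch below assumes small point clouds and uses SciPy's Hungarian-algorithm solver.

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def l2_ot_assignment(x, y):
        """Exact L2 optimal transport between two equal-size point clouds.

        x, y: arrays of shape (n, d). Returns the permutation sigma such that
        y[sigma] is the optimal match for x, and the total squared cost.
        """
        cost = ((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=-1)  # pairwise squared distances
        rows, cols = linear_sum_assignment(cost)                    # optimal one-to-one matching
        return cols, cost[rows, cols].sum()

    # Example with two small 2-D samples
    rng = np.random.default_rng(0)
    x = rng.normal(size=(50, 2))
    y = rng.normal(loc=2.0, size=(50, 2))
    sigma, total_cost = l2_ot_assignment(x, y)
    ```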

  9. Algorithmic cryptanalysis

    Joux, Antoine

    2009-01-01

    Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program.Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic

  10. An Incremental Type-2 Meta-Cognitive Extreme Learning Machine.

    Pratama, Mahardhika; Zhang, Guangquan; Er, Meng Joo; Anavatti, Sreenatha

    2017-02-01

    Existing extreme learning algorithms have not taken into account four issues: 1) complexity; 2) uncertainty; 3) concept drift; and 4) high dimensionality. A novel incremental type-2 meta-cognitive extreme learning machine (ELM) called evolving type-2 ELM (eT2ELM) is proposed to cope with the four issues in this paper. The eT2ELM presents three main pillars of human meta-cognition: 1) what-to-learn; 2) how-to-learn; and 3) when-to-learn. The what-to-learn component selects important training samples for model updates by virtue of the online certainty-based active learning method, which renders eT2ELM as a semi-supervised classifier. The how-to-learn element develops a synergy between extreme learning theory and the evolving concept, whereby the hidden nodes can be generated and pruned automatically from data streams with no tuning of hidden nodes. The when-to-learn constituent makes use of the standard sample reserved strategy. A generalized interval type-2 fuzzy neural network is also put forward as a cognitive component, in which a hidden node is built upon the interval type-2 multivariate Gaussian function while exploiting a subset of Chebyshev series in the output node. The efficacy of the proposed eT2ELM is numerically validated in 12 data streams containing various concept drifts. The numerical results are confirmed by thorough statistical tests, where the eT2ELM demonstrates the most encouraging numerical results in delivering reliable prediction, while sustaining low complexity.

  11. Efficiency of Oral Incremental Rehearsal versus Written Incremental Rehearsal on Students' Rate, Retention, and Generalization of Spelling Words

    Garcia, Dru; Joseph, Laurice M.; Alber-Morgan, Sheila; Konrad, Moira

    2014-01-01

    The purpose of this study was to examine the efficiency of an oral incremental rehearsal procedure versus a written incremental rehearsal procedure on a sample of primary-grade children's weekly spelling performance. Participants included five second graders and one first grader who were in need of help with their spelling according to their teachers. An…

  12. Statistical Discriminability Estimation for Pattern Classification Based on Neural Incremental Attribute Learning

    Wang, Ting; Guan, Sheng-Uei; Puthusserypady, Sadasivan

    2014-01-01

    Feature ordering is a significant data preprocessing method in Incremental Attribute Learning (IAL), a novel machine learning approach which gradually trains features according to a given order. Previous research has shown that, similar to feature selection, feature ordering is also important based...... estimation. Moreover, a criterion that summarizes all the produced values of AD is employed with a GA (Genetic Algorithm)-based approach to obtain the optimum feature ordering for classification problems based on neural networks by means of IAL. Compared with the feature ordering obtained by other approaches...

  13. Algorithmic mathematics

    Hougardy, Stefan

    2016-01-01

    Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.

  14. Total algorithms

    Tel, G.

    We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of

  15. Pornographic image recognition and filtering using incremental learning in compressed domain

    Zhang, Jing; Wang, Chao; Zhuo, Li; Geng, Wenhao

    2015-11-01

    With the rapid development and popularity of the network, the openness, anonymity, and interactivity of networks have led to the spread and proliferation of pornographic images on the Internet, which have done great harm to adolescents' physical and mental health. With the establishment of image compression standards, pornographic images are mainly stored with compressed formats. Therefore, how to efficiently filter pornographic images is one of the challenging issues for information security. A pornographic image recognition and filtering method in the compressed domain is proposed by using incremental learning, which includes the following steps: (1) low-resolution (LR) images are first reconstructed from the compressed stream of pornographic images, (2) visual words are created from the LR image to represent the pornographic image, and (3) incremental learning is adopted to continuously adjust the classification rules to recognize the new pornographic image samples after the covering algorithm is utilized to train and recognize the visual words in order to build the initial classification model of pornographic images. The experimental results show that the proposed pornographic image recognition method using incremental learning has a higher recognition rate as well as costing less recognition time in the compressed domain.

  16. Sampling-based nuclear data uncertainty quantification for continuous energy Monte-Carlo codes

    Zhu, T.

    2015-01-01

    Research on the uncertainty of nuclear data is motivated by practical necessity. Nuclear data uncertainties can propagate through nuclear system simulations into operation and safety related parameters. The tolerance for uncertainties in nuclear reactor design and operation can affect the economic efficiency of nuclear power, and essentially its sustainability. The goal of the present PhD research is to establish a methodology of nuclear data uncertainty quantification (NDUQ) for MCNPX, the continuous-energy Monte-Carlo (M-C) code. The high fidelity (continuous-energy treatment and flexible geometry modelling) of MCNPX makes it the choice for routine criticality safety calculations at PSI/LRS, but also raises challenges for NDUQ by conventional sensitivity/uncertainty (S/U) methods. For example, only recently in 2011, the capability of calculating continuous energy κ_eff sensitivity to nuclear data was demonstrated in certain M-C codes by using the method of iterated fission probability. The methodology developed during this PhD research is fundamentally different from the conventional S/U approach: nuclear data are treated as random variables and sampled in accordance with presumed probability distributions. When sampled nuclear data are used in repeated model calculations, the output variance is attributed to the collective uncertainties of nuclear data. The NUSS (Nuclear data Uncertainty Stochastic Sampling) tool is based on this sampling approach and implemented to work with MCNPX's ACE format of nuclear data, which also gives NUSS compatibility with MCNP and SERPENT M-C codes. In contrast, multigroup uncertainties are used for the sampling of ACE-formatted pointwise-energy nuclear data in a groupwise manner due to the more limited quantity and quality of nuclear data uncertainties. Conveniently, the usage of multigroup nuclear data uncertainties allows consistent comparison between NUSS and other methods (both S/U and sampling-based) that employ the same
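
    The core stochastic-sampling idea behind tools such as NUSS can be illustrated in a few lines, independently of any reactor physics code: perturb the input parameters according to an assumed covariance, rerun the model, and attribute the output spread to the input uncertainties. The model function and covariance below are toy placeholders, not nuclear data.

    ```python
    import numpy as np

    def sampling_uq(model, mean, cov, n_samples=500, seed=0):
        """Propagate input uncertainties through `model` by random sampling.

        model: callable mapping a parameter vector to a scalar output
        mean, cov: assumed mean vector and covariance of the input parameters
        Returns the sample mean and standard deviation of the model output.
        """
        rng = np.random.default_rng(seed)
        params = rng.multivariate_normal(mean, cov, size=n_samples)   # sampled input sets
        outputs = np.array([model(p) for p in params])                # repeated model runs
        return outputs.mean(), outputs.std(ddof=1)

    # Toy stand-in for a transport calculation: an output depending on two "cross sections"
    toy_model = lambda p: p[0] / (p[0] + p[1])
    mu, sigma = sampling_uq(toy_model, mean=[2.0, 1.0], cov=[[0.01, 0.0], [0.0, 0.04]])
    ```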

  17. Sampling-based nuclear data uncertainty quantification for continuous energy Monte-Carlo codes

    Zhu, T.

    2015-07-01

    Research on the uncertainty of nuclear data is motivated by practical necessity. Nuclear data uncertainties can propagate through nuclear system simulations into operation and safety related parameters. The tolerance for uncertainties in nuclear reactor design and operation can affect the economic efficiency of nuclear power, and essentially its sustainability. The goal of the present PhD research is to establish a methodology of nuclear data uncertainty quantification (NDUQ) for MCNPX, the continuous-energy Monte-Carlo (M-C) code. The high fidelity (continuous-energy treatment and flexible geometry modelling) of MCNPX makes it the choice for routine criticality safety calculations at PSI/LRS, but also raises challenges for NDUQ by conventional sensitivity/uncertainty (S/U) methods. For example, only recently in 2011, the capability of calculating continuous energy κ_eff sensitivity to nuclear data was demonstrated in certain M-C codes by using the method of iterated fission probability. The methodology developed during this PhD research is fundamentally different from the conventional S/U approach: nuclear data are treated as random variables and sampled in accordance with presumed probability distributions. When sampled nuclear data are used in repeated model calculations, the output variance is attributed to the collective uncertainties of nuclear data. The NUSS (Nuclear data Uncertainty Stochastic Sampling) tool is based on this sampling approach and implemented to work with MCNPX's ACE format of nuclear data, which also gives NUSS compatibility with MCNP and SERPENT M-C codes. In contrast, multigroup uncertainties are used for the sampling of ACE-formatted pointwise-energy nuclear data in a groupwise manner due to the more limited quantity and quality of nuclear data uncertainties. Conveniently, the usage of multigroup nuclear data uncertainties allows consistent comparison between NUSS and other methods (both S/U and sampling-based) that employ the same

  18. Adaptive Functional-Based Neuro-Fuzzy-PID Incremental Controller Structure

    Ashraf Ahmed Fahmy

    2014-03-01

    Full Text Available This paper presents an adaptive functional-based Neuro-fuzzy-PID incremental (NFPID) controller structure that can be tuned either offline or online according to the required controller performance. First, differential membership functions are used to represent the fuzzy membership functions of the input-output space of the three-term controller. Second, controller rules are generated based on the discrete proportional, derivative, and integral functions for the fuzzy space. Finally, a fully differentiable fuzzy neural network is constructed to represent the developed controller for either offline or online controller parameter adaptation. Two different adaptation methods are used for controller tuning: an offline method based on controller transient performance cost-function optimization using the Bees Algorithm, and an online method based on tracking-error minimization using the back-propagation with momentum algorithm. The proposed control system was tested to show the validity of the controller structure over fixed PID controller gains in controlling a SCARA-type robot arm.

  19. Incremental Frequent Subgraph Mining on Large Evolving Graphs

    Abdelhamid, Ehab; Canim, Mustafa; Sadoghi, Mohammad; Bhatta, Bishwaranjan; Chang, Yuan-Chi; Kalnis, Panos

    2017-01-01

    Many applications, such as social networks, utilize large evolving graphs. Mining these graphs using existing techniques is infeasible, due to the high computational cost. In this paper, we propose IncGM+, a fast incremental approach for the continuous frequent subgraph mining problem

  20. Increment and mortality in a virgin Douglas-fir forest.

    Robert W. Steele; Norman P. Worthington

    1955-01-01

    Is there any basis to the forester's rule of thumb that virgin forests eventually reach an equilibrium where increment and mortality approximately balance? Are we wasting potential timber volume by failing to salvage mortality in old-growth stands?

  1. Mission Planning System Increment 5 (MPS Inc 5)

    2016-03-01

    2016 Major Automated Information System Annual Report Mission Planning System Increment 5 (MPS Inc 5) Defense Acquisition Management Information...President’s Budget RDT&E - Research, Development, Test, and Evaluation SAE - Service Acquisition Executive TBD - To Be Determined TY - Then Year...Phone: 845-9625 DSN Fax: Date Assigned: May 19, 2014 Program Information Program Name Mission Planning System Increment 5 (MPS Inc 5) DoD

  2. Shredder: GPU-Accelerated Incremental Storage and Computation

    Bhatotia, Pramod; Rodrigues, Rodrigo; Verma, Akshat

    2012-01-01

    Redundancy elimination using data deduplication and incremental data processing has emerged as an important technique to minimize storage and computation requirements in data center computing. In this paper, we present the design, implementation and evaluation of Shredder, a high performance content-based chunking framework for supporting incremental storage and computation systems. Shredder exploits the massively parallel processing power of GPUs to overcome the CPU bottlenecks of content-ba...

  3. On the instability increments of a stationary pinch

    Bud'ko, A.B.

    1989-01-01

    The stability of a stationary pinch against helical modes is studied numerically. It is shown that when the plasma pressure decreases rapidly toward the pinch boundary, for example for an isothermal diffusion pinch with a Gaussian density distribution, the m=0 instabilities grow most quickly. Instability increments are calculated. A simple analytical expression for the maximum growth increment of the sausage instability for self-similar Gaussian profiles is obtained.

  4. Biometrics Enabling Capability Increment 1 (BEC Inc 1)

    2016-03-01

    modal biometrics submissions to include iris, face, palm and finger prints from biometrics collection devices, which will support the Warfighter in...2016 Major Automated Information System Annual Report Biometrics Enabling Capability Increment 1 (BEC Inc 1) Defense Acquisition Management...Phone: 227-3119 DSN Fax: Date Assigned: July 15, 2015 Program Information Program Name Biometrics Enabling Capability Increment 1 (BEC Inc 1) DoD

  5. The Dark Side of Malleability: Incremental Theory Promotes Immoral Behaviors

    Huang, Niwen; Zuo, Shijiang; Wang, Fang; Cai, Pan; Wang, Fengxiang

    2017-01-01

    Implicit theories drastically affect an individual’s processing of social information, decision making, and action. The present research focuses on whether individuals who hold the implicit belief that people’s moral character is fixed (entity theorists) and individuals who hold the implicit belief that people’s moral character is malleable (incremental theorists) make different choices when facing a moral decision. Incremental theorists are less likely to make the fundamental attribution err...

  6. A Syntactic-Semantic Approach to Incremental Verification

    Bianculli, Domenico; Filieri, Antonio; Ghezzi, Carlo; Mandrioli, Dino

    2013-01-01

    Software verification of evolving systems is challenging mainstream methodologies and tools. Formal verification techniques often conflict with the time constraints imposed by change management practices for evolving systems. Since changes in these systems are often local to restricted parts, an incremental verification approach could be beneficial. This paper introduces SiDECAR, a general framework for the definition of verification procedures, which are made incremental by the framework...

  7. Logistics Modernization Program Increment 2 (LMP Inc 2)

    2016-03-01

    Sections 3 and 4 of the LMP Increment 2 Business Case, ADM), key functional requirements, Critical Design Review (CDR) Reports, and Economic ...from the 2013 version of the LMP Increment 2 Economic Analysis and replace it with references to the Economic Analysis that will be completed...of ( inbound /outbound) IDOCs into the system. LMP must be able to successfully process 95% of ( inbound /outbound) IDOCs into the system. Will meet

  8. Organization Strategy and Structural Differences for Radical Versus Incremental Innovation

    John E. Ettlie; William P. Bridges; Robert D. O'Keefe

    1984-01-01

    The purpose of this study was to test a model of the organizational innovation process that suggests that the strategy-structure causal sequence is differentiated by radical versus incremental innovation. That is, unique strategy and structure will be required for radical innovation, especially process adoption, while more traditional strategy and structure arrangements tend to support new product introduction and incremental process adoption. This differentiated theory is strongly supported ...

  9. MRI: Modular reasoning about interference in incremental programming

    Oliveira, Bruno C. D. S; Schrijvers, Tom; Cook, William R

    2012-01-01

    Incremental Programming (IP) is a programming style in which new program components are defined as increments of other components. Examples of IP mechanisms include: Object-oriented programming (OOP) inheritance, aspect-oriented programming (AOP) advice and feature-oriented programming (FOP). A characteristic of IP mechanisms is that, while individual components can be independently defined, the composition of components makes those components become tightly coupled, sh...

  10. Incremental short daily home hemodialysis: a case series

    Toth-Manikowski, Stephanie M.; Mullangi, Surekha; Hwang, Seungyoung; Shafi, Tariq

    2017-01-01

    Background Patients starting dialysis often have substantial residual kidney function. Incremental hemodialysis provides a hemodialysis prescription that supplements patients' residual kidney function while maintaining total (residual + dialysis) urea clearance (standard Kt/Vurea) targets. We describe our experience with incremental hemodialysis in patients using NxStage System One for home hemodialysis. Case presentation From 2011 to 2015, we initiated 5 incident hemodialysis patients on an ...

  11. Atmospheric response to Saharan dust deduced from ECMWF reanalysis increments

    Kishcha, P.; Alpert, P.; Barkan, J.; Kirchner, I.; Machenhauer, B.

    2003-04-01

    This study focuses on the atmospheric temperature response to dust deduced from a new source of data - the European Reanalysis (ERA) increments. These increments are the systematic errors of global climate models, generated in the reanalysis procedure. The model errors result not only from the lack of desert dust but also from a complex combination of many kinds of model errors. Over the Sahara desert the dust radiative effect is believed to be a predominant model defect which should significantly affect the increments. This dust effect was examined by considering the correlation between the increments and remotely-sensed dust. Comparisons were made between the April temporal variations of the ERA analysis increments and the variations of the Total Ozone Mapping Spectrometer aerosol index (AI) between 1979 and 1993. A distinctive structure was identified in the distribution of correlation, composed of three nested areas with high positive correlation (> 0.5), low correlation, and high negative correlation. Analysis based on data from the European Centre for Medium-Range Weather Forecasts (ECMWF) suggests that the PCA (NCA) corresponds mainly to anticyclonic (cyclonic) flow, negative (positive) vorticity, and downward (upward) airflow. These facts indicate an interaction between dust-forced heating/cooling and atmospheric circulation. The April correlation results are supported by the analysis of the vertical distribution of dust concentration, derived from the 24-hour dust prediction system at Tel Aviv University (website: http://earth.nasa.proj.ac.il/dust/current/). For other months the analysis is more complicated because of the substantial increase in humidity with the northward progress of the ITCZ and its significant impact on the increments.

  12. Entity versus incremental theories predict older adults' memory performance.

    Plaks, Jason E; Chasteen, Alison L

    2013-12-01

    The authors examined whether older adults' implicit theories regarding the modifiability of memory in particular (Studies 1 and 3) and abilities in general (Study 2) would predict memory performance. In Study 1, individual differences in older adults' endorsement of the "entity theory" (a belief that one's ability is fixed) or "incremental theory" (a belief that one's ability is malleable) of memory were measured using a version of the Implicit Theories Measure (Dweck, 1999). Memory performance was assessed with a free-recall task. Results indicated that the higher the endorsement of the incremental theory, the better the free recall. In Study 2, older and younger adults' theories were measured using a more general version of the Implicit Theories Measure that focused on the modifiability of abilities in general. Again, for older adults, the higher the incremental endorsement, the better the free recall. Moreover, as predicted, implicit theories did not predict younger adults' memory performance. In Study 3, participants read mock news articles reporting evidence in favor of either the entity or incremental theory. Those in the incremental condition outperformed those in the entity condition on reading span and free-recall tasks. These effects were mediated by pretask worry such that, for those in the entity condition, higher worry was associated with lower performance. Taken together, these studies suggest that variation in entity versus incremental endorsement represents a key predictor of older adults' memory performance. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  13. Incremental short daily home hemodialysis: a case series.

    Toth-Manikowski, Stephanie M; Mullangi, Surekha; Hwang, Seungyoung; Shafi, Tariq

    2017-07-05

    Patients starting dialysis often have substantial residual kidney function. Incremental hemodialysis provides a hemodialysis prescription that supplements patients' residual kidney function while maintaining total (residual + dialysis) urea clearance (standard Kt/Vurea) targets. We describe our experience with incremental hemodialysis in patients using NxStage System One for home hemodialysis. From 2011 to 2015, we initiated 5 incident hemodialysis patients on an incremental home hemodialysis regimen. The biochemical parameters of all patients remained stable on the incremental hemodialysis regimen and they consistently achieved standard Kt/Vurea targets. Of the two patients with follow-up >6 months, residual kidney function was preserved for ≥2 years. Importantly, the patients were able to transition to home hemodialysis without automatically requiring 5 sessions per week at the outset and gradually increased the number of treatments and/or dialysate volume as the residual kidney function declined. An incremental home hemodialysis regimen can be safely prescribed and may improve acceptability of home hemodialysis. Reducing hemodialysis frequency by even one treatment per week can reduce the number of fistula or graft cannulations or catheter connections by >100 per year, an important consideration for patient well-being, access longevity, and access-related infections. The incremental hemodialysis approach, supported by national guidelines, can be considered for all home hemodialysis patients with residual kidney function.

  14. Maximum photovoltaic power tracking for the PV array using the fractional-order incremental conductance method

    Lin, Chia-Hung; Huang, Cong-Hui; Du, Yi-Chun; Chen, Jian-Liung

    2011-01-01

    Highlights: → The FOICM achieves a shorter tracking time than traditional methods. → The proposed method can work under low solar radiation, including thin and heavy clouds. → The FOICM algorithm can achieve MPPT under radiation and temperature changes. → It is easy to implement in a single-chip microcontroller or embedded system. -- Abstract: This paper proposes maximum photovoltaic power tracking (MPPT) for the photovoltaic (PV) array using the fractional-order incremental conductance method (FOICM). The PV array has low conversion efficiency, and its output power depends on the operating environment, such as solar radiation, ambient temperature, and weather conditions. The maximum charging power delivered to a battery can be increased by using an MPPT algorithm. The energy of the absorbed solar light and the cell temperature is transferred directly to the semiconductor, but electrical conduction exhibits anomalous diffusion phenomena in inhomogeneous material. FOICM provides a dynamic mathematical model to describe these non-linear characteristics. The fractional-order incremental change is used as a dynamic variable to adjust the PV array voltage toward the maximum power point. For a small-scale PV conversion system, the proposed method is validated by simulation under different operating environments. Compared with traditional methods, experimental results demonstrate the short tracking time and the practicality of MPPT for the PV array.

  15. SAIL: Summation-bAsed Incremental Learning for Information-Theoretic Text Clustering.

    Cao, Jie; Wu, Zhiang; Wu, Junjie; Xiong, Hui

    2013-04-01

    Information-theoretic clustering aims to exploit information-theoretic measures as the clustering criteria. A common practice on this topic is the so-called Info-Kmeans, which performs K-means clustering with KL-divergence as the proximity function. While expert efforts on Info-Kmeans have shown promising results, a remaining challenge is to deal with high-dimensional sparse data such as text corpora. Indeed, it is possible that the centroids contain many zero-value features for high-dimensional text vectors, which leads to infinite KL-divergence values and creates a dilemma in assigning objects to centroids during the iteration process of Info-Kmeans. To meet this challenge, in this paper, we propose a Summation-bAsed Incremental Learning (SAIL) algorithm for Info-Kmeans clustering. Specifically, by using an equivalent objective function, SAIL replaces the computation of KL-divergence by the incremental computation of Shannon entropy. This can avoid the zero-feature dilemma caused by the use of KL-divergence. To improve the clustering quality, we further introduce the variable neighborhood search scheme and propose the V-SAIL algorithm, which is then accelerated by a multithreaded scheme in PV-SAIL. Our experimental results on various real-world text collections have shown that, with SAIL as a booster, the clustering performance of Info-Kmeans can be significantly improved. Also, V-SAIL and PV-SAIL indeed help improve the clustering quality at a lower cost of computation.
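
    The incremental-entropy bookkeeping that motivates this line of work can be sketched as follows: when a sparse document is merged into a cluster, only the terms present in the document change the cluster's entropy, so the running sum Σ_j c_j log c_j can be updated in place instead of recomputed from scratch. This is a schematic illustration under that assumption, not the published SAIL objective or code.

    ```python
    import math

    def add_doc_update_entropy(cluster_counts, S, T, doc):
        """Incrementally update the Shannon entropy of a cluster's term-count vector.

        cluster_counts: dict term -> count for the cluster (modified in place)
        S: running value of sum_j c_j * log(c_j); T: total count sum_j c_j
        doc: dict term -> count for the incoming document (sparse)
        Returns (new_S, new_T, new_entropy), using H = log T - S / T.
        """
        for term, d in doc.items():
            c = cluster_counts.get(term, 0)
            if c > 0:
                S -= c * math.log(c)       # remove the old contribution of this term
            c += d
            cluster_counts[term] = c
            S += c * math.log(c)           # add the updated contribution
            T += d
        entropy = math.log(T) - S / T      # entropy of the normalized count vector
        return S, T, entropy
    ```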

  16. An approach to robot SLAM based on incremental appearance learning with omnidirectional vision

    Wu, Hua; Qin, Shi-Yin

    2011-03-01

    Localisation and mapping with an omnidirectional camera becomes more difficult as the landmark appearances change dramatically in the omnidirectional image. With conventional techniques, it is difficult to match the features of the landmark with the template. We present a novel robot simultaneous localisation and mapping (SLAM) algorithm with an omnidirectional camera, which uses incremental landmark appearance learning to provide posterior probability distribution for estimating the robot pose under a particle filtering framework. The major contribution of our work is to represent the posterior estimation of the robot pose by incremental probabilistic principal component analysis, which can be naturally incorporated into the particle filtering algorithm for robot SLAM. Moreover, the innovative method of this article allows the adoption of the severe distorted landmark appearances viewed with omnidirectional camera for robot SLAM. The experimental results demonstrate that the localisation error is less than 1 cm in an indoor environment using five landmarks, and the location of the landmark appearances can be estimated within 5 pixels deviation from the ground truth in the omnidirectional image at a fairly fast speed.

  17. Incremental Ontology-Based Extraction and Alignment in Semi-structured Documents

    Thiam, Mouhamadou; Bennacer, Nacéra; Pernelle, Nathalie; Lô, Moussa

    SHIRIis an ontology-based system for integration of semi-structured documents related to a specific domain. The system’s purpose is to allow users to access to relevant parts of documents as answers to their queries. SHIRI uses RDF/OWL for representation of resources and SPARQL for their querying. It relies on an automatic, unsupervised and ontology-driven approach for extraction, alignment and semantic annotation of tagged elements of documents. In this paper, we focus on the Extract-Align algorithm which exploits a set of named entity and term patterns to extract term candidates to be aligned with the ontology. It proceeds in an incremental manner in order to populate the ontology with terms describing instances of the domain and to reduce the access to extern resources such as Web. We experiment it on a HTML corpus related to call for papers in computer science and the results that we obtain are very promising. These results show how the incremental behaviour of Extract-Align algorithm enriches the ontology and the number of terms (or named entities) aligned directly with the ontology increases.

  18. Shape measurement system for single point incremental forming (SPIF) manufacts by using trinocular vision and random pattern

    Setti, Francesco; Bini, Ruggero; Lunardelli, Massimo; Bosetti, Paolo; Bruschi, Stefania; De Cecco, Mariolino

    2012-01-01

    Many contemporary works show the interest of the scientific community in measuring the shape of artefacts made by single point incremental forming. In this paper, we will present an algorithm able to detect feature points with a random pattern, check the compatibility of associations exploiting multi-stereo constraints and reject outliers and perform a 3D reconstruction by dense random patterns. The algorithm is suitable for a real-time application, in fact it needs just three images and a synchronous relatively fast processing. The proposed method has been tested on a simple geometry and results have been compared with a coordinate measurement machine acquisition. (paper)

  19. Block Least Mean Squares Algorithm over Distributed Wireless Sensor Network

    T. Panigrahi

    2012-01-01

    Full Text Available In a distributed parameter estimation problem, during each sampling instant, a typical sensor node communicates its estimate either by the diffusion algorithm or by the incremental algorithm. Both these conventional distributed algorithms involve significant communication overheads and, consequently, defeat the basic purpose of wireless sensor networks. In the present paper, we therefore propose two new distributed algorithms, namely, block diffusion least mean square (BDLMS) and block incremental least mean square (BILMS), by extending the concept of block adaptive filtering techniques to the distributed adaptation scenario. The performance analysis of the proposed BDLMS and BILMS algorithms has been carried out and found to have similar performances to those offered by conventional diffusion LMS and incremental LMS algorithms, respectively. The convergence analyses of the proposed algorithms obtained from the simulation study are also found to be in agreement with the theoretical analysis. The remarkable and interesting aspect of the proposed block-based algorithms is that their communication overheads per node and latencies are less than those of the conventional algorithms by a factor as high as the block size used in the algorithms.
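
    For orientation, a single-node block LMS update (the building block that BDLMS and BILMS distribute across sensor nodes) accumulates the gradient over a block of samples and applies one weight update per block. The sketch below is a conventional, non-distributed version with an illustrative block size and step size.

    ```python
    import numpy as np

    def block_lms(x, d, num_taps=4, block=8, mu=0.01):
        """Block least-mean-squares adaptive filter (single node, for reference).

        x: input signal, d: desired signal. The weight vector is updated once per
        block of samples using the accumulated gradient, rather than per sample.
        """
        w = np.zeros(num_taps)
        for start in range(num_taps, len(x) - block, block):
            grad = np.zeros(num_taps)
            for k in range(start, start + block):
                u = x[k - num_taps:k][::-1]        # most recent num_taps input samples
                e = d[k] - w @ u                   # a-priori estimation error
                grad += e * u
            w += (mu / block) * grad               # one update per block
        return w
    ```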

  20. Algorithms For Integrating Nonlinear Differential Equations

    Freed, A. D.; Walker, K. P.

    1994-01-01

    Improved algorithms were developed for use in the numerical integration of systems of nonhomogeneous, nonlinear, first-order, ordinary differential equations. In comparison with conventional integration algorithms, these algorithms offer greater stability and accuracy. Several are asymptotically correct, thereby enabling retention of stability and accuracy when large increments of the independent variable are used. The attainable accuracies are demonstrated by applying the algorithms to systems of nonlinear, first-order differential equations that arise in the study of viscoplastic behavior, the spread of the acquired immune-deficiency syndrome (AIDS) virus, and predator/prey populations.

  1. An incremental DPMM-based method for trajectory clustering, modeling, and retrieval.

    Hu, Weiming; Li, Xi; Tian, Guodong; Maybank, Stephen; Zhang, Zhongfei

    2013-05-01

    Trajectory analysis is the basis for many applications, such as indexing of motion events in videos, activity recognition, and surveillance. In this paper, the Dirichlet process mixture model (DPMM) is applied to trajectory clustering, modeling, and retrieval. We propose an incremental version of a DPMM-based clustering algorithm and apply it to cluster trajectories. An appropriate number of trajectory clusters is determined automatically. When trajectories belonging to new clusters arrive, the new clusters can be identified online and added to the model without any retraining using the previous data. A time-sensitive Dirichlet process mixture model (tDPMM) is applied to each trajectory cluster for learning the trajectory pattern which represents the time-series characteristics of the trajectories in the cluster. Then, a parameterized index is constructed for each cluster. A novel likelihood estimation algorithm for the tDPMM is proposed, and a trajectory-based video retrieval model is developed. The tDPMM-based probabilistic matching method and the DPMM-based model growing method are combined to make the retrieval model scalable and adaptable. Experimental comparisons with state-of-the-art algorithms demonstrate the effectiveness of our algorithm.

  2. Three routes forward for biofuels: Incremental, leapfrog, and transitional

    Morrison, Geoff M.; Witcover, Julie; Parker, Nathan C.; Fulton, Lew

    2016-01-01

    This paper examines three technology routes for lowering the carbon intensity of biofuels: (1) a leapfrog route that focuses on major technological breakthroughs in lignocellulosic pathways at new, stand-alone biorefineries; (2) an incremental route in which improvements are made to existing U.S. corn ethanol and soybean biodiesel biorefineries; and (3) a transitional route in which biotechnology firms gain experience growing, handling, or chemically converting lignocellulosic biomass in a lower-risk fashion than leapfrog biorefineries by leveraging existing capital stock. We find the incremental route is likely to involve the largest production volumes and greenhouse gas benefits until at least the mid-2020s, but transitional and leapfrog biofuels together have far greater long-term potential. We estimate that the Renewable Fuel Standard, California's Low Carbon Fuel Standard, and federal tax credits provided an incentive of roughly $1.5–2.5 per gallon of leapfrog biofuel between 2012 and 2015, but that regulatory elements in these policies mostly incentivize lower-risk incremental investments. Adjustments in policy may be necessary to bring a greater focus on transitional technologies that provide targeted learning and cost reduction opportunities for leapfrog biofuels. - Highlights: • Three technological pathways are compared that lower carbon intensity of biofuels. • Incremental changes lead to faster greenhouse gas reductions. • Leapfrog changes lead to greatest long-term potential. • Two main biofuel policies (RFS and LCFS) are largely incremental in nature. • Transitional biofuels offer medium-risk, medium reward pathway.

  3. Research on digital PID control algorithm for HPCT

    Zeng Yi; Li Rui; Shen Tianjian; Ke Xinhua

    2009-01-01

    Digital PID control applied in a high-precision HPCT (high-precision current transducer) based on the Digital Signal Processor (DSP) TMS320F2812 and a dedicated D/A converter is researched. By using an incremental PID control algorithm, the stability and precision of the HPCT output voltage are improved. Based on a detailed analysis of incremental digital PID, a scheme model of the HPCT is proposed, and a feasibility simulation using Matlab is given. A practical hardware circuit verified that the incremental PID achieves closed-loop control in tracking the HPCT output voltage. (authors)
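
    The incremental (velocity-form) digital PID law referred to above is standard and easy to state in code; the gains and sample time in this sketch are placeholders rather than values from the paper.

    ```python
    def incremental_pid(kp, ki, kd, ts):
        """Velocity-form (incremental) digital PID: returns a stateful controller.

        Each call computes the control-output increment
            du[k] = kp*(e[k]-e[k-1]) + ki*ts*e[k] + kd*(e[k]-2*e[k-1]+e[k-2])/ts
        and accumulates it into the control output u, as in a voltage-tracking loop.
        """
        e1 = e2 = u = 0.0
        def step(error):
            nonlocal e1, e2, u
            du = kp * (error - e1) + ki * ts * error + kd * (error - 2 * e1 + e2) / ts
            u += du                         # accumulate the increment
            e2, e1 = e1, error              # shift the error history
            return u
        return step

    # Usage (illustrative gains): controller = incremental_pid(1.2, 0.5, 0.05, ts=0.001)
    ```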

  4. Making context explicit for explanation and incremental knowledge acquisition

    Brezillon, P. [Univ. Paris (France)]

    1996-12-31

    Intelligent systems may be improved by making context explicit in problem solving. This is a lesson drawn from a study of the reasons why a number of knowledge-based systems (KBSs) failed. We discuss the value of making context explicit in explanation generation and incremental knowledge acquisition, two important aspects of intelligent systems that aim to cooperate with users. We show how context can be used to better explain and incrementally acquire knowledge. The advantages of using context in explanation and incremental knowledge acquisition are discussed through SEPIT, an expert system for supporting diagnosis and explanation through simulation of power plants. We point out how the limitations of such systems may be overcome by making context explicit.

  5. Martingales, nonstationary increments, and the efficient market hypothesis

    McCauley, Joseph L.; Bassler, Kevin E.; Gunaratne, Gemunu H.

    2008-06-01

    We discuss the deep connection between nonstationary increments, martingales, and the efficient market hypothesis for stochastic processes x(t) with arbitrary diffusion coefficients D(x,t). We explain why a test for a martingale is generally a test for uncorrelated increments. We explain why martingales look Markovian at the level of both simple averages and 2-point correlations. But while a Markovian market has no memory to exploit and cannot be beaten systematically, a martingale admits memory that might be exploitable in higher order correlations. We also use the analysis of this paper to correct a misstatement of the ‘fair game’ condition in terms of serial correlations in Fama’s paper on the EMH. We emphasize that the use of the log increment as a variable in data analysis generates spurious fat tails and spurious Hurst exponents.
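
    As the record notes, a practical martingale test is essentially a test for uncorrelated increments. A rough empirical check along those lines (not the authors' analysis) is to estimate the lag-k autocorrelation of the level increments of a series:

    ```python
    import numpy as np

    def increment_autocorrelation(x, max_lag=5):
        """Sample autocorrelation of the increments of a series x(t).

        Under the martingale (uncorrelated-increments) hypothesis these values
        should be statistically indistinguishable from zero for all lags >= 1.
        Level increments x(t+dt) - x(t) are used rather than log increments,
        following the record's caveat about spurious fat tails.
        """
        dx = np.diff(np.asarray(x, dtype=float))
        dx = dx - dx.mean()
        denom = (dx * dx).sum()
        return [float((dx[:-k] * dx[k:]).sum() / denom) for k in range(1, max_lag + 1)]
    ```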

  6. Single point incremental forming: Formability of PC sheets

    Formisano, A.; Boccarusso, L.; Carrino, L.; Lambiase, F.; Minutolo, F. Memola Capece

    2018-05-01

    Recent research on Single Point Incremental Forming of polymers has only begun to explore the possibility of expanding the material capability window of this flexible forming process beyond metals, by demonstrating the workability of thermoplastic polymers at room temperature. Given the different behaviour of polymers compared to metals, several aspects need to be investigated in more depth to better understand how these materials behave when incrementally formed. Thus, the aim of this work is to investigate the formability of incrementally formed thin polycarbonate sheets. To this end, an experimental investigation at room temperature was conducted involving formability tests; varying-wall-angle cone and pyramid frusta were manufactured from polycarbonate sheets of different thicknesses using tools of different diameters, in order to draw conclusions on the formability of polymer sheets through the evaluation of the forming angles and the observation of the failure mechanisms.

  7. The global Minmax k-means algorithm.

    Wang, Xiaoyan; Bai, Yanping

    2016-01-01

    The global k-means algorithm is an incremental approach to clustering that dynamically adds one cluster center at a time through a deterministic global search procedure from suitable initial positions, and employs k-means to minimize the sum of the intra-cluster variances. However, the global k-means algorithm sometimes produces singleton clusters, and the initial positions are sometimes poor; after a bad initialization, the k-means algorithm can easily converge to a poor local optimum. In this paper, we first modify the global k-means algorithm to eliminate singleton clusters, and then apply the MinMax k-means clustering error method to the global k-means algorithm to overcome the effect of bad initialization, yielding the proposed global Minmax k-means algorithm. The proposed clustering method is tested on several popular data sets and compared with the k-means algorithm, the global k-means algorithm, and the MinMax k-means algorithm. The experimental results show that our proposed algorithm outperforms the other algorithms considered in the paper.
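
    For context, the baseline global k-means procedure that the proposed variant modifies can be sketched as follows; this version includes neither the singleton-elimination step nor the MinMax weighting described in the record, and uses scikit-learn's k-means for the local refinement.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def global_kmeans(X, k_max):
        """Plain global k-means: grow the solution one cluster centre at a time.

        For each k, every data point is tried as the initial position of the new
        centre (the deterministic global search), and k-means refines the result.
        """
        centers = X.mean(axis=0, keepdims=True)          # the exact 1-means solution
        for k in range(2, k_max + 1):
            best_inertia, best_centers = np.inf, None
            for x in X:                                  # candidate position for the new centre
                init = np.vstack([centers, x])
                km = KMeans(n_clusters=k, init=init, n_init=1).fit(X)
                if km.inertia_ < best_inertia:
                    best_inertia, best_centers = km.inertia_, km.cluster_centers_
            centers = best_centers
        return centers
    ```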

  8. Motion-Induced Blindness Using Increments and Decrements of Luminance

    Stine Wm Wren

    2017-10-01

    Full Text Available Motion-induced blindness describes the disappearance of stationary elements of a scene when other, perhaps non-overlapping, elements of the scene are in motion. We measured the effects of increment (200.0 cd/m²) and decrement (15.0 cd/m²) targets and masks presented on a grey background (108.0 cd/m²), tapping into putative ON- and OFF-channels, on the rate of target disappearance psychophysically. We presented two-frame motion, which has coherent motion energy, and dynamic Glass patterns and dynamic anti-Glass patterns, which do not have coherent motion energy. Using the method of constant stimuli, participants viewed stimuli of varying durations (3.1 s, 4.6 s, 7.0 s, 11 s, or 16 s) in a given trial and then indicated whether or not the targets vanished during that trial. Psychometric function midpoints were used to define absolute threshold mask duration for the disappearance of the target. 95% confidence intervals for threshold disappearance times were estimated using a bootstrap technique for each of the participants across two experiments. Decrement masks were more effective than increment masks with increment targets. Increment targets were easier to mask than decrement targets. Distinct mask pattern types had no effect, suggesting that perceived coherence contributes to the effectiveness of the mask. The ON/OFF dichotomy clearly carries its influence to the level of perceived motion coherence. Further, the asymmetry in the effects of increment and decrement masks on increment and decrement targets might lead one to speculate that they reflect the ‘importance’ of detecting decrements in the environment.

  9. The Dark Side of Malleability: Incremental Theory Promotes Immoral Behaviors.

    Huang, Niwen; Zuo, Shijiang; Wang, Fang; Cai, Pan; Wang, Fengxiang

    2017-01-01

    Implicit theories drastically affect an individual's processing of social information, decision making, and action. The present research focuses on whether individuals who hold the implicit belief that people's moral character is fixed (entity theorists) and individuals who hold the implicit belief that people's moral character is malleable (incremental theorists) make different choices when facing a moral decision. Incremental theorists are less likely to make the fundamental attribution error (FAE), rarely make moral judgment based on traits and show more tolerance to immorality, relative to entity theorists, which might decrease the possibility of undermining the self-image when they engage in immoral behaviors, and thus we posit that incremental beliefs facilitate immorality. Four studies were conducted to explore the effect of these two types of implicit theories on immoral intention or practice. The association between implicit theories and immoral behavior was preliminarily examined from the observer perspective in Study 1, and the results showed that people tended to associate immoral behaviors (including everyday immoral intention and environmental destruction) with an incremental theorist rather than an entity theorist. Then, the relationship was further replicated from the actor perspective in Studies 2-4. In Study 2, implicit theories, which were measured, positively predicted the degree of discrimination against carriers of the hepatitis B virus. In Study 3, implicit theories were primed through reading articles, and the participants in the incremental condition showed more cheating than those in the entity condition. In Study 4, implicit theories were primed through a new manipulation, and the participants in the unstable condition (primed incremental theory) showed more discrimination than those in the other three conditions. Taken together, the results of our four studies were consistent with our hypotheses.

  10. The Dark Side of Malleability: Incremental Theory Promotes Immoral Behaviors

    Niwen Huang

    2017-08-01

    Full Text Available Implicit theories drastically affect an individual’s processing of social information, decision making, and action. The present research focuses on whether individuals who hold the implicit belief that people’s moral character is fixed (entity theorists) and individuals who hold the implicit belief that people’s moral character is malleable (incremental theorists) make different choices when facing a moral decision. Incremental theorists are less likely to make the fundamental attribution error (FAE), rarely make moral judgment based on traits and show more tolerance to immorality, relative to entity theorists, which might decrease the possibility of undermining the self-image when they engage in immoral behaviors, and thus we posit that incremental beliefs facilitate immorality. Four studies were conducted to explore the effect of these two types of implicit theories on immoral intention or practice. The association between implicit theories and immoral behavior was preliminarily examined from the observer perspective in Study 1, and the results showed that people tended to associate immoral behaviors (including everyday immoral intention and environmental destruction) with an incremental theorist rather than an entity theorist. Then, the relationship was further replicated from the actor perspective in Studies 2–4. In Study 2, implicit theories, which were measured, positively predicted the degree of discrimination against carriers of the hepatitis B virus. In Study 3, implicit theories were primed through reading articles, and the participants in the incremental condition showed more cheating than those in the entity condition. In Study 4, implicit theories were primed through a new manipulation, and the participants in the unstable condition (primed incremental theory) showed more discrimination than those in the other three conditions. Taken together, the results of our four studies were consistent with our hypotheses.

  11. Online Planning Algorithm

    Rabideau, Gregg R.; Chien, Steve A.

    2010-01-01

    AVA v2 software selects goals for execution from a set of goals that oversubscribe shared resources. The term goal refers to a science or engineering request to execute a possibly complex command sequence, such as image targets or ground-station downlinks. Developed as an extension to the Virtual Machine Language (VML) execution system, the software enables onboard and remote goal triggering through the use of an embedded, dynamic goal set that can oversubscribe resources. From the set of conflicting goals, a subset must be chosen that maximizes a given quality metric, which in this case is strict priority selection. A goal can never be pre-empted by a lower-priority goal; high-level goals can be added, removed, or updated at any time, and the "best" goals will be selected for execution. The software addresses the issue of re-planning that must be performed in a short time frame by the embedded system, where computational resources are constrained. In particular, the algorithm addresses problems with well-defined goal requests without temporal flexibility that oversubscribe available resources. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion, allowing requests to be changed or added at the last minute, thereby enabling shorter response times and greater autonomy for the system under control.
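
    A toy version of strict-priority goal selection over a single oversubscribed resource conveys the selection rule described above; the goal and resource structures here are invented for illustration and are not the VML/AVA data model.

    ```python
    def select_goals(goals, capacity):
        """Greedy strict-priority selection of goals sharing one resource.

        goals: list of (priority, cost, name); a lower priority number is more important.
        A goal is admitted only if it fits in the remaining capacity, so a
        high-priority goal is never displaced by a lower-priority one.
        """
        selected, remaining = [], capacity
        for priority, cost, name in sorted(goals):
            if cost <= remaining:
                selected.append(name)
                remaining -= cost
        return selected

    # Example: a downlink window of 100 units, oversubscribed by requests
    goals = [(1, 60, "image_target_A"), (2, 50, "downlink_B"), (3, 30, "calibration_C")]
    print(select_goals(goals, capacity=100))   # ['image_target_A', 'calibration_C']
    ```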

  12. Fatigue evaluation algorithms: Review

    Passipoularidis, V.A.; Broendsted, P.

    2009-11-15

    A progressive damage fatigue simulator for variable amplitude loads named FADAS is discussed in this work. FADAS (Fatigue Damage Simulator) performs ply-by-ply stress analysis using classical lamination theory and implements adequate stiffness discount tactics based on the failure criterion of Puck, to model the degradation caused by failure events at ply level. Residual strength is incorporated as the fatigue damage accumulation metric. Once the typical fatigue and static properties of the constitutive ply are determined, the performance of an arbitrary lay-up under uniaxial and/or multiaxial load time series can be simulated. The predictions are validated against fatigue life data both from repeated block tests at a single stress ratio and from spectral fatigue using the WISPER, WISPERX and NEW WISPER load sequences on a Glass/Epoxy multidirectional laminate typical of a wind turbine rotor blade construction. Two versions of the algorithm, one using single-step and the other incremental application of each load cycle (in case of ply failure), are implemented and compared. Simulation results confirm the ability of the algorithm to take into account load sequence effects. In general, FADAS performs well in predicting life under both spectral and block loading fatigue. (author)

  13. Apparatus for electrical-assisted incremental forming and process thereof

    Roth, John; Cao, Jian

    2018-04-24

    A process and apparatus for forming a sheet metal component using an electric current passing through the component. The process can include providing an incremental forming machine, the machine having at least one arcuate tipped tool and at least one electrode spaced a predetermined distance from the arcuate tipped tool. The machine is operable to perform a plurality of incremental deformations on the sheet metal component using the arcuate tipped tool. The machine is also operable to apply an electric direct current through the electrode into the sheet metal component at the predetermined distance from the arcuate tipped tool while the machine is forming the sheet metal component.

  14. Single-point incremental forming and formability-failure diagrams

    Silva, M.B.; Skjødt, Martin; Atkins, A.G.

    2008-01-01

    In a recent work [1], the authors constructed a closed-form analytical model that is capable of dealing with the fundamentals of single point incremental forming and explaining the experimental and numerical results published in the literature over the past couple of years. The model is based...... of deformation that are commonly found in general single point incremental forming processes; and (ii) to investigate the formability limits of SPIF in terms of ductile damage mechanics and the question of whether necking does, or does not, precede fracture. Experimentation by the authors together with data...

  15. Short-term load forecasting with increment regression tree

    Yang, Jingfei; Stenzel, Juergen [Darmstadt University of Techonology, Darmstadt 64283 (Germany)

    2006-06-15

    This paper presents a new regression tree method for short-term load forecasting. Both increment and non-increment trees are built according to the historical data to provide the data space partition and input variable selection. A support vector machine is applied to the samples at the regression tree nodes for further fine regression. Results from the different tree nodes are integrated through a weighted average method to obtain the comprehensive forecasting result. The effectiveness of the proposed method is demonstrated through its application to an actual system. (author)
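
    A minimal sketch of the tree-plus-SVM idea, under the assumption that a decision tree partitions the input space, a separate support vector regressor refines each leaf, and the two estimates are blended with fixed weights; the paper's increment tree construction and its weighting scheme are not reproduced.

```python
# Illustrative only: tree partition + per-leaf SVR + weighted-average forecast.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(0, 24, size=(500, 1))                                   # e.g. hour of day
y = 100 + 20 * np.sin(X[:, 0] / 24 * 2 * np.pi) + rng.normal(0, 2, 500)  # synthetic load

tree = DecisionTreeRegressor(max_leaf_nodes=8).fit(X, y)
leaf_ids = tree.apply(X)
svr_per_leaf = {leaf: SVR(C=10.0).fit(X[leaf_ids == leaf], y[leaf_ids == leaf])
                for leaf in np.unique(leaf_ids)}

def forecast(x, w_tree=0.5, w_svr=0.5):
    x = np.atleast_2d(x)
    leaf = tree.apply(x)[0]
    # Weighted average of the coarse tree estimate and the fine SVR estimate.
    return w_tree * tree.predict(x)[0] + w_svr * svr_per_leaf[leaf].predict(x)[0]

print(forecast([6.0]))
```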

  16. Collective estimation of multiple bivariate density functions with application to angular-sampling-based protein loop modeling

    Maadooliat, Mehdi

    2015-10-21

    This paper develops a method for simultaneous estimation of density functions for a collection of populations of protein backbone angle pairs using a data-driven, shared basis that is constructed by bivariate spline functions defined on a triangulation of the bivariate domain. The circular nature of angular data is taken into account by imposing appropriate smoothness constraints across boundaries of the triangles. Maximum penalized likelihood is used to fit the model and an alternating blockwise Newton-type algorithm is developed for computation. A simulation study shows that the collective estimation approach is statistically more efficient than estimating the densities individually. The proposed method was used to estimate neighbor-dependent distributions of protein backbone dihedral angles (i.e., Ramachandran distributions). The estimated distributions were applied to protein loop modeling, one of the most challenging open problems in protein structure prediction, by feeding them into an angular-sampling-based loop structure prediction framework. Our estimated distributions compared favorably to the Ramachandran distributions estimated by fitting a hierarchical Dirichlet process model; and in particular, our distributions showed significant improvements on the hard cases where existing methods do not work well.

  17. Collective estimation of multiple bivariate density functions with application to angular-sampling-based protein loop modeling

    Maadooliat, Mehdi; Zhou, Lan; Najibi, Seyed Morteza; Gao, Xin; Huang, Jianhua Z.

    2015-01-01

    This paper develops a method for simultaneous estimation of density functions for a collection of populations of protein backbone angle pairs using a data-driven, shared basis that is constructed by bivariate spline functions defined on a triangulation of the bivariate domain. The circular nature of angular data is taken into account by imposing appropriate smoothness constraints across boundaries of the triangles. Maximum penalized likelihood is used to fit the model and an alternating blockwise Newton-type algorithm is developed for computation. A simulation study shows that the collective estimation approach is statistically more efficient than estimating the densities individually. The proposed method was used to estimate neighbor-dependent distributions of protein backbone dihedral angles (i.e., Ramachandran distributions). The estimated distributions were applied to protein loop modeling, one of the most challenging open problems in protein structure prediction, by feeding them into an angular-sampling-based loop structure prediction framework. Our estimated distributions compared favorably to the Ramachandran distributions estimated by fitting a hierarchical Dirichlet process model; and in particular, our distributions showed significant improvements on the hard cases where existing methods do not work well.

  18. Stochastic simulation algorithms and analysis

    Asmussen, Soren

    2007-01-01

    Sampling-based computational methods have become a fundamental part of the numerical toolset of practitioners and researchers across an enormous number of different applied domains and academic disciplines. This book provides a broad treatment of such sampling-based methods, as well as accompanying mathematical analysis of the convergence properties of the methods discussed. The reach of the ideas is illustrated by discussing a wide range of applications and the models that have found wide usage. The first half of the book focuses on general methods, whereas the second half discusses model-specific algorithms. Given the wide range of examples, exercises and applications, students, practitioners and researchers in probability, statistics, operations research, economics, finance, engineering as well as biology, chemistry and physics will find the book of value.

  19. Toward translational incremental similarity-based reasoning in breast cancer grading

    Tutac, Adina E.; Racoceanu, Daniel; Leow, Wee-Keng; Müller, Henning; Putti, Thomas; Cretu, Vladimir

    2009-02-01

    One of the fundamental issues in bridging the gap between the proliferation of Content-Based Image Retrieval (CBIR) systems in the scientific literature and the deficiency of their usage in the medical community is the characteristic of CBIR of accessing information by images and/or text only. Yet, the way physicians reason about patients leads intuitively to a case representation. Hence, a proper solution to overcome this gap is to consider a CBIR approach inspired by Case-Based Reasoning (CBR), which naturally introduces medical knowledge structured by cases. Moreover, in a CBR system, the knowledge is incrementally added and learned. The purpose of this study is to initiate a translational solution from CBIR algorithms to clinical practice, using a CBIR/CBR hybrid approach. Therefore, we advance the idea of a translational incremental similarity-based reasoning (TISBR), using combined CBIR and CBR characteristics: incremental learning of medical knowledge, medical case-based structure of the knowledge (CBR), image usage to retrieve similar cases (CBIR), and the similarity concept (central for both paradigms). For this purpose, three major axes are explored: the indexing, the case retrieval and the search refinement, applied to Breast Cancer Grading (BCG), a powerful breast cancer prognosis exam. The effectiveness of this strategy is currently evaluated for the indexing, over cases provided by the Pathology Department of Singapore National University Hospital. With its current accuracy, TISBR opens interesting perspectives for complex reasoning in future medical research, paving the way to better knowledge traceability and a better acceptance rate of computer-aided diagnosis assistance among practitioners.

  20. Volatilities, Traded Volumes, and Price Increments in Derivative Securities

    Kim, Kyungsik; Lim, Gyuchang; Kim, Soo Yong; Scalas, Enrico

    2007-03-01

    We apply the detrended fluctuation analysis (DFA) to the statistics of Korean treasury bond (KTB) futures, from which the logarithmic increments, volatilities, and traded volumes are estimated over a specific time lag. In our case, the logarithmic increment of futures prices has no long-memory property, while the volatility and the traded volume exhibit long-memory properties. To determine whether the volatility clustering is due to an inherent higher-order correlation not detected by applying the DFA directly to the logarithmic increments of the KTB futures, we shuffle the original tick data of futures prices and generate a geometric Brownian random walk with the same mean and standard deviation. Comparison of the three tick data sets shows that the higher-order correlation inherent in the logarithmic increments produces the volatility clustering. In particular, the results of the DFA on volatilities and traded volumes support the hypothesis of price changes.

  1. Playing by the rules? Analysing incremental urban developments

    Karnenbeek, van Lilian; Janssen-Jansen, Leonie

    2018-01-01

    Current urban developments are often considered outdated and static, and the argument follows that they should become more adaptive. In this paper, we argue that existing urban developments are already adaptive and incremental. Given this flexibility in urban development, understanding changes in the

  2. Size, Stability and Incremental Budgeting Outcomes in Public Universities.

    Schick, Allen G.; Hills, Frederick S.

    1982-01-01

    Examined the influence of relative size in the analysis of total dollar and workforce budgets, and changes in total dollar and workforce budgets when correlational/regression methods are used. Data suggested that size dominates the analysis of total budgets, and is not a factor when discretionary dollar increments are analyzed. (JAC)

  3. The National Institute of Education and Incremental Budgeting.

    Hastings, Anne H.

    1979-01-01

    The National Institute of Education's (NIE) history demonstrates that the relevant criteria for characterizing budgeting as incremental are not the predictability and stability of appropriations but the conditions of complexity, limited information, multiple factors, and imperfect agreement on ends; NIE's appropriations were dominated by political…

  4. Object class hierarchy for an incremental hypertext editor

    A. Colesnicov

    1995-02-01

    Full Text Available The design of an object class hierarchy is considered for a hypertext editor implementation. The following basic classes were selected: the editor's coordinate system, the memory manager, the text buffer executing basic editing operations, the inherited hypertext buffer, the edit window, and the multi-window shell. Special hypertext editing features, support for incremental hypertext creation, and further generalizations are discussed.

  5. Bipower variation for Gaussian processes with stationary increments

    Barndorff-Nielsen, Ole Eiler; Corcuera, José Manuel; Podolskij, Mark

    2009-01-01

    Convergence in probability and central limit laws of bipower variation for Gaussian processes with stationary increments and for integrals with respect to such processes are derived. The main tools of the proofs are some recent powerful techniques of Wiener/Itô/Malliavin calculus for establishing...

  6. Identifying the Academic Rising Stars via Pairwise Citation Increment Ranking

    Zhang, Chuxu; Liu, Chuang; Yu, Lu; Zhang, Zi-Ke; Zhou, Tao

    2017-01-01

    success academic careers. In this work, given a set of young researchers who have published the first first-author paper recently, we solve the problem of how to effectively predict the top k% researchers who achieve the highest citation increment in Δt

  7. Revisiting the fundamentals of single point incremental forming by

    Silva, Beatriz; Skjødt, Martin; Martins, Paulo A.F.

    2008-01-01

    Knowledge of the physics behind the fracture of material at the transition between the inclined wall and the corner radius of the sheet is of great importance for understanding the fundamentals of single point incremental forming (SPIF). How the material fractures, what is the state of strain...

  8. Some theoretical aspects of capacity increment in gaseous diffusion

    Coates, J. H.; Guais, J. C.; Lamorlette, G.

    1975-09-01

    Facing the sharply growing demand for enrichment services, the problem of implementing new capacities must be included in an optimized scheme spread out over time. In this paper the alternative solutions are studied first for a single increment decision, and then within an optimum schedule. The limits of the analysis are discussed.

  9. Respiratory ammonia output and blood ammonia concentration during incremental exercise

    Ament, W; Huizenga, [No Value; Kort, E; van der Mark, TW; Grevink, RG; Verkerke, GJ

    The aim of this study was to investigate whether the increase of ammonia concentration and lactate concentration in blood was accompanied by an increased expiration of ammonia during graded exercise. Eleven healthy subjects performed an incremental cycle ergometer test. Blood ammonia, blood lactate

  10. Incremental concept learning with few training examples and hierarchical classification

    Bouma, H.; Eendebak, P.T.; Schutte, K.; Azzopardi, G.; Burghouts, G.J.

    2015-01-01

    Object recognition and localization are important to automatically interpret video and allow better querying on its content. We propose a method for object localization that learns incrementally and addresses four key aspects. Firstly, we show that for certain applications, recognition is feasible

  11. Factors for Radical Creativity, Incremental Creativity, and Routine, Noncreative Performance

    Madjar, Nora; Greenberg, Ellen; Chen, Zheng

    2011-01-01

    This study extends theory and research by differentiating between routine, noncreative performance and 2 distinct types of creativity: radical and incremental. We also use a sensemaking perspective to examine the interplay of social and personal factors that may influence a person's engagement in a certain level of creative action versus routine,…

  12. Variance-optimal hedging for processes with stationary independent increments

    Hubalek, Friedrich; Kallsen, J.; Krawczyk, L.

    We determine the variance-optimal hedge when the logarithm of the underlying price follows a process with stationary independent increments in discrete or continuous time. Although the general solution to this problem is known as backward recursion or backward stochastic differential equation, we...

  13. Incremental exercise test performance with and without a respiratory ...

    Incremental exercise test performance with and without a respiratory gas collection system. Industrial-type mask wear is thought to impair exercise performance through increased respiratory dead space, flow ...

  14. 78 FR 22770 - Immigration Benefits Business Transformation, Increment I; Correction

    2013-04-17

    ...-2009-0022] RIN 1615-AB83 Immigration Benefits Business Transformation, Increment I; Correction AGENCY...: Background On August 29, 2011, DHS issued a final rule titled, Immigration Benefits Business Transformation... business processes. In this notice, we are correcting three technical errors. DATES: The effective date of...

  15. Minimizing System Modification in an Incremental Design Approach

    Pop, Paul; Eles, Petru; Pop, Traian

    2001-01-01

    In this paper we present an approach to mapping and scheduling of distributed embedded systems for hard real-time applications, aiming at minimizing the system modification cost. We consider an incremental design process that starts from an already existing sys-tem running a set of applications. We...

  16. Evidence combination for incremental decision-making processes

    Berrada, Ghita; van Keulen, Maurice; de Keijzer, Ander

    The establishment of a medical diagnosis is an incremental process highly fraught with uncertainty. At each step of this painstaking process, it may be beneficial to be able to quantify the uncertainty linked to the diagnosis and steadily update the uncertainty estimation using available sources of

  17. Geometry of finite deformations and time-incremental analysis

    Fiala, Zdeněk

    2016-01-01

    Roč. 81, May (2016), s. 230-244 ISSN 0020-7462 Institutional support: RVO:68378297 Keywords : solid mechanics * finite deformations * time-incremental analysis * Lagrangian system * evolution equation of Lie type Subject RIV: BE - Theoretical Physics Impact factor: 2.074, year: 2016 http://www.sciencedirect.com/science/article/pii/S0020746216000330

  18. Adaptive Change Detection for Long-Term Machinery Monitoring Using Incremental Sliding-Window

    Wang, Teng; Lu, Guo-Liang; Liu, Jie; Yan, Peng

    2017-11-01

    Detection of structural changes from an operational process is a major goal in machine condition monitoring. Existing methods for this purpose are mainly based on retrospective analysis, resulting in a large detection delay that limits their usage in real applications. This paper presents a new adaptive real-time change detection algorithm, an extension of recent research that combines it with an incremental sliding-window strategy, to handle multi-change detection in long-term monitoring of machine operations. In particular, in the framework, Hilbert space embedding of distributions is used to map the original data into the Reproducing Kernel Hilbert Space (RKHS) for change detection; then, a new adaptive threshold strategy is developed for making the change decision, in which a global factor (used to control the coarse-to-fine level of detection) is introduced to replace a fixed threshold value. Through experiments on a range of real test data collected from an experimental rotating machinery system, the excellent detection performance of the algorithm for engineering applications is demonstrated. Compared with state-of-the-art methods, the proposed algorithm is more suitable for long-term machinery condition monitoring without any manual re-calibration, and is thus promising in modern industries.
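
    The following sketch shows the general pattern the abstract describes, under simplifying assumptions: a sliding reference/test window pair, a kernel maximum mean discrepancy (an RKHS embedding distance) as the dissimilarity score, and a threshold that adapts to the running score statistics scaled by a global factor. It is not the authors' implementation.

```python
# Illustrative change detection with MMD over sliding windows and an adaptive threshold.
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
    return np.exp(-gamma * d2)

def mmd(x, y, gamma=1.0):
    kxx, kyy, kxy = rbf_kernel(x, x, gamma), rbf_kernel(y, y, gamma), rbf_kernel(x, y, gamma)
    return kxx.mean() + kyy.mean() - 2 * kxy.mean()

def detect_changes(stream, win=50, global_factor=3.0):
    """Slide two adjacent windows over the signal; the threshold adapts to the
    running statistics of past MMD scores instead of being a fixed constant."""
    scores, changes = [], []
    for t in range(2 * win, len(stream)):
        ref, test = stream[t - 2 * win:t - win], stream[t - win:t]
        s = mmd(ref, test)
        if len(scores) > 10 and s > np.mean(scores) + global_factor * np.std(scores):
            changes.append(t)
        scores.append(s)
    return changes

rng = np.random.default_rng(1)
signal = np.vstack([rng.normal(0, 1, (300, 2)), rng.normal(3, 1, (300, 2))])
print(detect_changes(signal))  # detections should cluster shortly after index 300
```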

  19. Big Data Management with Incremental K-Means Trees–GPU-Accelerated Construction and Visualization

    Jun Wang

    2017-07-01

    Full Text Available While big data is revolutionizing scientific research, the tasks of data management and analytics are becoming more challenging than ever. One way to mitigate the difficulty is to obtain the multilevel hierarchy embedded in the data. Knowing the hierarchy not only reveals the nature of the data, it is also often the first step in big data analytics. However, current algorithms for learning the hierarchy are typically not scalable to large volumes of data with high dimensionality. To tackle this challenge, in this paper, we propose a new scalable approach for constructing the tree structure from data. Our method builds the tree in a bottom-up manner, with adapted incremental k-means. By referencing the distribution of point distances, one can flexibly control the height of the tree and the branching of each node. Dimension reduction is also conducted as a pre-process, to further boost the computing efficiency. The algorithm takes a parallel design and is implemented with CUDA (Compute Unified Device Architecture), so that it can be efficiently applied to big data. We test the algorithm with two real-world datasets, and the results are visualized with extended circular dendrograms and other visualization techniques.
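
    A small CPU-only sketch of the bottom-up construction: points are clustered with k-means, each cluster is replaced by its centroid, and the step repeats until the top level is small. The branching heuristic here is hypothetical, and the paper's incremental updates, distance-distribution control, and CUDA acceleration are omitted.

```python
# Rough bottom-up hierarchy construction with repeated k-means (illustration only).
import numpy as np
from sklearn.cluster import KMeans

def build_bottom_up(points, branching=4, min_nodes=1):
    """Cluster points, replace each cluster by its centroid, and repeat until few
    enough nodes remain; returns one array of nodes per tree level."""
    levels = [points]
    current = points
    while len(current) > branching * min_nodes:
        k = max(min_nodes, len(current) // branching)
        km = KMeans(n_clusters=k, n_init=3).fit(current)
        current = km.cluster_centers_
        levels.append(current)
    return levels  # levels[0] = leaves, levels[-1] = coarsest nodes

data = np.random.default_rng(0).normal(size=(1000, 8))
for depth, nodes in enumerate(build_bottom_up(data)):
    print(f"level {depth}: {len(nodes)} nodes")
```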

  20. Scheduling and Mapping in an Incremental Design Methodology for Distributed Real-Time Embedded Systems

    Pop, Paul; Eles, Petru; Peng, Zebo

    2004-01-01

    In this paper we present an approach to mapping and scheduling of distributed embedded systems for hard real-time applications, aiming at a minimization of the system modification cost. We consider an incremental design process that starts from an already existing system running a set of applications...... be added to the resulted system. Thus, we propose a heuristic which finds the set of already running applications which have to be remapped and rescheduled at the same time with mapping and scheduling the new application, such that the disturbance on the running system (expressed as the total cost implied...... by the modifications) is minimized. Once this set of applications has been determined, we outline a mapping and scheduling algorithm aimed at fulfilling the requirements stated above. The approaches have been evaluated based on extensive experiments using a large number of generated benchmarks as well as a real...

  1. Incremental Learning of Context Free Grammars by Parsing-Based Rule Generation and Rule Set Search

    Nakamura, Katsuhiko; Hoshina, Akemi

    This paper discusses recent improvements and extensions in the Synapse system for inductive inference of context free grammars (CFGs) from sample strings. Synapse uses incremental learning, rule generation based on bottom-up parsing, and search over rule sets. The form of production rules in the previous system is extended from Revised Chomsky Normal Form A→βγ to Extended Chomsky Normal Form, which also includes A→B, where each of β and γ is either a terminal or nonterminal symbol. From the result of bottom-up parsing, a rule generation mechanism synthesizes the minimum production rules required for parsing positive samples. Instead of the inductive CYK algorithm in the previous version of Synapse, the improved version uses a novel rule generation method, called "bridging," which bridges the missing part of the derivation tree for a positive string. The improved version also employs a novel search strategy, called serial search, in addition to minimum rule set search. The synthesis of grammars by the serial search is faster than by the minimum set search in most cases. On the other hand, the size of the generated CFGs is generally larger than that from the minimum set search, and the system can find no appropriate grammar for some CFLs by the serial search. The paper shows experimental results of incremental learning of several fundamental CFGs and compares the methods of rule generation and search strategies.
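
    To make the bottom-up parsing step concrete, here is a plain CYK membership test for a grammar in Chomsky normal form, the kind of check a Synapse-like rule-set search would run against each positive and negative sample. The rule-generation ("bridging") and search strategies themselves are not shown, and the a^n b^n grammar below is just an illustrative candidate.

```python
# Small CYK-style membership check for a grammar in Chomsky normal form (illustrative).
def cyk_parses(rules, start, s):
    """rules: list of (A, (B, C)) binary and (A, (a,)) terminal productions."""
    n = len(s)
    table = [[set() for _ in range(n + 1)] for _ in range(n)]
    for i, ch in enumerate(s):
        table[i][1] = {A for A, rhs in rules if rhs == (ch,)}
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            for split in range(1, span):
                for A, rhs in rules:
                    if len(rhs) == 2 and rhs[0] in table[i][split] \
                            and rhs[1] in table[i + split][span - split]:
                        table[i][span].add(A)
    return start in table[0][n]

# Candidate grammar for the language a^n b^n (n >= 1), written in CNF.
rules = [("S", ("A", "T")), ("S", ("A", "B")), ("T", ("S", "B")),
         ("A", ("a",)), ("B", ("b",))]
print(cyk_parses(rules, "S", "aabb"), cyk_parses(rules, "S", "abb"))  # True False
```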

  2. An Improved Incremental Learning Approach for KPI Prognosis of Dynamic Fuel Cell System.

    Yin, Shen; Xie, Xiaochen; Lam, James; Cheung, Kie Chung; Gao, Huijun

    2016-12-01

    The key performance indicator (KPI) has an important practical value with respect to the product quality and economic benefits for modern industry. To cope with the KPI prognosis issue under nonlinear conditions, this paper presents an improved incremental learning approach based on available process measurements. The proposed approach takes advantage of the algorithm overlapping of locally weighted projection regression (LWPR) and partial least squares (PLS), implementing the PLS-based prognosis in each locally linear model produced by the incremental learning process of LWPR. The global prognosis results including KPI prediction and process monitoring are obtained from the corresponding normalized weighted means of all the local models. The statistical indicators for prognosis are enhanced as well by the design of novel KPI-related and KPI-unrelated statistics with suitable control limits for non-Gaussian data. For application-oriented purpose, the process measurements from real datasets of a proton exchange membrane fuel cell system are employed to demonstrate the effectiveness of KPI prognosis. The proposed approach is finally extended to a long-term voltage prediction for potential reference of further fuel cell applications.
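
    A schematic of the "normalized weighted mean of local models" step only, assuming each local model is a linear predictor with a Gaussian receptive field; the actual LWPR training, PLS projection, and monitoring statistics are beyond this sketch.

```python
# Illustrative blending of local linear models by normalized Gaussian weights.
import numpy as np

class LocalModel:
    def __init__(self, center, width, coef, intercept):
        self.center, self.width = np.asarray(center), width
        self.coef, self.intercept = np.asarray(coef), intercept

    def weight(self, x):
        # Gaussian receptive field around the model's center.
        return np.exp(-np.sum((x - self.center) ** 2) / (2 * self.width ** 2))

    def predict(self, x):
        return float(self.coef @ x + self.intercept)

def global_prediction(models, x):
    x = np.asarray(x)
    w = np.array([m.weight(x) for m in models])
    y = np.array([m.predict(x) for m in models])
    return float(w @ y / w.sum())  # normalized weighted mean

models = [LocalModel([0.0, 0.0], 1.0, [1.0, -0.5], 0.2),
          LocalModel([2.0, 2.0], 1.0, [0.3, 0.8], -0.1)]
print(global_prediction(models, [1.0, 1.5]))
```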

  3. iTopN: Incremental Extraction of the N Most Visible Objects

    Bukauskas, Linas; Mark, Leo; Omiecinski, Edward

    2003-01-01

    The visual exploration of large databases calls for a tight coupling of database and visualization systems. Current visualization systems typically fetch all the data and organize it in a scene tree, which is then used to render the visible data. For immersive data explorations, where an observer navigates in a potentially huge data space and explores selected data regions, this approach is inadequate. A scalable approach is to make the database system observer-aware and exchange the data that is visible and most relevant to the observer. In this paper we present iTopN, an incremental algorithm......-LRU given the same amount of memory. Our experiments also show that for LW-LRU to perform as fast as iTopN it needs three times as much memory....
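
    As a toy illustration of the query iTopN maintains, the sketch below rescores all objects by inverse distance to the observer and keeps the top N; the real algorithm avoids exactly this full rescan by updating the visible set incrementally inside the database, which is the point of the paper.

```python
# Naive baseline for "N most visible objects", with visibility taken as inverse distance.
import heapq
import numpy as np

def top_n_visible(objects, observer, n=5):
    """objects: {id: position}. Returns ids of the n closest (most visible) objects."""
    observer = np.asarray(observer)
    return heapq.nlargest(
        n, objects,
        key=lambda oid: 1.0 / (np.linalg.norm(np.asarray(objects[oid]) - observer) + 1e-9))

rng = np.random.default_rng(0)
objects = {i: rng.uniform(0, 100, 3) for i in range(1000)}
for step, pos in enumerate([(0, 0, 0), (5, 5, 5), (10, 10, 10)]):  # observer moves
    print(step, top_n_visible(objects, pos))
```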

  4. Algorithmic alternatives

    Creutz, M.

    1987-11-01

    A large variety of Monte Carlo algorithms are being used for lattice gauge simulations. For purely bosonic theories, present approaches are generally adequate; nevertheless, overrelaxation techniques promise savings by a factor of about three in computer time. For fermionic fields the situation is more difficult and less clear. Algorithms which involve an extrapolation to a vanishing step size are all quite closely related. Methods which do not require such an approximation tend to require computer time which grows as the square of the volume of the system. Recent developments combining global accept/reject stages with Langevin or microcanonical updating promise to reduce this growth to V^(4/3)

  5. Combinatorial algorithms

    Hu, T C

    2002-01-01

    Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristic and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9

  6. Autodriver algorithm

    Anna Bourmistrova

    2011-02-01

    Full Text Available The autodriver algorithm is an intelligent method to eliminate the need for steering by a driver on a well-defined road. The proposed method performs best on a four-wheel steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on making the actual vehicle center of rotation coincide with the road center of curvature, by adjusting the kinematic center of rotation. The road center of curvature is assumed to be prior information for a given road, while the dynamic center of rotation is the output of the dynamic equations of motion of the vehicle, using steering angle and velocity measurements as inputs. We use the kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. With increase of forward speed, the road and tire characteristics, along with the motion dynamics of the vehicle, cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.
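
    A much simplified kinematic sketch of the central idea, assuming a front-steer bicycle model rather than the paper's 4WS dynamic model: pick the steering angle whose kinematic turning radius equals the road's radius of curvature, then correct it with position and orientation error feedback (the gains are illustrative).

```python
# Simplified bicycle-model illustration, not the paper's 4WS dynamic algorithm.
import math

def steering_angle(wheelbase, road_radius):
    """Front steering angle (rad) placing the kinematic rotation center at the
    road curvature center, for a simple front-steer bicycle model."""
    return math.atan(wheelbase / road_radius)

def corrected_steering(angle, lateral_error, heading_error, kp=0.5, kh=0.8):
    # Position and orientation errors fed back to adjust the nominal steering angle.
    return angle + kp * lateral_error + kh * heading_error

nominal = steering_angle(wheelbase=2.7, road_radius=50.0)
print(math.degrees(corrected_steering(nominal, lateral_error=0.2, heading_error=-0.05)))
```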

  7. Mejora de los Test de Planificabilidad para Asignación Incremental de Tareas en Sistemas Multiprocesadores de Tiempo Real

    Sergio Sáez

    2013-04-01

    Full Text Available During the design stage of a real-time multiprocessor system, schedulability tests are a key component of the task allocation algorithms. Using exact schedulability tests increases the efficiency of the allocation algorithms, but the computation time needed to validate a task partition is also greatly increased. Although several studies have improved the performance of such schedulability tests, none has focused on the context in which the tests are used. This work presents several improvements to exact schedulability tests that exploit the incremental nature of the allocation process, reducing their execution cost when they are invoked by task allocation algorithms. Keywords: Multiprocessor Systems, Schedulability Analysis, Real-Time Systems
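
    As an illustration of the kind of exact test involved, the sketch below runs classical fixed-priority response-time analysis each time a task is tentatively added to a processor; the incremental optimizations the paper proposes (reusing previous results across allocation steps) are not shown.

```python
# Exact fixed-priority response-time test, called repeatedly during allocation (sketch).
def response_time(task, higher_priority):
    """task and higher_priority entries are (C, T) = (WCET, period). Returns the
    worst-case response time, or None if the iteration exceeds the period."""
    C, T = task
    r = C
    while True:
        interference = sum(((r + Ti - 1) // Ti) * Ci for Ci, Ti in higher_priority)
        r_next = C + interference
        if r_next == r:
            return r
        if r_next > T:          # deadline (= period) missed: not schedulable
            return None
        r = r_next

def fits_on_processor(tasks):
    """tasks sorted by priority (highest first); all must meet implicit deadlines."""
    return all(response_time(t, tasks[:i]) is not None for i, t in enumerate(tasks))

# Incremental allocation: re-run the exact test after each tentative assignment.
assigned = [(1, 5), (2, 10)]
candidate = (3, 20)
print(fits_on_processor(assigned + [candidate]))  # True for this small task set
```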

  8. Incremental projection approach of regularization for inverse problems

    Souopgui, Innocent, E-mail: innocent.souopgui@usm.edu [The University of Southern Mississippi, Department of Marine Science (United States); Ngodock, Hans E., E-mail: hans.ngodock@nrlssc.navy.mil [Naval Research Laboratory (United States); Vidard, Arthur, E-mail: arthur.vidard@imag.fr; Le Dimet, François-Xavier, E-mail: ledimet@imag.fr [Laboratoire Jean Kuntzmann (France)

    2016-10-15

    This paper presents an alternative approach to the regularized least squares solution of ill-posed inverse problems. Instead of solving a minimization problem with an objective function composed of a data term and a regularization term, the regularization information is used to define a projection onto a convex subspace of regularized candidate solutions. The objective function is modified to include the projection of each iterate in the place of the regularization. Numerical experiments based on the problem of motion estimation for geophysical fluid images show the improvement of the proposed method compared with regularization methods. For the presented test case, the incremental projection method uses 7 times less computation time than the regularization method to reach the same error target. Moreover, at convergence, the incremental projection is two orders of magnitude more accurate than the regularization method.
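
    A generic sketch of the pattern described above: minimize the data term by gradient steps and, in place of a regularization term, project each iterate onto a convex set of admissible solutions. The box constraint and toy least-squares problem below are illustrative, not the motion-estimation application.

```python
# Projected-gradient illustration of "regularization by projection of each iterate".
import numpy as np

def projected_gradient(grad_data_term, project, x0, step=0.01, iters=200):
    x = x0.copy()
    for _ in range(iters):
        x = x - step * grad_data_term(x)   # descend on the data term only
        x = project(x)                     # then project onto the admissible set
    return x

# Toy problem: least squares fit constrained to the box [0, 1]^n.
rng = np.random.default_rng(0)
A, b = rng.normal(size=(30, 5)), rng.normal(size=30)
grad = lambda x: A.T @ (A @ x - b)
project = lambda x: np.clip(x, 0.0, 1.0)
print(projected_gradient(grad, project, np.zeros(5)))
```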

  9. Will Incremental Hemodialysis Preserve Residual Function and Improve Patient Survival?

    Davenport, Andrew

    2015-01-01

    The progressive loss of residual renal function in peritoneal dialysis patients is associated with increased mortality. It has been suggested that incremental dialysis may help preserve residual renal function and improve patient survival. Residual renal function depends upon both patient related and dialysis associated factors. Maintaining patients in an over-hydrated state may be associated with better preservation of residual renal function but any benefit comes with a significant risk of cardiovascular consequences. Notably, it is only observational studies that have reported an association between dialysis patient survival and residual renal function; causality has not been established for dialysis patient survival. The tenuous connections between residual renal function and outcomes and between incremental hemodialysis and residual renal function should temper our enthusiasm for interventions in this area. PMID:25385441

  10. Power calculation of linear and angular incremental encoders

    Prokofev, Aleksandr V.; Timofeev, Aleksandr N.; Mednikov, Sergey V.; Sycheva, Elena A.

    2016-04-01

    Automation technology is constantly expanding its role in improving the efficiency of manufacturing and testing processes in all branches of industry. More than ever before, the mechanical movements of linear slides, rotary tables, robot arms, actuators, etc. are numerically controlled. Linear and angular incremental photoelectric encoders measure mechanical motion and transmit the measured values back to the control unit. The capabilities of these systems are undergoing continual development in terms of their resolution, accuracy and reliability, their measuring ranges, and maximum speeds. This article discusses the method of power calculation for linear and angular incremental photoelectric encoders, to find the optimum parameters for their components, such as light emitters, photodetectors, linear and angular scales, optical components, etc. It analyzes methods and devices that permit high resolutions on the order of 0.001 mm or 0.001°, as well as large measuring lengths of over 100 mm. In linear and angular incremental photoelectric encoders, the optical beam is usually formed by a condenser lens; it passes through the measuring unit and changes its value depending on the movement of a scanning head or measuring raster. The transmitted light beam is converted into an electrical signal by the photodetector block for processing in the electronic block. Therefore, the quantity to be calculated for the energy source is the required value of the optical signal at the input of the photodetector block, which is reliably recorded and processed in the electronic unit of linear and angular incremental optoelectronic encoders.

  11. Table incremental slow injection CE-CT in lung cancer

    Yoshida, Shoji; Maeda, Tomoho; Morita, Masaru

    1988-01-01

    The purpose of this study is to evaluate tumor enhancement in lung cancer under a table incremental study with slow injection of contrast media. Early serial images of 8 slices were obtained during the slow injection (1.5 ml/sec) of contrast media. Following the early images, delayed images of the same 8 slices were taken 2 minutes later. Characteristic enhancement patterns of the primary cancer and metastatic mediastinal lymph nodes were recognized in this study. Enhancement of the primary lesion was classified into 4 patterns: irregular geographic, heterogeneous, homogeneous, and rim-enhanced. In mediastinal metastatic lymphadenopathy, three enhancement patterns were obtained: heterogeneous, homogeneous, and ring-enhanced. Some characteristic enhancement patterns according to the histopathological findings of the lung cancer were obtained. Using this incremental slow-injection CE-CT, precise information about the relationship between the lung cancer and adjacent mediastinal structures, as well as distinct staining patterns of the tumor and mediastinal lymph nodes, was obtained. (author)

  12. Final Safety Analysis Report (FSAR) for Building 332, Increment III

    Odell, B. N.; Toy, Jr., A. J.

    1977-08-31

    This Final Safety Analysis Report (FSAR) supplements the Preliminary Safety Analysis Report (PSAR), dated January 18, 1974, for Building 332, Increment III of the Plutonium Materials Engineering Facility located at the Lawrence Livermore Laboratory (LLL). The FSAR, in conjunction with the PSAR, shows that the completed increment provides facilities for safely conducting the operations as described. These documents satisfy the requirements of ERDA Manual Appendix 6101, Annex C, dated April 8, 1971. The format and content of this FSAR complies with the basic requirements of the letter of request from ERDA San to LLL, dated March 10, 1972. Included as appendices in support of the FSAR are the Building 332 Operational Safety Procedure and the LLL Disaster Control Plan.

  13. Incremental exposure facilitates adaptation to sensory rearrangement. [vestibular stimulation patterns

    Lackner, J. R.; Lobovits, D. N.

    1978-01-01

    Visual-target pointing experiments were performed on 24 adult volunteers in order to compare the relative effectiveness of incremental (stepwise) and single-step exposure conditions on adaptation to visual rearrangement. The differences between the preexposure and postexposure scores served as an index of the adaptation elicited during the exposure period. It is found that both single-step and stepwise exposure to visual rearrangement elicit compensatory changes in sensorimotor coordination. However, stepwise exposure, when compared to single-step exposure in terms of the average magnitude of visual displacement over the exposure period, clearly enhances the rate of adaptation. It seems possible that the enhancement of adaptation to unusual patterns of sensory stimulation produced by incremental exposure reflects a general principle of sensorimotor function.

  14. Thermomechanical simulations and experimental validation for high speed incremental forming

    Ambrogio, Giuseppina; Gagliardi, Francesco; Filice, Luigino; Romero, Natalia

    2016-10-01

    Incremental sheet forming (ISF) consists in deforming only a small region of the workpiece through a punch driven by a NC machine. The drawback of this process is its slowness. In this study, a high speed variant has been investigated from both numerical and experimental points of view. The aim has been the design of a FEM model able to reproduce the material behavior during the high speed process by defining a thermomechanical model. An experimental campaign has been performed on a CNC lathe at high speed to test process feasibility. The first results have shown that the material presents the same performance as in conventional speed ISF and, in some cases, better material behavior due to the temperature increment. An accurate numerical simulation has been performed to investigate the material behavior during the high speed process, substantially confirming the experimental evidence.

  15. Automobile sheet metal part production with incremental sheet forming

    İsmail DURGUN

    2016-02-01

    Full Text Available Nowadays, the effects of global warming are increasing drastically, which leads to increased interest in energy efficiency and sustainable production methods. As a result of these adverse conditions, national and international project platforms, OEMs (Original Equipment Manufacturers) and SMEs (Small and Mid-size Manufacturers) perform many studies or improve existing methodologies within the scope of advanced manufacturing techniques. In this study, the advanced manufacturing and sustainable production method "Incremental Sheet Metal Forming (ISF)" was used for the sheet metal forming process. A vehicle fender was manufactured with and without a die by using different toolpath strategies and die sets. At the end of the study, the results were investigated with respect to the method and parameters used. Keywords: Template incremental sheet metal, Metal forming

  16. On kinematical minimum principles for rates and increments in plasticity

    Zouain, N.

    1984-01-01

    The optimization approach for elastoplastic analysis is discussed, showing that some minimum principles related to numerical methods can be derived by means of duality and penalization procedures. Three minimum principles for velocity and plastic multiplier rate fields are presented in the framework of perfect plasticity. The first one is the classical Greenberg formulation. The second one, due to Capurso, is developed here with a different motivation, and modified by penalization of constraints so as to arrive at a third principle for rates. The counterparts of these optimization formulations in terms of discrete increments of displacements and plastic multipliers are discussed. The third of these minimum principles for finite increments is recognized to be closely related to Maier's formulation of holonomic plasticity. (Author) [pt

  17. Observers for Systems with Nonlinearities Satisfying an Incremental Quadratic Inequality

    Acikmese, Ahmet Behcet; Corless, Martin

    2004-01-01

    We consider the problem of state estimation for nonlinear time-varying systems whose nonlinearities satisfy an incremental quadratic inequality. These observer results unify earlier results in the literature and extend them to some additional classes of nonlinearities. Observers are presented which guarantee that the state estimation error exponentially converges to zero. Observer design involves solving linear matrix inequalities for the observer gain matrices. Results are illustrated by application to a simple model of an underwater.

  18. Fault-tolerant incremental diagnosis with limited historical data

    Gillblad, Daniel; Holst, Anders; Steinert, Rebecca

    2006-01-01

    In many diagnosis situations it is desirable to perform a classification in an iterative and interactive manner. All relevant information may not be available initially and must be acquired manually or at a cost. The matter is often complicated by very limited amounts of knowledge and examples when a new system to be diagnosed is initially brought into use. Here, we will describe how to create an incremental classification system based on a statistical model that is trained from empirical dat...

  19. Diagnosis of small hepatocellular carcinoma by incremental dynamic CT

    Uchida, Masafumi; Kumabe, Tsutomu; Edamitsu, Osamu

    1993-01-01

    Thirty cases of pathologically confirmed small hepatocellular carcinoma were examined by Incremental Dynamic CT (ICT). ICT scanned the whole liver with single-breath-hold technique; therefore, effective early contrast enhancement could be obtained for diagnosis. Among the 30 tumors, 26 were detected. The detection rate was 87%. A high detection rate was obtained in tumors more than 20 mm in diameter. Twenty-two of 26 tumors could be diagnosed correctly. ICT examination was useful for detection of small hepatocellular carcinoma. (author)

  20. Public Key Infrastructure Increment 2 (PKI Inc 2)

    2016-03-01

    across the Global Information Grid (GIG) and at rest. Using authoritative data, obtained via face-to-face identity proofing, PKI creates a credential ...operating on a network by provision of assured PKI-based credentials for any device on that network. PKI Increment One made significant...provide assured/secure validation of revocation of an electronic/digital credential. 2. DoD PKI shall support assured revocation status requests of

  1. The intermetallic ThRh5: microstructure and enthalpy increments

    Banerjee, Aparna; Joshi, A.R.; Kaity, Santu; Mishra, R.; Roy, S.B.

    2013-01-01

    Actinide intermetallics are one of the most interesting and important series of compounds. The thermochemistry of these compounds plays a significant role in understanding the nature of bonding in alloys and nuclear fuel performance. In the present paper we report the synthesis and characterization of the thorium-based intermetallic compound ThRh5(s) by the SEM/EDX technique. The mechanical properties and the enthalpy increment of the alloy as a function of temperature have been measured. (author)

  2. Systematic Luby Transform codes as incremental redundancy scheme

    Grobler, TL

    2011-09-01


  3. Incremental Support Vector Machine Framework for Visual Sensor Networks

    Yuichi Motai

    2007-01-01

    Full Text Available Motivated by the emerging requirements of surveillance networks, we present in this paper an incremental multiclassification support vector machine (SVM) technique as a new framework for action classification based on real-time multivideo collected by homogeneous sites. The technique is based on an adaptation of the least square SVM (LS-SVM) formulation but extends beyond the static image-based learning of current SVM methodologies. In applying the technique, an initial supervised offline learning phase is followed by a visual behavior data acquisition and an online learning phase during which the cluster head performs an ensemble of model aggregations based on the sensor nodes inputs. The cluster head then selectively switches on designated sensor nodes for future incremental learning. Combining sensor data offers an improvement over single camera sensing especially when the latter has an occluded view of the target object. The optimization involved alleviates the burdens of power consumption and communication bandwidth requirements. The resulting misclassification error rate, the iterative error reduction rate of the proposed incremental learning, and the decision fusion technique prove its validity when applied to visual sensor networks. Furthermore, the enabled online learning allows an adaptive domain knowledge insertion and offers the advantage of reducing both the model training time and the information storage requirements of the overall system which makes it even more attractive for distributed sensor networks communication.

  4. Health level seven interoperability strategy: big data, incrementally structured.

    Dolin, R H; Rogers, B; Jaffe, C

    2015-01-01

    Describe how the HL7 Clinical Document Architecture (CDA), a foundational standard in US Meaningful Use, contributes to a "big data, incrementally structured" interoperability strategy, whereby data structured incrementally gets large amounts of data flowing faster. We present cases showing how this approach is leveraged for big data analysis. To support the assertion that semi-structured narrative in CDA format can be a useful adjunct in an overall big data analytic approach, we present two case studies. The first assesses an organization's ability to generate clinical quality reports using coded data alone vs. coded data supplemented by CDA narrative. The second leverages CDA to construct a network model for referral management, from which additional observations can be gleaned. The first case shows that coded data supplemented by CDA narrative resulted in significant variances in calculated performance scores. In the second case, we found that the constructed network model enables the identification of differences in patient characteristics among different referral work flows. The CDA approach goes after data indirectly, by focusing first on the flow of narrative, which is then incrementally structured. A quantitative assessment of whether this approach will lead to a greater flow of data and ultimately a greater flow of structured data vs. other approaches is planned as a future exercise. Along with growing adoption of CDA, we are now seeing the big data community explore the standard, particularly given its potential to supply analytic engines with volumes of data previously not possible.

  5. Conservation of wildlife populations: factoring in incremental disturbance.

    Stewart, Abbie; Komers, Petr E

    2017-06-01

    Progressive anthropogenic disturbance can alter ecosystem organization potentially causing shifts from one stable state to another. This potential for ecosystem shifts must be considered when establishing targets and objectives for conservation. We ask whether a predator-prey system response to incremental anthropogenic disturbance might shift along a disturbance gradient and, if it does, whether any disturbance thresholds are evident for this system. Development of linear corridors in forested areas increases wolf predation effectiveness, while high density of development provides a safe-haven for their prey. If wolves limit moose population growth, then wolves and moose should respond inversely to land cover disturbance. Using general linear model analysis, we test how the rate of change in moose (Alces alces) density and wolf (Canis lupus) harvest density are influenced by the rate of change in land cover and proportion of land cover disturbed within a 300,000 km² area in the boreal forest of Alberta, Canada. Using logistic regression, we test how the direction of change in moose density is influenced by measures of land cover change. In response to incremental land cover disturbance, moose declines occurred where 43% of land cover was disturbed and wolf density declined. Wolves and moose appeared to respond inversely to incremental disturbance with the balance between moose decline and wolf increase shifting at about 43% of land cover disturbed. Conservation decisions require quantification of disturbance rates and their relationships to predator-prey systems because ecosystem responses to anthropogenic disturbance shift across disturbance gradients.

  6. Context-dependent incremental timing cells in the primate hippocampus.

    Sakon, John J; Naya, Yuji; Wirth, Sylvia; Suzuki, Wendy A

    2014-12-23

    We examined timing-related signals in primate hippocampal cells as animals performed an object-place (OP) associative learning task. We found hippocampal cells with firing rates that incrementally increased or decreased across the memory delay interval of the task, which we refer to as incremental timing cells (ITCs). Three distinct categories of ITCs were identified. Agnostic ITCs did not distinguish between different trial types. The remaining two categories of cells signaled time and trial context together: One category of cells tracked time depending on the behavioral action required for a correct response (i.e., early vs. late release), whereas the other category of cells tracked time only for those trials cued with a specific OP combination. The context-sensitive ITCs were observed more often during sessions where behavioral learning was observed and exhibited reduced incremental firing on incorrect trials. Thus, single primate hippocampal cells signal information about trial timing, which can be linked with trial type/context in a learning-dependent manner.

  7. Algorithmic Self

    Markham, Annette

    This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly...... layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also...... contributes an innovative method for blending actor network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows....

  8. Reachable Distance Space: Efficient Sampling-Based Planning for Spatially Constrained Systems

    Xinyu Tang,

    2010-01-25

    Motion planning for spatially constrained robots is difficult due to additional constraints placed on the robot, such as closure constraints for closed chains or requirements on end-effector placement for articulated linkages. It is usually computationally too expensive to apply sampling-based planners to these problems since it is difficult to generate valid configurations. We overcome this challenge by redefining the robot's degrees of freedom and constraints into a new set of parameters, called reachable distance space (RD-space), in which all configurations lie in the set of constraint-satisfying subspaces. This enables us to directly sample the constrained subspaces with complexity linear in the number of the robot's degrees of freedom. In addition to supporting efficient sampling of configurations, we show that the RD-space formulation naturally supports planning and, in particular, we design a local planner suitable for use by sampling-based planners. We demonstrate the effectiveness and efficiency of our approach for several systems including closed chain planning with multiple loops, restricted end-effector sampling, and on-line planning for drawing/sculpting. We can sample single-loop closed chain systems with 1,000 links in time comparable to open chain sampling, and we can generate samples for 1,000-link multi-loop systems of varying topologies in less than a second. © 2010 The Author(s).
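
    A toy 2-D illustration of the reachable-distance idea for a single-loop closed chain: each intermediate joint is accepted only if the remaining links can still reach the loop's anchor, and the last two links are closed exactly by circle intersection, so every sample satisfies the closure constraint by construction. The paper's RD-space formulation is more general (trees of loops, 3-D, end-effector constraints) and is not reproduced here.

```python
# Constraint-satisfying sampling of a planar single-loop closed chain (illustration only).
import math, random

def reachable(d, lengths):
    """Can a chain with these link lengths span exactly distance d?"""
    total, longest = sum(lengths), max(lengths)
    return max(0.0, 2 * longest - total) <= d <= total

def sample_closed_chain(links, anchor=(0.0, 0.0), tries=1000):
    pts, (x, y) = [anchor], anchor
    for i, L in enumerate(links[:-2]):
        for _ in range(tries):
            th = random.uniform(0, 2 * math.pi)
            nx, ny = x + L * math.cos(th), y + L * math.sin(th)
            # Keep the joint only if the leftover links can still close the loop.
            if reachable(math.hypot(nx - anchor[0], ny - anchor[1]), links[i + 1:]):
                x, y = nx, ny
                pts.append((x, y))
                break
        else:
            return None
    # Close the loop exactly with the last two links (circle-circle intersection).
    a, b = links[-2], links[-1]
    dx, dy = anchor[0] - x, anchor[1] - y
    d = math.hypot(dx, dy)
    u = (d * d + a * a - b * b) / (2 * d)
    h2 = a * a - u * u
    if h2 < 0:
        return None
    h = math.sqrt(h2)
    px, py = x + u * dx / d, y + u * dy / d
    pts.append((px - h * dy / d, py + h * dx / d))
    pts.append(anchor)
    return pts

print(sample_closed_chain([1.0, 1.0, 1.0, 1.0, 1.0]))
```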

  9. Comparison between conservative perturbation and sampling based methods for propagation of Non-Neutronic uncertainties

    Campolina, Daniel de A.M.; Pereira, Claubia; Veloso, Maria Auxiliadora F.

    2013-01-01

    For all the physical components that comprise a nuclear system there is an uncertainty. Assessing the impact of uncertainties in the simulation of fissionable material systems is essential for best estimate calculations, which have been replacing conservative model calculations as computational power increases. The propagation of uncertainty in a simulation using sampling-based methods is recent because of the huge computational effort required. In this work a sample space of MCNP calculations was used as a black box model to propagate the uncertainty of system parameters. The efficiency of the method was compared to a conservative method. The uncertainties considered in the input parameters of the reactor were non-neutronic, including geometry dimensions and density. The effect of the uncertainties on the effective multiplication factor of the system was analyzed with respect to the possibility of using many uncertainties in the same input. If the case includes more than 46 parameters with uncertainty in the same input, the sampling-based method proves to be more efficient than the conservative method. (author)
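
    A generic sketch of sampling-based propagation around a black-box model: input parameters are drawn from their assumed distributions, the model is run once per sample, and the output spread is summarized. The linear k_eff stand-in below is purely illustrative and replaces the MCNP black box.

```python
# Sampling-based uncertainty propagation through a black-box model (illustrative).
import numpy as np

def k_eff_model(params):
    """Placeholder for the black-box neutronics code; any callable would work here."""
    radius, density = params
    return 1.0 + 0.02 * (radius - 10.0) + 0.05 * (density - 19.0)

rng = np.random.default_rng(42)
n_samples = 500
samples = np.column_stack([
    rng.normal(10.0, 0.05, n_samples),   # geometry dimension with its uncertainty
    rng.normal(19.0, 0.10, n_samples),   # material density with its uncertainty
])
k = np.array([k_eff_model(p) for p in samples])
print(f"k_eff = {k.mean():.5f} +/- {k.std(ddof=1):.5f}")
```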

  10. Racing Sampling Based Microimmune Optimization Approach Solving Constrained Expected Value Programming

    Kai Yang

    2016-01-01

    Full Text Available This work investigates a bioinspired microimmune optimization algorithm to solve a general kind of single-objective nonlinear constrained expected value programming without any prior distribution. In the study of the algorithm, two lower bound sample estimates of random variables are theoretically developed to estimate the empirical values of individuals. Two adaptive racing sampling schemes are designed to identify competitive individuals in a given population, by which high-quality individuals can obtain a large sampling size. An immune evolutionary mechanism, along with a local search approach, is constructed to evolve the current population. The comparative experiments have shown that the proposed algorithm can effectively solve higher-dimensional benchmark problems and has potential for further applications.

  11. Demonstration/Validation of Incremental Sampling at Two Diverse Military Ranges and Development of an Incremental Sampling Tool

    2010-06-01

    Sampling (MIS): a technique of combining many increments of soil from a number of points within an exposure area, developed by Enviro Stat (trademarked)... The aim is to demonstrate a reliable soil sampling strategy to accurately characterize contaminant concentrations in spatially extreme and heterogeneous sites... divided into a set of decision (exposure) units, with one or several discrete or small-scale composite soil samples collected to represent each decision unit.

  12. A Learning Algorithm based on High School Teaching Wisdom

    Philip, Ninan Sajeeth

    2010-01-01

    A learning algorithm based on primary school teaching and learning is presented. The methodology is to continuously evaluate a student and to give them training on the examples for which they repeatedly fail, until they can correctly answer all types of questions. This incremental learning procedure produces better learning curves by requiring the student to dedicate their learning time optimally to the failed examples. When used in machine learning, the algorithm is found to train a machine...

  13. Prediction model of ammonium uranyl carbonate calcination by microwave heating using incremental improved Back-Propagation neural network

    Li Yingwei [Faculty of Metallurgical and Energy Engineering, Kunming University of Science and Technology, Kunming, Yunnan Province 650093 (China); Key Laboratory of Unconventional Metallurgy, Ministry of Education, Kunming University of Science and Technology, Kunming, Yunnan Province 650093 (China); Peng Jinhui, E-mail: jhpeng@kmust.edu.c [Faculty of Metallurgical and Energy Engineering, Kunming University of Science and Technology, Kunming, Yunnan Province 650093 (China); Key Laboratory of Unconventional Metallurgy, Ministry of Education, Kunming University of Science and Technology, Kunming, Yunnan Province 650093 (China); Liu Bingguo [Faculty of Metallurgical and Energy Engineering, Kunming University of Science and Technology, Kunming, Yunnan Province 650093 (China); Key Laboratory of Unconventional Metallurgy, Ministry of Education, Kunming University of Science and Technology, Kunming, Yunnan Province 650093 (China); Li Wei [Key Laboratory of Unconventional Metallurgy, Ministry of Education, Kunming University of Science and Technology, Kunming, Yunnan Province 650093 (China); Huang Daifu [No. 272 Nuclear Industry Factory, China National Nuclear Corporation, Hengyang, Hunan Province 421002 (China); Zhang Libo [Faculty of Metallurgical and Energy Engineering, Kunming University of Science and Technology, Kunming, Yunnan Province 650093 (China); Key Laboratory of Unconventional Metallurgy, Ministry of Education, Kunming University of Science and Technology, Kunming, Yunnan Province 650093 (China)

    2011-05-15

    Research highlights: The incremental improved Back-Propagation neural network prediction model, using the Levenberg-Marquardt algorithm based on optimization theory, is put forward. The prediction model of the nonlinear system is built and can effectively predict the experiments on microwave calcination of ammonium uranyl carbonate (AUC). AUC can accept microwave energy, and microwave heating can quickly decompose AUC. In the microwave calcination experiments, the contents of U and U⁴⁺ increased with increasing microwave power and irradiation time, and decreased with increasing average material depth. - Abstract: The incremental improved Back-Propagation (BP) neural network prediction model was put forward. It is useful in overcoming the problems that arise in the calcination of ammonium uranyl carbonate (AUC) by microwave heating, such as long testing cycles, large testing quantities, difficulty in optimizing the process parameters, training data that arrive in incremental batches, and system memory limitations that can make holding all the training data infeasible. The prediction model of the nonlinear system was built and can effectively predict the microwave calcination of AUC. The predicted results indicated that the contents of U and U⁴⁺ increased with increasing microwave power and irradiation time, and decreased with increasing average material depth.
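The incremental-batch idea, refining an existing network with each newly arriving batch instead of retraining from scratch, can be sketched as below. This is a plain gradient-descent sketch with made-up data and layer sizes; the paper's model uses Levenberg-Marquardt updates, which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny one-hidden-layer network; weights persist across incremental batches.
W1, b1 = rng.normal(size=(3, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

def train_on_batch(X, y, lr=0.01, epochs=200):
    """Update the current weights with one incremental batch of data."""
    global W1, b1, W2, b2
    for _ in range(epochs):
        h, pred = forward(X)
        err = pred - y                      # (n, 1) residuals
        gW2 = h.T @ err / len(X)
        gb2 = err.mean(axis=0)
        dh = (err @ W2.T) * (1 - h ** 2)    # backprop through tanh
        gW1 = X.T @ dh / len(X)
        gb1 = dh.mean(axis=0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1

# Data arrives in increments (e.g. new calcination experiments); each batch
# refines the existing model without retraining on all previous data.
for batch in range(5):
    X = rng.uniform(0, 1, size=(20, 3))                       # power, time, depth
    y = 0.5 * X[:, :1] + 0.3 * X[:, 1:2] - 0.2 * X[:, 2:3]    # toy target
    train_on_batch(X, y)
```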

  14. Prediction model of ammonium uranyl carbonate calcination by microwave heating using incremental improved Back-Propagation neural network

    Li Yingwei; Peng Jinhui; Liu Bingguo; Li Wei; Huang Daifu; Zhang Libo

    2011-01-01

    Research highlights: → The incremental improved Back-Propagation neural network prediction model, using the Levenberg-Marquardt algorithm based on optimization theory, is put forward. → The prediction model of the nonlinear system is built and can effectively predict the experiments on microwave calcination of ammonium uranyl carbonate (AUC). → AUC can accept microwave energy, and microwave heating can quickly decompose AUC. → In the microwave calcination experiments, the contents of U and U⁴⁺ increased with increasing microwave power and irradiation time, and decreased with increasing average material depth. - Abstract: The incremental improved Back-Propagation (BP) neural network prediction model was put forward. It is useful in overcoming the problems that arise in the calcination of ammonium uranyl carbonate (AUC) by microwave heating, such as long testing cycles, large testing quantities, difficulty in optimizing the process parameters, training data that arrive in incremental batches, and system memory limitations that can make holding all the training data infeasible. The prediction model of the nonlinear system was built and can effectively predict the microwave calcination of AUC. The predicted results indicated that the contents of U and U⁴⁺ increased with increasing microwave power and irradiation time, and decreased with increasing average material depth.

  15. Algorithms that Defy the Gravity of Learning Curve

    2017-04-28

    yield the best performing 1NN ensembles. There is no magic to the gravity-defiant algorithms such as aNNE and iNNE which manifest that small data...isolation using nearest neighbour ensemble. Proceedings of the 2014 IEEE international conference on data mining, workshop on incremental

  16. Sample-based Attribute Selective AnDE for Large Data

    Chen, Shenglei; Martinez, Ana; Webb, Geoffrey

    2017-01-01

    More and more applications have come with large data sets in the past decade. However, existing algorithms cannot guarantee to scale well on large data. Averaged n-Dependence Estimators (AnDE) allows for flexible learning from out-of-core data, by varying the value of n (number of super parents). Henc...

  17. Parallel algorithms

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  18. Algorithm 865

    Gustavson, Fred G.; Reid, John K.; Wasniewski, Jerzy

    2007-01-01

    We present subroutines for the Cholesky factorization of a positive-definite symmetric matrix and for solving corresponding sets of linear equations. They exploit cache memory by using the block hybrid format proposed by the authors in a companion article. The matrix is packed into n(n + 1)/2 real variables, and the speed is usually better than that of the LAPACK algorithm that uses full storage (n² variables). Included are subroutines for rearranging a matrix whose upper or lower-triangular part is packed by columns to this format and for the inverse rearrangement. Also included is a kernel...

  19. Survey of sampling-based methods for uncertainty and sensitivity analysis

    Helton, J.C.; Johnson, J.D.; Sallaberry, C.J.; Storlie, C.B.

    2006-01-01

    Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (i) definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (ii) generation of samples from uncertain analysis inputs, (iii) propagation of sampled inputs through an analysis, (iv) presentation of uncertainty analysis results, and (v) determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two-dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top down coefficient of concordance, and variance decomposition
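A minimal sketch of steps (ii), (iii) and (v) above, assuming scipy is available: generate a Latin hypercube sample, push it through an illustrative model, and compute rank (Spearman) correlations as a simple sensitivity measure. The model function, input names and bounds are made up.

```python
import numpy as np
from scipy.stats import qmc, spearmanr

rng = np.random.default_rng(2)

# (ii) generate a Latin hypercube sample over two uncertain inputs
sampler = qmc.LatinHypercube(d=2, seed=2)
unit = sampler.random(n=200)
X = qmc.scale(unit, l_bounds=[0.5, 10.0], u_bounds=[1.5, 30.0])

# (iii) propagate the sample through the analysis (a made-up model here)
y = X[:, 0] ** 2 + 0.1 * X[:, 1] + rng.normal(0, 0.05, size=len(X))

# (v) sensitivity via rank transformation: Spearman correlation per input
for i, name in enumerate(["input_1", "input_2"]):
    rho, _ = spearmanr(X[:, i], y)
    print(f"{name}: rank correlation with output = {rho:.2f}")
```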

  20. Survey of sampling-based methods for uncertainty and sensitivity analysis.

    Johnson, Jay Dean; Helton, Jon Craig; Sallaberry, Cedric J.; Storlie, Curt B. (Colorado State University, Fort Collins, CO)

    2006-06-01

    Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (1) Definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (2) Generation of samples from uncertain analysis inputs, (3) Propagation of sampled inputs through an analysis, (4) Presentation of uncertainty analysis results, and (5) Determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top down coefficient of concordance, and variance decomposition.

  1. Incremental Construction of Generalized Voronoi Diagrams on Pointerless Quadtrees

    Quanjun Yin

    2014-01-01

    Full Text Available In robotics, Generalized Voronoi Diagrams (GVDs) are widely used by mobile robots to represent the spatial topologies of their surrounding area. In this paper we consider the problem of constructing GVDs on discrete environments. Several algorithms that solve this problem exist in the literature, notably the Brushfire algorithm and its improved versions, which possess a local repair mechanism. However, when the area to be processed is very large or is of high resolution, the size of the metric matrices used by these algorithms to compute GVDs can be prohibitive. To address this issue, we propose an improvement on the current algorithms, using pointerless quadtrees in place of metric matrices to compute and maintain GVDs. Beyond the construction and reconstruction of a GVD, our algorithm further provides a method to approximate roadmaps in multiple granularities from the quadtree based GVD. Simulation tests in representative scenarios demonstrate that, compared with the current algorithms, our algorithm generally makes an order of magnitude improvement regarding memory cost when the area is larger than 2¹⁰×2¹⁰. We also demonstrate the usefulness of the approximated roadmaps for coarse-to-fine pathfinding tasks.
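For orientation, the sketch below shows the metric-matrix style baseline the paper improves on: a Brushfire-like wavefront expansion over a dense occupancy grid, marking free cells where wavefronts from different obstacles meet. The quadtree representation and local repair mechanisms are not reproduced; the obstacle ids and grid layout are made up.

```python
from collections import deque

def brushfire_gvd(grid):
    """Brushfire-style wavefront propagation on an occupancy grid.
    grid[r][c] == 0 marks free space; any positive integer is an obstacle id.
    Returns free cells where wavefronts from two different obstacles meet,
    i.e. an approximation of the Generalized Voronoi Diagram."""
    rows, cols = len(grid), len(grid[0])
    dist, owner, queue = {}, {}, deque()
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != 0:                     # seed from every obstacle cell
                dist[(r, c)], owner[(r, c)] = 0, grid[r][c]
                queue.append((r, c))
    gvd = set()
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if not (0 <= nr < rows and 0 <= nc < cols) or grid[nr][nc] != 0:
                continue
            if (nr, nc) not in dist:                # first wavefront to arrive
                dist[(nr, nc)] = dist[(r, c)] + 1
                owner[(nr, nc)] = owner[(r, c)]
                queue.append((nr, nc))
            elif owner[(nr, nc)] != owner[(r, c)] and \
                    abs(dist[(nr, nc)] - dist[(r, c)] - 1) <= 1:
                gvd.add((nr, nc))                   # two wavefronts meet here
    return gvd

# A 20x20 room with two rectangular obstacles labelled 1 and 2
grid = [[0] * 20 for _ in range(20)]
for r in range(4, 8):
    for c in range(4, 8):
        grid[r][c] = 1
for r in range(12, 16):
    for c in range(10, 14):
        grid[r][c] = 2
print(sorted(brushfire_gvd(grid))[:5])
```

The dense `dist` and `owner` maps are exactly the metric matrices whose memory footprint motivates the quadtree-based approach described above.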

  2. Evaluation of incremental reactivity and its uncertainty in Southern California.

    Martien, Philip T; Harley, Robert A; Milford, Jana B; Russell, Armistead G

    2003-04-15

    The incremental reactivity (IR) and relative incremental reactivity (RIR) of carbon monoxide and 30 individual volatile organic compounds (VOC) were estimated for the South Coast Air Basin using two photochemical air quality models: a 3-D, grid-based model and a vertically resolved trajectory model. Both models include an extended version of the SAPRC99 chemical mechanism. For the 3-D modeling, the decoupled direct method (DDM-3D) was used to assess reactivities. The trajectory model was applied to estimate uncertainties in reactivities due to uncertainties in chemical rate parameters, deposition parameters, and emission rates using Monte Carlo analysis with Latin hypercube sampling. For most VOC, RIRs were found to be consistent in rankings with those produced by Carter using a box model. However, 3-D simulations show that coastal regions, upwind of most of the emissions, have comparatively low IR but higher RIR than predicted by box models for C4-C5 alkenes and carbonyls that initiate the production of HOx radicals. Biogenic VOC emissions were found to have a lower RIR than predicted by box model estimates, because emissions of these VOC were mostly downwind of the areas of primary ozone production. Uncertainties in RIR of individual VOC were found to be dominated by uncertainties in the rate parameters of their primary oxidation reactions. The coefficient of variation (COV) of most RIR values ranged from 20% to 30%, whereas the COV of absolute incremental reactivity ranged from about 30% to 40%. In general, uncertainty and variability both decreased when relative rather than absolute reactivity metrics were used.

  3. Product Quality Modelling Based on Incremental Support Vector Machine

    Wang, J; Zhang, W; Qin, B; Shi, W

    2012-01-01

    The incremental support vector machine (ISVM) is a learning method developed in recent years on the foundations of statistical learning theory. It is suitable for sequentially arriving field data and has been widely used for product quality prediction and production process optimization. However, traditional ISVM learning does not consider the quality of the incremental data, which may contain noise and redundant samples; this affects the learning speed and accuracy to a great extent. In order to improve SVM training speed and accuracy, a modified incremental support vector machine (MISVM) is proposed in this paper. Firstly, the margin vectors are extracted according to the Karush-Kuhn-Tucker (KKT) condition; then the distance from the margin vectors to the final decision hyperplane is calculated to evaluate their importance, and margin vectors whose distance exceeds a specified value are removed; finally, the original support vectors and the remaining margin vectors are used to update the SVM. The proposed MISVM can not only eliminate unimportant samples such as noise samples, but also preserve the important ones. The MISVM has been tested on two public data sets and one field data set of zinc coating weight in strip hot-dip galvanizing, and the results show that the proposed method can improve the prediction accuracy and the training speed effectively. Furthermore, it can provide the necessary decision support and analysis tools for automatic control of product quality, and it can also be extended to other process industries, such as chemical and manufacturing processes.
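A rough sketch of the pruning idea, assuming scikit-learn: after each increment, keep the current support vectors plus only those new samples that lie close to the decision hyperplane, and retrain on that reduced set. The distance threshold and the data are invented, and the signed decision value stands in for the geometric margin distance.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)

def incremental_update(model, X_kept, y_kept, X_new, y_new, max_dist=1.5):
    """Retrain an SVM on the previous support vectors plus the new samples that
    lie within `max_dist` of the current hyperplane (the 'important' ones)."""
    if model is not None:
        # decision_function gives a signed score, used here as an importance proxy.
        keep = np.abs(model.decision_function(X_new)) <= max_dist
        X_train = np.vstack([X_kept, X_new[keep]])
        y_train = np.concatenate([y_kept, y_new[keep]])
    else:
        X_train, y_train = X_new, y_new
    model = SVC(kernel="linear").fit(X_train, y_train)
    sv = model.support_                      # indices of the support vectors
    return model, X_train[sv], y_train[sv]

model, X_kept, y_kept = None, None, None
for _ in range(4):                           # field data arrives in increments
    X_new = rng.normal(size=(100, 2))
    y_new = (X_new[:, 0] + X_new[:, 1] > 0).astype(int)
    model, X_kept, y_kept = incremental_update(model, X_kept, y_kept, X_new, y_new)
```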

  4. Incremental Innovation and Competitive Pressure in the Presence of Discrete Innovation

    Ghosh, Arghya; Kato, Takao; Morita, Hodaka

    2017-01-01

    Technical progress consists of improvements made upon the existing technology (incremental innovation) and innovative activities aiming at entirely new technology (discrete innovation). Incremental innovation is often of limited relevance to the new technology invented by successful discrete...

  5. Complex Incremental Product Innovation in Established Service Firms: A Micro Institutional Perspective

    P.A.M. Vermeulen (Patrick); F.A.J. van den Bosch (Frans); H.W. Volberda (Henk)

    2006-01-01

    Many product innovation studies have described key determinants that should lead to successful incremental product innovation. Despite numerous studies suggesting how incremental product innovation should be successfully undertaken, many firms still struggle with this type of innovation.

  6. Complex Incremental Product Innovation in Established Service Firms: A Micro Institutional Perspective

    P.A.M. Vermeulen (Patrick); F.A.J. van den Bosch (Frans); H.W. Volberda (Henk)

    2007-01-01

    Many product innovation studies have described key determinants that should lead to successful incremental product innovation. Despite numerous studies suggesting how incremental product innovation should be successfully undertaken, many firms still struggle with this type of innovation.

  7. An Approach to Incremental Design of Distributed Embedded Systems

    Pop, Paul; Eles, Petru; Pop, Traian

    2001-01-01

    In this paper we present an approach to incremental design of distributed embedded systems for hard real-time applications. We start from an already existing system running a set of applications, and the design problem is to implement new functionality on this system. Thus, we propose mapping strategies of functionality so that the already running functionality is not disturbed and there is a good chance that, later, new functionality can easily be mapped on the resulting system. The mapping and scheduling for hard real-time embedded systems are considered in the context of a realistic communication...

  8. From incremental to fundamental substitution in chemical alternatives assessment

    Fantke, Peter; Weber, Roland; Scheringer, Martin

    2015-01-01

    to similarity in chemical structures and, hence, similar hazard profiles between phase-out and substitute chemicals, leading to an incremental rather than a fundamental substitution. A hampered phase-out process, the lack of implementing Green Chemistry principles in chemicals design, and lack of Sustainable...... an integrated approach of all stakeholders involved toward more fundamental and function-based substitution by greener and more sustainable alternatives. Our recommendations finally constitute a starting point for identifying further research needs and for improving current alternatives assessment practice....

  9. Automating the Incremental Evolution of Controllers for Physical Robots

    Faina, Andres; Jacobsen, Lars Toft; Risi, Sebastian

    2017-01-01

    the evolution of digital objects.…" The work presented here investigates how fully autonomous evolution of robot controllers can be realized in hardware, using an industrial robot and a marker-based computer vision system. In particular, this article presents an approach to automate the reconfiguration of the test environment and shows that it is possible, for the first time, to incrementally evolve a neural robot controller for different obstacle avoidance tasks with no human intervention. Importantly, the system offers a high level of robustness and precision that could potentially open up the range...

  10. Transferring the Incremental Capacity Analysis to Lithium-Sulfur Batteries

    Knap, Vaclav; Kalogiannis, Theodoros; Purkayastha, Rajlakshmi

    2017-01-01

    In order to investigate the battery degradation and to estimate their health, various techniques can be applied. One of them, which is widely used for Lithium-ion batteries, is the incremental capacity analysis (ICA). In this work, we apply the ICA to Lithium-Sulfur batteries, which differ in many aspects from Lithium-ion batteries and possess unique behavior. One of the challenges of applying the ICA to Lithium-Sulfur batteries is the representation of the IC curves, as their voltage profiles are often non-monotonic, resulting in more complex IC curves. The ICA is at first applied to charge...

  11. Failure mechanisms in single-point incremental forming of metals

    Silva, Maria B.; Nielsen, Peter Søe; Bay, Niels

    2011-01-01

    Recent years saw the development of two different views on how failure develops in single-point incremental forming (SPIF). Today, researchers are split between those claiming that fracture is always preceded by necking and those considering that fracture occurs with suppression of necking. Each...... on formability limits and development of fracture. The unified view reconciles the aforementioned different explanations on the role of necking in fracture and is consistent with the experimental observations that have been reported in the past years. The work is performed on aluminium AA1050-H111 sheets...

  12. Single Point Incremental Forming using a Dummy Sheet

    Skjødt, Martin; Silva, Beatriz; Bay, Niels

    2007-01-01

    A new version of single point incremental forming (SPIF) is presented. This version includes a dummy sheet on top of the work piece, thus forming two sheets instead of one. The dummy sheet, which is in contact with the rotating tool pin, is discarded after forming. The new set-up influences....... The possible influence of friction between the two sheets is furthermore investigated. The results show that the use of a dummy sheet reduces wear of the work piece to almost zero, but also causes a decrease in formability. Bulging of the planar sides of the pyramid is reduced and surface roughness...

  13. Scalability of a Methodology for Generating Technical Trading Rules with GAPs Based on Risk-Return Adjustment and Incremental Training

    de La Cal, E. A.; Fernández, E. M.; Quiroga, R.; Villar, J. R.; Sedano, J.

    In previous works a methodology was defined, based on the design of a GAP genetic algorithm and an incremental training technique adapted to learning series of stock market values. The GAP technique consists of a fusion of GP and GA. The GAP algorithm implements an automatic search for crisp trading rules, taking as training objectives both the optimization of the obtained return and the minimization of the assumed risk. Applying the proposed methodology, rules have been obtained for an eight-year period of the S&P500 index. The achieved adjustment of the return-risk relation has generated rules whose returns in the testing period are far superior to those obtained with the usual methodologies, and even clearly superior to Buy&Hold. This work proves that the proposed methodology is valid for different assets in a different market than in previous work.

  14. Stream Kriging: Incremental and recursive ordinary Kriging over spatiotemporal data streams

    Zhong, Xu; Kealy, Allison; Duckham, Matt

    2016-05-01

    Ordinary Kriging is widely used for geospatial interpolation and estimation. Due to the O(n³) time complexity of solving the system of linear equations, ordinary Kriging for a large set of source points is computationally intensive. Conducting real-time Kriging interpolation over continuously varying spatiotemporal data streams can therefore be especially challenging. This paper develops and tests two new strategies for improving the performance of an ordinary Kriging interpolator adapted to a stream-processing environment. These strategies rely on the expectation that, over time, source data points will frequently refer to the same spatial locations (for example, where static sensor nodes are generating repeated observations of a dynamic field). First, an incremental strategy improves efficiency in cases where a relatively small proportion of previously processed spatial locations are absent from the source points at any given iteration. Second, a recursive strategy improves efficiency in cases where there is substantial set overlap between the sets of spatial locations of source points at the current and previous iterations. These two strategies are evaluated in terms of their computational efficiency in comparison to ordinary Kriging algorithm. The results show that these two strategies can reduce the time taken to perform the interpolation by up to 90%, and approach average-case time complexity of O(n²) when most but not all source points refer to the same locations over time. By combining the approaches developed in this paper with existing heuristic ordinary Kriging algorithms, the conclusions indicate how further efficiency gains could potentially be accrued. The work ultimately contributes to the development of online ordinary Kriging interpolation algorithms, capable of real-time spatial interpolation with large streaming data sets.
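The benefit of repeated source locations can be sketched as follows: when the sensor positions do not change, the ordinary Kriging left-hand side can be factorized once and reused, so each new batch of observations only costs a matrix-vector product. The exponential covariance, its parameters and the toy readings below are assumptions, not the paper's setup, and the partial-overlap update of the incremental strategy is not shown.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve
from scipy.spatial.distance import cdist

def ok_lhs(locs, sill=1.0, corr_range=10.0):
    """Ordinary-Kriging system matrix for a fixed set of source locations."""
    n = len(locs)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = sill * np.exp(-cdist(locs, locs) / corr_range)  # exponential covariance
    A[:n, n] = A[n, :n] = 1.0                                   # unbiasedness constraint
    return A

def ok_weights(factor, locs, query, sill=1.0, corr_range=10.0):
    n = len(locs)
    rhs = np.ones(n + 1)
    rhs[:n] = sill * np.exp(-cdist(locs, query[None, :]).ravel() / corr_range)
    return lu_solve(factor, rhs)[:n]

rng = np.random.default_rng(4)
locs = rng.uniform(0, 100, size=(50, 2))        # static sensor locations
factor = lu_factor(ok_lhs(locs))                # O(n^3) factorisation, done once
query = np.array([25.0, 40.0])
weights = ok_weights(factor, locs, query)       # reuses the cached factorisation

for t in range(10):                             # streaming sensor readings
    values = np.sin(locs[:, 0] / 10.0 + t)      # toy observations at time t
    estimate = weights @ values                 # O(n) work per time step
```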

  15. Field Programmable Gate Array Based Parallel Strapdown Algorithm Design for Strapdown Inertial Navigation Systems

    Long-Hua Ma

    2011-08-01

    Full Text Available A new generalized optimum strapdown algorithm with coning and sculling compensation is presented, in which the position, velocity and attitude updating operations are carried out based on a single-speed structure: all computations are executed at a single updating rate that is sufficiently high to accurately account for high-frequency angular rate and acceleration rectification effects. Unlike existing algorithms, the updating rates of the coning and sculling compensations are not tied to the number of gyro incremental angle samples and the number of accelerometer incremental velocity samples. When the output sampling rate of the inertial sensors remains constant, this algorithm allows the updating rate of the coning and sculling compensation to be increased, using larger numbers of gyro incremental angle and accelerometer incremental velocity samples, in order to improve the accuracy of the system. Then, in order to implement the new strapdown algorithm in a single FPGA chip, the parallelization of the algorithm is designed and its computational complexity is analyzed. The performance of the proposed parallel strapdown algorithm is tested on the Xilinx ISE 12.3 software platform and the FPGA device XC6VLX550T hardware platform on the basis of some fighter data. It is shown that this parallel strapdown algorithm on the FPGA platform can greatly decrease the execution time of the algorithm to meet the real-time and high-precision requirements of the system in a highly dynamic environment, relative to the existing implementation on the DSP platform.
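For context, the classical two-sample coning compensation that this family of strapdown algorithms generalizes can be written in a few lines: the attitude increment is the sum of the two gyro incremental angles plus a cross-product correction. This is the textbook form with simulated inputs, not the paper's generalized single-speed structure or its FPGA parallelization.

```python
import numpy as np

def two_sample_coning(dtheta1, dtheta2):
    """Rotation vector for one attitude update from two gyro incremental-angle
    samples; the cross-product term is the classical coning correction."""
    return dtheta1 + dtheta2 + (2.0 / 3.0) * np.cross(dtheta1, dtheta2)

# Classical coning motion, sampled twice per 10 ms attitude update interval.
dt = 0.005
omega = lambda t: np.array([0.01 * np.cos(100.0 * t), 0.01 * np.sin(100.0 * t), 0.0])
dtheta1 = omega(0.000) * dt     # approximates the integral of omega over [0, 5) ms
dtheta2 = omega(0.005) * dt     # approximates the integral over [5, 10) ms
phi = two_sample_coning(dtheta1, dtheta2)
```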

  16. Switch-mode High Voltage Drivers for Dielectric Electro Active Polymer (DEAP) Incremental Actuators

    Thummala, Prasanth

    voltage DC-DC converters for driving the DEAP based incremental actuators. The DEAP incremental actuator technology has the potential to be used in various industries, e.g., automotive, space and medicine. The DEAP incremental actuator consists of three electrically isolated and mechanically connected...

  17. 21 CFR 874.1070 - Short increment sensitivity index (SISI) adapter.

    2010-04-01

    21 CFR 874.1070 (Food and Drugs; Food and Drug Administration, Department of Health and Human Services): Short increment sensitivity index (SISI) adapter. (a) Identification. A short increment sensitivity index (SISI...

  18. Better bounds for incremental frequency allocation in bipartite graphs

    Chrobak, M.; Jeż, Łukasz; Sgall, J.

    2013-01-01

    Vol. 514, 25 November (2013), pp. 75-83 ISSN 0304-3975 R&D Projects: GA AV ČR IAA100190902; GA ČR GBP202/12/G061 Institutional support: RVO:67985840 Keywords: online algorithms * frequency allocation * graph algorithms Subject RIV: BA - General Mathematics Impact factor: 0.516, year: 2013 http://www.sciencedirect.com/science/article/pii/S0304397512004781

  19. Incremental artificial bee colony with local search to economic dispatch problem with ramp rate limits and prohibited operating zones

    Özyön, Serdar; Aydin, Doğan

    2013-01-01

    Highlights: ► The prohibited operating zone economic dispatch problem has been solved by IABC-LS. ► The losses used in the solution of the problem have been computed by the B-loss matrix. ► The IABC-LS method has been applied to three test systems from the literature. ► The values obtained by IABC and IABC-LS are better than the results in the literature. - Abstract: In this study, the prohibited operating zone economic power dispatch problem, which considers ramp rate limits, has been solved by the incremental artificial bee colony algorithm (IABC) and the incremental artificial bee colony algorithm with local search (IABC-LS). The transmission line losses used in the solution of the problem have been computed by the B-loss matrix. The IABC and IABC-LS methods have been applied to three different test systems from the literature, which consist of 6, 15 and 40 generators. The attained optimum solution values have been compared with the optimum results in the literature and discussed.

  20. Incremental Learning of Skill Collections based on Intrinsic Motivation

    Jan Hendrik Metzen

    2013-07-01

    Full Text Available Life-long learning of reusable, versatile skills is a key prerequisite for embodied agents that act in a complex, dynamic environment and are faced with different tasks over their lifetime. We address the question of how an agent can learn useful skills efficiently during a developmental period, i.e., when no task is imposed on him and no external reward signal is provided. Learning of skills in a developmental period needs to be incremental and self-motivated. We propose a new incremental, task-independent skill discovery approach that is suited for continuous domains. Furthermore, the agent learns specific skills based on intrinsic motivation mechanisms that determine on which skills learning is focused at a given point in time. We evaluate the approach in a reinforcement learning setup in two continuous domains with complex dynamics. We show that an intrinsically motivated, skill learning agent outperforms an agent which learns task solutions from scratch. Furthermore, we compare different intrinsic motivation mechanisms and how efficiently they make use of the agent's developmental period.

  1. STS-102 Expedition 2 Increment and Science Briefing

    2001-01-01

    Merri Sanchez, Expedition 2 Increment Manager, John Uri, Increment Scientist, and Lybrease Woodard, Lead Payload Operations Director, give an overview of the upcoming activities and objectives of the Expedition 2's (E2's) mission in this prelaunch press conference. Ms. Sanchez describes the crew rotation of Expedition 1 to E2, the timeline E2 will follow during their stay on the International Space Station (ISS), and the various flights going to the ISS and what each will bring to ISS. Mr. Uri gives details on the on-board experiments that will take place on the ISS in the fields of microgravity research, commercial, earth, life, and space sciences (such as radiation characterization, H-reflex, colloids formation and interaction, protein crystal growth, plant growth, fermentation in microgravity, etc.). He also gives details on the scientific facilities to be used (laboratory racks and equipment such as the human torso facsimile or 'phantom torso'). Ms. Woodard gives an overview of Marshall Flight Center's role in the mission. Computerized simulations show the installation of the Space Station Remote Manipulator System (SSRMS) onto the ISS and the installation of the airlock using SSRMS. Live footage shows the interior of the ISS, including crew living quarters, the Progress Module, and the Destiny Laboratory. The three then answer questions from the press.

  2. Efficient incremental relaying for packet transmission over fading channels

    Fareed, Muhammad Mehboob

    2014-07-01

    In this paper, we propose a novel relaying scheme for packet transmission over fading channels, which improves the spectral efficiency of cooperative diversity systems by utilizing limited feedback from the destination. Our scheme capitalizes on the fact that relaying is only required when direct transmission suffers deep fading. We calculate the packet error rate for the proposed efficient incremental relaying (EIR) scheme with both amplify-and-forward and decode-and-forward relaying. We compare the performance of the EIR scheme with the threshold-based incremental relaying (TIR) scheme. It is shown that the efficiency of the TIR scheme is better for lower values of the threshold. However, for higher values of the threshold, the TIR scheme is outperformed by the EIR scheme. In addition, three new threshold-based adaptive EIR schemes are devised to further improve the efficiency of the EIR scheme. We calculate the packet error rate and the efficiency of these new schemes to provide analytical insight. © 2014 IEEE.
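The feedback idea can be illustrated with a small Monte Carlo sketch: the relay transmits only when the destination reports that the direct packet failed. The Rayleigh-fading outage rule, SNRs and threshold below are invented stand-ins for the paper's packet error rate analysis.

```python
import numpy as np

rng = np.random.default_rng(5)
n_packets, snr, threshold = 100_000, 5.0, 1.0   # all values illustrative

# Rayleigh fading: exponentially distributed instantaneous SNR on each link
direct = snr * rng.exponential(size=n_packets)
relay1 = snr * rng.exponential(size=n_packets)   # source -> relay
relay2 = snr * rng.exponential(size=n_packets)   # relay  -> destination

direct_fail = direct < threshold
# Incremental relaying: the relay only transmits when the destination feeds
# back that the direct packet failed; decode-and-forward with SNR combining.
relayed_ok = (relay1 >= threshold) & (direct + relay2 >= threshold)
per_direct = direct_fail.mean()
per_incremental = (direct_fail & ~relayed_ok).mean()
print(f"PER direct: {per_direct:.3f}, PER incremental relaying: {per_incremental:.3f}")
```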

  3. Incremental learning of skill collections based on intrinsic motivation

    Metzen, Jan H.; Kirchner, Frank

    2013-01-01

    Life-long learning of reusable, versatile skills is a key prerequisite for embodied agents that act in a complex, dynamic environment and are faced with different tasks over their lifetime. We address the question of how an agent can learn useful skills efficiently during a developmental period, i.e., when no task is imposed on him and no external reward signal is provided. Learning of skills in a developmental period needs to be incremental and self-motivated. We propose a new incremental, task-independent skill discovery approach that is suited for continuous domains. Furthermore, the agent learns specific skills based on intrinsic motivation mechanisms that determine on which skills learning is focused at a given point in time. We evaluate the approach in a reinforcement learning setup in two continuous domains with complex dynamics. We show that an intrinsically motivated, skill learning agent outperforms an agent which learns task solutions from scratch. Furthermore, we compare different intrinsic motivation mechanisms and how efficiently they make use of the agent's developmental period. PMID:23898265

  4. Incremental first pass technique to measure left ventricular ejection fraction

    Kocak, R.; Gulliford, P.; Hoggard, C.; Critchley, M.

    1980-01-01

    An incremental first pass technique was devised to assess the acute effects of any drug on left ventricular ejection fraction (LVEF) with or without a physiological stress. In particular, the effects of the vasodilator isosorbide dinitrate on LVEF before and after exercise were studied in 11 patients who had suffered cardiac failure. This was achieved by recording the passage of 99mTc pertechnetate through the heart at each stage of the study using a gamma camera computer system. Consistent results for four consecutive first pass measurements without exercise or drug in normal subjects illustrated the reproducibility of the technique. There was no significant difference between LVEF values obtained at rest and exercise before or after oral isosorbide dinitrate with the exception of one patient with gross mitral regurgitation. The advantages of the incremental first pass technique are that the patient need not be in sinus rhythm, the effects of physiological intervention may be studied and tests may also be repeated at various intervals during long term follow-up of patients. A disadvantage of the method is the limitation in the number of sequential measurements which can be carried out due to the amount of radioactivity injected. (U.K.)

  5. Identifying the Academic Rising Stars via Pairwise Citation Increment Ranking

    Zhang, Chuxu

    2017-08-02

    Predicting the fast-rising young researchers (the Academic Rising Stars) in the future provides useful guidance to the research community, e.g., offering competitive candidates to universities for young faculty hiring as they are expected to have successful academic careers. In this work, given a set of young researchers who have recently published their first first-author paper, we solve the problem of how to effectively predict the top k% researchers who achieve the highest citation increment in Δt years. We explore a series of factors that can drive an author to be fast-rising and design a novel pairwise citation increment ranking (PCIR) method that leverages those factors to predict the academic rising stars. Experimental results on the large ArnetMiner dataset with over 1.7 million authors demonstrate the effectiveness of PCIR. Specifically, it outperforms all given benchmark methods, with over 8% average improvement. Further analysis demonstrates that temporal features are the best indicators for rising stars prediction, while venue features are less relevant.

  6. A Global Sampling Based Image Matting Using Non-Negative Matrix Factorization

    NAVEED ALAM

    2017-10-01

    Full Text Available Image matting is a technique in which a foreground is separated from the background of a given image along with the pixel-wise opacity. This foreground can then be seamlessly composited onto a different background to obtain a novel scene. This paper presents a global non-parametric sampling algorithm over image patches and utilizes a dimension reduction technique known as NMF (Non-Negative Matrix Factorization). Some existing non-parametric approaches use large nearby foreground and background regions to sample patches, but they fail to sample patches from the whole image because of the high memory and computational requirements. The use of NMF in the proposed algorithm allows dimension reduction, which reduces the computational cost and memory requirement. The use of NMF also allows the proposed approach to use the whole foreground and background region of the image, reduces the patch complexity, and helps in efficient patch sampling. The use of patches incorporates not only the pixel colour but also the local image structure. The use of local structure in the image is important for estimating a high-quality alpha matte, especially in images which have regions containing high texture. The proposed algorithm is evaluated on the standard data set and the obtained results are comparable to the state-of-the-art matting techniques.

  7. A global sampling based image matting using non-negative matrix factorization

    Alam, N.; Sarim, M.; Shaikh, A.B.

    2017-01-01

    Image matting is a technique in which a foreground is separated from the background of a given image along with the pixel-wise opacity. This foreground can then be seamlessly composited onto a different background to obtain a novel scene. This paper presents a global non-parametric sampling algorithm over image patches and utilizes a dimension reduction technique known as NMF (Non-Negative Matrix Factorization). Some existing non-parametric approaches use large nearby foreground and background regions to sample patches, but they fail to sample patches from the whole image because of the high memory and computational requirements. The use of NMF in the proposed algorithm allows dimension reduction, which reduces the computational cost and memory requirement. The use of NMF also allows the proposed approach to use the whole foreground and background region of the image, reduces the patch complexity, and helps in efficient patch sampling. The use of patches incorporates not only the pixel colour but also the local image structure. The use of local structure in the image is important for estimating a high-quality alpha matte, especially in images which have regions containing high texture. The proposed algorithm is evaluated on the standard data set and the obtained results are comparable to the state-of-the-art matting techniques. (author)

  8. Efficiency of analytical and sampling-based uncertainty propagation in intensity-modulated proton therapy

    Wahl, N.; Hennig, P.; Wieser, H. P.; Bangert, M.

    2017-07-01

    The sensitivity of intensity-modulated proton therapy (IMPT) treatment plans to uncertainties can be quantified and mitigated with robust/min-max and stochastic/probabilistic treatment analysis and optimization techniques. Those methods usually rely on sparse random, importance, or worst-case sampling. Inevitably, this imposes a trade-off between computational speed and accuracy of the uncertainty propagation. Here, we investigate analytical probabilistic modeling (APM) as an alternative for uncertainty propagation and minimization in IMPT that does not rely on scenario sampling. APM propagates probability distributions over range and setup uncertainties via a Gaussian pencil-beam approximation into moments of the probability distributions over the resulting dose in closed form. It supports arbitrary correlation models and allows for efficient incorporation of fractionation effects regarding random and systematic errors. We evaluate the trade-off between run-time and accuracy of APM uncertainty computations on three patient datasets. Results are compared against reference computations facilitating importance and random sampling. Two approximation techniques to accelerate uncertainty propagation and minimization based on probabilistic treatment plan optimization are presented. Runtimes are measured on CPU and GPU platforms, dosimetric accuracy is quantified in comparison to a sampling-based benchmark (5000 random samples). APM accurately propagates range and setup uncertainties into dose uncertainties at competitive run-times (GPU ≤ 5 min). The resulting standard deviation (expectation value) of dose shows average global γ(3%/3 mm) pass rates between 94.2% and 99.9% (98.4% and 100.0%). All investigated importance sampling strategies provided less accuracy at higher run-times considering only a single fraction. Considering fractionation, APM uncertainty propagation and treatment plan optimization was proven to be possible at constant time complexity

  9. A multi-sample based method for identifying common CNVs in normal human genomic structure using high-resolution aCGH data.

    Chihyun Park

    Full Text Available BACKGROUND: It is difficult to identify copy number variations (CNV) in normal human genomic data due to noise and non-linear relationships between different genomic regions and signal intensity. A high-resolution array comparative genomic hybridization (aCGH) containing 42 million probes, which is very large compared to previous arrays, was recently published. Most existing CNV detection algorithms do not work well because of noise associated with the large amount of input data and because most of the current methods were not designed to analyze normal human samples. Normal human genome analysis often requires a joint approach across multiple samples. However, the majority of existing methods can only identify CNVs from a single sample. METHODOLOGY AND PRINCIPAL FINDINGS: We developed a multi-sample-based genomic variations detector (MGVD) that uses segmentation to identify common breakpoints across multiple samples and a k-means-based clustering strategy. Unlike previous methods, MGVD simultaneously considers multiple samples with different genomic intensities and identifies CNVs and CNV zones (CNVZs); a CNVZ is a more precise measure of the location of a genomic variant than the CNV region (CNVR). CONCLUSIONS AND SIGNIFICANCE: We designed a specialized algorithm to detect common CNVs from extremely high-resolution multi-sample aCGH data. MGVD showed high sensitivity and a low false discovery rate for a simulated data set, and outperformed most current methods when real, high-resolution HapMap datasets were analyzed. MGVD also had the fastest runtime compared to the other algorithms evaluated when actual, high-resolution aCGH data were analyzed. The CNVZs identified by MGVD can be used in association studies for revealing relationships between phenotypes and genomic aberrations. Our algorithm was developed with standard C++ and is available in Linux and MS Windows format in the STL library. It is freely available at: http://embio.yonsei.ac.kr/~Park/mgvd.php.

  10. Sampling based uncertainty analysis of 10% hot leg break LOCA in large scale test facility

    Sengupta, Samiran; Kraina, V.; Dubey, S. K.; Rao, R. S.; Gupta, S. K.

    2010-01-01

    Sampling based uncertainty analysis was carried out to quantify uncertainty in predictions of the best estimate code RELAP5/MOD3.2 for a thermal hydraulic test (10% hot leg break LOCA) performed in the Large Scale Test Facility (LSTF) as a part of an IAEA coordinated research project. The nodalisation of the test facility was qualified at both steady state and transient levels by systematically applying the procedures of the uncertainty methodology based on accuracy extrapolation (UMAE); uncertainty analysis was carried out using the Latin hypercube sampling (LHS) method to evaluate uncertainty for ten input parameters. Sixteen output parameters were selected for uncertainty evaluation, and the uncertainty band between the 5th and 95th percentiles of the output parameters was evaluated. It was observed that the uncertainty band for the primary pressure during the two-phase blowdown is larger than that of the remaining period. Similarly, a larger uncertainty band is observed for the accumulator injection flow during the reflood phase. Importance analysis was also carried out and standard rank regression coefficients were computed to quantify the effect of each individual input parameter on the output parameters. It was observed that the break discharge coefficient is the most important uncertain parameter for the prediction of all the primary side parameters and that the steam generator (SG) relief pressure setting is the most important parameter in predicting the SG secondary pressure.

  11. A sampling-based Bayesian model for gas saturation estimationusing seismic AVA and marine CSEM data

    Chen, Jinsong; Hoversten, Michael; Vasco, Don; Rubin, Yoram; Hou,Zhangshuan

    2006-04-04

    We develop a sampling-based Bayesian model to jointly invert seismic amplitude versus angles (AVA) and marine controlled-source electromagnetic (CSEM) data for layered reservoir models. The porosity and fluid saturation in each layer of the reservoir, the seismic P- and S-wave velocity and density in the layers below and above the reservoir, and the electrical conductivity of the overburden are considered as random variables. Pre-stack seismic AVA data in a selected time window and real and quadrature components of the recorded electrical field are considered as data. We use Markov chain Monte Carlo (MCMC) sampling methods to obtain a large number of samples from the joint posterior distribution function. Using those samples, we obtain not only estimates of each unknown variable, but also its uncertainty information. The developed method is applied to both synthetic and field data to explore the combined use of seismic AVA and EM data for gas saturation estimation. Results show that the developed method is effective for joint inversion, and the incorporation of CSEM data reduces uncertainty in fluid saturation estimation, when compared to results from inversion of AVA data only.
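A minimal Metropolis random-walk sampler conveys the MCMC step described above; the toy log-posterior below stands in for the coupled AVA/CSEM forward models and priors, and all parameter names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
observed = 0.35                          # a made-up measured reservoir attribute

def log_posterior(theta):
    """Toy log-posterior: flat prior on [0, 1]^2 plus a Gaussian data misfit.
    In the paper this would couple the seismic AVA and CSEM forward models."""
    porosity, saturation = theta
    if not (0 <= porosity <= 1 and 0 <= saturation <= 1):
        return -np.inf
    predicted = porosity * (1 - 0.5 * saturation)        # made-up forward model
    return -0.5 * ((predicted - observed) / 0.02) ** 2

theta = np.array([0.3, 0.5])
samples = []
for _ in range(20_000):                        # Metropolis random walk
    proposal = theta + rng.normal(0, 0.02, size=2)
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(theta):
        theta = proposal
    samples.append(theta.copy())
samples = np.array(samples)[5_000:]            # discard burn-in
print("posterior mean of the saturation parameter:", samples[:, 1].mean())
```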

  12. Aleatoric and epistemic uncertainties in sampling based nuclear data uncertainty and sensitivity analyses

    Zwermann, W.; Krzykacz-Hausmann, B.; Gallner, L.; Klein, M.; Pautz, A.; Velkov, K.

    2012-01-01

    Sampling based uncertainty and sensitivity analyses due to epistemic input uncertainties, i.e. to an incomplete knowledge of uncertain input parameters, can be performed with arbitrary application programs to solve the physical problem under consideration. For the description of steady-state particle transport, direct simulations of the microscopic processes with Monte Carlo codes are often used. This introduces an additional source of uncertainty, the aleatoric sampling uncertainty, which is due to the randomness of the simulation process performed by sampling, and which adds to the total combined output sampling uncertainty. So far, this aleatoric part of uncertainty is minimized by running a sufficiently large number of Monte Carlo histories for each sample calculation, thus making its impact negligible as compared to the impact from sampling the epistemic uncertainties. Obviously, this process may cause high computational costs. The present paper shows that in many applications reliable epistemic uncertainty results can also be obtained with substantially lower computational effort by performing and analyzing two appropriately generated series of samples with a much smaller number of Monte Carlo histories each. The method is applied along with the nuclear data uncertainty and sensitivity code package XSUSA in combination with the Monte Carlo transport code KENO-Va to various critical assemblies and a full scale reactor calculation. It is shown that the proposed method yields output uncertainties and sensitivities equivalent to the traditional approach, with a large reduction of computing time, by factors on the order of 100. (authors)

  13. Compressive Sampling based Image Coding for Resource-deficient Visual Communication.

    Liu, Xianming; Zhai, Deming; Zhou, Jiantao; Zhang, Xinfeng; Zhao, Debin; Gao, Wen

    2016-04-14

    In this paper, a new compressive sampling based image coding scheme is developed to achieve competitive coding efficiency at lower encoder computational complexity, while supporting error resilience. This technique is particularly suitable for visual communication with resource-deficient devices. At the encoder, a compact image representation is produced, which is a polyphase down-sampled version of the input image; but the conventional low-pass filter prior to down-sampling is replaced by a local random binary convolution kernel. The pixels of the resulting down-sampled pre-filtered image are local random measurements and placed in the original spatial configuration. The advantages of local random measurements are twofold: 1) they preserve high-frequency image features that are otherwise discarded by low-pass filtering; 2) the result remains a conventional image and can therefore be coded by any standardized codec to remove statistical redundancy of larger scales. Moreover, measurements generated by different kernels can be considered as multiple descriptions of the original image and therefore the proposed scheme has the advantage of multiple description coding. At the decoder, a unified sparsity-based soft-decoding technique is developed to recover the original image from received measurements in a framework of compressive sensing. Experimental results demonstrate that the proposed scheme is competitive compared with existing methods, with a unique strength of recovering fine details and sharp edges at low bit-rates.
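The encoder-side measurement step can be sketched as below, assuming a grayscale numpy image: convolve with a local random binary kernel (in place of the usual anti-alias low-pass filter) and take a polyphase down-sampled version; different kernels or phases give different descriptions. The kernel size and down-sampling factor are arbitrary choices for illustration, not the paper's settings.

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(7)

def compressive_measurements(image, kernel_size=4, factor=2, phase=(0, 0)):
    """Filter with a local random binary kernel instead of an anti-alias low-pass
    filter, then polyphase down-sample. The output is still an ordinary image."""
    kernel = rng.integers(0, 2, size=(kernel_size, kernel_size)).astype(float)
    kernel /= max(kernel.sum(), 1.0)            # normalise to keep the dynamic range
    filtered = convolve2d(image, kernel, mode="same", boundary="symm")
    return filtered[phase[0]::factor, phase[1]::factor]

image = rng.uniform(0.0, 255.0, size=(64, 64))      # stand-in for a real image
description_a = compressive_measurements(image, phase=(0, 0))
description_b = compressive_measurements(image, phase=(1, 1))   # a second description
```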

  14. Status of XSUSA for sampling based nuclear data uncertainty and sensitivity analysis

    Zwermann, W.; Gallner, L.; Klein, M.; Krzykacz-Hausmann; Pasichnyk, I.; Pautz, A.; Velkov, K.

    2013-01-01

    In the present contribution, an overview of the sampling based XSUSA method for sensitivity and uncertainty analysis with respect to nuclear data is given. The focus is on recent developments and applications of XSUSA. These applications include calculations for critical assemblies, fuel assembly depletion calculations, and steady state as well as transient reactor core calculations. The analyses are partially performed in the framework of international benchmark working groups (UACSA - Uncertainty Analyses for Criticality Safety Assessment, UAM - Uncertainty Analysis in Modelling). It is demonstrated that particularly for full-scale reactor calculations the influence of the nuclear data uncertainties on the results can be substantial. For instance, for the radial fission rate distributions of mixed UO₂/MOX light water reactor cores, the 2σ uncertainties in the core centre and periphery can reach values exceeding 10%. For a fast transient, the resulting time behaviour of the reactor power was covered by a wide uncertainty band. Overall, the results confirm the necessity of adding systematic uncertainty analyses to best-estimate reactor calculations. (authors)

  15. On incomplete sampling under birth-death models and connections to the sampling-based coalescent.

    Stadler, Tanja

    2009-11-07

    The constant rate birth-death process is used as a stochastic model for many biological systems, for example phylogenies or disease transmission. As the biological data are usually not fully available, it is crucial to understand the effect of incomplete sampling. In this paper, we analyze the constant rate birth-death process with incomplete sampling. We derive the density of the bifurcation events for trees on n leaves which evolved under this birth-death-sampling process. This density is used for calculating prior distributions in Bayesian inference programs and for efficiently simulating trees. We show that the birth-death-sampling process can be interpreted as a birth-death process with reduced rates and complete sampling. This shows that joint inference of birth rate, death rate and sampling probability is not possible. The birth-death-sampling process is compared to the sampling-based population genetics model, the coalescent. It is shown that despite many similarities between these two models, the distribution of bifurcation times remains different even in the case of very large population sizes. We illustrate these findings on a Hepatitis C virus dataset from Egypt. We show that the transmission time estimates are significantly different: the widely used Gamma statistic even changes its sign from negative to positive when switching from the coalescent to the birth-death process.

  16. Incremental Volumetric Remapping Method: Analysis and Error Evaluation

    Baptista, A. J.; Oliveira, M. C.; Rodrigues, D. M.; Menezes, L. F.; Alves, J. L.

    2007-01-01

    In this paper the error associated with the remapping problem is analyzed. A range of numerical results that assess the performance of three different remapping strategies, applied to FE meshes that are typically used in sheet metal forming simulation, are evaluated. One of the selected strategies is the previously presented Incremental Volumetric Remapping method (IVR), which was implemented in the in-house code DD3TRIM. The IVR method is founded on the premise that state variables at all points associated with a Gauss volume of a given element are equal to the state variable quantities at the corresponding Gauss point. Hence, given a typical remapping procedure between a donor and a target mesh, the variables to be associated with a target Gauss volume (and point) are determined by a weighted average. The weight function is the Gauss volume percentage of each donor element that is located inside the target Gauss volume. The calculation of the intersecting volumes between the donor and target Gauss volumes is performed incrementally, for each target Gauss volume, by means of a discrete approach. The other two remapping strategies selected are based on the interpolation/extrapolation of variables using finite element shape functions or moving least squares interpolants. The performance of the three different remapping strategies is assessed with two tests. The first remapping test was taken from the literature. The test consists in remapping successively a rotating symmetrical mesh, throughout N increments, in an angular span of 90 deg. The second remapping error evaluation test consists of remapping an irregular element shape target mesh from a given regular element shape donor mesh and then proceeding with the inverse operation. In this second test the computational effort is also measured. The results showed that the error level associated with IVR can be very low, with a stable evolution over the number of remapping procedures, when compared with the
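The premise can be illustrated in one dimension, where intervals stand in for Gauss volumes: each target cell receives the overlap-weighted average of the donor-cell values. This is only a 1-D sketch of the weighting idea, not the incremental 3-D intersection computation used by IVR.

```python
def remap_weighted_average(donor_edges, donor_values, target_edges):
    """Transfer cell-averaged values from a donor 1-D mesh to a target 1-D mesh
    by weighting each donor value with its overlap length (volume in 3-D)."""
    remapped = []
    for t0, t1 in zip(target_edges[:-1], target_edges[1:]):
        num = den = 0.0
        for (d0, d1), value in zip(zip(donor_edges[:-1], donor_edges[1:]),
                                   donor_values):
            overlap = max(0.0, min(t1, d1) - max(t0, d0))
            num += overlap * value
            den += overlap
        remapped.append(num / den if den > 0 else 0.0)
    return remapped

# Donor mesh with 4 equal cells, target mesh with 3 unequal cells
donor_edges = [0.0, 1.0, 2.0, 3.0, 4.0]
donor_values = [10.0, 12.0, 11.0, 9.0]      # e.g. a state variable per donor cell
target_edges = [0.0, 1.5, 2.5, 4.0]
print(remap_weighted_average(donor_edges, donor_values, target_edges))
```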

  17. An improved harmony search algorithm for power economic load dispatch

    Santos Coelho, Leandro dos [Pontifical Catholic University of Parana, PUCPR, Industrial and Systems Engineering Graduate Program, PPGEPS, Imaculada Conceicao, 1155, 80215-901 Curitiba, PR (Brazil)], E-mail: leandro.coelho@pucpr.br; Mariani, Viviana Cocco [Pontifical Catholic University of Parana, PUCPR, Department of Mechanical Engineering, PPGEM, Imaculada Conceicao, 1155, 80215-901 Curitiba, PR (Brazil)], E-mail: viviana.mariani@pucpr.br

    2009-10-15

    A meta-heuristic algorithm called harmony search (HS), mimicking the improvisation process of music players, has been recently developed. The HS algorithm has been successful in several optimization problems. The HS algorithm does not require derivative information and uses stochastic random search instead of a gradient search. In addition, the HS algorithm is simple in concept, few in parameters, and easy in implementation. This paper presents an improved harmony search (IHS) algorithm based on exponential distribution for solving economic dispatch problems. A 13-unit test system with incremental fuel cost function taking into account the valve-point loading effects is used to illustrate the effectiveness of the proposed IHS method. Numerical results show that the IHS method has good convergence property. Furthermore, the generation costs of the IHS method are lower than those of the classical HS and other optimization algorithms reported in recent literature.
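A compact sketch of the harmony search loop is given below; the "improved" aspect is imitated by drawing the pitch-adjustment steps from an exponential distribution, since the paper bases its improvement on that distribution. The parameters, the toy cost function and the step scale are all illustrative and unrelated to the 13-unit dispatch test system.

```python
import numpy as np

rng = np.random.default_rng(8)

def improved_harmony_search(cost, dim, lower, upper, hms=20, hmcr=0.9,
                            par=0.3, iterations=2000):
    """Harmony search with exponentially distributed pitch adjustments
    (a stand-in for the distribution-based improvement described above)."""
    memory = rng.uniform(lower, upper, size=(hms, dim))   # harmony memory
    costs = np.array([cost(h) for h in memory])
    for _ in range(iterations):
        new = np.empty(dim)
        for j in range(dim):
            if rng.uniform() < hmcr:                  # pick from harmony memory
                new[j] = memory[rng.integers(hms), j]
                if rng.uniform() < par:               # pitch adjustment
                    step = rng.exponential(0.05 * (upper - lower))
                    new[j] += rng.choice([-1.0, 1.0]) * step
            else:                                     # random re-initialisation
                new[j] = rng.uniform(lower, upper)
        new = np.clip(new, lower, upper)
        c = cost(new)
        worst = np.argmax(costs)
        if c < costs[worst]:                          # replace the worst harmony
            memory[worst], costs[worst] = new, c
    return memory[np.argmin(costs)], costs.min()

# Toy stand-in for a fuel-cost function (not the 13-unit dispatch test system)
best, best_cost = improved_harmony_search(lambda x: np.sum((x - 3.0) ** 2),
                                          dim=5, lower=-10.0, upper=10.0)
```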

  18. An improved harmony search algorithm for power economic load dispatch

    Coelho, Leandro dos Santos [Pontifical Catholic Univ. of Parana, PUCPR, Industrial and Systems Engineering Graduate Program, PPGEPS, Imaculada Conceicao, 1155, 80215-901 Curitiba, PR (Brazil); Mariani, Viviana Cocco [Pontifical Catholic Univ. of Parana, PUCPR, Dept. of Mechanical Engineering, PPGEM, Imaculada Conceicao, 1155, 80215-901 Curitiba, PR (Brazil)

    2009-10-15

    A meta-heuristic algorithm called harmony search (HS), mimicking the improvisation process of music players, has recently been developed. The HS algorithm has been successful in several optimization problems. The HS algorithm does not require derivative information and uses stochastic random search instead of a gradient search. In addition, the HS algorithm is simple in concept, has few parameters, and is easy to implement. This paper presents an improved harmony search (IHS) algorithm based on exponential distribution for solving economic dispatch problems. A 13-unit test system with incremental fuel cost function taking into account the valve-point loading effects is used to illustrate the effectiveness of the proposed IHS method. Numerical results show that the IHS method has good convergence property. Furthermore, the generation costs of the IHS method are lower than those of the classical HS and other optimization algorithms reported in recent literature. (author)

  19. An improved harmony search algorithm for power economic load dispatch

    Santos Coelho, Leandro dos; Mariani, Viviana Cocco

    2009-01-01

    A meta-heuristic algorithm called harmony search (HS), mimicking the improvisation process of music players, has recently been developed. The HS algorithm has been successful in several optimization problems. The HS algorithm does not require derivative information and uses stochastic random search instead of a gradient search. In addition, the HS algorithm is simple in concept, has few parameters, and is easy to implement. This paper presents an improved harmony search (IHS) algorithm based on exponential distribution for solving economic dispatch problems. A 13-unit test system with incremental fuel cost function taking into account the valve-point loading effects is used to illustrate the effectiveness of the proposed IHS method. Numerical results show that the IHS method has good convergence property. Furthermore, the generation costs of the IHS method are lower than those of the classical HS and other optimization algorithms reported in recent literature.

  20. Adaptive local learning in sampling based motion planning for protein folding.

    Ekenna, Chinwe; Thomas, Shawna; Amato, Nancy M

    2016-08-01

    Simulating protein folding motions is an important problem in computational biology. Motion planning algorithms, such as Probabilistic Roadmap Methods, have been successful in modeling the folding landscape. Probabilistic Roadmap Methods and variants contain several phases (i.e., sampling, connection, and path extraction). Most of the time is spent in the connection phase, and selecting which variant to employ is a difficult task. Global machine learning has been applied to the connection phase but is inefficient in situations with varying topology, such as those typical of folding landscapes. We develop a local learning algorithm that exploits the past performance of methods within the neighborhood of the current connection attempts as a basis for learning. It is sensitive not only to different types of landscapes but also to differing regions in the landscape itself, removing the need to explicitly partition the landscape. We perform experiments on 23 proteins of varying secondary structure makeup with 52-114 residues. We compare the success rate of our method with that of other methods. We demonstrate a clear need for learning (i.e., only learning methods were able to validate against all available experimental data) and show that local learning is superior to global learning, producing in many cases significantly higher-quality results than the other methods. We present an algorithm that uses local learning to select appropriate connection methods in the context of roadmap construction for protein folding. Our method removes the burden of deciding which method to use, leverages the strengths of the individual input methods, and is extendable to include other future connection methods.
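    A minimal sketch of the idea of local learning for connection-method selection follows: per-method success statistics are kept for the k nearest previous connection attempts, and a method is drawn in proportion to its local success rate. This illustrates the concept only, with invented class and parameter names; it is not the authors' algorithm.

```python
# Conceptual sketch of local learning for connection-method selection (invented names).
import numpy as np

class LocalMethodSelector:
    def __init__(self, n_methods, k=20):
        self.n_methods = n_methods
        self.k = k
        self.history = []                        # (position, method index, success)

    def choose(self, position):
        """Pick a connection method using success rates of nearby past attempts."""
        position = np.asarray(position, dtype=float)
        if not self.history:
            return np.random.randint(self.n_methods)
        pts = np.array([h[0] for h in self.history])
        nearest = np.argsort(np.linalg.norm(pts - position, axis=1))[: self.k]
        wins = np.ones(self.n_methods)           # Laplace-smoothed counts
        trials = np.full(self.n_methods, 2.0)
        for idx in nearest:
            _, m, ok = self.history[idx]
            trials[m] += 1.0
            wins[m] += 1.0 if ok else 0.0
        p = wins / trials
        return int(np.random.choice(self.n_methods, p=p / p.sum()))

    def record(self, position, method, success):
        self.history.append((np.asarray(position, dtype=float), method, success))

# Usage: selector.choose(q) before a connection attempt, selector.record(q, m, ok) after it.
selector = LocalMethodSelector(n_methods=3)
```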

  1. Rapid Prototyping by Single Point Incremental Forming of Sheet Metal

    Skjødt, Martin

    2008-01-01

    . The process is incremental forming since plastic deformation takes place in a small local zone underneath the forming tool, i.e. the sheet is formed as a summation of the movement of the local plastic zone. The process is slow and therefore only suited for prototypes or small batch production. On the other...... in the plastic zone. Using these it is demonstrated that the growth rate of accumulated damage in SPIF is small compared to conventional sheet forming processes. This combined with an explanation why necking is suppressed is a new theory stating that SPIF is limited by fracture and not necking. The theory...... SPIF. A multi stage strategy is presented which allows forming of a cup with vertical sides in about half of the depth. It is demonstrated that this results in strain paths which are far from straight, but strains are still limited by a straight fracture line in the principal strain space. The multi...

  2. Scalable Prediction of Energy Consumption using Incremental Time Series Clustering

    Simmhan, Yogesh; Noor, Muhammad Usman

    2013-10-09

    Time series datasets are a canonical form of high velocity Big Data, and are often generated by pervasive sensors, such as those found in smart infrastructure. Performing predictive analytics on time series data can be computationally complex, and requires approximation techniques. In this paper, we motivate this problem using a real application from the smart grid domain. We propose an incremental clustering technique, along with a novel affinity score for determining cluster similarity, which help reduce the prediction error for cumulative time series within a cluster. We evaluate this technique, along with optimizations, using real datasets from smart meters, totaling ~700,000 data points, and show the efficacy of our techniques in improving the prediction error of time series data within polynomial time.
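    The following is a minimal sketch of incremental time series clustering in the spirit described above: each arriving series joins the most similar cluster or starts a new one. The simple normalized-correlation affinity and threshold used here are placeholders; the paper's affinity score is different and not reproduced.

```python
# Sketch of incremental clustering of equal-length time series (illustrative affinity).
import numpy as np

def affinity(a, b):
    """Normalized correlation between two equal-length series."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.dot(a, b) / len(a))

def incremental_cluster(series_stream, threshold=0.8):
    centroids, members = [], []
    for s in series_stream:
        s = np.asarray(s, dtype=float)
        if centroids:
            scores = [affinity(s, c) for c in centroids]
            best = int(np.argmax(scores))
            if scores[best] >= threshold:
                members[best].append(s)
                centroids[best] = np.mean(members[best], axis=0)   # update centroid
                continue
        centroids.append(s)                                        # start a new cluster
        members.append([s])
    return centroids, members

# Toy stream of 50 daily load profiles with 24 readings each.
stream = np.random.default_rng(0).random((50, 24))
print(len(incremental_cluster(stream)[0]), "clusters")
```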

  3. Incremental and developmental perspectives for general-purpose learning systems

    Fernando Martínez-Plumed

    2017-02-01

    Full Text Available The stupefying success of Artificial Intelligence (AI) for specific problems, from recommender systems to self-driving cars, has not yet been matched by similar progress in general AI systems capable of coping with a variety of (different) problems. This dissertation deals with the long-standing problem of creating more general AI systems, through the analysis of their development and the evaluation of their cognitive abilities. It presents a declarative general-purpose learning system and a developmental and lifelong approach for knowledge acquisition, consolidation and forgetting. It also analyses the use of more ability-oriented evaluation techniques for AI evaluation and provides further insight for the understanding of the concepts of development and incremental learning in AI systems.

  4. Improving process performance in Incremental Sheet Forming (ISF)

    Ambrogio, G.; Filice, L.; Manco, G. L.

    2011-01-01

    Incremental Sheet Forming (ISF) is a relatively new process in which a sheet clamped along the borders is progressively deformed through a hemispherical tool. The tool motion is CNC controlled and the path is designed using a CAD-CAM approach, with the aim of reproducing the final shape contour, as in surface milling. The absence of a dedicated setup and the related high flexibility are the main strengths of the process and the reason why several researchers have focused their attention on ISF. On the other hand, the slowness of the process is its most relevant drawback and limits wider industrial application. In this paper, a first attempt to overcome this process limitation is presented, considering a significant speed increase with respect to the values currently used.

  5. Algorithmic chemistry

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  6. Fast voxel and polygon ray-tracing algorithms in intensity modulated radiation therapy treatment planning

    Fox, Christopher; Romeijn, H. Edwin; Dempsey, James F.

    2006-01-01

    We present work on combining three algorithms to improve ray-tracing efficiency in radiation therapy dose computation. The three algorithms include: an improved point-in-polygon algorithm, an incremental voxel ray-tracing algorithm, and stereographic projection of beamlets for voxel truncation. The point-in-polygon and incremental voxel ray-tracing algorithms have been used in computer graphics and nuclear medicine applications, while the stereographic projection algorithm was developed by our group. These algorithms demonstrate significant improvements over the current standard algorithms in the peer-reviewed literature, i.e., the polygon and voxel ray-tracing algorithms of Siddon for voxel classification (point-in-polygon testing) and dose computation, respectively, and radius testing for voxel truncation. The presented polygon ray-tracing technique was tested on 10 intensity modulated radiation therapy (IMRT) treatment planning cases that required the classification of between 0.58 and 2.0 million voxels on a 2.5 mm isotropic dose grid into 1-4 targets and 5-14 structures represented as extruded polygons (a.k.a. Siddon prisms). Incremental voxel ray tracing and voxel truncation employing virtual stereographic projection were tested on the same IMRT treatment planning cases, where voxel dose was required for 230-2400 beamlets using a finite-size pencil-beam algorithm. A 100- to 360-fold CPU time improvement over Siddon's method was observed for the polygon ray-tracing algorithm when classifying voxels for target and structure membership. A 2.6- to 3.1-fold reduction in CPU time over current algorithms was found for the implementation of incremental ray tracing. Additionally, voxel truncation via stereographic projection was observed to be 11-25 times faster than the radial-testing beamlet-extent approach and was further improved 1.7-2.0 fold through point classification using the method of translation over the cross-product technique.
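    For readers unfamiliar with incremental voxel ray tracing, the sketch below shows a standard Amanatides-Woo style traversal in which the ray advances one voxel boundary at a time; the grid and ray conventions are illustrative assumptions, and this is not the paper's implementation.

```python
# Sketch of Amanatides-Woo style incremental voxel traversal (illustrative conventions).
import numpy as np

def traverse_voxels(origin, direction, grid_shape, voxel_size=1.0):
    """Yield the integer indices of voxels visited by a ray, one step per voxel."""
    pos = np.asarray(origin, dtype=float) / voxel_size
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    voxel = np.floor(pos).astype(int)
    step = np.where(d > 0, 1, -1)
    next_boundary = np.where(d > 0, np.floor(pos) + 1.0, np.floor(pos))
    with np.errstate(divide="ignore", invalid="ignore"):
        t_max = np.where(d != 0, (next_boundary - pos) / d, np.inf)    # ray length to next face
        t_delta = np.where(d != 0, np.abs(1.0 / d), np.inf)            # ray length per voxel
    while np.all(voxel >= 0) and np.all(voxel < grid_shape):
        yield tuple(voxel)
        axis = int(np.argmin(t_max))     # cross the closest voxel face
        voxel[axis] += step[axis]
        t_max[axis] += t_delta[axis]

for v in traverse_voxels(origin=(0.2, 0.3, 0.1), direction=(1.0, 0.7, 0.2), grid_shape=(8, 8, 8)):
    print(v)
```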

  7. Incremental change in the set of coactive cortical assemblies enables mental continuity.

    Reser, Jared Edward

    2016-12-01

    This opinion article explores how sustained neural firing in association areas allows high-order mental representations to be coactivated over multiple perception-action cycles, permitting sequential mental states to share overlapping content and thus be recursively interrelated. The term "state-spanning coactivity" (SSC) is introduced to refer to neural nodes that remain coactive as a group over a given period of time. SSC ensures that contextual groupings of goal or motor-relevant representations will demonstrate continuous activity over a delay period. It also allows potentially related representations to accumulate and coactivate despite delays between their initial appearances. The nodes that demonstrate SSC are a subset of the active representations from the previous state, and can act as referents to which newly introduced representations of succeeding states relate. Coactive nodes pool their spreading activity, converging on and activating new nodes, adding these to the remaining nodes from the previous state. Thus, the overall distribution of coactive nodes in cortical networks evolves gradually during contextual updating. The term "incremental change in state-spanning coactivity" (icSSC) is introduced to refer to this gradual evolution. Because a number of associated representations can be sustained continuously, each brain state is embedded recursively in the previous state, amounting to an iterative process that can implement learned algorithms to progress toward a complex result. The longer representations are sustained, the more successive mental states can share related content, exhibit progressive qualities, implement complex algorithms, and carry thematic or narrative continuity. Included is a discussion of the implications that SSC and icSSC may have for understanding working memory, defining consciousness, and constructing AI architectures. Copyright © 2016 The Author. Published by Elsevier Inc. All rights reserved.

  8. Automated test data generation for branch testing using incremental

    The cost of software testing can be reduced by automated test data generation that finds a minimal set of data with maximum coverage. Search-based software testing (SBST) is one of the techniques recently used for automated testing tasks. SBST makes use of the control flow graph (CFG) and meta-heuristic search algorithms to ...

  9. Incremental online object learning in a vehicular radar-vision fusion framework

    Ji, Zhengping [Los Alamos National Laboratory]; Weng, Juyang [Los Alamos National Laboratory]; Luciw, Matthew [IEEE]; Zeng, Shuqing [IEEE]

    2010-10-19

    In this paper, we propose an object learning system that incorporates sensory information from an automotive radar system and a video camera. The radar system provides a coarse attention for the focus of visual analysis on relatively small areas within the image plane. The attended visual areas are coded and learned by a 3-layer neural network utilizing what is called in-place learning, where every neuron is responsible for the learning of its own signal processing characteristics within its connected network environment, through inhibitory and excitatory connections with other neurons. The modeled bottom-up, lateral, and top-down connections in the network enable sensory sparse coding, unsupervised learning and supervised learning to occur concurrently. The presented work is applied to learn two types of encountered objects in multiple outdoor driving settings. Cross validation results show an overall recognition accuracy above 95% for the radar-attended window images. In comparison with the uncoded representation and purely unsupervised learning (without top-down connection), the proposed network improves the recognition rate by 15.93% and 6.35%, respectively. The proposed system also compares favorably with other learning algorithms. The result indicates that our learning system is the only one to fit all the challenging criteria for the development of an incremental and online object learning system.

  10. Modeling of optimization strategies in the incremental CNC sheet metal forming process

    Bambach, M.; Hirt, G.; Ames, J.

    2004-01-01

    Incremental CNC sheet forming (ISF) is a relatively new sheet metal forming process for small batch production and prototyping. In ISF, a blank is shaped by the CNC movements of a simple tool in combination with a simplified die. The standard forming strategies in ISF entail two major drawbacks: (i) the inherent forming kinematics set limits on the maximum wall angle that can be formed with ISF. (ii) since elastic parts of the imposed deformation can currently not be accounted for in CNC code generation, the standard strategies can lead to undesired deviations between the target and the sample geometry. Several enhancements have recently been put forward to overcome the above limitations, among them a multistage forming strategy to manufacture steep flanges, and a correction algorithm to improve the geometric accuracy. Both strategies have been successful in improving the forming of simple parts. However, the high experimental effort to empirically optimize the tool paths motivates the use of process modeling techniques. This paper deals with finite element modeling of the ISF process. In particular, the outcome of different multistage strategies is modeled and compared to collated experimental results regarding aspects such as sheet thickness and the onset of wrinkling. Moreover, the feasibility of modeling the geometry of a part is investigated as this is of major importance with respect to optimizing the geometric accuracy. Experimental validation is achieved by optical deformation measurement that gives the local displacements and strains of the sheet during forming as benchmark quantities for the simulation.

  11. A user-friendly tool for incremental haemodialysis prescription.

    Casino, Francesco Gaetano; Basile, Carlo

    2018-01-05

    There is a recently heightened interest in incremental haemodialysis (IHD), the main advantage of which could likely be a better preservation of the residual kidney function of the patients. The implementation of IHD, however, is hindered by many factors, among them, the mathematical complexity of its prescription. The aim of our study was to design a user-friendly tool for IHD prescription, consisting of only a few rows of a common spreadsheet. The keystone of our spreadsheet was the following fundamental concept: the dialysis dose to be prescribed in IHD depends only on the normalized urea clearance provided by the native kidneys (KRUn) of the patient for each frequency of treatment, according to the variable target model recently proposed by Casino and Basile (The variable target model: a paradigm shift in the incremental haemodialysis prescription. Nephrol Dial Transplant 2017; 32: 182-190). The first step was to put in sequence a series of equations in order to calculate, firstly, KRUn and, then, the key parameters to be prescribed for an adequate IHD; the second step was to compare KRUn values obtained with our spreadsheet with KRUn values obtainable with the gold standard Solute-solver (Daugirdas JT et al., Solute-solver: a web-based tool for modeling urea kinetics for a broad range of hemodialysis schedules in multiple patients. Am J Kidney Dis 2009; 54: 798-809) in a sample of 40 incident haemodialysis patients. Our spreadsheet provided excellent results. The differences with Solute-solver were clinically negligible. This was confirmed by the Bland-Altman plot built to analyse the agreement between KRUn values obtained with the two methods: the difference was 0.07 ± 0.05 mL/min/35 L. Our spreadsheet is a user-friendly tool able to provide clinically acceptable results in IHD prescription. Two immediate consequences could follow: (i) a larger dissemination of IHD might occur; and (ii) our spreadsheet could represent a useful tool for an ineludibly

  12. Complex Incremental Product Innovation in Established Service Firms: A Micro Institutional Perspective

    Vermeulen, Patrick; Bosch, Frans; Volberda, Henk

    2007-01-01

    Many product innovation studies have described key determinants that should lead to successful incremental product innovation. Despite numerous studies suggesting how incremental product innovation should be successfully undertaken, many firms still struggle with this type of innovation. In this paper, we use an institutional perspective to investigate why established firms in the financial services industry struggle with their complex incremental product innovation efforts. We ar...

  13. Incremental cost of PACS in a medical intensive care unit

    Langlotz, Curtis P.; Cleff, Bridget; Even-Shoshan, Orit; Bozzo, Mary T.; Redfern, Regina O.; Brikman, Inna; Seshadri, Sridhar B.; Horii, Steven C.; Kundel, Harold L.

    1995-05-01

    Our purpose is to determine the incremental costs (or savings) due to the introduction of picture archiving and communication systems (PACS) and computed radiology (CR) in a medical intensive care unit (MICU). Our economic analysis consists of three measurement methods. The first method is an assessment of the direct costs to the radiology department, implemented in a spreadsheet model. The second method consists of a series of brief observational studies to measure potential changes in personnel costs that might not be reflected in administrative claims. The third method (results not reported here) is a multivariate modeling technique which estimates the independent effect of PACS/CR on the cost of care (estimated from administrative claims data), while controlling for clinical case- mix variables. Our direct cost model shows no cost savings to the radiology department after the introduction of PACS in the medical intensive care unit. Savings in film supplies and film library personnel are offset by increases in capital equipment costs and PACS operation personnel. The results of observational studies to date demonstrate significant savings in clinician film-search time, but no significant change in technologist time or lost films. Our model suggests that direct radiology costs will increase after the limited introduction of PACS/CR in the MICU. Our observational studies show a small but significant effect on clinician film search time by the introduction of PACS/CR in the MICU, but no significant effect on other variables. The projected costs of a hospital-wide PACS are currently under study.

  14. Incremental Frequent Subgraph Mining on Large Evolving Graphs

    Abdelhamid, Ehab

    2017-08-22

    Frequent subgraph mining is a core graph operation used in many domains, such as graph data management and knowledge exploration, bioinformatics and security. Most existing techniques target static graphs. However, modern applications, such as social networks, utilize large evolving graphs. Mining these graphs using existing techniques is infeasible, due to the high computational cost. In this paper, we propose IncGM+, a fast incremental approach for continuous frequent subgraph mining problem on a single large evolving graph. We adapt the notion of “fringe” to the graph context, that is the set of subgraphs on the border between frequent and infrequent subgraphs. IncGM+ maintains fringe subgraphs and exploits them to prune the search space. To boost the efficiency, we propose an efficient index structure to maintain selected embeddings with minimal memory overhead. These embeddings are utilized to avoid redundant expensive subgraph isomorphism operations. Moreover, the proposed system supports batch updates. Using large real-world graphs, we experimentally verify that IncGM+ outperforms existing methods by up to three orders of magnitude, scales to much larger graphs and consumes less memory.

  15. A novel instrument for generating angular increments of 1 nanoradian

    Alcock, Simon G.; Bugnar, Alex; Nistea, Ioana; Sawhney, Kawal; Scott, Stewart; Hillman, Michael; Grindrod, Jamie; Johnson, Iain

    2015-12-01

    Accurate generation of small angles is of vital importance for calibrating angle-based metrology instruments used in a broad spectrum of industries including mechatronics, nano-positioning, and optic fabrication. We present a novel, piezo-driven, flexure device capable of reliably generating micro- and nanoradian angles. Unlike many such instruments, Diamond Light Source's nano-angle generator (Diamond-NANGO) does not rely on two separate actuators or rotation stages to provide coarse and fine motion. Instead, a single Physik Instrumente NEXLINE "PiezoWalk" actuator provides millimetres of travel with nanometre resolution. A cartwheel flexure efficiently converts displacement from the linear actuator into rotary motion with minimal parasitic errors. Rotation of the flexure is directly measured via a Magnescale "Laserscale" angle encoder. Closed-loop operation of the PiezoWalk actuator, using high-speed feedback from the angle encoder, ensures that the Diamond-NANGO's output drifts by only ˜0.3 nrad rms over ˜30 min. We show that the Diamond-NANGO can reliably move with unprecedented 1 nrad (˜57 ndeg) angular increments over a range of >7000 μrad. An autocollimator, interferometer, and capacitive displacement sensor are used to independently confirm the Diamond-NANGO's performance by simultaneously measuring the rotation of a reflective cube.

  16. Validation of daily increments periodicity in otoliths of spotted gar

    Snow, Richard A.; Long, James M.; Frenette, Bryan D.

    2017-01-01

    Accurate age and growth information is essential in successful management of fish populations and for understanding early life history. We validated daily increment deposition, including the timing of first ring formation, for spotted gar (Lepisosteus oculatus) through 127 days post hatch. Fry were produced from hatchery-spawned specimens, and up to 10 individuals per week were sacrificed and their otoliths (sagitta, lapillus, and asteriscus) removed for daily age estimation. Daily age estimates for all three otolith pairs were significantly related to known age. The strongest relationships existed for measurements from the sagitta (r2 = 0.98) and the lapillus (r2 = 0.99), with the asteriscus (r2 = 0.95) the lowest. All age prediction models resulted in a slope near unity, indicating that ring deposition occurred approximately daily. Initiation of ring formation varied among otolith types, with deposition beginning 3, 7, and 9 days for the sagitta, lapillus, and asteriscus, respectively. Results of this study suggested that otoliths are useful to estimate daily age of spotted gar juveniles; these data may be used to back-calculate hatch dates, estimate early growth rates, and correlate with environmental factors that influence spawning in wild populations. This early life history information will be valuable in better understanding the ecology of this species.

  17. VOLATILITAS RELEVANSI NILAI INCREMENTAL DARI LABA DAN NILAI BUKU [The volatility of the incremental value relevance of earnings and book value]

    B. Linggar Yekti Nugraheni

    2012-03-01

    Full Text Available This research investigates the volatility pattern of the value relevance of earnings and book value of equity. It is predicted that the volatility of value relevance is associated with the time horizon. Using the Ohlson model, the hypotheses developed are: (1) earnings and book value of equity are positively associated with the stock price, and (2) there is a decreasing or increasing pattern in the incremental value relevance. The sample used in this research consists of manufacturing companies listed on the ISE (Indonesia Stock Exchange) during the 1998-2007 period. The results show that earnings and book value of equity are positively related to the stock price, and that the value relevance of earnings decreases while that of book value of equity increases over the observation period.

  18. Incremental Dynamic Analysis of Koyna Dam under Repeated Ground Motions

    Zainab Nik Azizan, Nik; Majid, Taksiah A.; Nazri, Fadzli Mohamed; Maity, Damodar; Abdullah, Junaidah

    2018-03-01

    This paper presents the incremental dynamic analysis (IDA) of a concrete gravity dam under single and repeated earthquake loadings to identify the limit state of the dam. Seven ground motions with horizontal and vertical components, based on real repeated earthquakes recorded worldwide, are considered as seismic input in the nonlinear dynamic analysis. All ground motions are converted to response spectra and scaled to the developed elastic response spectrum in order to match the characteristics of the ground motion to the soil type; the scaling depends on the fundamental period, T1, of the dam. The Koyna dam has been selected as a case study, assuming no sliding and a rigid foundation. IDA curves for the Koyna dam are developed for single and repeated ground motions, and the performance level of the dam is identified. The IDA curves for repeated ground motions are stiffer than those for single ground motions. The ultimate state displacement for a single event is 45.59 mm, decreasing to 39.33 mm under repeated events, a reduction of about 14%. This shows that the performance level of the dam under seismic loading depends on the ground motion pattern.

  19. Robust, Causal, and Incremental Approaches to Investigating Linguistic Adaptation

    Roberts, Seán G.

    2018-01-01

    This paper discusses the maximum robustness approach for studying cases of adaptation in language. We live in an age where we have more data on more languages than ever before, and more data to link it with from other domains. This should make it easier to test hypotheses involving adaptation, and also to spot new patterns that might be explained by adaptation. However, there is not much discussion of the overall approach to research in this area. There are outstanding questions about how to formalize theories, what the criteria are for directing research and how to integrate results from different methods into a clear assessment of a hypothesis. This paper addresses some of those issues by suggesting an approach which is causal, incremental and robust. It illustrates the approach with reference to a recent claim that dry environments select against the use of precise contrasts in pitch. Study 1 replicates a previous analysis of the link between humidity and lexical tone with an alternative dataset and finds that it is not robust. Study 2 performs an analysis with a continuous measure of tone and finds no significant correlation. Study 3 addresses a more recent analysis of the link between humidity and vowel use and finds that it is robust, though the effect size is small and the robustness of the measurement of vowel use is low. Methodological robustness of the general theory is addressed by suggesting additional approaches including iterated learning, a historical case study, corpus studies, and studying individual speech. PMID:29515487

  20. Distribution of incremental static stress caused by earthquakes

    Y. Y. Kagan

    1994-01-01

    Full Text Available Theoretical calculations, simulations and measurements of rotation of earthquake focal mechanisms suggest that the stress in earthquake focal zones follows the Cauchy distribution, which is one of the stable probability distributions (with the value of the exponent α equal to 1). We review the properties of the stable distributions and show that the Cauchy distribution is expected to approximate the stress caused by earthquakes occurring over geologically long intervals of a fault zone development. However, the stress caused by recent earthquakes recorded in instrumental catalogues should follow symmetric stable distributions with the value of α significantly less than one. This is explained by a fractal distribution of earthquake hypocentres: the dimension of a hypocentre set, δ, is close to zero for short-term earthquake catalogues and asymptotically approaches 2¼ for long-time intervals. We use the Harvard catalogue of seismic moment tensor solutions to investigate the distribution of incremental static stress caused by earthquakes. The stress measured in the focal zone of each event is approximated by stable distributions. In agreement with theoretical considerations, the exponent value of the distribution approaches zero as the time span of an earthquake catalogue (ΔT) decreases. For large stress values α increases. We surmise that it is caused by the δ increase for small inter-earthquake distances due to location errors.

  1. Noise masking of S-cone increments and decrements.

    Wang, Quanhong; Richters, David P; Eskew, Rhea T

    2014-11-12

    S-cone increment and decrement detection thresholds were measured in the presence of bipolar, dynamic noise masks. Noise chromaticities were the L-, M-, and S-cone directions, as well as L-M, L+M, and achromatic (L+M+S) directions. Noise contrast power was varied to measure threshold Energy versus Noise (EvN) functions. S+ and S- thresholds were similarly, and weakly, raised by achromatic noise. However, S+ thresholds were much more elevated by S, L+M, L-M, L- and M-cone noises than were S- thresholds, even though the noises consisted of two symmetric chromatic polarities of equal contrast power. A linear cone combination model accounts for the overall pattern of masking of a single test polarity well. L and M cones have opposite signs in their effects upon raising S+ and S- thresholds. The results strongly indicate that the psychophysical mechanisms responsible for S+ and S- detection, presumably based on S-ON and S-OFF pathways, are distinct, unipolar mechanisms, and that they have different spatiotemporal sampling characteristics, or contrast gains, or both. © 2014 ARVO.

  2. Business Collaboration in Food Networks: Incremental Solution Development

    Harald Sundmaeker

    2014-10-01

    Full Text Available The paper presents an approach for incremental solution development based on the use of the currently developed Internet-based FIspace business collaboration platform. A key element is the clear segmentation of infrastructures that are either internal or external to the collaborating business entity in the food network. On the one hand, the approach makes it possible to differentiate between specific centralised as well as decentralised ways of storing data and hosting IT-based functionalities, and it facilitates the selection of specific data-exchange protocols and data models. On the other hand, the supported solution design and subsequent development focuses on reusable “software Apps” that can be used on their own and incorporate a clear added value for the business actors. It is outlined how to push the development and introduction of Apps that do not require basic changes to the existing infrastructure. The paper presents an example based on the development of a set of Apps for the exchange of product-quality-related information in food networks, specifically addressing fresh fruits and vegetables; it combines workflow support for data exchange from farm to retail with quality feedback information to facilitate business process improvement. Finally, the latest status of the FIspace platform development is outlined, along with key features and potential ways for real users and software developers to use the FIspace platform, which is initiated by science and industry.

  3. Automating the Incremental Evolution of Controllers for Physical Robots.

    Faíña, Andrés; Jacobsen, Lars Toft; Risi, Sebastian

    2017-01-01

    Evolutionary robotics is challenged with some key problems that must be solved, or at least mitigated extensively, before it can fulfill some of its promises to deliver highly autonomous and adaptive robots. The reality gap and the ability to transfer phenotypes from simulation to reality constitute one such problem. Another lies in the embodiment of the evolutionary processes, which links to the first, but focuses on how evolution can act on real agents and occur independently from simulation, that is, going from being, as Eiben, Kernbach, & Haasdijk [2012, p. 261] put it, "the evolution of things, rather than just the evolution of digital objects.…" The work presented here investigates how fully autonomous evolution of robot controllers can be realized in hardware, using an industrial robot and a marker-based computer vision system. In particular, this article presents an approach to automate the reconfiguration of the test environment and shows that it is possible, for the first time, to incrementally evolve a neural robot controller for different obstacle avoidance tasks with no human intervention. Importantly, the system offers a high level of robustness and precision that could potentially open up the range of problems amenable to embodied evolution.

  4. Adaptive Sampling based 3D Profile Measuring Method for Free-Form Surface

    Duan, Xianyin; Zou, Yu; Gao, Qiang; Peng, Fangyu; Zhou, Min; Jiang, Guozhang

    2018-03-01

    In order to solve the problems of adaptability and scanning efficiency of current surface profile detection devices, a high-precision and high-efficiency detection approach based on self-adaptability is proposed for the surface contour of free-form surface parts. A contact mechanical probe and a non-contact laser probe are integrated according to a sampling approach of adaptive front-end path detection. First, the front-end path is measured by the non-contact laser probe, and the detection path is planned by the internal algorithm of the measuring instrument. Then a reasonable measurement sampling along the planned path is completed by the contact mechanical probe. The detection approach can effectively improve the measurement efficiency of free-form surface contours and can detect the surface contours of unknown free-form surfaces with different curvatures and even different rates of curvature. The approach proposed in this paper also has important reference value for free-form surface contour detection.

  5. A comparative study of velocity increment generation between the rigid body and flexible models of MMET

    Ismail, Norilmi Amilia, E-mail: aenorilmi@usm.my [School of Aerospace Engineering, Engineering Campus, Universiti Sains Malaysia, 14300 Nibong Tebal, Pulau Pinang (Malaysia)]

    2016-02-01

    The motorized momentum exchange tether (MMET) is capable of generating useful velocity increments through spin–orbit coupling. This study presents a comparative study of the velocity increments between the rigid body and flexible models of MMET. The equations of motions of both models in the time domain are transformed into a function of true anomaly. The equations of motion are integrated, and the responses in terms of the velocity increment of the rigid body and flexible models are compared and analysed. Results show that the initial conditions, eccentricity, and flexibility of the tether have significant effects on the velocity increments of the tether.

  6. Pseudo-deterministic Algorithms

    Goldwasser , Shafi

    2012-01-01

    In this talk we describe a new type of probabilistic algorithm which we call Bellagio Algorithms: a randomized algorithm which is guaranteed to run in expected polynomial time, and to produce a correct and unique solution with high probability. These algorithms are pseudo-deterministic: they cannot be distinguished from deterministic algorithms in polynomial time by a probabilistic polynomial-time observer with black-box access to the algorithm. We show a necessary an...

  7. Sampling-based real-time motion planning under state uncertainty for autonomous micro-aerial vehicles in GPS-denied environments.

    Li, Dachuan; Li, Qing; Cheng, Nong; Song, Jingyan

    2014-11-18

    This paper presents a real-time motion planning approach for autonomous vehicles with complex dynamics and state uncertainty. The approach is motivated by the motion planning problem for autonomous vehicles navigating in GPS-denied dynamic environments, which involves non-linear and/or non-holonomic vehicle dynamics, incomplete state estimates, and constraints imposed by uncertain and cluttered environments. To address the above motion planning problem, we propose an extension of the closed-loop rapid belief trees, the closed-loop random belief trees (CL-RBT), which incorporates predictions of the position estimation uncertainty, using a factored form of the covariance provided by the Kalman filter-based estimator. The proposed motion planner operates by incrementally constructing a tree of dynamically feasible trajectories using the closed-loop prediction, while selecting candidate paths with low uncertainty using efficient covariance update and propagation. The algorithm can operate in real-time, continuously providing the controller with feasible paths for execution, enabling the vehicle to account for dynamic and uncertain environments. Simulation results demonstrate that the proposed approach can generate feasible trajectories that reduce the state estimation uncertainty, while handling complex vehicle dynamics and environment constraints.
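    A minimal sketch of the covariance bookkeeping behind this kind of belief-space planning is given below: the Kalman-filter covariance is propagated along a candidate trajectory and its trace is accumulated as an uncertainty cost for path selection. The matrices A, Q, C, R are illustrative placeholders rather than the paper's vehicle and sensor models, and the factored covariance form used by CL-RBT is not reproduced.

```python
# Sketch of covariance propagation used as an uncertainty cost (placeholder models).
import numpy as np

def propagate_uncertainty(P0, A, Q, C, R, steps):
    """Propagate the Kalman covariance along a candidate path; return final P and cost."""
    P, cost = P0.copy(), 0.0
    I = np.eye(P0.shape[0])
    for _ in range(steps):
        P = A @ P @ A.T + Q                       # process (prediction) update
        S = C @ P @ C.T + R                       # innovation covariance
        K = P @ C.T @ np.linalg.inv(S)            # Kalman gain
        P = (I - K @ C) @ P                       # measurement update
        cost += np.trace(P)                       # uncertainty accumulated along the path
    return P, cost

# Toy 2D example: an uncertainty cost for a 10-step candidate trajectory.
A, Q = np.eye(2), 0.01 * np.eye(2)
C, R = np.eye(2), 0.10 * np.eye(2)
print(propagate_uncertainty(0.5 * np.eye(2), A, Q, C, R, steps=10)[1])
```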

  8. A Proposal on the Advanced Sampling Based Sensitivity and Uncertainty Analysis Method for the Eigenvalue Uncertainty Analysis

    Kim, Song Hyun; Song, Myung Sub; Shin, Chang Ho; Noh, Jae Man

    2014-01-01

    When the perturbation theory is used, the uncertainty of the response can be estimated from a single transport simulation, and therefore the computational load is small. However, it has the disadvantage that the computation methodology must be modified whenever a different response type, such as the multiplication factor, flux, or power distribution, is estimated. Hence, it is suitable for analyzing a few responses with many perturbed parameters. The statistical approach is a sampling-based method which uses cross sections randomly sampled from covariance data to analyze the uncertainty of the response. XSUSA is a code based on the statistical approach. The cross sections are only modified with the sampling-based method; thus, general transport codes can be directly utilized for the S/U analysis without any code modifications. However, to calculate the uncertainty distribution from the results, the code simulation must be repeated many times with randomly sampled cross sections; this inefficiency is a known disadvantage of the stochastic method. In this study, to increase the estimation efficiency of the sampling-based S/U method, an advanced sampling and estimation method was proposed and verified. The main feature of the proposed method is that the cross section averaged from each single sampled cross section is used. The proposed method was validated against the perturbation theory.
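    The plain sampling-based workflow the abstract contrasts with is easy to state in code: draw perturbed cross-section sets from a covariance model, run the transport calculation once per sample, and take the spread of the responses as the uncertainty. The Gaussian sampling and the run_transport stand-in below are assumptions for illustration, not the XSUSA implementation or the proposed advanced method.

```python
# Sketch of the plain sampling-based uncertainty workflow (stand-in transport model).
import numpy as np

def sampled_uncertainty(mean_xs, cov_xs, run_transport, n_samples=300, seed=0):
    """Sample cross sections, run the solver per sample, return response mean and std."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(mean_xs, cov_xs, size=n_samples)
    responses = np.array([run_transport(xs) for xs in samples])
    return responses.mean(), responses.std(ddof=1)

# Toy stand-in for a transport calculation: k-eff as a smooth function of two cross sections.
mean = np.array([1.0, 0.5])
cov = np.diag([1e-4, 4e-4])
keff_mean, keff_std = sampled_uncertainty(mean, cov, lambda xs: 1.0 + 0.3 * xs[0] - 0.1 * xs[1])
print(keff_mean, keff_std)
```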

  9. Appropriate use of the increment entropy for electrophysiological time series.

    Liu, Xiaofeng; Wang, Xue; Zhou, Xu; Jiang, Aimin

    2018-04-01

    The increment entropy (IncrEn) is a new measure for quantifying the complexity of a time series. There are three critical parameters in the IncrEn calculation: N (length of the time series), m (dimensionality), and q (quantifying precision). However, the question of how to choose the most appropriate combination of IncrEn parameters for short datasets has not been extensively explored. The purpose of this research was to provide guidance on choosing suitable IncrEn parameters for short datasets by exploring the effects of varying the parameter values. We used simulated data, epileptic EEG data and cardiac interbeat (RR) data to investigate the effects of the parameters on the calculated IncrEn values. The results reveal that IncrEn is sensitive to changes in m, q and N for short datasets (N≤500). However, IncrEn reaches stability at a data length of N=1000 with m=2 and q=2, and for short datasets (N=100), it shows better relative consistency with 2≤m≤6 and 2≤q≤8. We suggest that the value of N should be no less than 100. To enable a clear distinction between different classes based on IncrEn, we recommend that m and q should take values between 2 and 4. With appropriate parameters, IncrEn enables the effective detection of complexity variations in physiological time series, suggesting that IncrEn should be useful for the analysis of physiological time series in clinical applications. Copyright © 2018 Elsevier Ltd. All rights reserved.
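    To make the roles of N, m and q concrete, the sketch below follows one common formulation of an increment-entropy style calculation: increments are coded by their sign and a q-level quantized magnitude, words of m consecutive coded increments are counted, and the Shannon entropy of the word distribution (normalized by m) is returned. Details such as the exact magnitude quantization vary between implementations, so this should not be read as the reference definition of IncrEn.

```python
# Sketch of an increment-entropy style calculation (one common formulation, not the reference one).
from collections import Counter
import numpy as np

def increment_entropy(x, m=2, q=2):
    x = np.asarray(x, dtype=float)
    v = np.diff(x)                                                    # increment series
    sd = v.std() + 1e-12
    size = np.minimum(np.floor(np.abs(v) * q / sd), q).astype(int)    # q-level magnitude
    sign = np.sign(v).astype(int)
    symbols = list(zip(sign, size))                                   # coded increments
    words = [tuple(symbols[i:i + m]) for i in range(len(symbols) - m + 1)]
    counts = np.array(list(Counter(words).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum() / m)                          # entropy normalized by word length

# Effect of data length N on the estimate for a white-noise series.
rng = np.random.default_rng(1)
for n in (100, 500, 1000):
    print(n, round(increment_entropy(rng.standard_normal(n), m=2, q=2), 3))
```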

  10. Magneto-sensor circuit efficiency incremented by Fourier-transformation

    Talukdar, Abdul Hafiz Ibne; Useinov, Arthur; Hussain, Muhammad Mustafa

    2011-01-01

    In this paper, the detection and recognition of different magnetic films are realized by an intelligent algorithm with the aid of a cost-effective, simple and highly efficient circuit. It is well known that magnetic films generate oscillating frequencies when they form part of an LC oscillatory circuit; these frequencies can be further analyzed to gather information about their magnetic properties. For the first time, in this work we apply signal analysis in the frequency domain to create Fourier frequency spectra, which are used to detect the sample properties and recognize the samples. In this paper we summarize both the simulation and experimental results. © 2011 Elsevier Ltd. All rights reserved.

  11. Magneto-sensor circuit efficiency incremented by Fourier-transformation

    Talukdar, Abdul Hafiz Ibne

    2011-10-01

    In this paper, the detection and recognition of different magnetic films are realized by an intelligent algorithm with the aid of a cost-effective, simple and highly efficient circuit. It is well known that magnetic films generate oscillating frequencies when they form part of an LC oscillatory circuit; these frequencies can be further analyzed to gather information about their magnetic properties. For the first time, in this work we apply signal analysis in the frequency domain to create Fourier frequency spectra, which are used to detect the sample properties and recognize the samples. In this paper we summarize both the simulation and experimental results. © 2011 Elsevier Ltd. All rights reserved.

  12. Hamiltonian Algorithm Sound Synthesis

    大矢, 健一

    2013-01-01

    Hamiltonian Algorithm (HA) is an algorithm for searching for solutions in optimization problems. This paper introduces a sound synthesis technique using the Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses the phase transition effect in HA. Because of this transition effect, totally new waveforms are produced.

  13. Progressive geometric algorithms

    Alewijnse, S.P.A.; Bagautdinov, T.M.; de Berg, M.T.; Bouts, Q.W.; ten Brink, Alex P.; Buchin, K.A.; Westenberg, M.A.

    2015-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms

  14. Progressive geometric algorithms

    Alewijnse, S.P.A.; Bagautdinov, T.M.; Berg, de M.T.; Bouts, Q.W.; Brink, ten A.P.; Buchin, K.; Westenberg, M.A.

    2014-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms

  15. Lactate and ammonia concentration in blood and sweat during incremental cycle ergometer exercise

    Ament, W; Huizenga; Mook, GA; Gips, CH; Verkerke, GJ

    It is known that the concentrations of ammonia and lactate in blood increase during incremental exercise. Sweat also contains lactate and ammonia. The aim of the present study was to investigate the physiological response of lactate and ammonia in plasma and sweat during a stepwise incremental cycle

  16. Incremental Beliefs of Ability, Achievement Emotions and Learning of Singapore Students

    Luo, Wenshu; Lee, Kerry; Ng, Pak Tee; Ong, Joanne Xiao Wei

    2014-01-01

    This study investigated the relationships of students' incremental beliefs of math ability to their achievement emotions, classroom engagement and math achievement. A sample of 273 secondary students in Singapore were administered measures of incremental beliefs of math ability, math enjoyment, pride, boredom and anxiety, as well as math classroom…

  17. The Algorithmic Imaginary

    Bucher, Taina

    2017-01-01

    ... of algorithms affect people's use of these platforms, if at all? To help answer these questions, this article examines people's personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops the notion of the algorithmic imaginary. It is argued that the algorithmic imaginary – ways of thinking about what algorithms are, what they should be and how they function – is not just productive of different moods and sensations but plays a generative role in moulding the Facebook algorithm itself...

  18. The BR eigenvalue algorithm

    Geist, G.A. [Oak Ridge National Lab., TN (United States). Computer Science and Mathematics Div.; Howell, G.W. [Florida Inst. of Tech., Melbourne, FL (United States). Dept. of Applied Mathematics; Watkins, D.S. [Washington State Univ., Pullman, WA (United States). Dept. of Pure and Applied Mathematics

    1997-11-01

    The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30-60 in computing time and a factor of over 100 in matrix storage space.

  19. Design of Incremental Conductance Sliding Mode MPPT Control Applied by Integrated Photovoltaic and Proton Exchange Membrane Fuel Cell System under Various Operating Conditions for BLDC Motor

    Jehun Hahm

    2015-01-01

    Full Text Available This paper proposes an integrated photovoltaic (PV) and proton exchange membrane fuel cell (PEMFC) system for continuous energy harvesting under various operating conditions for use with a brushless DC motor. The proposed scheme is based on the incremental conductance (IncCond) algorithm combined with the sliding mode technique. Under changing atmospheric conditions, the energy conversion efficiency of a PV array is very low, leading to significant power losses. Consequently, increasing efficiency by means of maximum power point tracking (MPPT) is particularly important. To manage such a hybrid system, control strategies need to be established to achieve the aim of the distributed system. Firstly, a Matlab/Simulink-based model of the PV and PEMFC is developed and validated, as well as the incremental conductance sliding (ICS) MPPT technique; then, different MPPT algorithms are employed to control the PV array under nonuniform temperature and insolation conditions, to study these algorithms' effectiveness under various operating conditions. Conventional techniques are easy to implement but produce oscillations at MPP. Compared to these techniques, the proposed technique is more efficient; it produces less oscillation at MPP in the steady state and provides more precise tracking.
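    The core IncCond decision rule referred to above compares the incremental conductance dI/dV with -I/V to decide on which side of the maximum power point the array is operating. The sketch below shows that rule only; the sliding-mode combination proposed in the paper is not reproduced, the step size is arbitrary, and the sign convention assumes a boost converter in which increasing the duty cycle lowers the PV operating voltage.

```python
# Sketch of the incremental conductance decision rule (boost-converter sign convention assumed).
def incond_step(v, i, v_prev, i_prev, duty, step=0.005, tol=1e-6):
    """Return an updated duty cycle from successive PV voltage/current samples (v > 0 assumed)."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        if di == 0:
            return duty                          # operating point unchanged: hold
        # irradiance changed at constant voltage: move toward the new MPP
        return duty - step if di > 0 else duty + step
    g = di / dv                                  # incremental conductance dI/dV
    if abs(g + i / v) < tol:
        return duty                              # dP/dV ~ 0: at the MPP
    # dP/dV > 0 -> raise the PV voltage (lower duty on a boost converter), else lower it
    return duty - step if g > -i / v else duty + step

# Usage inside a control loop: duty = incond_step(v_k, i_k, v_prev, i_prev, duty)
```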

  20. Algorithmically specialized parallel computers

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer.This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster

  1. Greedy Sampling and Incremental Surrogate Model-Based Tailoring of Aeroservoelastic Model Database for Flexible Aircraft

    Wang, Yi; Pant, Kapil; Brenner, Martin J.; Ouellette, Jeffrey A.

    2018-01-01

    This paper presents a data analysis and modeling framework to tailor and develop a linear parameter-varying (LPV) aeroservoelastic (ASE) model database for flexible aircraft in a broad 2D flight parameter space. The Kriging surrogate model is constructed using ASE models at a fraction of the grid points within the original model database, and then the ASE model at any flight condition can be obtained simply through surrogate model interpolation. The greedy sampling algorithm is developed to select the next sample point that carries the worst relative error between the surrogate model prediction and the benchmark model in the frequency domain among all input-output channels. The process is iterated to incrementally improve surrogate model accuracy until a pre-determined tolerance or iteration budget is met. The methodology is applied to the ASE model database of a flexible aircraft currently being tested at NASA/AFRC for flutter suppression and gust load alleviation. Our studies indicate that the proposed method can reduce the number of models in the original database by 67%. Even so, the ASE models obtained through Kriging interpolation match the models in the original database constructed directly from the physics-based tool with the worst relative error far below 1%. The interpolated ASE model exhibits continuously-varying gains along a set of prescribed flight conditions. More importantly, the selected grid points are distributed non-uniformly in the parameter space, a) capturing the distinctly different dynamic behavior and its dependence on flight parameters, and b) reiterating the need and utility for adaptive space sampling techniques for ASE model database compaction. The present framework is directly extendible to high-dimensional flight parameter space, and can be used to guide the ASE model development, model order reduction, robust control synthesis and novel vehicle design of flexible aircraft.
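    A sketch of the greedy sampling loop described above: start from a few flight conditions, fit a surrogate, find the grid point with the worst relative error against the benchmark database, add it, and repeat until a tolerance or budget is met. SciPy's RBFInterpolator stands in for the Kriging surrogate and the toy benchmark array stands in for the ASE model database; both are assumptions for illustration, not the authors' implementation.

```python
# Sketch of greedy surrogate-based sampling (SciPy RBF as a stand-in for Kriging).
import numpy as np
from scipy.interpolate import RBFInterpolator

def greedy_sample(grid, benchmark, init, tol=1e-2, budget=50):
    """grid: (n, 2) flight-parameter points; benchmark: (n, k) model responses."""
    chosen = list(init)
    for _ in range(budget):
        surrogate = RBFInterpolator(grid[chosen], benchmark[chosen],
                                    kernel="gaussian", epsilon=3.0, smoothing=1e-8)
        pred = surrogate(grid)
        rel_err = np.max(np.abs(pred - benchmark) / (np.abs(benchmark) + 1e-12), axis=1)
        rel_err[chosen] = 0.0                     # points already in the training set
        worst = int(np.argmax(rel_err))
        if rel_err[worst] < tol:
            break                                 # tolerance met on the whole grid
        chosen.append(worst)                      # add the worst-case grid point
    return chosen

# Toy "model database" over a 15 x 15 flight-parameter grid.
g1, g2 = np.meshgrid(np.linspace(0, 1, 15), np.linspace(0, 1, 15))
grid = np.column_stack([g1.ravel(), g2.ravel()])
bench = np.column_stack([np.sin(3 * grid[:, 0]) * grid[:, 1],
                         np.cos(2 * grid[:, 1]) + grid[:, 0]])
print(len(greedy_sample(grid, bench, init=[0, 14, 210, 224])), "of", len(grid), "points kept")
```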

  2. Seismic noise attenuation using an online subspace tracking algorithm

    Zhou, Yatong; Li, Shuhua; Zhang, Dong; Chen, Yangkang

    2018-02-01

    We propose a new low-rank based noise attenuation method using an efficient algorithm for tracking subspaces from highly corrupted seismic observations. The subspace tracking algorithm requires only basic linear algebraic manipulations. The algorithm is derived by analysing incremental gradient descent on the Grassmannian manifold of subspaces. When the multidimensional seismic data are mapped to a low-rank space, the subspace tracking algorithm can be directly applied to the input low-rank matrix to estimate the useful signals. Since the subspace tracking algorithm is an online algorithm, it is more robust to random noise than traditional truncated singular value decomposition (TSVD) based subspace tracking algorithm. Compared with the state-of-the-art algorithms, the proposed denoising method can obtain better performance. More specifically, the proposed method outperforms the TSVD-based singular spectrum analysis method in causing less residual noise and also in saving half of the computational cost. Several synthetic and field data examples with different levels of complexities demonstrate the effectiveness and robustness of the presented algorithm in rejecting different types of noise including random noise, spiky noise, blending noise, and coherent noise.
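    The following is a simplified sketch of online subspace tracking in the spirit described above: each arriving data vector nudges an orthonormal basis by an incremental gradient step on its reconstruction residual, followed by re-orthonormalization. It is a stand-in for, not a reproduction of, the Grassmannian (GROUSE-style) update used in the paper.

```python
# Simplified sketch of online subspace tracking (not the paper's Grassmannian update).
import numpy as np

def track_subspace(stream, rank, step=0.1, seed=0):
    """Update an orthonormal basis U incrementally from a stream of data vectors."""
    rng = np.random.default_rng(seed)
    U = np.linalg.qr(rng.standard_normal((len(stream[0]), rank)))[0]
    for v in stream:
        v = np.asarray(v, dtype=float)
        w = U.T @ v                        # coefficients of v in the current subspace
        r = v - U @ w                      # residual outside the subspace
        U = U + step * np.outer(r, w)      # incremental gradient step
        U = np.linalg.qr(U)[0]             # re-orthonormalize the basis
    return U

# Toy stream: noisy vectors drawn from a 3-dimensional subspace of R^20.
rng = np.random.default_rng(2)
basis = np.linalg.qr(rng.standard_normal((20, 3)))[0]
stream = [basis @ rng.standard_normal(3) + 0.01 * rng.standard_normal(20) for _ in range(500)]
U = track_subspace(stream, rank=3)
print(np.linalg.norm(U.T @ basis))         # close to sqrt(3) when the subspace is recovered
```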

  3. Parameterless evolutionary algorithm applied to the nuclear reload problem

    Caldas, Gustavo Henrique Flores; Schirru, Roberto

    2008-01-01

    In this work, an evolutionary algorithm with no parameters, called FPBIL (parameter-free PBIL), is developed based on PBIL (population-based incremental learning). Moreover, the analysis reveals how the parameters of PBIL can be replaced by self-adaptable mechanisms that emerge from the radically different way in which the evolution is processed. Despite these advantages, FPBIL remains compact and relatively modest in its use of computational resources. FPBIL is then applied to the nuclear reload problem. The experimental results are compared to those of other works and corroborate the superiority of the new algorithm.
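
    For reference, the standard PBIL update that FPBIL builds on can be sketched as follows; the learning rate and population size shown here are exactly the hand-tuned parameters that FPBIL replaces with self-adaptable mechanisms (not reproduced here).

```python
import random

def pbil(fitness, n_bits, pop_size=50, lr=0.1, generations=200):
    """Minimal PBIL: evolve a probability vector instead of an explicit population.
    `fitness` maps a list of bits to a number to be maximized."""
    p = [0.5] * n_bits                        # initial probability vector
    best, best_fit = None, float("-inf")
    for _ in range(generations):
        pop = [[1 if random.random() < pi else 0 for pi in p] for _ in range(pop_size)]
        pop.sort(key=fitness, reverse=True)
        elite = pop[0]                        # best individual of this generation
        if fitness(elite) > best_fit:
            best, best_fit = elite, fitness(elite)
        # shift the probability vector toward the elite individual
        p = [(1 - lr) * pi + lr * bi for pi, bi in zip(p, elite)]
    return best, best_fit

# Example: maximize the number of ones (OneMax):  pbil(sum, n_bits=20)
```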

  4. Localized Ambient Solidity Separation Algorithm Based Computer User Segmentation

    Sun, Xiao; Zhang, Tongda; Chai, Yueting; Liu, Yi

    2015-01-01

    Most popular clustering methods make strong assumptions about the dataset. For example, k-means implicitly assumes that all clusters come from spherical Gaussian distributions with different means but the same covariance. However, when dealing with datasets that have diverse distribution shapes or high dimensionality, these assumptions may no longer be valid. To overcome this weakness, we propose a new clustering algorithm named the localized ambient solidity separation (LASS) algorithm, which uses a new isolation criterion called centroid distance. Compared with other density-based isolation criteria, the proposed centroid distance isolation criterion addresses the problems caused by high dimensionality and varying density. An experiment on a designed two-dimensional benchmark dataset shows that the proposed LASS algorithm not only inherits the advantage of the original dissimilarity increments clustering method in separating naturally isolated clusters but can also identify clusters that are adjacent, overlapping, or under background noise. Finally, we compared the LASS algorithm with the dissimilarity increments clustering method on a massive computer user dataset with over two million records containing demographic and behavioral information. The results show that the LASS algorithm works extremely well on this computer user dataset and can extract more knowledge from it. PMID:26221133

  5. A scalable and practical one-pass clustering algorithm for recommender system

    Khalid, Asra; Ghazanfar, Mustansar Ali; Azam, Awais; Alahmari, Saad Ali

    2015-12-01

    K-means clustering-based recommendation algorithms have been proposed with the claim of increasing the scalability of recommender systems. One potential drawback of these algorithms is that they perform training offline and hence cannot accommodate incremental updates as new data arrive, making them unsuitable for dynamic environments. Along this line of research, a new clustering algorithm called One-Pass is proposed, which is simple, fast, and accurate. We show empirically that the proposed algorithm outperforms K-means in terms of recommendation and training time while maintaining a good level of accuracy.
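
    The abstract does not spell out the One-Pass algorithm itself, but a generic single-pass (leader-style) clustering scheme conveys the idea of incremental, online cluster formation; the radius threshold and running-mean update below are illustrative assumptions, not the authors' method.

```python
import numpy as np

def one_pass_cluster(points, radius):
    """Generic single-pass (leader-style) clustering: each point is assigned to the
    nearest existing centroid if it lies within `radius`, otherwise it seeds a new
    cluster. Centroids are updated incrementally, so new data can be absorbed online."""
    centroids, counts, labels = [], [], []
    for x in points:
        if centroids:
            d = [np.linalg.norm(x - c) for c in centroids]
            j = int(np.argmin(d))
        if not centroids or d[j] > radius:
            centroids.append(np.array(x, dtype=float))      # start a new cluster
            counts.append(1)
            labels.append(len(centroids) - 1)
        else:
            counts[j] += 1
            centroids[j] += (x - centroids[j]) / counts[j]   # running-mean update
            labels.append(j)
    return centroids, labels
```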

  6. Do otolith increments allow correct inferences about age and growth of coral reef fishes?

    Booth, D. J.

    2014-03-01

    Otolith increment structure is widely used to estimate age and growth of marine fishes. Here, I test the accuracy of long-term otolith increment analysis of the lemon damselfish Pomacentrus moluccensis for describing age and growth characteristics. I compare the number of putative annual otolith increments (as a proxy for actual age) and the widths of these increments (as proxies for somatic growth) with actual tagged fish-length data, based on a 6-year dataset, the longest time course for a coral reef fish. Estimated age from otoliths corresponded closely with actual age in all cases, confirming annual increment formation. However, otolith increment widths were poor proxies for actual growth in length [linear regression r² = 0.44-0.90, n = 6 fish] and were clearly of limited value in estimating annual growth. Up to 60% of the annual growth variation was missed using otolith increments, suggesting that long-term back-calculations of otolith growth characteristics of reef fish populations should be interpreted with caution.

  7. Quantum Computation and Algorithms

    Biham, O.; Biron, D.; Biham, E.; Grassi, M.; Lidar, D.A.

    1999-01-01

    It is now firmly established that quantum algorithms provide a substantial speedup over classical algorithms for a variety of problems, including the factorization of large numbers and the search for a marked element in an unsorted database. In this talk I will review the principles of quantum algorithms, the basic quantum gates and their operation. The combination of superposition and interference, that makes these algorithms efficient, will be discussed. In particular, Grover's search algorithm will be presented as an example. I will show that the time evolution of the amplitudes in Grover's algorithm can be found exactly using recursion equations, for any initial amplitude distribution
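
    For the uniform initial distribution, the amplitude recursion mentioned above takes a simple closed form; a quick numerical sketch (with N items, M of them marked) is shown below. The general case with an arbitrary initial amplitude distribution discussed in the talk is not reproduced here.

```python
from math import sqrt

def grover_amplitudes(N, M, iterations):
    """Amplitude recursion for Grover search: k is the amplitude of each of the M
    marked items, l the amplitude of each of the N-M unmarked items (uniform start).
    Returns a list of (k, l, success probability) after each iteration."""
    k = l = 1.0 / sqrt(N)
    history = [(k, l, M * k * k)]
    for _ in range(iterations):
        # both right-hand sides use the previous (k, l)
        k, l = ((N - 2 * M) / N) * k + (2 * (N - M) / N) * l, \
               ((N - 2 * M) / N) * l - (2 * M / N) * k
        history.append((k, l, M * k * k))
    return history

# e.g. N = 1024, M = 1: the success probability M*k^2 peaks after roughly
# (pi/4)*sqrt(N/M) ~ 25 iterations.
```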

  8. Dental caries increments and related factors in children with type 1 diabetes mellitus.

    Siudikiene, J; Machiulskiene, V; Nyvad, B; Tenovuo, J; Nedzelskiene, I

    2008-01-01

    The aim of this study was to analyse possible associations between caries increments and selected caries determinants in children with type 1 diabetes mellitus and their age- and sex-matched non-diabetic controls, over 2 years. A total of 63 (10-15 years old) diabetic and non-diabetic pairs were examined for dental caries, oral hygiene and salivary factors. Salivary flow rates, buffer effect, concentrations of mutans streptococci, lactobacilli, yeasts, total IgA and IgG, protein, albumin, amylase and glucose were analysed. Means of 2-year decayed/missing/filled surface (DMFS) increments were similar in diabetics and their controls. Over the study period, both unstimulated and stimulated salivary flow rates remained significantly lower in diabetic children compared to controls. No differences were observed in the counts of lactobacilli, mutans streptococci or yeast growth during follow-up, whereas salivary IgA, protein and glucose concentrations were higher in diabetics than in controls throughout the 2-year period. Multivariable linear regression analysis showed that children with higher 2-year DMFS increments were older at baseline and had higher salivary glucose concentrations than children with lower 2-year DMFS increments. Likewise, higher 2-year DMFS increments in diabetics versus controls were associated with greater increments in salivary glucose concentrations in diabetics. Higher increments in active caries lesions in diabetics versus controls were associated with greater increments of dental plaque and greater increments of salivary albumin. Our results suggest that, in addition to dental plaque as a common caries risk factor, diabetes-induced changes in salivary glucose and albumin concentrations are indicative of caries development among diabetics. Copyright 2008 S. Karger AG, Basel.

  9. Space-time quantitative source apportionment of soil heavy metal concentration increments.

    Yang, Yong; Christakos, George; Guo, Mingwu; Xiao, Lu; Huang, Wei

    2017-04-01

    Assessing the space-time trends and detecting the sources of heavy metal accumulation in soils have important consequences for the prevention and treatment of soil heavy metal pollution. In this study, we collected soil samples in the eastern part of the Qingshan district, Wuhan city, Hubei Province, China, during the period 2010-2014. The Cd, Cu, Pb and Zn concentrations in soils exhibited a significant accumulation during 2010-2014. The spatiotemporal Kriging technique, based on a quantitative characterization of soil heavy metal concentration variations in terms of non-separable variogram models, was employed to estimate the spatiotemporal soil heavy metal distribution in the study region. Our findings showed that the Cd, Cu, and Zn concentrations have an obvious incremental tendency from the southwestern to the central part of the study region, whereas the Pb concentrations exhibited an obvious tendency from the northern part to the central part of the region. Then, spatial overlay analysis was used to obtain absolute and relative concentration increments of adjacent 1- or 5-year periods during 2010-2014. The spatial distribution of soil heavy metal concentration increments showed that the larger increments occurred in the center of the study region. Lastly, principal component analysis combined with the multiple linear regression method was employed to quantify the source apportionment of the soil heavy metal concentration increments in the region. Our results led to the conclusion that the sources of soil heavy metal concentration increments should be ascribed to industry, agriculture and traffic. In particular, 82.5% of the soil heavy metal concentration increment during 2010-2014 was ascribed to industrial/agricultural activity sources. Highlights: using STK and SOA to obtain the spatial distribution of heavy metal concentration increments in soils; using PCA-MLR to quantify the source apportionment of soil heavy metal concentration increments. Copyright © 2017

  10. On the validity of the incremental approach to estimate the impact of cities on air quality

    Thunis, Philippe

    2018-01-01

    The question of how much cities are the sources of their own air pollution is not only theoretical; it is critical to the design of effective strategies for urban air quality planning. In this work, we assess the validity of the commonly used incremental approach to estimate the likely impact of cities on their air pollution. With the incremental approach, the city impact (i.e. the concentration change generated by the city emissions) is estimated as the concentration difference between a rural background and an urban background location, also known as the urban increment. We show that the city impact is in reality made up of the urban increment and two additional components, and consequently two assumptions need to be fulfilled for the urban increment to be representative of the urban impact. The first assumption is that the rural background location is not influenced by emissions from within the city, whereas the second requires that background concentration levels, obtained with zero city emissions, are equal at both locations. Because the urban impact is not measurable, the SHERPA modelling approach, based on a full air quality modelling system, is used in this work to assess the validity of these assumptions for some European cities. Results indicate that for PM2.5 these two assumptions are far from being fulfilled for many large or medium-sized cities. For this type of city, urban increments largely underestimate city impacts. Although results are in better agreement for NO2, similar issues arise. In many situations the incremental approach is therefore not an adequate estimate of the urban impact on air pollution. This poses issues of interpretation when these increments are used to define strategic options for air quality planning. We finally illustrate the interest of comparing modelled and measured increments to improve our confidence in the model results.

  11. Annual increments, specific gravity and energy of Eucalyptus grandis by gamma-ray attenuation technique

    Rezende, M.A.; Guerrini, I.A.; Ferraz, E.S.B.

    1990-01-01

    Annual increments in volume, mass and energy, together with the specific gravity, of Eucalyptus grandis at thirteen years of age were determined, taking into account measurements of the calorific value of the wood. It was observed that the calorific value of the wood decreases slightly, while the specific gravity increases significantly, with age. The so-called culmination age for the annual volume increment was determined to be around the fourth year of growth, while that for the annual mass and energy increment was around the eighth year. These results show that a tree at a particular age may not have significant growth in volume, yet may still have significant growth in mass and energy. (author)

  12. Sustained change blindness to incremental scene rotation: a dissociation between explicit change detection and visual memory.

    Hollingworth, Andrew; Henderson, John M

    2004-07-01

    In a change detection paradigm, the global orientation of a natural scene was incrementally changed in 1 degree intervals. In Experiments 1 and 2, participants demonstrated sustained change blindness to incremental rotation, often coming to consider a significantly different scene viewpoint as an unchanged continuation of the original view. Experiment 3 showed that participants who failed to detect the incremental rotation nevertheless reliably detected a single-step rotation back to the initial view. Together, these results demonstrate an important dissociation between explicit change detection and visual memory. Following a change, visual memory is updated to reflect the changed state of the environment, even if the change was not detected.

  13. Fermion cluster algorithms

    Chandrasekharan, Shailesh

    2000-01-01

    Cluster algorithms have been recently used to eliminate sign problems that plague Monte-Carlo methods in a variety of systems. In particular such algorithms can also be used to solve sign problems associated with the permutation of fermion world lines. This solution leads to the possibility of designing fermion cluster algorithms in certain cases. Using the example of free non-relativistic fermions we discuss the ideas underlying the algorithm

  14. Autonomous Star Tracker Algorithms

    Betto, Maurizio; Jørgensen, John Leif; Kilsgaard, Søren

    1998-01-01

    Proposal, in response to an ESA R.f.P., to design algorithms for autonomous star tracker operations. The proposal also included the development of a star tracker breadboard to test the algorithms' performance.

  15. A verified LLL algorithm

    Divasón, Jose; Joosten, Sebastiaan; Thiemann, René; Yamada, Akihisa

    2018-01-01

    The Lenstra-Lenstra-Lovász basis reduction algorithm, also known as LLL algorithm, is an algorithm to find a basis with short, nearly orthogonal vectors of an integer lattice. Thereby, it can also be seen as an approximation to solve the shortest vector problem (SVP), which is an NP-hard problem,

  16. Performance Comparison of Widely-Used Maximum Power Point Tracker Algorithms under Real Environmental Conditions

    DURUSU, A.

    2014-08-01

    Maximum power point trackers (MPPTs) play an essential role in extracting power from photovoltaic (PV) panels, as they make the solar panels operate at the maximum power point (MPP) whatever the changes in environmental conditions are. For this reason, they take an important place in increasing PV system efficiency. MPPTs are driven by MPPT algorithms, and a number of MPPT algorithms have been proposed in the literature. Comparisons of MPPT algorithms in the literature are made with sun-simulator-based test systems under laboratory conditions for short durations. In this study, however, the performances of the four most commonly used MPPT algorithms are compared under real environmental conditions for longer periods. A dual identical experimental setup is designed to compare two of the considered MPPT algorithms simultaneously. As a result of this study, the ranking among these algorithms is presented, and the results show that the Incremental Conductance (IC) algorithm gives the best performance.
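
    The core decision rule of the Incremental Conductance (IC) algorithm can be sketched as follows; the fixed reference-voltage step and the surrounding converter control loop are simplifying assumptions.

```python
def incremental_conductance_step(v, i, v_prev, i_prev, v_ref, dv_ref=0.5):
    """One update of the Incremental Conductance (IC) MPPT rule.
    At the maximum power point dP/dV = 0, i.e. dI/dV = -I/V; the sign of the
    mismatch tells the tracker which way to move the voltage reference."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0:                       # voltage unchanged: use the current change only
        if di > 0:
            v_ref += dv_ref
        elif di < 0:
            v_ref -= dv_ref
    else:
        inc_cond = di / dv            # incremental conductance dI/dV
        if inc_cond > -i / v:         # left of the MPP -> raise the operating voltage
            v_ref += dv_ref
        elif inc_cond < -i / v:       # right of the MPP -> lower the operating voltage
            v_ref -= dv_ref
    return v_ref                      # new reference passed to the converter control
```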

  17. Observers for a class of systems with nonlinearities satisfying an incremental quadratic inequality

    Acikmese, Ahmet Behcet; Martin, Corless

    2004-01-01

    We consider the problem of state estimation for nonlinear time-varying systems whose nonlinearities satisfy an incremental quadratic inequality. Observers are presented which guarantee that the state estimation error converges exponentially to zero.

  18. Performance and delay analysis of hybrid ARQ with incremental redundancy over double rayleigh fading channels

    Chelli, Ali; Zedini, Emna; Alouini, Mohamed-Slim; Barry, John R.; Pä tzold, Matthias

    2014-01-01

    the performance of HARQ from an information theoretic perspective. Analytical expressions are derived for the \\epsilon-outage capacity, the average number of transmissions, and the average transmission rate of HARQ with incremental redundancy assuming a maximum

  19. Incremental Learning of Perceptual Categories for Open-Domain Sketch Recognition

    Lovett, Andrew; Dehghani, Morteza; Forbus, Kenneth

    2007-01-01

    .... This paper describes an incremental learning technique for open-domain recognition. Our system builds generalizations for categories of objects based upon previous sketches of those objects and uses those generalizations to classify new sketches...

  20. Performance of hybrid-ARQ with incremental redundancy over relay channels

    Chelli, Ali; Alouini, Mohamed-Slim

    2012-01-01

    In this paper, we consider a relay network consisting of a source, a relay, and a destination. The source transmits a message to the destination using hybrid automatic repeat request (HARQ) with incremental redundancy (IR). The relay overhears

  1. Robust flight control using incremental nonlinear dynamic inversion and angular acceleration prediction

    Sieberling, S.; Chu, Q.P.; Mulder, J.A.

    2010-01-01

    This paper presents a flight control strategy based on nonlinear dynamic inversion. The approach presented, called incremental nonlinear dynamic inversion, uses properties of general mechanical systems and nonlinear dynamic inversion by feeding back angular accelerations. Theoretically, feedback of
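
    A minimal sketch of the incremental control law implied by this approach: the commanded input is the previous input plus an increment computed from the difference between the commanded and the measured angular acceleration. The control-effectiveness matrix and the outer-loop command are assumed given (and the matrix square and invertible); this is an illustration, not the paper's implementation.

```python
import numpy as np

def indi_control(omega_dot_cmd, omega_dot_meas, u_prev, control_effectiveness):
    """One incremental nonlinear dynamic inversion (INDI) step: instead of inverting
    the full model, command an increment in the control input proportional to the
    difference between desired and measured (or estimated) angular acceleration.
    `control_effectiveness` approximates d(omega_dot)/d(u) at the current condition."""
    delta_u = np.linalg.solve(control_effectiveness, omega_dot_cmd - omega_dot_meas)
    return u_prev + delta_u           # new actuator command

# Typical loop (sketch): omega_dot_cmd comes from an outer-loop linear controller on
# the angular-rate error; omega_dot_meas comes from filtered/differentiated gyro data
# or, as in the paper, from an angular acceleration predictor.
```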

  2. An Environment for Incremental Development of Distributed Extensible Asynchronous Real-time Systems

    Ames, Charles K.; Burleigh, Scott; Briggs, Hugh C.; Auernheimer, Brent

    1996-01-01

    Incremental parallel development of distributed real-time systems is difficult. Architectural techniques and software tools developed at the Jet Propulsion Laboratory's (JPL's) Flight System Testbed make feasible the integration of complex systems in various stages of development.

  3. MUNIX and incremental stimulation MUNE in ALS patients and control subjects

    Furtula, Jasna; Johnsen, Birger; Christensen, Peter Broegger

    2013-01-01

    This study compares the new Motor Unit Number Estimation (MUNE) technique, MUNIX, with the more common incremental stimulation MUNE (IS-MUNE) with respect to reproducibility in healthy subjects and as potential biomarker of disease progression in patients with ALS....

  4. Effect of multiple forming tools on geometrical and mechanical properties in incremental sheet forming

    Wernicke, S.; Dang, T.; Gies, S.; Tekkaya, A. E.

    2018-05-01

    The tendency to a higher variety of products requires economical manufacturing processes suitable for the production of prototypes and small batches. In the case of complex hollow-shaped parts, single point incremental forming (SPIF) represents a highly flexible process. The flexibility of this process comes along with a very long process time. To decrease the process time, a new incremental forming approach with multiple forming tools is investigated. The influence of two incremental forming tools on the resulting mechanical and geometrical component properties compared to SPIF is presented. Sheets made of EN AW-1050A were formed to frustums of a pyramid using different tool-path strategies. Furthermore, several variations of the tool-path strategy are analyzed. A time saving between 40% and 60% was observed depending on the tool-path and the radii of the forming tools while the mechanical properties remained unchanged. This knowledge can increase the cost efficiency of incremental forming processes.

  5. Unified performance analysis of hybrid-ARQ with incremental redundancy over free-space optical channels

    Zedini, Emna; Chelli, Ali; Alouini, Mohamed-Slim

    2014-01-01

    In this paper, we carry out a unified performance analysis of hybrid automatic repeat request (HARQ) with incremental redundancy (IR) from an information theoretic perspective over a point-to-point free-space optical (FSO) system. First, we

  6. Incremental Reconstruction of Urban Environments by Edge-Points Delaunay Triangulation

    Romanoni, Andrea; Matteucci, Matteo

    2016-01-01

    Urban reconstruction from a video captured by a surveying vehicle constitutes a core module of automated mapping. When computational power represents a limited resource and a detailed map is not the primary goal, the reconstruction can be performed incrementally, from a monocular video, by carving a 3D Delaunay triangulation of sparse points; this allows online incremental mapping for tasks such as traversability analysis or obstacle avoidance. To exploit the sharp edges of the urban landscape, we ...

  7. The Boundary Between Planning and Incremental Budgeting: Empirical Examination in a Publicly-Owned Corporation

    S. K. Lioukas; D. J. Chambers

    1981-01-01

    This paper is a study within the field of public budgeting. It focuses on the capital budget, and it attempts to model and analyze the capital budgeting process using a framework previously developed in the literature of incremental budgeting. Within this framework the paper seeks to determine empirically whether the movement of capital expenditure budgets can be represented as the routine application of incremental adjustments over an existing base of allocations and whether further, forward...

  8. Global Combat Support System - Army Increment 2 (GCSS-A Inc 2)

    2016-03-01

    2016 Major Automated Information System Annual Report. Program Name: Global Combat Support System - Army Increment 2 (GCSS-A Inc 2). DoD Component: Army.

  9. Deliberate and Crisis Action Planning and Execution Segments Increment 2A (DCAPES Inc 2A)

    2016-03-01

    2016 Major Automated Information System Annual Report. Program Name: Deliberate and Crisis Action Planning and Execution Segments Increment 2A (DCAPES Inc 2A). DoD Component: Air Force. Acquisition Program Baseline (APB) dated March 9, 2015.

  10. Deliberate and Crisis Action Planning and Execution Segments Increment 2B (DCAPES Inc 2B)

    2016-03-01

    2016 Major Automated Information System Annual Report. Program Name: Deliberate and Crisis Action Planning and Execution Segments Increment 2B (DCAPES Inc 2B). DoD Component: Air Force. Program Description: Deliberate and Crisis Action Planning and Execution Segments (DCAPES) is

  11. Applying CLSM to increment core surfaces for histometric analyses: A novel advance in quantitative wood anatomy

    Wei Liang; Ingo Heinrich; Gerhard Helle; I. Dorado Liñán; T. Heinken

    2013-01-01

    A novel procedure has been developed to conduct cell structure measurements on increment core samples of conifers. The procedure combines readily available hardware and software equipment. The essential part of the procedure is the application of a confocal laser scanning microscope (CLSM) which captures images directly from increment cores surfaced with the advanced WSL core-microtome. Cell wall and lumen are displayed with a strong contrast due to the monochrome black and green nature of th...

  12. Nature-inspired optimization algorithms

    Yang, Xin-She

    2014-01-01

    Nature-Inspired Optimization Algorithms provides a systematic introduction to all major nature-inspired algorithms for optimization. The book's unified approach, balancing algorithm introduction, theoretical background and practical implementation, complements extensive literature with well-chosen case studies to illustrate how these algorithms work. Topics include particle swarm optimization, ant and bee algorithms, simulated annealing, cuckoo search, firefly algorithm, bat algorithm, flower algorithm, harmony search, algorithm analysis, constraint handling, hybrid methods, parameter tuning

  13. VISUALIZATION OF PAGERANK ALGORITHM

    Perhaj, Ervin

    2013-01-01

    The goal of the thesis is to develop a web application that helps users understand the functioning of the PageRank algorithm. The thesis consists of two parts. First, we develop an algorithm to calculate the PageRank values of web pages. The input to the algorithm is a list of web pages and the links between them, which the user enters through the web interface. From these data the algorithm calculates a PageRank value for each page. The algorithm repeats the process until the difference of PageRank va...
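
    The thesis's own implementation is not shown in the abstract; a standard power-iteration PageRank, which repeats the update until successive value changes fall below a tolerance, can be sketched as follows (link structure and parameters are illustrative).

```python
def pagerank(links, damping=0.85, tol=1e-8, max_iter=100):
    """Iterative PageRank: `links` maps each page to the list of pages it links to
    (all link targets are assumed to appear as keys). Iterates until the total
    change in values drops below `tol`."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(max_iter):
        new = {p: (1 - damping) / n for p in pages}
        for p, outgoing in links.items():
            if outgoing:
                share = rank[p] / len(outgoing)
                for q in outgoing:
                    new[q] += damping * share
            else:                          # dangling page: spread its rank evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
        diff = sum(abs(new[p] - rank[p]) for p in pages)
        rank = new
        if diff < tol:
            break
    return rank

# Example: pagerank({"A": ["B"], "B": ["A", "C"], "C": ["A"]})
```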

  14. Parallel sorting algorithms

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms, such as linear arrays, mesh-connected computers, cube-connected computers. Another example where algorithm can be applied is on the shared-memory SIMD (single instruction stream multiple data stream) computers in which the whole sequence to be sorted can fit in the

  15. Quantum algorithm for association rules mining

    Yu, Chao-Hua; Gao, Fei; Wang, Qing-Le; Wen, Qiao-Yan

    2016-10-01

    Association rules mining (ARM) is one of the most important problems in knowledge discovery and data mining. Given a transaction database that has a large number of transactions and items, the task of ARM is to acquire the consumption habits of customers by discovering relationships between itemsets (sets of items). In this paper, we address ARM in the quantum setting and propose a quantum algorithm for the key part of ARM: finding frequent itemsets from the candidate itemsets and acquiring their supports. Specifically, for the case in which there are Mf(k) frequent k-itemsets among the Mc(k) candidate k-itemsets (Mf(k) ≤ Mc(k)), our algorithm can efficiently mine these frequent k-itemsets and estimate their supports by using parallel amplitude estimation and amplitude amplification with complexity O(k·√(Mc(k)·Mf(k))/ε), where ε is the error in estimating the supports. Compared with the classical counterpart, i.e., the classical sampling-based algorithm, whose complexity is O(k·Mc(k)/ε²), our quantum algorithm quadratically improves the dependence on both ε and Mc(k) in the best case, when Mf(k) ≪ Mc(k), and on ε alone in the worst case, when Mf(k) ≈ Mc(k).
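
    The classical sampling-based baseline mentioned above (estimating itemset supports from randomly drawn transactions, with roughly 1/ε² samples for additive error ε) can be sketched as follows; this is the classical comparator, not the quantum routine.

```python
import random

def estimate_supports(transactions, candidate_itemsets, n_samples):
    """Estimate the support of each candidate itemset by sampling transactions
    uniformly with replacement; the standard error scales like 1/sqrt(n_samples),
    hence about 1/eps^2 samples are needed for additive error eps."""
    counts = {frozenset(s): 0 for s in candidate_itemsets}
    for _ in range(n_samples):
        t = set(random.choice(transactions))
        for s in counts:
            if s <= t:                      # itemset contained in the transaction
                counts[s] += 1
    return {s: c / n_samples for s, c in counts.items()}

# Frequent itemsets are then those whose estimated support exceeds the chosen
# minimum-support threshold.
```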

  16. A review of single-sample-based models and other approaches for radiocarbon dating of dissolved inorganic carbon in groundwater

    Han, L. F; Plummer, Niel

    2016-01-01

    Numerous methods have been proposed to estimate the pre-nuclear-detonation 14C content of dissolved inorganic carbon (DIC) recharged to groundwater that has been corrected/adjusted for geochemical processes in the absence of radioactive decay (14C0) - a quantity that is essential for estimation of the radiocarbon age of DIC in groundwater. The models/approaches most commonly used are grouped as follows: (1) single-sample-based models, (2) a statistical approach based on the observed (curved) relationship between 14C and δ13C data for the aquifer, and (3) the geochemical mass-balance approach that constructs adjustment models accounting for all the geochemical reactions known to occur along a groundwater flow path. This review discusses first the geochemical processes behind each of the single-sample-based models, followed by discussions of the statistical approach and the geochemical mass-balance approach. Finally, the applications, advantages and limitations of the three groups of models/approaches are discussed. The single-sample-based models constitute the prevailing use of 14C data in hydrogeology and hydrological studies. This is in part because the models are applied to an individual water sample to estimate the 14C age; therefore the measurement data are easily available. These models have been shown to provide realistic radiocarbon ages in many studies. However, they usually are limited to simple carbonate aquifers, and selection of model may have significant effects on 14C0, often resulting in a wide range of estimates of 14C ages. Of the single-sample-based models, four are recommended for the estimation of 14C0 of DIC in groundwater: Pearson's model (Ingerson and Pearson, 1964; Pearson and White, 1967), Han & Plummer's model (Han and Plummer, 2013), the IAEA model (Gonfiantini, 1972; Salem et al., 1980), and Oeschger's model (Geyh, 2000). These four models include all processes considered in single-sample-based models, and can be used in different ranges of

  17. A System to Derive Optimal Tree Diameter Increment Models from the Eastwide Forest Inventory Data Base (EFIDB)

    Don C. Bragg

    2002-01-01

    This article is an introduction to the computer software used by the Potential Relative Increment (PRI) approach to optimal tree diameter growth modeling. These DOS programs extract qualified tree and plot data from the Eastwide Forest Inventory Data Base (EFIDB), calculate relative tree increment, sort for the highest relative increments by diameter class, and...

  18. Estimation of incremental reactivities for multiple-day scenarios: an application to ethane and dimethoxymethane

    Stockwell, William R.; Geiger, Harald; Becker, Karl H.

    Single-day scenarios are used, by definition, to calculate incremental reactivities (Carter, J. Air Waste Management Assoc. 44 (1994) 881-899), but even unreactive organic compounds may have a non-negligible effect on ozone concentrations if multiple-day scenarios are considered. The concentrations of unreactive compounds and their products may build up over a multiple-day period, and the oxidation products may be highly reactive or highly unreactive, affecting the overall incremental reactivity of the organic compound. We have developed a method for calculating incremental reactivities over multiple days based on a standard scenario for polluted European conditions. This method was used to estimate maximum incremental reactivities (MIR) and maximum ozone incremental reactivities (MOIR) for ethane and dimethoxymethane for scenarios ranging from 1 to 6 days. It was found that the incremental reactivities increased as the length of the simulation period increased. The MIR of ethane increased faster than that of dimethoxymethane as the scenarios became longer. The MOIRs of ethane and dimethoxymethane also increased, but the change was more modest for scenarios longer than 3 days. The MOIRs of both volatile organic compounds were equal, within the uncertainties of their chemical mechanisms, by the 5-day scenario. These results show that dimethoxymethane has an ozone-forming potential on a per-mass basis that is only somewhat greater than that of ethane if multiple-day scenarios are considered.

  19. Calculation of the increment reduction in spruce stands by charcoal smoke

    Guede, J

    1954-01-01

    Chronic damage to spruce trees by charcoal smoke, often hardly noticeable from outward appearance but causing marked reductions of wood increment, can be determined by a calculation based on increment cores. Sulfurous acid anhydride causes closure of the stomata of the needles, by which the circulation of water is checked; assimilation and wood increment are thereby reduced. The cores are taken from uninjured trees belonging to the dominant class, since these trees are subject to irregular variations in the trend of growth only through atmospheric influences and disturbances in the circulation of water. The decrease of increment of a stand can be judged from the trend of growth of the basal area of sample trees. Two methods are applied. In the first, the difference between the mean total increment before the damage was caused and that after it is calculated from the yield table, deriving the site quality classes from the basal area growth of dominant stems; this is possible by using the mean diameter of each age class and the frequency curve of basal area for each site class. In the other method, the reduction of basal area increment of sample trees is measured directly. The total reduction of a stand can be judged from the share of the dominant stem class in the total current growth of basal area of a sound stand and from the percentage reduction of the sample trees.

  20. Cost-effectiveness of available treatment options for patients suffering from severe COPD in the UK: a fully incremental analysis

    Hertel N

    2012-03-01

    Nadine Hertel, Robert W Kotchie, Yevgeniy Samyshkin, Matthew Radford (IMS Consulting Group, London, UK); Samantha Humphreys, Kevin Jameson (MSD Ltd, Hoddesdon, UK). Purpose: Frequent exacerbations, which are both costly and potentially life-threatening, are a major concern to patients with chronic obstructive pulmonary disease (COPD), despite the availability of several treatment options. This study aimed to assess the lifetime costs and outcomes associated with alternative treatment regimens for patients with severe COPD in the UK setting. Patients and methods: A Markov cohort model was developed to predict lifetime costs, outcomes, and cost-effectiveness of various combinations of a long-acting muscarinic antagonist (LAMA), a long-acting beta agonist (LABA), an inhaled corticosteroid (ICS), and roflumilast in a fully incremental analysis. Patients willing and able to take ICS, and those refusing or intolerant to ICS, were analyzed separately. Efficacy was expressed as relative rate ratios of COPD exacerbation associated with alternative treatment regimens, taken from a mixed treatment comparison. The analysis was conducted from the UK National Health Service (NHS) perspective. Parameter uncertainty was explored using one-way and probabilistic sensitivity analysis. Results: Based on the results of the fully incremental analysis, a cost-effectiveness frontier was determined, indicating those treatment regimens which represent the most cost-effective use of NHS resources. For ICS-tolerant patients the cost-effectiveness frontier suggested LAMA as initial treatment. Where patients continue to exacerbate and additional therapy is required, LAMA + LABA/ICS can be a cost-effective option, followed by LAMA + LABA/ICS + roflumilast (incremental cost-effectiveness ratio [ICER] versus LAMA + LABA/ICS: £16,566 per quality-adjusted life-year [QALY] gained). The ICER in ICS-intolerant patients, comparing LAMA + LABA + roflumilast versus LAMA + LABA, was £13

  1. Modified Clipped LMS Algorithm

    Lotfizad Mojtaba

    2005-01-01

    A new algorithm is proposed for updating the weights of an adaptive filter. The proposed algorithm is a modification of an existing method, namely, the clipped LMS, and uses a three-level quantization scheme that involves threshold clipping of the input signals in the filter weight update formula. Mathematical analysis shows the convergence of the filter weights to the optimum Wiener filter weights. Also, it can be proved that the proposed modified clipped LMS (MCLMS) algorithm has better tracking than the LMS algorithm. In addition, this algorithm has reduced computational complexity relative to the unmodified one. By using a suitable threshold, it is possible to increase the tracking capability of the MCLMS algorithm compared to the LMS algorithm, but this causes slower convergence. Computer simulations confirm the mathematical analysis presented.
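
    A sketch of a clipped LMS weight update with three-level quantization of the input, in the spirit of the description above; the exact thresholding used in the MCLMS is not given in the abstract, so the rule below is an illustrative variant.

```python
import numpy as np

def mclms(x, d, n_taps, mu=0.01, threshold=0.1):
    """Clipped LMS sketch: the regressor in the weight update is replaced by a
    three-level (-1/0/+1) quantized version controlled by a clipping threshold,
    which removes multiplications from the update. x, d are 1-D numpy arrays."""
    w = np.zeros(n_taps)
    y = np.zeros(len(x))
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]      # current and past input samples
        q = np.where(np.abs(u) > threshold, np.sign(u), 0.0)  # three-level quantization
        y[n] = w @ u                           # filter output
        e = d[n] - y[n]                        # estimation error
        w = w + mu * e * q                     # clipped weight update
    return w, y
```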

  2. A sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory.

    Johnson, J. D. (Prostat, Mesa, AZ); Oberkampf, William Louis; Helton, Jon Craig (Arizona State University, Tempe, AZ); Storlie, Curtis B. (North Carolina State University, Raleigh, NC)

    2006-10-01

    Evidence theory provides an alternative to probability theory for the representation of epistemic uncertainty in model predictions that derives from epistemic uncertainty in model inputs, where the descriptor epistemic is used to indicate uncertainty that derives from a lack of knowledge with respect to the appropriate values to use for various inputs to the model. The potential benefit, and hence appeal, of evidence theory is that it allows a less restrictive specification of uncertainty than is possible within the axiomatic structure on which probability theory is based. Unfortunately, the propagation of an evidence theory representation for uncertainty through a model is more computationally demanding than the propagation of a probabilistic representation for uncertainty, with this difficulty constituting a serious obstacle to the use of evidence theory in the representation of uncertainty in predictions obtained from computationally intensive models. This presentation describes and illustrates a sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory. Preliminary trials indicate that the presented strategy can be used to propagate uncertainty representations based on evidence theory in analysis situations where naive sampling-based (i.e., unsophisticated Monte Carlo) procedures are impracticable due to computational cost.
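
    The basic idea of a sampling-based propagation of an evidence-theory input description can be sketched for the simple case of interval-valued focal elements of a single input; the model, the masses, and the event of interest below are hypothetical, and the sampled min/max only approximate the true range over each focal element.

```python
import numpy as np

def belief_plausibility(focal_intervals, masses, model, y_threshold, n_samples=1000):
    """Sampling-based propagation of an evidence-theory (Dempster-Shafer) input
    description through a model. Each focal element is an interval with a mass;
    the model's range over each focal element is estimated by naive Monte Carlo,
    and belief/plausibility of the event {model output <= y_threshold} accumulate."""
    rng = np.random.default_rng(0)
    bel = pl = 0.0
    for (lo, hi), m in zip(focal_intervals, masses):
        xs = rng.uniform(lo, hi, n_samples)        # samples within the focal element
        ys = np.array([model(x) for x in xs])
        if ys.max() <= y_threshold:                # sampled image entirely inside the event
            bel += m
        if ys.min() <= y_threshold:                # sampled image intersects the event
            pl += m
    return bel, pl

# Hypothetical example:
# belief_plausibility([(0, 1), (0.5, 2)], [0.6, 0.4], model=lambda x: x**2, y_threshold=1.0)
```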

  3. Semioptimal practicable algorithmic cooling

    Elias, Yuval; Mor, Tal; Weinstein, Yossi

    2011-01-01

    Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.

  4. CALCULATION ALGORITHM TRUSS UNDER CRANE BEAMS

N. K. Akaev

    2016-01-01

    Aim. The tasks of reducing the deflection and increasing the rigidity of single-span beams are addressed, and a calculation algorithm for truss crane girders is presented. Methods. To identify the internal efforts required for the selection of cross-section elements, the design uses the Green's function. Results. It was found that the simplest truss system reduces deflection and increases the strength of the design. The upper crossbar is subjected not only to bending and shear but also to compression due to the tightening tension. For the preliminary determination of the geometrical characteristics of the crane truss elements, it is proposed to make a comparison with previous trusses of similar configuration, using simple approximate calculation methods. Conclusion. The method of sequential (incremental) movements of the two bridge cranes along the length of the upper crossbar of the truss beam is suggested. The corresponding formulas and safety conditions are given.

  5. Introduction to Evolutionary Algorithms

    Yu, Xinjie

    2010-01-01

    Evolutionary algorithms (EAs) are becoming increasingly attractive for researchers from various disciplines, such as operations research, computer science, industrial engineering, electrical engineering, social science, economics, etc. This book presents an insightful, comprehensive, and up-to-date treatment of EAs, such as genetic algorithms, differential evolution, evolution strategy, constraint optimization, multimodal optimization, multiobjective optimization, combinatorial optimization, evolvable hardware, estimation of distribution algorithms, ant colony optimization, particle swarm opti

  6. Recursive forgetting algorithms

    Parkum, Jens; Poulsen, Niels Kjølstad; Holst, Jan

    1992-01-01

    In the first part of the paper, a general forgetting algorithm is formulated and analysed. It contains most existing forgetting schemes as special cases. Conditions are given ensuring that the basic convergence properties will hold. In the second part of the paper, the results are applied to a specific algorithm with selective forgetting. Here, the forgetting is non-uniform in time and space. The theoretical analysis is supported by a simulation example demonstrating the practical performance of this algorithm.
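
    As a concrete special case of the forgetting schemes discussed, recursive least squares with uniform exponential forgetting can be sketched as follows; the selective, non-uniform forgetting of the second part of the paper is not reproduced here.

```python
import numpy as np

def rls_forgetting(X, d, lam=0.98, delta=100.0):
    """Recursive least squares with uniform exponential forgetting.
    X: (T, p) regressors, d: (T,) desired outputs, lam: forgetting factor in (0, 1]."""
    p = X.shape[1]
    w = np.zeros(p)
    P = delta * np.eye(p)                    # inverse of the (forgotten) information matrix
    for x, dn in zip(X, d):
        e = dn - w @ x                       # a priori prediction error
        k = P @ x / (lam + x @ P @ x)        # gain vector
        w = w + k * e                        # parameter update
        P = (P - np.outer(k, x @ P)) / lam   # covariance update with forgetting
    return w
```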

  7. Explaining algorithms using metaphors

    Forišek, Michal

    2013-01-01

    There is a significant difference between designing a new algorithm, proving its correctness, and teaching it to an audience. When teaching algorithms, the teacher's main goal should be to convey the underlying ideas and to help the students form correct mental models related to the algorithm. This process can often be facilitated by using suitable metaphors. This work provides a set of novel metaphors identified and developed as suitable tools for teaching many of the 'classic textbook' algorithms taught in undergraduate courses worldwide. Each chapter provides exercises and didactic notes fo

  8. Algorithms in Algebraic Geometry

    Dickenstein, Alicia; Sommese, Andrew J

    2008-01-01

    In the last decade, there has been a burgeoning of activity in the design and implementation of algorithms for algebraic geometric computation. Some of these algorithms were originally designed for abstract algebraic geometry, but now are of interest for use in applications and some of these algorithms were originally designed for applications, but now are of interest for use in abstract algebraic geometry. The workshop on Algorithms in Algebraic Geometry that was held in the framework of the IMA Annual Program Year in Applications of Algebraic Geometry by the Institute for Mathematics and Its

  9. Shadow algorithms data miner

    Woo, Andrew

    2012-01-01

    Digital shadow generation continues to be an important aspect of visualization and visual effects in film, games, simulations, and scientific applications. This resource offers a thorough picture of the motivations, complexities, and categorized algorithms available to generate digital shadows. From general fundamentals to specific applications, it addresses shadow algorithms and how to manage huge data sets from a shadow perspective. The book also examines the use of shadow algorithms in industrial applications, in terms of what algorithms are used and what software is applicable.

  10. Spectral Decomposition Algorithm (SDA)

    National Aeronautics and Space Administration — Spectral Decomposition Algorithm (SDA) is an unsupervised feature extraction technique similar to PCA that was developed to better distinguish spectral features in...

  11. Quick fuzzy backpropagation algorithm.

    Nikov, A; Stoeva, S

    2001-03-01

    A modification of the fuzzy backpropagation (FBP) algorithm called the QuickFBP algorithm is proposed, in which the computation of the net function is significantly quicker. It is proved that the FBP algorithm is of exponential time complexity, while the QuickFBP algorithm is of polynomial time complexity. Convergence conditions of the QuickFBP, resp. the FBP algorithm, are defined and proved for: (1) single-output neural networks in the case of training patterns with different targets; and (2) multiple-output neural networks in the case of training patterns with an equivalued target vector. They support the automation of the weight training process (quasi-unsupervised learning), establishing the target value(s) depending on the network's input values. In these cases the simulation results confirm the convergence of both algorithms. An example with a large-sized neural network illustrates the significantly greater training speed of the QuickFBP compared to the FBP algorithm. The adaptation of an interactive web system to users on the basis of the QuickFBP algorithm is presented. Since the QuickFBP algorithm ensures quasi-unsupervised learning, this implies its broad applicability in adaptive and adaptable interactive systems, data mining, and other applications.

  12. Portfolios of quantum algorithms.

    Maurer, S M; Hogg, T; Huberman, B A

    2001-12-17

    Quantum computation holds promise for the solution of many intractable problems. However, since many quantum algorithms are stochastic in nature they can find the solution of hard problems only probabilistically. Thus the efficiency of the algorithms has to be characterized by both the expected time to completion and the associated variance. In order to minimize both the running time and its uncertainty, we show that portfolios of quantum algorithms analogous to those of finance can outperform single algorithms when applied to the NP-complete problems such as 3-satisfiability.

  13. The global kernel k-means algorithm for clustering in feature space.

    Tzortzis, Grigorios F; Likas, Aristidis C

    2009-07-01

    Kernel k-means is an extension of the standard k-means clustering algorithm that identifies nonlinearly separable clusters. In order to overcome the cluster initialization problem associated with this method, we propose the global kernel k-means algorithm, a deterministic and incremental approach to kernel-based clustering. Our method adds one cluster at each stage, through a global search procedure consisting of several executions of kernel k-means from suitable initializations. This algorithm does not depend on cluster initialization, identifies nonlinearly separable clusters, and, due to its incremental nature and search procedure, locates near-optimal solutions avoiding poor local minima. Furthermore, two modifications are developed to reduce the computational cost that do not significantly affect the solution quality. The proposed methods are extended to handle weighted data points, which enables their application to graph partitioning. We experiment with several data sets and the proposed approach compares favorably to kernel k-means with random restarts.
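
    A compact sketch of the incremental strategy, working directly from the kernel matrix: the k-cluster problem is solved by keeping the (k-1)-cluster solution and trying each point as the seed of the new cluster, keeping the restart with the lowest objective. The paper's cost-reducing modifications and weighted extension are omitted, and the exact search details here are illustrative.

```python
import numpy as np

def kernel_kmeans(K, labels, n_iter=20):
    """Kernel k-means refinement from an initial labelling, using only the kernel
    matrix K. Squared distance to a cluster mean in feature space:
    K_ii - 2*mean_j K_ij + mean_jl K_jl, with j, l ranging over the cluster."""
    n = K.shape[0]
    for _ in range(n_iter):
        clusters = [np.where(labels == c)[0] for c in np.unique(labels)]
        dist = np.empty((n, len(clusters)))
        for c, idx in enumerate(clusters):
            dist[:, c] = (np.diag(K) - 2 * K[:, idx].mean(axis=1)
                          + K[np.ix_(idx, idx)].mean())
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels

def global_kernel_kmeans(K, n_clusters):
    """Incremental (global) strategy: grow from 1 to n_clusters clusters, trying
    every point as the seed of the newly added cluster at each stage."""
    n = K.shape[0]
    labels = np.zeros(n, dtype=int)                  # 1-cluster solution
    for k in range(2, n_clusters + 1):
        best_obj, best_labels = np.inf, None
        for seed in range(n):
            init = labels.copy()
            init[seed] = k - 1                       # candidate new cluster
            cand = kernel_kmeans(K, init)
            obj = sum(                               # within-cluster feature-space scatter
                np.diag(K)[idx].sum() - K[np.ix_(idx, idx)].sum() / len(idx)
                for idx in (np.where(cand == c)[0] for c in np.unique(cand)))
            if obj < best_obj:
                best_obj, best_labels = obj, cand
        labels = best_labels
    return labels
```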

  14. Rolling scheduling of electric power system with wind power based on improved NNIA algorithm

    Xu, Q. S.; Luo, C. J.; Yang, D. J.; Fan, Y. H.; Sang, Z. X.; Lei, H.

    2017-11-01

    This paper puts forth a rolling modification strategy for day-ahead scheduling of electric power system with wind power, which takes the operation cost increment of unit and curtailed wind power of power grid as double modification functions. Additionally, an improved Nondominated Neighbor Immune Algorithm (NNIA) is proposed for solution. The proposed rolling scheduling model has further improved the operation cost of system in the intra-day generation process, enhanced the system’s accommodation capacity of wind power, and modified the key transmission section power flow in a rolling manner to satisfy the security constraint of power grid. The improved NNIA algorithm has defined an antibody preference relation model based on equal incremental rate, regulation deviation constraints and maximum & minimum technical outputs of units. The model can noticeably guide the direction of antibody evolution, and significantly speed up the process of algorithm convergence to final solution, and enhance the local search capability.

  15. Stem analysis program (GOAP) for evaluating increment and growth data of individual trees

    Gafura Aylak Özdemir

    2016-07-01

    Stem analysis is a method for evaluating in detail the increment and growth data of an individual tree over past periods and is widely used in various forestry disciplines. The raw data of stem analysis consist of annual ring counts and measurements performed on cross-sections taken from an individual tree by the section method. Evaluating these raw data takes considerable time. Thus, a computer program was developed in this study to perform stem analysis quickly and efficiently. The software, which evaluates the raw stem analysis data both numerically and graphically, was written as a macro using the Visual Basic for Applications feature of MS Excel 2013, currently the most widely used spreadsheet program. In the developed software, the height growth model is formed using two different approaches, and individual tree volume based on the section method, cross-sectional area, increments of diameter, height and volume, volume increment percent, and stem form factor at breast height are calculated for the desired period lengths. The calculated values are given as tables. The development of diameter, height, volume, the increments of these variables, volume increment percent, and stem form factor at breast height over periodic age are given as charts. A stem model showing the development of the diameter, height and shape of the individual tree in past periods can also be obtained from the software as a chart.

  16. Atmospheric response to Saharan dust deduced from ECMWF reanalysis (ERA) temperature increments

    Kishcha, P.; Alpert, P.; Barkan, J.; Kirchner, I.; Machenhauer, B.

    2003-09-01

    This study focuses on the atmospheric temperature response to dust deduced from a new source of data: the European Reanalysis (ERA) increments. These increments are the systematic errors of global climate models, generated in the reanalysis procedure. The model errors result not only from the lack of desert dust but also from a complex combination of many kinds of model errors. Over the Sahara desert the lack of the dust radiative effect is believed to be a predominant model defect which should significantly affect the increments. This dust effect was examined by considering the correlation between the increments and remotely sensed dust. Comparisons were made between April temporal variations of the ERA analysis increments and the variations of the Total Ozone Mapping Spectrometer aerosol index (AI) between 1979 and 1993. A distinctive structure was identified in the distribution of correlation, composed of three nested areas with high positive correlation (>0.5), low correlation and high negative correlation (Forecast (ECMWF) suggest that the PCA (NCA) corresponds mainly to anticyclonic (cyclonic) flow, negative (positive) vorticity and downward (upward) airflow. These findings are associated with the interaction between dust-forced heating/cooling and atmospheric circulation. This paper contributes to a better understanding of dust radiative processes missed in the model.

  17. An Automated Processing Algorithm for Flat Areas Resulting from DEM Filling and Interpolation

    Xingwei Liu

    2017-11-01

    Correction of digital elevation models (DEMs) for flat areas is a critical process for hydrological analyses and modeling, such as the determination of flow directions and accumulations, and the delineation of drainage networks and sub-basins. In this study, a new algorithm is proposed for flat correction/removal. It uses the puddle delineation (PD) program to identify depressions (including their centers and overflow/spilling thresholds), compute topographic characteristics, and further fill the depressions. Three different levels of elevation increments are used for flat correction. The first and second levels of increments create flows toward the thresholds and centers of the filled depressions or flats, while the third level of small random increments is introduced to cope with multiple-threshold conditions. A set of artificial surfaces and two real-world landscapes were selected to test the new algorithm. The results showed that the proposed method was not limited by the shapes, the number of thresholds, or the surrounding topographic conditions of flat areas. Compared with traditional methods, the new algorithm simplified the flat correction procedure and reduced the final elevation increments by 5.71-33.33%. It can be used to effectively remove/correct topographic flats and create flat-free DEMs.

  18. Algorithm 426 : Merge sort algorithm [M1

    Bron, C.

    1972-01-01

    Sorting by means of a two-way merge has a reputation of requiring a clerically complicated and cumbersome program. This ALGOL 60 procedure demonstrates that, using recursion, an elegant and efficient algorithm can be designed, the correctness of which is easily proved [2]. Sorting n objects gives
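
    The idea translates directly to other languages; a recursive two-way merge sort in Python (rather than the original ALGOL 60 procedure) looks like:

```python
def merge_sort(items):
    """Recursive two-way merge sort: split, sort each half, then merge."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):    # merge the two sorted halves
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]       # append the leftover tail
```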

  19. Incremental Validity of the Trait Emotional Intelligence Questionnaire-Short Form (TEIQue-SF).

    Siegling, A B; Vesely, Ashley K; Petrides, K V; Saklofske, Donald H

    2015-01-01

    This study examined the incremental validity of the adult short form of the Trait Emotional Intelligence Questionnaire (TEIQue-SF) in predicting 7 construct-relevant criteria beyond the variance explained by the Five-factor model and coping strategies. Additionally, the relative contributions of the questionnaire's 4 subscales were assessed. Two samples of Canadian university students completed the TEIQue-SF, along with measures of the Big Five, coping strategies (Sample 1 only), and emotion-laden criteria. The TEIQue-SF showed consistent incremental effects beyond the Big Five or the Big Five and coping strategies, predicting all 7 criteria examined across the 2 samples. Furthermore, 2 of the 4 TEIQue-SF subscales accounted for the measure's incremental validity. Although the findings provide good support for the validity and utility of the TEIQue-SF, directions for further research are emphasized.

  20. Radical and Incremental Innovation Preferences in Information Technology: An Empirical Study in an Emerging Economy

    Tarun K. Sen

    2011-11-01

    Innovation in information technology is a primary driver for growth in developed economies. Research indicates that countries go through three stages in the adoption of innovation strategies: buying innovation through global trade, incremental innovation from other countries by enhancing efficiency, and, at the most developed stage, radically innovating independently for competitive advantage. The first two stages of innovation maturity depend more on cross-border trade than the third stage. In this paper, we find that IT professionals in an emerging economy such as India believe in radical innovation over incremental innovation (adaptation) as a growth strategy, even though competitive advantage may rest in adaptation. The results of the study report the preference for innovation strategies among IT professionals in India and its implications for other rapidly growing emerging economies.

  1. Determining frustum depth of 304 stainless steel plates with various diameters and thicknesses by incremental forming

    Golabi, Sa' id [University of Kashan, Kashan (Iran, Islamic Republic of); Khazaali, Hossain [Bu-Ali Sina University, Hamedan (Iran, Islamic Republic of)

    2014-08-15

    Incremental forming is increasingly popular because of its flexibility and cost savings. However, no engineering data are available to manufacturers for forming simple shapes such as a frustum by incremental forming, and either expensive experimental tests or finite element analysis (FEA) must be employed to determine the depth of a frustum given its thickness, material, cone diameter, wall angle, feed rate, tool diameter, etc. In this study, the finite element technique, confirmed by experimental study, was employed to develop applicable curves for determining the depth of frustums made from 304 stainless steel (SS304) sheet with various cone angles, thicknesses from 0.3 to 1 mm and major diameters from 50 to 200 mm using incremental forming. Using these curves, the frustum angle and depth can be predicted from the sheet thickness and major diameter. The effects of feed rate, vertical pitch and tool diameter on frustum depth and surface quality were also addressed in this study.

  2. EFFECT OF COST INCREMENT DISTRIBUTION PATTERNS ON THE PERFORMANCE OF JIT SUPPLY CHAIN

    Ayu Bidiawati J.R

    2008-01-01

    Full Text Available Cost is an important consideration in supply chain (SC) optimisation. This is due to the emphasis placed on cost reduction in order to optimise profit. Some researchers use cost as one of their performance measures and others propose ways of accurately calculating cost. As a product moves across the SC, its cost also increases. This paper studied the effect of cost increment distribution patterns on the performance of a JIT supply chain. In particular, it is necessary to know if inventory allocation across the SC needs to be modified to accommodate different cost increment distribution patterns. It was found that funnel is still the best card distribution pattern for the JIT-SC regardless of the cost increment distribution patterns used.

  3. Incremental Validity of the DSM-5 Section III Personality Disorder Traits With Respect to Psychosocial Impairment.

    Simms, Leonard J; Calabrese, William R

    2016-02-01

    Traditional personality disorders (PDs) are associated with significant psychosocial impairment. DSM-5 Section III includes an alternative hybrid personality disorder (PD) classification approach, with both type and trait elements, but relatively little is known about the impairments associated with Section III traits. Our objective was to study the incremental validity of Section III traits--compared to normal-range traits, traditional PD criterion counts, and common psychiatric symptomatology--in predicting psychosocial impairment. To that end, 628 current/recent psychiatric patients completed measures of PD traits, normal-range traits, traditional PD criteria, psychiatric symptomatology, and psychosocial impairments. Hierarchical regressions revealed that Section III PD traits incrementally predicted psychosocial impairment over normal-range personality traits, PD criterion counts, and common psychiatric symptomatology. In contrast, the incremental effects for normal-range traits, PD symptom counts, and common psychiatric symptomatology were substantially smaller than for PD traits. These findings have implications for PD classification and the impairment literature more generally.

  4. The balanced scorecard: an incremental approach model to health care management.

    Pineno, Charles J

    2002-01-01

    The balanced scorecard represents a technique used in strategic management to translate an organization's mission and strategy into a comprehensive set of performance measures that provide the framework for implementation of strategic management. This article develops an incremental approach for decision making by formulating a specific balanced scorecard model with an index of nonfinancial as well as financial measures. The incremental approach to costs, including profit contribution analysis and probabilities, allows decisionmakers to assess, for example, how their desire to meet different health care needs will cause changes in service design. This incremental approach to the balanced scorecard may prove to be useful in evaluating the existence of causality relationships between different objective and subjective measures to be included within the balanced scorecard.

  5. Ethical leadership: meta-analytic evidence of criterion-related and incremental validity.

    Ng, Thomas W H; Feldman, Daniel C

    2015-05-01

    This study examines the criterion-related and incremental validity of ethical leadership (EL) with meta-analytic data. Across 101 samples published over the last 15 years (N = 29,620), we observed that EL demonstrated acceptable criterion-related validity with variables that tap followers' job attitudes, job performance, and evaluations of their leaders. Further, followers' trust in the leader mediated the relationships of EL with job attitudes and performance. In terms of incremental validity, we found that EL significantly, albeit weakly in some cases, predicted task performance, citizenship behavior, and counterproductive work behavior-even after controlling for the effects of such variables as transformational leadership, use of contingent rewards, management by exception, interactional fairness, and destructive leadership. The article concludes with a discussion of ways to strengthen the incremental validity of EL. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  6. Incremental net social benefit associated with using nuclear-fueled power plants

    Maoz, I.

    1976-12-01

    The incremental net social benefit (INSB) resulting from nuclear-fueled, rather than coal-fired, electric power generation is assessed. The INSB is defined as the difference between the 'incremental social benefit' (ISB)--caused by the cheaper technology of electric power generation, and the 'incremental social cost' (ISC)--associated with an increased power production, which is induced by cheaper technology. Section 2 focuses on the theoretical and empirical problems associated with the assessment of the long-run price elasticity of the demand for electricity, and the theoretical-econometric considerations that lead to the reasonable estimates of price elasticities of demand from those provided by recent empirical studies. Section 3 covers the theoretical and empirical difficulties associated with the construction of the long-run social marginal cost curves (LRSMC) of electricity. Sections 4 and 5 discuss the assessment methodology and provide numerical examples for the calculation of the INSB resulting from nuclear-fueled power generation

  7. Composite Differential Search Algorithm

    Bo Liu

    2014-01-01

    Full Text Available Differential search algorithm (DS) is a relatively new evolutionary algorithm inspired by the Brownian-like random-walk movement which is used by an organism to migrate. It has been verified to be more effective than ABC, JDE, JADE, SADE, EPSDE, GSA, PSO2011, and CMA-ES. In this paper, we propose four improved solution search algorithms, namely “DS/rand/1,” “DS/rand/2,” “DS/current to rand/1,” and “DS/current to rand/2” to search the new space and enhance the convergence rate for the global optimization problem. In order to verify the performance of different solution search methods, 23 benchmark functions are employed. Experimental results indicate that the proposed algorithm performs better than, or at least comparably to, the original algorithm when considering the quality of the solution obtained. However, these schemes still cannot achieve the best solution for all functions. In order to further enhance the convergence rate and the diversity of the algorithm, a composite differential search algorithm (CDS) is proposed in this paper. This new algorithm combines three new proposed search schemes including “DS/rand/1,” “DS/rand/2,” and “DS/current to rand/1” with three control parameters using a random method to generate the offspring. Experimental results show that CDS has a faster convergence rate and better search ability based on the 23 benchmark functions.
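
    The scheme names above follow the familiar "rand/1"-style notation; the sketch below shows a generic trial-vector generation of that kind, as a hedged illustration rather than the paper's exact update rules. The scale factor F and the population are assumed values.

      # Hedged sketch of "rand/1"-style trial generation in the spirit of the
      # schemes named in the abstract; the paper's exact update rules are not
      # reproduced here, and the scale factor F is an assumed parameter.
      import random

      def trial_rand_1(population, F=0.8):
          """Build one trial vector from three distinct randomly chosen members."""
          r1, r2, r3 = random.sample(population, 3)
          return [a + F * (b - c) for a, b, c in zip(r1, r2, r3)]

      pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
      print(trial_rand_1(pop))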

  8. Algorithms and Their Explanations

    Benini, M.; Gobbo, F.; Beckmann, A.; Csuhaj-Varjú, E.; Meer, K.

    2014-01-01

    By analysing the explanation of the classical heapsort algorithm via the method of levels of abstraction mainly due to Floridi, we give a concrete and precise example of how to deal with algorithmic knowledge. To do so, we introduce a concept already implicit in the method, the ‘gradient of

  9. Finite lattice extrapolation algorithms

    Henkel, M.; Schuetz, G.

    1987-08-01

    Two algorithms for sequence extrapolation, due to von den Broeck and Schwartz and to Bulirsch and Stoer, are reviewed and critically compared. Applications to three-state and six-state quantum chains and to the (2+1)D Ising model show that the algorithm of Bulirsch and Stoer is superior, in particular if only very few finite lattice data are available. (orig.)

  10. Recursive automatic classification algorithms

    Bauman, E V; Dorofeyuk, A A

    1982-03-01

    A variational statement of the automatic classification problem is given. The dependence of the form of the optimal partition surface on the form of the classification objective functional is investigated. A recursive algorithm is proposed for maximising a functional of reasonably general form. The convergence problem is analysed in connection with the proposed algorithm. 8 references.

  11. Graph Colouring Algorithms

    Husfeldt, Thore

    2015-01-01

    This chapter presents an introduction to graph colouring algorithms. The focus is on vertex-colouring algorithms that work for general classes of graphs with worst-case performance guarantees in a sequential model of computation. The presentation aims to demonstrate the breadth of available...
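
    As a concrete illustration of the kind of sequential vertex-colouring algorithm such a survey covers, the following greedy routine colours vertices in a fixed order with the smallest free colour and therefore never uses more than Delta+1 colours, where Delta is the maximum degree. The example graph is illustrative only.

      # A simple greedy (sequential) vertex-colouring routine; it uses at most
      # Delta+1 colours on any graph, where Delta is the maximum degree.
      def greedy_colouring(adjacency):
          colour = {}
          for v in adjacency:                             # fixed vertex order
              taken = {colour[u] for u in adjacency[v] if u in colour}
              c = 0
              while c in taken:                           # smallest free colour
                  c += 1
              colour[v] = c
          return colour

      graph = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}
      print(greedy_colouring(graph))                      # {'a': 0, 'b': 1, 'c': 2, 'd': 0}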

  12. 8. Algorithm Design Techniques

    Algorithms - Algorithm Design Techniques. R K Shyamasundar. Series Article. Resonance – Journal of Science Education, Volume 2, Issue 8. Author affiliation: Computer Science Group, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India.

  13. OXYGEN UPTAKE KINETICS DURING INCREMENTAL- AND DECREMENTAL-RAMP CYCLE ERGOMETRY

    Fadıl Özyener

    2011-09-01

    Full Text Available The pulmonary oxygen uptake (VO2) response to incremental-ramp cycle ergometry typically demonstrates lagged-linear first-order kinetics with a slope of ~10-11 ml·min-1·W-1, both above and below the lactate threshold (θL), i.e. there is no discernible VO2 slow component (or "excess" VO2) above θL. We were interested in determining whether a reverse ramp profile would yield the same response dynamics. Ten healthy males performed a maximal incremental ramp (15-30 W·min-1, depending on fitness). On another day, the work rate (WR) was increased abruptly to the incremental maximum and then decremented at the same rate of 15-30 W·min-1 (step-decremental ramp). Five subjects also performed a sub-maximal ramp-decremental test from 90% of θL. VO2 was determined breath-by-breath from continuous monitoring of respired volumes (turbine) and gas concentrations (mass spectrometer). The incremental-ramp VO2-WR slope was 10.3 ± 0.7 ml·min-1·W-1, whereas that of the descending limb of the decremental ramp was 14.2 ± 1.1 ml·min-1·W-1 (p < 0.005). The sub-maximal decremental-ramp slope, however, was only 9.8 ± 0.9 ml·min-1·W-1: not significantly different from that of the incremental ramp. This suggests that the VO2 response in the supra-θL domain of incremental-ramp exercise manifests not actual, but pseudo, first-order kinetics.

  14. Geometric approximation algorithms

    Har-Peled, Sariel

    2011-01-01

    Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.

  15. Group leaders optimization algorithm

    Daskin, Anmer; Kais, Sabre

    2011-03-01

    We present a new global optimization algorithm in which the influence of the leaders in social groups is used as an inspiration for the evolutionary technique which is designed into a group architecture. To demonstrate the efficiency of the method, a standard suite of single and multi-dimensional optimization functions along with the energies and the geometric structures of Lennard-Jones clusters are given, as well as the application of the algorithm to quantum circuit design problems. We show that, as an improvement over previous methods, the algorithm scales as N^2.5 for Lennard-Jones clusters of N particles. In addition, an efficient circuit design is shown for a two-qubit Grover search algorithm, which is a quantum algorithm providing quadratic speedup over its classical counterpart.

  16. Fast geometric algorithms

    Noga, M.T.

    1984-01-01

    This thesis addresses a number of important problems that fall within the framework of the new discipline of Computational Geometry. The list of topics covered includes sorting and selection, convex hull algorithms, the L1 hull, determination of the minimum encasing rectangle of a set of points, the Euclidean and L1 diameters of a set of points, the metric traveling salesman problem, and finding the superrange of star-shaped and monotone polygons. The main theme of all the work was to develop a set of very fast state-of-the-art algorithms that supersede any rivals in terms of speed and ease of implementation. In some cases existing algorithms were refined; for others new techniques were developed that add to the present database of fast adaptive geometric algorithms. What emerges is a collection of techniques that is successful at merging modern tools developed in the analysis of algorithms with those of classical geometry

  17. Totally parallel multilevel algorithms

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  18. Governance by algorithms

    Francesca Musiani

    2013-08-01

    Full Text Available Algorithms are increasingly often cited as one of the fundamental shaping devices of our daily, immersed-in-information existence. Their importance is acknowledged, their performance scrutinised in numerous contexts. Yet, a lot of what constitutes 'algorithms' beyond their broad definition as “encoded procedures for transforming input data into a desired output, based on specified calculations” (Gillespie, 2013) is often taken for granted. This article seeks to contribute to the discussion about 'what algorithms do' and in which ways they are artefacts of governance, providing two examples drawn from the internet and ICT realm: search engine queries and e-commerce websites’ recommendations to customers. The question of the relationship between algorithms and rules is likely to occupy an increasingly central role in the study and the practice of internet governance, in terms of both institutions’ regulation of algorithms, and algorithms’ regulation of our society.

  19. Where genetic algorithms excel.

    Baum, E B; Boneh, D; Garrett, C

    2001-01-01

    We analyze the performance of a genetic algorithm (GA) we call Culling, and a variety of other algorithms, on a problem we refer to as the Additive Search Problem (ASP). We show that the problem of learning the Ising perceptron is reducible to a noisy version of ASP. Noisy ASP is the first problem we are aware of where a genetic-type algorithm bests all known competitors. We generalize ASP to k-ASP to study whether GAs will achieve "implicit parallelism" in a problem with many more schemata. GAs fail to achieve this implicit parallelism, but we describe an algorithm we call Explicitly Parallel Search that succeeds. We also compute the optimal culling point for selective breeding, which turns out to be independent of the fitness function or the population distribution. We also analyze a mean field theoretic algorithm performing similarly to Culling on many problems. These results provide insight into when and how GAs can beat competing methods.

  20. Network-Oblivious Algorithms

    Bilardi, Gianfranco; Pietracaprina, Andrea; Pucci, Geppino

    2016-01-01

    A framework is proposed for the design and analysis of network-oblivious algorithms, namely algorithms that can run unchanged, yet efficiently, on a variety of machines characterized by different degrees of parallelism and communication capabilities. The framework prescribes that a network......-oblivious algorithm be specified on a parallel model of computation where the only parameter is the problem’s input size, and then evaluated on a model with two parameters, capturing parallelism granularity and communication latency. It is shown that for a wide class of network-oblivious algorithms, optimality...... of cache hierarchies, to the realm of parallel computation. Its effectiveness is illustrated by providing optimal network-oblivious algorithms for a number of key problems. Some limitations of the oblivious approach are also discussed....

  1. Maximal power output during incremental exercise by resistance and endurance trained athletes.

    Sakthivelavan, D S; Sumathilatha, S

    2010-01-01

    This study was aimed at comparing the maximal power output by resistance trained and endurance trained athletes during incremental exercise. Thirty male athletes who received resistance training (Group I) and thirty male athletes of similar age group who received endurance training (Group II) for a period of more than 1 year were chosen for the study. Physical parameters were measured and exercise stress testing was done on a cycle ergometer with a portable gas analyzing system. The maximal progressive incremental cycle ergometer power output at peak exercise and carbon dioxide production at VO2max were measured. Highly significant (P biofeedback and perk up the athlete's performance.

  2. BMI and BMI SDS in childhood: annual increments and conditional change

    Brannsether-Ellingsen, Bente; Eide, Geir Egil; Roelants, Mathieu; Bjerknes, Robert; Juliusson, Petur Benedikt

    2016-01-01

    Background: Early detection of abnormal weight gain in childhood may be important for preventive purposes. It is still debated which annual changes in BMI should warrant attention. Aim: To analyse 1-year increments of Body Mass Index (BMI) and standardised BMI (BMI SDS) in childhood and explore conditional change in BMI SDS as an alternative method to evaluate 1-year changes in BMI. Subjects and methods: The distributions of 1-year increments of BMI (kg/m2) and BMI SDS are summarised by...

  3. Learning in Different Modes: The Interaction Between Incremental and Radical Change

    Petersen, Anders Hedegaard; Boer, Harry; Gertsen, Frank

    2004-01-01

    The objective of the study presented in this article is to contribute to the development of theory on continuous innovation, i.e. the combination of operationally effective exploitation and strategically flexible exploration. A longitudinal case study is presented of the interaction between...... incremental and radical change in a Danish company, observed through the lens of organizational learning. The radical change process is described in five phases, each of which had its own effects on incremental change initiatives in the company. The research identified four factors explaining these effects, all...

  4. A program for the numerical control of a pulse increment system

    Gray, D.C.

    1963-08-21

    This report will describe the important features of the development of magnetic tapes for the numerical control of a pulse-increment system consisting of a modified Gorton lathe and its associated control unit developed by L. E. Foley of Equipment Development Service, Engineering Services, General Electric Co., Schenectady, N.Y. Included is a description of CUPID (Control and Utilization of Pulse Increment Devices), a FORTRAN program for the design of these tapes on the IBM 7090 computer, and instructions for its operation.

  5. Incremental Approach to the Technology of Test Design for Industrial Projects

    P. D. Drobintsev

    2014-01-01

    Full Text Available The paper presents an approach to reducing the effort of developing test suites for industrial software products based on incremental technology. The main problems to be solved by the incremental technology are the full automation of test scenario design and a significant reduction of test explosion. The proposed approach provides solutions to these problems through the joint work of designer and customer, through the integration of symbolic verification with the automatic generation of test suites, and through the use of an efficient technology with the VRS/TAT toolset.

  6. Incremental electrohydraulic forming - A new approach for the manufacture of structured multifunctional sheet metal blanks

    Djakow, Eugen; Springer, Robert; Homberg, Werner; Piper, Mark; Tran, Julian; Zibart, Alexander; Kenig, Eugeny

    2017-10-01

    Electrohydraulic Forming (EHF) processes permit the production of complex, sharp-edged geometries even when high-strength materials are used. Unfortunately, the forming zone is often limited as compared to other sheet metal forming processes. The use of a special industrial-robot-based tool setup and an incremental process strategy could provide a promising solution for this problem. This paper describes such an innovative approach using an electrohydraulic incremental forming machine, which can be employed to manufacture the large multifunctional and complex part geometries in steel, aluminium, magnesium and reinforced plastic that are employed in lightweight constructions or heating elements.

  7. Algorithms in Singular

    Hans Schonemann

    1996-12-01

    Full Text Available Some algorithms for singularity theory and algebraic geometry. The use of Grobner basis computations for treating systems of polynomial equations has become an important tool in many areas. This paper introduces the concept of standard bases (a generalization of Grobner bases) and their application to some problems from algebraic geometry. The examples are presented as SINGULAR commands. A general introduction to Grobner bases can be found in the textbook [CLO], and an introduction to syzygies in [E] and [St1]. SINGULAR is a computer algebra system for computing information about singularities, for use in algebraic geometry. The basic algorithms in SINGULAR are several variants of a general standard basis algorithm for general monomial orderings (see [GG]). This includes well-orderings (Buchberger algorithm [B1], [B2]) and tangent cone orderings (Mora algorithm [M1], [MPT]) as special cases: it is able to work with non-homogeneous and homogeneous input and also to compute in the localization of the polynomial ring at 0. Recent versions include algorithms to factorize polynomials and a factorizing Grobner basis algorithm. For a complete description of SINGULAR see [Si].

  8. A New Modified Firefly Algorithm

    Medha Gupta

    2016-07-01

    Full Text Available Nature-inspired meta-heuristic algorithms study the emergent collective intelligence of groups of simple agents. The Firefly Algorithm is one such new swarm-based metaheuristic algorithm, inspired by the flashing behavior of fireflies. The algorithm was first proposed in 2008 and has since been successfully used for solving various optimization problems. In this work, we propose a new modified version of the Firefly Algorithm (MoFA), and its performance is then compared with the standard firefly algorithm along with various other meta-heuristic algorithms. Numerical studies and results demonstrate that the proposed algorithm is superior to existing algorithms.
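
    For orientation, the sketch below implements the standard firefly movement rule that such modified variants build on; the specific modifications of MoFA are not reproduced, and beta0, gamma and alpha are assumed illustrative parameters.

      # Sketch of the standard firefly movement rule that modified variants
      # build on; beta0, gamma and alpha are assumed illustrative values.
      import math, random

      def move_towards(xi, xj, beta0=1.0, gamma=1.0, alpha=0.2):
          """Move firefly xi towards a brighter firefly xj."""
          r2 = sum((a - b) ** 2 for a, b in zip(xi, xj))   # squared distance
          beta = beta0 * math.exp(-gamma * r2)             # attractiveness decays with distance
          return [a + beta * (b - a) + alpha * (random.random() - 0.5)
                  for a, b in zip(xi, xj)]

      print(move_towards([0.0, 0.0], [1.0, 1.0]))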

  9. Magnet sorting algorithms

    Dinev, D.

    1996-01-01

    Several new algorithms for sorting of dipole and/or quadrupole magnets in synchrotrons and storage rings are described. The algorithms make use of a combinatorial approach to the problem and belong to the class of random search algorithms. They use an appropriate metrization of the state space. The phase-space distortion (smear) is used as a goal function. Computational experiments for the case of the JINR-Dubna superconducting heavy ion synchrotron NUCLOTRON have shown a significant reduction of the phase-space distortion after the magnet sorting. (orig.)

  10. Enthalpy increment measurements of Sr3Zr2O7(s) and Sr4Zr3O10(s)

    Banerjee, A.; Dash, S.; Prasad, R.; Venugopal, V.

    1998-01-01

    Enthalpy increment measurements on Sr3Zr2O7(s) and Sr4Zr3O10(s) were carried out using a Calvet micro-calorimeter. The enthalpy increment values were least-squares analyzed with the constraints that H°(T) - H°(298.15 K) equals zero at 298.15 K and that Cp°(298.15 K) equals the estimated value. The dependence of the enthalpy increment on temperature is given. (orig.)

  11. A Proposal of Estimation Methodology to Improve Calculation Efficiency of Sampling-based Method in Nuclear Data Sensitivity and Uncertainty Analysis

    Song, Myung Sub; Kim, Song Hyun; Kim, Jong Kyung; Noh, Jae Man

    2014-01-01

    The uncertainty with the sampling-based method is evaluated by repeating transport calculations with a number of cross-section data sets sampled from the covariance uncertainty data. In the transport calculation with the sampling-based method, the transport equation is not modified; therefore, all uncertainties of the responses, such as keff, reaction rates, flux and power distribution, can be obtained directly all at one time without code modification. However, a major drawback of the sampling-based method is that it requires an expensive computational load for statistically reliable results (within a 0.95 confidence level) in the uncertainty analysis. The purpose of this study is to develop a method for improving the computational efficiency and obtaining highly reliable uncertainty results when using the sampling-based method with Monte Carlo simulation. The proposed method reduces the convergence time of the response uncertainty by using multiple sets of sampled group cross sections in a single Monte Carlo simulation. The proposed method was verified on the GODIVA benchmark problem and the results were compared with those of the conventional sampling-based method. In this study, a sampling-based method based on the central limit theorem is proposed to improve calculation efficiency by reducing the number of repetitive Monte Carlo transport calculations required to obtain reliable uncertainty analysis results. Each set of sampled group cross sections is assigned to one active cycle group in a single Monte Carlo simulation. The criticality uncertainty for the GODIVA problem is evaluated by the proposed and the previous method. The results show that the proposed sampling-based method can efficiently decrease the number of Monte Carlo simulations required to evaluate the uncertainty of keff. It is expected that the proposed method will improve the computational efficiency of uncertainty analysis with the sampling-based method.
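
    The following toy sketch illustrates the general principle of sampling-based uncertainty propagation (sample parameter sets from a covariance, evaluate a response per set, take the spread as the uncertainty); it uses a simple surrogate response rather than a Monte Carlo transport calculation, and all numbers are assumptions.

      # Generic sketch of sampling-based uncertainty propagation: draw parameter
      # sets from a covariance matrix, evaluate a response for each set, and take
      # the spread as the response uncertainty. The "response" is a toy surrogate,
      # not a Monte Carlo transport calculation.
      import numpy as np

      rng = np.random.default_rng(0)
      mean = np.array([1.0, 2.0])                    # nominal cross-section-like parameters (assumed)
      cov = np.array([[0.01, 0.002], [0.002, 0.04]]) # their covariance (illustrative)

      def response(p):                               # toy stand-in for keff
          return 0.7 * p[0] + 0.15 * p[1]

      samples = rng.multivariate_normal(mean, cov, size=500)
      values = np.array([response(p) for p in samples])
      print(values.mean(), values.std(ddof=1))       # estimated response and its uncertainty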

  12. Noise Reduction Effect of Multiple-Sampling-Based Signal-Readout Circuits for Ultra-Low Noise CMOS Image Sensors

    Shoji Kawahito

    2016-11-01

    Full Text Available This paper discusses the noise reduction effect of multiple-sampling-based signal readout circuits for implementing ultra-low-noise image sensors. The correlated multiple sampling (CMS) technique has recently become an important technology for high-gain column readout circuits in low-noise CMOS image sensors (CISs). This paper reveals how the column CMS circuits, together with a pixel having a high-conversion-gain charge detector and a low-noise transistor, realize deep sub-electron read noise levels, based on the analysis of noise components in the signal readout chain from a pixel to the column analog-to-digital converter (ADC). The noise measurement results of experimental CISs are compared with the noise analysis, and the effect of noise reduction versus the sampling number is discussed at the deep sub-electron level. Images taken with three CMS gains of two, 16, and 128 show a distinct advantage in image contrast for the gain of 128 (median noise: 0.29 e− rms) when compared with the CMS gain of two (2.4 e− rms) or 16 (1.1 e− rms).
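
    A toy simulation of the averaging effect behind CMS is sketched below: averaging M reads of the reset and signal levels reduces uncorrelated read noise roughly as 1/sqrt(M). The noise and signal figures are assumed values, not the sensor parameters reported in the paper.

      # Toy simulation of correlated multiple sampling (CMS): averaging M reads
      # of the reset and signal levels reduces uncorrelated read noise roughly
      # as 1/sqrt(M). All figures below are illustrative assumptions.
      import numpy as np

      rng = np.random.default_rng(1)
      read_noise = 2.0          # e- rms per single read (assumed)
      signal = 10.0             # true signal in electrons (assumed)

      def cms_sample(M, trials=20000):
          reset = rng.normal(0.0, read_noise, (trials, M)).mean(axis=1)
          sig = rng.normal(signal, read_noise, (trials, M)).mean(axis=1)
          return (sig - reset).std()

      for M in (1, 2, 16, 128):
          print(M, round(cms_sample(M), 3))          # noise falls roughly as 1/sqrt(M)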

  13. The Trait Emotional Intelligence Questionnaire: Internal Structure, Convergent, Criterion, and Incremental Validity in an Italian Sample

    Andrei, Federica; Smith, Martin M.; Surcinelli, Paola; Baldaro, Bruno; Saklofske, Donald H.

    2016-01-01

    This study investigated the structure and validity of the Italian translation of the Trait Emotional Intelligence Questionnaire. Data were self-reported from 227 participants. Confirmatory factor analysis supported the four-factor structure of the scale. Hierarchical regressions also demonstrated its incremental validity beyond demographics, the…

  14. One Size Does Not Fit All: Managing Radical and Incremental Creativity

    Gilson, Lucy L.; Lim, Hyoun Sook; D'Innocenzo, Lauren; Moye, Neta

    2012-01-01

    This research extends creativity theory by re-conceptualizing creativity as a two-dimensional construct (radical and incremental) and examining the differential effects of intrinsic motivation, extrinsic rewards, and supportive supervision on perceptions of creativity. We hypothesize and find two distinct types of creativity that are associated…

  15. Relating annual increments of the endangered Blanding's turtle plastron growth to climate.

    Richard, Monik G; Laroque, Colin P; Herman, Thomas B

    2014-05-01

    This research is the first published study to report a relationship between climate variables and plastron growth increments of turtles, in this case the endangered Nova Scotia Blanding's turtle (Emydoidea blandingii). We used techniques and software common to the discipline of dendrochronology to successfully cross-date our growth increment data series, to detrend and average our series of 80 immature Blanding's turtles into one common chronology, and to seek correlations between the chronology and environmental temperature and precipitation variables. Our cross-dated chronology had a series intercorrelation of 0.441 (above 99% confidence interval), an average mean sensitivity of 0.293, and an average unfiltered autocorrelation of 0.377. Our master chronology represented increments from 1975 to 2007 (33 years), with index values ranging from a low of 0.688 in 2006 to a high of 1.303 in 1977. Univariate climate response function analysis on mean monthly air temperature and precipitation values revealed a positive correlation with the previous year's May temperature and current year's August temperature; a negative correlation with the previous year's October temperature; and no significant correlation with precipitation. These techniques for determining growth increment response to environmental variables should be applicable to other turtle species and merit further exploration.

  16. An Empirical Analysis of Incremental Capital Structure Decisions Under Managerial Entrenchment

    de Jong, A.; Veld, C.H.

    1998-01-01

    We study incremental capital structure decisions of Dutch companies. From 1977 to 1996 these companies have made 110 issues of public and private seasoned equity and 137 public issues of straight debt. Managers of Dutch companies are entrenched. For this reason a discrepancy exists between

  17. Volatilities, traded volumes, and the hypothesis of price increments in derivative securities

    Lim, Gyuchang; Kim, SooYong; Scalas, Enrico; Kim, Kyungsik

    2007-08-01

    A detrended fluctuation analysis (DFA) is applied to the statistics of Korean treasury bond (KTB) futures, from which the logarithmic increments, volatilities, and traded volumes are estimated over a specific time lag. In this study, the logarithmic increment of futures prices has no long-memory property, while the volatility and the traded volume exhibit a long-memory property. To determine whether the volatility clustering is due to an inherent higher-order correlation not detected by the direct application of the DFA to the logarithmic increments of KTB futures, it is important to shuffle the original tick data of futures prices and to generate a geometric Brownian random walk with the same mean and standard deviation. A comparison of the three tick data sets shows that the higher-order correlation inherent in the logarithmic increments leads to volatility clustering. In particular, the result of the DFA on volatilities and traded volumes supports the hypothesis of price changes.
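
    For readers unfamiliar with DFA, the following minimal sketch shows the standard procedure applied in such studies: integrate the mean-removed series, detrend it in windows of size n, and read the scaling exponent from the slope of log F(n) versus log n. The test signal here is synthetic white noise, not the KTB data.

      # Minimal detrended fluctuation analysis (DFA) sketch: integrate the
      # mean-removed series, detrend in windows of size n, and estimate the
      # scaling exponent from the slope of log F(n) vs log n.
      import numpy as np

      def dfa(x, window_sizes):
          y = np.cumsum(x - np.mean(x))                       # integrated profile
          results = []
          for n in window_sizes:
              m = len(y) // n
              f2 = []
              for k in range(m):
                  seg = y[k * n:(k + 1) * n]
                  t = np.arange(n)
                  trend = np.polyval(np.polyfit(t, seg, 1), t)    # local linear trend
                  f2.append(np.mean((seg - trend) ** 2))
              results.append(np.sqrt(np.mean(f2)))
          return np.array(results)

      x = np.random.default_rng(2).normal(size=4096)          # uncorrelated noise
      ns = np.array([16, 32, 64, 128, 256])
      F = dfa(x, ns)
      alpha = np.polyfit(np.log(ns), np.log(F), 1)[0]
      print(round(alpha, 2))                                  # ~0.5 for white noise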

  18. BMI and BMI SDS in childhood: annual increments and conditional change.

    Brannsether, Bente; Eide, Geir Egil; Roelants, Mathieu; Bjerknes, Robert; Júlíusson, Pétur Benedikt

    2017-02-01

    Background: Early detection of abnormal weight gain in childhood may be important for preventive purposes. It is still debated which annual changes in BMI should warrant attention. Aim: To analyse 1-year increments of Body Mass Index (BMI) and standardised BMI (BMI SDS) in childhood and explore conditional change in BMI SDS as an alternative method to evaluate 1-year changes in BMI. Subjects and methods: The distributions of 1-year increments of BMI (kg/m2) and BMI SDS are summarised by percentiles. Differences according to sex, age, height, weight, initial BMI and weight status on the BMI and BMI SDS increments were assessed with multiple linear regression. Results: BMI increments depended significantly on sex, height, weight and initial BMI. Changes in BMI SDS depended significantly only on the initial BMI SDS. The distribution of conditional change in BMI SDS using a two-correlation model was close to normal (mean = 0.11, SD = 1.02, n = 1167), with 3.2% (2.3-4.4%) of the observations below -2 SD and 2.8% (2.0-4.0%) above +2 SD. Conclusion: Conditional change in BMI SDS can be used to detect unexpectedly large changes in BMI SDS. Although this method requires the use of a computer, it may be clinically useful to detect aberrant weight development.
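
    A common way to compute such a conditional change score, shown below as a hedged sketch, is to adjust the second SDS for the regression to the mean expected from the first, using the correlation r between SDS at the two ages; the paper's two-correlation model may differ in detail, and r here is an assumed value.

      # Sketch of a conditional change score: the second BMI SDS is adjusted for
      # the regression to the mean expected given the first, using the correlation
      # r between SDS at the two ages. The exact two-correlation model of the
      # paper may differ; r is an assumed illustrative value.
      import math

      def conditional_change(sds1, sds2, r):
          """Standardised change in BMI SDS conditional on the starting value."""
          return (sds2 - r * sds1) / math.sqrt(1.0 - r ** 2)

      print(round(conditional_change(sds1=1.5, sds2=2.4, r=0.9), 2))  # unusually large gain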

  19. Investigating Postgraduate College Admission Interviews: Generalizability Theory Reliability and Incremental Predictive Validity

    Arce-Ferrer, Alvaro J.; Castillo, Irene Borges

    2007-01-01

    The use of face-to-face interviews is controversial for college admissions decisions in light of the lack of availability of validity and reliability evidence for most college admission processes. This study investigated reliability and incremental predictive validity of a face-to-face postgraduate college admission interview with a sample of…

  20. How to Perform Precise Soil and Sediment Sampling? One solution: The Fine Increment Soil Collector (FISC)

    Mabit, L.; Toloza, A.; Meusburger, K.; Alewell, C.; Iurian, A-R.; Owens, P.N.

    2014-01-01

    Soil and sediment related research for terrestrial agrienvironmental assessments requires accurate depth incremental sampling to perform detailed analysis of physical, geochemical and biological properties of soil and exposed sediment profiles. Existing equipment does not allow collecting soil/sediment increments at millimetre resolution. The Fine Increment Soil Collector (FISC), developed by the SWMCN Laboratory, allows much greater precision in incremental soil/sediment sampling. It facilitates the easy recovery of collected material by using a simple screw-thread extraction system (see Figure 1). The FISC has been designed specifically to enable standardized scientific investigation of shallow soil/sediment samples. In particular, applications have been developed in two IAEA Coordinated Research Projects (CRPs): CRP D1.20.11 on “Integrated Isotopic Approaches for an Area-wide Precision Conservation to Control the Impacts of Agricultural Practices on Land Degradation and Soil Erosion” and CRP D1.50.15 on “Response to Nuclear Emergencies Affecting Food and Agriculture.”

  1. Per tree estimates with n-tree distance sampling: an application to increment core data

    Thomas B. Lynch; Robert F. Wittwer

    2002-01-01

    Per tree estimates using the n trees nearest a point can be obtained by using a ratio of per unit area estimates from n-tree distance sampling. This ratio was used to estimate average age by d.b.h. classes for cottonwood trees (Populus deltoides Bartr. ex Marsh.) on the Cimarron National Grassland. Increment...

  2. Literature Review of Data on the Incremental Costs to Design and Build Low-Energy Buildings

    Hunt, W. D.

    2008-05-14

    This document summarizes findings from a literature review into the incremental costs associated with low-energy buildings. The goal of this work is to help establish as firm an analytical foundation as possible for the Building Technology Program's cost-effective net-zero energy goal in the year 2025.

  3. Analogical reasoning: An incremental or insightful process? What cognitive and cortical evidence suggests.

    Antonietti, Alessandro; Balconi, Michela

    2010-06-01

    Abstract The step-by-step, incremental nature of analogical reasoning can be questioned, since analogy making appears to be an insight-like process. This alternative view of analogical thinking can be integrated in Speed's model, even though the alleged role played by dopaminergic subcortical circuits needs further supporting evidence.

  4. A diameter increment model for Red Fir in California and Southern Oregon

    K. Leroy Dolph

    1992-01-01

    Periodic (10-year) diameter increment of individual red fir trees in California and southern Oregon can be predicted from the initial diameter and crown ratio of each tree, the site index, percent slope, and aspect of the site. The model actually predicts the natural logarithm of the change in squared diameter inside bark between the start and the end of a 10-year growth period....

  5. How to Perform Precise Soil and Sediment Sampling? One solution: The Fine Increment Soil Collector (FISC)

    Mabit, L.; Toloza, A. [Soil and Water Management and Crop Nutrition Laboratory, IAEA, Seibersdorf (Austria); Meusburger, K.; Alewell, C. [Environmental Geosciences, Department of Environmental Sciences, University of Basel, Basel (Switzerland); Iurian, A-R. [Babes-Bolyai University, Faculty of Environmental Science and Engineering, Cluj-Napoca (Romania); Owens, P. N. [Environmental Science Program and Quesnel River Research Centre, University of Northern British Columbia, Prince George, British Columbia (Canada)

    2014-07-15

    Soil and sediment related research for terrestrial agrienvironmental assessments requires accurate depth incremental sampling to perform detailed analysis of physical, geochemical and biological properties of soil and exposed sediment profiles. Existing equipment does not allow collecting soil/sediment increments at millimetre resolution. The Fine Increment Soil Collector (FISC), developed by the SWMCN Laboratory, allows much greater precision in incremental soil/sediment sampling. It facilitates the easy recovery of collected material by using a simple screw-thread extraction system (see Figure 1). The FISC has been designed specifically to enable standardized scientific investigation of shallow soil/sediment samples. In particular, applications have been developed in two IAEA Coordinated Research Projects (CRPs): CRP D1.20.11 on “Integrated Isotopic Approaches for an Area-wide Precision Conservation to Control the Impacts of Agricultural Practices on Land Degradation and Soil Erosion” and CRP D1.50.15 on “Response to Nuclear Emergencies Affecting Food and Agriculture.”.

  6. Evaluating growth assumptions using diameter or radial increments in natural even-aged longleaf pine

    John C. Gilbert; Ralph S. Meldahl; Jyoti N. Rayamajhi; John S. Kush

    2010-01-01

    When using increment cores to predict future growth, one often assumes future growth is identical to past growth for individual trees. Once this assumption is accepted, a decision has to be made between which growth estimate should be used, constant diameter growth or constant basal area growth. Often, the assumption of constant diameter growth is used due to the ease...

  7. Diagnostic value of triphasic incremental helical CT in early and progressive gastric carcinoma

    Gao Jianbo; Yan Xuehua; Li Mengtai; Guo Hua; Chen Xuejun; Guan Sheng; Zhang Xiefu; Li Shuxin; Yang Xiaopeng

    2001-01-01

    Objective: To investigate the helical CT enhancement characteristics of gastric carcinoma, and the diagnostic value and preoperative staging of gastric carcinoma with triphasic incremental helical CT of the stomach using the water-filling method. Methods: Both double-contrast barium examination and triphasic incremental helical CT of the stomach with the water-filling method were performed in 46 patients with gastric carcinoma. Results: (1) Among these patients, the normal gastric wall exhibited a one-layered structure in 18 patients and a two- or three-layered structure in 28 patients in the arterial and portal venous phases. (2) Two cases of early stomach cancer showed marked enhancement in the arterial and portal venous phases and obvious attenuation of enhancement in the equilibrium phase. In contrast, 32 of the 44 advanced gastric carcinomas showed marked enhancement in the venous phase compared with the arterial phase (t = 4.226, P < 0.05). (3) The total accuracy of triphasic incremental helical CT in determining TNM staging was 81.0%. Conclusion: Different types of gastric carcinoma have different enhancement features. Triphasic incremental helical CT is more accurate than conventional CT in the preoperative staging of gastric carcinoma.

  8. On the search for an appropriate metric for reaction time to suprathreshold increments and decrements.

    Vassilev, Angel; Murzac, Adrian; Zlatkova, Margarita B; Anderson, Roger S

    2009-03-01

    Weber contrast, DeltaL/L, is a widely used contrast metric for aperiodic stimuli. Zele, Cao & Pokorny [Zele, A. J., Cao, D., & Pokorny, J. (2007). Threshold units: A correct metric for reaction time? Vision Research, 47, 608-611] found that neither Weber contrast nor its transform to detection-threshold units equates human reaction times in response to luminance increments and decrements under selective rod stimulation. Here we show that their rod reaction times are equated when plotted against the spatial luminance ratio between the stimulus and its background (L(max)/L(min), the larger and smaller of background and stimulus luminances). Similarly, reaction times to parafoveal S-cone selective increments and decrements from our previous studies [Murzac, A. (2004). A comparative study of the temporal characteristics of processing of S-cone incremental and decremental signals. PhD thesis, New Bulgarian University, Sofia, Murzac, A., & Vassilev, A. (2004). Reaction time to S-cone increments and decrements. In: 7th European conference on visual perception, Budapest, August 22-26. Perception, 33, 180 (Abstract).], are better described by the spatial luminance ratio than by Weber contrast. We assume that the type of stimulus detection by temporal (successive) luminance discrimination, by spatial (simultaneous) luminance discrimination or by both [Sperling, G., & Sondhi, M. M. (1968). Model for visual luminance discrimination and flicker detection. Journal of the Optical Society of America, 58, 1133-1145.] determines the appropriateness of one or other contrast metric for reaction time.
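
    A quick arithmetic illustration of why the two metrics can behave differently: for an increment and a decrement of equal Weber contrast magnitude, the spatial luminance ratio Lmax/Lmin is not the same. The luminance values below are illustrative, not data from the cited studies.

      # Weber contrast vs. spatial luminance ratio for an increment and a
      # decrement of equal magnitude. Values are illustrative assumptions.
      L = 100.0          # background luminance
      dL = 30.0          # stimulus luminance change

      inc_weber = dL / L                      # +0.30
      dec_weber = -dL / L                     # -0.30
      inc_ratio = (L + dL) / L                # 1.30
      dec_ratio = L / (L - dL)                # ~1.43 (larger than for the increment)

      print(inc_weber, dec_weber, inc_ratio, round(dec_ratio, 2))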

  9. Lead 210 and moss-increment dating of two Finnish Sphagnum hummocks

    El-Daoushy, F.

    1982-01-01

    A comparison is presented of 210Pb dating data with moss-increment dates of selected peat material from Finland. The measurements of 210Pb were carried out by determining the granddaughter product 210Po by means of isotope dilution. The ages in 210Pb yr were calculated using the constant initial concentration and the constant rate of supply models. (U.K.)

  10. Successive 1-Month Weight Increments in Infancy Can Be Used to Screen for Faltering Linear Growth.

    Onyango, Adelheid W; Borghi, Elaine; de Onis, Mercedes; Frongillo, Edward A; Victora, Cesar G; Dewey, Kathryn G; Lartey, Anna; Bhandari, Nita; Baerug, Anne; Garza, Cutberto

    2015-12-01

    Linear growth faltering in the first 2 y contributes greatly to a high stunting burden, and prevention is hampered by the limited capacity in primary health care for timely screening and intervention. This study aimed to determine an approach to predicting long-term stunting from consecutive 1-mo weight increments in the first year of life. By using the reference sample of the WHO velocity standards, the analysis explored patterns of consecutive monthly weight increments among healthy infants. Four candidate screening thresholds of successive increments that could predict stunting were considered, and one was selected for further testing. The selected threshold was applied in a cohort of Bangladeshi infants to assess its predictive value for stunting at ages 12 and 24 mo. Between birth and age 12 mo, 72.6% of infants in the WHO sample tracked within 1 SD of their weight and length. The selected screening criterion ("event") was 2 consecutive monthly increments below the 15th percentile. Bangladeshi infants were born relatively small and, on average, tracked downward from approximately age 6 mo. If the screening strategy is effective, the estimated preventable proportion in the group who experienced the event would be 34% at 12 mo and 24% at 24 mo. This analysis offers an approach for frontline workers to identify children at risk of stunting, allowing for timely initiation of preventive measures. It opens avenues for further investigation into evidence-informed application of the WHO growth velocity standards. © 2015 American Society for Nutrition.

  11. Raising Cervical Cancer Awareness: Analysing the Incremental Efficacy of Short Message Service

    Lemos, Marina Serra; Rothes, Inês Areal; Oliveira, Filipa; Soares, Luisa

    2017-01-01

    Objective: To evaluate the incremental efficacy of a Short Message Service (SMS) combined with a brief video intervention in increasing the effects of a health education intervention for cervical cancer prevention, over and beyond a video-alone intervention, with respect to key determinants of health behaviour change--knowledge, motivation and…

  12. The Interpersonal Measure of Psychopathy: Construct and Incremental Validity in Male Prisoners

    Zolondek, Stacey; Lilienfeld, Scott O.; Patrick, Christopher J.; Fowler, Katherine A.

    2006-01-01

    The authors examined the construct and incremental validity of the Interpersonal Measure of Psychopathy (IM-P), a relatively new instrument designed to detect interpersonal behaviors associated with psychopathy. Observers of videotaped Psychopathy Checklist-Revised (PCL-R) interviews rated male prisoners (N = 93) on the IM-P. The IM-P correlated…

  13. Between structures and norms : Assessing tax increment financing for the Dutch spatial planning toolkit

    Root, Liz; Van Der Krabben, Erwin; Spit, Tejo

    2015-01-01

    The aim of the paper is to assess the institutional (mis)fit of tax increment financing for the Dutch spatial planning financial toolkit. By applying an institutionally oriented assessment framework, we analyse the interconnectivity of Dutch municipal finance and spatial planning structures and

  14. Analytic description of the frictionally engaged in-plane bending process incremental swivel bending (ISB)

    Frohn, Peter; Engel, Bernd; Groth, Sebastian

    2018-05-01

    Kinematic forming processes shape geometries through the process parameters, allowing more universal process utilization across geometric configurations. The kinematic forming process Incremental Swivel Bending (ISB) bends sheet metal strips or profiles in plane. The sequence for bending an arc increment is composed of the steps clamping, bending, force release and feed. The bending moment is frictionally engaged by two clamping units in a laterally adjustable bending pivot. A minimum clamping force that hinders the material from slipping through the clamping units is a crucial criterion for achieving a well-defined incremental arc. Therefore, an analytic description of a single bent increment is developed in this paper. The bending moment is calculated from the uniaxial stress distribution over the profile's width, depending on the bending pivot's position. Using a Coulomb-based friction model, the necessary clamping force is described as a function of friction, offset, dimensions of the clamping tools and strip thickness, as well as material parameters. Boundaries for the uniaxial stress calculation are given as functions of friction, tool dimensions and strip thickness. The results indicate that moving the bending pivot to an eccentric position significantly affects the process bending moment and, hence, the clamping force, which is given as a function of yield stress and hardening exponent. FE simulations validate the model with satisfactory agreement.

  15. On critical cases in limit theory for stationary increments Lévy driven moving averages

    Basse-O'Connor, Andreas; Podolskij, Mark

    averages. The limit theory heavily depends on the interplay between the given order of the increments, the considered power, the Blumenthal-Getoor index of the driving pure jump Lévy process L and the behavior of the kernel function g at 0. In this work we will study the critical cases, which were...

  16. TCAM-based High Speed Longest Prefix Matching with Fast Incremental Table Updates

    Rasmussen, Anders; Kragelund, A.; Berger, Michael Stübert

    2013-01-01

    and consequently a higher throughput of the network search engine, since the TCAM down time caused by incremental updates is eliminated. The LPM scheme is described in HDL for FPGA implementation and compared to an existing scheme for customized CAM circuits. The paper shows that the proposed scheme can process...
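
    To make the longest-prefix-matching semantics concrete, the sketch below implements a plain software binary trie over IPv4 prefixes; the paper's TCAM-based scheme and its incremental update algorithm are hardware-oriented and are not reproduced here.

      # Software illustration of longest-prefix matching (a binary trie over
      # IPv4 prefixes); not the TCAM-based hardware scheme of the paper.
      import ipaddress

      class PrefixTrie:
          def __init__(self):
              self.root = {}

          def insert(self, prefix, next_hop):
              net = ipaddress.ip_network(prefix)
              bits = format(int(net.network_address), "032b")[: net.prefixlen]
              node = self.root
              for b in bits:
                  node = node.setdefault(b, {})
              node["hop"] = next_hop

          def lookup(self, address):
              bits = format(int(ipaddress.ip_address(address)), "032b")
              node, best = self.root, None
              for b in bits:
                  if "hop" in node:
                      best = node["hop"]          # remember the longest match so far
                  if b not in node:
                      break
                  node = node[b]
              else:
                  best = node.get("hop", best)
              return best

      t = PrefixTrie()
      t.insert("10.0.0.0/8", "A")
      t.insert("10.1.0.0/16", "B")
      print(t.lookup("10.1.2.3"))                 # "B": the longer matching prefix wins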

  17. Differentiating Major and Incremental New Product Development: The Effects of Functional and Numerical Workforce Flexibility

    Kok, R.A.W.; Ligthart, P.E.M.

    2014-01-01

    This study seeks to explain the differential effects of workforce flexibility on incremental and major new product development (NPD). Drawing on the resource-based theory of the firm, human resource management research, and innovation management literature, the authors distinguish two types of

  18. Real Time Implementation of Incremental Fuzzy Logic Controller for Gas Pipeline Corrosion Control

    Gopalakrishnan Jayapalan

    2014-01-01

    Full Text Available A robust virtual-instrumentation-based fuzzy incremental corrosion controller is presented to protect metallic gas pipelines. The controller output depends on the error and the change in error of the controlled variable. For corrosion control purposes, the pipe-to-soil potential is considered as the process variable. The proposed fuzzy incremental controller is designed using a very simple control rule base and the most natural and unbiased membership functions. The proposed scheme is tested for a wide range of pipe-to-soil potential control. A performance comparison between the conventional proportional-integral type and the proposed fuzzy incremental controller is made in terms of several performance criteria such as peak overshoot, settling time, and rise time. Results show that the proposed controller outperforms its conventional counterpart in each case. The designed controller can be placed in auto mode without waiting for the initial polarization to stabilize. The initial startup curves of the proportional-integral controller and the fuzzy incremental controller are reported. This controller can be used to protect metallic structures such as pipelines, tanks, concrete structures, ships, and offshore structures.
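
    The incremental (velocity-form) structure described above can be sketched as follows: a rule-based step maps the error and the change in error to a change in control output, which is then accumulated. The coarse sign-based table below is a stand-in for the paper's fuzzy rule base, and all numbers are illustrative.

      # Simplified sketch of an incremental controller: the rule step returns a
      # change in control action from the error and change in error, and the
      # change is accumulated. The sign-based table is only a stand-in for the
      # paper's fuzzy rule base; all numbers are illustrative.
      def rule_table(e, de):
          big, small = 0.5, 0.1
          if e > 0 and de >= 0:
              return big                      # error large and growing: push hard
          if e > 0:
              return small                    # error positive but shrinking: push gently
          if e < 0 and de <= 0:
              return -big
          if e < 0:
              return -small
          return 0.0

      setpoint, pv, u, prev_e = -0.85, -1.2, 0.0, 0.0   # pipe-to-soil potential, volts (illustrative)
      for _ in range(5):
          e = setpoint - pv
          du = rule_table(e, e - prev_e)
          u += du                             # incremental form: accumulate changes
          pv += 0.3 * du                      # toy plant response
          prev_e = e
          print(round(u, 2), round(pv, 3))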

  19. Gradient nanostructured surface of a Cu plate processed by incremental frictional sliding

    Hong, Chuanshi; Huang, Xiaoxu; Hansen, Niels

    2015-01-01

    The flat surface of a Cu plate was processed by incremental frictional sliding at liquid nitrogen temperature. The surface treatment results in a hardened gradient surface layer as thick as 1 mm in the Cu plate, which contains a nanostructured layer on the top with a boundary spacing of the order...

  20. A gradient surface produced by combined electroplating and incremental frictional sliding

    Yu, Tianbo; Hong, Chuanshi; Kitamura, K.

    2017-01-01

    A Cu plate was first electroplated with a Ni layer, with a thickness controlled to be between 1 and 2 μm. The coated surface was then deformed by incremental frictional sliding with liquid nitrogen cooling. The combined treatment led to a multifunctional surface with a gradient in strain...

  1. The effects of the pine processionary moth on the increment of ...

    STORAGESEVER

    2009-05-18

    May 18, 2009 ... sycophanta L. (Coleoptera: Carabidae) used against the pine processionary moth (Thaumetopoea pityocampa Den. & Schiff.) (Lepidoptera: Thaumetopoeidae) in biological control. T. J. Zool. 30:181-185. Kanat M, Sivrikaya F (2005). Effect of the pine processionary moth on diameter increment of Calabrian ...

  2. A power-driven increment borer for sampling high-density tropical wood

    Krottenthaler, S.; Pitsch, P.; Helle, G.; Locosselli, G. M.; Ceccantini, G.; Altman, Jan; Svoboda, M.; Doležal, Jiří; Schleser, G.; Anhuf, D.

    2015-01-01

    Roč. 36, November (2015), s. 40-44 ISSN 1125-7865 R&D Projects: GA ČR GAP504/12/1952; GA ČR(CZ) GA14-12262S Institutional support: RVO:67985939 Keywords : tropical dendrochronology * tree sampling methods * increment cores Subject RIV: EF - Botanics Impact factor: 2.107, year: 2015

  3. Substructuring in the implicit simulation of single point incremental sheet forming

    Hadoush, A.; van den Boogaard, Antonius H.

    2009-01-01

    This paper presents a direct substructuring method to reduce the computing time of implicit simulations of single point incremental forming (SPIF). Substructuring is used to divide the finite element (FE) mesh into several non-overlapping parts. Based on the hypothesis that plastic deformation is

  4. Incremental Validity of the WJ III COG: Limited Predictive Effects beyond the GIA-E

    McGill, Ryan J.; Busse, R. T.

    2015-01-01

    This study is an examination of the incremental validity of Cattell-Horn-Carroll (CHC) broad clusters from the Woodcock-Johnson III Tests of Cognitive Abilities (WJ III COG) for predicting scores on the Woodcock-Johnson III Tests of Achievement (WJ III ACH). The participants were children and adolescents, ages 6-18 (n = 4,722), drawn from the WJ…

  5. Feasibility of Incremental 2-Times Weekly Hemodialysis in Incident Patients With Residual Kidney Function

    Andrew I. Chin

    2017-09-01

    Discussion: More than 50% of incident HD patients with RKF have adequate kidney urea clearance to be considered for 2-times-weekly HD. When ultrafiltration volume and blood pressure stability are additionally taken into account, more than one-fourth of the total cohort could optimally start HD in an incremental fashion.

  6. A Self-Organizing Incremental Neural Network based on local distribution learning.

    Xing, Youlu; Shi, Xiaofeng; Shen, Furao; Zhou, Ke; Zhao, Jinxi

    2016-12-01

    In this paper, we propose an unsupervised incremental learning neural network based on local distribution learning, which is called the Local Distribution Self-Organizing Incremental Neural Network (LD-SOINN). The LD-SOINN combines the advantages of incremental learning and matrix learning. It can automatically discover suitable nodes to fit the learning data in an incremental way without a priori knowledge such as the structure of the network. The nodes of the network store rich local information regarding the learning data. The adaptive vigilance parameter guarantees that LD-SOINN is able to add new nodes for new knowledge automatically, and the number of nodes does not grow without limit. As the learning process continues, nodes that are close to each other and have similar principal components are merged to obtain a concise local representation, which we call a relaxation data representation. A denoising process based on density is designed to reduce the influence of noise. Experiments show that LD-SOINN performs well on both artificial and real-world data. Copyright © 2016 Elsevier Ltd. All rights reserved.
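
    The sketch below illustrates the generic self-organizing incremental insertion step that this family of networks builds on: a new node is created when an input falls outside every existing node's vigilance region, otherwise the nearest node is adapted. The local-distribution (matrix) learning, node merging, and density-based denoising of LD-SOINN are not reproduced here; the fixed threshold and learning rate are assumptions.

```python
import numpy as np

class IncrementalNodeLearner:
    """Toy self-organizing incremental learner: insert a node when the input
    falls outside every node's vigilance region, otherwise adapt the winner.
    This sketches only the generic insertion idea, not LD-SOINN itself."""

    def __init__(self, vigilance=1.0, lr=0.1):
        self.vigilance = vigilance   # similarity threshold for adding nodes
        self.lr = lr                 # learning rate for the winning node
        self.nodes = []              # node prototypes (np.ndarray)
        self.counts = []             # samples absorbed by each node

    def partial_fit(self, x):
        x = np.asarray(x, dtype=float)
        if not self.nodes:
            self.nodes.append(x.copy())
            self.counts.append(1)
            return 0
        dists = [np.linalg.norm(x - n) for n in self.nodes]
        winner = int(np.argmin(dists))
        if dists[winner] > self.vigilance:
            # New knowledge: grow the network by one node.
            self.nodes.append(x.copy())
            self.counts.append(1)
            return len(self.nodes) - 1
        # Known region: move the winner toward the sample.
        self.counts[winner] += 1
        self.nodes[winner] += self.lr * (x - self.nodes[winner])
        return winner
```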

  7. 29 CFR 825.205 - Increments of FMLA leave for intermittent or reduced schedule leave.

    2010-07-01

    ... intermittent leave or working a reduced leave schedule to commence or end work mid-way through a shift, such as... per week, but works only 20 hours a week under a reduced leave schedule, the employee's ten hours of...

  8. MZDASoft: a software architecture that enables large-scale comparison of protein expression levels over multiple samples based on liquid chromatography/tandem mass spectrometry.

    Ghanat Bari, Mehrab; Ramirez, Nelson; Wang, Zhiwei; Zhang, Jianqiu Michelle

    2015-10-15

    Without accurate peak linking/alignment, only the expression levels of a small percentage of proteins can be compared across multiple samples in Liquid Chromatography/Mass Spectrometry/Tandem Mass Spectrometry (LC/MS/MS) due to the selective nature of tandem MS peptide identification. This greatly hampers biomedical research that aims at finding biomarkers for disease diagnosis, treatment, and the understanding of disease mechanisms. A recent algorithm, PeakLink, has allowed the accurate linking of LC/MS peaks without tandem MS identifications to their corresponding ones with identifications across multiple samples collected from different instruments, tissues and labs, which greatly enhanced the ability of comparing proteins. However, PeakLink cannot be implemented practically for large numbers of samples based on existing software architectures, because it requires access to peak elution profiles from multiple LC/MS/MS samples simultaneously. We propose a new architecture based on parallel processing, which extracts LC/MS peak features, and saves them in database files to enable the implementation of PeakLink for multiple samples. The software has been deployed in High-Performance Computing (HPC) environments. The core part of the software, MZDASoft Parallel Peak Extractor (PPE), can be downloaded with a user and developer's guide, and it can be run on HPC centers directly. The quantification applications, MZDASoft TandemQuant and MZDASoft PeakLink, are written in Matlab, which are compiled with a Matlab runtime compiler. A sample script that incorporates all necessary processing steps of MZDASoft for LC/MS/MS quantification in a parallel processing environment is available. The project webpage is http://compgenomics.utsa.edu/zgroup/MZDASoft. The proposed architecture enables the implementation of PeakLink for multiple samples. Significantly more (100%-500%) proteins can be compared over multiple samples with better quantification accuracy in test cases. MZDASoft
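
    The key architectural idea above is to process each LC/MS sample independently and persist its peak features to per-sample files, so that a later PeakLink-style linking step never needs all raw elution profiles in memory at once. The sketch below illustrates that idea only; extract_peak_features is a hypothetical placeholder, and the file layout and multiprocessing code do not reflect MZDASoft's actual implementation.

```python
# Per-sample parallel feature extraction: each run is processed independently
# and its peak features are written to its own file for a later linking step.
from concurrent.futures import ProcessPoolExecutor
from functools import partial
import json
import pathlib

def extract_peak_features(mzml_path):
    """Placeholder per-sample peak detection (hypothetical, not MZDASoft's API);
    would return a list of feature records such as m/z, retention time, intensity."""
    return []  # replace with a real peak picker

def process_sample(mzml_path, out_dir):
    features = extract_peak_features(mzml_path)
    out_file = pathlib.Path(out_dir) / (pathlib.Path(mzml_path).stem + ".features.json")
    out_file.write_text(json.dumps(features))
    return str(out_file)

def run_parallel(sample_paths, out_dir, workers=8):
    pathlib.Path(out_dir).mkdir(parents=True, exist_ok=True)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(partial(process_sample, out_dir=out_dir), sample_paths))
```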

  9. [Spatiotemporal variation of Populus euphratica's radial increment at lower reaches of Tarim River after ecological water transfer].

    An, Hong-Yan; Xu, Hai-Liang; Ye, Mao; Yu, Pu-Ji; Gong, Jun-Jun

    2011-01-01

    Taking Populus euphratica at the lower reaches of the Tarim River as the test object and using dendrohydrological methods, this paper studied the spatiotemporal variation of P. euphratica's branch radial increment after ecological water transfer. There was a significant difference in the mean radial increment before and after the ecological water transfer. The radial increment after the water transfer was increased by 125% compared with that before the transfer. During the period of ecological water transfer, the radial increment increased with increasing water transfer quantity, and there was a positive correlation between the annual radial increment and the total water transfer quantity (R2 = 0.394), suggesting that the radial increment of P. euphratica could be taken as a performance indicator of ecological water transfer. After the ecological water transfer, the radial increment changed greatly with the distance to the river, i.e., it decreased significantly with increasing distance to the river (P = 0.007). P. euphratica's branch radial increment also differed with stream segment (P = 0.017), i.e., the closer to the head-water point (Daxihaizi Reservoir), the greater the branch radial increment. It was considered that the limited effect of the current ecological water transfer could scarcely change the continually deteriorating situation of the lower reaches of the Tarim River.

  10. Comprehensive preference optimization of an irreversible thermal engine using pareto based mutable smart bee algorithm and generalized regression neural network

    Mozaffari, Ahmad; Gorji-Bandpy, Mofid; Samadian, Pendar

    2013-01-01

    Optimizing and controlling complex engineering systems is a problem that has attracted increasing interest from numerous scientists. Until now, a variety of intelligent optimization and control techniques such as neural networks, fuzzy logic, game theory, support vector machines... and stochastic algorithms have been proposed to facilitate the control of engineering systems. In this study, an extended version of the mutable smart bee algorithm (MSBA), called Pareto based mutable smart bee (PBMSB), is introduced to cope with multi-objective problems. Besides, a set of benchmark problems and four... well-known Pareto based optimizing algorithms, i.e. the multi-objective bee algorithm (MOBA), the multi-objective particle swarm optimization (MOPSO) algorithm, the non-dominated sorting genetic algorithm (NSGA-II), and the strength Pareto evolutionary algorithm (SPEA 2), are utilized to confirm the acceptable...
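
    All of the Pareto-based optimizers named above (PBMSB, MOBA, MOPSO, NSGA-II, SPEA 2) rely on the same dominance relation to rank candidate solutions. A minimal sketch of that relation and a brute-force non-dominated filter follows; minimization of all objectives and the toy candidate values are assumptions, not data from the study.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization assumed):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    front = []
    for i, p in enumerate(points):
        if not any(dominates(q, p) for j, q in enumerate(points) if j != i):
            front.append(p)
    return front

# Toy two-objective trade-off with illustrative numbers.
candidates = [(3.0, 1.0), (2.5, 1.2), (3.5, 0.8), (4.0, 1.5)]
print(pareto_front(candidates))   # -> [(3.0, 1.0), (2.5, 1.2), (3.5, 0.8)]
```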

  11. Algorithms for parallel computers

    Churchhouse, R.F.

    1985-01-01

    Until relatively recently almost all the algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed but it also raises some fundamental questions, including: (i) which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode. (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria. (iii) How can we design new algorithms specifically for parallel systems. (iv) For multi-processor systems how can we handle the software aspects of the interprocessor communications. Aspects of these questions illustrated by examples are considered in these lectures. (orig.)

  12. Fluid structure coupling algorithm

    McMaster, W.H.; Gong, E.Y.; Landram, C.S.; Quinones, D.F.

    1980-01-01

    A fluid-structure-interaction algorithm has been developed and incorporated into the two-dimensional code PELE-IC. This code combines an Eulerian incompressible fluid algorithm with a Lagrangian finite element shell algorithm and incorporates the treatment of complex free surfaces. The fluid structure and coupling algorithms have been verified by the calculation of solved problems from the literature and from air and steam blowdown experiments. The code has been used to calculate loads and structural response from air blowdown and the oscillatory condensation of steam bubbles in water suppression pools typical of boiling water reactors. The techniques developed have been extended to three dimensions and implemented in the computer code PELE-3D

  13. Algorithmic phase diagrams

    Hockney, Roger

    1987-01-01

    Algorithmic phase diagrams are a neat and compact representation of the results of comparing the execution time of several algorithms for the solution of the same problem. As an example, the recent results of Gannon and Van Rosendale on the solution of multiple tridiagonal systems of equations are shown in the form of such diagrams. The act of preparing these diagrams has revealed an unexpectedly complex relationship between the best algorithm and the number and size of the tridiagonal systems, which was not evident from the algebraic formulae in the original paper. Even so, for a particular computer, one diagram suffices to predict the best algorithm for all problems that are likely to be encountered, the prediction being read directly from the diagram without complex calculation.
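
    Such a diagram can be produced empirically by timing each candidate algorithm over a grid of problem parameters and recording the winner at every grid point. The sketch below illustrates that procedure; the two placeholder algorithms and the parameter grid are assumptions, not the tridiagonal solvers discussed in the original paper.

```python
# Build an empirical algorithmic phase diagram: time competing implementations
# over a grid of problem parameters and record which one is fastest at each point.
import time
import numpy as np

def solve_a(n, m):
    return sum(range(n * m))            # placeholder "algorithm A"

def solve_b(n, m):
    return int(np.arange(n * m).sum())  # placeholder "algorithm B"

def best_time(fn, *args, repeats=3):
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - t0)
    return best

def phase_diagram(ns, ms):
    """Label each (n, m) grid point with the fastest algorithm."""
    grid = np.empty((len(ns), len(ms)), dtype="<U1")
    for i, n in enumerate(ns):
        for j, m in enumerate(ms):
            grid[i, j] = "A" if best_time(solve_a, n, m) <= best_time(solve_b, n, m) else "B"
    return grid

print(phase_diagram([100, 1_000, 10_000], [1, 10, 100]))
```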

  14. Diagnostic Algorithm Benchmarking

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  15. Inclusive Flavour Tagging Algorithm

    Likhomanenko, Tatiana; Derkach, Denis; Rogozhnikov, Alex

    2016-01-01

    Identifying the production flavour of neutral B mesons is one of the most important components needed in the study of time-dependent CP violation. The harsh environment of the Large Hadron Collider makes it particularly hard to succeed in this task. We present an inclusive flavour-tagging algorithm as an upgrade of the algorithms currently used by the LHCb experiment. Specifically, a probabilistic model which efficiently combines information from reconstructed vertices and tracks using machine learning is proposed. The algorithm does not use information about the underlying physics process. It reduces the dependence on the performance of lower-level identification capacities and thus increases the overall performance. The proposed inclusive flavour-tagging algorithm is applicable to tag the flavour of B mesons in any proton-proton experiment. (paper)

  16. Unsupervised learning algorithms

    Aydin, Kemal

    2016-01-01

    This book summarizes the state of the art in unsupervised learning. The contributors discuss how, with the proliferation of massive amounts of unlabeled data, unsupervised learning algorithms, which can automatically discover interesting and useful patterns in such data, have gained popularity among researchers and practitioners. The authors outline how these algorithms have found numerous applications including pattern recognition, market basket analysis, web mining, social network analysis, information retrieval, recommender systems, market research, intrusion detection, and fraud detection. They present how the difficulty of developing theoretically sound approaches that are amenable to objective evaluation has resulted in the proposal of numerous unsupervised learning algorithms over the past half-century. The intended audience includes researchers and practitioners who are increasingly using unsupervised learning algorithms to analyze their data. Topics of interest include anomaly detection, clustering,...

  17. A maximal incremental effort alters tear osmolarity depending on the fitness level in military helicopter pilots.

    Vera, Jesús; Jiménez, Raimundo; Madinabeitia, Iker; Masiulis, Nerijus; Cárdenas, David

    2017-10-01

    Fitness level modulates the physiological responses to exercise for a variety of indices. While intense bouts of exercise have been demonstrated to increase tear osmolarity (Tosm), it is not known whether fitness level can affect the Tosm response to acute exercise. This study aims to compare the effect of a maximal incremental test on Tosm between trained and untrained military helicopter pilots. Nineteen military helicopter pilots (ten trained and nine untrained) performed a maximal incremental test on a treadmill. A tear sample was collected before and after physical effort to determine the exercise-induced changes in Tosm. The Bayesian statistical analysis demonstrated that Tosm significantly increased from 303.72 ± 6.76 to 310.56 ± 8.80 mmol/L after performance of a maximal incremental test. However, while the untrained group showed an acute Tosm rise (an increment of 12.33 mmol/L), the trained group maintained a stable Tosm after the physical effort (an increment of 1.45 mmol/L). There was a significant positive linear association between fat indices and Tosm changes (correlation coefficients [r] range: 0.77-0.89), whereas the Tosm changes displayed a negative relationship with cardiorespiratory capacity (VO2 max; r = -0.75) and performance parameters (r = -0.75 for velocity, and r = -0.67 for time to exhaustion). The findings from this study provide evidence that fitness level is a major determinant of the Tosm response to maximal incremental physical effort, showing a fairly linear association with several indices related to fitness level. A high fitness level seems to be beneficial for avoiding Tosm changes as a consequence of intense exercise. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Towards Reliable and Energy-Efficient Incremental Cooperative Communication for Wireless Body Area Networks.

    Yousaf, Sidrah; Javaid, Nadeem; Qasim, Umar; Alrajeh, Nabil; Khan, Zahoor Ali; Ahmed, Mansoor

    2016-02-24

    In this study, we analyse incremental cooperative communication for wireless body area networks (WBANs) with different numbers of relays. Energy efficiency (EE) and the packet error rate (PER) are investigated for different schemes. We propose a new cooperative communication scheme with three-stage relaying and compare it to existing schemes. Our proposed scheme provides reliable communication with a lower PER at the cost of surplus energy consumption. Analytical expressions for the EE of the proposed three-stage cooperative communication scheme are also derived, taking into account the effect of PER. The proposed three-stage incremental cooperation is then implemented in a network layer protocol: enhanced incremental cooperative critical data transmission in emergencies for static WBANs (EInCo-CEStat). Extensive simulations are conducted to validate the proposed scheme. Results of the incremental relay-based cooperative communication protocols are compared to two existing cooperative routing protocols: cooperative critical data transmission in emergencies for static WBANs (Co-CEStat) and InCo-CEStat. The simulation results show that incremental relay-based cooperation is more energy efficient than the existing conventional cooperation protocol, Co-CEStat. The results also reveal that EInCo-CEStat proves to be more reliable, with a lower PER and higher throughput than both of the counterpart protocols. However, InCo-CEStat has less throughput with a greater stability period and network lifetime. Due to the availability of more redundant links, EInCo-CEStat achieves a reduced packet drop rate at the cost of increased energy consumption.
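
    The essence of incremental relaying is that a relay stage transmits only when the preceding attempt has failed, trading extra energy for a lower end-to-end packet error rate. The toy Monte-Carlo sketch below illustrates this trade-off; the per-link error rates, energy cost per transmission, and simplified channel model are assumptions, not the WBAN channel or protocol model used in the study.

```python
# Toy Monte-Carlo model of incremental relaying: each relay stage is used only
# if the previous attempt failed. Numbers are illustrative assumptions.
import random

def simulate(per_links=(0.3, 0.2, 0.1), e_tx=1.0, packets=100_000, seed=1):
    """per_links[0] is the direct-link PER; later entries are the PERs of the
    incremental relay stages. Returns (end-to-end PER, energy per delivered packet)."""
    rng = random.Random(seed)
    delivered, energy = 0, 0.0
    for _ in range(packets):
        ok = False
        for per in per_links:          # each stage transmits only when still needed
            energy += e_tx
            if rng.random() > per:
                ok = True
                break
        delivered += ok
    per_end = 1 - delivered / packets
    energy_per_packet = energy / delivered if delivered else float("inf")
    return per_end, energy_per_packet

print(simulate())
```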

  19. Endogenous-cue prospective memory involving incremental updating of working memory: an fMRI study.

    Halahalli, Harsha N; John, John P; Lukose, Ammu; Jain, Sanjeev; Kutty, Bindu M

    2015-11-01

    Prospective memory paradigms are conventionally classified on the basis of event-, time-, or activity-based intention retrieval. In the vast majority of such paradigms, intention retrieval is provoked by some kind of external event. However, prospective memory retrieval cues that prompt intention retrieval in everyday life are commonly endogenous, i.e., linked to a specific imagined retrieval context. We describe herein a novel prospective memory paradigm wherein the endogenous cue is generated by incremental updating of working memory, and investigated the hemodynamic correlates of this task. Eighteen healthy adult volunteers underwent functional magnetic resonance imaging while they performed a prospective memory task where the delayed intention was triggered by an endogenous cue generated by incremental updating of working memory. Working memory and ongoing task control conditions were also administered. The 'endogenous-cue prospective memory condition' with incremental working memory updating was associated with maximum activations in the right rostral prefrontal cortex, and additional activations in the brain regions that constitute the bilateral fronto-parietal network, central and dorsal salience networks as well as cerebellum. In the working memory control condition, maximal activations were noted in the left dorsal anterior insula. Activation of the bilateral dorsal anterior insula, a component of the central salience network, was found to be unique to this 'endogenous-cue prospective memory task' in comparison to previously reported exogenous- and endogenous-cue prospective memory tasks without incremental working memory updating. Thus, the findings of the present study highlight the important role played by the dorsal anterior insula in incremental working memory updating that is integral to our endogenous-cue prospective memory task.

  20. Influence of increment thickness on dentin bond strength and light transmission of composite base materials.

    Omran, Tarek A; Garoushi, Sufyan; Abdulmajeed, Aous A; Lassila, Lippo V; Vallittu, Pekka K

    2017-06-01

    Bulk-fill resin composites (BFCs) are gaining popularity in restorative dentistry due to the reduced chair time and ease of application. This study aimed to evaluate the influence of increment thickness on dentin bond strength and light transmission of different BFCs and a new discontinuous fiber-reinforced composite. One hundred eighty extracted sound human molars were prepared for a shear bond strength (SBS) test. The teeth were divided into four groups (n = 45) according to the resin composite used: regular particulate-filler resin composite (1) G-ænial Anterior [GA] (control); bulk-fill resin composites (2) Tetric EvoCeram Bulk Fill [TEBF] and (3) SDR; and discontinuous fiber-reinforced composite (4) everX Posterior [EXP]. Each group was subdivided according to increment thickness (2, 4, and 6 mm). The irradiance power through the material of all groups/subgroups was quantified (MARC® Resin Calibrator; BlueLight Analytics Inc.). Data were analyzed using two-way ANOVA followed by Tukey's post hoc test. SBS and light irradiance decreased significantly as the increment height increased, irrespective of the composite used. EXP presented the highest SBS in 2- and 4-mm-thick increments when compared to the other composites, although the differences were not statistically significant (p > 0.05). Mean light irradiance also differed significantly among the composites. The discontinuous fiber-reinforced composite showed the highest curing light transmission, which was reflected in improved bonding strength to the underlying dentin surface. The discontinuous fiber-reinforced composite can be applied safely in bulk increments of 4 mm, the same as other bulk-fill composites, although at 2-mm thickness the investigated composites showed better performance.