OnlineMin: A Fast Strongly Competitive Randomized Paging Algorithm
Brodal, Gerth Stølting; Moruz, Gabriel; Negoescu, Andrei
2012-01-01
In the field of online algorithms, paging is one of the most studied problems. For randomized paging algorithms a tight bound of H_k on the competitive ratio has been known for decades, yet existing algorithms matching this bound have high running times. We present the first randomized paging approach that both has optimal competitiveness and selects victim pages in subquadratic time. In fact, if k pages fit in internal memory, the best previous solution required O(k^2) time per request and O(k) space, whereas our approach also takes O(k) space, but only O(log k) time in the worst case per page...
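For context, the textbook randomized marking algorithm (a classical O(H_k)-competitive baseline, not the OnlineMin algorithm of this paper) illustrates the paging setting the abstract describes. A minimal sketch:

```python
import random

def marking_paging(requests, k, seed=0):
    """Classical randomized marking algorithm for paging with cache size k.

    On a fault with a full cache, evict a uniformly random *unmarked* page;
    when every cached page is marked, unmark all (a new phase begins).
    Returns the number of page faults.  This is the textbook baseline,
    not the OnlineMin algorithm of the paper above.
    """
    rng = random.Random(seed)
    cache, marked = set(), set()
    faults = 0
    for page in requests:
        if page not in cache:
            faults += 1
            if len(cache) == k:            # cache full: need a victim
                if not (cache - marked):   # all marked -> start a new phase
                    marked.clear()
                victim = rng.choice(sorted(cache - marked))
                cache.remove(victim)
            cache.add(page)
        marked.add(page)                   # a requested page is always marked
    return faults
```

The per-fault cost here is O(k) because of the set difference; the point of the paper above is reducing such per-request work to O(log k).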
An Improved Fast Compressive Tracking Algorithm Based on Online Random Forest Classifier
Xiong Jintao
2016-01-01
The fast compressive tracking (FCT) algorithm is a simple and efficient algorithm proposed in recent years, but it has difficulty dealing with factors such as occlusion, appearance changes, and pose variation. The reasons are twofold. First, even though the naive Bayes classifier is fast to train, it is not robust to noise. Second, the parameters must be adapted to each particular environment for accurate tracking. In this paper, we propose an improved fast compressive tracking algorithm based on an online random forest (FCT-ORF) for robust visual tracking. First, we combine ideas from adaptive compressive sensing theory, using a weighted random projection to exploit both local and discriminative information of the object. Second, we use an online random forest classifier for online tracking, which is demonstrably more robust to noise and computationally efficient. The experimental results show that the proposed algorithm performs better under occlusion, appearance changes, and pose variation than the fast compressive tracking algorithm.
On-line Viterbi Algorithm and Its Relationship to Random Walks
Šrámek, Rastislav; Vinař, Tomáš
2007-01-01
In this paper, we introduce the on-line Viterbi algorithm for decoding hidden Markov models (HMMs) in much smaller than linear space. Our analysis on two-state HMMs suggests that the expected maximum memory used to decode a sequence of length $n$ with an $m$-state HMM can be as low as $\Theta(m \log n)$, without a significant slow-down compared to the classical Viterbi algorithm. The classical Viterbi algorithm requires $O(mn)$ space, which is impractical for analysis of long DNA sequences (such as complete human genome chromosomes) and for continuous data streams. We also experimentally demonstrate the performance of the on-line Viterbi algorithm on a simple HMM for gene finding on both simulated and real DNA sequences.
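For contrast with the on-line variant above, the classical Viterbi algorithm keeps the full backpointer table for all n positions, which is exactly the O(mn) space cost the abstract cites. A toy Python sketch (the dict-based interface and the coin-HMM numbers are illustrative, not from the paper):

```python
import math

def viterbi(obs, states, log_start, log_trans, log_emit):
    """Classical Viterbi decoding.  Keeps an O(m*n) backpointer table
    ('back'), which is the space cost the on-line variant reduces.
    obs: observation symbols; states: state ids; log_*: dicts of
    log-probabilities (a hypothetical toy interface)."""
    n = len(obs)
    # delta[t][s]: best log-prob of any path ending in state s at time t
    delta = [{s: log_start[s] + log_emit[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, n):
        delta.append({})
        back.append({})
        for s in states:
            prev, score = max(
                ((p, delta[t - 1][p] + log_trans[p][s]) for p in states),
                key=lambda x: x[1],
            )
            delta[t][s] = score + log_emit[s][obs[t]]
            back[t][s] = prev
    # trace back the best path -- this is why all n columns must be kept
    last = max(states, key=lambda s: delta[n - 1][s])
    path = [last]
    for t in range(n - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]

# toy two-state HMM: Fair vs Loaded coin (illustrative numbers)
L = math.log
states = ["F", "L"]
start = {"F": L(0.5), "L": L(0.5)}
trans = {"F": {"F": L(0.9), "L": L(0.1)}, "L": {"F": L(0.1), "L": L(0.9)}}
emit = {"F": {"H": L(0.5), "T": L(0.5)}, "L": {"H": L(0.9), "T": L(0.1)}}
path = viterbi(["H", "H", "H", "H"], states, start, trans, emit)
```

A run of heads is best explained by staying in the loaded state throughout.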
Efficient Online Learning via Randomized Rounding
Cesa-Bianchi, Nicolò
2011-01-01
Most online algorithms used in machine learning today are based on variants of mirror descent or follow-the-leader. In this paper, we present an online algorithm based on a completely different approach, which combines "random playout" and randomized rounding of loss subgradients. As an application of our approach, we provide the first computationally efficient online algorithm for collaborative filtering with norm-constrained matrices. As a second application, we solve an open question linking batch learning and transductive online learning.
Online Pairwise Learning Algorithms.
Ying, Yiming; Zhou, Ding-Xuan
2016-04-01
Pairwise learning usually refers to a learning task that involves a loss function depending on pairs of examples, among which the most notable ones are bipartite ranking, metric learning, and AUC maximization. In this letter we study an online algorithm for pairwise learning with a least-square loss function in an unconstrained setting of a reproducing kernel Hilbert space (RKHS) that we refer to as the Online Pairwise lEaRning Algorithm (OPERA). In contrast to existing works (Kar, Sriperumbudur, Jain, & Karnick, 2013; Wang, Khardon, Pechyony, & Jones, 2012), which require that the iterates are restricted to a bounded domain or the loss function is strongly convex, OPERA is associated with a non-strongly convex objective function and learns the target function in an unconstrained RKHS. Specifically, we establish a general theorem that guarantees the almost sure convergence for the last iterate of OPERA without any assumptions on the underlying distribution. Explicit convergence rates are derived under the condition of polynomially decaying step sizes. We also establish an interesting property for a family of widely used kernels in the setting of pairwise learning and illustrate the convergence results using such kernels. Our methodology mainly depends on the characterization of RKHSs using their associated integral operators and probability inequalities for random variables with values in a Hilbert space.
Randomized robot navigation algorithms
Berman, P. [Penn State Univ., University Park, PA (United States); Blum, A. [Carnegie Mellon Univ., Pittsburgh, PA (United States); Fiat, A. [Tel-Aviv Univ. (Israel)] [and others]
1996-12-31
We consider the problem faced by a mobile robot that has to reach a given target by traveling through an unmapped region in the plane containing oriented rectangular obstacles. We assume the robot has no prior knowledge about the positions or sizes of the obstacles, and acquires such knowledge only when obstacles are encountered. Our goal is to minimize the distance the robot must travel, using the competitive ratio as our measure. We give a new randomized algorithm for this problem whose competitive ratio is O(n^{4/9} log n), beating the deterministic Ω(√n) lower bound of [PY], and answering in the affirmative an open question of [BRS] (which presented an optimal deterministic algorithm). We believe the techniques introduced here may prove useful in other on-line situations in which information gathering is part of the on-line process.
An algorithm for online optimization of accelerators
Huang, Xiaobiao [SLAC National Accelerator Lab., Menlo Park, CA (United States); Corbett, Jeff [SLAC National Accelerator Lab., Menlo Park, CA (United States); Safranek, James [SLAC National Accelerator Lab., Menlo Park, CA (United States); Wu, Juhao [SLAC National Accelerator Lab., Menlo Park, CA (United States)
2013-10-01
We developed a general algorithm for online optimization of accelerator performance, i.e., online tuning, using the performance measure as the objective function. This method, named robust conjugate direction search (RCDS), combines the conjugate direction set approach of Powell's method with a robust line optimizer which considers the random noise in bracketing the minimum and uses a parabolic fit of data points that uniformly sample the bracketed zone. The method is much more robust against noise than traditional algorithms and is therefore suitable for online application. Simulation and experimental studies have been carried out to demonstrate the strength of the new algorithm.
Derandomization of Online Assignment Algorithms for Dynamic Graphs
Sahai, Ankur
2011-01-01
This paper analyzes different online algorithms for the problem of assigning weights to edges in a fully-connected bipartite graph that minimizes the overall cost while satisfying constraints. Edges in this graph may disappear and reappear over time. Performance of these algorithms is measured using simulations. This paper also attempts to derandomize the randomized online algorithm for this problem.
Online co-regularized algorithms
Ruijter, T. de; Tsivtsivadze, E.; Heskes, T.
2012-01-01
We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks.
Rabideau, Gregg R.; Chien, Steve A.
2010-01-01
AVA v2 software selects goals for execution from a set of goals that oversubscribe shared resources. The term goal refers to a science or engineering request to execute a possibly complex command sequence, such as image targets or ground-station downlinks. Developed as an extension to the Virtual Machine Language (VML) execution system, the software enables onboard and remote goal triggering through the use of an embedded, dynamic goal set that can oversubscribe resources. From the set of conflicting goals, a subset must be chosen that maximizes a given quality metric, which in this case is strict priority selection. A goal can never be pre-empted by a lower priority goal, and high-level goals can be added, removed, or updated at any time, and the "best" goals will be selected for execution. The software addresses the issue of re-planning that must be performed in a short time frame by the embedded system where computational resources are constrained. In particular, the algorithm addresses problems with well-defined goal requests without temporal flexibility that oversubscribe available resources. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion, allowing requests to be changed or added at the last minute, thereby enabling shorter response times and greater autonomy for the system under control.
Comparing Online Algorithms for Bin Packing Problems
Epstein, Leah; Favrholdt, Lene Monrad; Kohrt, Jens Svalgaard
2012-01-01
The relative worst-order ratio is a measure of the quality of online algorithms. In contrast to the competitive ratio, this measure compares two online algorithms directly instead of using an intermediate comparison with an optimal offline algorithm. In this paper, we apply the relative worst-order ratio to online algorithms for several common variants of the bin packing problem. We mainly consider pairs of algorithms that are not distinguished by the competitive ratio and show that the relative worst-order ratio prefers the intuitively better algorithm of each pair.
Randomized Filtering Algorithms
Katriel, Irit; Van Hentenryck, Pascal
2008-01-01
of AllDifferent and its generalization, the Global Cardinality Constraint. The first delayed filtering scheme is a Monte Carlo algorithm: its running time is superior, in the worst case, to that of enforcing arc consistency after every domain event, while its filtering effectiveness is analyzed in the expected sense. The second scheme is a Las Vegas algorithm using filtering triggers: its effectiveness is the same as enforcing arc consistency after every domain event, while in the expected case it is faster by a factor of m/n, where n and m are, respectively, the number of nodes and edges...
Random hypergraphs and algorithmics
Andriamampianina, Tsiriniaina
2008-01-01
Hypergraphs are structures that can be decomposed or described; in other words, they are recursively countable. Here, we obtain exact and asymptotic enumeration results on hypergraphs by means of exponential generating functions. The number of hypergraph components is bounded, generalizing the Wright inequalities for graphs: the proof is a combinatorial understanding of the structure by inclusion-exclusion. Asymptotic results are obtained through generating functions; the proofs are, in the end, easy to read, using complex analysis and the saddle-point method. In this way, we characterized: - the components with a given number of vertices and hyperedges, by the expected size of a random hypermatching in these structures; - random hypergraphs (evolving hyperedge by hyperedge), according to the expected number of hyperedges when the first cycle appears in the evolving structure. This work is an open road to further work on random hypergraphs, such as threshold phenomena; the tools used here seem to be sufficien...
Selecting materialized views using random algorithm
Zhou, Lijuan; Hao, Zhongxiao; Liu, Chi
2007-04-01
The data warehouse is a repository of information collected from multiple, possibly heterogeneous, autonomous distributed databases. The information stored at the data warehouse is in the form of views, referred to as materialized views. The selection of materialized views is one of the most important decisions in designing a data warehouse; they are stored for the purpose of efficiently implementing on-line analytical processing queries. The first issue for the user to consider is query response time. In this paper, we therefore develop algorithms to select a set of views to materialize in a data warehouse in order to minimize the total view maintenance cost under the constraint of a given query response time. We call it the query-cost view-selection problem. First, the cost graph and cost model of the query-cost view-selection problem are presented. Second, methods for selecting materialized views using randomized algorithms are presented. The genetic algorithm is applied to the materialized view selection problem, but as the genetic process develops, producing legal solutions becomes more and more difficult, so many solutions are eliminated and the time to produce solutions lengthens. Therefore, an improved algorithm is presented in this paper, which combines simulated annealing with the genetic algorithm to solve the query-cost view-selection problem. Finally, simulation experiments are conducted to test the effectiveness and efficiency of our algorithms. The experiments show that the given methods can provide near-optimal solutions in limited time and work better in practical cases. Randomized algorithms will become invaluable tools for data warehouse evolution.
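The abstract combines simulated annealing with a genetic algorithm; the paper's cost model is not reproduced here, but a generic simulated-annealing skeleton of the kind being combined looks like the following (the toy one-dimensional cost function is a stand-in, not the view-selection cost model):

```python
import math
import random

def simulated_annealing(init, neighbor, cost, t0=1.0, cooling=0.95,
                        steps=500, seed=0):
    """Generic simulated-annealing skeleton.  The state, neighbor move,
    and cost function are supplied by the caller; in the paper they would
    encode a set of candidate materialized views and its maintenance/query
    cost, which is not reproduced here."""
    rng = random.Random(seed)
    cur, cur_cost = init, cost(init)
    best, best_cost = cur, cur_cost
    t = t0
    for _ in range(steps):
        cand = neighbor(cur, rng)
        c = cost(cand)
        # accept improvements always; accept worse moves with
        # Boltzmann probability exp(-delta / temperature)
        if c < cur_cost or rng.random() < math.exp((cur_cost - c) / max(t, 1e-12)):
            cur, cur_cost = cand, c
            if c < best_cost:
                best, best_cost = cand, c
        t *= cooling                      # geometric cooling schedule
    return best, best_cost

# toy usage: minimize (x - 7)^2 over the integers with +/-1 moves
best, bc = simulated_annealing(
    0, lambda x, rng: x + rng.choice((-1, 1)), lambda x: (x - 7) ** 2,
    steps=2000)
```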
Randomized approximate nearest neighbors algorithm.
Jones, Peter Wilcox; Osipov, Andrei; Rokhlin, Vladimir
2011-09-20
We present a randomized algorithm for the approximate nearest neighbor problem in d-dimensional Euclidean space. Given N points {x_j} in R^d, the algorithm attempts to find k nearest neighbors for each x_j, where k is a user-specified integer parameter. The algorithm is iterative, and its running time requirements are proportional to T·N·(d·(log d) + k·(d + log k)·(log N)) + N·k^2·(d + log k), with T the number of iterations performed. The memory requirements of the procedure are of the order N·(d + k). A by-product of the scheme is a data structure, permitting a rapid search for the k nearest neighbors among {x_j} for an arbitrary point x ∈ R^d. The cost of each such query is proportional to T·(d·(log d) + log(N/k)·k·(d + log k)), and the memory requirements for the requisite data structure are of the order N·(d + k) + T·(d + N). The algorithm utilizes random rotations and a basic divide-and-conquer scheme, followed by a local graph search. We analyze the scheme's behavior for certain types of distributions of {x_j} and illustrate its performance via several numerical examples.
An Optimal Online Algorithm for Halfplane Intersection
WU Jigang; JI Yongchang; CHEN Guoliang
2000-01-01
The intersection of N halfplanes is a basic problem in computational geometry and computer graphics. The optimal offline algorithm for this problem runs in time O(N log N). In this paper, an optimal online algorithm which also runs in time O(N log N) is presented. The main idea of the algorithm is to give a new definition for the left side of a given line, to assign an order to the points of a convex polygon, and then to use binary search on the ordered vertex set. The data structure used in the algorithm is no more complex than an array.
Online Performance-Improvement Algorithms
1994-08-01
Percon8 Algorithm for Random Number Generation
Dr. Mrs. Saylee Gharge
2014-05-01
In today's technology-savvy world, computer security holds prime importance. Most computer security algorithms require some amount of random data for generating public and private keys, session keys, or for other purposes. Random numbers are numbers that occur in a sequence such that future values cannot be predicted from present or past values. Random numbers find application in statistical analysis and probability theory. The many applications of randomness have led to the development of random number generation algorithms, which generate sequences of random numbers either computationally or physically. In our proposed technique, we have implemented a random number generation algorithm combining two existing techniques: the mid-square method and a linear congruential generator.
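The two building blocks the abstract names are both classical and easy to sketch. Note that the exact way Percon8 mixes them is not specified in the abstract, so the `combined_stream` composition below is purely illustrative (and none of these generators is cryptographically secure):

```python
def midsquare(seed, n_digits=4):
    """Mid-square method (von Neumann): square the seed and take the
    middle n_digits digits as the next value."""
    sq = str(seed ** 2).zfill(2 * n_digits)
    mid = (len(sq) - n_digits) // 2
    return int(sq[mid:mid + n_digits])

def lcg(x, a=1664525, c=1013904223, m=2 ** 32):
    """One linear congruential generator step: x' = (a*x + c) mod m
    (Numerical Recipes constants)."""
    return (a * x + c) % m

def combined_stream(seed, n):
    """Illustrative combination of the two generators (NOT the exact
    Percon8 mixing, which the abstract does not describe): feed each
    mid-square output through an LCG step."""
    out, s = [], seed
    for _ in range(n):
        s = midsquare(s)
        out.append(lcg(s) % 10000)
    return out
```

For example, the classic mid-square step maps 1234 to 5227 (1234² = 1522756, zero-padded to 01522756, middle four digits 5227).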
Angelic Hierarchical Planning: Optimal and Online Algorithms
2008-12-06
Marthi, Bhaskara; Russell, Stuart J.; Wolfe, Jason
Analysis of Online Composite Mirror Descent Algorithm.
Lei, Yunwen; Zhou, Ding-Xuan
2017-03-01
We study the convergence of the online composite mirror descent algorithm, which involves a mirror map to reflect the geometry of the data and a convex objective function consisting of a loss and a regularizer possibly inducing sparsity. Our error analysis provides convergence rates in terms of properties of the strongly convex differentiable mirror map and the objective function. For a class of objective functions with Hölder continuous gradients, the convergence rates of the excess (regularized) risk under polynomially decaying step sizes have the order [Formula: see text] after [Formula: see text] iterates. Our results improve the existing error analysis for the online composite mirror descent algorithm by avoiding averaging and removing boundedness assumptions, and they sharpen the existing convergence rates of the last iterate for online gradient descent without any boundedness assumptions. Our methodology mainly depends on a novel error decomposition in terms of an excess Bregman distance, refined analysis of self-bounding properties of the objective function, and the resulting one-step progress bounds.
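A concrete instantiation of online composite mirror descent uses the Euclidean mirror map, in which case the update is an online proximal gradient step; with an l1 regularizer the proximal map is soft-thresholding. The sketch below is a toy version of this special case (step sizes, regularization weight, and data are illustrative, not the general setting analyzed in the paper):

```python
import numpy as np

def online_prox_md(stream, dim, lam=0.1,
                   step=lambda t: 0.1 / (t + 1) ** 0.5):
    """Online composite mirror descent with the Euclidean mirror map
    (i.e. online proximal gradient) for least squares + l1 regularizer.
    stream yields (x, y) pairs; lam is the l1 weight; step(t) is the
    polynomially decaying step size mentioned in the abstract."""
    w = np.zeros(dim)
    for t, (x, y) in enumerate(stream):
        grad = (w @ x - y) * x                # gradient of the loss part
        eta = step(t)
        z = w - eta * grad                    # gradient (mirror) step
        # proximal step for the l1 regularizer: soft-thresholding
        w = np.sign(z) * np.maximum(np.abs(z) - eta * lam, 0.0)
    return w

# toy usage: recover a sparse target w* = (2, 0) from a noiseless stream
rng = np.random.default_rng(0)
data = [(x, 2.0 * x[0]) for x in rng.standard_normal((2000, 2))]
w = online_prox_md(iter(data), dim=2)
```

The l1 term biases the first coordinate slightly below its true value (toward 2 − lam) and keeps the second coordinate near zero.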
Online Algorithms for Parallel Job Scheduling and Strip Packing
Hurink, Johann L.; Paulus, J.J.
We consider the online scheduling problem of parallel jobs on parallel machines, $P|\mathrm{online\text{-}list}, m_j|C_{\max}$. For this problem we present a 6.6623-competitive algorithm. This improves the best known 7-competitive algorithm for this problem. The presented algorithm also applies to the problem
Online algorithm for parallel job scheduling and strip packing
Hurink, Johann L.; Paulus, J.J.
We consider the online scheduling problem of parallel jobs on parallel machines, $P|\mathrm{online\text{-}list}, m_j|C_{\max}$. For this problem we present a 6.6623-competitive algorithm. This improves the best known 7-competitive algorithm for this problem. The presented algorithm also applies
Randomized Algorithms for Matrices and Data
Mahoney, Michael W.
2012-03-01
This chapter reviews recent work on randomized matrix algorithms. By “randomized matrix algorithms,” we refer to a class of recently developed random sampling and random projection algorithms for ubiquitous linear algebra problems such as least-squares (LS) regression and low-rank matrix approximation. These developments have been driven by applications in large-scale data analysis—applications which place very different demands on matrices than traditional scientific computing applications. Thus, in this review, we will focus on highlighting the simplicity and generality of several core ideas that underlie the usefulness of these randomized algorithms in scientific applications such as genetics (where these algorithms have already been applied) and astronomy (where, hopefully, in part due to this review they will soon be applied). The work we will review here had its origins within theoretical computer science (TCS). An important feature in the use of randomized algorithms in TCS more generally is that one must identify and then algorithmically deal with relevant “nonuniformity structure” in the data. For the randomized matrix algorithms to be reviewed here and that have proven useful recently in numerical linear algebra (NLA) and large-scale data analysis applications, the relevant nonuniformity structure is defined by the so-called statistical leverage scores. Defined more precisely below, these leverage scores are basically the diagonal elements of the projection matrix onto the dominant part of the spectrum of the input matrix. As such, they have a long history in statistical data analysis, where they have been used for outlier detection in regression diagnostics. More generally, these scores often have a very natural interpretation in terms of the data and processes generating the data. For example, they can be interpreted in terms of the leverage or influence that a given data point has on, say, the best low-rank matrix approximation; and this...
Online Assignment Algorithms for Dynamic Bipartite Graphs
Sahai, Ankur
2011-01-01
This paper analyzes the problem of assigning weights to edges incrementally in a dynamic complete bipartite graph consisting of producer and consumer nodes. The objective is to minimize the overall cost while satisfying certain constraints. The cost and constraints are functions of attributes of the edges, nodes, and online service requests. The novelty of this work is that it models real-time distributed resource allocation using an approach to solve this theoretical problem. This paper studies variants of this assignment problem where the edges, producers, and consumers can disappear and reappear or their attributes can change over time. Primal-dual algorithms are used for solving these problems and their competitive ratios are evaluated.
A comparison of performance measures for online algorithms
Boyar, Joan; Irani, Sandy; Larsen, Kim Skak
2009-01-01
This paper provides a systematic study of several recently suggested measures for online algorithms in the context of a specific problem, namely, the two server problem on three colinear points. Even though the problem is simple, it encapsulates a core challenge in online algorithms which is to b...
Asymptotic normality of randomly truncated stochastic algorithms
Lelong, Jérôme
2010-01-01
We study the convergence rate of randomly truncated stochastic algorithms, which consist in the truncation of the standard Robbins-Monro procedure on an increasing sequence of compact sets. Such a truncation is often required in practice to ensure convergence when standard algorithms fail because the expected-value function grows too fast. In this work, we give a self contained proof of a central limit theorem for this algorithm under local assumptions on the expected-value function, which are fairly easy to check in practice.
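The randomly truncated procedure described here (Chen's projections) is easy to sketch in one dimension: run Robbins-Monro, and whenever an iterate leaves the current compact set, restart and enlarge the set. The toy root-finding target below is illustrative, not from the paper:

```python
import random

def truncated_robbins_monro(h_noisy, x0=0.0, steps=2000, seed=0):
    """Robbins-Monro with random truncations on an increasing sequence of
    compact sets K_j = [-(j+1), j+1]: whenever an iterate leaves the
    current set, restart at x0 and enlarge the set.  h_noisy(x, rng)
    returns a noisy observation of h(x); we seek a root of h.
    A toy one-dimensional instantiation for illustration."""
    rng = random.Random(seed)
    x, j = x0, 0
    for n in range(1, steps + 1):
        gamma = 1.0 / n                  # standard Robbins-Monro step sizes
        x_new = x - gamma * h_noisy(x, rng)
        if abs(x_new) > j + 1:           # iterate left the compact set K_j
            x, j = x0, j + 1             # truncate: restart, enlarge the set
        else:
            x = x_new
    return x

# toy usage: root of h(x) = x - 3, observed with Gaussian noise;
# note the root lies outside the initial set K_0 = [-1, 1], so the
# truncation mechanism is actually exercised
root = truncated_robbins_monro(lambda x, rng: (x - 3) + rng.gauss(0, 0.1))
```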
Coloring random graphs online without creating monochromatic subgraphs
Mütze, Torsten; Spöhel, Reto
2011-01-01
Consider the following random process: The vertices of a binomial random graph $G_{n,p}$ are revealed one by one, and at each step only the edges induced by the already revealed vertices are visible. Our goal is to assign to each vertex one from a fixed number $r$ of available colors immediately and irrevocably without creating a monochromatic copy of some fixed graph $F$ in the process. Our first main result is that for any $F$ and $r$, the threshold function for this problem is given by $p_0(F,r,n)=n^{-1/m_1^*(F,r)}$, where $m_1^*(F,r)$ denotes the so-called \emph{online vertex-Ramsey density} of $F$ and $r$. This parameter is defined via a purely deterministic two-player game, in which the random process is replaced by an adversary that is subject to certain restrictions inherited from the random setting. Our second main result states that for any $F$ and $r$, the online vertex-Ramsey density $m_1^*(F,r)$ is a computable rational number. Our lower bound proof is algorithmic, i.e., we obtain polynomial-time...
Bayesian online algorithms for learning in discrete Hidden Markov Models
Alamino, Roberto C.; Caticha, Nestor
2008-01-01
We propose and analyze two different Bayesian online algorithms for learning in discrete Hidden Markov Models and compare their performance with the already known Baldi-Chauvin Algorithm. Using the Kullback-Leibler divergence as a measure of generalization we draw learning curves in simplified situations for these algorithms and compare their performances.
Algorithms for semi on-line multiprocessor scheduling problems
(author not listed)
2002-01-01
In classical multiprocessor scheduling problems, it is assumed that the problems are considered in an off-line or on-line environment. In practice, however, problems are often not purely off-line or on-line but somewhere in between: with respect to the on-line problem, some further information about the tasks is available, which allows improving the performance of the best possible algorithms. Problems of this class are called semi on-line. The authors studied two semi on-line multiprocessor scheduling problems, in which either the total processing time of all tasks is known in advance, or all processing times lie in a given interval. They proposed approximation algorithms for minimizing the makespan and analyzed their performance guarantees. The algorithms improve the known results for the cases of 3 or more processors in the literature.
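The fully on-line baseline that semi on-line algorithms improve upon is Graham's list scheduling, which assigns each arriving job to the currently least-loaded machine and is (2 − 1/m)-competitive for makespan. A minimal sketch (this is the classical baseline, not the semi on-line algorithms of the paper):

```python
import heapq

def list_scheduling(jobs, m):
    """Graham's classical on-line list scheduling: assign each arriving
    job to the currently least-loaded machine.  (2 - 1/m)-competitive
    for makespan; the semi on-line algorithms above improve on this by
    exploiting extra information such as the known total processing time."""
    loads = [(0.0, i) for i in range(m)]   # (load, machine id) min-heap
    heapq.heapify(loads)
    for p in jobs:
        load, i = heapq.heappop(loads)     # least-loaded machine
        heapq.heappush(loads, (load + p, i))
    return max(load for load, _ in loads)  # makespan
```

On jobs [3, 3, 2, 2, 2] with 2 machines, list scheduling produces makespan 7 while the optimum is 6 ({3, 3} vs {2, 2, 2}), illustrating the competitive gap.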
Randomized algorithms for matrices and data
Mahoney, Michael W
2011-01-01
Randomized algorithms for very large matrix problems have received a great deal of attention in recent years. Much of this work was motivated by problems in large-scale data analysis. Although this work had its origins within theoretical computer science, where researchers were interested in proving worst-case bounds, i.e., bounds without any assumptions at all on the input data, researchers from numerical linear algebra, statistics, applied mathematics, data analysis, and machine learning, as well as domain scientists have subsequently extended and applied these methods in important ways. Although this has been great for the development of the area and for the technology transfer of theoretical ideas into practical applications, this interdisciplinarity has thus far sometimes obscured the underlying simplicity and generality of the core ideas. This review will provide a detailed overview of recent work on randomized algorithms for matrix problems, with an emphasis on a few simple core ideas that underlie not...
Online semi-supervised learning: algorithm and application in metagenomics
S. Imangaliyev; B. Keijser; W. Crielaard; E. Tsivtsivadze
2013-01-01
As the amount of metagenomic data grows rapidly, online statistical learning algorithms are poised to play a key role in metagenome analysis tasks. Frequently, data are only partially labeled, namely, the dataset contains partial information about the problem of interest. This work presents an algorithm and...
Online learning algorithm for ensemble of decision rules
Chikalov, Igor
2011-01-01
We describe an online learning algorithm that builds a system of decision rules for a classification problem. Rules are constructed according to the minimum description length principle by a greedy algorithm or using the dynamic programming approach. © 2011 Springer-Verlag.
A new Hedging algorithm and its application to inferring latent random variables
Freund, Yoav
2008-01-01
We present a new online learning algorithm for cumulative discounted gain. This learning algorithm does not use exponential weights on the experts. Instead, it uses a weighting scheme that depends on the regret of the master algorithm relative to the experts. In particular, experts whose discounted cumulative gain is smaller (worse) than that of the master algorithm receive zero weight. We also sketch how a regret-based algorithm can be used as an alternative to Bayesian averaging in the context of inferring latent random variables.
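The exponential-weights scheme that this paper's algorithm explicitly departs from is the classical Hedge algorithm, sketched below for losses in [0, 1] (this is the baseline, not the new regret-based weighting of the paper):

```python
import math

def hedge(loss_matrix, eta=0.5):
    """Classical Hedge (exponential weights): expert i's weight at round t
    is exp(-eta * cumulative_loss_i), and the master predicts with the
    weighted average of the experts.  Returns the master's expected loss
    per round.  loss_matrix[t][i] is expert i's loss in round t, in [0,1].
    This is the baseline the paper's new algorithm departs from."""
    n = len(loss_matrix[0])
    cum = [0.0] * n                       # cumulative loss per expert
    master = []                           # master's expected loss per round
    for losses in loss_matrix:
        ws = [math.exp(-eta * c) for c in cum]
        total = sum(ws)
        master.append(sum(w * l for w, l in zip(ws, losses)) / total)
        cum = [c + l for c, l in zip(cum, losses)]
    return master

# expert 0 is always right, expert 1 always wrong: the master's per-round
# loss decays from 0.5 toward the best expert's loss of 0
m = hedge([[0.0, 1.0]] * 20)
```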
Models and algorithms for stochastic online scheduling
Megow, N.; Uetz, Marc Jochen; Vredeveld, T.
We consider a model for scheduling under uncertainty. In this model, we combine the main characteristics of online and stochastic scheduling in a simple and natural way. Job processing times are assumed to be stochastic, but in contrast to traditional stochastic scheduling models, we assume that
Online Assignment Algorithms for Dynamic Bipartite Graphs
Sahai, Ankur
2011-01-01
This paper analyzes the problem of assigning weights to edges incrementally in a dynamic complete bipartite graph consisting of producer and consumer nodes. The objective is to minimize the overall cost while satisfying certain constraints. The cost and constraints are functions of attributes of the edges, nodes and online service requests. Novelty of this work is that it models real-time distributed resource allocation using an approach to solve this theoretical problem. This paper studies v...
Rent, Lease or Buy: Randomized Algorithms for Multislope Ski Rental
Lotker, Zvi; Rawitz, Dror
2008-01-01
In the Multislope Ski Rental problem, the user needs a certain resource for some unknown period of time. To use the resource, the user must subscribe to one of several options, each of which consists of a one-time setup cost ("buying price") and a cost proportional to the duration of the usage ("rental rate"). The larger the price, the smaller the rent. The actual usage time is determined by an adversary, and the goal of an algorithm is to minimize the cost by choosing the best option at any point in time. Multislope Ski Rental is a natural generalization of the classical Ski Rental problem (where the only options are pure rent and pure buy), which is one of the fundamental problems of online computation. The Multislope Ski Rental problem is an abstraction of many problems where online decisions cannot be modeled by just two options, e.g., power management in systems which can be shut down in parts. In this paper we study randomized algorithms for Multislope Ski Rental. Our results include the best possibl...
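The classical two-slope special case mentioned above has a well-known deterministic break-even strategy: rent until the rent paid would reach the buying price, then buy. It pays at most twice the offline optimum. A minimal sketch (this is the textbook special case, not the paper's randomized multislope algorithms):

```python
def ski_rental_cost(buy_price, rent, usage_days):
    """Classical deterministic break-even strategy for two-slope ski
    rental: rent until the accumulated rent would reach the buying price,
    then buy.  Returns the total cost paid; it is at most twice the
    offline optimum min(usage_days * rent, buy_price)."""
    days_before_buy = buy_price // rent   # rent this many days, then buy
    if usage_days <= days_before_buy:
        return usage_days * rent          # stopped before ever buying
    return days_before_buy * rent + buy_price
```

The worst case is the adversary stopping right after the purchase: with buy price 10 and rent 1, usage of 11 days costs 20 against an offline optimum of 10, giving the ratio of 2.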
On-line EM algorithm for the normalized gaussian network.
Sato, M; Ishii, S
2000-02-01
A normalized gaussian network (NGnet) (Moody & Darken, 1989) is a network of local linear regression units. The model softly partitions the input space by normalized gaussian functions, and each local unit linearly approximates the output within the partition. In this article, we propose a new on-line EM algorithm for the NGnet, which is derived from the batch EM algorithm (Xu, Jordan, & Hinton, 1995) by introducing a discount factor. We show that the on-line EM algorithm is equivalent to the batch EM algorithm if a specific scheduling of the discount factor is employed. In addition, we show that the on-line EM algorithm can be considered as a stochastic approximation method to find the maximum likelihood estimator. A new regularization method is proposed in order to deal with a singular input distribution. In order to manage dynamic environments, where the input-output distribution of data changes over time, unit manipulation mechanisms such as unit production, unit deletion, and unit division are also introduced, based on a probabilistic interpretation. Experimental results show that our approach is suitable for function approximation problems in dynamic environments. We also apply our on-line EM algorithm to robot dynamics problems and compare our algorithm with the mixtures-of-experts family.
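The discount-factor idea can be illustrated on a single Gaussian unit (a deliberate simplification of the NGnet; the function name and defaults are ours): exponentially weighted running sufficient statistics replace the batch sums, so older data are gradually forgotten.

```python
def online_em_gaussian(data, discount=0.999):
    """Single-Gaussian illustration of the discount-factor idea behind
    on-line EM: the weighted sufficient statistics <1>, <x>, <x^2> are
    updated per sample, and mean/variance are re-estimated from them."""
    s1 = sx = sxx = 0.0
    mu, var = 0.0, 1.0
    for x in data:
        # discount old statistics, then accumulate the new sample
        s1 = discount * s1 + 1.0
        sx = discount * sx + x
        sxx = discount * sxx + x * x
        # M-step from the running statistics
        mu = sx / s1
        var = max(sxx / s1 - mu * mu, 1e-8)
    return mu, var
```

With discount = 1 this reduces to the ordinary running maximum-likelihood estimate; a discount slightly below 1 gives an effective sliding window, which is what lets the estimator track drifting input-output distributions.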
Near-Optimal Algorithms for Online Matrix Prediction
Hazan, Elad; Shalev-Shwartz, Shai
2012-01-01
In several online prediction problems of recent interest the comparison class is composed of matrices with bounded entries. For example, in the online max-cut problem, the comparison class is matrices which represent cuts of a given graph and in online gambling the comparison class is matrices which represent permutations over n teams. Another important example is online collaborative filtering in which a widely used comparison class is the set of matrices with a small trace norm. In this paper we isolate a property of matrices, which we call (beta,tau)-decomposability, and derive an efficient online learning algorithm, that enjoys a regret bound of O*(sqrt(beta tau T)) for all problems in which the comparison class is composed of (beta,tau)-decomposable matrices. By analyzing the decomposability of cut matrices, triangular matrices, and low trace-norm matrices, we derive near optimal regret bounds for online max-cut, online gambling, and online collaborative filtering. In particular, this resolves (in the af...
Online Feature Selection of Class Imbalance via PA Algorithm
Chao Han; Yun-Kun Tan; Jin-Hui Zhu; Yong Guo; Jian Chen; Qing-Yao Wu
2016-01-01
Imbalance classification techniques have been frequently applied in many machine learning application domains where the number of the majority (or positive) class of a dataset is much larger than that of the minority (or negative) class. Meanwhile, feature selection (FS) is one of the key techniques for the high-dimensional classification task, in a manner which greatly improves the classification performance and the computational efficiency. However, most studies of feature selection and imbalance classification are restricted to off-line batch learning, which is not well adapted to some practical scenarios. In this paper, we aim to solve the high-dimensional imbalanced classification problem accurately and efficiently with only a small number of active features in an online fashion, and we propose two novel online learning algorithms for this purpose. In our approach, a classifier which involves only a small and fixed number of features is constructed to classify a sequence of imbalanced data received in an online manner. We formulate the construction of such an online learner as an optimization problem and use an iterative approach to solve the problem based on the passive-aggressive (PA) algorithm as well as a truncated gradient (TG) method. We evaluate the performance of the proposed algorithms on several real-world datasets, and our experimental results demonstrate the effectiveness of the proposed algorithms in comparison with the baselines.
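A single round of the underlying machinery can be sketched as follows (a hypothetical simplification: one PA-I step followed by hard truncation, standing in for the paper's full PA/TG scheme with its imbalance handling).

```python
def pa_truncated_update(w, x, y, C=1.0, theta=0.01):
    """One passive-aggressive (PA-I) update on example (x, y), y in
    {-1, +1}, followed by truncated-gradient style shrinkage that
    zeroes small weights to keep the active feature set small."""
    margin = y * sum(wi * xi for wi, xi in zip(w, x))
    loss = max(0.0, 1.0 - margin)               # hinge loss
    if loss > 0.0:
        norm2 = sum(xi * xi for xi in x) or 1.0
        tau = min(C, loss / norm2)              # PA-I step size
        w = [wi + tau * y * xi for wi, xi in zip(w, x)]
    # truncate small coordinates to zero for sparsity
    return [0.0 if abs(wi) < theta else wi for wi in w]
```

The passive part (no update when the margin is already at least 1) and the aggressive part (a step just large enough to satisfy the margin, capped by C) are standard PA-I; the truncation step is what keeps the number of active features fixed and small.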
FPGA Online Tracking Algorithm for the PANDA Straw Tube Tracker
Liang, Yutie; Ye, Hua; Galuska, Martin J.; Gessler, Thomas; Kuhn, Wolfgang; Lange, Jens Soren; Wagner, Milan N.; Liu, Zhen'an; Zhao, Jingzhou
2017-06-01
A novel FPGA based online tracking algorithm for helix track reconstruction in a solenoidal field, developed for the PANDA spectrometer, is described. Employing the Straw Tube Tracker detector with 4636 straw tubes, the algorithm includes a complex track finder and a track fitter. Implemented in VHDL, the algorithm is tested on a Xilinx Virtex-4 FX60 FPGA chip with different types of events, at different event rates. A processing time of 7 $\\mu$s per event for an average of 6 charged tracks is obtained. The momentum resolution is about 3\\% (4\\%) for $p_t$ ($p_z$) at 1 GeV/c. Compared to the algorithm running on a CPU chip (single core Intel Xeon E5520 at 2.26 GHz), an improvement of 3 orders of magnitude in processing time is obtained. The algorithm can handle severe overlapping of events, which is typical for interaction rates above 10 MHz.
FPGA Online Tracking Algorithm for the PANDA Straw Tube Tracker
Liang, Yutie; Galuska, Martin J; Gessler, Thomas; Kühn, Wolfgang; Lange, Jens Sören; Wagner, Milan N; Liu, Zhen'an; Zhao, Jingzhou
2016-01-01
A novel FPGA based online tracking algorithm for helix track reconstruction in a solenoidal field, developed for the PANDA spectrometer, is described. Employing the Straw Tube Tracker detector with 4636 straw tubes, the algorithm includes a complex track finder and a track fitter. Implemented in VHDL, the algorithm is tested on a Xilinx Virtex-4 FX60 FPGA chip with different types of events, at different event rates. A processing time of 7 $\\mu$s per event for an average of 6 charged tracks is obtained. The momentum resolution is about 3\\% (4\\%) for $p_t$ ($p_z$) at 1 GeV/c. Compared to the algorithm running on a CPU chip (single core Intel Xeon E5520 at 2.26 GHz), an improvement of 3 orders of magnitude in processing time is obtained. The algorithm can handle severe overlapping of events, which is typical for interaction rates above 10 MHz.
Online Feature Extraction Algorithms for Data Streams
Ozawa, Seiichi
Along with the development of network technology and high-performance small devices such as surveillance cameras and smart phones, various kinds of multimodal information (texts, images, sound, etc.) are captured in real time and shared among systems through networks. Such information is given to a system as a stream of data. In a person identification system based on face recognition, for example, image frames of a face are captured by a video camera and given to the system for identification purposes. Those face images are considered as a stream of data. Therefore, in order to identify a person more accurately under realistic environments, a high-performance feature extraction method for streaming data, which can be autonomously adapted to changes in data distributions, is needed. In this review paper, we discuss a recent trend in online feature extraction for streaming data. A variety of feature extraction methods for streaming data have been proposed recently. Due to space limitations, we focus here on incremental principal component analysis.
Algorithmic Randomness as Foundation of Inductive Reasoning and Artificial Intelligence
Hutter, Marcus
2011-01-01
This article is a brief personal account of the past, present, and future of algorithmic randomness, emphasizing its role in inductive inference and artificial intelligence. It is written for a general audience interested in science and philosophy. Intuitively, randomness is a lack of order or predictability. If randomness is the opposite of determinism, then algorithmic randomness is the opposite of computability. Besides many other things, these concepts have been used to quantify Ockham's razor, solve the induction problem, and define intelligence.
On-line learning algorithms for locally recurrent neural networks.
Campolucci, P; Uncini, A; Piazza, F; Rao, B D
1999-01-01
This paper focuses on on-line learning procedures for locally recurrent neural networks, with emphasis on the multilayer perceptron (MLP) with infinite impulse response (IIR) synapses and its variations, which include generalized output and activation feedback multilayer networks (MLNs). We propose a new gradient-based procedure called recursive backpropagation (RBP) whose on-line version, causal recursive backpropagation (CRBP), presents some advantages with respect to the other on-line training methods. The new CRBP algorithm includes as particular cases backpropagation (BP), temporal backpropagation (TBP), backpropagation for sequences (BPS), and the Back-Tsoi algorithm, among others, thereby providing a unifying view on gradient calculation techniques for recurrent networks with local feedback. The only learning method that has been proposed for locally recurrent networks with no architectural restriction is the one by Back and Tsoi. The proposed algorithm has better stability and higher speed of convergence with respect to the Back-Tsoi algorithm, which is supported by the theoretical development and confirmed by simulations. The computational complexity of CRBP is comparable with that of the Back-Tsoi algorithm, e.g., less than a factor of 1.5 for usual architectures and parameter settings. The superior performance of the new algorithm, however, easily justifies this small increase in computational burden. In addition, the general paradigms of truncated BPTT and RTRL are applied to networks with local feedback and compared with the new CRBP method. The simulations show that CRBP exhibits similar performance, and the detailed analysis of complexity reveals that CRBP is much simpler and easier to implement; e.g., CRBP is local in space and in time while RTRL is not local in space.
Gradient maintenance: A new algorithm for fast online replanning
Ahunbay, Ergun E., E-mail: eahunbay@mcw.edu; Li, X. Allen [Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, Wisconsin 53226 (United States)
2015-06-15
Purpose: Clinical use of online adaptive replanning has been hampered by the impractically long time required to delineate volumes based on the image of the day. The authors propose a new replanning algorithm, named gradient maintenance (GM), which does not require the delineation of organs at risk (OARs), and can enhance automation, drastically reducing planning time and improving consistency and throughput of online replanning. Methods: The proposed GM algorithm is based on the hypothesis that if the dose gradient toward each OAR in the daily anatomy can be maintained the same as that in the original plan, the intended plan quality of the original plan will be preserved in the adaptive plan. The algorithm requires a series of partial concentric rings (PCRs) to be automatically generated around the target toward each OAR on the planning and the daily images. The PCRs are used in the daily optimization objective function. The PCR dose constraints are generated with dose–volume data extracted from the original plan. To demonstrate this idea, GM plans generated using daily images acquired using an in-room CT were compared to regular optimization and image guided radiation therapy repositioning plans for representative prostate and pancreatic cancer cases. Results: The adaptive replanning using the GM algorithm, requiring only the target contour from the CT of the day, can be completed within 5 min without using high-power hardware. The obtained adaptive plans were almost as good as the regular optimization plans and were better than the repositioning plans for the cases studied. Conclusions: The newly proposed GM replanning algorithm, requiring only target delineation, not full delineation of OARs, substantially increased planning speed for online adaptive replanning. The preliminary results indicate that the GM algorithm may be a solution to improve the ability for automation and may be especially suitable for sites with small-to-medium size targets surrounded by
Online algorithms for optimal energy distribution in microgrids
Wang, Yu; Nelms, R Mark
2015-01-01
Presenting an optimal energy distribution strategy for microgrids in a smart grid environment, and featuring a detailed analysis of the mathematical techniques of convex optimization and online algorithms, this book provides readers with essential content on how to achieve multi-objective optimization that takes into consideration power subscribers, energy providers and grid smoothing in microgrids. Featuring detailed theoretical proofs and simulation results that demonstrate and evaluate the correctness and effectiveness of the algorithm, this text explains step-by-step how the problem can b
Randomized Speedup of the Bellman-Ford Algorithm
Bannister, Michael J
2011-01-01
We describe a variant of the Bellman-Ford algorithm for single-source shortest paths in graphs with negative edges but no negative cycles that randomly permutes the vertices and uses this randomized order to process the vertices within each pass of the algorithm. The modification reduces the worst-case expected number of relaxation steps of the algorithm, compared to the previously-best variant by Yen (1970), by a factor of 2/3 with high probability. We also use our high probability bound to add negative cycle detection to the randomized algorithm.
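The vertex-permutation idea can be sketched as follows (the graph encoding and names are ours); correctness is the same as standard Bellman-Ford, and only the expected number of useful relaxation passes improves.

```python
import random

def randomized_bellman_ford(n, edges, source):
    """Bellman-Ford with a random vertex permutation fixed up front;
    each pass relaxes outgoing edges in that order. Handles negative
    edge weights, assuming no negative cycles."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    order = list(range(n))
    random.shuffle(order)                  # the randomized vertex order
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w))
    for _ in range(n - 1):                 # at most n-1 passes
        changed = False
        for u in order:
            if dist[u] == INF:
                continue
            for v, w in adj[u]:
                if dist[u] + w < dist[v]:  # relaxation step
                    dist[v] = dist[u] + w
                    changed = True
        if not changed:                    # early exit when converged
            break
    return dist
```

Because every pass processes all vertices and up to n-1 passes are allowed, the returned distances are exact for any permutation; the random order only affects how quickly the early-exit condition triggers.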
Research on Algorithm Recommended by Online Education for Big Data
Feng Tao
2015-01-01
“Big data” is becoming a hot topic on the Internet. The long-tail problem of massive online courses has also become the biggest headache for online-education operation teams. Presenting the courses a reader most wants is the key to improving the quality of online education. A personalized recommendation system discovers readers' interest tendencies based on existing user data, item data, and interaction data, and thus provides personalized product recommendations for readers. This article builds on two kinds of algorithms, namely content-based and collaborative filtering recommendation, to propose an improved integration scheme, which makes good use of existing data to discover useful knowledge for reader recommendation. The method first solves the sparsity problem in traditional collaborative filtering, and meanwhile starts from the global structural relations among courses to analyze the relationship between reader and course more comprehensively. The algorithm improves recommendation accuracy from multiple angles and provides a feasible method for precise recommendation of online educational video.
Decoherence in optimized quantum random-walk search algorithm
Zhang, Yu-Chao; Bao, Wan-Su; Wang, Xiang; Fu, Xiang-Qun
2015-08-01
This paper investigates the effects of decoherence generated by broken-link-type noise in the hypercube on an optimized quantum random-walk search algorithm. When the hypercube occurs with random broken links, the optimized quantum random-walk search algorithm with decoherence is depicted through defining the shift operator which includes the possibility of broken links. For a given database size, we obtain the maximum success rate of the algorithm and the required number of iterations through numerical simulations and analysis when the algorithm is in the presence of decoherence. Then the computational complexity of the algorithm with decoherence is obtained. The results show that the ultimate effect of broken-link-type decoherence on the optimized quantum random-walk search algorithm is negative. Project supported by the National Basic Research Program of China (Grant No. 2013CB338002).
White Noise in Quantum Random Walk Search Algorithm
MA Lei; DU Jiang-Feng; LI Yun; LI Hui; KWEK L. C.; OH C. H.
2006-01-01
The quantum random walk is a possible approach to construct new quantum search algorithms. It has been shown by Shenvi et al. [Phys. Rev. A 67 (2003) 52307] that such an algorithm can perform an oracle search on a database of N items with O(√N) calls to the oracle, yielding a speedup similar to other quantum search algorithms.
MATRIX ALGEBRA ALGORITHM OF STRUCTURE RANDOM RESPONSE NUMERICAL CHARACTERISTICS
[No authors listed]
2003-01-01
A new algorithm for the numerical characteristics of structure random response, named the matrix algebra algorithm of structure analysis, is presented. Using the algorithm, the numerical characteristics of structure random response can easily be obtained by directly solving linear matrix equations rather than structure motion differential equations. Moreover, in order to solve the corresponding linear matrix equations, a fast numerical integration algorithm is presented. Then, according to the results, dynamic design and life-span estimation can be done. Besides, the new algorithm can handle the response of structures with non-proportional damping.
QUEST: Eliminating Online Supervised Learning for Efficient Classification Algorithms
Ardjan Zwartjes
2016-10-01
In this work, we introduce QUEST (QUantile Estimation after Supervised Training), an adaptive classification algorithm for Wireless Sensor Networks (WSNs) that eliminates the necessity for online supervised learning. Online processing is important for many sensor network applications. Transmitting raw sensor data puts high demands on the battery, reducing network lifetime. By merely transmitting partial results or classifications based on the sampled data, the amount of traffic on the network can be significantly reduced. Such classifications can be made by learning-based algorithms using sampled data. An important issue, however, is the training phase of these learning-based algorithms. Training a deployed sensor network requires a lot of communication and an impractical amount of human involvement. QUEST is a hybrid algorithm that combines supervised learning in a controlled environment with unsupervised learning at the location of deployment. Using the SITEX02 dataset, we demonstrate that the presented solution works with a performance penalty of less than 10% in 90% of the tests. Under some circumstances, it even outperforms a network of classifiers completely trained with supervised learning. As a result, the need for on-site supervised learning and communication for training is completely eliminated by our solution.
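The quantile idea can be sketched in a few lines (a hypothetical one-feature reduction of QUEST, not the paper's algorithm: the lab phase learns at which quantile the decision boundary sits; the deployed phase estimates that quantile from unlabeled on-site data).

```python
def quest_threshold(supervised_labels, deployed_values):
    """Learn the class balance (target quantile) from labeled lab data,
    then estimate that quantile on unlabeled field readings to obtain a
    site-specific decision threshold -- no on-site labels required."""
    # supervised phase: fraction of negative examples = boundary quantile
    q = sum(1 for y in supervised_labels if y == 0) / len(supervised_labels)
    # unsupervised phase: take the q-quantile of the on-site readings
    s = sorted(deployed_values)
    return s[min(int(q * len(s)), len(s) - 1)]
```

The appeal for WSNs is that the deployed phase needs only sorted local samples, so the threshold adapts to each node's sensing environment without any training traffic.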
Online Algorithms for Adaptive Optimization in Heterogeneous Delay Tolerant Networks
Wissam Chahin
2013-12-01
Delay Tolerant Networks (DTNs) are an emerging type of networks which do not need a predefined infrastructure. In fact, data forwarding in DTNs relies on the contacts among nodes, which may possess different features: radio range, battery consumption, and radio interfaces. On the other hand, efficient message delivery under limited resources, e.g., battery or storage, requires optimizing forwarding policies. We tackle optimal forwarding control for a DTN composed of nodes of different types, forming a so-called heterogeneous network. Using our model, we characterize the optimal policies and provide a suitable framework to design a new class of multi-dimensional stochastic approximation algorithms working for heterogeneous DTNs. Crucially, our proposed algorithms drive the source node online to the optimal operating point without requiring explicit estimation of network parameters. A thorough analysis of the convergence properties and stability of our algorithms is presented.
Non-Linguistic Vocal Event Detection Using Online Random Forest
Abou-Zleikha, Mohamed; Tan, Zheng-Hua; Christensen, Mads Græsbøll
2014-01-01
Accurate detection of non-linguistic vocal events in social signals can have a great impact on the applicability of speech-enabled interactive systems. In this paper, we investigate the use of random forests for vocal event detection. The random forest technique has been successfully employed in many areas such as object detection, face recognition, and audio event detection. This paper proposes to use the online random forest technique for detecting laughter and filler, and for analyzing the importance of various features for non-linguistic vocal event classification through permutation. The results show that, according to the Area Under Curve measure, the online random forest achieved 88.1% compared to 82.9% obtained by the baseline support vector machines for laughter classification, and 86.8% compared to 83.6% for filler classification.
Topics in Randomized Algorithms for Numerical Linear Algebra
Holodnak, John T.
In this dissertation, we present results for three topics in randomized algorithms. Each topic is related to random sampling. We begin by studying a randomized algorithm for matrix multiplication that randomly samples outer products. We show that if a set of deterministic conditions is satisfied, then the algorithm can compute the exact product. In addition, we show probabilistic bounds on the two-norm relative error of the algorithm. In the second part, we discuss the sensitivity of leverage scores to perturbations. Leverage scores are scalar quantities that give a notion of importance to the rows of a matrix. They are used as sampling probabilities in many randomized algorithms. We show bounds on the difference between the leverage scores of a matrix and a perturbation of the matrix. In the last part, we approximate functions over an active subspace of parameters. To identify the active subspace, we apply an algorithm that relies on a random sampling scheme. We show bounds on the accuracy of the active subspace identification algorithm and construct an approximation to a function with 3556 parameters using a ten-dimensional active subspace.
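The outer-product sampling scheme from the first part can be sketched as follows (pure-Python and unoptimized; the norm-proportional sampling probabilities are the standard choice for this estimator, not necessarily the deterministic conditions the dissertation analyzes).

```python
import random

def sampled_matmul(A, B, num_samples):
    """Approximate the product A @ B by sampling whole outer products
    A[:,k] * B[k,:] with probability proportional to the product of
    their norms; the rescaled sum is an unbiased estimator of A @ B."""
    m, n, p = len(A), len(A[0]), len(B[0])
    # sampling probabilities proportional to ||A[:,k]|| * ||B[k,:]||
    norms = []
    for k in range(n):
        ca = sum(A[i][k] ** 2 for i in range(m)) ** 0.5
        rb = sum(B[k][j] ** 2 for j in range(p)) ** 0.5
        norms.append(ca * rb)
    total = sum(norms) or 1.0
    probs = [x / total for x in norms]
    C = [[0.0] * p for _ in range(m)]
    for _ in range(num_samples):
        k = random.choices(range(n), weights=probs)[0]
        scale = 1.0 / (num_samples * probs[k])   # importance weight
        for i in range(m):
            for j in range(p):
                C[i][j] += scale * A[i][k] * B[k][j]
    return C
```

Each sampled term contributes an entire rank-one outer product, so the cost per sample is O(mp) and the estimator's variance shrinks as the number of samples grows.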
Clonal Selection Algorithm Based Iterative Learning Control with Random Disturbance
Yuanyuan Ju
2013-01-01
The clonal selection algorithm is improved and proposed as a method to solve optimization problems in iterative learning control, and a clonal selection algorithm based optimal iterative learning control algorithm with random disturbance is proposed. In the algorithm, the size of the search space is decreased and the convergence speed of the algorithm is increased at the same time. In addition, a model-modifying device is used in the algorithm to cope with the uncertainty in the plant model. Simulations show that the convergence speed is satisfactory regardless of whether or not the plant model is precise, even for nonlinear plants. The simulation tests verify that the controlled system with random disturbance can reach stability by using the improved iterative learning control law, but not the traditional control law.
Miszczak, J A
2012-01-01
We present a new version of TRQS package for Mathematica computing system. The package allows harnessing quantum random number generators (QRNG) for investigating the statistical properties of quantum states. It implements a number of functions for generating random states. The new version of the package adds the ability to use the on-line quantum random number generator service and implements new functions for retrieving lists of random numbers. Thanks to the introduced improvements, the new version provides faster access to high-quality sources of random numbers and can be used in simulations requiring large amount of random data.
Optimal Preemptive Online Algorithms for Scheduling with Known Largest Size on Two Uniform Machines
Yong HE; Yi Wei JIANG; Hao ZHOU
2007-01-01
In this paper, we consider the semi-online preemptive scheduling problem with known largest job size on two uniform machines. Our goal is to maximize the continuous period of time (starting from time zero) when both machines are busy, which is equivalent to maximizing the minimum machine completion time if idle time is not introduced. We design optimal deterministic semi-online algorithms for every machine speed ratio s ∈ [1, ∞), and show that idle time is required to achieve optimality during the assignment procedure of the algorithm for some values of s. The competitive ratio of the algorithms is (s^2 + 3s + 1)/(s^2 + 2s + 1), which matches the randomized lower bound for every s ≥ 1. Hence randomization does not help for the discussed preemptive scheduling problem.
Semi-Online Algorithms for Scheduling with Machine Cost
Yi-Wei Jiang; Yong He
2006-01-01
In this paper, we consider the following semi-online List Model problem with known total size. We are given a sequence of independent jobs with positive sizes, which must be assigned to be processed on machines. No machines are initially provided, and when a job is revealed the algorithm has the option to purchase new machines. By normalizing all job sizes and machine cost, we assume that the cost of purchasing one machine is 1. We further know the total size of all jobs in advance. The objective is to minimize the sum of the makespan and the number of machines to be purchased. Both non-preemptive and preemptive versions are considered. For the non-preemptive version, we present a new lower bound 6/5 which improves the known lower bound 1.161. For the preemptive version, we present an optimal semi-online algorithm with a competitive ratio of 1 in the case that the total size is not greater than 4, and an algorithm with a competitive ratio of 5/4 otherwise, while a lower bound 1.0957 is also presented for general case.
An efficient and impartial online algorithm for kidney assignment network
Yu-jue Wang; Jia-yin Wang; Pei-jia Tang; Yi-tuo Ye
2009-01-01
An online algorithm balancing the efficiency and equity principles is proposed for the kidney resource assignment when only the current patient and resource information is known to the assignment network. In the algorithm, the assignment is made according to the priority, which is calculated according to the efficiency principle and the equity principle. The efficiency principle is concerned with the post-transplantation immunity spending caused by the possible post-operation immunity rejection and the patient's mental depression due to the HLA mismatch. The equity principle is concerned with three other factors, namely the treatment spending incurred starting from the day of registering with the kidney assignment network, the post-operation immunity spending, and the negative effects of waiting for kidney resources on the clinical efficiency. The competitive analysis conducted through computer simulation indicates that the efficiency competitive ratio is between 6.29 and 10.43 and the equity competitive ratio is between 1.31 and 5.21, demonstrating that the online algorithm is of great significance in application.
Online Optimal Controller Design using Evolutionary Algorithm with Convergence Properties
Yousef Alipouri
2014-06-01
Many real-world applications require minimization of a cost function, the criterion by which optimality is judged. In control engineering, this criterion is used in the design of optimal controllers. Cost function optimization has difficulties, including calculating the gradient function and a lack of information about the system and the control loop. In this article, for the first time, gradient memetic evolutionary programming is proposed for minimization of non-convex cost functions that have been defined in control engineering. Moreover, stability and convergence of the proposed algorithm are proved. Besides, it is modified to be used in online optimization. To achieve this, the sign of the gradient function is utilized. For calculating the sign of the gradient, there is no need to know the cost function's shape. The gradient functions are estimated by the algorithm. The proposed algorithm is used to design a PI controller for the nonlinear benchmark system CSTR (Continuous Stirred Tank Reactor) by online and off-line approaches.
Genetic algorithms as global random search methods
Peck, Charles C.; Dhawan, Atam P.
1995-01-01
Genetic algorithm behavior is described in terms of the construction and evolution of the sampling distributions over the space of candidate solutions. This novel perspective is motivated by analysis indicating that the schema theory is inadequate for completely and properly explaining genetic algorithm behavior. Based on the proposed theory, it is argued that the similarities of candidate solutions should be exploited directly, rather than encoding candidate solutions and then exploiting their similarities. Proportional selection is characterized as a global search operator, and recombination is characterized as the search process that exploits similarities. Sequential algorithms and many deletion methods are also analyzed. It is shown that by properly constraining the search breadth of recombination operators, convergence of genetic algorithms to a global optimum can be ensured.
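The selection-as-global-search and recombination-as-similarity-exploitation view can be made concrete with a minimal real-coded GA (all operator choices and parameters below are illustrative, not the paper's).

```python
import random

def simple_ga(fitness, dim, pop_size=40, gens=60, mut=0.1):
    """Minimal real-coded genetic algorithm: proportional selection
    acts as a global search operator over the sampling distribution,
    and arithmetic recombination exploits similarities between the
    selected candidate solutions."""
    pop = [[random.uniform(-5, 5) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(gens):
        scores = [fitness(ind) for ind in pop]
        lo = min(scores)
        # proportional (fitness-proportionate) selection weights
        weights = [s - lo + 1e-9 for s in scores]
        new_pop = []
        for _ in range(pop_size):
            a, b = random.choices(pop, weights=weights, k=2)
            child = [(x + y) / 2 for x, y in zip(a, b)]  # recombination
            if random.random() < mut:
                i = random.randrange(dim)
                child[i] += random.gauss(0, 0.5)         # mutation
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)
```

Viewed through the paper's lens, each generation reshapes the sampling distribution over candidate solutions: selection concentrates probability mass on promising regions, while recombination samples between similar parents, directly exploiting their shared structure.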
Randomized Algorithms for Systems and Control: Theory and Applications
2008-05-01
Lecture notes by Roberto Tempo (IEIIT-CNR) for the NATO Lecture Series "Randomized Algorithms for Systems and Control: Theory and Applications," held in Glasgow, Pamplona, and Cleveland, 2008. References include R. Tempo, G. Calafiore and F. Dabbene, "Randomized Algorithms for Analysis and Control of Uncertain Systems," Springer-Verlag, London, 2005, and R. Tempo and H. Ishii, "Monte Carlo and Las Vegas...
EXTREME: an online EM algorithm for motif discovery
Quang, Daniel; Xie, Xiaohui
2014-01-01
Motivation: Identifying regulatory elements is a fundamental problem in the field of gene transcription. Motif discovery—the task of identifying the sequence preference of transcription factor proteins, which bind to these elements—is an important step in this challenge. MEME is a popular motif discovery algorithm. Unfortunately, MEME’s running time scales poorly with the size of the dataset. Experiments such as ChIP-Seq and DNase-Seq are providing a rich amount of information on the binding preference of transcription factors. MEME cannot discover motifs in data from these experiments in a practical amount of time without a compromising strategy such as discarding a majority of the sequences. Results: We present EXTREME, a motif discovery algorithm designed to find DNA-binding motifs in ChIP-Seq and DNase-Seq data. Unlike MEME, which uses the expectation-maximization algorithm for motif discovery, EXTREME uses the online expectation-maximization algorithm to discover motifs. EXTREME can discover motifs in large datasets in a practical amount of time without discarding any sequences. Using EXTREME on ChIP-Seq and DNase-Seq data, we discover many motifs, including some novel and infrequent motifs that can only be discovered by using the entire dataset. Conservation analysis of one of these novel infrequent motifs confirms that it is evolutionarily conserved and possibly functional. Availability and implementation: All source code is available at the Github repository http://github.com/uci-cbcl/EXTREME. Contact: xhx@ics.uci.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24532725
IMPROVED RANDOMIZED ALGORITHM FOR THE EQUIVALENT 2-CATALOG SEGMENTATION PROBLEM
(no author listed)
2005-01-01
An improved randomized algorithm for the equivalent 2-catalog segmentation problem is presented. By analyzing this algorithm and its performance guarantee, the result obtained in this paper makes progress toward answering the open problem. A 0.6378-approximation for the equivalent 2-catalog segmentation problem is obtained.
Decoding Algorithms for Random Linear Network Codes
Heide, Janus; Pedersen, Morten Videbæk; Fitzek, Frank
2011-01-01
…achieve a high coding throughput, and reduce energy consumption. We use an on-the-fly version of the Gauss-Jordan algorithm as a baseline, and provide several simple improvements to reduce the number of operations needed to perform decoding. Our tests show that the improvements can reduce the number...
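The on-the-fly idea can be sketched as follows: each coded packet carries a coefficient vector, and rows are reduced as they arrive instead of after all packets are collected. This is a minimal illustration over GF(2) with invented names; the paper's operation-count optimizations are not reproduced.

```python
def incorporate(rows, coeff):
    """Reduce `coeff` (list of 0/1 bits) against the stored pivot rows of a
    reduced row-echelon matrix; store it as a new pivot row if innovative."""
    coeff = coeff[:]
    # forward-reduce against every existing pivot row
    for pivot, row in rows.items():
        if coeff[pivot]:
            coeff = [a ^ b for a, b in zip(coeff, row)]
    for i, bit in enumerate(coeff):
        if bit:  # first remaining 1 becomes a new pivot: packet is innovative
            # back-substitute into existing rows to keep the matrix reduced
            for p in rows:
                if rows[p][i]:
                    rows[p] = [a ^ b for a, b in zip(rows[p], coeff)]
            rows[i] = coeff
            return True
    return False  # reduced to zero: linearly dependent packet

rows = {}
incorporate(rows, [1, 1, 0])  # innovative
incorporate(rows, [0, 1, 1])  # innovative
incorporate(rows, [1, 0, 1])  # dependent: XOR of the first two
```

Decoding completes once the number of stored pivot rows equals the generation size.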
Algorithmic learning in a random world
Vovk, Vladimir; Shafer, Glenn
2005-01-01
A new scientific monograph developing significant new algorithmic foundations in machine learning theory. Researchers and postgraduates in CS, statistics, and A.I. will find the book an authoritative and formal presentation of some of the most promising theoretical developments in machine learning.
GPU implementations of online track finding algorithms at PANDA
Herten, Andreas; Stockmanns, Tobias; Ritman, James [Institut fuer Kernphysik, Forschungszentrum Juelich GmbH (Germany); Adinetz, Andrew; Pleiter, Dirk [Juelich Supercomputing Centre, Forschungszentrum Juelich GmbH (Germany); Kraus, Jiri [NVIDIA GmbH (Germany); Collaboration: PANDA-Collaboration
2014-07-01
The PANDA experiment is a hadron physics experiment that will investigate antiproton annihilation in the charm quark mass region. The experiment is now being constructed as one of the main parts of the FAIR facility. At an event rate of 2×10^7/s, a data rate of 200 GB/s is expected. A reduction of three orders of magnitude is required in order to save the data for further offline analysis. Since signal and background processes at PANDA have similar signatures, no hardware-level trigger is foreseen for the experiment; instead, a fast online event filter substitutes for it. We investigate the possibility of using graphics processing units (GPUs) for the online tracking part of this task. The researched algorithms are a Hough Transform, a track finder involving Riemann surfaces, and the novel, PANDA-specific Triplet Finder. This talk shows selected advances in the implementations as well as performance evaluations of the GPU tracking algorithms to be used at the PANDA experiment.
Following Car Algorithm With Multi Agent Randomized System
Mounir Gouiouez
2013-08-01
Full Text Available We present a new Following Car Algorithm for Microscopic Urban Traffic Models which integrates real-life factors that need to be considered, such as the effect of random distributions in car speed, acceleration, lane entry, etc. Our architecture is based on Multi-Agent Randomized Systems (MARS) developed in earlier publications.
Randomized algorithms in automatic control and data mining
Granichin, Oleg; Toledano-Kitai, Dvora
2015-01-01
In the fields of data mining and control, the huge amount of unstructured data and the presence of uncertainty in system descriptions have always been critical issues. The book Randomized Algorithms in Automatic Control and Data Mining introduces the readers to the fundamentals of randomized algorithm applications in data mining (especially clustering) and in automatic control synthesis. The methods proposed in this book guarantee that the computational complexity of classical algorithms and the conservativeness of standard robust control techniques will be reduced. It is shown that when a problem requires "brute force" in selecting among options, algorithms based on random selection of alternatives offer good results with certain probability for a restricted time and significantly reduce the volume of operations.
The Study on On-line Scheduling Algorithm of Imprecise Computation under Transient Overload
WEI Zhenhua; HONG Bingrong; QIAO Yongqiang; CAI Zesu; PENG Junjie
2003-01-01
Transient overload often occurs in real-time computer systems. An on-line scheduling algorithm for imprecise computation is proposed in this paper to deal with it. The algorithm guarantees an acceptable computation result under overload while improving computation precision as much as possible. The system model of imprecise computation and the on-line imprecise computation algorithm are elaborated, and the algorithm is shown to be correct by simulation.
The Study of Randomized Visual Saliency Detection Algorithm
Yuantao Chen
2013-01-01
Full Text Available High-quality image segmentation depends heavily on the underlying visual saliency metric: existing metrics mostly produce only a sketchy saliency map, and segmentation based on such rough maps suffers accordingly. This paper presents a randomized visual saliency detection algorithm. The randomized method can quickly generate a detailed saliency map of the same size as the original input image, and can be applied where real-time results are required, such as content-based image scaling. For fast randomized video saliency detection, the algorithm requires only a small amount of memory to produce a detailed visual saliency map. The presented results show that using this saliency map in subsequent image segmentation yields near-ideal segmentation results.
An online peak extraction algorithm for ion mobility spectrometry data.
Kopczynski, Dominik; Rahmann, Sven
2015-01-01
Ion mobility (IM) spectrometry (IMS), coupled with multi-capillary columns (MCCs), has been gaining importance for biotechnological and medical applications because of its ability to detect and quantify volatile organic compounds (VOC) at low concentrations in the air or in exhaled breath at ambient pressure and temperature. Ongoing miniaturization of spectrometers creates the need for reliable data analysis on-the-fly in small embedded low-power devices. We present the first fully automated online peak extraction method for MCC/IMS measurements consisting of several thousand individual spectra. Each individual spectrum is processed as it arrives, removing the need to store the measurement before starting the analysis, as is currently the state of the art. Thus the analysis device can be an inexpensive low-power system such as the Raspberry Pi. The key idea is to extract one-dimensional peak models (with four parameters) from each spectrum and then merge these into peak chains and finally two-dimensional peak models. We describe the different algorithmic steps in detail and evaluate the online method against state-of-the-art peak extraction methods.
A random search algorithm for cyclic delivery synchronization problem
Katarzyna Gdowska
2017-09-01
Full Text Available Background: The paper is devoted to the cyclic delivery synchronization problem with vehicles serving fixed routes. Each vehicle is assigned to a fixed route: a series of suppliers and logistic centers to be visited one after another. For each route the service frequency is fixed and known in advance. A vehicle loads at a supplier, then delivers goods to a logistic center and either loads other goods there and delivers them to the next logistic center along the route or proceeds to another logistic center. Each logistic center can belong to several routes, so goods delivered there by one vehicle depart for the next leg of their journey with another truck. The objective of this cyclic delivery synchronization problem is to maximize the total number of synchronizations of vehicle arrivals at logistic centers with their load times, so that arrivals can be organized in repeatable blocks. Methods: Based on the previously developed mathematical model for the cyclic delivery synchronization problem, we built a random search algorithm for it. The random heuristic search utilizes objective-oriented randomizing. The paper presents this newly-developed random search algorithm. Results: A computational experiment consisted of employing the newly-developed random search algorithm to solve a series of cyclic delivery synchronization problems. Results obtained with the algorithm were compared with solutions computed with the exact method. Conclusions: The newly-developed random search algorithm for the cyclic delivery synchronization problem gives results considerably close to those obtained with mixed-integer programming. The main advantage of the algorithm is the reduction of computing time, which is relevant for practical use, especially for large-sized problems.
Saltan, Fatih
2017-01-01
Online Algorithm Visualization (OAV) is one of the recent developments in the instructional technology field that aims to help students handle difficulties faced when they begin to learn programming. This study aims to investigate the effect of online algorithm visualization on students' achievement in the introduction to programming course. To…
Online Learning Algorithms for Stochastic Water-Filling
Gai, Yi
2011-01-01
Water-filling is the term for the classic solution to the problem of allocating constrained power to a set of parallel channels to maximize the total data-rate. It is used widely in practice, for example, for power allocation to sub-carriers in multi-user OFDM systems such as WiMax. The classic water-filling algorithm is deterministic and requires perfect knowledge of the channel gain to noise ratios. In this paper we consider how to do power allocation over stochastically time-varying (i.i.d.) channels with unknown gain to noise ratio distributions. We adopt an online learning framework based on stochastic multi-armed bandits. We consider two variations of the problem, one in which the goal is to find a power allocation to maximize $\\sum\\limits_i \\mathbb{E}[\\log(1 + SNR_i)]$, and another in which the goal is to find a power allocation to maximize $\\sum\\limits_i \\log(1 + \\mathbb{E}[SNR_i])$. For the first problem, we propose a \\emph{cognitive water-filling} algorithm that we call CWF1. We show that CWF1 obtai...
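For context, the deterministic baseline that the abstract builds on can be sketched as below: given perfect gain-to-noise ratios, power p_i = max(0, μ - 1/g_i) is assigned by finding the water level μ via bisection. The function name and tolerance are illustrative choices, not the paper's notation.

```python
def water_filling(gains, total_power, tol=1e-9):
    """Classic water-filling: allocate `total_power` across channels with
    gain-to-noise ratios `gains` to maximize sum(log(1 + g_i * p_i))."""
    # bisection on the water level mu; p_i = max(0, mu - 1/g_i)
    lo, hi = 0.0, total_power + max(1.0 / g for g in gains)
    while hi - lo > tol:
        mu = (lo + hi) / 2
        used = sum(max(0.0, mu - 1.0 / g) for g in gains)
        if used > total_power:
            hi = mu
        else:
            lo = mu
    return [max(0.0, lo - 1.0 / g) for g in gains]
```

Channels with better gain-to-noise ratios sit "lower" and collect more power; channels whose inverse gain exceeds the water level get none, which is the behavior the bandit-based variants in the paper must learn without knowing the gains.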
Preemptive Semi-online Algorithms for Parallel Machine Scheduling with Known Total Size
Yong HE; Hao ZHOU; Yi Wei JIANG
2006-01-01
This paper investigates preemptive semi-online scheduling problems on m identical parallel machines, where the total size of all jobs is known in advance. The goal is to minimize the maximum machine completion time or maximize the minimum machine completion time. For the first objective, we present an optimal semi-online algorithm with competitive ratio 1. For the second objective, we show that the competitive ratio of any semi-online algorithm is at least (2m-3)/(m-1) for any m > 2, and present optimal semi-online algorithms for m = 2, 3.
Radha, Mustafa; Garcia-Molina, Gary; Poel, Mannes; Tononi, Giulio
2014-01-01
Automatic sleep staging on an online basis has recently emerged as a research topic motivated by fundamental sleep research. The aim of this paper is to find optimal signal processing methods and machine learning algorithms to achieve online sleep staging on the basis of a single EEG signal. The classification performance obtained using six different EEG signals and various signal processing feature sets is compared using the kappa statistic which has very recently become popular in sleep staging research. A variable duration of the EEG segment (or epoch) to decide on the sleep stage is also analyzed. Spectral-domain, time-domain, linear, and nonlinear features are compared in terms of performance and two types of machine learning approaches (random forests and support vector machines) are assessed. We have determined that frontal EEG signals, with spectral linear features, epoch durations between 18 and 30 seconds, and a random forest classifier lead to optimal classification performance while ensuring real-time online operation.
Probabilistic Analysis of Random Extension-Rotation Algorithms
1981-10-01
Whitney matroid. Matroid theory (see [Tutte, 1971], [Lawler, 1976]) has applications to a wide class of combinatorial optimization problems: where we...observed that Posa's proof yields a polynomial time algorithm for constructing Hamiltonian paths in a random instance of Gn. Angluin and Valiant [1979]... Tutte, W.T., Introduction to the Theory of Matroids, American Elsevier, New York, 1971. Walkup, D.W., "On the expected value of a random assignment"
On-line Scheduling Algorithm for Penicillin Fed-batch Fermentation
XUE Yao-feng; YUAN Jing-qi
2005-01-01
An on-line scheduling algorithm to maximize gross profit of penicillin fed-batch fermentation is proposed. According to the on-line classification method, fed-batch fermentation batches are classified into three categories. Using the scheduling strategy, the optimal termination sequence of batches is obtained. Pseudo on-line simulations for testing the proposed algorithm with the data from industrial scale penicillin fermentation are carried out.
Miszczak, Jarosław Adam
2013-01-01
The presented package for the Mathematica computing system allows the harnessing of quantum random number generators (QRNG) for investigating the statistical properties of quantum states. The described package implements a number of functions for generating random states. The new version of the package adds the ability to use the on-line quantum random number generator service and implements new functions for retrieving lists of random numbers. Thanks to the introduced improvements, the new version provides faster access to high-quality sources of random numbers and can be used in simulations requiring large amount of random data. New version program summaryProgram title: TRQS Catalogue identifier: AEKA_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEKA_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 18 134 No. of bytes in distributed program, including test data, etc.: 2 520 49 Distribution format: tar.gz Programming language: Mathematica, C. Computer: Any supporting Mathematica in version 7 or higher. Operating system: Any platform supporting Mathematica; tested with GNU/Linux (32 and 64 bit). RAM: Case-dependent Supplementary material: Fig. 1 mentioned below can be downloaded. Classification: 4.15. External routines: Quantis software library (http://www.idquantique.com/support/quantis-trng.html) Catalogue identifier of previous version: AEKA_v1_0 Journal reference of previous version: Comput. Phys. Comm. 183(2012)118 Does the new version supersede the previous version?: Yes Nature of problem: Generation of random density matrices and utilization of high-quality random numbers for the purpose of computer simulation. Solution method: Use of a physical quantum random number generator and an on-line service providing access to the source of true random
The derivation of on-line algorithms, with an application to finding palindromes
Jeuring, J.T.
1994-01-01
A theory for the derivation of on-line algorithms is presented. The algorithms are derived in the Bird-Meertens calculus for program transformations. This calculus provides a concise functional notation for algorithms, and a few powerful theorems for proving equalities of functions. The
Limited Random Walk Algorithm for Big Graph Data Clustering
Zhang, Honglei; Kiranyaz, Serkan; Gabbouj, Moncef
2016-01-01
Graph clustering is an important technique to understand the relationships between the vertices in a big graph. In this paper, we propose a novel random-walk-based graph clustering method. The proposed method restricts the reach of the walking agent using an inflation function and a normalization function. We analyze the behavior of the limited random walk procedure and propose a novel algorithm for both global and local graph clustering problems. Previous random-walk-based algorithms depend on the chosen fitness function to find the clusters around a seed vertex. The proposed algorithm tackles the problem in an entirely different manner. We use the limited random walk procedure to find attracting vertices in a graph and use them as features to cluster the vertices. According to the experimental results on the simulated graph data and the real-world big graph data, the proposed method is superior to the state-of-the-art methods in solving graph clustering problems. Since the proposed method uses the embarrass...
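The inflation-plus-normalization idea behind restricting the walk's reach can be illustrated with a toy sketch: probability mass spreads one step at a time from a seed, and raising each entry to a power before renormalizing keeps mass from leaking across sparse cuts. All names and parameters below are invented for illustration and far simpler than the paper's procedure.

```python
def limited_random_walk(adj, seed, steps=10, inflation=2.0):
    """Toy limited random walk: propagate probability mass from `seed` over
    adjacency dict `adj`, then inflate and renormalize each step so that mass
    concentrates inside the seed's cluster."""
    p = {v: 0.0 for v in adj}
    p[seed] = 1.0
    for _ in range(steps):
        q = {v: 0.0 for v in adj}
        for v, mass in p.items():
            if mass == 0.0:
                continue
            share = mass / len(adj[v])   # uniform step to each neighbor
            for w in adj[v]:
                q[w] += share
        # inflation + normalization restrict the walk's reach
        total = sum(m ** inflation for m in q.values())
        p = {v: (m ** inflation) / total for v, m in q.items()}
    return p
```

On a graph made of two triangles joined by a single bridge edge, almost all mass stays in the seed's triangle, which is the "attracting vertices" effect the paper exploits as a clustering feature.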
Performance Characterization of Game Recommendation Algorithms on Online Social Network Sites
Philip Leroux; Bart Dhoedt; Piet Demeester; Filip De Turck
2012-01-01
Over the years, online social networks have evolved from profile and communication websites to online portals where people interact with each other, share and consume multimedia-enriched data, and play different types of games. Due to the immense popularity of these online games and their huge revenue potential, the number of these games increases every day, resulting in a current offering of thousands of online social games. In this paper, the applicability of neighborhood-based collaborative filtering (CF) algorithms for the recommendation of online social games is evaluated. This evaluation is based on a large dataset of an online social gaming platform containing game ratings (explicit data) and online gaming behavior (implicit data) of millions of active users. Several similarity metrics were implemented and evaluated on the explicit data, implicit data, and a combination thereof. It is shown that the neighborhood-based CF algorithms greatly outperform the content-based algorithm currently often used on online social gaming websites. The results also show that a combined approach, i.e., taking into account both implicit and explicit data at the same time, yields overall good results on all evaluation metrics for all scenarios, while only slightly performing worse compared to the strengths of the explicit-only or implicit-only approaches. The best performing algorithms have been implemented in a live setup of the online game platform.
An online algorithm for generating fractal hash chains applied to digital chains of custody
Bradford, Phillip G
2007-01-01
This paper gives an online algorithm for generating Jakobsson's fractal hash chains. Our new algorithm complements Jakobsson's fractal hash chain algorithm for preimage traversal, since his algorithm assumes the entire hash chain is precomputed and a particular list of Ceiling(log n) hash elements or pebbles is saved. Our online algorithm for hash chain traversal incrementally generates a hash chain of n hash elements without knowledge of n before it starts. For any n, our algorithm stores only the Ceiling(log n) pebbles which are precisely the inputs for Jakobsson's amortized hash chain preimage traversal algorithm. This compact representation is useful to generate, traverse, and store a number of large digital hash chains on a small and constrained device. We also give an application using both Jakobsson's and our new algorithm applied to digital chains of custody for validating dynamically changing forensics data.
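As a rough illustration of growing a hash chain while storing only logarithmically many values, the sketch below walks the chain h_i = H(h_{i-1}) in one pass, without knowing n in advance, keeping pebbles only at power-of-two indices. This is a simplification for intuition, not Jakobsson's exact pebble placement.

```python
import hashlib

def H(x: bytes) -> bytes:
    """One chain step: here SHA-256 stands in for the paper's hash function."""
    return hashlib.sha256(x).digest()

def extend_chain(seed: bytes, n: int):
    """Walk h_1..h_n from `seed`, storing O(log n) pebbles (at power-of-two
    indices) instead of the whole chain. Returns the chain tip and pebbles."""
    pebbles = {}
    h = seed
    for i in range(1, n + 1):
        h = H(h)
        if i & (i - 1) == 0:      # i is a power of two: keep as a pebble
            pebbles[i] = h
    return h, pebbles
```

For n = 10 this stores pebbles at indices 1, 2, 4, 8: four values instead of ten, and the count grows only logarithmically with the chain length.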
On-line least squares support vector machine algorithm in gas prediction
ZHAO Xiao-hu; WANG Gang; ZHAO Ke-ke; TAN De-jian
2009-01-01
Traditional coal mine safety prediction methods are off-line and do not have dynamic prediction functions. The Support Vector Machine (SVM) is a new machine learning algorithm with excellent properties, and the least squares support vector machine (LS-SVM) algorithm is an improved version of SVM. But the common LS-SVM algorithm, used directly in safety predictions, has some problems. We first studied gas prediction problems and the basic theory of LS-SVM. Given these problems, we investigated the effect of the time factor on safety prediction and present an on-line prediction algorithm based on LS-SVM. Finally, given our observed data, we used the on-line algorithm to predict gas emissions and used other related algorithms to compare its performance. The simulation results verify the validity of the new algorithm.
Stochastic geometry, spatial statistics and random fields models and algorithms
2015-01-01
Providing a graduate level introduction to various aspects of stochastic geometry, spatial statistics and random fields, this volume places a special emphasis on fundamental classes of models and algorithms as well as on their applications, for example in materials science, biology and genetics. This book has a strong focus on simulations and includes extensive codes in Matlab and R, which are widely used in the mathematical community. It can be regarded as a continuation of the recent volume 2068 of Lecture Notes in Mathematics, where other issues of stochastic geometry, spatial statistics and random fields were considered, with a focus on asymptotic methods.
俞琰; 邱广华
2013-01-01
Online social networks (OSNs) have become popular Internet platforms for user interaction and information sharing, and recommending friends to registered users is a crucial OSN service. On the one hand, OSNs often recommend friends based on local features of the social graph (i.e., the number of common friends two users share); this considers only paths of length 2 between users and does not exploit paths of other lengths or other information in the graph. On the other hand, global approaches to friend recommendation in OSNs detect the structure of the whole social graph, but their computation cost is quite high for large-scale OSNs. In this paper, we propose a new friend recommendation approach for OSNs which, based on the "small world" hypothesis, performs random walks over all paths of limited length, providing users with both fast and accurate friend recommendation. We evaluate the new method on datasets from two real online social networks. The experimental results show that the proposed method significantly increases the accuracy of friend recommendation in OSNs.
A random forest algorithm for nowcasting of intense precipitation events
Das, Saurabh; Chakraborty, Rohit; Maitra, Animesh
2017-09-01
Automatic nowcasting of convective initiation and thunderstorms has potential applications in several sectors including aviation planning and disaster management. In this paper, a random forest-based machine learning algorithm is tested for nowcasting of convective rain with a ground-based radiometer. Brightness temperatures measured at 14 frequencies (7 frequencies in the 22-31 GHz band and 7 frequencies in the 51-58 GHz band) are utilized as the inputs of the model. The lower frequency band is associated with water vapor absorption, whereas the upper frequency band relates to oxygen absorption; together they provide information on the temperature and humidity of the atmosphere. The synthetic minority over-sampling technique is used to balance the data set and 10-fold cross validation is used to assess the performance of the model. Results indicate that the random forest algorithm with a fixed alarm generation time of 30 min and 60 min performs quite well (probability of detection of all types of weather condition ∼90%) with low false alarms. It is, however, also observed that reducing the alarm generation time improves the threat score significantly and also decreases false alarms. The proposed model is found to be very sensitive to boundary layer instability, as indicated by the variable importance measure. The study shows the suitability of a random forest algorithm for nowcasting applications utilizing a large number of input parameters from diverse sources, and it can be utilized in other forecasting problems.
An improved multileaving algorithm for online ranker evaluation
Brost, Brian; Cox, Ingemar Johansson; Seldin, Yevgeny;
2016-01-01
Online ranker evaluation is a key challenge in information retrieval. An important task in the online evaluation of rankers is using implicit user feedback for inferring preferences between rankers. Interleaving methods have been found to be efficient and sensitive, i.e. they can quickly detect even...
Random Matrix Approach to Quantum Adiabatic Evolution Algorithms
Boulatov, Alexei; Smelyanskiy, Vadim N.
2004-01-01
We analyze the power of quantum adiabatic evolution algorithms (QAEA) for solving random NP-hard optimization problems within a theoretical framework based on random matrix theory (RMT). We present two types of driven RMT models. In the first model, the driving Hamiltonian is represented by Brownian motion in the matrix space. We use the Brownian motion model to obtain a description of multiple avoided crossing phenomena. We show that the failure mechanism of the QAEA is due to the interaction of the ground state with the "cloud" formed by all the excited states, confirming that in the driven RMT models the Landau-Zener mechanism of dissipation is not important. We show that the QAEA has a finite probability of success in a certain range of parameters, implying the polynomial complexity of the algorithm. The second model corresponds to the standard QAEA with the problem Hamiltonian taken from the Gaussian Unitary Ensemble (GUE) of RMT. We show that the level dynamics in this model can be mapped onto the dynamics in the Brownian motion model. However, the driven RMT model always leads to the exponential complexity of the algorithm due to the presence of long-range intertemporal correlations of the eigenvalues. Our results indicate that the weakness of effective transitions is the leading effect that can make the QAEA successful.
Combinatorial Approximation Algorithms for MaxCut using Random Walks
Kale, Satyen
2010-01-01
We give the first combinatorial approximation algorithm for MaxCut that beats the trivial 0.5 factor by a constant. The main partitioning procedure is very intuitive, natural, and easily described. It essentially performs a number of random walks and aggregates the information to provide the partition. We can control the running time to get an approximation factor-running time tradeoff. We show that for any constant b > 1.5, there is an O(n^b) algorithm that outputs a (0.5 + δ)-approximation for MaxCut, where δ = δ(b) is some positive constant. One of the components of our algorithm is a weak local graph partitioning procedure that may be of independent interest. Given a starting vertex i and a conductance parameter φ, unless a random walk of length ℓ = O(log n) starting from i mixes rapidly (in terms of φ and ℓ), we can find a cut of conductance at most φ close to the vertex. The work done per vertex found in the cut is sublinear in n.
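The "trivial 0.5 factor" referred to above comes from the baseline that assigns each vertex a uniformly random side, so every edge is cut with probability 1/2 and the expected cut is half the edge count. A minimal sketch of that baseline (names are illustrative):

```python
import random

def random_cut(n_vertices, edges, rng=random.Random(42)):
    """Trivial randomized MaxCut baseline: each vertex picks a side uniformly
    at random; each edge is cut with probability 1/2 in expectation."""
    side = [rng.random() < 0.5 for _ in range(n_vertices)]
    cut = sum(1 for u, v in edges if side[u] != side[v])
    return side, cut
```

Averaged over many trials on K4 (6 edges, optimum cut 4), the baseline cuts about 3 edges, i.e. half of them; the paper's contribution is a combinatorial algorithm that beats this 0.5 guarantee by a constant.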
Avancena, Aimee Theresa; Nishihara, Akinori; Vergara, John Paul
2012-01-01
This paper presents the online cognitive and algorithm tests, which were developed in order to determine if certain cognitive factors and fundamental algorithms correlate with the performance of students in their introductory computer science course. The tests were implemented among Management Information Systems majors from the Philippines and…
KeCo: Kernel-based online co-agreement algorithm
Wiel, L.; Heskes, T.; Levin, E.
2015-01-01
We propose a kernel-based online semi-supervised algorithm that is applicable for large scale learning tasks. In particular, we use a multi-view learning framework and a co-agreement strategy to take into account unlabelled data and to improve classification performance of the algorithm. Unlike the
On-line blind source separation algorithm based on second order statistics
(no author listed)
2005-01-01
An on-line blind source separation (BSS) algorithm is presented in this paper under the assumption that sources are temporarily correlated signals. By using only some of the observed samples in a recursive calculation, the whitening matrix and the rotation matrix could be approximately obtained through the measurement of only one cost function. Simulations show good performance of the algorithm.
Ma, Yao; Zhao, Tingting; Hatano, Kohei; Sugiyama, Masashi
2016-03-01
We consider the learning problem under an online Markov decision process (MDP) aimed at learning the time-dependent decision-making policy of an agent that minimizes the regret, i.e., the difference from the best fixed policy. The difficulty of online MDP learning is that the reward function changes over time. In this letter, we show that a simple online policy gradient algorithm achieves regret O(√T) for T steps under a certain concavity assumption and O(log T) under a strong concavity assumption. To the best of our knowledge, this is the first work to present an online MDP algorithm that can handle continuous state, action, and parameter spaces with guarantees. We also illustrate the behavior of the proposed online policy gradient method through experiments.
A least mean squares CUBIC algorithm for on-line differential of sampled analog signals
Allum, J. H. J.
1975-01-01
A digital computer algorithm is developed for on-line time differentiation of sampled analog voltage signals. The derivative is obtained by employing a least mean squares technique. The recursive algorithm results in a considerable reduction in computer time compared to a complete new solution of the normal equations each time a new data point is accepted. Implementation of the algorithm on a digital computer is discussed. Examples are simulated on a DEC PDP-8 computer.
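A first-order cousin of this idea, estimating the derivative as the slope of a least-squares straight line fitted over a sliding window of samples, shows the principle in a few lines (the paper fits a cubic and updates recursively; this sketch is deliberately simpler and non-recursive):

```python
def lms_derivative(samples, dt):
    """Estimate dx/dt at a window of uniformly sampled values `samples`
    (spacing `dt`) as the least-mean-squares straight-line slope."""
    n = len(samples)
    ts = [i * dt for i in range(n)]
    t_mean = sum(ts) / n
    x_mean = sum(samples) / n
    # closed-form slope of the least-squares line through (t_i, x_i)
    num = sum((t - t_mean) * (x - x_mean) for t, x in zip(ts, samples))
    den = sum((t - t_mean) ** 2 for t in ts)
    return num / den
```

Fitting over a window rather than differencing adjacent samples averages out measurement noise, which is the practical motivation for the least-squares approach; the recursive formulation in the paper avoids re-solving the normal equations at every new sample.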
Dynamically Predicting the Quality of Service: Batch, Online, and Hybrid Algorithms
Ya Chen
2017-01-01
Full Text Available This paper studies the problem of dynamically modeling the quality of web services and lays out a design philosophy for practical web service recommender systems. A general system architecture for such systems continuously collects user-service invocation records and includes both an online training module and an offline training module for quality prediction. In addition, we introduce matrix factorization-based online and offline training algorithms based on gradient descent and demonstrate the fitness of this online/offline algorithm framework to the proposed architecture. The superiority of the proposed model is confirmed by empirical studies on a real-life quality of web service data set and comparisons with existing web service recommendation algorithms.
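The online half of such a matrix-factorization pipeline reduces to one stochastic-gradient update of the latent factors per incoming invocation record. A minimal sketch, with invented names and hyperparameters (not the paper's algorithm):

```python
def sgd_mf_step(U, V, u, s, rating, lr=0.01, reg=0.1):
    """One online gradient update of user factors U[u] and service factors
    V[s] toward the observed quality value `rating`. Returns the prediction
    error before the update."""
    err = rating - sum(a * b for a, b in zip(U[u], V[s]))
    for k in range(len(U[u])):
        gu = err * V[s][k] - reg * U[u][k]   # gradient w.r.t. U[u][k]
        gv = err * U[u][k] - reg * V[s][k]   # gradient w.r.t. V[s][k]
        U[u][k] += lr * gu
        V[s][k] += lr * gv
    return err
```

Because each record touches only one user row and one service row, the update is O(latent dimension) per observation, which is what makes the online module cheap enough to run continuously alongside periodic offline retraining.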
A Practical Stemming Algorithm for Online Search Assistance.
Ulmschneider, John E.; Doszkocs, Tamas
1983-01-01
Describes a two-phase stemming algorithm which consists of word root identification and automatic selection of word variants starting with same word root from inverted file. Use of algorithm in book catalog file is discussed. Ten references and example of subject search are appended. (EJS)
Ma, Li; Fan, Suohai
2017-03-14
The random forests algorithm is a type of classifier with prominent universality, a wide application range, and robustness for avoiding overfitting. But there are still some drawbacks to random forests. Therefore, to improve the performance of random forests, this paper seeks to improve imbalanced data processing, feature selection and parameter optimization. We propose the CURE-SMOTE algorithm for the imbalanced data classification problem. Experiments on imbalanced UCI data reveal that the combination of Clustering Using Representatives (CURE) enhances the original synthetic minority oversampling technique (SMOTE) algorithms effectively compared with the classification results on the original data using random sampling, Borderline-SMOTE1, safe-level SMOTE, C-SMOTE, and k-means-SMOTE. Additionally, the hybrid RF (random forests) algorithm has been proposed for feature selection and parameter optimization, which uses the minimum out of bag (OOB) data error as its objective function. Simulation results on binary and higher-dimensional data indicate that the proposed hybrid RF algorithms, hybrid genetic-random forests algorithm, hybrid particle swarm-random forests algorithm and hybrid fish swarm-random forests algorithm can achieve the minimum OOB error and show the best generalization ability. The training set produced from the proposed CURE-SMOTE algorithm is closer to the original data distribution because it contains minimal noise. Thus, better classification results are produced from this feasible and effective algorithm. Moreover, the hybrid algorithm's F-value, G-mean, AUC and OOB scores demonstrate that they surpass the performance of the original RF algorithm. Hence, this hybrid algorithm provides a new way to perform feature selection and parameter optimization.
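Plain SMOTE interpolation, the building block that the CURE-SMOTE variant refines, can be sketched as below: each synthetic point lies on the segment between a minority sample and one of its k nearest minority neighbours. The names and the tiny k are illustrative, and the CURE clustering pre-step (which filters noise before interpolation) is omitted.

```python
import random

def smote_samples(minority, n_new, k=3, rng=random.Random(0)):
    """Generate `n_new` synthetic minority points (tuples) by interpolating
    between a random minority point and one of its k nearest neighbours."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    out = []
    for _ in range(n_new):
        p = rng.choice(minority)
        neigh = sorted((q for q in minority if q is not p),
                       key=lambda q: dist2(p, q))[:k]
        q = rng.choice(neigh)
        t = rng.random()  # interpolation coefficient in [0, 1)
        out.append(tuple(x + t * (y - x) for x, y in zip(p, q)))
    return out
```

Since every synthetic point is a convex combination of two real minority points, the oversampled set stays inside the minority class's local geometry; the paper's CURE pre-step aims to keep that geometry free of noisy outliers before interpolation.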
Wicks, Paul; Vaughan, Timothy E; Massagli, Michael P; Heywood, James
2011-05-01
Patients with serious diseases may experiment with drugs that have not received regulatory approval. Online patient communities structured around quantitative outcome data have the potential to provide an observational environment to monitor such drug usage and its consequences. Here we describe an analysis of data reported on the website PatientsLikeMe by patients with amyotrophic lateral sclerosis (ALS) who experimented with lithium carbonate treatment. To reduce potential bias owing to lack of randomization, we developed an algorithm to match 149 treated patients to multiple controls (447 total) based on the progression of their disease course. At 12 months after treatment, we found no effect of lithium on disease progression. Although observational studies using unblinded data are not a substitute for double-blind randomized control trials, this study reached the same conclusion as subsequent randomized trials, suggesting that data reported by patients over the internet may be useful for accelerating clinical discovery and evaluating the effectiveness of drugs already in use.
Guohua Zou
2016-12-01
New medical imaging technologies, such as Computed Tomography and Magnetic Resonance Imaging (MRI), have been widely used in all aspects of medical diagnosis. The purpose of these imaging techniques is to obtain comprehensive and accurate qualitative and quantitative data about the patient, and to provide correct digital information for diagnosis, treatment planning and post-surgical evaluation. MRI has good diagnostic advantages for brain diseases. However, as the requirements on brain image definition and quantitative analysis keep increasing, better segmentation of MR brain images is necessary. The FCM (Fuzzy C-means) algorithm is widely applied in image segmentation, but it has some shortcomings, such as long computation time and poor anti-noise capability. In this paper, firstly, the Ant Colony algorithm is used to determine the cluster centers and the number of clusters for the FCM algorithm so as to improve its running speed. Then an improved Markov random field model is used to improve the algorithm, so that its anti-noise ability is enhanced. Experimental results show that the algorithm put forward in this paper has obvious advantages in image segmentation speed and segmentation quality.
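For reference, the baseline FCM algorithm that the paper improves can be sketched on 1-D data in a few lines. This is only the standard membership/center update loop, under the assumption of two clusters; the ant-colony initialisation and the Markov random field step of the paper are not reproduced.

```python
def fcm(data, c=2, m=2.0, iters=50):
    """Plain fuzzy C-means on 1-D data (the baseline the paper improves;
    the ant-colony initialisation and MRF refinement are omitted)."""
    centers = [min(data), max(data)]          # crude init, works for c == 2
    for _ in range(iters):
        # membership degree of each point in each cluster
        u = []
        for x in data:
            d = [abs(x - ck) + 1e-12 for ck in centers]
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(c))
                      for i in range(c)])
        # new centers: membership-weighted means
        centers = [sum(u[n][i] ** m * data[n] for n in range(len(data)))
                   / sum(u[n][i] ** m for n in range(len(data)))
                   for i in range(c)]
    return centers

data = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
centers = sorted(fcm(data))
```

On this toy data the two centers converge near the two obvious groups.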
Improved online algorithms for parallel job scheduling and strip packing
Hurink, Johann L.; Paulus, J.J.
2011-01-01
In this paper we consider the online scheduling of jobs which require processing on a number of machines simultaneously. These jobs are presented to a decision maker one by one, where the next job becomes known as soon as the current job is scheduled. The objective is to minimize the makespan
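A simple greedy baseline for this setting, scheduling each arriving parallel job as soon as enough machines are free, can be sketched as follows. This is not the paper's improved algorithm, only an illustration of the model; the job encoding `(q, p)` (q machines needed simultaneously, processing time p) is an assumption.

```python
def greedy_schedule(jobs, k):
    """Greedy online scheduling of parallel jobs on k machines: a job (q, p)
    needs q machines simultaneously for p time units and is started as soon
    as q machines are free (an illustrative baseline, not the paper's
    improved algorithm)."""
    free = [0.0] * k          # next free time of each machine
    for q, p in jobs:
        free.sort()
        start = free[q - 1]   # the q earliest machines are all free by then
        for i in range(q):
            free[i] = start + p
    return max(free)          # makespan

makespan = greedy_schedule([(2, 3), (1, 2), (3, 1)], k=3)
```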
Chen, Limin; Liang, Yin; Wan, Guojin
2012-04-01
A regularization approach is introduced into the online identification of the inverse model for predistortion. It is based on a modified backpropagation Levenberg-Marquardt algorithm with a sliding window. Adaptive predistorters with feedback were identified based on both direct learning and indirect learning architectures. The length of the sliding window is discussed. Compared with the Recursive Prediction Error Method (RPEM) algorithm and the Nonlinear Filtered Least-Mean-Square (NFxLMS) algorithm, the algorithm is tested by identification of an infinite impulse response Wiener predistorter. It is found that the proposed algorithm is much more efficient than either of the other techniques. The values of the parameters are also smaller than those extracted by the ordinary least-squares algorithm, since the proposed algorithm constrains the L2-norm of the parameters.
Gordeev, V. F.; Kabanov, M. M.; Kapustin, S. N.
2017-04-01
In addition to preliminary surveying, estimating the stability of landslide slopes requires online real-time monitoring that can raise alerts about potential emergencies. Very-low-frequency monitoring data provided by an automated control system for geodynamic processes offer a solution to that problem. The authors describe the software and algorithms implemented for that system, draw conclusions on the efficiency of the applied solutions, and propose options for the further development of the online very-low-frequency monitoring system.
Conditional random pattern algorithm for LOH inference and segmentation.
Wu, Ling-Yun; Zhou, Xiaobo; Li, Fuhai; Yang, Xiaorong; Chang, Chung-Che; Wong, Stephen T C
2009-01-01
Loss of heterozygosity (LOH) is one of the most important mechanisms in tumor evolution. LOH can be detected from the genotypes of tumor samples with or without paired normal samples. In the paired-sample case, LOH detection for informative single nucleotide polymorphisms (SNPs) is straightforward if there is no genotyping error. But genotyping errors are unavoidable, and about 70% of SNPs are non-informative, so their LOH status can only be inferred from neighboring informative SNPs. This article presents a novel LOH inference and segmentation algorithm based on the conditional random pattern (CRP) model. The new model explicitly considers the distance between two neighboring SNPs, as well as the genotyping error rate and the heterozygosity rate. The new method is tested on simulated and real data from the Affymetrix Human Mapping 500K SNP arrays. The experimental results show that the CRP method outperforms conventional methods based on the hidden Markov model (HMM). Software is available upon request.
Qu Li
2014-01-01
Online friend recommendation is a fast-developing topic in web mining. In this paper, we use SVD matrix factorization to model user and item feature vectors and stochastic gradient descent to update the parameters and improve accuracy. To tackle the cold start problem and data sparsity, we use a KNN model to influence the user feature vectors. At the same time, we use graph theory to partition communities with fairly low time and space complexity. What is more, matrix factorization can combine online and offline recommendation. Experiments show that the hybrid recommendation algorithm is able to recommend online friends with good accuracy.
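The SVD-style factorization with stochastic gradient descent mentioned above can be sketched in pure Python. This is a generic latent-factor recommender, not the paper's exact model (the KNN influence and community partitioning are omitted), and all hyperparameters below are illustrative assumptions.

```python
import random

def factorize(ratings, n_users, n_items, k=2, lr=0.05, reg=0.01,
              epochs=500, seed=0):
    """Latent-factor model trained with stochastic gradient descent, in the
    spirit of the SVD-based recommendation step described above."""
    rng = random.Random(seed)
    P = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_users)]
    Q = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = sum(P[u][f] * Q[i][f] for f in range(k))
            err = r - pred
            for f in range(k):           # regularized SGD update
                pu, qi = P[u][f], Q[i][f]
                P[u][f] += lr * (err * qi - reg * pu)
                Q[i][f] += lr * (err * pu - reg * qi)
    return P, Q

ratings = [(0, 0, 5.0), (0, 1, 1.0), (1, 0, 4.0), (1, 1, 1.0)]
P, Q = factorize(ratings, n_users=2, n_items=2)
```

After training, the dot product of a user and item vector approximates the observed rating.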
Control and monitoring of on-line trigger algorithms using a SCADA system
van Herwijnen, E; Barczyk, A; Damodaran, B; Frank, M; Gaidioz, B; Gaspar, C; Jacobsson, R; Jost, B; Neufeld, N; Bonifazi, F; Callot, O; Lopes, H
2006-01-01
LHCb [1] has an integrated Experiment Control System (ECS) [2], based on the commercial SCADA system PVSS [3]. The novelty of this approach is that, in addition to the usual control and monitoring of experimental equipment, it provides control and monitoring for software processes, namely the on-line trigger algorithms. Algorithms based on Gaudi [4] (the LHCb software framework) compute the trigger decisions on an event filter farm of around 2000 PCs. Gaucho [5], the GAUdi Component Helping Online, was developed to allow the control and monitoring of Gaudi algorithms. Using Gaucho, algorithms can be monitored from the run control system provided by the ECS. To achieve this, Gaucho implements a hierarchical control system using Finite State Machines. In this article we describe the Gaucho architecture, the experience of monitoring a large number of software processes and some requirements for future extensions.
Inner Random Restart Genetic Algorithm for Practical Delivery Schedule Optimization
Sakurai, Yoshitaka; Takada, Kouhei; Onoyama, Takashi; Tsukamoto, Natsuki; Tsuruta, Setsuo
A delivery route optimization that improves the efficiency of real-time delivery or a distribution network requires solving Traveling Salesman Problems (TSPs) of several tens to hundreds, but fewer than two thousand, cities within interactive response time (less than about 3 seconds) and with expert-level accuracy (an error rate of less than about 3%). Further, to make things more difficult, the optimization is subject to special requirements or preferences of the various delivery sites, persons, or societies. To meet these requirements, an Inner Random Restart Genetic Algorithm (Irr-GA) is proposed and developed. This method combines meta-heuristics such as random restart and a GA with different types of simple heuristics, namely the 2-opt and NI (Nearest Insertion) methods, each applied in gene operations. The proposed method is hierarchically structured, integrating meta-heuristics and heuristics that are both multiple but simple. The method is elaborated so that field experts as well as field engineers can easily understand it, making the solution easy to customize and extend according to customers' needs or tastes. Comparison based on experimental results shows that the method meets the above requirements better than other methods, judging not only by optimality but also by simplicity, flexibility, and expandability for practical use.
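The 2-opt heuristic used inside the gene operations above is a standard local search that repeatedly reverses a tour segment whenever doing so shortens the tour. A minimal sketch (the GA wrapper and NI heuristic are not reproduced):

```python
import math

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def two_opt(tour, dist):
    """Repeatedly reverse a segment of the tour while that shortens it
    (the 2-opt move used inside Irr-GA's gene operations)."""
    improved = True
    while improved:
        improved = False
        n = len(tour)
        for i in range(n - 1):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue  # would swap the same pair of edges
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

# four corners of a unit square, visited in a crossing order
pts = [(0, 0), (1, 1), (1, 0), (0, 1)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
best = two_opt([0, 1, 2, 3], dist)
```

On this example 2-opt uncrosses the tour and reaches the optimal perimeter length 4.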
Lower bounds for 1-, 2- and 3-dimensional on-line bin packing algorithms
G. Galambos; A. van Vliet (André)
1994-01-01
In this paper we discuss lower bounds for the asymptotic worst-case ratio of on-line algorithms for different kinds of bin packing problems. Recently, Galambos and Frenk gave a simple proof of the 1.536 ... lower bound for the 1-dimensional bin packing problem. Following their ideas, we ...
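Lower bounds like the 1.536 ... above are measured against online heuristics such as First Fit, which places each arriving item into the first open bin with room. A minimal sketch:

```python
def first_fit(items, capacity=1.0):
    """First Fit online bin packing: put each arriving item into the first
    open bin that has room, opening a new bin otherwise.  A classic online
    algorithm against which such lower bounds are stated."""
    bins = []
    for item in items:
        for b in bins:
            if sum(b) + item <= capacity + 1e-12:
                b.append(item)
                break
        else:
            bins.append([item])
    return bins

bins = first_fit([0.5, 0.7, 0.5, 0.3])
```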
Optimal online algorithms for scheduling on two identical machines under a grade of service
[Author not listed]
2006-01-01
This work investigates the online scheduling problem on two parallel and identical machines with a new feature: service requests from various customers are entitled to different grade of service (GoS) levels. Each job and machine is labelled with a GoS level, and a job can be processed by a particular machine only when its GoS level is no less than that of the machine. The goal is to minimize the makespan. For the non-preemptive version, we propose an optimal online algorithm with competitive ratio 5/3. For the preemptive version, we propose an optimal online algorithm with competitive ratio 3/2.
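The GoS constraint can be illustrated with a simple greedy baseline (not the paper's optimal 5/3-competitive algorithm, whose load thresholds are more subtle). Here machine 1 has GoS level 1 and machine 2 level 2, and the job encoding `(p, g)` is an assumption for the sketch.

```python
def gos_greedy(jobs):
    """Greedy scheduling on two identical machines under GoS: machine 1
    has level 1, machine 2 has level 2, and a job (p, g) may only run on a
    machine whose level is at most g.  Illustrative baseline only."""
    load = [0.0, 0.0]
    for p, g in jobs:
        if g == 1:
            load[0] += p            # grade-1 jobs must use machine 1
        else:
            i = 0 if load[0] <= load[1] else 1
            load[i] += p            # grade-2 jobs go to the lighter machine
    return max(load)

makespan = gos_greedy([(2, 2), (2, 2), (3, 1)])
```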
Naparstek, Oshri; Leshem, Amir
2013-01-01
In this paper we analyze the expected time complexity of the auction algorithm for the matching problem on random bipartite graphs. We prove that the expected time complexity of the auction algorithm for bipartite matching is $O\\left(\\frac{N\\log^2(N)}{\\log\\left(Np\\right)}\\right)$ on sequential machines. This is equivalent to other augmenting path algorithms such as the HK algorithm. Furthermore, we show that the algorithm can be implemented on parallel machines with $O(\\log(N))$ processors an...
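For context, the augmenting-path family that the auction algorithm is compared with can be sketched via Kuhn's algorithm, the simpler O(VE) relative of Hopcroft-Karp. The auction mechanism itself (bids and prices) is not reproduced here.

```python
def max_matching(adj, n_left, n_right):
    """Maximum bipartite matching by augmenting paths (Kuhn's algorithm).
    adj[u] lists the right-side neighbours of left node u.  This is the
    simpler cousin of the HK algorithm mentioned above, shown only to
    illustrate the augmenting-path approach."""
    match_r = [-1] * n_right      # match_r[v] = left node matched to v

    def try_augment(u, seen):
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                # v is free, or its partner can be re-matched elsewhere
                if match_r[v] == -1 or try_augment(match_r[v], seen):
                    match_r[v] = u
                    return True
        return False

    return sum(try_augment(u, [False] * n_right) for u in range(n_left))

adj = [[0, 1], [0], [1]]
size = max_matching(adj, n_left=3, n_right=2)
```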
Golant Mitch
2011-08-01
Background: The Internet can increase access to psychosocial care for breast cancer survivors through online support groups. This study will test a novel prosocial online group that emphasizes both opportunities for getting and giving help. Based on the helper therapy principle, it is hypothesized that the addition of structured helping opportunities and coaching on how to help others online will increase the psychological benefits of a standard online group. Methods/Design: A two-armed randomized controlled trial with pretest and posttest. Non-metastatic breast cancer survivors with elevated psychological distress will be randomized to either a standard facilitated online group or to a prosocial facilitated online group, which combines online exchanges of support with structured helping opportunities (blogging, breast cancer outreach) and coaching on how best to give support to others. Validated and reliable measures will be administered to women approximately one month before and after the interventions. Self-esteem, positive affect, and sense of belonging will be tested as potential mediators of the primary outcomes of depressive/anxious symptoms and sense of purpose in life. Discussion: This study will test an innovative approach to maximizing the psychological benefits of cancer online support groups. The theory-based prosocial online support group intervention model is sustainable, because it can be implemented by private non-profit or other organizations, such as cancer centers, which mostly offer face-to-face support groups with limited patient reach. Trial Registration: ClinicalTrials.gov NCT01396174
Ilic, Velimir M; Todorovic, Branimir T; Stankovic, Miomir S
2010-01-01
The paper proposes a new recursive algorithm for the exact computation of the linear-chain conditional random field gradient. The algorithm is an instance of Entropy Message Passing (EMP), introduced in our previous work, and its purpose is to enhance memory efficiency when applied to long observation sequences. Unlike the traditional algorithm based on the forward and backward recursions, the memory complexity of our algorithm does not depend on the sequence length, while it has the same computational complexity as the standard algorithm.
A voltage resonance-based single-ended online fault location algorithm for DC distribution networks
JIA Ke; LI Meng; BI TianShu; YANG QiXun
2016-01-01
A novel single-ended online fault location algorithm is investigated for DC distribution networks. The proposed algorithm calculates the fault distance based on the characteristics of the voltage resonance. Prony's method is introduced to extract these characteristics. A novel method is proposed to solve the pseudo dual-root problem in the calculation process. Multiple data windows are adopted to enhance the robustness of the proposed algorithm. An index is proposed to evaluate the accuracy and validity of the results derived from the various data windows. The performance of the proposed algorithm in different fault scenarios was evaluated using PSCAD/EMTDC simulations. The results show that the algorithm can locate faults with transient resistance using 1.6 ms of DC-side voltage data after fault inception and offers good precision.
Ehwa Yang
2017-03-01
Due to the reasonably acceptable performance of state-of-the-art object detectors, tracking-by-detection is a standard strategy for visual multi-object tracking (MOT). In particular, online MOT is more demanding due to its diverse applications in time-critical situations. A main issue in realizing online MOT is how to associate noisy object detection results on a new frame with previously tracked objects. In this work, we propose a multi-object tracking method called CRF-boosting, which utilizes a hybrid data association method based on online hybrid boosting facilitated by a conditional random field (CRF). For data association, the learned CRF is used to generate reliable low-level tracklets, which are then used as the input of the hybrid boosting. Whereas existing boosting-based data association methods require training data with ground-truth information to improve robustness, CRF-boosting ensures sufficient robustness without such information thanks to its synergetic cascaded learning procedure. Further, a hierarchical feature association framework is adopted to further improve MOT accuracy. From experimental results on public datasets, we conclude that the proposed hybrid approach offers a noticeable benefit over other competitive MOT systems.
An Online Continuous Human Action Recognition Algorithm Based on the Kinect Sensor
Guangming Zhu
2016-01-01
Continuous human action recognition (CHAR) is more practical in human-robot interactions. In this paper, an online CHAR algorithm is proposed based on skeletal data extracted from RGB-D images captured by Kinect sensors. Each human action is modeled by a sequence of key poses and atomic motions in a particular order. In order to extract key poses and atomic motions, feature sequences are divided into pose feature segments and motion feature segments, using an online segmentation method based on potential differences of features. Likelihood probabilities that each feature segment can be labeled as one of the extracted key poses or atomic motions are computed in the online model matching process. An online classification method with a variable-length maximal entropy Markov model (MEMM) is performed based on the likelihood probabilities, for recognizing continuous human actions. The variable-length MEMM method ensures the effectiveness and efficiency of the proposed CHAR method. Compared with published CHAR methods, the proposed algorithm does not need to detect the start and end points of each human action in advance. The experimental results on public datasets show that the proposed algorithm is effective and highly efficient for recognizing continuous human actions.
Distributed Random Access Algorithm: Scheduling and Congestion Control
Jiang, Libin; Shin, Jinwoo; Walrand, Jean
2009-01-01
This paper provides proofs of the rate stability, Harris recurrence, and epsilon-optimality of CSMA algorithms where the backoff parameter of each node is based on its backlog. These algorithms require only local information and are easy to implement. The setup is a network of wireless nodes with a fixed conflict graph that identifies pairs of nodes whose simultaneous transmissions conflict. The paper studies two algorithms. The first algorithm schedules transmissions to keep up with given arrival rates of packets. The second algorithm controls the arrivals in addition to the scheduling and attempts to maximize the sum of the utilities of the flows of packets at the different nodes. For the first algorithm, the paper proves rate stability for strictly feasible arrival rates and also Harris recurrence of the queues. For the second algorithm, the paper proves the epsilon-optimality. Both algorithms operate with strictly local information in the case of decreasing step sizes, and operate with the additional info...
The Sexunzipped trial: optimizing the design of online randomized controlled trials.
Bailey, Julia V; Pavlou, Menelaos; Copas, Andrew; McCarthy, Ona; Carswell, Ken; Rait, Greta; Hart, Graham; Nazareth, Irwin; Free, Caroline; French, Rebecca; Murray, Elizabeth
2013-12-11
Sexual health problems such as unwanted pregnancy and sexually transmitted infection are important public health concerns and there is huge potential for health promotion using digital interventions. Evaluations of digital interventions are increasingly conducted online. Trial administration and data collection online offer many advantages, but concerns remain over fraudulent registration to obtain compensation, the quality of self-reported data, and high attrition. This study addresses the feasibility of several dimensions of online trial design: recruitment, online consent, participant identity verification, randomization and concealment of allocation, online data collection, data quality, and retention at 3-month follow-up. Young people aged 16 to 20 years and resident in the United Kingdom were recruited to the "Sexunzipped" online trial between November 2010 and March 2011 (n=2036). Participants filled in baseline demographic and sexual health questionnaires online and were randomized to the Sexunzipped interactive intervention website or to an information-only control website. Participants were also randomly allocated to a postal request (or no request) for a urine sample for genital chlamydia testing and receipt of a lower (£10/US$16) or higher (£20/US$32) value shopping voucher compensation for 3-month outcome data. The majority of the 2006 valid participants (90.98%, 1825/2006) were aged between 18 and 20 years at enrolment, from all four countries in the United Kingdom. Most were white (89.98%, 1805/2006), most were in school or training (77.48%, 1545/1994), and 62.81% (1260/2006) of the sample were female. In total, 3.88% (79/2036) of registrations appeared to be invalid and another 4.00% (81/2006) of participants gave inconsistent responses within the questionnaire. The higher value compensation (£20/US$32) increased response rates by 6-10%, boosting retention at 3 months to 77.2% (166/215) for submission of online self-reported sexual health
Halperin, S.; Zwick, U. [Tel Aviv Univ. (Israel)
1996-12-31
We present the first randomized O(log n) time and O(m + n) work EREW PRAM algorithm for finding a spanning forest of an undirected graph G = (V, E) with n vertices and m edges. Our algorithm is optimal with respect to time, work and space. As a consequence we get optimal randomized EREW PRAM algorithms for other basic connectivity problems such as finding a bipartite partition, finding bridges and biconnected components, and finding Euler tours in Eulerian graphs. For other problems such as finding an ear decomposition, finding an open ear decomposition, finding a strong orientation, and finding an st-numbering we get optimal randomized CREW PRAM algorithms.
Evolutionary algorithm based offline/online path planner for UAV navigation.
Nikolos, I K; Valavanis, K P; Tsourveloudis, N C; Kostaras, A N
2003-01-01
An evolutionary algorithm based framework, a combination of modified breeder genetic algorithms incorporating characteristics of classic genetic algorithms, is utilized to design an offline/online path planner for unmanned aerial vehicle (UAV) autonomous navigation. The path planner calculates a curved path line with desired characteristics in a three-dimensional (3-D) rough terrain environment, represented using B-spline curves, with the coordinates of its control points being the evolutionary algorithm's artificial chromosome genes. Given a 3-D rough environment and assuming flight envelope restrictions, two problems are solved: i) UAV navigation using an offline planner in a known environment, and ii) UAV navigation using an online planner in a completely unknown environment. The offline planner produces a single B-spline curve that connects the starting and target points with a predefined initial direction. The online planner, based on the offline one, is fed on-board radar readings and gradually produces a smooth 3-D trajectory aiming at reaching a predetermined target in an unknown environment; the produced trajectory consists of smaller B-spline curves smoothly connected with each other. Both planners have been tested under different scenarios, and they have proven effective in guiding a UAV to its final destination, providing near-optimal curved paths quickly and efficiently.
Chen, Xinjia
2015-05-01
We consider the general problem of analysis and design of control systems in the presence of uncertainties. We treat uncertainties that affect a control system as random variables. The performance of the system is measured by the expectation of some derived random variables, which are typically bounded. We develop adaptive sequential randomized algorithms for estimating and optimizing the expectation of such bounded random variables with guaranteed accuracy and confidence level. These algorithms can be applied to overcome the conservatism and computational complexity in the analysis and design of controllers to be used in uncertain environments. We develop methods for investigating the optimality and computational complexity of such algorithms.
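The basic non-adaptive version of such randomized estimation can be sketched with the Hoeffding bound, which fixes the sample size needed for a given accuracy and confidence; the paper's adaptive sequential schemes need fewer samples on average. The toy "uncertain system" below is an invented stand-in.

```python
import math
import random

def estimate_mean(sample_fn, eps, delta, seed=0):
    """Estimate E[X] for a random variable bounded in [0, 1] to within eps
    with confidence 1 - delta, using the non-adaptive Hoeffding sample size
    n >= ln(2/delta) / (2 eps^2).  The paper's adaptive sequential schemes
    refine this."""
    n = math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))
    rng = random.Random(seed)
    return sum(sample_fn(rng) for _ in range(n)) / n

# toy uncertain system: the performance indicator is 1 exactly when a
# uniform disturbance stays below 0.3, so the true expectation is 0.3
est = estimate_mean(lambda rng: 1.0 if rng.random() < 0.3 else 0.0,
                    eps=0.02, delta=0.01)
```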
Phase Transitions in Sampling Algorithms and the Underlying Random Structures
Randall, Dana
Sampling algorithms based on Markov chains arise in many areas of computing, engineering and science. The idea is to perform a random walk among the elements of a large state space so that samples chosen from the stationary distribution are useful for the application. In order to get reliable results, we require the chain to be rapidly mixing, or quickly converging to equilibrium. For example, to sample independent sets in a given graph G, the so-called hard-core lattice gas model, we can start at any independent set and repeatedly add or remove a single vertex (if allowed). By defining the transition probabilities of these moves appropriately, we can ensure that the chain will converge to a useful distribution over the state space Ω. For instance, the Gibbs (or Boltzmann) distribution, parameterized by Λ > 0, is defined so that π(I) = Λ^{|I|}/Z, where Z = Σ_{J ∈ Ω} Λ^{|J|} is the normalizing constant known as the partition function. An interesting phenomenon occurs as Λ is varied. For small values of Λ, local Markov chains converge quickly to stationarity, while for large values, they are prohibitively slow. To see why, imagine the underlying graph G is a region of the Cartesian lattice. Large independent sets will dominate the stationary distribution π when Λ is sufficiently large, and yet it will take a very long time to move from an independent set lying mostly on the odd sublattice to one that is mostly even. This phenomenon is well known in the statistical physics community, and is characterized by a phase transition in the underlying model.
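The add-or-remove-a-vertex chain described above can be sketched as Glauber dynamics for the hard-core model. This is a minimal illustration on a tiny graph; it demonstrates the moves, not the mixing-time analysis.

```python
import random

def glauber_hardcore(edges, n, lam, steps, seed=0):
    """Glauber dynamics for the hard-core model: pick a random vertex; if no
    neighbour is occupied, occupy it with probability lam/(1+lam), otherwise
    leave/make it unoccupied.  The stationary law is pi(I) proportional to
    lam^|I| over independent sets I."""
    rng = random.Random(seed)
    nbrs = [set() for _ in range(n)]
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    occupied = set()
    for _ in range(steps):
        v = rng.randrange(n)
        if nbrs[v] & occupied:
            occupied.discard(v)   # a vertex next to an occupied one stays empty
        elif rng.random() < lam / (1.0 + lam):
            occupied.add(v)
        else:
            occupied.discard(v)
    return occupied

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
ind_set = glauber_hardcore(edges, n=4, lam=1.0, steps=1000)
```

Every state visited by the chain, including the returned one, is an independent set.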
Online Tracking Algorithms on GPUs for the P̅ANDA Experiment at FAIR
Bianchi, L.; Herten, A.; Ritman, J.; Stockmanns, T.; Adinetz, A.; Kraus, J.; Pleiter, D.
2015-12-01
P̅ANDA is a future hadron and nuclear physics experiment at the FAIR facility under construction in Darmstadt, Germany. In contrast to the majority of current experiments, P̅ANDA's strategy for data acquisition is based on event reconstruction from free-streaming data, performed in real time entirely by software algorithms using global detector information. This paper reports the status of the development of algorithms for the reconstruction of charged particle tracks, optimized for online data processing applications, using General-Purpose Graphics Processing Units (GPUs). Two track-finding algorithms, the Triplet Finder and the Circle Hough, are described, and details of their GPU implementations are highlighted. Average track reconstruction times of less than 100 ns are obtained running the Triplet Finder on state-of-the-art GPU cards. In addition, a proof-of-concept system for the dispatch of data to tracking algorithms using Message Queues is presented.
OxMaR: open source free software for online minimization and randomization for clinical trials.
O'Callaghan, Christopher A
2014-01-01
Minimization is a valuable method for allocating participants between the control and experimental arms of clinical studies. The use of minimization reduces differences that might arise by chance between the study arms in the distribution of patient characteristics such as gender, ethnicity and age. However, unlike randomization, minimization requires real time assessment of each new participant with respect to the preceding distribution of relevant participant characteristics within the different arms of the study. For multi-site studies, this necessitates centralized computational analysis that is shared between all study locations. Unfortunately, there is no suitable freely available open source or free software that can be used for this purpose. OxMaR was developed to enable researchers in any location to use minimization for patient allocation and to access the minimization algorithm using any device that can connect to the internet such as a desktop computer, tablet or mobile phone. The software is complete in itself and requires no special packages or libraries to be installed. It is simple to set up and run over the internet using online facilities which are very low cost or even free to the user. Importantly, it provides real time information on allocation to the study lead or administrator and generates real time distributed backups with each allocation. OxMaR can readily be modified and customised and can also be used for standard randomization. It has been extensively tested and has been used successfully in a low budget multi-centre study. Hitherto, the logistical difficulties involved in minimization have precluded its use in many small studies and this software should allow more widespread use of minimization which should lead to studies with better matched control and experimental arms. OxMaR should be particularly valuable in low resource settings.
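The minimization technique OxMaR implements can be sketched as a Pocock-Simon-style allocation rule; this is the general method, and OxMaR's own variant may differ in details. The data layout (arm -> list of patients, each a dict of factor levels) is an assumption for the sketch.

```python
import random

def minimize_allocate(new_patient, arms, history, factors,
                      rng=random.Random(0), p=0.8):
    """Pocock-Simon-style minimization: allocate the new patient to the arm
    that minimizes the total imbalance over the stratification factors,
    with probability p (else allocate at random to keep some randomness)."""
    def imbalance(arm):
        total = 0
        for f in factors:
            level = new_patient[f]
            counts = {a: sum(1 for q in history[a] if q[f] == level)
                      for a in arms}
            counts[arm] += 1          # hypothetically add the new patient
            total += max(counts.values()) - min(counts.values())
        return total

    best = min(arms, key=imbalance)
    return best if rng.random() < p else rng.choice(arms)

arms = ['A', 'B']
history = {'A': [{'sex': 'M'}, {'sex': 'M'}], 'B': []}
# with p=1.0 the allocation is deterministic, which makes the example testable
arm = minimize_allocate({'sex': 'M'}, arms, history, ['sex'], p=1.0)
```

With two males already in arm A, a new male is steered to arm B.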
Can WANG; Zi-yu GUAN; Chun CHEN; Jia-jun BU; Jun-feng WANG; Huai-zhong LIN
2009-01-01
Focused crawling is an important technique for topical resource discovery on the Web. The key issue in focused crawling is to prioritize uncrawled uniform resource locators (URLs) in the frontier to focus the crawling on relevant pages. Traditional focused crawlers mainly rely on content analysis; link-based techniques are not effectively exploited despite their usefulness. In this paper, we propose a new frontier prioritizing algorithm, namely the on-line topical importance estimation (OTIE) algorithm. OTIE combines link- and content-based analysis to evaluate the priority of an uncrawled URL in the frontier. We performed real crawling experiments over 30 topics selected from the Open Directory Project (ODP) and compared the harvest rate and target recall of four crawling algorithms: breadth-first, link-context-prediction, on-line page importance computation (OPIC) and our OTIE. Experimental results showed that OTIE significantly outperforms the other three algorithms on average target recall while maintaining an acceptable harvest rate. Moreover, OTIE is much faster than the traditional focused crawling algorithm.
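The frontier-prioritization idea can be sketched as a best-first crawl over a priority queue. OTIE's actual priority combines link- and content-based evidence; here `score`, the link graph, and the URL names are invented stand-ins, with no real network access.

```python
import heapq

def crawl(seed_urls, links, score, budget):
    """Best-first focused crawl over a known link graph: uncrawled URLs wait
    in a priority frontier and are fetched highest-score first.  `score` is
    a stand-in for OTIE's link/content-based priority."""
    frontier = [(-score(u), u) for u in seed_urls]
    heapq.heapify(frontier)
    visited = []
    seen = set(seed_urls)
    while frontier and len(visited) < budget:
        _, url = heapq.heappop(frontier)   # highest-priority URL
        visited.append(url)
        for nxt in links.get(url, []):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (-score(nxt), nxt))
    return visited

links = {'a': ['b', 'c'], 'c': ['d']}
scores = {'a': 1.0, 'b': 0.2, 'c': 0.9, 'd': 0.5}
order = crawl(['a'], links, scores.get, budget=3)
```

High-scoring pages ('c', then 'd') are fetched before the low-scoring 'b'.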
Online algorithms for scheduling with machine activation cost on two uniform machines
HAN Shu-guang; JIANG Yi-wei; HU Jue-liang
2007-01-01
In this paper we investigate a variant of the scheduling problem on two uniform machines with speeds 1 and s. For this problem, we are given two potential uniform machines with which to process a sequence of independent jobs. Machines need to be activated before starting to process, and each activated machine incurs a fixed machine activation cost. No machines are initially activated, and when a job is revealed, the algorithm has the option to activate new machines. The objective is to minimize the sum of the makespan and the machine activation cost. We design optimal online algorithms with competitive ratio (2s+1)/(s+1) for every s ≥ 1.
Research on a randomized real-valued negative selection algorithm
Anonymous
2006-01-01
A real-valued negative selection algorithm with a good mathematical foundation is presented to solve some of the drawbacks of previous approaches. Specifically, it can produce a good estimate of the optimal number of detectors needed to cover the non-self space, and the maximization of the non-self coverage is done through an optimization algorithm with proven convergence properties. Experiments are performed to validate the assumptions made while designing the algorithm and to evaluate its performance.
Anonymous
2002-01-01
Routing and wavelength assignment for online real-time multicast connection setup is a difficult task due to the dynamic change of availabilities of wavelengths on links and the consideration of wavelength conversion delay in WDM networks. This paper presents a distributed routing and wavelength assignment scheme for the setup of real-time multicast connections. It integrates routing and wavelength assignment as a single process, which greatly reduces the connection setup time. The proposed routing method is based on Prim's MST (Minimum Spanning Tree) algorithm and the K-restricted breadth-first search method, which can produce a sub-minimal cost tree under a given delay bound. The wavelength assignment uses the least-conversion and load balancing strategies. Simulation results show that the proposed algorithm is suitable for online multicast connection establishment in WDM networks.
An on-line algorithm for creating self-organizing fuzzy neural networks.
Leng, Gang; Prasad, Girijesh; McGinnity, Thomas Martin
2004-12-01
This paper presents a new on-line algorithm for creating a self-organizing fuzzy neural network (SOFNN) from sample patterns to implement a singleton or Takagi-Sugeno (TS) type fuzzy model. The SOFNN is based on ellipsoidal basis function (EBF) neurons consisting of a center vector and a width vector. New methods of the structure learning and the parameter learning, based on new adding and pruning techniques and a recursive on-line learning algorithm, are proposed and developed. A proof of the convergence of both the estimation error and the linear network parameters is also given in the paper. The proposed methods are very simple and effective and generate a fuzzy neural model with a high accuracy and compact structure. Simulation work shows that the SOFNN has the capability of self-organization to determine the structure and parameters of the network automatically.
Using Online Algorithms to Solve NP-Hard Problems More Efficiently in Practice
2007-12-01
[No abstract available; the indexed text contains only acknowledgment and reference fragments, naming advisor Stephen Smith, co-author Daniel Golovin, and committee members Avrim Blum, Carla Gomes, and John Hooker, and citing Matthew Streeter and Daniel Golovin, "Online algorithms for maximizing submodular functions," working paper, 2007.]
A Novel Algorithm of Quantum Random Walk in Server Traffic Control and Task Scheduling
Dong Yumin
2014-01-01
Full Text Available A quantum random walk optimization model and algorithm for network cluster server traffic control and task scheduling is proposed. In order to solve the problem of server load balancing, we research the distribution theory of energy fields in quantum mechanics and apply it to data clustering. We introduce the method of random walks and explain what a quantum random walk is. Here, we mainly study the standard model of the one-dimensional quantum random walk. For the data clustering problem in high-dimensional space, we can decompose one m-dimensional quantum random walk into m one-dimensional quantum random walks. At the end of the paper, we compare the quantum random walk optimization method with GA (genetic algorithm), ACO (ant colony optimization), and SAA (simulated annealing algorithm). At the same time, we demonstrate its validity and rationality by analog and simulation experiments.
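The standard one-dimensional quantum random walk mentioned above can be simulated directly by tracking coin-position amplitudes. This is the textbook Hadamard-coin walk, not the paper's optimization model:

```python
import math

def hadamard_walk(steps):
    """Discrete-time quantum random walk on the line with a Hadamard coin,
    starting at the origin in coin state |0>. Returns the probability
    distribution over positions -steps..steps."""
    n = 2 * steps + 1
    amp = [[0.0, 0.0] for _ in range(n)]  # amplitudes for coin states |0>, |1>
    amp[steps][0] = 1.0                   # walker at the origin
    s = 1 / math.sqrt(2)
    for _ in range(steps):
        new = [[0.0, 0.0] for _ in range(n)]
        for x in range(n):
            a0, a1 = amp[x]
            c0 = s * (a0 + a1)   # Hadamard coin flip
            c1 = s * (a0 - a1)
            if x + 1 < n:
                new[x + 1][0] += c0   # coin |0> shifts right
            if x - 1 >= 0:
                new[x - 1][1] += c1   # coin |1> shifts left
        amp = new
    return [a0 * a0 + a1 * a1 for a0, a1 in amp]
```

Unlike the classical walk, the distribution is asymmetric for this initial coin state and spreads linearly rather than diffusively.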
Time series online prediction algorithm based on least squares support vector machine
WU Qiong; LIU Wen-ying; YANG Yi-han
2007-01-01
Deficiencies of applying the traditional least squares support vector machine (LS-SVM) to time series online prediction were specified. According to the kernel function matrix's property and using the recursive calculation of block matrices, a new time series online prediction algorithm based on improved LS-SVM was proposed. The historical training results were fully utilized and the computing speed of LS-SVM was enhanced. Then, the improved algorithm was applied to time series online prediction. Based on the operational data provided by the Northwest Power Grid of China, the method was used in the transient stability prediction of the electric power system. The results show that, compared with the calculation time of the traditional LS-SVM (75-1600 ms), that of the proposed method in different time windows is 40-60 ms, and the prediction accuracy (normalized root mean squared error) of the proposed method is above 0.8. So the improved method is better than the traditional LS-SVM and more suitable for time series online prediction.
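The recursive block-matrix calculation referred to above can be illustrated by updating the inverse of a kernel matrix when one training sample is appended, via the Schur complement. A minimal sketch, assuming a symmetric positive-definite kernel matrix; the paper's full LS-SVM update also maintains the solution vector:

```python
import numpy as np

def grow_inverse(K_inv, b, c):
    """Given K_inv, the inverse of an n x n kernel matrix K, return the
    inverse of the (n+1) x (n+1) bordered matrix [[K, b], [b^T, c]] using
    the Schur complement, in O(n^2) instead of a fresh O(n^3) inversion."""
    b = b.reshape(-1, 1)
    Kb = K_inv @ b
    s = float(c - b.T @ Kb)              # Schur complement (scalar)
    top_left = K_inv + (Kb @ Kb.T) / s
    return np.block([[top_left,      -Kb / s],
                     [-Kb.T / s, np.array([[1.0 / s]])]])
```

Reusing the previous inverse in this way is what lets the online predictor keep up with streaming samples.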
TH-E-BRE-04: An Online Replanning Algorithm for VMAT
Ahunbay, E; Li, X [Medical College of Wisconsin, Milwaukee, WI (United States)]; Moreau, M [Elekta, Inc., Verona, WI (Italy)]
2014-06-15
Purpose: To develop a fast replanning algorithm based on segment aperture morphing (SAM) for online replanning of volumetric modulated arc therapy (VMAT) with flattening filtered (FF) and flattening filter free (FFF) beams. Methods: A software tool was developed to interface with a VMAT planning system (Monaco, Elekta), enabling the output of detailed beam/machine parameters of original VMAT plans generated based on planning CTs for FF or FFF beams. A SAM algorithm, previously developed for fixed-beam IMRT, was modified to correct for interfractional variations (e.g., setup error, organ motion and deformation) by morphing apertures based on the geometric relationship between the beam's eye view of the anatomy from the planning CT and that from the daily CT for each control point. The algorithm was tested using daily CTs acquired with an in-room CT during daily IGRT for representative prostate cancer cases along with their planning CTs. The algorithm allows for restricted MLC leaf travel distance between control points of the VMAT delivery to prevent SAM from increasing leaf travel, and therefore treatment delivery time. Results: The VMAT plans adapted to the daily CT by SAM were found to improve the dosimetry relative to the IGRT repositioning plans for both FF and FFF beams. For the adaptive plans, the changes in leaf travel distance between control points were < 1 cm for 80% of the control points with no restriction. When restricted to the original plans' maximum travel distance, the dosimetric effect was minimal. The adaptive plans were delivered successfully with similar delivery times as the original plans. The execution of the SAM algorithm took < 10 seconds. Conclusion: The SAM algorithm can quickly generate deliverable online-adaptive VMAT plans based on the anatomy of the day for both FF and FFF beams.
Wang, Liqiang; Liu, Zhen; Zhang, Zhonghua
2014-11-01
Stereo vision is key in visual measurement, robot vision, and autonomous navigation. Before using a stereo vision system, the intrinsic parameters of each camera and the external parameters of the system must be calibrated. In engineering, the intrinsic parameters remain unchanged after calibrating the cameras, but the positional relationship between the cameras can change because of vibration, knocks and pressure in the vicinity of railways or motor workshops. Especially for large baselines, even minute changes in translation or rotation can affect the epipolar geometry and scene triangulation to such a degree that the visual system becomes disabled. A technology for both real-time examination and on-line recalibration of the external parameters of a stereo system therefore becomes particularly important. This paper presents an on-line method for checking and recalibrating the positional relationship between stereo cameras. In epipolar geometry, the external parameters of the cameras can be obtained by factorization of the fundamental matrix. Thus, it offers a way to calculate the external camera parameters without any special targets. If the intrinsic camera parameters are known, the external parameters of the system can be calculated via a number of random matched points. The process is: (i) estimating the fundamental matrix via the feature point correspondences; (ii) computing the essential matrix from the fundamental matrix; (iii) obtaining the external parameters by decomposition of the essential matrix. In the step of computing the fundamental matrix, traditional methods are sensitive to noise and cannot ensure estimation accuracy. We consider the feature distribution in actual scene images and introduce a regional weighted normalization algorithm to improve the accuracy of the fundamental matrix estimation. In contrast to traditional algorithms, experiments on simulated data prove that the method improves estimation
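Step (iii) above, recovering the external parameters from the essential matrix, can be sketched with the standard SVD-based four-solution construction. Selecting the physically valid (R, t) pair requires the cheirality (points-in-front-of-both-cameras) test, which is omitted in this sketch:

```python
import numpy as np

def decompose_essential(E):
    """Decompose an essential matrix into the four candidate (R, t) pairs.
    E = [t]_x R up to scale; the correct pair is the one that places
    triangulated points in front of both cameras (cheirality test, not shown)."""
    U, _, Vt = np.linalg.svd(E)
    # enforce proper rotations (det = +1)
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    R1 = U @ W @ Vt
    R2 = U @ W.T @ Vt
    t = U[:, 2]          # translation direction; the scale is unrecoverable
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
```

Given a noise-free E built from a known rotation and translation, one of the four candidates reproduces E exactly.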
A novel quantum random number generation algorithm used by smartphone camera
Wu, Nan; Wang, Kun; Hu, Haixing; Song, Fangmin; Li, Xiangdong
2015-05-01
We study an efficient algorithm to extract quantum random numbers (QRN) from the raw data obtained by charge-coupled device (CCD) or complementary metal-oxide semiconductor (CMOS) based sensors, like the camera used in a commercial smartphone. Based on the NIST statistical test suite for random number generators, the proposed algorithm has a high QRN generation rate and high statistical randomness. This algorithm provides a simple, low-priced and reliable device as a QRN generator for quantum key distribution (QKD) or other cryptographic applications.
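The paper's extraction algorithm is not reproduced here, but the general idea of turning biased sensor noise into uniform bits can be sketched with the classic von Neumann extractor applied to least-significant pixel bits (both choices are illustrative assumptions, not the paper's method):

```python
def lsb_stream(pixels):
    """Least-significant bits of raw pixel values: the noise-dominated bits
    that carry the randomness in a raw sensor readout."""
    return [p & 1 for p in pixels]

def von_neumann(bits):
    """Classic von Neumann debiasing: read bits in pairs, output 0 for the
    pair (0,1), 1 for (1,0), and discard (0,0) and (1,1). For independent
    bits this removes any constant bias, at the cost of throughput."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)
    return out
```

Real extractors used for QRN generation (e.g. hashing-based ones) achieve higher rates, but the pair-discarding rule is the simplest correct debiaser.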
A Randomized Controlled Trial of Online versus Clinic-Based CBT for Adolescent Anxiety
Spence, Susan H.; Donovan, Caroline L.; March, Sonja; Gamble, Amanda; Anderson, Renee E.; Prosser, Samantha; Kenardy, Justin
2011-01-01
Objective: The study examined the relative efficacy of online (NET) versus clinic (CLIN) delivery of cognitive behavior therapy (CBT) in the treatment of anxiety disorders in adolescents. Method: Participants included 115 clinically anxious adolescents aged 12 to 18 years and their parent(s). Adolescents were randomly assigned to NET, CLIN, or…
A Streaming Algorithm for Online Estimation of Temporal and Spatial Extent of Delays
Kittipong Hiriotappa
2017-01-01
Full Text Available Knowing traffic congestion and its impact on travel time in advance is vital for proactive travel planning as well as advanced traffic management. This paper proposes a streaming algorithm to estimate temporal and spatial extent of delays online which can be deployed with roadside sensors. First, the proposed algorithm uses streaming input from individual sensors to detect a deviation from normal traffic patterns, referred to as anomalies, which is used as an early indication of delay occurrence. Then, a group of consecutive sensors that detect anomalies are used to temporally and spatially estimate extent of delay associated with the detected anomalies. Performance evaluations are conducted using a real-world data set collected by roadside sensors in Bangkok, Thailand, and the NGSIM data set collected in California, USA. Using NGSIM data, it is shown qualitatively that the proposed algorithm can detect consecutive occurrences of shockwaves and estimate their associated delays. Then, using a data set from Thailand, it is shown quantitatively that the proposed algorithm can detect and estimate delays associated with both recurring congestion and incident-induced nonrecurring congestion. The proposed algorithm also outperforms the previously proposed streaming algorithm.
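The detection step, flagging deviations from normal traffic patterns at individual sensors and grouping consecutive anomalous sensors into a spatial extent, can be sketched as follows. The z-score rule and threshold are illustrative assumptions, not the paper's detector:

```python
import statistics

def detect_anomalies(history, current, threshold=3.0):
    """Flag sensors whose current travel-time reading deviates from their
    historical mean by more than `threshold` standard deviations.
    history: dict sensor_id -> list of past readings (the normal pattern).
    current: dict sensor_id -> latest reading."""
    flags = {}
    for sid, past in history.items():
        mu = statistics.mean(past)
        sd = statistics.stdev(past)
        flags[sid] = abs(current[sid] - mu) > threshold * sd
    return flags

def spatial_extent(order, flags):
    """Group consecutive anomalous sensors along the road (given in `order`)
    into runs; each run approximates the spatial extent of one delay."""
    runs, run = [], []
    for sid in order:
        if flags.get(sid):
            run.append(sid)
        elif run:
            runs.append(run)
            run = []
    if run:
        runs.append(run)
    return runs
```

Tracking how long each run persists would give the temporal extent of the delay.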
Radha, Mustafa; Garcia-Molina, Gary; Poel, Mannes; Tononi, Giulio
2014-01-01
Automatic sleep staging on an online basis has recently emerged as a research topic motivated by fundamental sleep research. The aim of this paper is to find optimal signal processing methods and machine learning algorithms to achieve online sleep staging on the basis of a single EEG signal.
Analysis of an on-line algorithm for solving large Markov chains
Litvak, Nelly; Robert, Philippe
2008-01-01
Algorithms for ranking of web pages such as Google Page-Rank assign importance scores according to a stationary distribution of a Markov random walk on the web graph. Although in the classical search scheme the ranking scores are pre-computed off-line, several challenging problems in contemporary we
Composite Chaotic Pseudo-Random Sequence Encryption Algorithm for Compressed Video
袁春; 钟玉琢; 杨士强
2004-01-01
Stream cryptosystems, which implement encryption by selecting parts of the block data and header information of the compressed video stream, achieve good real-time encryption with high flexibility. Chaotic random number generator-based approaches, for example logistic maps, are comparatively promising approaches, but are vulnerable to attacks by nonlinear dynamic forecasting. A composite chaotic cryptography scheme was developed to encrypt the compressed video using the logistic map combined with a Z(2^31 - 1) field linear congruential algorithm to strengthen the security of the mono-chaotic cryptography. The scheme maintained the real-time performance and flexibility of the chaotic sequence cryptography. The scheme also integrated asymmetrical public-key cryptography and encryption and identity authentication of control parameters at the initialization phase. Encryption is performed in a layered scheme based on the importance of the data in a compressed video stream. The composite chaotic cryptography scheme has the advantage that the value and updating frequency of the control parameters can be changed online to satisfy the network requirements and the processor capability, as well as the security requirements. Cryptanalysis shows that the scheme guarantees robust security, provides good real-time performance, and has flexible implementation. Statistical evaluations and tests verify that the scheme is effective.
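A composite keystream of the kind described, a logistic map combined with a linear congruential layer over Z(2^31 - 1), can be sketched as follows. The constants and the byte-extraction rule are illustrative assumptions, not the cited scheme's parameters:

```python
M = 2**31 - 1  # modulus of the linear congruential layer

def composite_keystream(x0, seed, n):
    """Generate n keystream bytes by combining a logistic-map orbit with a
    Lehmer-style linear congruential generator over Z(2^31 - 1)."""
    x, lcg = x0, seed
    out = []
    for _ in range(n):
        x = 3.99 * x * (1 - x)           # chaotic logistic map, x stays in (0, 1)
        lcg = (16807 * lcg) % M          # linear congruential layer
        chaotic_byte = int(x * 256) & 0xFF
        out.append(chaotic_byte ^ (lcg & 0xFF))
    return bytes(out)

def encrypt(data, x0=0.3141, seed=12345):
    """XOR plaintext bytes with the composite keystream. For simplicity all
    bytes are encrypted here; the scheme encrypts only selected header and
    block data of the video stream."""
    ks = composite_keystream(x0, seed, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))
```

Because the keystream depends only on (x0, seed), XORing twice with the same parameters restores the plaintext.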
Online distribution channel increases article usage on Mendeley: a randomized controlled trial.
Kudlow, Paul; Cockerill, Matthew; Toccalino, Danielle; Dziadyk, Devin Bissky; Rutledge, Alan; Shachak, Aviv; McIntyre, Roger S; Ravindran, Arun; Eysenbach, Gunther
2017-01-01
Prior research shows that article reader counts (i.e. saves) on the online reference manager, Mendeley, correlate to future citations. There are currently no evidenced-based distribution strategies that have been shown to increase article saves on Mendeley. We conducted a 4-week randomized controlled trial to examine how promotion of article links in a novel online cross-publisher distribution channel (TrendMD) affect article saves on Mendeley. Four hundred articles published in the Journal of Medical Internet Research were randomized to either the TrendMD arm (n = 200) or the control arm (n = 200) of the study. Our primary outcome compares the 4-week mean Mendeley saves of articles randomized to TrendMD versus control. Articles randomized to TrendMD showed a 77% increase in article saves on Mendeley relative to control. The difference in mean Mendeley saves for TrendMD articles versus control was 2.7, 95% CI (2.63, 2.77), and statistically significant (p Mendeley (Spearman's rho r = 0.60). This is the first randomized controlled trial to show how an online cross-publisher distribution channel (TrendMD) enhances article saves on Mendeley. While replication and further study are needed, these data suggest that cross-publisher article recommendations via TrendMD may enhance citations of scholarly articles.
Random forest algorithm for classification of multiwavelength data
Dan Gao; Yan-Xia Zhang; Yong-Heng Zhao
2009-01-01
We introduced a decision tree method called Random Forests for multiwavelength data classification. The data were adopted from different databases, including the Sloan Digital Sky Survey (SDSS) Data Release 5, USNO, FIRST and ROSAT. We then studied the discrimination of quasars from stars and the classification of quasars, stars and galaxies with the sample from optical and radio bands and with that from optical and X-ray bands. Moreover, feature selection and feature weighting based on Random Forests were investigated. The performances based on different input patterns were compared. The experimental results show that the random forest method is an effective method for astronomical object classification and can be applied to other classification problems faced in astronomy. In addition, Random Forests shows further advantages due to its own merits, e.g. feature selection, feature weighting as well as outlier detection.
An Event-Triggered Online Energy Management Algorithm of Smart Home: Lyapunov Optimization Approach
Wei Fan
2016-05-01
Full Text Available As an important component of the smart grid on the user side, a home energy management system is the core of optimal operation for a smart home. In this paper, the energy scheduling problem for a household equipped with photovoltaic devices was investigated. An online energy management algorithm based on event triggering was proposed. The Lyapunov optimization method was adopted to schedule controllable load in the household. Without forecasting related variables, real-time decisions were made based only on the current information. Energy could be rapidly regulated under the fluctuation of distributed generation, electricity demand and market price. The event-triggering mechanism was adopted to trigger the execution of the online algorithm, so as to cut down the execution frequency and unnecessary calculation. A comprehensive result obtained from simulation shows that the proposed algorithm could effectively decrease the electricity bills of users. Moreover, the required computational resource is small, which contributes to the low-cost energy management of a smart home.
Zou, Han; Lu, Xiaoxuan; Jiang, Hao; Xie, Lihua
2015-01-15
Nowadays, developing indoor positioning systems (IPSs) has become an attractive research topic due to the increasing demands on location-based service (LBS) in indoor environments. WiFi technology has been studied and explored to provide indoor positioning service for years in view of the wide deployment and availability of existing WiFi infrastructures in indoor environments. A large body of WiFi-based IPSs adopt fingerprinting approaches for localization. However, these IPSs suffer from two major problems: the intensive costs of manpower and time for offline site survey and the inflexibility to environmental dynamics. In this paper, we propose an indoor localization algorithm based on an online sequential extreme learning machine (OS-ELM) to address the above problems accordingly. The fast learning speed of OS-ELM can reduce the time and manpower costs for the offline site survey. Meanwhile, its online sequential learning ability enables the proposed localization algorithm to adapt in a timely manner to environmental dynamics. Experiments under specific environmental changes, such as variations of occupancy distribution and events of opening or closing of doors, are conducted to evaluate the performance of OS-ELM. The simulation and experimental results show that the proposed localization algorithm can provide higher localization accuracy than traditional approaches, due to its fast adaptation to various environmental dynamics.
Chakrabartty, Shantanu; Shaga, Ravi K; Aono, Kenji
2013-04-01
Analog circuits that are calibrated using digital-to-analog converters (DACs) use a digital signal processor-based algorithm for real-time adaptation and programming of system parameters. In this paper, we first show that this conventional framework for adaptation yields suboptimal calibration properties because of artifacts introduced by quantization noise. We then propose a novel online stochastic optimization algorithm called noise-shaping or ΣΔ gradient descent, which can shape the quantization noise out of the frequency regions spanning the parameter adaptation trajectories. As a result, the proposed algorithms demonstrate superior parameter search properties compared to floating-point gradient methods and better convergence properties than conventional quantized gradient-methods. In the second part of this paper, we apply the ΣΔ gradient descent algorithm to two examples of real-time digital calibration: 1) balancing and tracking of bias currents, and 2) frequency calibration of a band-pass Gm-C biquad filter biased in weak inversion. For each of these examples, the circuits have been prototyped in a 0.5-μm complementary metal-oxide-semiconductor process, and we demonstrate that the proposed algorithm is able to find the optimal solution even in the presence of spurious local minima, which are introduced by the nonlinear and non-monotonic response of calibration DACs.
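The noise-shaping idea can be illustrated in simplified scalar form: quantize each gradient sample coarsely (as a low-resolution DAC would) but feed the quantization error back into the next sample, i.e. first-order error feedback. This sketch is a behavioral illustration under those assumptions, not the paper's circuit-level ΣΔ algorithm:

```python
def quantized_gradient_descent(grad, x0, lr=0.1, q_step=0.5, iters=300):
    """Gradient descent with a coarsely quantized gradient and first-order
    noise shaping: the quantization error of each step is added back into
    the next gradient sample, pushing quantization noise out of the slowly
    varying band that the parameter trajectory occupies."""
    x, err = x0, 0.0
    for _ in range(iters):
        g = grad(x) + err                 # add back previous quantization error
        q = round(g / q_step) * q_step    # coarse quantizer (DAC resolution)
        err = g - q                       # error carried to the next sample
        x -= lr * q
    return x
```

Without the error feedback (err fixed at 0), small gradients always round to zero and the iterate stalls one quantization step away from the optimum; with it, the average of the quantized gradients tracks the true gradient.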
Ecodriver. D23.1: Report on test scenarios for validation of on-line vehicle algorithms
Seewald, P.; Ivens, T.W.T.; Spronkmans, S.
2014-01-01
This deliverable provides a description of test scenarios that will be used for validation of WP22’s on-line vehicle algorithms. These algorithms consist of the two modules VE³ (Vehicle Energy and Environment Estimator) and RSG (Reference Signal Genera-tor) and will be tested using the Matlab/Simuli
Collision-Resolution Algorithms and Random-Access Communications.
1980-04-01
there would be no wait required before they could proceed with the algorithm. Capetanakis called this scheme the parallel tree algorithm to contrast it...the number of additional slots that have already been allocated for further transmissions within the CRI in progress. The parallel tree algorithm
A Comparison of Online versus On-site Training in Health Research Methodology: A Randomized Study
Kanchanaraksa Sukon
2011-06-01
Full Text Available Abstract Background: Distance learning may be useful for building health research capacity. However, evidence that it can improve knowledge and skills in health research, particularly in resource-poor settings, is limited. We compared the impact and acceptability of teaching two distinct content areas, Biostatistics and Research Ethics, through either on-line distance learning format or traditional on-site training, in a randomized study in India. Our objective was to determine whether on-line courses in Biostatistics and Research Ethics could achieve similar improvements in knowledge, as traditional on-site, classroom-based courses. Methods: Subjects: Volunteer Indian scientists were randomly assigned to one of two arms. Intervention: Students in Arm 1 attended a 3.5-day on-site course in Biostatistics and completed a 3.5-week on-line course in Research Ethics. Students in Arm 2 attended a 3.5-week on-line course in Biostatistics and 3.5-day on-site course in Research Ethics. For the two course formats, learning objectives, course contents and knowledge tests were identical. Main Outcome Measures: Improvement in knowledge immediately and 3 months after course completion, compared to baseline. Results: Baseline characteristics were similar in both arms (n = 29 each). Median knowledge score for Biostatistics increased from a baseline of 49% to 64% (p Conclusion: On-line and on-site training formats led to marked and similar improvements of knowledge in Biostatistics and Research Ethics. This, combined with logistical and cost advantages of on-line training, may make on-line courses particularly useful for expanding health research capacity in resource-limited settings.
A comparison of online versus on-site training in health research methodology: a randomized study.
Aggarwal, Rakesh; Gupte, Nikhil; Kass, Nancy; Taylor, Holly; Ali, Joseph; Bhan, Anant; Aggarwal, Amita; Sisson, Stephen D; Kanchanaraksa, Sukon; McKenzie-White, Jane; McGready, John; Miotti, Paolo; Bollinger, Robert C
2011-06-17
Distance learning may be useful for building health research capacity. However, evidence that it can improve knowledge and skills in health research, particularly in resource-poor settings, is limited. We compared the impact and acceptability of teaching two distinct content areas, Biostatistics and Research Ethics, through either on-line distance learning format or traditional on-site training, in a randomized study in India. Our objective was to determine whether on-line courses in Biostatistics and Research Ethics could achieve similar improvements in knowledge, as traditional on-site, classroom-based courses. Volunteer Indian scientists were randomly assigned to one of two arms. Students in Arm 1 attended a 3.5-day on-site course in Biostatistics and completed a 3.5-week on-line course in Research Ethics. Students in Arm 2 attended a 3.5-week on-line course in Biostatistics and 3.5-day on-site course in Research Ethics. For the two course formats, learning objectives, course contents and knowledge tests were identical. Improvement in knowledge immediately and 3-months after course completion, compared to baseline. Baseline characteristics were similar in both arms (n = 29 each). Median knowledge score for Biostatistics increased from a baseline of 49% to 64% (p platforms for both Biostatistics (16% vs. 12%; p = 0.59) and Research Ethics (17% vs. 13%; p = 0.14). On-line and on-site training formats led to marked and similar improvements of knowledge in Biostatistics and Research Ethics. This, combined with logistical and cost advantages of on-line training, may make on-line courses particularly useful for expanding health research capacity in resource-limited settings.
Mohammad Mahdi Ebrahimi
2013-11-01
Full Text Available In this research, an artificial chattering-free adaptive fuzzy modified sliding mode control design and its application to a continuum robotic manipulator are proposed in order to obtain a high-performance nonlinear controller in the presence of uncertainties. By combining the strengths of the sliding mode controller, the fuzzy logic controller and the online tuning method, the output improves; each method added to the previous controller covers its weak points. The main target of this research is the design of a model-free on-line sliding mode fuzzy algorithm for a continuum robot manipulator that reaches an acceptable performance. Continuum robot manipulators are highly nonlinear, and a number of their parameters are uncertain; therefore, designing a model-free controller by both analytical and empirical paradigms is the main goal. Although classical sliding mode methodology has acceptable performance with known dynamic parameters, such as stability and robustness, it has two important disadvantages: the chattering phenomenon and the mathematical nonlinear dynamic equivalent controller part. To remove chattering, fuzzy logic inference is applied instead of the dead-zone function. To solve the equivalent-part problem of the classical sliding mode controller, this paper applies an on-line tuning method to the classical controller. This algorithm works very well in certain and uncertain environments. The performance of a sliding mode controller is sensitive to the sliding function; therefore, computing the optimum value of the sliding function for a system is the next challenge. This problem is solved by adjusting the sliding function continuously in real time with the on-line method. In this way, the overall system performance improves with respect to the classical sliding mode controller. This controller resolves the chattering phenomenon as well as the mathematical nonlinear equivalent part by applying a modified PID supervisory method in the modified fuzzy sliding mode controller and
Quantization of Random Walks: Search Algorithms and Hitting Time
Santha, Miklos
Many classical search problems can be cast in the following abstract framework: Given a finite set X and a subset M ⊆ X of marked elements, detect if M is empty or not, or find an element in M if there is any. When M is not empty, a naive approach to the finding problem is to repeatedly pick a uniformly random element of X until a marked element is sampled. A more sophisticated approach might use a Markov chain, that is a random walk on the state space X in order to generate the samples. In that case the resources spent for previous steps are often reused to generate the next sample. Random walks also model spatial search in physical regions where the possible moves are expressed by the edges of some specific graph. The hitting time of a Markov chain is the number of steps necessary to reach a marked element, starting from the stationary distribution of the chain.
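The hitting time defined above is easy to estimate empirically for a given chain. A small sketch (the graph, start state, and marked set are illustrative; note the formal definition starts the walk from the stationary distribution, while this sketch measures the hitting time from a chosen start state):

```python
import random

def hitting_time(neighbors, marked, start, rng=random):
    """Number of random-walk steps from `start` until a marked element is hit.
    neighbors: dict node -> list of adjacent nodes (the chain's moves)."""
    steps, node = 0, start
    while node not in marked:
        node = rng.choice(neighbors[node])
        steps += 1
    return steps

def mean_hitting_time(neighbors, marked, start, trials=3000, seed=1):
    rng = random.Random(seed)
    return sum(hitting_time(neighbors, marked, start, rng)
               for _ in range(trials)) / trials
```

On the path 0-1-2 with node 2 marked, first-step analysis gives an expected hitting time of exactly 4 from node 0, which the simulation reproduces.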
THE DECISION OF THE OPTIMAL PARAMETERS IN MARKOV RANDOM FIELDS OF IMAGES BY GENETIC ALGORITHM
Anonymous
2000-01-01
This paper introduces the principle of genetic algorithm and the basic method of solving Markov random field parameters. Focusing on the shortcomings in present methods, a new method based on genetic algorithms is proposed to solve the parameters in the Markov random field. The detailed procedure is discussed. On the basis of the parameters solved by genetic algorithms, some experiments on classification of aerial images are given. Experimental results show that the proposed method is effective and the classification results are satisfactory.
A simple consensus algorithm for distributed averaging in random geographical networks
Mahdi Jalili
2012-09-01
Random geographical networks are realistic models of wireless sensor networks, which are used in many applications. Achieving average consensus is very important in sensor networks: the faster the consensus, the longer the sensors last, and thus the better the performance of the network. In this paper we compare the performance of a number of linear consensus algorithms applied to distributed averaging in random geographical networks. Interestingly, the simplest algorithm, where only the degree of the receiving node is needed for the averaging, had the best performance in terms of consensus time. Furthermore, we prove that the network has guaranteed convergence with this simple algorithm.
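The simplest algorithm mentioned, where a node needs only its own degree, can be sketched as each node replacing its value by the average of itself and its neighbors. One assumption in this sketch: on a regular graph this update matrix is doubly stochastic, so the values converge to the exact average; the paper analyzes the general random-geographical case.

```python
def consensus_step(values, neighbors):
    """One synchronous update: each node averages its own value with its
    neighbors' values, using only its own degree (weight 1/(deg_i + 1))."""
    return {
        i: (values[i] + sum(values[j] for j in nbrs)) / (len(nbrs) + 1)
        for i, nbrs in neighbors.items()
    }

def run_consensus(values, neighbors, iters=100):
    for _ in range(iters):
        values = consensus_step(values, neighbors)
    return values
```

On a 4-node cycle (2-regular, hence doubly stochastic weights) all values converge geometrically to the global average.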
Neumann, Frank; Witt, Carsten
2015-01-01
combinatorial optimization problem, namely makespan scheduling. We study the model of a strong adversary which is allowed to change one job at regular intervals. Furthermore, we investigate the setting of random changes. Our results show that randomized local search and a simple evolutionary algorithm are very...
AI-BL1.0: a program for automatic on-line beamline optimization using the evolutionary algorithm.
Xi, Shibo; Borgna, Lucas Santiago; Zheng, Lirong; Du, Yonghua; Hu, Tiandou
2017-01-01
In this report, AI-BL1.0, an open-source Labview-based program for automatic on-line beamline optimization, is presented. The optimization algorithms used in the program are Genetic Algorithm and Differential Evolution. Efficiency was improved by use of a strategy known as Observer Mode for Evolutionary Algorithm. The program was constructed and validated at the XAFCA beamline of the Singapore Synchrotron Light Source and 1W1B beamline of the Beijing Synchrotron Radiation Facility.
Randomized Algorithms for Tracking Distributed Count, Frequencies, and Ranks
Huang, Zengfeng; Yi, Ke; Zhang, Qin
2011-01-01
We show that randomization can lead to significant improvements for a few fundamental problems in distributed tracking. Our basis is the count-tracking problem, where there are k players, each holding a counter n_i that gets incremented over time, and the goal is to track an ε-approximation of their sum n = Σ_i n_i continuously at all times, using minimum communication. While the deterministic communication complexity of the problem is Θ((k/ε) · log N), where N is the final value of n when the tracking finishes, we show that with randomization, the communication cost can...
The Quantitative Analysis of User Behavior Online - Data, Models and Algorithms
Raghavan, Prabhakar
By blending principles from mechanism design, algorithms, machine learning and massive distributed computing, the search industry has become good at optimizing monetization on sound scientific principles. This represents a successful and growing partnership between computer science and microeconomics. When it comes to understanding how online users respond to the content and experiences presented to them, we have more of a lacuna in the collaboration between computer science and certain social sciences. We will use a concrete technical example from image search results presentation, developing in the process some algorithmic and machine learning problems of interest in their own right. We then use this example to motivate the kinds of studies that need to grow between computer science and the social sciences; a critical element of this is the need to blend large-scale data analysis with smaller-scale eye-tracking and "individualized" lab studies.
OCR Post-Processing Error Correction Algorithm using Google Online Spelling Suggestion
Bassil, Youssef
2012-01-01
With the advent of digital optical scanners, many paper-based books, textbooks, magazines, articles, and documents are being transformed into electronic versions that can be manipulated by a computer. For this purpose, OCR, short for Optical Character Recognition, was developed to translate scanned graphical text into editable computer text. Unfortunately, OCR is still imperfect, as it occasionally mis-recognizes letters and falsely identifies scanned text, leading to misspellings and linguistic errors in the OCR output text. This paper proposes a post-processing, context-based error correction algorithm for detecting and correcting OCR non-word and real-word errors. The proposed algorithm is based on Google's online spelling suggestion, which harnesses an internal database containing a huge collection of terms and word sequences gathered from all over the web, making it convenient to suggest possible replacements for words that have been misspelled during the OCR process. Experiments carried out revealed a signific...
Wang, Donghao; Wan, Jiangwen; Chen, Junying; Zhang, Qiang
2016-09-22
To adapt to sensed signals of enormous diversity and dynamics, and to decrease the reconstruction errors caused by ambient noise, a novel online dictionary learning based compressive data gathering (ODL-CDG) algorithm is proposed. The dictionary is learned by a two-stage iterative procedure, alternating between a sparse coding step and a dictionary update step. The self-coherence of the learned dictionary is introduced as a penalty term during the dictionary update procedure, and the dictionary is also constrained to have sparse structure. It is theoretically demonstrated that the sensing matrix satisfies the restricted isometry property (RIP) with high probability. In addition, a lower bound on the number of measurements necessary for compressive sensing (CS) reconstruction is given. Simulation results show that the proposed ODL-CDG algorithm can enhance recovery accuracy in the presence of noise, and reduce energy consumption in comparison with other dictionary-based data gathering methods.
Ma, Tianren; Xia, Zhengyou
2017-05-01
Currently, with the rapid development of information technology, electronic media for social communication are becoming more and more popular. Community discovery is a very effective way to understand the properties of complex networks. However, traditional community detection algorithms consider only the structural characteristics of a social organization, wasting much of the information attached to nodes and edges; moreover, these algorithms do not judge each node on its merits. The label propagation algorithm (LPA) is a near-linear-time algorithm that aims to find communities in a network, and it attracts many scholars owing to its high efficiency. In recent years, several improved algorithms have been put forward based on LPA. In this paper, an improved LPA based on random walk and node importance (NILPA) is proposed. Firstly, a list of node importance is obtained through calculation, and the nodes in the network are sorted in descending order of importance. On the basis of random walks, a matrix is constructed to measure the similarity of nodes, which avoids the random choice in LPA. Secondly, a new metric, IAS (importance and similarity), is calculated from node importance and the similarity matrix; it is used to avoid the random selection in the original LPA and improve the algorithm's stability. Finally, tests on real-world and synthetic networks are given. The results show that this algorithm performs better than existing methods in finding community structure.
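The plain LPA that NILPA builds on can be sketched as follows; the deterministic tie-break and the two-clique test graph are illustrative simplifications (standard LPA breaks ties randomly, which is exactly the instability NILPA targets):

```python
from collections import Counter

def label_propagation(adj, max_iters=100):
    """Basic asynchronous LPA: each node repeatedly adopts the most common
    label among its neighbors. Ties are broken deterministically by the
    largest label to keep this sketch reproducible."""
    labels = {v: v for v in adj}
    for _ in range(max_iters):
        changed = False
        for v in sorted(adj):
            counts = Counter(labels[u] for u in adj[v])
            best = max(counts.values())
            new = max(l for l, c in counts.items() if c == best)
            if new != labels[v]:
                labels[v], changed = new, True
        if not changed:
            break
    return labels

def clique(vs):
    return {v: [u for u in vs if u != v] for v in vs}

# Two 4-cliques joined by a single bridge edge (3, 4): two communities.
adj = {**clique(range(4)), **clique(range(4, 8))}
adj[3].append(4)
adj[4].append(3)
```

On this graph the propagation settles into one label per clique, with the bridge edge unable to overturn the in-clique majority.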
REAL-TIME FACE TRACKING ALGORITHM BASED ON ONLINE INCREMENTAL LEARNING
包芳; 张炎凯; 王士同
2016-01-01
The paper proposes a real-time face tracking algorithm based on an online incremental extremely randomized forest classifier. The algorithm achieves detection-based real-time tracking using this classifier, and combines a dynamic target framework with P-N learning to correct detection errors. Experimental results show that the proposed algorithm can achieve fast and stable real-time tracking of an arbitrary face over a long period against uncertain backgrounds, and can effectively overcome interference such as background clutter.
Optimized quantum random-walk search algorithm for multi-solution search
张宇超; 鲍皖苏; 汪翔; 付向群
2015-01-01
This study investigates the multi-solution search of the optimized quantum random-walk search algorithm on the hypercube. By generalizing the abstract search algorithm, a general tool for analyzing search on graphs, to the multi-solution case, it can be applied directly to analyze the multi-solution behavior of quantum random-walk search on a graph. Thus, the computational complexity of the optimized quantum random-walk search algorithm for multi-solution search is obtained. Through numerical simulations and analysis, we obtain a critical value for the proportion of solutions q. For a given q, we derive the relationship between the success rate of the algorithm and the number of iterations when q is no larger than the critical value.
Randomized algorithms for tracking distributed count, frequencies, and ranks
Zengfeng, Huang; Ke, Yi; Zhang, Qin
2012-01-01
We show that randomization can lead to significant improvements for a few fundamental problems in distributed tracking. Our basis is the count-tracking problem, where there are k players, each holding a counter n_i that gets incremented over time, and the goal is to track an ε-approximation...
Heterogeneous Web Data Extraction Algorithm Based On Modified Hidden Conditional Random Fields
Cui Cheng
2014-01-01
As it is of great importance to extract useful information from heterogeneous Web data, in this paper we propose a novel heterogeneous Web data extraction algorithm using a modified hidden conditional random fields model. Considering that traditional linear-chain conditional random fields cannot effectively solve the problem of complex and heterogeneous Web data extraction, we modify the standard hidden conditional random fields in three aspects, which are 1) Using the hidden Markov mo...
Analysis of an iterated local search algorithm for vertex cover in sparse random graphs
Witt, Carsten
2012-01-01
Recently, various randomized search heuristics have been studied for the solution of the minimum vertex cover problem, in particular for sparse random instances according to the G(n,c/n) model, where c>0 is a constant. Methods from statistical physics suggest that the problem is easy if c... algorithm by Aronson et al. (1998) [1]. Subsequently, theoretical supplements are given to experimental studies of search heuristics on random graphs. For c...
Scheduling to minimize average completion time: Off-line and on-line algorithms
Hall, L.A. [Johns Hopkins Univ., Baltimore, MD (United States); Shmoys, D.B. [Cornell Univ., Ithaca, NY (United States); Wein, J. [Polytechnic Univ., Brooklyn, NY (United States)
1996-12-31
Time-indexed linear programming formulations have recently received a great deal of attention for their practical effectiveness in solving a number of single-machine scheduling problems. We show that these formulations are also an important tool in the design of approximation algorithms with good worst-case performance guarantees. We give simple new rounding techniques to convert an optimal fractional solution into a feasible schedule for which we can prove a constant-factor performance guarantee, thereby giving the first theoretical evidence of the strength of these relaxations. Specifically, we consider the problem of minimizing the total weighted job completion time on a single machine subject to precedence constraints, and give a polynomial-time (4 + ε)-approximation algorithm, for any ε > 0; the best previously known guarantee for this problem was superlogarithmic. With somewhat larger constants, we also show how to extend this result to the case with release date constraints, and still more generally, to the case with m identical parallel machines. We give two other techniques for problems in which there are release dates, but no precedence constraints: the first is based on other new LP rounding algorithms, whereas the second is a general framework for designing on-line algorithms to minimize the total weighted completion time.
Throughput Optimal On-Line Algorithms for Advanced Resource Reservation in Ultra High-Speed Networks
Cohen, Reuven; Starobinski, David
2007-01-01
Advanced channel reservation is emerging as an important feature of ultra high-speed networks requiring the transfer of large files. Applications include scientific data transfers and database backup. In this paper, we present two new, on-line algorithms for advanced reservation, called BatchAll and BatchLim, that are guaranteed to achieve optimal throughput performance, based on multi-commodity flow arguments. Both algorithms are shown to have polynomial-time complexity and provable bounds on the maximum delay for 1+epsilon bandwidth augmented networks. The BatchLim algorithm returns the completion time of a connection immediately as a request is placed, but at the expense of a slightly looser competitive ratio than that of BatchAll. We also present a simple approach that limits the number of parallel paths used by the algorithms while provably bounding the maximum reduction factor in the transmission throughput. We show that, although the number of different paths can be exponentially large, the actual numb...
Pardo, Juan; Zamora-Martínez, Francisco; Botella-Rocamora, Paloma
2015-04-21
Time series forecasting is an important predictive methodology which can be applied to a wide range of problems. In particular, forecasting the indoor temperature permits improved utilization of the HVAC (Heating, Ventilating and Air Conditioning) systems in a home and thus better energy efficiency. With this purpose, the paper describes how to implement an Artificial Neural Network (ANN) algorithm on a low-cost system-on-chip to develop an autonomous intelligent wireless sensor network. The paper uses a Wireless Sensor Network (WSN) to monitor and forecast the indoor temperature in a smart home, based on low-resource, low-cost microcontroller technology such as the 8051MCU. An online learning approach, based on the Back-Propagation (BP) algorithm for ANNs, has been developed for real-time time-series learning. It performs model training with every new data point that arrives at the system, without saving enormous quantities of data to create a historical database as usual, i.e., without previous knowledge. To validate the approach, a simulation study against a Bayesian baseline model was carried out, using a database from a real application, in order to assess performance and accuracy. The core of the paper is a new algorithm, based on the BP one, which is described in detail; the challenge was how to implement a computationally demanding algorithm on a simple architecture with very few hardware resources.
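The core idea, updating the model from each sample as it arrives and then discarding it, can be illustrated by the single-neuron special case of back-propagation (the LMS/delta rule); the target function, learning rate, and stream below are illustrative assumptions:

```python
def online_delta_rule(stream, lr=0.1):
    """Update a weight and bias from each (x, y) sample as it arrives;
    no sample is stored after its update (fits a small microcontroller)."""
    w, b = 0.0, 0.0
    for x, y in stream:
        err = y - (w * x + b)   # prediction error on the newest sample only
        w += lr * err * x       # gradient step for this one sample
        b += lr * err
    return w, b

# Illustrative noiseless sensor stream following y = 0.5 x + 0.2.
stream = (((t % 10) / 10, 0.5 * ((t % 10) / 10) + 0.2) for t in range(5000))
w, b = online_delta_rule(stream)
```

A full BP implementation adds hidden layers, but the per-sample update-and-discard pattern is the same.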
Space resection model calculation based on Random Sample Consensus algorithm
Liu, Xinzhu; Kang, Zhizhong
2016-03-01
Resection has long been one of the most important topics in photogrammetry. It aims to recover the position and attitude of the camera at the moment of exposure. In some cases, however, the observations used in the calculation contain gross errors. This paper presents a robust algorithm that uses the RANSAC method with a DLT model, effectively avoiding the difficulty of determining initial values when using the collinearity equations. The results also show that our strategy can exclude gross errors and leads to an accurate and efficient way to obtain the elements of exterior orientation.
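RANSAC's hypothesize-and-verify loop is easiest to see on a model simpler than space resection; this sketch fits a 2-D line, and the data, threshold, and iteration count are illustrative:

```python
import random

def ransac_line(points, n_iters=200, thresh=0.1, rng=random.Random(0)):
    """Repeatedly fit a line to a random minimal sample (2 points) and keep
    the model with the largest consensus set of inliers."""
    best_model, best_inliers = None, []
    for _ in range(n_iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue                      # skip degenerate vertical samples
        a = (y2 - y1) / (x2 - x1)         # slope of the candidate line
        b = y1 - a * x1                   # intercept
        inliers = [(x, y) for x, y in points
                   if abs(y - (a * x + b)) < thresh]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers
```

In the resection setting the minimal sample would instead be the point correspondences needed to solve the DLT model, but the consensus logic is identical.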
Zhou, Hua; Su, Yang; Wang, Rong; Zhu, Yong; Shen, Huiping; Pu, Tao; Wu, Chuanxin; Zhao, Jiyong; Zhang, Baofu; Xu, Zhiyong
2017-10-01
Online reconstruction of a time-variant quantum state from the encoding/decoding results of quantum communication is addressed by developing a method of evolution reconstruction from a single measurement record with random time intervals. A time-variant two-dimensional state is reconstructed on the basis of recovering its expectation value functions of three nonorthogonal projectors from a random single measurement record, which is composed of the discarded qubits of the six-state protocol. The simulation results show that our method is robust to typical metro quantum channels. Our work extends the Fourier-based method of evolution reconstruction from the version for a regular single measurement record with equal time intervals to a unified one, which can be applied to arbitrary single measurement records. The proposed protocol of evolution reconstruction runs concurrently with the one of quantum communication, which can facilitate online quantum tomography.
Two Factor Authentications Using One Time Random Password for Secure Online Transaction
G. Umamaheswari; Dr.A.Kangaiammal; K.K.Kavitha
2015-01-01
Secure transactions are essential for almost all online services. Such methodologies have generally been implemented in recent days using one-time passwords. A one-time password is a random password generated by the server and sent to the user for personal authentication. In contrast with the traditional approach, this work addresses the concept of two-factor authentication for accessing and approving the one-time password by the legitimate user. This works on all platforms an...
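A minimal server-side sketch of issuing and checking a single-use one-time password follows; the class name, 6-digit format, and in-memory store are illustrative assumptions (a real deployment adds expiry, rate limiting, and an out-of-band delivery channel):

```python
import hmac
import secrets

class OtpServer:
    """Issue a 6-digit one-time password and accept it at most once."""
    def __init__(self):
        self._pending = {}

    def issue(self, user):
        otp = f"{secrets.randbelow(10**6):06d}"   # uniform 6-digit code
        self._pending[user] = otp
        return otp                                # would be sent out-of-band

    def verify(self, user, attempt):
        expected = self._pending.pop(user, None)  # pop first: single use
        return expected is not None and hmac.compare_digest(expected, attempt)
```

`secrets` gives a cryptographically strong random code, `hmac.compare_digest` avoids timing side channels, and popping the stored code before comparison enforces one-time use.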
The General Principles of Randomized Algorithms
贺红; 马绍汉
2002-01-01
The last decade has witnessed tremendous growth in the area of randomized algorithms. During this period, randomized algorithms went from being a tool in computational number theory to finding widespread application in many types of algorithms. Two benefits of randomization have spearheaded this growth: simplicity and speed. For many applications, a randomized algorithm is the simplest algorithm available, or the fastest, or both. A handful of general principles lie at the heart of almost all randomized algorithms, despite the multitude of areas in which they find application. We briefly survey these principles here in order to sketch a method of studying randomized algorithms.
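A classic example delivering both benefits is Freivalds' algorithm, which checks a claimed matrix product in O(n²) time per trial with one-sided error instead of recomputing the product in O(n³); a sketch:

```python
import random

def freivalds(A, B, C, trials=30, rng=random.Random(0)):
    """Probabilistic check that A @ B == C for square matrices. Each trial
    multiplies by a random 0/1 vector; a wrong C is caught with
    probability >= 1/2 per trial."""
    n = len(A)
    for _ in range(trials):
        r = [rng.randrange(2) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False          # a mismatch is a certificate of error
    return True                   # correct with probability >= 1 - 2**(-trials)
```

The algorithm is both simpler and asymptotically faster than any known deterministic verification, illustrating the simplicity-and-speed theme of the survey.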
The Value of Online Algorithms to Predict T-Cell Ligands Created by Genetic Variants.
van der Lee, Dyantha I; Pont, Margot J; Falkenburg, J H Frederik; Griffioen, Marieke
2016-01-01
Allogeneic stem cell transplantation can be a curative treatment for hematological malignancies. After HLA-matched allogeneic stem cell transplantation, beneficial anti-tumor immunity as well as detrimental side-effects can develop due to donor-derived T-cells recognizing polymorphic peptides that are presented by HLA on patient cells. Polymorphic peptides on patient cells that are recognized by specific T-cells are called minor histocompatibility antigens (MiHA), while the respective peptides in donor cells are allelic variants. MiHA can be identified by reverse strategies in which large sets of peptides are screened for T-cell recognition. In these strategies, selection of peptides by prediction algorithms may be relevant to increase the efficiency of MiHA discovery. We investigated the value of online prediction algorithms for MiHA discovery and determined the in silico characteristics of 68 autosomal HLA class I-restricted MiHA that have been identified as natural ligands by forward strategies in which T-cells from in vivo immune responses after allogeneic stem cell transplantation are used to identify the antigen. Our analysis showed that HLA class I binding was accurately predicted for 87% of MiHA of which a relatively large proportion of peptides had strong binding affinity (56%). Weak binding affinity was also predicted for a considerable number of antigens (31%) and the remaining 13% of MiHA were not predicted as HLA class I binding peptides. Besides prediction for HLA class I binding, none of the other online algorithms significantly contributed to MiHA characterization. Furthermore, we demonstrated that the majority of MiHA do not differ from their allelic variants in in silico characteristics, suggesting that allelic variants can potentially be processed and presented on the cell surface. In conclusion, our analyses revealed the in silico characteristics of 68 HLA class I-restricted MiHA and explored the value of online algorithms to predict T
Fan Wu
2017-01-01
Trajectory simplification has become a research hotspot since it plays a significant role in the data preprocessing, storage, and visualization of many offline and online applications, such as online maps, mobile health applications, and location-based services. Traditional heuristic-based algorithms utilize a greedy strategy to reduce time cost, leading to high approximation error. An Optimal Trajectory Simplification Algorithm based on Graph Model (OPTTS) is proposed in this paper to obtain the optimal solution. Both the min-# and min-ε problems are solved by the construction and regeneration of a breadth-first spanning tree and a shortest-path search on the directed acyclic graph (DAG). Although the proposed OPTTS algorithm obtains optimal simplification results, it is difficult to apply in real-time services due to its high time cost. Thus, a new Online Trajectory Simplification Algorithm based on Directed Acyclic Graph (OLTS) is proposed to deal with trajectory streams. The algorithm dynamically constructs the breadth-first spanning tree, minimizing approximation error in real time with real-time output. Experimental results show that OPTTS reduces the global approximation error by 82% compared to classical heuristic methods, while OLTS reduces the error by 77% and is 32% faster than the traditional online algorithm. Both OPTTS and OLTS show leading superiority and stable performance on different datasets.
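The classical greedy heuristic that OPTTS and OLTS are compared against is typified by Douglas-Peucker simplification; a sketch (the tolerance and the sample trajectories are illustrative):

```python
def douglas_peucker(points, eps):
    """Classic greedy simplification: find the point farthest from the chord
    between the endpoints; if it is farther than eps, keep it and recurse."""
    def perp_dist(p, a, b):
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        if dx == 0 and dy == 0:
            return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
        # distance from p to the infinite line through a and b
        return abs(dy * px - dx * py + bx * ay - by * ax) / (dx * dx + dy * dy) ** 0.5

    if len(points) < 3:
        return list(points)
    a, b = points[0], points[-1]
    idx, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        d = perp_dist(points[i], a, b)
        if d > dmax:
            idx, dmax = i, d
    if dmax <= eps:
        return [a, b]
    left = douglas_peucker(points[: idx + 1], eps)
    right = douglas_peucker(points[idx:], eps)
    return left[:-1] + right
```

Because each split is chosen greedily, the result is generally not the minimum-error simplification, which is the gap the optimal graph-based approach closes.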
Mathieu, Claire; Schudy, Warren
2010-01-01
We study the online clustering problem where data items arrive in an online fashion. The algorithm maintains a clustering of data items into similarity classes. Upon arrival of v, the relation between v and previously arrived items is revealed, so that for each u we are told whether v is similar to u. The algorithm can create a new cluster for v and merge existing clusters. When the objective is to minimize disagreements between the clustering and the input, we prove that a natural greedy algorithm is O(n)-competitive, and this is optimal. When the objective is to maximize agreements between the clustering and the input, we prove that the greedy algorithm is .5-competitive; that no online algorithm can be better than .834-competitive; we prove that it is possible to get better than 1/2, by exhibiting a randomized algorithm with competitive ratio .5+c for a small positive fixed constant c.
The Application of Imperialist Competitive Algorithm for Fuzzy Random Portfolio Selection Problem
EhsanHesamSadati, Mir; Bagherzadeh Mohasefi, Jamshid
2013-10-01
This paper presents an implementation of the Imperialist Competitive Algorithm (ICA) for solving the fuzzy random portfolio selection problem, where the asset returns are represented by fuzzy random variables. Portfolio optimization is an important research field in modern finance. Using the necessity-based model, the fuzzy random variables are reformulated into a linear program, and ICA is designed to find the optimum solution. To show the efficiency of the proposed method, a numerical example illustrates the whole idea of the implementation of ICA for the fuzzy random portfolio selection problem.
Prediction of PKCθ Inhibitory Activity Using the Random Forest Algorithm
Shuwei Zhang
2010-09-01
This work is devoted to the prediction of a series of 208 structurally diverse PKCθ inhibitors using Random Forest (RF) based on the Mold2 molecular descriptors. The RF model was established and identified as a robust predictor of the experimental pIC50 values, producing a good external R²pred of 0.72 and a standard error of prediction (SEP) of 0.45 for an external prediction set of 51 inhibitors which were not used in the development of the QSAR models. By using the RF built-in measure of the relative importance of the descriptors, an important predictor, the number of group donor atoms for H-bonds (with N and O), has been identified to play a crucial role in PKCθ inhibitory activity. We hope that the developed RF model will be helpful in the screening and prediction of novel unknown PKCθ inhibitory activity.
Submicron structure random field on granular soil material with retinex algorithm optimization
Liang, Yu; Tao, Chenyuan; Zhou, Bingcheng; Huang, Shuai; Huang, Linchong
2017-06-01
In this paper, a Retinex scale-optimized image enhancement algorithm is proposed, which can enhance micro-vision images and eliminate the influence of uneven illumination. On that basis, a random geometric model of the microstructure of granular materials is established with the Monte-Carlo method, and a numerical simulation of the consolidation process of the granular materials is compared with experimental data. The results prove that the random field method with the Retinex image enhancement algorithm is effective: after using Retinex enhancement on the CT images, the images of the microstructure of the granular materials become clear and the contrast ratio is improved. The fidelity of the enhanced image is higher than that obtained with other methods, which shows that the algorithm preserves the microstructure information of the image well. The numerical simulation result is similar to that obtained from a conventional triaxial consolidation test, which proves that the simulation result is reliable.
Autoclassification of the Variable 3XMM Sources Using the Random Forest Machine Learning Algorithm
Farrell, Sean A; Lo, Kitty K
2015-01-01
In the current era of large surveys and massive data sets, autoclassification of astrophysical sources using intelligent algorithms is becoming increasingly important. In this paper we present the catalog of variable sources in the Third XMM-Newton Serendipitous Source catalog (3XMM) autoclassified using the Random Forest machine learning algorithm. We used a sample of manually classified variable sources from the second data release of the XMM-Newton catalogs (2XMMi-DR2) to train the classifier, obtaining an accuracy of ~92%. We also evaluated the effectiveness of identifying spurious detections using a sample of spurious sources, achieving an accuracy of ~95%. Manual investigation of a random sample of classified sources confirmed these accuracy levels and showed that the Random Forest machine learning algorithm is highly effective at automatically classifying 3XMM sources. Here we present the catalog of classified 3XMM variable sources. We also present three previously unidentified unusual sources that wer...
Naveed Khan
2016-10-01
In recent years, smart phones with inbuilt sensors have become popular devices for facilitating activity recognition. The sensors capture a large amount of data, containing meaningful events, in a short period of time. The change points in these data are used to mark transitions between distinct events, and can be used in various scenarios, such as identifying changes in a patient's vital signs in the medical domain, or requesting activity labels for generating real-world labeled activity datasets. Our work focuses on change-point detection to identify a transition from one activity to another. Within this paper, we extend our previous work on the multivariate exponentially weighted moving average (MEWMA) algorithm by using a genetic algorithm (GA) to identify the optimal set of parameters for online change-point detection. The proposed technique finds the maximum accuracy and F-measure by optimizing the different parameters of the MEWMA, which subsequently identifies the exact location of the change point from an existing activity to a new one. Optimal parameter selection enables the algorithm to detect accurate change points and minimize false alarms. Results have been evaluated on two real datasets of accelerometer data collected over a set of different activities from two users, with a high degree of accuracy, from 99.4% to 99.8%, and an F-measure of up to 66.7%.
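The detection principle can be sketched in the univariate special case of the MEWMA chart; the smoothing weight and control-limit multiplier below are illustrative (they are exactly the parameters the GA would tune), and the GA search itself is omitted:

```python
def ewma_change_detector(stream, mu=0.0, sigma=1.0, lam=0.2, L=4.0):
    """Univariate EWMA control chart (one-dimensional special case of MEWMA).
    Smooths the stream and raises an alarm when the smoothed value leaves
    the +/- L * sigma_z band around the in-control mean mu."""
    limit = L * sigma * (lam / (2.0 - lam)) ** 0.5   # asymptotic control limit
    z = mu
    for t, x in enumerate(stream):
        z = lam * x + (1.0 - lam) * z    # exponentially weighted smoothing
        if abs(z - mu) > limit:
            return t                     # index at which the change is flagged
    return None                          # no change detected
```

A small `lam` smooths aggressively (fewer false alarms, slower detection) while a large `lam` reacts faster; tuning this trade-off is what the GA automates.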
A Better Memoryless Online Algorithm for FIFO Buffering Packets with Two Values
Li, Fei
2010-01-01
We consider scheduling weighted packets in a capacity-bounded buffer. In this model, there is a buffer with a limited capacity B such that at any time, the buffer cannot accommodate more than B packets. Packets arrive over time. Each packet has a non-negative real value. Packets do not expire and they leave the buffer only because either we send them or we drop them. The packets that have left the buffer will not be reconsidered for delivery any more. In each time step, at most one packet in the buffer can be sent. The order in which the packets are sent should comply with the order of their arriving time. The objective is to maximize the total value of the packets sent in an online manner. In this paper, we study a variant of this model in which packets have value 1 or alpha > 1. We present a deterministic memoryless 1.305-competitive algorithm, improving the previously best known result 1.544 (Kesselman and Mansour. Journal of Algorithms 2003). In designing our algorithm, we apply a few new ideas. We do not...
Lancee, J.; van Straten, A.; Morina, N.; Kaldo, V.; Kamphuis, J.H.
2016-01-01
Study Objectives: To compare the efficacy of guided online and individual face-to-face cognitive behavioral treatment for insomnia (CBT-I) to a wait-list condition. Methods: A randomized controlled trial comparing three conditions: guided online; face-to-face; wait-list. Posttest measurements were a
On the Selection of Random Numbers in the ElGamal Algorithm
(no author listed)
2006-01-01
The ElGamal algorithm, which can be used for both signature and encryption, is of importance in public-key cryptosystems. However, an issue has arisen in that different criteria for selecting the random number are used for the same algorithm. In terms of the sufficiency, necessity, security, and computational overhead of parameter selection, this paper analyzes these criteria in a comparative manner and points out the insecurities in some textbook cryptographic schemes. Meanwhile, in order to enhance security, a novel generalization of the ElGamal signature scheme is made by expanding the range from which random numbers are selected, at an acceptable cost of additional computation, and its feasibility is demonstrated.
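The role of the per-signature random number k, and the usual selection criterion gcd(k, p-1) = 1, can be seen in a textbook ElGamal signature sketch (the tiny parameters are for illustration only; real deployments need large primes and a cryptographically strong source of k):

```python
import hashlib
import math
import random

p, g = 467, 2            # toy public parameters; far too small for real use
x = 127                  # private key
y = pow(g, x, p)         # public key

def h(msg):
    """Hash the message into Z_{p-1}, the exponent group."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % (p - 1)

def sign(msg, rng=random.Random(42)):
    """Textbook ElGamal signing; k must be fresh, secret, and coprime to p-1
    so that it is invertible modulo p-1."""
    while True:
        k = rng.randrange(2, p - 1)
        if math.gcd(k, p - 1) == 1:      # the selection criterion under discussion
            break
    r = pow(g, k, p)
    s = (h(msg) - x * r) * pow(k, -1, p - 1) % (p - 1)
    return r, s

def verify(msg, r, s):
    """Accept iff g^H(m) == y^r * r^s (mod p)."""
    return 0 < r < p and pow(g, h(msg), p) == pow(y, r, p) * pow(r, s, p) % p
```

Reusing or leaking k exposes the private key x, which is why the criteria for choosing k matter as much as its range.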
A Sweep-Plane Algorithm for Generating Random Tuples in Simple Polytopes
Leydold, Josef; Hörmann, Wolfgang
1997-01-01
A sweep-plane algorithm by Lawrence for convex polytope computation is adapted to generate random tuples on simple polytopes. In our method an affine hyperplane is swept through the given polytope until a random fraction (sampled from a proper univariate distribution) of the volume of the polytope is covered. Then the intersection of the plane with the polytope is a simple polytope with smaller dimension. In the second part we apply this method to construct a black-box algorithm for log-conca...
An efficient algorithm for generating random number pairs drawn from a bivariate normal distribution
Campbell, C. W.
1983-01-01
An efficient algorithm for generating random number pairs from a bivariate normal distribution was developed. Any desired values of the two means, two standard deviations, and correlation coefficient can be selected. Theoretically the technique is exact, and in practice its accuracy is limited only by the quality of the uniform random number generator, inaccuracies in computer function evaluation, and arithmetic. A FORTRAN routine was written to check the algorithm and good accuracy was obtained. Some small errors in the correlation coefficient were observed to vary in a surprisingly regular manner. A simple model was developed which explained the qualitative aspects of the errors.
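The standard construction behind such generators can be sketched as follows: two independent standard normals from the Box-Muller transform, with the correlation introduced by mixing them as rho*z1 + sqrt(1 - rho^2)*z2. This is a generic illustration of the technique, not the paper's FORTRAN routine.

```python
import math, random

def bivariate_normal_pair(mu1, mu2, s1, s2, rho, rng=random):
    """One (x, y) pair with means mu1/mu2, std devs s1/s2, correlation rho."""
    # Box-Muller: two independent standard normals from two uniforms
    u1 = 1.0 - rng.random()            # in (0, 1], avoids log(0)
    u2 = rng.random()
    z1 = math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)
    z2 = math.sqrt(-2.0 * math.log(u1)) * math.sin(2.0 * math.pi * u2)
    # Mixing z1 and z2 induces the desired correlation between x and y
    x = mu1 + s1 * z1
    y = mu2 + s2 * (rho * z1 + math.sqrt(1.0 - rho * rho) * z2)
    return x, y
```

Sampling many pairs and estimating the sample means and sample correlation recovers the requested parameters to within Monte Carlo error.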
A Random-Walk Based Privacy-Preserving Access Control for Online Social Networks
You-sheng Zhou
2016-02-01
Online social networks are popular tools for connecting with friends, sharing resources, etc. At the same time, they suffer from the problem of privacy exposure. Existing methods prevent exposure by enforcing access control on the side of either the social network provider or the social network user. However, such enforcement is impractical, since an essential goal of social network applications is to share updates freely and instantly. To improve both security and availability in social network applications, a novel random-walk-based access control scheme for social networks is proposed in this paper. Instead of the explicit attribute matching used in existing schemes, the results of random walks are employed to securely compute the L1 distance between two social network users, which not only avoids leaking private attributes but also enables each user to define an access control policy independently. Experimental results show that the proposed scheme can facilitate access control for online social networks.
A Quantum Adiabatic Evolution Algorithm Applied to Random Instances of an NP-Complete Problem
Farhi, E; Gutmann, S; Lapan, J; Lundgren, A; Preda, D; Farhi, Edward; Goldstone, Jeffrey; Gutmann, Sam; Lapan, Joshua; Lundgren, Andrew; Preda, Daniel
2001-01-01
A quantum system will stay near its instantaneous ground state if the Hamiltonian that governs its evolution varies slowly enough. This quantum adiabatic behavior is the basis of a new class of algorithms for quantum computing. We test one such algorithm by applying it to randomly generated, hard, instances of an NP-complete problem. For the small examples that we can simulate, the quantum adiabatic algorithm works well, and provides evidence that quantum computers (if large ones can be built) may be able to outperform ordinary computers on hard sets of instances of NP-complete problems.
Pontefisso, Alessandro; Zappalorto, Michele; Quaresimin, Marino
2016-01-01
In this work, a study of the Random Sequential Absorption (RSA) algorithm in the generation of nanoplatelet Volume Elements (VEs) is carried out. The effect of the algorithm input parameters on the reinforcement distribution is studied through the implementation of statistical tools, showing...... that the platelet distribution is systematically affected by these parameters. The consequence is that a parametric analysis of the VE input parameters may be biased by hidden differences in the filler distribution. The same statistical tools used in the analysis are implemented in a modified RSA algorithm...
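The core Random Sequential Adsorption loop can be sketched as follows, here for non-overlapping disks in the unit square rather than the paper's nanoplatelet volume elements: candidates are drawn uniformly and accepted only if they overlap nothing placed so far, which is exactly the mechanism through which the input parameters shape the resulting filler distribution. The parameter values are illustrative assumptions.

```python
import random

def rsa_disks(radius, n_target, max_attempts=20000, seed=1):
    """Random Sequential Adsorption of equal disks in the unit square.
    A candidate center is accepted only if it is at least 2*radius away
    from every previously placed center; stop at n_target disks or when
    the attempt budget is exhausted."""
    rng = random.Random(seed)
    placed = []
    for _ in range(max_attempts):
        if len(placed) >= n_target:
            break
        x = rng.uniform(radius, 1.0 - radius)
        y = rng.uniform(radius, 1.0 - radius)
        if all((x - px) ** 2 + (y - py) ** 2 >= (2 * radius) ** 2
               for px, py in placed):
            placed.append((x, y))
    return placed
```

Because later candidates are conditioned on earlier acceptances, statistics of the resulting packing depend systematically on radius, target count, and attempt budget, which is the bias the abstract analyzes.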
Fast vectorized algorithm for the Monte Carlo Simulation of the Random Field Ising Model
Rieger, H
1992-01-01
An algorithm for the simulation of the 3-dimensional random field Ising model with a binary distribution of the random fields is presented. It uses multi-spin coding and simulates 64 physically different systems simultaneously. On one processor of a Cray YMP it reaches a speed of 184 million spin updates per second. For smaller field strengths we present a version of the algorithm that can perform 242 million spin updates per second on the same machine.
Polan, Daniel F.; Brady, Samuel L.; Kaufman, Robert A.
2016-09-01
There is a need for robust, fully automated whole body organ segmentation for diagnostic CT. This study investigates and optimizes a Random Forest algorithm for automated organ segmentation; explores the limitations of a Random Forest algorithm applied to the CT environment; and demonstrates segmentation accuracy in a feasibility study of pediatric and adult patients. To the best of our knowledge, this is the first study to investigate a trainable Weka segmentation (TWS) implementation using Random Forest machine-learning as a means to develop a fully automated tissue segmentation tool developed specifically for pediatric and adult examinations in a diagnostic CT environment. Current innovation in computed tomography (CT) is focused on radiomics, patient-specific radiation dose calculation, and image quality improvement using iterative reconstruction, all of which require specific knowledge of tissue and organ systems within a CT image. The purpose of this study was to develop a fully automated Random Forest classifier algorithm for segmentation of neck-chest-abdomen-pelvis CT examinations based on pediatric and adult CT protocols. Seven materials were classified: background, lung/internal air or gas, fat, muscle, solid organ parenchyma, blood/contrast enhanced fluid, and bone tissue using Matlab and the TWS plugin of FIJI. The following classifier feature filters of TWS were investigated: minimum, maximum, mean, and variance evaluated over a voxel radius of 2^n (n from 0 to 4), along with noise reduction and edge preserving filters: Gaussian, bilateral, Kuwahara, and anisotropic diffusion. The Random Forest algorithm used 200 trees with 2 features randomly selected per node. The optimized auto-segmentation algorithm resulted in 16 image features including features derived from maximum, mean, variance Gaussian and Kuwahara filters. Dice similarity coefficient (DSC) calculations between manually segmented and Random Forest algorithm segmented images from 21
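The multi-scale filter bank described above (minimum, maximum, mean, and variance over windows of radius 2^n) can be sketched on a 1-D signal; the voxel-radius version used for CT is the 3-D analogue of the same idea. This is an illustrative reconstruction, not the TWS/FIJI implementation.

```python
from statistics import mean, pvariance

def multiscale_features(signal, radii=(1, 2, 4)):
    """Per-sample feature vector: min, max, mean, variance over a
    window of each radius (radii chosen as powers of two), clipped
    at the signal boundaries."""
    feats = []
    for i in range(len(signal)):
        row = []
        for r in radii:
            w = signal[max(0, i - r): i + r + 1]
            row.extend([min(w), max(w), mean(w), pvariance(w)])
        feats.append(row)
    return feats
```

Each sample thus gets len(radii) * 4 features, which would then be fed to the Random Forest classifier alongside any smoothed variants of the signal.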
Lapko, A. V.; Lapko, V. A.; Yuronen, E. A.
2016-11-01
A new technique for testing the hypothesis of independence of random variables is offered. It is based on a nonparametric pattern recognition algorithm. The considered technique does not require sampling of the range of values of the random variables.
An Online Scheduling Algorithm with Advance Reservation for Large-Scale Data Transfers
Balman, Mehmet; Kosar, Tevfik
2010-05-20
Scientific applications and experimental facilities generate massive data sets that need to be transferred to remote collaborating sites for sharing, processing, and long term storage. In order to support increasingly data-intensive science, next generation research networks have been deployed to provide high-speed on-demand data access between collaborating institutions. In this paper, we present a practical model for online data scheduling in which data movement operations are scheduled in advance for end-to-end high performance transfers. In our model, the data scheduler interacts with reservation managers and data transfer nodes in order to reserve available bandwidth to guarantee completion of jobs that are accepted and confirmed to satisfy the preferred time constraint given by the user. Our methodology improves current systems by allowing researchers and higher level meta-schedulers to use data placement as a service where they can plan ahead and reserve the scheduler time in advance for their data movement operations. We have implemented our algorithm and examined possible techniques for incorporation into current reservation frameworks. Performance measurements confirm that the proposed algorithm is efficient and scalable.
Randomized Algorithms for Analysis and Control of Uncertain Systems With Applications
Tempo, Roberto; Dabbene, Fabrizio
2013-01-01
The presence of uncertainty in a system description has always been a critical issue in control. The main objective of Randomized Algorithms for Analysis and Control of Uncertain Systems, with Applications (Second Edition) is to introduce the reader to the fundamentals of probabilistic methods in the analysis and design of systems subject to deterministic and stochastic uncertainty. The approach propounded by this text guarantees a reduction in the computational complexity of classical control algorithms and in the conservativeness of standard robust control techniques. The second edition has been thoroughly updated to reflect recent research and new applications with chapters on statistical learning theory, sequential methods for control and the scenario approach being completely rewritten. Features: · self-contained treatment explaining Monte Carlo and Las Vegas randomized algorithms from their genesis in the principles of probability theory to their use for system analysis; · ...
A Simulation Optimization Algorithm for CTMDPs Based on Randomized Stationary Policies
TANG Hao; XI Hong-Sheng; YIN Bao-Qun
2004-01-01
Based on the theory of Markov performance potentials and neuro-dynamic programming (NDP) methodology, we study a simulation optimization algorithm for a class of continuous time Markov decision processes (CTMDPs) under randomized stationary policies. The proposed algorithm estimates the gradient of the average cost performance measure with respect to policy parameters by transforming a continuous time Markov process into a uniform Markov chain and simulating a single sample path of the chain. The goal is to look for a suboptimal randomized stationary policy. The algorithm derived here can meet the needs of performance optimization of many difficult systems with large-scale state spaces. Finally, a numerical example for a controlled Markov process is provided.
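The uniformization step mentioned above (transforming a continuous time Markov process into a uniform Markov chain) has a compact standard form: given generator Q and any rate Λ ≥ max_i |q_ii|, the embedded discrete-time chain has transition matrix P = I + Q/Λ. A minimal sketch, with a made-up two-state generator in the usage example:

```python
def uniformize(Q, rate=None):
    """Uniformization of a CTMC: returns (P, rate) with P = I + Q/rate.
    If no rate is given, use the smallest admissible one, max_i |q_ii|."""
    n = len(Q)
    if rate is None:
        rate = max(-Q[i][i] for i in range(n))
    P = [[(1.0 if i == j else 0.0) + Q[i][j] / rate for j in range(n)]
         for i in range(n)]
    return P, rate
```

For Q = [[-2, 2], [1, -1]] this yields rate Λ = 2 and P = [[0, 1], [0.5, 0.5]]; each row of P sums to 1, so a single sample path of the uniform chain can be simulated with ordinary discrete-time machinery.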
Prajit Limsaiprom
2014-07-01
The rise of the Internet has accelerated the creation of various large-scale online social networks, which describe the relationships and activities between human beings. Real-world online social networks are too large to present useful information for identifying criminals or cyber-attacks. This research proposes a new information security analytic model for online social networks, called the Security Visualization Analytics (SVA) Model. The SVA Model uses two algorithms: (1) a graph-based structure algorithm, which analyzes the key factors of influencing nodes (density, centrality, and cohesive subgroups) to identify the influencing nodes of anomaly and attack patterns; and (2) supervised learning with the OneR classification algorithm, used to predict new links from such influencing nodes, i.e., to discover which nodes in the online social network will be linked next from the attacked influencing nodes, in order to monitor the risk. The results identified 42 influencing nodes of anomaly and attack patterns, and 31 new links from such nodes were predicted by the SVA Model at a 95.0% confidence level. The proposed model and results illustrate that the SVA analysis is significant. Such understanding can lead to efficient implementation of tools for link prediction in online social networks, and it could be applied as a guide for further investigation of social network behavior in order to improve security models and give advance notice of risks, computer viruses, or cyber-attacks in online social networks.
A subexponential lower bound for the Random Facet algorithm for Parity Games
Friedmann, Oliver; Hansen, Thomas Dueholm; Zwick, Uri
2011-01-01
Parity Games form an intriguing family of infinite duration games whose solution is equivalent to the solution of important problems in automatic verification and automata theory. They also form a very natural subclass of Deterministic Mean Payoff Games, which in turn is a very natural subclass of turn-based Stochastic Mean Payoff Games. It is a major open problem whether these game families can be solved in polynomial time. The currently theoretically fastest algorithms for the solution of all these games are adaptations of the randomized algorithms of Kalai and of Matoušek, Sharir and Welzl for LP-type problems, an abstract generalization of linear programming. The expected running time of both algorithms is subexponential in the size of the game, i.e., 2^O(√(n log n)), where n is the number of vertices in the game. We focus in this paper on the algorithm of Matoušek, Sharir and Welzl...
Biased Random-Key Genetic Algorithms for the Winner Determination Problem in Combinatorial Auctions.
de Andrade, Carlos Eduardo; Toso, Rodrigo Franco; Resende, Mauricio G C; Miyazawa, Flávio Keidi
2015-01-01
In this paper we address the problem of picking a subset of bids in a general combinatorial auction so as to maximize the overall profit using the first-price model. This winner determination problem assumes that a single bidding round is held to determine both the winners and prices to be paid. We introduce six variants of biased random-key genetic algorithms for this problem. Three of them use a novel initialization technique that makes use of solutions of intermediate linear programming relaxations of an exact mixed integer linear programming model as initial chromosomes of the population. An experimental evaluation compares the effectiveness of the proposed algorithms with the standard mixed linear integer programming formulation, a specialized exact algorithm, and the best-performing heuristics proposed for this problem. The proposed algorithms are competitive and offer strong results, mainly for large-scale auctions.
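The defining ingredient of a biased random-key genetic algorithm is the decoder that maps a vector of random keys to a feasible solution. For winner determination, one plausible decoder sorts bids by key and accepts them greedily while they remain item-disjoint; this sketch is an assumption about such a decoder, not the authors' exact construction.

```python
def decode(keys, bids):
    """Map random keys in [0,1) to a feasible auction outcome.
    bids: list of (items, price) pairs; a bid is accepted only if none
    of its items has already been awarded. Returns the total profit."""
    order = sorted(range(len(bids)), key=lambda i: keys[i])
    taken, profit = set(), 0
    for i in order:
        items, price = bids[i]
        if taken.isdisjoint(items):
            taken |= set(items)
            profit += price
    return profit
```

The genetic algorithm then evolves the key vectors; because every key vector decodes to a feasible solution, crossover and mutation never produce infeasible offspring, which is the main appeal of the random-key encoding.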
Miller, Ross H; Gillette, Jason C; Derrick, Timothy R; Caldwell, Graham E
2009-04-01
Muscle forces during locomotion are often predicted using static optimisation and sequential quadratic programming (SQP). SQP has been criticised for over-estimating force magnitudes and under-estimating co-contraction. These problems may be related to SQP's difficulty in locating the global minimum of complex optimisation problems. Algorithms designed to locate the global minimum may be useful in addressing these problems. Muscle forces for 18 flexors and extensors of the lower extremity were predicted for 10 subjects during the stance phase of running. Static optimisation using SQP and two random search (RS) algorithms (a genetic algorithm and simulated annealing) estimated muscle forces by minimising the sum of cubed muscle stresses. The RS algorithms predicted smaller peak forces (42% smaller on average) and smaller muscle impulses (46% smaller on average) than SQP, and located solutions with smaller cost function scores. Results suggest that RS may be a more effective tool than SQP for minimising the sum of cubed muscle stresses in static optimisation.
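The random-search idea can be sketched as a simulated annealing loop on a toy two-muscle problem: minimize the sum of cubed muscle stresses (F/A)^3 subject to a joint-moment equality constraint, handled here by a quadratic penalty. All parameter values and the penalty weight are invented for illustration; the paper's 18-muscle problem is far larger.

```python
import math, random

def cubed_stress_cost(F, A):
    """Sum of cubed muscle stresses for forces F and cross-sections A."""
    return sum((f / a) ** 3 for f, a in zip(F, A))

def anneal(F0, A, r, M, iters=20000, seed=7):
    """Simulated annealing: minimize cubed stresses subject to the
    moment constraint sum(r_i * F_i) = M and F_i >= 0 (via clamping)."""
    rng = random.Random(seed)
    penalty = lambda F: 1e4 * (sum(ri * fi for ri, fi in zip(r, F)) - M) ** 2
    cost = lambda F: cubed_stress_cost(F, A) + penalty(F)
    F, c = list(F0), cost(F0)
    best_F, best_c = list(F), c
    for t in range(iters):
        T = 1.0 * (1.0 - t / iters) + 1e-6          # linear cooling
        cand = [max(0.0, f + rng.gauss(0.0, 0.5)) for f in F]
        cc = cost(cand)
        # Metropolis acceptance: always accept improvements, sometimes worse
        if cc < c or rng.random() < math.exp(-(cc - c) / T):
            F, c = cand, cc
            if c < best_c:
                best_F, best_c = list(F), c
    return best_F, best_c
```

Because the best-so-far solution is tracked separately, the returned cost never exceeds the cost of the starting guess, and the penalty weight keeps the moment-constraint residual small at the returned solution.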
Wade, Shari L; Walz, Nicolay C; Carey, JoAnne; McMullen, Kendra M; Cass, Jennifer; Mark, Erin; Yeates, Keith Owen
2012-11-01
To examine the results of a randomized clinical trial (RCT) of Teen Online Problem Solving (TOPS), an online problem solving therapy model, in increasing problem-solving skills and decreasing depressive symptoms and global distress for caregivers of adolescents with traumatic brain injury (TBI). Families of adolescents aged 11-18 who sustained a moderate to severe TBI between 3 and 19 months earlier were recruited from hospital trauma registries. Participants were assigned to receive a web-based, problem-solving intervention (TOPS, n = 20), or access to online resources pertaining to TBI (Internet Resource Comparison; IRC; n = 21). Parent report of problem solving skills, depressive symptoms, global distress, utilization, and satisfaction were assessed pre- and posttreatment. Groups were compared on follow-up scores after controlling for pretreatment levels. Family income was examined as a potential moderator of treatment efficacy. Improvement in problem solving was examined as a mediator of reductions in depression and distress. Forty-one participants provided consent and completed baseline assessments, with follow-up assessments completed on 35 participants (16 TOPS and 19 IRC). Parents in both groups reported a high level of satisfaction with both interventions. Improvements in problem solving skills and depression were moderated by family income, with caregivers of lower income in TOPS reporting greater improvements. Increases in problem solving partially mediated reductions in global distress. Findings suggest that TOPS may be effective in improving problem solving skills and reducing depressive symptoms for certain subsets of caregivers in families of adolescents with TBI.
Bai, Danyu
2015-08-01
This paper discusses the flow shop scheduling problem to minimise the total quadratic completion time (TQCT) with release dates in offline and online environments. For this NP-hard problem, the investigation is focused on the performance of two online algorithms based on the Shortest Processing Time among Available jobs rule. Theoretical results indicate the asymptotic optimality of the algorithms as the problem scale is sufficiently large. To further enhance the quality of the original solutions, the improvement scheme is provided for these algorithms. A new lower bound with performance guarantee is provided, and computational experiments show the effectiveness of these heuristics. Moreover, several results of the single-machine TQCT problem with release dates are also obtained for the deduction of the main theorem.
Engineering Online and In-Person Social Networks for Physical Activity: A Randomized Trial.
Rovniak, Liza S; Kong, Lan; Hovell, Melbourne F; Ding, Ding; Sallis, James F; Ray, Chester A; Kraschnewski, Jennifer L; Matthews, Stephen A; Kiser, Elizabeth; Chinchilli, Vernon M; George, Daniel R; Sciamanna, Christopher N
2016-12-01
Social networks can influence physical activity, but little is known about how best to engineer online and in-person social networks to increase activity. The purpose of this study was to conduct a randomized trial based on the Social Networks for Activity Promotion model to assess the incremental contributions of different procedures for building social networks on objectively measured outcomes. Physically inactive adults (n = 308, age, 50.3 (SD = 8.3) years, 38.3 % male, 83.4 % overweight/obese) were randomized to one of three groups. The Promotion group evaluated the effects of weekly emailed tips emphasizing social network interactions for walking (e.g., encouragement, informational support); the Activity group evaluated the incremental effect of adding an evidence-based online fitness walking intervention to the weekly tips; and the Social Networks group evaluated the additional incremental effect of providing access to an online networking site for walking as well as prompting walking/activity across diverse settings. The primary outcome was mean change in accelerometer-measured moderate-to-vigorous physical activity (MVPA), assessed at 3 and 9 months from baseline. Participants increased their MVPA by 21.0 min/week, 95 % CI [5.9, 36.1], p = .005, at 3 months, and this change was sustained at 9 months, with no between-group differences. Although the structure of procedures for targeting social networks varied across intervention groups, the functional effect of these procedures on physical activity was similar. Future research should evaluate if more powerful reinforcers improve the effects of social network interventions. The trial was registered with the ClinicalTrials.gov (NCT01142804).
An online spaced-education game for global continuing medical education: a randomized trial.
Kerfoot, B Price; Baker, Harley
2012-07-01
To assess the efficacy of a "spaced-education" game as a method of continuing medical education (CME) among physicians across the globe. The efficacy of educational games for the CME has yet to be established. We created a novel online educational game by incorporating game mechanics into "spaced education" (SE), an evidence-based method of online CME. This 34-week randomized trial enrolled practicing urologists across the globe. The SE game consisted of 40 validated multiple-choice questions and explanations on urology clinical guidelines. Enrollees were randomized to 2 cohorts: cohort A physicians were sent 2 questions via an automated e-mail system every 2 days, and cohort B physicians were sent 4 questions every 4 days. Adaptive game mechanics re-sent the questions in 12 or 24 days if answered incorrectly and correctly, respectively. Questions expired if not answered on time (appointment dynamic). Physicians retired questions by answering each correctly twice-in-a-row (progression dynamic). Competition was fostered by posting relative performance among physicians. Main outcome measures were baseline scores (percentage of questions answered correctly upon initial presentation) and completion scores (percentage of questions retired). A total of 1470 physicians from 63 countries enrolled. Median baseline score was 48% (interquartile range [IQR] 17) and, in multivariate analyses, was found to vary significantly by region (Cohen dmax = 0.31, P = 0.001) and age (dmax = 0.41, P games. An online SE game can substantially improve guidelines knowledge and is a well-accepted method of global CME delivery.
Lin, Nan; Jiang, Junhai; Guo, Shicheng; Xiong, Momiao
2015-01-01
Due to advancements in sensor technology, growing large medical image data sets make it possible to visualize anatomical changes in biological tissues. As a consequence, medical images have the potential to enhance the diagnosis of disease, the prediction of clinical outcomes, and the characterization of disease progression. At the same time, the growing data dimensions pose great methodological and computational challenges for the representation and selection of features in image cluster analysis. To address these challenges, we first extend functional principal component analysis (FPCA) from one dimension to two dimensions to fully capture the spatial variation of the image signals. The image signals contain a large number of redundant features which provide no additional information for clustering analysis. The widely used methods for removing irrelevant features are sparse clustering algorithms using a lasso-type penalty to select the features. However, the accuracy of clustering using a lasso-type penalty depends on the selection of the penalty parameters and the threshold value, which in practice are difficult to determine. Recently, randomized algorithms have received a great deal of attention in big data analysis. This paper presents a randomized algorithm for accurate feature selection in image clustering analysis. The proposed method is applied to both the liver and kidney cancer histology image data from the TCGA database. The results demonstrate that the randomized feature selection method coupled with functional principal component analysis substantially outperforms the current sparse clustering algorithms in image cluster analysis. PMID:26196383
A wavelet-based ECG delineation algorithm for 32-bit integer online processing
Chiari Lorenzo
2011-04-01
Background: Since the first well-known electrocardiogram (ECG) delineator based on the Wavelet Transform (WT), presented by Li et al. in 1995, a significant research effort has been devoted to the exploitation of this promising method. Its ability to reliably delineate the major waveform components (mono- or bi-phasic P wave, QRS, and mono- or bi-phasic T wave) would make it a suitable candidate for efficient online processing of ambulatory ECG signals. Unfortunately, previous implementations of this method adopt non-linear operators such as root mean square (RMS) or floating point algebra, which are computationally demanding. Methods: This paper presents an advanced 32-bit integer, linear algebra approach to online QRS detection and P-QRS-T wave delineation of a single lead ECG signal, based on the WT. Results: The QRS detector performance was validated on the MIT-BIH Arrhythmia Database (sensitivity Se = 99.77%, positive predictive value P+ = 99.86%, on 109010 annotated beats) and on the European ST-T Database (Se = 99.81%, P+ = 99.56%, on 788050 annotated beats). The ECG delineator was validated on the QT Database, showing a mean error between manual and automatic annotation below 1.5 samples for all fiducial points (P-onset, P-peak, P-offset, QRS-onset, QRS-offset, T-peak, T-offset) and a mean standard deviation comparable to other established methods. Conclusions: The proposed algorithm exhibits reliable QRS detection as well as accurate ECG delineation, in spite of a simple structure built on integer linear algebra.
An Effective Randomized QoS Routing Algorithm on Networks with Inaccurate Parameters
WANG Jianxin(王建新); CHEN Jian'er(陈建二); CHEN Songqiao(陈松乔)
2002-01-01
This paper develops an effective randomized on-demand QoS routing algorithm on networks with inaccurate link-state information. Several new techniques are proposed in the algorithm. First, the maximum safety rate and the minimum delay for each node in the network are pre-computed, which simplify the network complexity and provide the routing process with useful information. The routing process is dynamically directed by the safety rate and delay of the partial routing path developed so far and by the maximum safety rate and the minimum delay of the next node. Randomness is used at the link level and depends dynamically on the routing configuration. This provides great flexibility for the routing process, prevents the routing process from overusing certain fixed routing paths, and adequately balances the safety rate and delay of the routing path. A network testing environment has been established and five parameters are introduced to measure the performance of QoS routing algorithms. Experimental results demonstrate that in terms of the proposed parameters, the algorithm outperforms existing QoS algorithms appearing in the literature.
Kalaitzaki, Eleftheria; White, Ian R; Khadjesari, Zarnie; Murray, Elizabeth; Linke, Stuart; Thompson, Simon G; Godfrey, Christine; Wallace, Paul
2011-01-01
Background There has been limited study of factors influencing response rates and attrition in online research. Online experiments were nested within the pilot (study 1, n = 3780) and main trial (study 2, n = 2667) phases of an evaluation of a Web-based intervention for hazardous drinkers: the Down Your Drink randomized controlled trial (DYD-RCT). Objectives The objective was to determine whether differences in the length and relevance of questionnaires can impact upon loss to follow-up in online trials. Methods A randomized controlled trial design was used. All participants who consented to enter DYD-RCT and completed the primary outcome questionnaires were randomized to complete one of four secondary outcome questionnaires at baseline and at follow-up. These questionnaires varied in length (additional 23 or 34 versus 10 items) and relevance (alcohol problems versus mental health). The outcome measure was the proportion of participants who completed follow-up at each of two follow-up intervals: study 1 after 1 and 3 months and study 2 after 3 and 12 months. Results At all four follow-up intervals there were no significant effects of additional questionnaire length on follow-up. Randomization to the less relevant questionnaire resulted in significantly lower rates of follow-up in two of the four assessments made (absolute difference of 4%, 95% confidence interval [CI] 0%-8%, in both study 1 after 1 month and in study 2 after 12 months). A post hoc pooled analysis across all four follow-up intervals found this effect of marginal statistical significance (unadjusted difference, 3%, range 1%-5%, P = .01; difference adjusted for prespecified covariates, 3%, range 0%-5%, P = .05). Conclusions Apparently minor differences in study design decisions may have a measurable impact on attrition in trials. Further investigation is warranted of the impact of the relevance of outcome measures on follow-up rates and, more broadly, of the consequences of what we ask participants to
Somdip Dey
2012-06-01
Due to the tremendous growth in communication technology, it is now a real challenge to send confidential data through a communication network. For this reason, Nath et al. developed several information security systems combining cryptography and steganography, and the present method, ASA_QR, is one of them. In this paper, the authors present a new steganography algorithm that hides a small encrypted secret message inside a QR Code™, which is then randomized and finally embedded inside some common image. Quick Response Codes (QR Codes) are a type of two-dimensional matrix barcode used for encoding information, and have recently become very popular for their high storage capacity. The present method, ASA_QR, combines a strong encryption algorithm with data hiding in two stages to make the entire process extremely hard to break. Here, the secret message is first encrypted and hidden in a QR Code™, and then that QR Code™ is embedded in a cover file (picture file) in a random manner, using the standard method of steganography. In this way the secured data is almost impossible to retrieve without knowing the cryptography key, the steganography password, and the exact unhiding method. For encrypting data the authors used a method developed by Nath et al., namely TTJSA, which is based on the generalized modified Vernam cipher and the MSA and NJJSA methods; cryptanalysis shows that TTJSA is free from standard cryptographic attacks, such as differential attacks, plaintext attacks, or brute force attacks. After encrypting the data using TTJSA, the authors used a standard steganographic method to hide the data inside a host file. The present method may be used for sharing secret keys, passwords, digital signatures, etc.
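The final embedding stage (hiding the randomized QR Code inside a cover image "in a random manner") resembles password-seeded least-significant-bit (LSB) steganography. A generic sketch of that idea on a flat list of pixel values follows; it is not the ASA_QR implementation, and the password-as-PRNG-seed scheme is an assumption made for illustration.

```python
import random

def embed_lsb(pixels, bits, password_seed):
    """Hide bits in the LSBs of pixels at positions chosen by a
    password-seeded PRNG; the same password recovers the positions."""
    rng = random.Random(password_seed)
    positions = rng.sample(range(len(pixels)), len(bits))
    out = list(pixels)
    for bit, pos in zip(bits, positions):
        out[pos] = (out[pos] & ~1) | bit   # overwrite least significant bit
    return out

def extract_lsb(pixels, n_bits, password_seed):
    """Re-derive the embedding positions from the password and read LSBs."""
    rng = random.Random(password_seed)
    positions = rng.sample(range(len(pixels)), n_bits)
    return [pixels[pos] & 1 for pos in positions]
```

Without the password, an attacker does not know which pixels carry payload bits; this mirrors the abstract's claim that recovery requires the steganography password in addition to the cryptography key.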
Extension of the SAEM algorithm for nonlinear mixed models with 2 levels of random effects.
Panhard, Xavière; Samson, Adeline
2009-01-01
This article focuses on parameter estimation of multilevel nonlinear mixed-effects models (MNLMEMs). These models are used to analyze data presenting multiple hierarchical levels of grouping (cluster data, clinical trials with several observation periods, ...). The variability of the individual parameters of the regression function is thus decomposed into between-subject variability and higher levels of variability (e.g. within-subject variability). We propose maximum likelihood estimates of the parameters of those MNLMEMs with 2 levels of random effects, using an extension of the stochastic approximation version of the expectation-maximization (SAEM) algorithm combined with Markov chain Monte Carlo. The extended SAEM algorithm is split into an explicit direct expectation-maximization (EM) algorithm and a stochastic EM part. Compared to the original algorithm, additional sufficient statistics have to be approximated by relying on the conditional distribution of the second level of random effects. This estimation method is evaluated on simulated pharmacokinetic crossover trials mimicking theophylline concentration data. Results obtained on those data sets with either the SAEM algorithm or the first-order conditional estimates (FOCE) algorithm (implemented in the nlme function of the R software) are compared: the biases and root mean square errors of almost all the SAEM estimates are smaller than the FOCE ones. Finally, we apply the extended SAEM algorithm to analyze the pharmacokinetic interaction of tenofovir on atazanavir, a novel protease inhibitor, from the Agence Nationale de Recherche sur le Sida 107-Puzzle 2 study. A significant decrease of the area under the curve of atazanavir is found in patients receiving both treatments.
A Particle Swarm Optimization Algorithm with Variable Random Functions and Mutation
ZHOU Xiao-Jun; YANG Chun-Hua; GUI Wei-Hua; DONG Tian-Xue
2014-01-01
The convergence analysis of standard particle swarm optimization (PSO) has shown that changing the random functions, the personal best, and the group best has the potential to improve the performance of the PSO. In this paper, a novel strategy with variable random functions and polynomial mutation is introduced into the PSO, called particle swarm optimization with variable random functions and mutation (PSO-RM). The random functions are adjusted with the density of the population so as to manipulate the weights of the cognition part and the social part. Mutation is executed on both the personal best particle and the group best particle to explore new areas. Experimental results demonstrate the effectiveness of the strategy.
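The velocity update and best-particle mutation can be sketched minimally. This is an illustrative plain PSO, with a Gaussian perturbation of the group best standing in for the paper's polynomial mutation; the coefficients and the sphere benchmark are assumptions, not the authors' setup.

```python
import random

def sphere(x):
    """Benchmark objective: minimum value 0 at the origin."""
    return sum(v * v for v in x)

def pso(f, dim=5, swarm=20, iters=300, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]                  # personal bests
    gbest = min(pbest, key=f)[:]                 # group best
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()  # the "random functions"
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
        # mutate the group best (Gaussian stand-in for polynomial mutation)
        mutant = [g + rng.gauss(0, 0.1) for g in gbest]
        gbest = min(pbest + [gbest, mutant], key=f)[:]
    return gbest

best = pso(sphere)
assert sphere(best) < 1e-2  # converges close to the optimum
```

PSO-RM additionally varies the r1/r2 distributions with population density; here they stay uniform for brevity.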
A. Cancelier
Abstract This study used a predictive controller based on an empirical nonlinear model comprising a three-layer feedforward neural network for temperature control of the suspension polymerization process. In addition to the offline training technique, an algorithm was also analyzed for online adaptation of its parameters. For offline training, the network was trained statically, using the genetic algorithm technique in combination with the least squares method. For online training, the network was trained on a recurring basis, using only the genetic algorithm technique. In this case, only the weights and bias of the output layer neuron were modified, starting from the parameters obtained in the offline training. The experimental results obtained in a pilot plant show good performance of the proposed control system, with superior performance for the control algorithm with online adaptation of the model, particularly with respect to the presence of offset for the fixed-parameters model.
Estimation of biomass in wheat using random forest regression algorithm and remote sensing data
Li'ai Wang; Xudong Zhou; Xinkai Zhu; Zhaodi Dong; Wenshan Guo
2016-01-01
Wheat biomass can be estimated using appropriate spectral vegetation indices. However, the accuracy of estimation should be further improved for on-farm crop management. Previous studies focused on developing vegetation indices; however, limited research exists on modeling algorithms. The emerging Random Forest (RF) machine-learning algorithm is regarded as one of the most precise prediction methods for regression modeling. The objectives of this study were to (1) investigate the applicability of the RF regression algorithm for remotely estimating wheat biomass, (2) test the performance of the RF regression model, and (3) compare the performance of the RF algorithm with support vector regression (SVR) and artificial neural network (ANN) machine-learning algorithms for wheat biomass estimation. Single HJ-CCD images of wheat from test sites in Jiangsu province were obtained during the jointing, booting, and anthesis stages of growth. Fifteen vegetation indices were calculated based on these images. In-situ wheat above-ground dry biomass was measured during the HJ-CCD data acquisition. The results showed that the RF model produced more accurate estimates of wheat biomass than the SVR and ANN models at each stage, and its robustness is as good as that of SVR and better than that of ANN. The RF algorithm provides a useful exploratory and predictive tool for estimating wheat biomass on a large scale in Southern China.
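The bagging idea behind RF regression can be sketched with one-split trees on synthetic data. This is a minimal stand-in, assuming a single made-up "vegetation index to biomass" feature rather than the study's fifteen HJ-CCD indices; a real application would use a full random-forest implementation.

```python
import random, statistics

def fit_stump(X, y):
    """Best single-split regression tree (minimizes within-leaf squared error)."""
    best = None
    for j in range(len(X[0])):
        for t in sorted({x[j] for x in X}):
            L = [y[i] for i in range(len(X)) if X[i][j] <= t]
            R = [y[i] for i in range(len(X)) if X[i][j] > t]
            if not L or not R:
                continue
            ml, mr = statistics.fmean(L), statistics.fmean(R)
            err = sum((v - ml) ** 2 for v in L) + sum((v - mr) ** 2 for v in R)
            if best is None or err < best[0]:
                best = (err, j, t, ml, mr)
    return best[1:]

def rf_fit(X, y, trees=30, seed=0):
    """Bagging: each tree is fit to a bootstrap resample of the data."""
    rng = random.Random(seed)
    forest = []
    for _ in range(trees):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]
        forest.append(fit_stump([X[i] for i in idx], [y[i] for i in idx]))
    return forest

def rf_predict(forest, x):
    """Average the per-tree predictions."""
    return statistics.fmean(l if x[j] <= t else r for j, t, l, r in forest)

# synthetic "vegetation index -> biomass" relationship (illustrative only)
rng = random.Random(42)
X = [[rng.uniform(0, 1)] for _ in range(100)]
y = [3 * x[0] + rng.gauss(0, 0.1) for x in X]
forest = rf_fit(X, y)
assert rf_predict(forest, [0.9]) > rf_predict(forest, [0.1])  # trend learned
```

Averaging many bootstrap-trained trees is what gives RF the robustness the abstract compares against SVR and ANN.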
Nonlinear Random Effects Mixture Models: Maximum Likelihood Estimation via the EM Algorithm.
Wang, Xiaoning; Schumitzky, Alan; D'Argenio, David Z
2007-08-15
Nonlinear random effects models with finite mixture structures are used to identify polymorphism in pharmacokinetic/pharmacodynamic phenotypes. An EM algorithm for maximum likelihood estimation is developed that uses sampling-based methods to implement the expectation step, which results in an analytically tractable maximization step. A benefit of the approach is that no model linearization is performed and the estimation precision can be arbitrarily controlled by the sampling process. A detailed simulation study illustrates the feasibility of the estimation approach and evaluates its performance. Applications of the proposed nonlinear random effects mixture model approach to other population pharmacokinetic/pharmacodynamic problems will be of interest for future investigation.
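The sampling-based expectation step can be illustrated with a much-simplified stochastic EM for a two-component Gaussian mixture (unit variances, no random effects). This toy model is an assumption for illustration only; the paper's setting is a nonlinear mixed-effects model.

```python
import random, math, statistics

def sem_mixture(data, iters=200, seed=0):
    """Stochastic EM for a two-component Gaussian mixture with unit variances.
    The E-step is implemented by *sampling* component labels from their
    posterior, echoing the sampling-based expectation step (much simplified)."""
    rng = random.Random(seed)
    mu = [min(data), max(data)]     # crude initialization
    w = [0.5, 0.5]                  # mixing weights
    for _ in range(iters):
        groups = [[], []]
        for x in data:              # sampled E-step
            p0 = w[0] * math.exp(-0.5 * (x - mu[0]) ** 2)
            p1 = w[1] * math.exp(-0.5 * (x - mu[1]) ** 2)
            k = 0 if rng.random() * (p0 + p1) < p0 else 1
            groups[k].append(x)
        for k in (0, 1):            # M-step from the sampled assignments
            if groups[k]:
                mu[k] = statistics.fmean(groups[k])
                w[k] = len(groups[k]) / len(data)
    return sorted(mu)

rng = random.Random(7)
data = [rng.gauss(0, 1) for _ in range(300)] + [rng.gauss(5, 1) for _ in range(300)]
m0, m1 = sem_mixture(data)
assert abs(m0) < 0.5 and abs(m1 - 5) < 0.5  # both component means recovered
```

As in the paper, the randomness of the E-step controls estimation precision through the amount of sampling, and the M-step stays in closed form.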
A partially reflecting random walk on spheres algorithm for electrical impedance tomography
Maire, Sylvain; Simon, Martin
2015-12-01
In this work, we develop a probabilistic estimator for the voltage-to-current map arising in electrical impedance tomography. This novel so-called partially reflecting random walk on spheres estimator enables Monte Carlo methods to compute the voltage-to-current map in an embarrassingly parallel manner, which is an important issue with regard to the corresponding inverse problem. Our method uses the well-known random walk on spheres algorithm inside subdomains where the diffusion coefficient is constant and employs replacement techniques motivated by finite difference discretization to deal with both mixed boundary conditions and interface transmission conditions. We analyze the global bias and the variance of the new estimator both theoretically and experimentally. Subsequently, the variance of the new estimator is considerably reduced via a novel control variate conditional sampling technique which yields a highly efficient hybrid forward solver coupling probabilistic and deterministic algorithms.
程传福; 刘曼; 滕树云; 宋洪胜; 陈建平; 徐至展
2003-01-01
A method is proposed for extracting the correlation functions of random surfaces from image speckle intensities. Theoretically, we analyse the integral expression for the average intensity of the image speckles and compare it with the Fourier-Bessel transform pair of the exponential of the height-height correlation function of the random surface. A numerical algorithm is then proposed to supply the missing Bessel-function factor in the expression for the average speckle intensity, which converts the intensity data into a Fourier-Bessel transform pair. Experimentally, we measure the average image speckle intensity versus the radius of the filtering aperture in a 4f system and extract the height-height correlation function using the proposed algorithm. The results of practical measurements on three surface samples, and their comparison with atomic force microscopy, validate the feasibility of this method.
Design and Analysis of Randomized and Approximation Algorithms (Dagstuhl Seminar 11241)
Dyer, Martin; Feige, Uriel; Frieze, Alan M.; Karpinski, Marek
2011-01-01
The Dagstuhl Seminar on ``Design and Analysis of Randomized and Approximation Algorithms'' (Seminar 11241) was held at Schloss Dagstuhl between June 13--17, 2011. There were 26 regular talks and several informal and open problem session contributions presented during this seminar. Abstracts of the presentations have been put together in this seminar proceedings document together with some links to extended abstracts and full papers.
08201 Abstracts Collection -- Design and Analysis of Randomized and Approximation Algorithms
Dyer, Martin E.; Jerrum, Mark; Karpinski, Marek
2008-01-01
From 11.05.08 to 16.05.08, the Dagstuhl Seminar 08201 ``Design and Analysis of Randomized and Approximation Algorithms'' was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research work, and ongoing work and open problems were discussed. Abstracts of the presentations which were given during the seminar as well as abstracts of seminar results and ideas are put together in ...
05201 Abstracts Collection -- Design and Analysis of Randomized and Approximation Algorithms
Dyer, Martin; Jerrum, Mark; Karpinski, Marek
2005-01-01
From 15.05.05 to 20.05.05, the Dagstuhl Seminar 05201 ``Design and Analysis of Randomized and Approximation Algorithms'' was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar as well as abstracts of seminar results and ideas are put together in this paper. The fir...
Yang Zheng
2009-10-01
Abstract Background Tyrosine sulfation is one of the most important posttranslational modifications. Due to its relevance to various disease developments, tyrosine sulfation has become the target for drug design. In order to facilitate efficient drug design, accurate prediction of sulfotyrosine sites is desirable. A predictor published seven years ago has been very successful with claimed prediction accuracy of 98%. However, it has a particularly low sensitivity when predicting sulfotyrosine sites in some newly sequenced proteins. Results A new approach has been developed for predicting sulfotyrosine sites using the random forest algorithm after a careful evaluation of seven machine learning algorithms. Peptides are formed by consecutive residues symmetrically flanking tyrosine sites. They are then encoded using an amino acid hydrophobicity scale. This new approach has increased the sensitivity by 22%, the specificity by 3%, and the total prediction accuracy by 10% compared with the previous predictor using the same blind data. Meanwhile, both negative and positive predictive powers have been increased by 9%. In addition, the random forest model has an excellent feature for ranking the residues flanking tyrosine sites, hence providing more information for further investigating the tyrosine sulfation mechanism. A web tool has been implemented at http://ecsb.ex.ac.uk/sulfotyrosine for public use. Conclusion The random forest algorithm is able to deliver a better model compared with the Hidden Markov Model, the support vector machine, artificial neural networks, and others for predicting sulfotyrosine sites. The success shows that the random forest algorithm together with an amino acid hydrophobicity scale encoding can be a good candidate for peptide classification.
A randomized algorithm for two-cluster partition of a set of vectors
Kel'manov, A. V.; Khandeev, V. I.
2015-02-01
A randomized algorithm is substantiated for the strongly NP-hard problem of partitioning a finite set of vectors in Euclidean space into two clusters of given sizes according to the minimum-sum-of-squared-distances criterion. It is assumed that the centroid of one of the clusters is to be optimized and is determined as the mean value over all vectors in this cluster, while the centroid of the other cluster is fixed at the origin. For an established parameter value, the algorithm finds an approximate solution of the problem in time that is linear in the space dimension and the input size of the problem for given values of the relative error and failure probability. Conditions are established under which the algorithm is asymptotically exact and runs in time that is linear in the space dimension and quadratic in the input size of the problem.
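A simplified randomized-restart sketch of the same idea — guess the free centroid from a small random sample, then greedily pick the cluster — can be written in a few lines. It carries none of the paper's approximation guarantees; the sample size and trial count are arbitrary assumptions.

```python
import random

def sq(v):
    """Squared Euclidean norm."""
    return sum(t * t for t in v)

def diff(a, b):
    return [x - y for x, y in zip(a, b)]

def mean(vectors):
    n = len(vectors)
    return [sum(v[d] for v in vectors) / n for d in range(len(vectors[0]))]

def cost(X, idx):
    """Criterion: the first cluster pays distance to its own mean,
    every remaining vector pays distance to the origin."""
    c = mean([X[i] for i in idx])
    inner = sum(sq(diff(X[i], c)) for i in idx)
    outer = sum(sq(X[i]) for i in range(len(X)) if i not in idx)
    return inner + outer

def randomized_partition(X, m, trials=50, seed=0):
    """Guess the free centroid from a small random sample, then take the m
    vectors that gain most from joining it; keep the best of many trials."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        sample = rng.sample(range(len(X)), min(m, 5))
        c = mean([X[i] for i in sample])
        # a vector x prefers the first cluster when |x - c|^2 < |x|^2
        order = sorted(range(len(X)), key=lambda i: sq(diff(X[i], c)) - sq(X[i]))
        idx = set(order[:m])
        if best is None or cost(X, idx) < cost(X, best):
            best = idx
    return best

rng = random.Random(1)
far = [[5 + rng.gauss(0, 0.3), 5 + rng.gauss(0, 0.3)] for _ in range(20)]
noise = [[rng.gauss(0, 0.3), rng.gauss(0, 0.3)] for _ in range(20)]
idx = randomized_partition(far + noise, 20)
assert idx == set(range(20))  # the cluster away from the origin is recovered
```

The paper's algorithm achieves its linear-time bound by a more careful randomized construction with an explicit error and failure-probability analysis.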
The backtracking survey propagation algorithm for solving random K-SAT problems
Marino, Raffaele; Parisi, Giorgio; Ricci-Tersenghi, Federico
2016-10-01
Discrete combinatorial optimization has a central role in many scientific disciplines; however, for hard problems we lack linear-time algorithms that would allow us to solve very large instances. Moreover, it is still unclear which key features make a discrete combinatorial optimization problem hard to solve. Here we study random K-satisfiability problems with K=3,4, which are known to be very hard close to the SAT-UNSAT threshold, where problems stop having solutions. We show that the backtracking survey propagation algorithm, in a time practically linear in the problem size, is able to find solutions very close to the threshold, in a region unreachable by any other algorithm. All solutions found have no frozen variables, thus supporting the conjecture that only unfrozen solutions can be found in linear time, and that a problem becomes impossible to solve in linear time when all solutions contain frozen variables.
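Random K-SAT instances are easy to generate. As a simple stand-in for the far more elaborate backtracking survey propagation, a basic WalkSAT local search is sketched below on an under-constrained instance (clause-to-variable ratio well below the 3-SAT threshold of roughly 4.27); parameters are illustrative.

```python
import random

def random_ksat(n_vars, n_clauses, k=3, rng=None):
    """Uniformly random K-SAT: each clause picks k distinct variables,
    each negated with probability 1/2."""
    rng = rng or random.Random()
    return [[v if rng.random() < 0.5 else -v
             for v in rng.sample(range(1, n_vars + 1), k)]
            for _ in range(n_clauses)]

def walksat(clauses, n_vars, flips=20000, p=0.5, seed=0):
    """Plain WalkSAT local search (a stand-in, not backtracking survey
    propagation)."""
    rng = random.Random(seed)
    assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
    sat = lambda cl: any(assign[abs(l)] == (l > 0) for l in cl)
    for _ in range(flips):
        unsat = [cl for cl in clauses if not sat(cl)]
        if not unsat:
            return assign                 # all clauses satisfied
        cl = rng.choice(unsat)
        if rng.random() < p:              # noisy move: flip a random literal
            var = abs(rng.choice(cl))
        else:                             # crude greedy move: flip the
            # variable occurring in the fewest currently satisfied clauses
            var = abs(min(cl, key=lambda l: sum(
                1 for c in clauses if sat(c) and abs(l) in map(abs, c))))
        assign[var] = not assign[var]
    return None                           # gave up

rng = random.Random(5)
formula = random_ksat(30, 75, rng=rng)    # ratio 2.5, far from the threshold
model = walksat(formula, 30)
assert model is not None
assert all(any(model[abs(l)] == (l > 0) for l in cl) for cl in formula)
```

Simple local search of this kind degrades rapidly near the threshold, which is exactly the regime where the paper shows backtracking survey propagation still finds (unfrozen) solutions.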
González-Recio, O; Jiménez-Montero, J A; Alenda, R
2013-01-01
In the next few years, with the advent of high-density single nucleotide polymorphism (SNP) arrays and genome sequencing, genomic evaluation methods will need to deal with a large number of genetic variants and an increasing sample size. The boosting algorithm is a machine-learning technique that may alleviate the drawbacks of dealing with such large data sets. This algorithm combines different predictors in a sequential manner with some shrinkage on them; each predictor is applied consecutively to the residuals from the committee formed by the previous ones to form a final prediction based on a subset of covariates. Here, a detailed description is provided and examples using a toy data set are included. A modification of the algorithm called "random boosting" was proposed to increase predictive ability and decrease computation time of genome-assisted evaluation in large data sets. Random boosting uses a random selection of markers to add a subsequent weak learner to the predictive model. These modifications were applied to a real data set composed of 1,797 bulls genotyped for 39,714 SNP. Deregressed proofs of 4 yield traits and 1 type trait from January 2009 routine evaluations were used as dependent variables. A 2-fold cross-validation scenario was implemented. Sires born before 2005 were used as a training sample (1,576 and 1,562 for production and type traits, respectively), whereas younger sires were used as a testing sample to evaluate predictive ability of the algorithm on yet-to-be-observed phenotypes. Comparison with the original algorithm was provided. The predictive ability of the algorithm was measured as Pearson correlations between observed and predicted responses. Further, estimated bias was computed as the average difference between observed and predicted phenotypes. The results showed that the modification of the original boosting algorithm could be run in 1% of the time used with the original algorithm and with negligible differences in accuracy
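The "random boosting" modification — each weak learner fitted on a random subset of markers, with shrinkage — can be sketched with one-split trees on toy data. The data, weak learner, and hyperparameters below are illustrative assumptions, not the study's genomic setup.

```python
import random, statistics

def stump(X, resid, cols):
    """Best single split over the allowed columns (squared-error criterion)."""
    best = None
    for j in cols:
        for t in sorted({x[j] for x in X}):
            L = [resid[i] for i in range(len(X)) if X[i][j] <= t]
            R = [resid[i] for i in range(len(X)) if X[i][j] > t]
            if not L or not R:
                continue
            ml, mr = statistics.fmean(L), statistics.fmean(R)
            err = sum((v - ml) ** 2 for v in L) + sum((v - mr) ** 2 for v in R)
            if best is None or err < best[0]:
                best = (err, j, t, ml, mr)
    return best[1:]

def random_boost(X, y, rounds=120, shrink=0.2, mtry=1, seed=0):
    """Boosting with shrinkage where each weak learner sees only a random
    subset of columns -- the 'random boosting' marker-sampling idea."""
    rng = random.Random(seed)
    base = statistics.fmean(y)
    ensemble = [base]
    resid = [v - base for v in y]
    for _ in range(rounds):
        cols = rng.sample(range(len(X[0])), mtry)  # random marker subset
        j, t, l, r = stump(X, resid, cols)
        ensemble.append((j, t, shrink * l, shrink * r))
        for i in range(len(X)):                    # update residuals
            resid[i] -= shrink * (l if X[i][j] <= t else r)
    return ensemble

def predict(ensemble, x):
    out = ensemble[0]
    for j, t, l, r in ensemble[1:]:
        out += l if x[j] <= t else r
    return out

# toy "genotype -> phenotype" data: 2 informative columns out of 5
rng = random.Random(9)
X = [[rng.random() for _ in range(5)] for _ in range(80)]
y = [2 * x[0] - x[1] + rng.gauss(0, 0.05) for x in X]
model = random_boost(X, y)
mse = statistics.fmean((predict(model, X[i]) - y[i]) ** 2 for i in range(len(X)))
assert mse < statistics.pvariance(y)  # fits better than the mean alone
```

Restricting each round to `mtry` columns is what cut the runtime to about 1% in the study: the stump search touches only the sampled markers instead of all 39,714 SNP.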
Randomized algorithms for high quality treatment planning in volumetric modulated arc therapy
Yang, Yu; Dong, Bin; Wen, Zaiwen
2017-02-01
In recent years, volumetric modulated arc therapy (VMAT) has become an increasingly important radiation technique widely used in clinical practice for cancer treatment. One of the key problems in VMAT is treatment plan optimization, which is complicated by the constraints imposed by the equipment involved. In this paper, we consider a model with four major constraints: the bound on the beam intensity, an upper bound on the rate of change of the beam intensity, the moving speed of the leaves of the multi-leaf collimator (MLC), and its directional convexity. We solve the model by a two-stage algorithm: performing minimization with respect to the aperture shapes and the beam intensities alternately. Specifically, the aperture shapes are obtained by a greedy algorithm whose performance is enhanced by random sampling in the leaf pairs with a decremental rate. The beam intensity is optimized using a gradient projection method with non-monotone line search. We further improve the proposed algorithm by an incremental random importance sampling of the voxels to reduce the computational cost of the energy functional. Numerical simulations on two clinical cancer data sets demonstrate that our method is highly competitive with state-of-the-art algorithms in terms of both computational time and quality of treatment planning.
Data-Driven Online and Real-Time Combinatorial Optimization
2013-10-30
Problem, the online Traveling Salesman Problem, and variations of the online Quota Hamiltonian Path Problem and the online Traveling ... has the lowest competitive ratio among all algorithms of this kind. Second, we consider the Online Traveling Salesman Problem, and consider randomized ... matroid secretary problem on a partition matroid. 6. Jaillet, P. and X. Lu. "Online Traveling Salesman Problems with Rejection Options", submitted
陈理; 王克峰; 徐霄羽; 姚平经
2004-01-01
In this contribution we present an online scheduling algorithm for a real-world multiproduct batch plant. The overall mixed integer nonlinear programming (MINLP) problem is hierarchically structured into a mixed integer linear programming (MILP) problem first and then a reduced-dimensional MINLP problem, which are optimized by mathematical programming (MP) and a genetic algorithm (GA), respectively. The basic idea relies on combining MP with GA to exploit their complementary capabilities. The key features of the hierarchical model are explained and illustrated with some real-world cases from multiproduct batch plants.
Online games: a novel approach to explore how partial information influences random search processes
Martinez-Garcia, Ricardo; Lopez, Cristobal
2016-01-01
Many natural processes rely on optimizing the success ratio of an underlying search process. We investigate how fluxes of information between individuals and their environment modify the statistical properties of human search strategies. Using an online game, searchers have to find a hidden target whose location is hinted at by a surrounding neighborhood. Searches are optimal for intermediate neighborhood sizes; smaller areas are harder to locate, while larger ones obscure the location of the target inside them. Although the neighborhood size that minimizes average search times depends on the neighborhood geometry, we develop a theoretical framework to predict this value in a general setup. Furthermore, a priori access to information about the landscape turns search strategies into self-adaptive processes in which the trajectory on the board evolves to show a well-defined characteristic jumping length. A family of random-walk models is developed to investigate the non-Markovian nature of the process.
Online games: a novel approach to explore how partial information influences human random searches
Martínez-García, Ricardo; Calabrese, Justin M.; López, Cristóbal
2017-01-01
Many natural processes rely on optimizing the success ratio of a search process. We use an experimental setup consisting of a simple online game in which players have to find a target hidden on a board, to investigate how the rounds are influenced by the detection of cues. We focus on the search duration and the statistics of the trajectories traced on the board. The experimental data are explained by a family of random-walk-based models and probabilistic analytical approximations. If no initial information is given to the players, the search is optimized for cues that cover an intermediate spatial scale. In addition, initial information about the extension of the cues results, in general, in faster searches. Finally, strategies used by informed players turn into non-stationary processes in which the length of each displacement evolves to show a well-defined characteristic scale that is not found in non-informed searches.
Ambient awareness: From random noise to digital closeness in online social networks.
Levordashka, Ana; Utz, Sonja
2016-07-01
Ambient awareness refers to the awareness social media users develop of their online network as a result of being constantly exposed to social information, such as microblogging updates. Although each individual bit of information can seem like random noise, its incessant reception can accumulate into a coherent representation of social others. Despite its growing popularity and important implications for social media research, ambient awareness on public social media has not been studied empirically. We provide evidence for the occurrence of ambient awareness and examine key questions related to its content and functions. A diverse sample of participants reported experiencing awareness, both as a general feeling towards their network as a whole and as knowledge of individual members of the network whom they had not met in real life. Our results indicate that ambient awareness can develop peripherally, from fragmented information and in the relative absence of extensive one-to-one communication. We report the effects of demographics, media use, and network variables, and discuss the implications of ambient awareness for relational and informational processes online.
Batra, Priya; Mangione, Carol M; Cheng, Eric; Steers, W Neil; Nguyen, Tina A; Bell, Douglas; Kuo, Alice A; Gregory, Kimberly D
2017-01-01
To evaluate whether exposure to MyFamilyPlan-a web-based preconception health education module-changes the proportion of women discussing reproductive health with providers at well-woman visits. Cluster randomized controlled trial. One hundred thirty participants per arm distributed among 34 clusters (physicians) required to detect a 20% change in the primary outcome. Urban academic medical center (California). Eligible women were 18 to 45 years old, were English speaking, were nonpregnant, were able to access the Internet, and had an upcoming well-woman visit. E-mail and phone recruitment between September 2015 and May 2016; 292 enrollees randomized. Intervention participants completed the MyFamilyPlan module online 7 to 10 days before a scheduled well-woman visit; control participants reviewed standard online preconception health education materials. The primary outcome was self-reported discussion of reproductive health with the physician at the well-woman visit. Self-reported secondary outcomes were folic acid use, contraceptive method initiation/change, and self-efficacy score. Multilevel multivariate logistic regression. After adjusting for covariates and cluster, exposure to MyFamilyPlan was the only variable significantly associated with an increase in the proportion of women discussing reproductive health with providers (odds ratio: 1.97, 95% confidence interval: 1.22-3.19). Prespecified secondary outcomes were unaffected. MyFamilyPlan exposure was associated with a significant increase in the proportion of women who reported discussing reproductive health with providers and may promote preconception health awareness; more work is needed to affect associated behaviors.
Flow, transport and diffusion in random geometries I: a MLMC algorithm
Canuto, Claudio
2015-01-07
Multilevel Monte Carlo (MLMC) is an efficient and flexible solution for the propagation of uncertainties in complex models, where an explicit parametrization of the input randomness is not available or too expensive. We propose a general-purpose algorithm and computational code for the solution of Partial Differential Equations (PDEs) on random geometry and with random parameters. We make use of the key idea of MLMC, based on different discretization levels, extending it to a more general context, making use of a hierarchy of physical resolution scales, solvers, models and other numerical/geometrical discretization parameters. Modifications of the classical MLMC estimators are proposed to further reduce variance in cases where analytical convergence rates and asymptotic regimes are not available. Spheres, ellipsoids and general convex-shaped grains are placed randomly in the domain with different placing/packing algorithms, and the effective properties of the heterogeneous medium are computed. These are, for example, effective diffusivities, conductivities, and reaction rates. The implementation of the Monte Carlo estimators, the statistical samples and each single solver is done efficiently in parallel.
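The telescoping MLMC estimator at the heart of the approach can be sketched on a much simpler model problem — the expectation of geometric Brownian motion under Euler discretization — rather than a PDE on random geometry; all parameters below are illustrative assumptions.

```python
import random, math

def payoff_pair(level, rng, S0=1.0, r=0.05, sigma=0.2, T=1.0):
    """Euler-discretized GBM: return S_T on the fine grid (2**level steps)
    and on the coarse grid (half as many), driven by the SAME noise path."""
    nf = 2 ** level
    dt = T / nf
    fine = coarse = S0
    dw_coarse = 0.0
    for n in range(nf):
        dw = rng.gauss(0.0, math.sqrt(dt))
        fine += r * fine * dt + sigma * fine * dw
        dw_coarse += dw
        if n % 2 == 1 and level > 0:   # one coarse step per two fine steps
            coarse += r * coarse * 2 * dt + sigma * coarse * dw_coarse
            dw_coarse = 0.0
    return fine, coarse

def mlmc(levels, samples, seed=0):
    """Telescoping estimator: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}].
    Coarser corrections get more samples, finer ones fewer."""
    rng = random.Random(seed)
    total = 0.0
    for level, n in zip(range(levels + 1), samples):
        acc = 0.0
        for _ in range(n):
            fine, coarse = payoff_pair(level, rng)
            acc += fine if level == 0 else fine - coarse
        total += acc / n
    return total

# E[S_T] = S0 * exp(r*T) ~ 1.0513 for the parameters above
est = mlmc(levels=4, samples=[20000, 10000, 5000, 2500, 1250])
assert abs(est - math.exp(0.05)) < 0.02
```

Because the fine and coarse paths share one noise realization, the correction terms have small variance and need few samples, which is the source of MLMC's efficiency; the paper generalizes the "level" hierarchy to resolution scales, solvers, and geometric discretizations.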
Halko, Nathan; Tropp, Joel A
2009-01-01
Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. In particular, these techniques offer a route toward principal component analysis (PCA) for petascale data. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed - either explicitly or implicitly - to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In ...
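The two-stage structure described above (randomized range finding, then deterministic post-processing) can be sketched in a few lines. This is a generic proto-algorithm in the spirit of the survey, with plain Gram-Schmidt standing in for a proper QR factorization; the rank-2 test matrix is an illustrative assumption.

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def gram_schmidt(Y):
    """Orthonormalize the columns of Y (modified Gram-Schmidt),
    dropping numerically dependent columns."""
    Q = []
    for v in transpose(Y):
        for q in Q:
            dot = sum(a * b for a, b in zip(q, v))
            v = [a - dot * b for a, b in zip(v, q)]
        norm = sum(a * a for a in v) ** 0.5
        if norm > 1e-9:
            Q.append([a / norm for a in v])
    return transpose(Q)

def randomized_lowrank(A, k, oversample=5, seed=0):
    """Stage A: sample the range of A with a Gaussian test matrix and
    orthonormalize; stage B: project A onto that subspace."""
    rng = random.Random(seed)
    n = len(A[0])
    omega = [[rng.gauss(0, 1) for _ in range(k + oversample)] for _ in range(n)]
    Q = gram_schmidt(matmul(A, omega))   # orthonormal basis for the range
    B = matmul(transpose(Q), A)          # small reduced matrix
    return Q, B                          # A is approximated by Q @ B

# a 6x8 matrix of exact rank 2: the approximation should be essentially exact
u = [1, 2, 3, 4, 5, 6]
v = [1, 0, -1, 2, 0, 1, 3, -2]
w = [2, -1, 1, 0, 3, 1]
z = [0, 1, 1, -1, 2, 0, 1, 1]
A = [[u[i] * v[j] + w[i] * z[j] for j in range(8)] for i in range(6)]
Q, B = randomized_lowrank(A, k=2)
approx = matmul(Q, B)
err = max(abs(A[i][j] - approx[i][j]) for i in range(6) for j in range(8))
assert err < 1e-8
```

Oversampling by a few extra random columns makes the captured subspace reliable; production implementations would use a numerically stable QR and, for PCA, an SVD of the small matrix B.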
Lancee, Jaap; van Straten, Annemieke; Morina, Nexhmedin; Kaldo, Viktor; Kamphuis, Jan H
2016-01-01
To compare the efficacy of guided online and individual face-to-face cognitive behavioral treatment for insomnia (CBT-I) to a wait-list condition. A randomized controlled trial comparing three conditions: guided online; face-to-face; wait-list. Posttest measurements were administered to all conditions, along with 3- and 6-mo follow-up assessments to the online and face-to-face conditions. Ninety media-recruited participants meeting the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) criteria for insomnia were randomly allocated to either guided online CBT-I (n = 30), individual face-to-face CBT-I (n = 30), or wait-list (n = 30). At post-assessment, the online (Cohen d = 1.2) and face-to-face (Cohen d = 2.3) intervention groups showed significantly larger treatment effects than the wait-list group on insomnia severity (insomnia severity index). Large treatment effects were also found for the sleep diary estimates (except for total sleep time), and anxiety and depression measures (for depression only in the face-to-face condition). Face-to-face treatment yielded a statistically larger treatment effect (Cohen d = 0.9) on insomnia severity than the online condition at all time points. In addition, a moderate differential effect size favoring face-to-face treatment emerged at the 3- and 6-mo follow-up on all sleep diary estimates. Face-to-face treatment further outperformed online treatment on depression and anxiety outcomes. These data show superior performance of face-to-face treatment relative to online treatment. Yet, our results also suggest that online treatment may offer a potentially cost-effective alternative to and complement face-to-face treatment. Clinicaltrials.gov, NCT01955850. A commentary on this article appears in this issue on page 13. © 2016 Associated Professional Sleep Societies, LLC.
Online gambling's moderators: how effective? Study protocol for a randomized controlled trial.
Caillon, Julie; Grall-Bronnec, Marie; Hardouin, Jean-Benoit; Venisse, Jean-Luc; Challet-Bouju, Gaelle
2015-05-30
Online gambling was legalized in France in 2010. Licenses are issued to gambling operators who demonstrate their ability to respect the legal framework (security, taxation, consumer protection, etc.). The preventive measures to protect vulnerable gamblers include an obligation to provide online gambling moderators. These moderators should allow gamblers to limit their bets, exclude themselves from the website for 7 days, and consult the balance of their gambler account at any time. However, there are only a few published reports of empirical research investigating the effectiveness of the Internet-based protective measures implemented by French law. Moreover, no empirical research has yet studied the impact of bonuses on gambling behaviors. This research is an experimental randomized controlled trial targeting risk prevention. It is divided into four sub-studies depending on the moderator studied: limiting bonuses, self-exclusion, self-limitation, and information. The study sample consists of 485 volunteers. For each experimental condition and the control groups, the sample is composed of gamblers recruited equally from gamblers having preferences for each of the three major forms of games (lottery and scratch tickets, sports and horserace betting, and poker). For each form of gambling, the gamblers are recruited so as to obtain as many problem gamblers as non-problem gamblers. According to the randomization, the experimental session begins. The experimental session is a gambling situation on a computer in our research center. The gambler is invited to play on his favorite gambling site as usual, with his own gambler account and his own money. Data collected comprise sociodemographic characteristics, gambling habits, an interview about enjoyment and feeling out of control during the gambling session, moderator impact on gambling practice, a statement of gambling parameters, and questionnaires (BMIS, GRCS, CPGI, GACS). Moderator efficiency is assessed based
Impact of online patient reminders to improve asthma care: A randomized controlled trial
Pool, Andrew C.; Kraschnewski, Jennifer L.; Poger, Jennifer M.; Smyth, Joshua; Stuckey, Heather L.; Craig, Timothy J.; Lehman, Erik B.; Yang, Chengwu; Sciamanna, Christopher N.
2017-01-01
Importance Asthma is one of the most burdensome chronic illnesses in the US. Despite widespread dissemination of evidence-based guidelines, more than half of the adults with asthma have uncontrolled symptoms. Objective To examine the efficacy of an online tool designed to improve asthma control. Design 12-month single blind randomized controlled trial of the online tool (Intervention condition, IC) versus an active control tool (CC). Setting Patients enrolled in an insurance plan. Participants Participants were 408 adults (21–60 years of age) with persistent asthma. Intervention At least once each month and before provider visits, participants in the IC answered questions online about their asthma symptoms, asthma medications and asthma care received from providers, such as an asthma management plan. The tool then provided tailored feedback to remind patients 1) to ask health care providers specific questions that may improve asthma control (e.g., additional controller medications) and 2) to consistently perform specific self-care behaviors (e.g., proper inhaler technique). Participants in the CC received similar questions and feedback, yet focused instead on preventive services unrelated to asthma control (e.g., cancer screening). Main outcome measures The main outcome measure was asthma control, as assessed by the 5-question Asthma Control Test (ACT). Secondary outcomes included quality of life, medication use and healthcare utilization (e.g., emergency department visits). Results After 12 months, 323 participants completed follow-up measures (79.2%). Participants in the IC reported a greater mean improvement in the ACT score than participants in the CC (2.3 vs. 1.2; p = 0.02) and 9 of 11 individual asthma control survey items showed non-significant improvements favoring the IC. No differences were observed in medication adherence, number of asthma controller medications or health care utilization. Conclusion and relevance Simple and brief online patient
Randomized competitive strategy for online leasing of depreciable equipment
张永; 张卫国; 徐维军
2011-01-01
It is an active research topic to study online leasing problems using online algorithms and competitive analysis. Building on the online leasing of general equipment, the online leasing of depreciable equipment is discussed. Since randomization can improve competitive performance, randomized strategies against an oblivious adversary are proposed for online leasing of depreciable equipment, both with and without consideration of the interest rate, and the optimal competitive ratio is obtained for each case based on the online-offline cost-ratio matrix. The results show that introducing the depreciation factor improves competitive performance and makes the model more practical for large-scale equipment investment problems, providing investors with a better theoretical basis for decision making. In addition, taking the interest rate into account slightly lowers the competitive performance but makes the model more realistic; that is, investors will adopt a more prudent investment strategy when the interest rate is taken into consideration.
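The depreciable-equipment setting extends the classic ski-rental problem. As an illustrative sketch of how a randomized strategy beats any deterministic one against an oblivious adversary (this is the textbook ski-rental buy-day distribution, not the paper's depreciation-aware strategy), the following computes the classic distribution whose worst-case expected competitive ratio approaches e/(e-1) ≈ 1.582, and checks that ratio numerically:

```python
def buy_day_distribution(B):
    """Probability of buying at the start of day i (1..B): the classic
    randomized ski-rental distribution p_i proportional to (1 - 1/B)**(B - i)."""
    w = [(1 - 1 / B) ** (B - i) for i in range(1, B + 1)]
    s = sum(w)
    return [x / s for x in w]

def expected_ratio(B):
    """Worst-case ratio of expected online cost to offline optimum,
    maximized over all adversary stopping days t (rent = 1/day, buy = B)."""
    p = buy_day_distribution(B)
    worst = 0.0
    for t in range(1, 3 * B):
        cost = 0.0
        for i in range(1, B + 1):
            if i <= t:          # bought on day i: paid i-1 rents plus B
                cost += p[i - 1] * ((i - 1) + B)
            else:               # season ended before we bought: rented t days
                cost += p[i - 1] * t
        worst = max(worst, cost / min(t, B))
    return worst

ratio = expected_ratio(100)   # close to e/(e-1) for large B
```

For B = 2 the ratio is exactly 4/3, and it increases toward e/(e-1) as B grows; the distribution equalizes the ratio across all adversary stopping times.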
On efficient randomized algorithms for finding the PageRank vector
Gasnikov, A. V.; Dmitriev, D. Yu.
2015-03-01
Two randomized methods are considered for finding the PageRank vector; in other words, the solution of the system p^T = p^T P with a stochastic n × n matrix P, where n ~ 10^7-10^9, is sought (in the class of probability distributions) with accuracy ɛ: ɛ ≫ n^(-1). Thus, the possibility of brute-force multiplication of P by the column is ruled out in the case of dense objects. The first method is based on the idea of Markov chain Monte Carlo algorithms. This approach is efficient when the iterative process p_{t+1}^T = p_t^T P quickly reaches a steady state. Additionally, it takes into account another specific feature of P, namely, the nonzero off-diagonal elements of P are equal in rows (this property is used to organize a random walk over the graph with the matrix P). Based on modern concentration-of-measure inequalities, new bounds for the running time of this method are presented that take into account the specific features of P. In the second method, the search for a ranking vector is reduced to finding the equilibrium in an antagonistic matrix game, where S_n(1) is the unit simplex in ℝ^n and I is the identity matrix. The arising problem is solved by applying a slightly modified Grigoriadis-Khachiyan algorithm (1995). This technique, like the Nazin-Polyak method (2009), is a randomized version of Nemirovski's mirror descent method. The difference is that randomization in the Grigoriadis-Khachiyan algorithm is used when the gradient is projected onto the simplex rather than when the stochastic gradient is computed. For sparse matrices P, the method proposed yields noticeably better results.
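As a toy illustration of the first (Markov chain Monte Carlo) idea, not the authors' implementation, the sketch below estimates a PageRank vector on a small made-up graph from a single long random walk with teleportation, and compares it against plain power iteration; the damping factor 0.85 and the graph itself are assumptions for the example:

```python
import random

def power_iteration(links, d=0.85, iters=100):
    """Reference PageRank by dense power iteration (fine for tiny graphs).
    Assumes every node has at least one out-link."""
    n = len(links)
    p = [1.0 / n] * n
    for _ in range(iters):
        new = [(1 - d) / n] * n
        for u, outs in enumerate(links):
            for v in outs:
                new[v] += d * p[u] / len(outs)
        p = new
    return p

def random_walk_estimate(links, d=0.85, steps=200_000, seed=0):
    """Estimate the stationary distribution with one long walk that follows
    a random out-link with probability d and teleports uniformly otherwise."""
    rng = random.Random(seed)
    n = len(links)
    visits = [0] * n
    u = 0
    for _ in range(steps):
        if links[u] and rng.random() < d:
            u = rng.choice(links[u])
        else:
            u = rng.randrange(n)
        visits[u] += 1
    return [v / steps for v in visits]

links = [[2], [2], [0, 1]]   # toy 3-page web graph: pages 0 and 1 link to 2
exact = power_iteration(links)
approx = random_walk_estimate(links)
```

With 200,000 steps the empirical visit frequencies agree with power iteration to a few thousandths in L1 norm; the walk needs no explicit matrix, which is the point of the MCMC approach for huge n.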
What Works Clearinghouse, 2014
2014-01-01
The 2013 study, "Interactive Learning Online at Public Universities: Evidence From a Six-Campus Randomized Trial," examined the impact of interactive learning online (ILO) on the pass rates of 605 students enrolled in introductory statistics courses at six public universities. ILO is a form of online course instruction in which…
Roddy, McKenzie K; Nowlan, Kathryn M; Doss, Brian D
2016-11-17
The negative impacts of relationship distress on the couple, the family, and the individual are well known. However, couples are often unable to access effective treatments to combat these effects, including many couples who might be at highest risk for relationship distress. Online self-help interventions decrease the barriers to treatment and provide couples with high-quality, research-based programs they can do on their own. Using a combined multiple baseline and randomized design, the present study investigated the effectiveness of the Brief OurRelationship.com (Brief-OR) program with and without staff support in improving relationship distress and individual functioning. Results indicated the program produced significant gains in several areas of relationship functioning; however, these gains were smaller in magnitude than those observed in Full-OR. Furthermore, effects of Brief-OR were not sustained over follow-up. Comparisons between couples randomized to Brief-OR with and without contact with a staff coach indicated that coach contact significantly reduced program noncompletion and improved program effects. Limitations and future directions are discussed. © 2016 Family Process Institute.
Sun, Xu; Yang, Lina; Gao, Lianru; Zhang, Bing; Li, Shanshan; Li, Jun
2015-01-01
Center-oriented hyperspectral image clustering methods have been widely applied to hyperspectral remote sensing image processing; however, the drawbacks are obvious, including over-simple computational models and underutilized spatial information. In recent years, some studies have tried to improve this situation. We introduce the artificial bee colony (ABC) and Markov random field (MRF) algorithms and propose an ABC-MRF-cluster model to solve the problems mentioned above. In this model, a typical ABC algorithm framework is adopted, in which cluster centers and the iterated conditional modes (ICM) algorithm's results are treated as feasible solutions and objective functions, respectively, and MRF is modified to handle the clustering problem. Finally, four datasets and two indices are used to show that the ABC-cluster and ABC-MRF-cluster methods obtain better accuracy than conventional methods. Specifically, the ABC-cluster method is superior in spectral discrimination power, whereas the ABC-MRF-cluster method provides better results under the adjusted Rand index. In experiments on simulated images with different signal-to-noise ratios, ABC-cluster and ABC-MRF-cluster showed good stability.
VES/TEM 1D joint inversion by using Controlled Random Search (CRS) algorithm
Bortolozo, Cassiano Antonio; Porsani, Jorge Luís; Santos, Fernando Acácio Monteiro dos; Almeida, Emerson Rodrigo
2015-01-01
Electrical (DC) and Transient Electromagnetic (TEM) soundings are used in a great number of environmental, hydrological, and mining exploration studies. Usually, data interpretation is accomplished with individual 1D models, often resulting in ambiguous models. This can be explained by the way the two methodologies sample the medium beneath the surface. Vertical Electrical Sounding (VES) is good at marking resistive structures, while Transient Electromagnetic sounding (TEM) is very sensitive to conductive structures. Another difference is that VES better detects shallow structures, while TEM soundings can reach deeper layers. A Matlab program for 1D joint inversion of VES and TEM soundings was developed to exploit the best of both methods. The program uses the CRS (Controlled Random Search) algorithm for both single and 1D joint inversions. Inversion programs usually use Marquardt-type algorithms, but for electrical and electromagnetic methods these algorithms may find a local minimum or fail to converge. The algorithm was first tested with synthetic data and then used to invert experimental data from two places in the Paraná sedimentary basin (Bebedouro and Pirassununga cities), both located in São Paulo State, Brazil. The geoelectric model obtained from 1D joint inversion of VES and TEM data is similar to the real geological conditions, and ambiguities were minimized. Results with synthetic and real data show that 1D VES/TEM joint inversion better recovers the simulated models and shows great potential in geological studies, especially hydrogeological ones.
Diffusion Limits of the Random Walk Metropolis Algorithm in High Dimensions
Mattingly, Jonathan C; Stuart, Andrew M
2010-01-01
Diffusion limits of MCMC methods in high dimensions provide a useful theoretical tool for studying computational complexity. In particular they lead directly to precise estimates of the number of steps required to explore the target measure, in stationarity, as a function of the dimension of the state space. However, to date such results have only been proved for target measures with a product structure, severely limiting their applicability. The purpose of this paper is to study diffusion limits for a class of naturally occurring high dimensional measures, found from the approximation of measures on a Hilbert space which are absolutely continuous with respect to a Gaussian reference measure. The diffusion limit of a random walk Metropolis algorithm to an infinite dimensional Hilbert space valued SDE (or SPDE) is proved, facilitating understanding of the computational complexity of the algorithm.
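A minimal finite-dimensional sketch of the algorithm under study: random walk Metropolis with Gaussian proposals on a 1D standard normal target. The proposal scale and chain length are arbitrary choices for illustration, not values from the paper:

```python
import math
import random

def rwm_samples(logp, x0=0.0, scale=1.0, n=50_000, seed=1):
    """Random walk Metropolis: propose x' = x + scale * N(0, 1) and accept
    with probability min(1, p(x') / p(x)); otherwise keep the current state."""
    rng = random.Random(seed)
    x, out = x0, []
    for _ in range(n):
        prop = x + scale * rng.gauss(0.0, 1.0)
        # accept/reject using log-densities; min(0, .) avoids overflow in exp
        if rng.random() < math.exp(min(0.0, logp(prop) - logp(x))):
            x = prop
        out.append(x)
    return out

# target: standard normal, log-density up to an additive constant
logp = lambda x: -0.5 * x * x
xs = rwm_samples(logp)
mean = sum(xs) / len(xs)
var = sum((v - mean) ** 2 for v in xs) / len(xs)
```

The diffusion-limit results concern how the optimal `scale` must shrink as the dimension grows; in 1D any moderate scale mixes quickly and the empirical mean and variance approach 0 and 1.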
Sadeghkhani Iman
2012-01-01
This paper introduces an approach to analyse transient overvoltages during capacitor bank switching based on artificial neural networks (ANN). Three learning algorithms, delta-bar-delta (DBD), extended delta-bar-delta (EDBD), and directed random search (DRS), were used to train the ANNs. The ANN training is based on equivalent parameters of the network; therefore, a trained ANN is applicable to every studied system. The developed ANN is trained with extensive simulated results and tested for typical cases. The new algorithms are presented and demonstrated for a partial 39-bus New England test system. The simulated results show that the proposed technique can accurately estimate the peak values of switching overvoltages.
Lu, Jianfeng
2016-01-01
The particle-particle random phase approximation (pp-RPA) has been shown to be capable of describing double, Rydberg, and charge transfer excitations, for which the conventional time-dependent density functional theory (TDDFT) might not be suitable. It is thus desirable to reduce the computational cost of pp-RPA so that it can be efficiently applied to larger molecules and even solids. This paper introduces an $O(N^3)$ algorithm, where $N$ is the number of orbitals, based on an interpolative separable density fitting technique and the Jacobi-Davidson eigensolver to calculate a few low-lying excitations in the pp-RPA framework. The size of the pp-RPA matrix can also be reduced by keeping only a small portion of orbitals with orbital energy close to the Fermi energy. This reduced system leads to a smaller prefactor of the cubic scaling algorithm, while keeping the accuracy for the low-lying excitation energies.
Optimisation of an Active Suspension Force Controller using Genetic Algorithm for Random Input
M.K. Hada
2007-09-01
A novel control scheme for the active suspension in a 4-DOF half-car model is presented. A force cancellation control scheme is used to isolate the sprung and the unsprung masses. Skyhook damper and virtual damper concepts are employed to stabilise the sprung and unsprung masses, respectively. Road-following springs are applied for the sprung mass to follow the trend of the road surface condition and to maintain the suspension stroke within a reasonable range. For efficiency, a genetic algorithm is employed to search for parameters such as the damping ratio and spring constant to achieve an optimum trade-off among ride comfort, handling quality, and suspension stroke simultaneously for random input. Computer simulations are performed using MATLAB software to verify the proposed control scheme and the effectiveness of the applied genetic algorithm.
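A hedged sketch of the kind of GA loop the paper describes. The quadratic cost below is a made-up stand-in for the ride-comfort/handling objective, and the bounds on the damping ratio and spring constant are invented for illustration:

```python
import random

def ga_minimize(cost, bounds, pop=40, gens=60, seed=2):
    """Minimal real-coded genetic algorithm: tournament selection,
    arithmetic (blend) crossover, Gaussian mutation, and elitism."""
    rng = random.Random(seed)
    dim = len(bounds)
    clip = lambda v, b: min(max(v, b[0]), b[1])
    P = [[rng.uniform(*b) for b in bounds] for _ in range(pop)]
    best = min(P, key=cost)
    for _ in range(gens):
        nxt = [best[:]]                             # elitism: keep the best
        while len(nxt) < pop:
            a = min(rng.sample(P, 3), key=cost)     # tournament of 3
            b = min(rng.sample(P, 3), key=cost)
            w = rng.random()
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]
            if rng.random() < 0.3:                  # mutate one coordinate
                i = rng.randrange(dim)
                sigma = 0.1 * (bounds[i][1] - bounds[i][0])
                child[i] = clip(child[i] + rng.gauss(0.0, sigma), bounds[i])
            nxt.append(child)
        P = nxt
        best = min(P, key=cost)
    return best

# stand-in objective with optimum at damping ratio 0.7, spring constant 20.0
cost = lambda v: (v[0] - 0.7) ** 2 + ((v[1] - 20.0) / 10.0) ** 2
sol = ga_minimize(cost, [(0.0, 2.0), (1.0, 100.0)])
```

The real application would replace `cost` with a simulation of the half-car model evaluating ride comfort, handling, and stroke for the random road input.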
A Bio-Inspired Robust Adaptive Random Search Algorithm for Distributed Beamforming
Tseng, Chia-Shiang; Lin, Che
2010-01-01
A bio-inspired robust adaptive random search algorithm (BioRARSA), designed for distributed beamforming in sensor and relay networks, is proposed in this work. It has been shown via a systematic framework that BioRARSA converges in probability and that its convergence time scales linearly with the number of distributed transmitters. More importantly, extensive simulation results demonstrate that the proposed BioRARSA outperforms existing adaptive distributed beamforming schemes by as much as 29.8% on average. This increase in performance results from the fact that BioRARSA can adaptively adjust its sampling stepsize via the "swim" behavior inspired by the bacterial foraging mechanism. Hence, the convergence time of BioRARSA is insensitive to the initial sampling stepsize of the algorithm, which makes it robust against the dynamic nature of distributed wireless networks.
Rajput, Sudheesh K.; Nishchal, Naveen K.
2017-04-01
We propose a novel security scheme based on the double random phase fractional domain encoding (DRPE) and modified Gerchberg-Saxton (G-S) phase retrieval algorithm for securing two images simultaneously. Any one of the images to be encrypted is converted into a phase-only image using modified G-S algorithm and this function is used as a key for encrypting another image. The original images are retrieved employing the concept of known-plaintext attack and following the DRPE decryption steps with all correct keys. The proposed scheme is also used for encryption of two color images with the help of convolution theorem and phase-truncated fractional Fourier transform. With some modification, the scheme is extended for simultaneous encryption of gray-scale and color images. As a proof-of-concept, simulation results have been presented for securing two gray-scale images, two color images, and simultaneous gray-scale and color images.
Managing Emergencies Optimally Using a Random Neural Network-Based Algorithm
Qing Han
2013-10-01
Emergency rescues require that first responders provide support to evacuate injured and other civilians who are obstructed by hazards. In this case, emergency personnel can act strategically in order to rescue as many people as possible, efficiently and quickly. The paper studies the effectiveness of a random neural network (RNN)-based task assignment algorithm that optimally matches emergency personnel and injured civilians, so that the emergency personnel can help trapped people move towards evacuation exits in real time. The evaluations are run on a decision support evacuation system using the Distributed Building Evacuation Simulator (DBES) multi-agent platform in various emergency scenarios. The simulation results indicate that the RNN-based task assignment algorithm provides a near-optimal solution to resource allocation problems, which avoids resource wastage and improves the efficiency of the emergency rescue process.
Hardy, Joseph L.; Nelson, Rolf A.; Thomason, Moriah E.; Sternberg, Daniel A.; Katovich, Kiefer; Farzin, Faraz; Scanlon, Michael
2015-01-01
Background A variety of studies have demonstrated gains in cognitive ability following cognitive training interventions. However, other studies have not shown such gains, and questions remain regarding the efficacy of specific cognitive training interventions. Cognitive training research often involves programs made up of just one or a few exercises, targeting limited and specific cognitive endpoints. In addition, cognitive training studies typically involve small samples that may be insufficient for reliable measurement of change. Other studies have utilized training periods that were too short to generate reliable gains in cognitive performance. Methods The present study evaluated an online cognitive training program comprised of 49 exercises targeting a variety of cognitive capacities. The cognitive training program was compared to an active control condition in which participants completed crossword puzzles. All participants were recruited, trained, and tested online (N = 4,715 fully evaluable participants). Participants in both groups were instructed to complete one approximately 15-minute session at least 5 days per week for 10 weeks. Results Participants randomly assigned to the treatment group improved significantly more on the primary outcome measure, an aggregate measure of neuropsychological performance, than did the active control group (Cohen’s d effect size = 0.255; 95% confidence interval = [0.198, 0.312]). Treatment participants showed greater improvements than controls on speed of processing, short-term memory, working memory, problem solving, and fluid reasoning assessments. Participants in the treatment group also showed greater improvements on self-reported measures of cognitive functioning, particularly on those items related to concentration compared to the control group (Cohen’s d = 0.249; 95% confidence interval = [0.191, 0.306]). Conclusion Taken together, these results indicate that a varied training program composed of a number of
Аndriy V. Sadchenko
2015-12-01
Digital television systems must ensure that all digital signal processing operations are performed simultaneously and consistently. Frame synchronization is dictated by the need to match the phases of transmitter and receiver so that the start of a frame can be identified. Long binary sequences with good aperiodic autocorrelation functions are often used as frame synchronization signals. Aim: This work is dedicated to developing an algorithm for synthesizing synchronization sequences of arbitrary length. Materials and Methods: The paper provides a comparative analysis of the known sequences currently usable for synchronization, revealing their advantages and disadvantages. It proposes an algorithm for synthesizing binary synchronization sequences of arbitrary length with good autocorrelation properties, based on a noise generator with a uniform probability distribution. A semiconductor "white noise" generator is proposed as the source material for synthesizing binary sequences with the desired properties. Results: A statistical analysis of the initial "white noise" realizations and of the synthesized sequences for frame synchronization of digital television is conducted. A comparative analysis of the synthesized sequences against known ones shows the benefits of the obtained sequences, and the performed simulations confirm these results. Conclusions: A search algorithm for binary synchronization sequences with the desired autocorrelation properties is obtained. Under this algorithm, sequences of any length can be produced, without length limitations. The resulting sync sequences can be used for frame synchronization in modern digital communication systems, increasing their efficiency and noise immunity.
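The synthesis idea, drawing candidate binary sequences from a noise source and keeping the one with the best aperiodic autocorrelation, can be sketched as follows. The peak-sidelobe-level merit function, the sequence length, and the trial count are assumptions for illustration, not the paper's exact procedure:

```python
import random

def aperiodic_acf(seq, lag):
    """Aperiodic autocorrelation of a +/-1 sequence at a given lag."""
    return sum(seq[i] * seq[i + lag] for i in range(len(seq) - lag))

def psl(seq):
    """Peak sidelobe level: largest |ACF| over all nonzero lags."""
    return max(abs(aperiodic_acf(seq, k)) for k in range(1, len(seq)))

def synthesize(n=31, trials=2000, seed=3):
    """Keep the random +/-1 sequence with the lowest PSL; the random draws
    stand in for the paper's white-noise generator."""
    rng = random.Random(seed)
    best, best_psl = None, n + 1
    for _ in range(trials):
        cand = [rng.choice((-1, 1)) for _ in range(n)]
        p = psl(cand)
        if p < best_psl:
            best, best_psl = cand, p
    return best, best_psl

seq, level = synthesize()
```

Because nothing constrains `n`, the same loop produces sequences of any length, which is the property the abstract emphasizes; the zero-lag ACF always equals the sequence length, so a low PSL means a sharply peaked correlator output at frame start.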
K. Lenin
2013-03-01
Reactive power optimization is a complex combinatorial optimization problem involving a non-linear function with multiple local minima and non-linear, discontinuous constraints. This paper presents Attractive and Repulsive Particle Swarm Optimization (ARPSO) and the Random Virus Algorithm (RVA) to overcome the problem of premature convergence. RVA and ARPSO are applied to the reactive power optimization problem and evaluated on the standard IEEE 30-bus system. The results show that RVA prevents premature convergence to a high degree while still keeping rapid convergence. It gives the best solution when compared to Attractive and Repulsive Particle Swarm Optimization (ARPSO) and Particle Swarm Optimization (PSO).
A New Approach to Online Scheduling: Approximating the Optimal Competitive Ratio
Günther, Elisabeth; Megow, Nicole; Wiese, Andreas
2012-01-01
We propose a new approach to competitive analysis in online scheduling by introducing the novel concept of online approximation schemes. Such a scheme algorithmically constructs an online algorithm with a competitive ratio arbitrarily close to the best possible competitive ratio for any online algorithm. We study the problem of scheduling jobs online to minimize the weighted sum of completion times on parallel, related, and unrelated machines, and we derive both deterministic and randomized algorithms which are almost best possible among all online algorithms of the respective settings. We also generalize our techniques to arbitrary monomial cost functions and apply them to the makespan objective. Our method relies on an abstract characterization of online algorithms combined with various simplifications and transformations. We also contribute algorithmic means to compute the actual value of the best possible competitive ratio up to an arbitrary accuracy. This strongly contrasts all previous manually obtained…
A novel chaotic block image encryption algorithm based on dynamic random growth technique
Wang, Xingyuan; Liu, Lintao; Zhang, Yingqian
2015-03-01
This paper proposes a new block image encryption scheme based on hybrid chaotic maps and a dynamic random growth technique. Since the cat map is periodic and can easily be cracked by a chosen-plaintext attack, we use the cat map in another, more secure way, which completely eliminates the cyclical phenomenon and resists chosen-plaintext attack. In the diffusion process, an intermediate parameter is calculated from the image block. The intermediate parameter is used as the initial parameter of a chaotic map to generate a random data stream. In this way, the generated key streams depend on the plaintext image, which enables resistance to the chosen-plaintext attack. The experimental results show that the proposed encryption algorithm is secure enough to be used in image transmission systems.
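A miniature of the diffusion idea, assuming a logistic map for the chaotic keystream and chaining the per-block seed through the previous ciphertext block so the receiver can reproduce it. The map parameter r = 3.99, the block size, and the seed mixing are illustrative choices, not the paper's scheme:

```python
def _keystream(seed, length, r=3.99):
    """Logistic-map keystream bytes from a seed in (0, 1)."""
    x, out = seed, []
    for _ in range(length):
        x = r * x * (1 - x)            # chaotic iteration stays in (0, 1)
        out.append(int(x * 256) % 256)
    return out

def _seed(block, key):
    """Per-block seed in (0, 1) mixed from the key and the previous block."""
    return ((sum(block) + key) % 9973 + 1) / 9975.0

def encrypt(data, key=1234, block=16):
    prev, out = [0] * block, []
    for i in range(0, len(data), block):
        chunk = list(data[i:i + block])
        ks = _keystream(_seed(prev, key), len(chunk))
        cipher = [c ^ k for c, k in zip(chunk, ks)]
        out.extend(cipher)
        prev = cipher                  # chain: next seed depends on this block
    return bytes(out)

def decrypt(data, key=1234, block=16):
    prev, out = [0] * block, []
    for i in range(0, len(data), block):
        chunk = list(data[i:i + block])
        ks = _keystream(_seed(prev, key), len(chunk))
        out.extend(c ^ k for c, k in zip(chunk, ks))
        prev = chunk                   # ciphertext block drives the next seed
    return bytes(out)

msg = b"block cipher sketch with a chaotic keystream"
assert decrypt(encrypt(msg)) == msg
```

The chaining makes every keystream block depend on earlier data, mirroring the paper's plaintext-dependent intermediate parameter; a real design would use a stronger mixing function than a byte sum.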
An Artificial Bee Colony Algorithm for the Job Shop Scheduling Problem with Random Processing Times
Rui Zhang
2011-09-01
Due to the influence of unpredictable random events, the processing time of each operation should be treated as a random variable if we aim at a robust production schedule. However, compared with the extensive research on the deterministic model, the stochastic job shop scheduling problem (SJSSP) has not received sufficient attention. In this paper, we propose an artificial bee colony (ABC) algorithm for the SJSSP with the objective of minimizing the maximum lateness (an index of service quality). First, we propose a performance estimate for preliminary screening of the candidate solutions. Then, the K-armed bandit model is utilized to reduce the computational burden in the exact evaluation (through Monte Carlo simulation) process. Finally, the computational results on test problems of different scales validate the effectiveness and efficiency of the proposed approach.
Lancee, Jaap; Eisma, Maarten C; van Straten, Annemieke; Kamphuis, Jan H
2015-01-01
Several trials have demonstrated the efficacy of online cognitive behavioral therapy (CBT) for insomnia. However, few studies have examined putative mechanisms of change based on the cognitive model of insomnia. Identification of modifiable mechanisms by which the treatment works may guide efforts to further improve the efficacy of insomnia treatment. The current study therefore has two aims: (1) to replicate the finding that online CBT is effective for insomnia and (2) to test putative mechanisms of change (i.e., safety behaviors and dysfunctional beliefs). Accordingly, we conducted a randomized controlled trial in which individuals with insomnia were randomized to either online CBT for insomnia (n = 36) or a waiting-list control group (n = 27). Baseline and posttest assessments included questionnaires assessing insomnia severity, safety behaviors, dysfunctional beliefs, anxiety and depression, and a sleep diary. Three- and six-month assessments were administered to the CBT group only. Results show moderate to large statistically significant effects of the online treatment compared to the waiting list on insomnia severity, sleep measures, sleep safety behaviors, and dysfunctional beliefs. Furthermore, dysfunctional beliefs and safety behaviors mediated the effects of treatment on insomnia severity and sleep efficiency. Together, these findings corroborate the efficacy of online CBT for insomnia, and suggest that these effects were produced by changing maladaptive beliefs, as well as safety behaviors. Treatment protocols for insomnia may specifically be enhanced by more focused attention on the comprehensive fading of sleep safety behaviors, for instance through behavioral experiments.
Evolving random fractal Cantor superlattices for the infrared using a genetic algorithm.
Bossard, Jeremy A; Lin, Lan; Werner, Douglas H
2016-01-01
Ordered and chaotic superlattices have been identified in Nature that give rise to a variety of colours reflected by the skin of various organisms. In particular, organisms such as silvery fish possess superlattices that reflect a broad range of light from the visible to the UV. Such superlattices have previously been identified as 'chaotic', but we propose that apparent 'chaotic' natural structures, which have been previously modelled as completely random structures, should have an underlying fractal geometry. Fractal geometry, often described as the geometry of Nature, can be used to mimic structures found in Nature, but deterministic fractals produce structures that are too 'perfect' to appear natural. Introducing variability into fractals produces structures that appear more natural. We suggest that the 'chaotic' (purely random) superlattices identified in Nature are more accurately modelled by multi-generator fractals. Furthermore, we introduce fractal random Cantor bars as a candidate for generating both ordered and 'chaotic' superlattices, such as the ones found in silvery fish. A genetic algorithm is used to evolve optimal fractal random Cantor bars with multiple generators targeting several desired optical functions in the mid-infrared and the near-infrared. We present optimized superlattices demonstrating broadband reflection as well as single and multiple pass bands in the near-infrared regime.
Sustained effects of online genetics education: a randomized controlled trial on oncogenetics.
Houwink, Elisa J F; van Teeffelen, Sarah R; Muijtjens, Arno M M; Henneman, Lidewij; Jacobi, Florijn; van Luijk, Scheltus J; Dinant, Geert Jan; van der Vleuten, Cees; Cornel, Martina C
2014-03-01
Medical professionals are increasingly expected to deliver genetic services in daily patient care. However, genetics education is considered to be suboptimal and in urgent need of revision and innovation. We designed a Genetics e-learning Continuing Professional Development (CPD) module aimed at improving general practitioners' (GPs') knowledge about oncogenetics, and we conducted a randomized controlled trial to evaluate the outcomes at the first two levels of the Kirkpatrick framework (satisfaction, learning and behavior). Between September 2011 and March 2012, a parallel-group, pre- and post-retention (6-month follow-up) controlled group intervention trial was conducted, with repeated measurements using validated questionnaires. Eighty Dutch GP volunteers were randomly assigned to the intervention or the control group. Satisfaction with the module was high, with the three item's scores in the range 4.1-4.3 (5-point scale) and a global score of 7.9 (10-point scale). Knowledge gains post test and at retention test were 0.055 (P<0.05) and 0.079 (P<0.01), respectively, with moderate effect sizes (0.27 and 0.31, respectively). The participants appreciated applicability in daily practice of knowledge aspects (item scores 3.3-3.8, five-point scale), but scores on self-reported identification of disease, referral to a specialist and knowledge about the possibilities/limitations of genetic testing were near neutral (2.7-2.8, five-point scale). The Genetics e-learning CPD module proved to be a feasible, satisfactory and clinically applicable method to improve oncogenetics knowledge. The educational effects can inform further development of online genetics modules aimed at improving physicians' genetics knowledge and could potentially be relevant internationally and across a wider range of potential audiences.
Zhang, Jingwen; Brackbill, Devon; Yang, Sijia; Becker, Joshua; Herbert, Natalie; Centola, Damon
2016-12-01
To identify what features of online social networks can increase physical activity, we conducted a 4-arm randomized controlled trial in 2014 in Philadelphia, PA. Students (n = 790, mean age = 25.2) at a university were randomly assigned to one of four conditions, composed of either supportive or competitive relationships and either individual or team incentives for attending exercise classes. The social comparison condition placed participants into 6-person competitive networks with individual incentives. The social support condition placed participants into 6-person teams with team incentives. The combined condition, with both supportive and competitive relationships, placed participants into 6-person teams, where participants could compare their team's performance to 5 other teams' performances. The control condition only allowed participants to attend classes with individual incentives. Rewards were based on the total number of classes attended by an individual, or the average number of classes attended by the members of a team. The outcome was the number of classes that participants attended. Data were analyzed using multilevel models in 2014. The mean attendance numbers per week were 35.7, 38.5, 20.3, and 16.8 in the social comparison, the combined, the control, and the social support conditions, respectively. Attendance numbers were 90% higher in the social comparison and the combined conditions (mean = 1.9, SE = 0.2) than in the two conditions without comparison (mean = 1.0, SE = 0.2) (p = 0.003). Social comparison was more effective for increasing physical activity than social support, and its effects did not depend on individual or team incentives.
Zhou Feng
2013-09-01
Full Text Available A path-planning method for mobile robots based on the Rapidly-exploring Random Tree (RRT) and the Particle Swarm Optimizer (PSO) is proposed. First, the grid method is used to describe the working space of the mobile robot; then the Rapidly-exploring Random Tree algorithm is used to obtain a global navigation path, and the Particle Swarm Optimizer algorithm is adopted to refine it into a better path. Computer experiment results demonstrate that this novel algorithm can rapidly plan an optimal path in a cluttered environment. Successful obstacle avoidance is achieved, and the model is robust and performs reliably.
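The RRT stage described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it grows a tree through a user-supplied `is_free` predicate in a 10×10 workspace (both invented for the example); the grid map construction and the PSO refinement stage are omitted.

```python
import math
import random

def rrt(start, goal, is_free, step=0.5, goal_tol=0.5, max_iters=5000, seed=1):
    """Basic RRT sketch: grow a tree from `start` by repeatedly steering the
    nearest node toward a random sample until the goal region is reached."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iters):
        # Occasionally bias sampling toward the goal to speed up convergence.
        sample = goal if rng.random() < 0.1 else (rng.uniform(0, 10), rng.uniform(0, 10))
        # Find the tree node nearest to the sample and step toward it.
        i = min(range(len(nodes)), key=lambda j: math.dist(nodes[j], sample))
        nx, ny = nodes[i]
        d = math.dist((nx, ny), sample)
        if d == 0:
            continue
        new = (nx + step * (sample[0] - nx) / d, ny + step * (sample[1] - ny) / d)
        if not is_free(new):
            continue
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) <= goal_tol:
            # Walk parents back to the root to recover the path.
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None
```

In the paper's setting the resulting waypoint list would then be handed to the PSO stage for smoothing and shortening.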
Sami Ghnimi
2010-07-01
This paper investigates a Modified Uniform Triangular Array (MUTA) to support online space-time MIMO-CDMA location-based services with full azimuthal coverage via the JADE-MUSIC algorithm. A new space-time lifting preprocessing (STLP) scheme is introduced as a decorrelating process for coherent signals through the dense/NLOS multipath MIMO channel before applying the JADE-MUSIC estimator. Uniform-H-Array (UHA) and Uniform-X-Array (UXA) geometries are established for performance comparisons with the proposed MUTA. Computer simulations in the Matlab environment are described to illustrate the performance of online joint angle/delay estimation with a MUTA-MIMO base station applying JADE-MUSIC in conjunction with the STLP scheme over the 360-degree azimuth region.
Urban Road Detection in Airborne Laser Scanning Point Cloud Using Random Forest Algorithm
Kaczałek, B.; Borkowski, A.
2016-06-01
The objective of this research is to detect points that describe a road surface in an unclassified point cloud of airborne laser scanning (ALS). For this purpose we use the Random Forest learning algorithm. The proposed methodology consists of two stages: preparation of features and supervised point cloud classification. In this approach we consider ALS points representing only the last echo. For these points RGB, intensity, the normal vectors, their mean values and the standard deviations are provided. Moreover, local and global height variations are taken into account as components of a feature vector. The feature vectors are calculated on the basis of the 3D Delaunay triangulation. The proposed methodology was tested on point clouds with an average point density of 12 pts/m2 that represent a large urban scene. A significance level of 15% was set for the decision trees of the learning algorithm. As a result of the Random Forest classification we received two subsets of ALS points, one of which represents points belonging to the road network. After the classification evaluation we achieved an overall classification accuracy of 90%. Finally, the ALS points representing roads were merged and simplified into road network polylines using morphological operations.
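As a rough illustration of the classification stage, the sketch below trains a toy random forest (bootstrap sampling plus random feature selection over decision stumps) in plain Python. The two-feature setup and the separable toy data are invented for the example; a real application would use the full ALS feature vectors and a library implementation such as scikit-learn.

```python
import random

def train_stump(X, y, feat_idx):
    """Best threshold split on one feature (minimising misclassifications)."""
    best = None
    for t in sorted({x[feat_idx] for x in X}):
        for flip in (False, True):
            pred = [(x[feat_idx] > t) != flip for x in X]
            err = sum(p != bool(label) for p, label in zip(pred, y))
            if best is None or err < best[0]:
                best = (err, t, flip)
    _, t, flip = best
    return lambda x: (x[feat_idx] > t) != flip

def random_forest(X, y, n_trees=25, seed=0):
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    stumps = []
    for _ in range(n_trees):
        # Bootstrap sample of the training points...
        idx = [rng.randrange(n) for _ in range(n)]
        Xb, yb = [X[i] for i in idx], [y[i] for i in idx]
        # ...and a random feature, mimicking random-subspace selection.
        stumps.append(train_stump(Xb, yb, rng.randrange(d)))
    def predict(x):
        votes = sum(s(x) for s in stumps)  # majority vote over the ensemble
        return int(votes * 2 > len(stumps))
    return predict
```

Here each "tree" is only a stump; the Random Forest used in the paper grows full decision trees over the complete feature set.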
Ramazani, Saba; Jackson, Delvin L.; Selmic, Rastko R.
2013-05-01
In search and surveillance operations, deploying a team of mobile agents provides a robust solution that has multiple advantages over using a single agent in efficiency and minimizing exploration time. This paper addresses the challenge of identifying a target in a given environment when using a team of mobile agents by proposing a novel method of mapping and movement of agent teams in a cooperative manner. The approach consists of two parts. First, the region is partitioned into a hexagonal beehive structure in order to provide equidistant movements in every direction and to allow for more natural and flexible environment mapping. Additionally, in search environments that are partitioned into hexagons, mobile agents have an efficient travel path while performing searches due to this partitioning approach. Second, we use a team of mobile agents that move in a cooperative manner and utilize the Tabu Random algorithm to search for the target. Due to the ever-increasing use of robotics and Unmanned Aerial Vehicle (UAV) platforms, the field of cooperative multi-agent search has developed many applications recently that would benefit from the use of the approach presented in this work, including: search and rescue operations, surveillance, data collection, and border patrol. In this paper, the increased efficiency of the Tabu Random Search algorithm method in combination with hexagonal partitioning is simulated, analyzed, and advantages of this approach are presented and discussed.
Wang Wei
2016-01-01
The theory and algorithms of adaptive inverse control are presented, and it is pointed out that an adaptive inverse control strategy can effectively eliminate the influence of noise on system control. A frequency-domain filtered-X LMS adaptive inverse control algorithm is proposed and applied to the random shock vibration control process of a two-exciter hydraulic vibration test system, and the realization of the adaptive inverse control strategy in random shock vibration testing is summarized. Self-closed-loop and field tests show that the frequency-domain filtered-X LMS adaptive inverse control algorithm can achieve high-precision control in random shock vibration tests.
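To make the adaptive-filtering idea concrete, here is a plain LMS sketch (system identification against an unknown FIR plant driven by white noise). The filtered-X refinement used in the paper, which pre-filters the reference through a secondary-path model, is omitted, and the plant coefficients and step size are invented for the example.

```python
import random

def lms_identify(plant, n_taps, n_samples=4000, mu=0.05, seed=0):
    """Plain LMS system identification: adapt FIR weights w so that the filter
    output tracks an unknown FIR plant driven by the same white input."""
    rng = random.Random(seed)
    w = [0.0] * n_taps
    buf = [0.0] * n_taps          # input history, most recent sample first
    for _ in range(n_samples):
        x = rng.uniform(-1.0, 1.0)
        buf = [x] + buf[:-1]
        d = sum(p * b for p, b in zip(plant, buf))    # desired (plant) output
        e = d - sum(wi * b for wi, b in zip(w, buf))  # error signal
        w = [wi + mu * e * b for wi, b in zip(w, buf)]  # LMS update
    return w
```

In the noise-free case the weights converge to the plant coefficients; the filtered-X variant applies the same update with a filtered reference signal.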
An efficient algorithm for the vertex-disjoint paths problem in random graphs
Broder, A.Z. [Digital Systems Research Center, Palo Alto, CA (United States); Frieze, A.M.; Suen, S. [Carnegie-Mellon Univ., Pittsburgh, PA (United States); Upfal, E. [IBM Almaden Research Center, San Jose, CA (United States)
1996-12-31
Given a graph G = (V, E) and a set of pairs of vertices in V, we are interested in finding for each pair (a_i, b_i) a path connecting a_i to b_i, such that the set of paths so found is vertex-disjoint. (The problem is NP-complete for general graphs as well as for planar graphs. It is in P if the number of pairs is fixed.) Our model is that the graph is chosen first, then an adversary chooses the pairs of endpoints, subject only to obvious feasibility constraints, namely, all pairs must be disjoint, no more than a constant fraction of the vertices could be required for the paths, and not "too many" neighbors of a vertex can be endpoints. We present a randomized polynomial time algorithm that works for almost all graphs; more precisely in the G_{n,m} or G_{n,p} models, the algorithm succeeds with high probability for all edge densities above the connectivity threshold. The set of pairs that can be accommodated is optimal up to constant factors. Although the analysis is intricate, the algorithm itself is quite simple and suggests a practical heuristic. We include two applications of the main result, one in the context of circuit switching communication, the other in the context of topological embeddings of graphs.
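The flavor of the problem can be illustrated with a much simpler greedy heuristic than the paper's algorithm: route each pair by BFS through vertices not already claimed by earlier paths. This is only a toy stand-in (the paper's randomized algorithm routes through random intermediate vertices to obtain its high-probability guarantees):

```python
from collections import deque

def greedy_disjoint_paths(adj, pairs):
    """Route each pair in turn by BFS, blocking vertices used by earlier paths."""
    used = set()
    paths = []
    for s, t in pairs:
        prev = {s: None}
        q = deque([s])
        while q and t not in prev:
            u = q.popleft()
            for v in adj[u]:
                if v not in prev and (v == t or v not in used):
                    prev[v] = u
                    q.append(v)
        if t not in prev:
            return None  # heuristic dead end: no vertex-disjoint route left
        path, v = [], t
        while v is not None:
            path.append(v)
            v = prev[v]
        path.reverse()
        used.update(path)
        paths.append(path)
    return paths
```

Unlike the paper's algorithm, this greedy routing can paint itself into a corner; it merely shows what "vertex-disjoint" demands of a solution.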
Identifying and Analyzing Novel Epilepsy-Related Genes Using Random Walk with Restart Algorithm
Guo, Wei; Shang, Dong-Mei; Cao, Jing-Hui; Feng, Kaiyan; Wang, ShaoPeng
2017-01-01
As a pathological condition, epilepsy is caused by abnormal neuronal discharge in the brain which temporarily disrupts cerebral functions. Epilepsy is a chronic disease which occurs at all ages and can seriously affect patients' personal lives. Thus, it is highly desirable to develop effective medicines or instruments to treat the disease. Identifying epilepsy-related genes is essential for understanding and treating the disease, because the proteins encoded by epilepsy-related genes are candidates for potential drug targets. In this study, a pioneering computational workflow was proposed to predict novel epilepsy-related genes using the random walk with restart (RWR) algorithm. As reported in the literature, the RWR algorithm often produces a number of false positive genes, so in this study a permutation test and functional association tests were implemented to filter the genes identified by the RWR algorithm, which greatly reduces the number of suspected genes and results in only thirty-three novel epilepsy genes. Finally, these novel genes were analyzed based upon some recently published literature. Our findings indicate that all novel genes are closely related to epilepsy. It is believed that the proposed workflow can also be applied to identify genes related to other diseases and deepen our understanding of the mechanisms of these diseases.
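The core RWR iteration (not the paper's full pipeline with permutation and functional-association filters) is a short fixed-point loop. The toy graph and restart probability below are invented for illustration; in the paper the graph would be a protein interaction network seeded with known epilepsy genes.

```python
def random_walk_with_restart(adj, seeds, r=0.3, tol=1e-10, max_iter=1000):
    """Random walk with restart on an undirected graph given as an adjacency
    dict {node: [neighbours]}: iterate p <- (1 - r) * W p + r * p0 until the
    stationary visiting probabilities converge."""
    nodes = sorted(adj)
    restart = {v: (1.0 / len(seeds) if v in seeds else 0.0) for v in nodes}
    p = dict(restart)
    for _ in range(max_iter):
        new = {}
        for v in nodes:
            # mass flowing into v from its neighbours, plus the restart mass
            inflow = sum(p[u] / len(adj[u]) for u in adj[v])
            new[v] = (1 - r) * inflow + r * restart[v]
        if max(abs(new[v] - p[v]) for v in nodes) < tol:
            return new
        p = new
    return p
```

Nodes are then ranked by their stationary probability; the filtering steps described above decide which highly ranked candidates survive.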
A Point Cloud Alignment Algorithm Based on Stereo Vision Using Random Pattern Projection
Chen-Sheng Chen
2016-03-01
This paper proposes a point cloud alignment algorithm based on stereo vision using Random Pattern Projection (RPP). In stereo vision applications it is rather difficult to find correspondences between stereo images of texture-less objects. To overcome this issue, RPP is used to enhance the object's features, thus increasing the accuracy of the identified correspondences between the stereo images. In the 3D alignment algorithm, downsampling is used to filter out the outliers of the point cloud data to improve system efficiency. Furthermore, the extracted features of the downsampled point cloud data are applied in the matching process. Finally, the object's pose is estimated by the alignment algorithm based on object features. In experiments, the maximum error and standard deviation of rotation are about 0.031° and 0.199°, respectively, while the maximum error and standard deviation of translation are about 0.565 mm and 0.902 mm. The execution time for pose estimation is about 230 ms.
Zhang, Kui; Busov, Victor; Wei, Hairong
2017-01-01
Background: Present knowledge indicates that a multilayered hierarchical gene regulatory network (ML-hGRN) often operates above a biological pathway. Although the ML-hGRN is very important for understanding how a pathway is regulated, there is almost no computational algorithm for directly constructing ML-hGRNs. Results: A backward elimination random forest (BWERF) algorithm was developed for constructing the ML-hGRN operating above a biological pathway. For each pathway gene, BWERF used a random forest model to calculate the importance values of all transcription factors (TFs) to this pathway gene recursively, with a portion (e.g. 1/10) of the least important TFs being excluded in each round of modeling, during which the importance values of all TFs to the pathway gene were updated and ranked, until only one TF remained in the list. After that, the importance values of a TF to all pathway genes were aggregated and fitted to a Gaussian mixture model to determine the TF retention for the regulatory layer immediately above the pathway layer. The acquired TFs at the secondary layer were then set to be the new bottom layer to infer the next upper layer, and this process was repeated until a ML-hGRN with the expected number of layers was obtained. Conclusions: BWERF improved the accuracy of constructing ML-hGRNs because it used backward elimination to exclude noise genes and aggregated the individual importance values when determining TF retention. We validated BWERF by using it to construct ML-hGRNs operating above the mouse pluripotency maintenance pathway and the Arabidopsis lignocellulosic pathway. Compared to GENIE3, BWERF showed an improvement in recognizing authentic TFs regulating a pathway. Compared to the bottom-up Gaussian graphical model algorithm we developed for constructing ML-hGRNs, BWERF can construct ML-hGRNs with significantly reduced edges that enable biologists to choose the implicit edges for experimental
Precise algorithm to generate random sequential addition of hard hyperspheres at saturation.
Zhang, G; Torquato, S
2013-11-01
The study of the packing of hard hyperspheres in d-dimensional Euclidean space R^d has been a topic of great interest in statistical mechanics and condensed matter theory. While the densest known packings are ordered in sufficiently low dimensions, it has been suggested that in sufficiently large dimensions the densest packings might be disordered. The random sequential addition (RSA) time-dependent packing process, in which congruent hard hyperspheres are randomly and sequentially placed into a system without interparticle overlap, is a useful packing model to study disorder in high dimensions. Of particular interest is the infinite-time saturation limit, in which the available space for another sphere tends to zero. However, the associated saturation density has been determined in all previous investigations by extrapolating the density results for nearly saturated configurations to the saturation limit, which necessarily introduces numerical uncertainties. We have refined an algorithm devised by us [S. Torquato, O. U. Uche, and F. H. Stillinger, Phys. Rev. E 74, 061308 (2006)] to generate RSA packings of identical hyperspheres. The improved algorithm produces packings that are guaranteed to contain no available space in a large simulation box using finite computational time, with heretofore unattained precision and across the widest range of dimensions (2≤d≤8). We have also calculated the packing and covering densities, pair correlation function g_2(r), and structure factor S(k) of the saturated RSA configurations. As the space dimension increases, we find that pair correlations markedly diminish, consistent with a recently proposed "decorrelation" principle, and the degree of "hyperuniformity" (suppression of infinite-wavelength density fluctuations) increases. We have also calculated the void exclusion probability in order to compute the so-called quantizer error of the RSA packings, which is related to the second moment of inertia of the average
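For intuition, the plain (non-saturating) RSA process is easy to sketch: sample candidate centers uniformly and reject overlaps. The sketch below does this for disks in the unit square; the paper's contribution, exactly tracking the remaining available space so the packing is provably saturated, is precisely the part this naive version lacks (it simply stops after a fixed number of trials).

```python
import random

def rsa_disks(radius, trials=20000, seed=0):
    """Naive RSA: propose uniformly random disk centers in the unit square and
    accept only those that do not overlap an already placed disk. Boundary
    effects are ignored (disks may protrude past the square's edge)."""
    rng = random.Random(seed)
    centers = []
    for _ in range(trials):
        p = (rng.random(), rng.random())
        if all((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 >= (2 * radius) ** 2
               for q in centers):
            centers.append(p)
    return centers
```

Estimating the saturation density from such runs requires the extrapolation the abstract criticizes, since acceptances become ever rarer as the packing fills.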
On-line generalized Steiner problem
Awerbuch, B.; Azar, Y.; Bartal, Y. [Tel Aviv Univ. (Israel)
1996-12-31
The Generalized Steiner Problem (GSP) is defined as follows. We are given a graph with non-negative weights and a set of pairs of vertices. The algorithm has to construct a minimum weight subgraph such that the two nodes of each pair are connected by a path. We consider the on-line generalized Steiner problem, in which pairs of vertices arrive on-line and need to be connected immediately. We give a simple O(log² n) competitive deterministic on-line algorithm. The previous best on-line algorithm (by Westbrook and Yan) was O(√n log n) competitive. We also consider the network connectivity leasing problem, which is a generalization of the GSP. Here edges of the graph can be either bought or leased for different costs. We provide a simple randomized O(log² n) competitive algorithm based on the on-line generalized Steiner problem result.
Combined fuzzy logic and random walker algorithm for PET image tumor delineation.
Soufi, Motahare; Kamali-Asl, Alireza; Geramifar, Parham; Abdoli, Mehrsima; Rahmim, Arman
2016-02-01
The random walk (RW) technique serves as a powerful tool for PET tumor delineation, which typically involves significant noise and/or blurring. One challenging step is hard decision-making in pixel labeling. Fuzzy logic techniques have achieved increasing application in edge detection. We aimed to combine the advantages of fuzzy edge detection with the RW technique to improve PET tumor delineation. A fuzzy inference system was designed for tumor edge detection from RW probabilities. Three clinical PET/computed tomography datasets containing 12 liver, 13 lung, and 18 abdomen tumors were analyzed, with manual expert tumor contouring as ground truth. The standard RW and proposed combined method were compared quantitatively using the dice similarity coefficient, the Hausdorff distance, and the mean standard uptake value. The dice similarity coefficient of the proposed method versus standard RW showed significant mean improvements of 21.0±7.2, 12.3±5.8, and 18.4%±6.1% for liver, lung, and abdominal tumors, respectively, whereas the mean improvements in the Hausdorff distance were 3.6±1.4, 1.3±0.4, 1.8±0.8 mm, and the mean improvements in SUVmean error were 15.5±6.3, 11.7±8.6, and 14.1±6.8% (all P's<0.001). For all tumor sizes, the proposed method outperformed the RW algorithm. Furthermore, tumor edge analysis demonstrated further enhancement of the performance of the algorithm, relative to the RW method, with decreasing edge gradients. The proposed technique improves PET lesion delineation at different tumor sites. It depicts greater effectiveness in tumors with smaller size and/or low edge gradients, wherein most PET segmentation algorithms encounter serious challenges. Favorable execution time and accurate performance of the algorithm make it a great tool for clinical applications.
Asiri, Sharefa M.
2017-08-22
In this paper, an on-line estimation algorithm of the source term in a first order hyperbolic PDE is proposed. This equation describes heat transport dynamics in concentrated solar collectors, where the source term represents the received energy. This energy depends on the solar irradiance intensity and the collector characteristics affected by the environmental changes. Control strategies are usually used to enhance the efficiency of heat production; however, these strategies often depend on the source term, which is highly affected by the external working conditions. Hence, efficient source estimation methods are required. The proposed algorithm is based on the modulating functions method, where a moving horizon strategy is introduced. Numerical results are provided to illustrate the performance of the proposed estimator in open and closed loops.
Random weights, robust lattice rules and the geometry of the cbc$r$c algorithm
Dick, Josef
2011-01-01
In this paper we study lattice rules which are cubature formulae to approximate integrands over the unit cube $[0,1]^s$ from a weighted reproducing kernel Hilbert space. We assume that the weights are independent random variables with a given mean and variance for two reasons stemming from practical applications: (i) It is usually not known in practice how to choose the weights. Thus by assuming that the weights are random variables, we obtain robust constructions (with respect to the weights) of lattice rules. This, to some extent, removes the necessity to carefully choose the weights. (ii) In practice it is convenient to use the same lattice rule for many different integrands. The best choice of weights for each integrand may vary to some degree, hence treating the weights as random variables does justice to how lattice rules are used in applications. We also study a generalized version which uses $r$ constraints, which we call the cbc$r$c (component-by-component with $r$ constraints) algorithm. We show that...
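A rank-1 lattice rule itself is a one-liner; the sketch below evaluates one for an illustrative smooth periodic integrand. The generating vector and n are toy choices, unrelated to the cbc-constructed vectors studied in the paper.

```python
import math

def lattice_rule(f, z, n):
    """Rank-1 lattice rule: average f over the points x_i = (i * z mod n) / n."""
    return sum(f([((i * zj) % n) / n for zj in z]) for i in range(n)) / n

# Example integrand: f(x) = prod_j (1 + sin(2*pi*x_j)), whose integral over
# [0,1]^2 is exactly 1; for a periodic integrand the lattice rule is very accurate.
```

The quality of such a rule hinges entirely on the generating vector z, which is what the (randomized-weight) cbc constructions in the paper are designed to choose well.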
Queue-based random-access algorithms: Fluid limits and stability issues
Javad Ghaderi
2014-09-01
We use fluid limits to explore the (in)stability properties of wireless networks with queue-based random-access algorithms. Queue-based random-access schemes are simple and inherently distributed in nature, yet provide the capability to match the optimal throughput performance of centralized scheduling mechanisms in a wide range of scenarios. Unfortunately, the type of activation rules for which throughput optimality has been established may result in excessive queue lengths and delays. The use of more aggressive/persistent access schemes can improve the delay performance, but does not offer any universal maximum-stability guarantees. In order to gain qualitative insight and investigate the (in)stability properties of more aggressive/persistent activation rules, we examine fluid limits where the dynamics are scaled in space and time. In some situations, the fluid limits have smooth deterministic features and maximum stability is maintained, while in other scenarios they exhibit random oscillatory characteristics, giving rise to major technical challenges. In the latter regime, more aggressive access schemes continue to provide maximum stability in some networks, but may cause instability in others. In order to prove that, we focus on a particular network example and conduct a detailed analysis of the fluid limit process for the associated Markov chain. Specifically, we develop a novel approach based on stopping time sequences to deal with the switching probabilities governing the sample paths of the fluid limit process. Simulation experiments are conducted to illustrate and validate the analytical results.
Efficiency versus Convergence of Boolean Kernels for On-Line Learning Algorithms
Khardon, R; Servedio, R A; 10.1613/jair.1655
2011-01-01
The paper studies machine learning problems where each example is described using a set of Boolean features and where hypotheses are represented by linear threshold elements. One method of increasing the expressiveness of learned hypotheses in this context is to expand the feature set to include conjunctions of basic features. This can be done explicitly or where possible by using a kernel function. Focusing on the well known Perceptron and Winnow algorithms, the paper demonstrates a tradeoff between the computational efficiency with which the algorithm can be run over the expanded feature space and the generalization ability of the corresponding learning algorithm. We first describe several kernel functions which capture either limited forms of conjunctions or all conjunctions. We show that these kernels can be used to efficiently run the Perceptron algorithm over a feature space of exponentially many conjunctions; however we also show that using such kernels, the Perceptron algorithm can provably make an ex...
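One of the kernels discussed — counting monotone conjunctions — has the closed form K(x, y) = 2^⟨x, y⟩ on Boolean vectors, and plugging it into a kernelized Perceptron is straightforward to sketch. The toy target below (an AND of two bits) is invented for illustration:

```python
def conj_kernel(x, y):
    """K(x, y) = 2^{<x, y>}: on Boolean vectors this counts the monotone
    conjunctions (subsets of shared 1-positions) satisfied by both x and y."""
    return 2 ** sum(a & b for a, b in zip(x, y))

def kernel_perceptron(X, y, kernel, epochs=50):
    """Kernel Perceptron: one dual coefficient per training example replaces an
    explicit weight per (exponentially many) conjunction feature."""
    alpha = [0] * len(X)
    for _ in range(epochs):
        mistakes = 0
        for i, xi in enumerate(X):
            score = sum(a * yj * kernel(xj, xi)
                        for a, xj, yj in zip(alpha, X, y) if a)
            if (1 if score > 0 else -1) != y[i]:
                alpha[i] += 1     # standard Perceptron update in dual form
                mistakes += 1
        if mistakes == 0:
            break

    def predict(x):
        s = sum(a * yj * kernel(xj, x) for a, xj, yj in zip(alpha, X, y) if a)
        return 1 if s > 0 else -1
    return predict
```

Each kernel evaluation is linear in the number of bits, even though the implicit feature space contains a coordinate for every monotone conjunction; the tradeoff the paper analyzes is that the margin, and hence the mistake bound, can degrade badly in that space.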
Prunuske, Amy J; Henn, Lisa; Brearley, Ann M; Prunuske, Jacob
Medical education increasingly involves online learning experiences to facilitate the standardization of curriculum across time and space. In class, delivering material by lecture is less effective at promoting student learning than engaging students in active learning experience and it is unclear whether this difference also exists online. We sought to evaluate medical student preferences for online lecture or online active learning formats and the impact of format on short- and long-term learning gains. Students participated online in either lecture or constructivist learning activities in a first year neurologic sciences course at a US medical school. In 2012, students selected which format to complete and in 2013, students were randomly assigned in a crossover fashion to the modules. In the first iteration, students strongly preferred the lecture modules and valued being told "what they need to know" rather than figuring it out independently. In the crossover iteration, learning gains and knowledge retention were found to be equivalent regardless of format, and students uniformly demonstrated a strong preference for the lecture format, which also on average took less time to complete. When given a choice for online modules, students prefer passive lecture rather than completing constructivist activities, and in the time-limited environment of medical school, this choice results in similar performance on multiple-choice examinations with less time invested. Instructors need to look more carefully at whether assessments and learning strategies are helping students to obtain self-directed learning skills and to consider strategies to help students learn to value active learning in an online environment.
Solving the wind farm layout optimization problem using random search algorithm
Feng, Ju; Shen, Wen Zhong
2015-01-01
Wind farm (WF) layout optimization is to find the optimal positions of wind turbines (WTs) inside a WF, so as to maximize and/or minimize a single objective or multiple objectives, while satisfying certain constraints. In this work, a random search (RS) algorithm based on continuous formulation is presented, which starts from an initial feasible layout and then improves the layout iteratively in the feasible solution space. It was first proposed in our previous study and improved in this study by adding some adaptive mechanisms. It can serve both as a refinement tool to improve an initial design by expert guesses or other optimization methods, and as an optimization tool to find the optimal layout of a WF with a certain number of WTs. A new strategy to evaluate layouts is also used, which can largely save the computation cost. This method is first applied to a widely studied ideal test problem...
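The skeleton of such a random search is an accept-if-better loop. In the sketch below the objective is just the minimum pairwise turbine spacing, a crude stand-in for the wake-based power model the paper actually evaluates; the domain size, turbine count, and iteration budget are invented.

```python
import math
import random

def random_search_layout(n_wt, span=1000.0, iters=3000, seed=7):
    """Accept-if-better random search over turbine positions: perturb one
    randomly chosen turbine per iteration and keep the move only if the
    objective (here: minimum pairwise spacing) improves."""
    rng = random.Random(seed)
    pos = [(rng.uniform(0, span), rng.uniform(0, span)) for _ in range(n_wt)]

    def objective(layout):
        return min(math.dist(a, b)
                   for i, a in enumerate(layout) for b in layout[i + 1:])

    best = objective(pos)
    for _ in range(iters):
        i = rng.randrange(n_wt)
        cand = list(pos)
        cand[i] = (rng.uniform(0, span), rng.uniform(0, span))
        val = objective(cand)
        if val > best:          # monotone improvement within the feasible region
            pos, best = cand, val
    return pos, best
```

The adaptive mechanisms mentioned in the abstract would, for example, shrink the perturbation radius over time instead of resampling uniformly.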
M. Farshad
2013-09-01
Full Text Available This paper presents a novel method based on machine learning strategies for fault locating in high voltage direct current (HVDC transmission lines. In the proposed fault-location method, only post-fault voltage signals measured at one terminal are used for feature extraction. In this paper, due to high dimension of input feature vectors, two different estimators including the generalized regression neural network (GRNN and the random forest (RF algorithm are examined to find the relation between the features and the fault location. The results of evaluation using training and test patterns obtained by simulating various fault types in a long overhead transmission line with different fault locations, fault resistance and pre-fault current values have indicated the efficiency and the acceptable accuracy of the proposed approach.
The Statistical Mechanics of Random Set Packing and a Generalization of the Karp-Sipser Algorithm
C. Lucibello
2014-01-01
We analyse the asymptotic behaviour of random instances of the maximum set packing (MSP) optimization problem, also known as maximum matching or maximum strong independent set on hypergraphs. We give an analytic prediction of the MSP's size using the 1RSB cavity method from the statistical mechanics of disordered systems. We also propose a heuristic algorithm, a generalization of the celebrated Karp-Sipser one, which allows us to rigorously prove that the replica symmetric cavity method prediction is exact for certain problem ensembles and breaks down when a core survives the leaf removal process. The e-phenomenon threshold discovered by Karp and Sipser, marking the onset of core emergence and of replica symmetry breaking, is elegantly generalized to c_s = e/(d-1) for one of the ensembles considered, where d is the size of the sets.
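The graph case (d = 2) of the Karp-Sipser heuristic is easy to sketch: match pendant (degree-1) vertices first, which is provably safe, and fall back to a random edge only when no pendant vertex remains, i.e. inside the core.

```python
import random

def karp_sipser(adj, seed=0):
    """Karp-Sipser matching heuristic: leaf removal first, random edges in the core."""
    rng = random.Random(seed)
    adj = {v: set(ns) for v, ns in adj.items()}
    matching = []

    def remove(v):
        for u in adj.pop(v, ()):       # drop v and its incident edges
            if u in adj:
                adj[u].discard(v)

    while any(adj.values()):
        pendants = [v for v, ns in adj.items() if len(ns) == 1]
        if pendants:
            v = pendants[0]            # a leaf: matching its edge is optimal
            u = next(iter(adj[v]))
        else:                          # a core survives: pick a random edge
            v = rng.choice(sorted(w for w, ns in adj.items() if ns))
            u = rng.choice(sorted(adj[v]))
        matching.append((v, u))
        remove(v)
        remove(u)
    return matching
```

The generalization in the paper applies the same leaf-removal idea to sets of size d, which is where the c_s = e/(d-1) threshold arises.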
ITAC volume assessment through a Gaussian hidden Markov random field model-based algorithm.
Passera, Katia M; Potepan, Paolo; Brambilla, Luca; Mainardi, Luca T
2008-01-01
In this paper, a semi-automatic segmentation method for volume assessment of intestinal-type adenocarcinoma (ITAC) is presented and validated. The method is based on a Gaussian hidden Markov random field (GHMRF) model that represents an advanced version of a finite Gaussian mixture (FGM) model, as it encodes spatial information through the mutual influences of neighboring sites. To fit the GHMRF model an expectation maximization (EM) algorithm is used. We applied the method to magnetic resonance data sets (each composed of T1-weighted, contrast-enhanced T1-weighted and T2-weighted images) for a total of 49 tumor-containing slices. We tested GHMRF performance with respect to FGM by both a numerical and a clinical evaluation. Results show that the proposed method has a higher accuracy in quantifying lesion area than FGM and can be applied in the evaluation of tumor response to therapy.
Security enhancement of double-random phase encryption by iterative algorithm
Qian, Sheng-Xia; Li, Yongnan; Kong, Ling-Jun; Li, Si-Min; Ren, Zhi-Cheng; Tu, Chenghou; Wang, Hui-Tian
2014-08-01
We propose an approach to enhance the security of optical encryption based on double-random phase encryption in a 4f system. The phase key in the input plane of the 4f system is generated by the Yang-Gu algorithm to control the phase of the encrypted information in the output plane, until the phase in the output plane converges to a predesigned distribution. Only the amplitude of the encrypted information needs to be recorded as the ciphertext, so the amount of information to be transmitted is greatly reduced. The ciphertext can be decrypted with the aid of the predesigned phase distribution and the phase key in the Fourier plane. Our approach can resist various attacks.
An improved random walk algorithm for the implicit Monte Carlo method
Keady, Kendra P.; Cleveland, Mathew A.
2017-01-01
In this work, we introduce a modified Implicit Monte Carlo (IMC) Random Walk (RW) algorithm, which increases simulation efficiency for multigroup radiative transfer problems with strongly frequency-dependent opacities. To date, the RW method has only been implemented in "fully-gray" form; that is, the multigroup IMC opacities are group-collapsed over the full frequency domain of the problem to obtain a gray diffusion problem for RW. This formulation works well for problems with large spatial cells and/or opacities that are weakly dependent on frequency; however, the efficiency of the RW method degrades when the spatial cells are thin or the opacities are a strong function of frequency. To address this inefficiency, we introduce a RW frequency group cutoff in each spatial cell, which divides the frequency domain into optically thick and optically thin components. In the modified algorithm, opacities for the RW diffusion problem are obtained by group-collapsing IMC opacities below the frequency group cutoff. Particles with frequencies above the cutoff are transported via standard IMC, while particles below the cutoff are eligible for RW. This greatly increases the total number of RW steps taken per IMC time-step, which in turn improves the efficiency of the simulation. We refer to this new method as Partially-Gray Random Walk (PGRW). We present numerical results for several multigroup radiative transfer problems, which show that the PGRW method is significantly more efficient than standard RW for several problems of interest. In general, PGRW decreases runtimes by a factor of ∼2-4 compared to standard RW, and a factor of ∼3-6 compared to standard IMC. While PGRW is slower than frequency-dependent Discrete Diffusion Monte Carlo (DDMC), it is also easier to adapt to unstructured meshes and can be used in spatial cells where DDMC is not applicable. This suggests that it may be optimal to employ both DDMC and PGRW in a single simulation.
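A minimal sketch of the partially-gray idea described above, assuming a simple per-group optical-depth test for the cutoff and a plain average for the group-collapsed opacity (the paper's actual collapse rule and cutoff criterion are not given in the abstract):

```python
def split_groups_by_cutoff(group_opacities, cell_width, thick_threshold=1.0):
    """Pick the frequency groups eligible for Random Walk in one spatial cell.

    Groups whose optical depth (opacity * cell width) exceeds the threshold
    are treated as optically thick and eligible for the group-collapsed RW
    diffusion treatment; the rest stay with standard IMC transport.
    Threshold and averaging rule are illustrative assumptions.
    """
    thick, thin = [], []
    for g, sigma in enumerate(group_opacities):
        (thick if sigma * cell_width > thick_threshold else thin).append(g)
    # gray RW opacity: simple average over the thick groups (stand-in for
    # the proper spectrum-weighted group collapse)
    gray_sigma = (sum(group_opacities[g] for g in thick) / len(thick)) if thick else None
    return thick, thin, gray_sigma
```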
Baker, Sabine; Sanders, Matthew R; Turner, Karen M T; Morawska, Alina
2017-04-01
This randomized controlled trial examined the efficacy of Triple P Online Brief, a low-intensity online positive parenting program for parents of children with early onset disruptive behavior problems. Two hundred parents with 2-9-year-old children displaying early onset disruptive behavior difficulties were randomly assigned to either the intervention condition (n = 100) or a Waitlist Control group (n = 100). At 8-week post-assessment, parents in the intervention group displayed significantly less use of ineffective parenting strategies and significantly more confidence in dealing with a range of behavior concerns. These effects were maintained at 9-month follow-up assessment. A delayed effect was found for child behavior problems, with parents in the intervention group reporting significantly fewer and less frequent child behavior problems at follow-up, but not at post-assessment. All effect sizes were in the small to medium range. There were no significant improvements in observed negative parent and child behavior. No change was seen for parents' adjustment, anger, or conflict over parenting. Consumer satisfaction ratings for the program were high. A brief, low-intensity parenting program delivered via the Internet can bring about significant improvements in parenting and child behavior. Copyright © 2017 Elsevier Ltd. All rights reserved.
Randomized Algorithms For High Quality Treatment Planning in Volumetric Modulated Arc Therapy
Yang, Yu; Wen, Zaiwen
2015-01-01
In recent years, volumetric modulated arc therapy (VMAT) has become an increasingly important radiation technique, widely used in clinical cancer treatment. One of the key problems in VMAT is treatment plan optimization, which is complicated by the constraints imposed by the equipment involved. In this paper, we consider a model with four major constraints: a bound on the beam intensity, an upper bound on the rate of change of the beam intensity, the moving speed of the leaves of the multi-leaf collimator (MLC), and its directional convexity. We solve the model by a two-stage algorithm, minimizing with respect to the aperture shapes and the beam intensities alternately. Specifically, the aperture shapes are obtained by a greedy algorithm whose performance is enhanced by random sampling in the leaf pairs with a decremental rate. The beam intensity is optimized using a gradient projection method with non-monotonic line search. We further improve the propo...
Online Algorithms for the Newsvendor Problem with and without Censored Demands
Sempolinski, Peter; Chaudhary, Amitabh
The newsvendor problem describes the dilemma of a newspaper salesman - how many papers should he purchase each day to resell, when he doesn't know the demand? We develop approaches for this well known problem in operations research, both for when the actual demand is known at the end of each day, and for when just the amount sold is known, i.e., the demand is censored. We present three results: (1) the first known algorithm with a bound on its worst-case performance for the censored demand newsvendor problem, (2) an algorithm with improved worst-case performance bounds for the regular newsvendor problem compared to previously known algorithms, and (3) more precise bounds on the performance of the two algorithms when they are seeded with an approximate "guess" on the optimal solution. In addition (4) we test the algorithms in a variety of simulated and real world conditions, and compare the results to those by previously known approaches. Our tests indicate that our algorithms perform comparably and often better than known approaches.
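For the uncensored case, a classical baseline (not the paper's bounded-worst-case algorithm) is to order, each day, the empirical critical-ratio quantile of the demands seen so far. The sketch below assumes zero salvage value, so the critical ratio is (price - cost) / price; all names are our own.

```python
def online_newsvendor_orders(demands, price, cost):
    """Sample-quantile newsvendor policy with fully observed demand.

    Each day, order the empirical critical-ratio quantile of past demands.
    Day 1 orders nothing, since no demand has been observed yet.
    """
    ratio = (price - cost) / price      # critical ratio, zero salvage assumed
    orders, history = [], []
    for d in demands:
        if history:
            srt = sorted(history)
            k = min(len(srt) - 1, int(ratio * len(srt)))
            orders.append(srt[k])
        else:
            orders.append(0)            # no information on the first day
        history.append(d)               # demand is revealed at end of day
    return orders
```

With censored demand only the sales (min of order and demand) would be observable, which is precisely the harder setting the paper addresses.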
Muceli, Silvia; Jiang, Ning; Farina, Dario
2014-05-01
Previous research proposed the extraction of myoelectric control signals by linear factorization of multi-channel electromyogram (EMG) recordings from forearm muscles. This paper further analyses the theoretical basis for dimensionality reduction in high-density EMG signals from forearm muscles. Moreover, it shows that the factorization of muscular activation patterns in weights and activation signals by non-negative matrix factorization (NMF) is robust with respect to the channel configuration from where the EMG signals are obtained. High-density surface EMG signals were recorded from the forearm muscles of six individuals. Weights and activation signals extracted offline from 10 channel configurations with varying channel numbers (6, 8, 16, 192 channels) were highly similar. Additionally, the method proved to be robust against electrode shifts in both transversal and longitudinal direction with respect to the muscle fibers. In a second experiment, six subjects directly used the activation signals extracted from high-density EMG for online goal-directed control tasks involving simultaneous and proportional control of two degrees-of-freedom of the wrist. The synergy weights for this control task were extracted from a reference configuration and activation signals were calculated online from the reference configuration as well as from the two shifted configurations, simulating electrode shift. Despite the electrode shift, the task completion rate, task completion time, and execution efficiency were generally not statistically different among electrode configurations. Online performances were also mostly similar when using either 6, 8, or 16 EMG channels. The robustness of the method to the number and location of channels, proved both offline and online, indicates that EMG signals recorded from forearm muscles can be approximated as linear instantaneous mixtures of activation signals and justifies the use of linear factorization algorithms for extracting, in a
Incremental Learning Extremely Random Forest Classifier for Online Learning
王爱平; 万国伟; 程志全; 李思昆
2011-01-01
This paper proposes an incremental extremely random forest (IERF) algorithm for online classification over streaming data, especially small-sample data streams. In this method, newly arrived examples are stored at the leaf nodes and, combined with the Gini index, are used to determine when to split the leaf nodes, so the trees can be expanded quickly and efficiently from only a few examples. On UCI data, the online IERF algorithm gives performance competitive with, or even better than, the offline extremely random forest (ERF) method. On moderately sized training datasets, the IERF algorithm outperforms the decision-tree reconstruction algorithm and other major incremental learning algorithms. Finally, the IERF algorithm is applied to online video object tracking (including multi-object tracking), and the results on challenging video sequences demonstrate its effectiveness and robustness.
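The leaf-splitting test described above (examples buffered at the leaves, splits triggered via the Gini index) can be sketched as follows; the threshold values are illustrative assumptions, not the paper's:

```python
def gini(labels):
    """Gini impurity of a label multiset: 1 - sum_c p_c^2."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def should_split(leaf_labels, min_examples=5, impurity_threshold=0.2):
    """Split a leaf once it has buffered enough examples AND is impure enough.

    min_examples and impurity_threshold are illustrative defaults.
    """
    return len(leaf_labels) >= min_examples and gini(leaf_labels) > impurity_threshold
```

Buffering examples until both conditions hold is what lets the forest grow incrementally from small streams instead of being rebuilt from scratch.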
Design of B2B client classification system based on random forest algorithm
李军
2015-01-01
Classification data mining algorithms are studied in this paper. The random forest algorithm offers high precision, fast training, and support for online learning, and is therefore adopted in the system. Since the random forest algorithm is only moderately robust to noise, the Bagging method is used to randomly select several groups of historical client classification data as the algorithm's training data. A classification model is trained with the random forest algorithm, and new client data are then classified automatically with this model.
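The Bagging step described above amounts to drawing bootstrap samples (sampling with replacement) from the historical client records; a minimal sketch with names of our choosing:

```python
import random

def bagging_training_sets(history, n_sets, seed=0):
    """Draw n_sets bootstrap samples from historical client records.

    Each sample has the same size as the history and is drawn with
    replacement; each one would train one model in the ensemble.
    """
    rng = random.Random(seed)
    return [[rng.choice(history) for _ in history] for _ in range(n_sets)]
```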
Harmonics elimination algorithm for operational modal analysis using random decrement technique
Modak, S. V.; Rawal, Chetan; Kundra, T. K.
2010-05-01
Operational modal analysis (OMA) extracts the modal parameters of a structure from its output response, generally recorded during operation. When applied to mechanical engineering structures, OMA is often faced with harmonics present in the output response, which can cause erroneous modal extraction. This paper demonstrates for the first time that the random decrement (RD) method can be efficiently employed to eliminate the harmonics from the randomdec signatures. Further, the work shows that even large-amplitude harmonics can be effectively eliminated by including additional random excitation, which need not be recorded for analysis, as with any other OMA method. The free decays obtained from RD are used for modal identification via the eigensystem realization algorithm (ERA). The proposed harmonic elimination method has an advantage over previous methods in that it does not require the harmonic frequencies to be known and can be used for multiple harmonics, including periodic signals. The theory behind harmonic elimination is first developed and validated. The effectiveness of the method is demonstrated through a simulated study and then by experimental studies on a beam and a more complex F-shape structure, which resembles the skeleton of a drilling or milling machine tool.
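The core of the random decrement technique is averaging many response segments that start at a common triggering condition, so the random part averages out and the free decay (the randomdec signature) remains. A minimal level-crossing sketch; the trigger rule and all names are our assumptions:

```python
def random_decrement(signal, trigger_level, seg_len):
    """Random decrement signature of a sampled response.

    Collect every segment of length seg_len that starts where the signal
    up-crosses trigger_level, then average the segments point-by-point.
    """
    segs = [signal[i:i + seg_len]
            for i in range(1, len(signal) - seg_len)
            if signal[i - 1] < trigger_level <= signal[i]]   # up-crossing trigger
    n = len(segs)
    if n == 0:
        return []
    return [sum(s[j] for s in segs) / n for j in range(seg_len)]
```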
Paul, Desbordes; Su, Ruan; Romain, Modzelewski; Sébastien, Vauclin; Pierre, Vera; Isabelle, Gardin
2016-12-28
Predicting patient outcome can greatly help to personalize cancer treatment. A large number of quantitative features (clinical exams, imaging, …) are potentially useful to assess patient outcome. The challenge is to choose the most predictive subset of features. In this paper, we propose a new feature selection strategy called GARF (genetic algorithm based on random forest), applied to features extracted from positron emission tomography (PET) images and clinical data. The most relevant features, predictive of the therapeutic response or prognostic of patient survival 3 years after the end of treatment, were selected using GARF on a cohort of 65 patients with locally advanced oesophageal cancer eligible for chemo-radiation therapy. The most relevant predictive results were obtained with a subset of 9 features, leading to a random forest misclassification rate of 18±4% and an area under the receiver operating characteristic (ROC) curve (AUC) of 0.823±0.032. The most relevant prognostic results were obtained with 8 features, leading to an error rate of 20±7% and an AUC of 0.750±0.108. Both predictive and prognostic results show better performance with GARF than with the 4 other methods studied.
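A GARF-style selection loop can be sketched as a genetic algorithm over binary feature masks. In the paper the fitness would be a random-forest error estimate on the selected features; here we leave it as a user-supplied callable, and all parameters are illustrative assumptions.

```python
import random

def ga_feature_selection(n_features, fitness, pop_size=20, generations=30,
                         p_mut=0.1, seed=0):
    """Tiny genetic algorithm over binary feature masks (GARF-style sketch).

    fitness(mask) scores a feature subset; higher is better. Elitist
    selection keeps the top half, single-point crossover plus bit-flip
    mutation fills the rest of the population.
    """
    rng = random.Random(seed)
    pop = [[rng.random() < 0.5 for _ in range(n_features)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]            # elitism: keep the top half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_features)   # single-point crossover
            child = a[:cut] + b[cut:]
            children.append([(not g) if rng.random() < p_mut else g for g in child])
        pop = parents + children
    return max(pop, key=fitness)
```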
Kornmehl, Heather; Singh, Sanminder; Johnson, Mary Ann; Armstrong, April W
2017-09-01
Atopic dermatitis (AD) is a chronic disease requiring regular follow-up. To increase access to dermatological care, online management of AD is being studied. However, a critical knowledge gap exists in determining AD patients' quality of life in direct-to-patient online models. In this study, we examined quality of life in AD patients managed through a direct-access online model. We randomized 156 patients to receiving care through a direct-access online platform or in person. Patients were seen for six visits over 12 months. At each visit, the patients completed the Dermatology Life Quality Index/Children's Dermatology Life Quality Index (DLQI/CDLQI) and Short Form (SF-12). Between baseline and 12 months, the mean (standard deviation, SD) within-group difference in DLQI score in the online group was 4.1 (±2.3); for the in-person group, the within-group difference was 4.8 (±2.7). The mean (SD) within-group difference in CDLQI score in the online group was 4.7 (±2.8); for the in-person group, it was 4.9 (±3.1). The mean (SD) within-group difference in physical component score (PCS) and mental component score (MCS) SF-12 scores in the online group was 6.5 (±3.8) and 8.6 (±4.3); for the in-person group, it was 6.8 (±3.2) and 9.1 (±3.8), respectively. The difference in the change in DLQI, CDLQI, SF-12 PCS, and SF-12 MCS scores between the two groups was 0.72 (90% CI, -0.97 to 2.41), 0.23 (90% CI, -2.21 to 2.67), 0.34 (90% CI, -1.16 to 1.84), and 0.51 (90% CI, -1.11 to 2.13), respectively. All differences were contained within their equivalence margins. Adult and pediatric AD patients receiving direct-access online care had quality of life outcomes equivalent to those of patients seen in person. The direct-access online model has the potential to increase access to care for patients with chronic skin diseases.
Polverino, Pierpaolo; Esposito, Angelo; Pianese, Cesare; Ludwig, Bastian; Iwanschitz, Boris; Mai, Andreas
2016-02-01
In the current energetic scenario, Solid Oxide Fuel Cells (SOFCs) exhibit appealing features which make them suitable for environmental-friendly power production, especially for stationary applications. An example is represented by micro-combined heat and power (μ-CHP) generation units based on SOFC stacks, which are able to produce electric and thermal power with high efficiency and low pollutant and greenhouse gases emissions. However, the main limitations to their diffusion into the mass market consist in high maintenance and production costs and short lifetime. To improve these aspects, the current research activity focuses on the development of robust and generalizable diagnostic techniques, aimed at detecting and isolating faults within the entire system (i.e. SOFC stack and balance of plant). Coupled with appropriate recovery strategies, diagnosis can prevent undesired system shutdowns during faulty conditions, with consequent lifetime increase and maintenance costs reduction. This paper deals with the on-line experimental validation of a model-based diagnostic algorithm applied to a pre-commercial SOFC system. The proposed algorithm exploits a Fault Signature Matrix based on a Fault Tree Analysis and improved through fault simulations. The algorithm is characterized on the considered system and it is validated by means of experimental induction of faulty states in controlled conditions.
A wavelet-based ECG delineation algorithm for 32-bit integer online processing
Chiari Lorenzo; Di Marco Luigi Y
2011-01-01
Abstract Background Since the first well-known electrocardiogram (ECG) delineator based on Wavelet Transform (WT) presented by Li et al. in 1995, a significant research effort has been devoted to the exploitation of this promising method. Its ability to reliably delineate the major waveform components (mono- or bi-phasic P wave, QRS, and mono- or bi-phasic T wave) would make it a suitable candidate for efficient online processing of ambulatory ECG signals. Unfortunately, previous implementati...
Algorithms for On-line Order Batching in an Order-Picking Warehouse
Sebastian Henn
2009-01-01
In manual order picking systems, order pickers walk or ride through a distribution warehouse in order to collect items required by (internal or external) customers. Order batching consists of combining these – indivisible – customer orders into picking orders. With respect to order batching, two problem types can be distinguished: In off-line (static) batching all customer orders are known in advance. In on-line (dynamic) batching customer orders become available dynamically over time. This r...
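The simplest on-line rule in this setting is first-come-first-served batching: arriving customer orders are queued in arrival sequence and released as a picking batch whenever the next order would exceed the picking-device capacity. A minimal sketch; the capacity model and names are our assumptions:

```python
def fifo_batching(orders, capacity):
    """First-come-first-served on-line order batching.

    orders is a sequence of (order_id, size) pairs in arrival sequence;
    capacity is the picking-device capacity. Orders are indivisible, so a
    batch is released as soon as the next order would overflow it.
    """
    batches, current, load = [], [], 0
    for order_id, size in orders:
        if load + size > capacity:       # next order would overflow: release batch
            batches.append(current)
            current, load = [], 0
        current.append(order_id)
        load += size
    if current:                          # release the final partial batch
        batches.append(current)
    return batches
```

More sophisticated on-line rules trade off waiting (to fill batches better) against responsiveness, which is the design space the thesis explores.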
Casson, Alexander J; Rodriguez-Villegas, Esther
2008-01-01
This paper quantifies the performance difference between custom and generic hardware algorithm implementations, illustrating the challenges that are involved in Body Area Network signal processing implementations. The potential use of analogue signal processing to improve the power performance is also demonstrated.
A two-level on-line learning algorithm of Artificial Neural Network with forward connections
Stanislaw Placzek
2014-12-01
An Artificial Neural Network with cross-connections is one of the most popular network structures. The structure contains an input layer, at least one hidden layer, and an output layer. When analysing and describing an ANN structure, the first parameter one usually encounters is the number of layers. A hierarchical structure is the default and accepted way of describing the network. Under this assumption, the network structure can be described from a different point of view: a set of concepts and models can be used to describe the complexity of the ANN's structure, together with a two-level learning algorithm. Applying the hierarchical structure to the learning algorithm, the ANN is divided into sub-networks. Every sub-network is responsible for finding the optimal values of its weight coefficients, using a local target function to minimise the learning error. The second, coordination level of the learning algorithm is responsible for coordinating the local solutions and finding the minimum of the global target function. In the article, special emphasis is placed on the coordinator's role in the learning algorithm and its target function. In each iteration the coordinator sends coordination parameters to the first-level sub-networks. Using the input vector X and the teaching vector Z, the local procedures run and find their weight coefficients. At the same step, feedback information is calculated and sent to the coordinator. The process is repeated until the minimum of the local target functions is achieved. As an example, the two-level learning algorithm is used to implement an ANN in the underwriting process for classifying the category of health in a life insurance company.
Point-Based Online Value Iteration Algorithm for POMDPs
仵博; 吴敏; 佘锦华
2013-01-01
Partially observable Markov decision processes (POMDPs) provide a rich framework for sequential decision-making in stochastic domains under uncertainty. However, solving POMDPs is typically computationally intractable: the belief state suffers from the two curses of dimensionality and history, and existing online algorithms cannot simultaneously satisfy the requirements of low error and high timeliness, which has kept the otherwise attractive POMDP model out of practical engineering applications. To address these problems, this paper proposes a point-based online value iteration (PBOVI) algorithm for POMDPs. To speed up solving, the algorithm performs value backups at specific reachable belief points rather than over the entire belief simplex. It exploits a branch-and-bound pruning approach to prune the AND/OR tree of belief states online, and it reuses the belief states computed at the previous step to avoid repeated computation. The experiment and simulation results show that the proposed algorithm has a low error rate and fast convergence, and satisfies the real-time requirements of the system.
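Point-based methods operate on individual reachable belief points; the basic operation underneath any backup is the Bayesian belief update, sketched here for tabular transition and observation models (the indexing convention T[a][s][s'] and O[a][s'][o] is ours):

```python
def belief_update(belief, action, obs, T, O):
    """Bayesian belief update: b'(s') ∝ O[a][s'][o] * sum_s T[a][s][s'] * b(s).

    belief is a probability vector over states; T and O are nested lists
    indexed as T[action][s][s_next] and O[action][s_next][obs].
    """
    n = len(belief)
    new_b = [O[action][s2][obs] * sum(T[action][s][s2] * belief[s] for s in range(n))
             for s2 in range(n)]
    z = sum(new_b)                     # normalization constant Pr(obs | belief, action)
    return [x / z for x in new_b] if z else belief
```

Reusing beliefs computed at the previous step, as PBOVI does, simply means caching the results of this update for belief points already visited.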
Kerfoot, B Price; Gagnon, David R; McMahon, Graham T; Orlander, Jay D; Kurgansky, Katherine E; Conlin, Paul R
2017-09-01
Rigorous evidence is lacking whether online games can improve patients' longer-term health outcomes. We investigated whether an online team-based game delivering diabetes self-management education (DSME) to patients via e-mail or mobile application (app) can generate longer-term improvements in hemoglobin A1c (HbA1c). Patients (n = 456) on oral diabetes medications with HbA1c ≥58 mmol/mol were randomly assigned between a DSME game (with a civics booklet) and a civics game (with a DSME booklet). The 6-month games sent two questions twice weekly via e-mail or mobile app. Participants accrued points based on performance, with scores posted on leaderboards. Winning teams and individuals received modest financial rewards. Our primary outcome measure was HbA1c change over 12 months. DSME game patients had significantly greater HbA1c reductions over 12 months than civics game patients (-8 mmol/mol [95% CI -10 to -7] and -5 mmol/mol [95% CI -7 to -3], respectively; P = 0.048). HbA1c reductions were greater among patients with baseline HbA1c >75 mmol/mol: -16 mmol/mol [95% CI -21 to -12] and -9 mmol/mol [95% CI -14 to -5] for DSME and civics game patients, respectively; P = 0.031. Patients with diabetes who were randomized to an online game delivering DSME demonstrated sustained and meaningful HbA1c improvements. Among patients with poorly controlled diabetes, the DSME game reduced HbA1c by a magnitude comparable to starting a new diabetes medication. Online games may be a scalable approach to improve outcomes among geographically dispersed patients with diabetes and other chronic diseases. © 2017 by the American Diabetes Association.
Kerfoot, B Price; Turchin, Alexander; Breydo, Eugene; Gagnon, David; Conlin, Paul R
2014-05-01
Many patients with high blood pressure (BP) do not have antihypertensive medications appropriately intensified at clinician visits. We investigated whether an online spaced-education (SE) game among primary care clinicians can decrease time to BP target among their hypertensive patients. A 2-arm randomized trial was conducted over 52 weeks among primary care clinicians at 8 hospitals. Educational content consisted of 32 validated multiple-choice questions with explanations on hypertension management. Providers were randomized into 2 groups: SE clinicians were enrolled in the game, whereas control clinicians received identical educational content in an online posting. SE game clinicians were e-mailed 1 question every 3 days. Adaptive game mechanics resent questions in 12 or 24 days if answered incorrectly or correctly, respectively. Clinicians retired questions by answering each correctly twice consecutively. Posting of relative performance among peers fostered competition. The primary outcome measure was time to BP target. The game was completed by 87% of clinicians (48/55), whereas 84% of control clinicians (47/56) read the online posting. In multivariable analysis of 17 866 hypertensive periods among 14 336 patients, the hazard ratio for time to BP target in the SE game cohort was 1.043 (95% confidence interval, 1.007-1.081; P=0.018). The number of hypertensive episodes needed to treat to normalize one additional patient's BP was 67.8. The number of clinicians needed to teach to achieve this was 0.43. An online SE game among clinicians generated a modest but significant reduction in the time to BP target among their hypertensive patients. http://www.clinicaltrials.gov. Unique identifier: NCT00904007. © 2014 American Heart Association, Inc.
Self-adaptive PID controller of microwave drying rotary device tuning on-line by genetic algorithms
杨彪; 梁贵安; 彭金辉; 郭胜惠; 李玮; 张世敏; 李英伟; 白松
2013-01-01
The control design, based on a self-adaptive PID with genetic algorithm (GA) tuning on-line, was investigated for the temperature control of an industrial microwave drying rotary device with multiple layers (IMDRDWM) and with multivariable nonlinear interaction of microwave and materials. The conventional PID control strategy, incorporated with an optimizing GA, was put forward to maintain the optimum drying temperature and keep the moisture content below 1%; the adaptation ability is provided by the GA cost function, which responds to changes in the output. Simulations on five different industrial process models, and a practical temperature control system for intensive drying of selenium-enriched slag using the IMDRDWM, were carried out systematically, indicating the reliability and effectiveness of the control design. The parameters of the proposed control design are all implemented on-line without iterative predictive calculations, and closed-loop system stability is guaranteed, which makes the developed scheme simpler in its synthesis and application, providing practical guidelines for control implementation and parameter design.
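The controller law being tuned is a standard discrete PID. The sketch below shows one update step, with the gains (kp, ki, kd) as the quantities a GA would adjust on-line; the signature and discretization are our assumptions, not the paper's.

```python
def pid_step(setpoint, measured, prev_err, err_sum, kp, ki, kd, dt):
    """One discrete PID update step.

    Returns the control output plus the updated integral accumulator and
    error, which the caller carries into the next step.
    """
    err = setpoint - measured
    err_sum = err_sum + err * dt           # integral term accumulator
    deriv = (err - prev_err) / dt          # derivative term (backward difference)
    u = kp * err + ki * err_sum + kd * deriv
    return u, err_sum, err
```

In a GA-tuned scheme, each candidate (kp, ki, kd) triple would be scored by running this loop against the plant (or a model) and evaluating a tracking-error cost.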
Online maintenance policy for a deteriorating system with random change of mode
Saassouh, B. [Laboratoire de Modelisation et Surete des Systemes, Institut Charles Delaunay-FRE CNRS 2848, Universite de Technologie de Troyes, 12, rue Marie Curie-BP 2060, 10010 Troyes Cedex (France); Dieulle, L. [Laboratoire de Modelisation et Surete des Systemes, Institut Charles Delaunay-FRE CNRS 2848, Universite de Technologie de Troyes, 12, rue Marie Curie-BP 2060, 10010 Troyes Cedex (France); Grall, A. [Laboratoire de Modelisation et Surete des Systemes, Institut Charles Delaunay-FRE CNRS 2848, Universite de Technologie de Troyes, 12, rue Marie Curie-BP 2060, 10010 Troyes Cedex (France)]. E-mail: antoine.grall@utt.fr
2007-12-15
Most maintenance policies proposed in the literature for gradually deteriorating systems consider a stationary deterioration process. This paper attempts to account for stochastically deteriorating systems that are subject to a sudden change in their degradation process. A technical device subject to gradual degradation is considered, and it is assumed that the level of degradation can be summarized by a single scalar variable. An online maintenance decision rule is proposed that makes it possible to take into account, in real time, the online information available on the operating mode of the system as well as its actual deterioration level. We show the efficiency of online decision rules for maintenance with respect to traditional maintenance policies based on a static alarm threshold. Numerical simulations are given to assess and optimize the performance of the maintained system from the point of view of its asymptotic unavailability, and the results are compared to those obtained with classical control-limit maintenance policies.
Provost, Floriane; Hibert, Clément; Malet, Jean-Philippe; Stumpf, André; Doubre, Cécile
2016-04-01
Different studies have shown the presence of microseismic activity in soft-rock landslides. The seismic signals exhibit significantly different features in the time and frequency domains, which allow their classification and interpretation. Most of the classes can be associated with different mechanisms of deformation occurring within and at the surface (e.g. rockfall, slide-quake, fissure opening, fluid circulation). However, some signals remain not fully understood, and some classes contain too few examples to permit interpretation. To move toward a more complete interpretation of the links between the dynamics of soft-rock landslides and the physical processes controlling their behaviour, a complete catalog of the endogenous seismicity is needed. We propose a multi-class detection method based on the random forests algorithm to automatically classify the source of seismic signals. Random forests is a supervised machine-learning technique based on the computation of a large number of decision trees. The multiple decision trees are constructed from training sets including each of the target classes. In the case of seismic signals, these attributes may encompass spectral features but also waveform characteristics, multi-station observations, and other relevant information. The random forest classifier is used because it provides state-of-the-art performance when compared with other machine-learning techniques (e.g. SVM, neural networks) and requires no fine tuning. Furthermore, it is relatively fast, robust, easy to parallelize, and inherently suitable for multi-class problems. In this work, we present the first results of the classification method applied to the seismicity recorded at the Super-Sauze landslide between 2013 and 2015. We selected a dozen seismic signal features that characterize precisely its spectral content (e.g. central frequency, spectrum width, energy in several frequency bands, spectrogram shape, spectrum local and global maxima
Land cover classification using random forest with genetic algorithm-based parameter optimization
Ming, Dongping; Zhou, Tianning; Wang, Min; Tan, Tian
2016-07-01
Land cover classification based on remote sensing imagery is an important means to monitor, evaluate, and manage land resources. However, it requires robust classification methods that allow accurate mapping of complex land cover categories. Random forest (RF) is a powerful machine-learning classifier that can be used in land remote sensing. However, two important parameters of RF classification, namely the number of trees and the number of variables tried at each split, affect classification accuracy. Thus, optimal parameter selection is an unavoidable problem in RF-based image classification. This study uses the genetic algorithm (GA) to optimize these two parameters of RF to produce optimal land cover classification accuracy. HJ-1B CCD2 image data are used to classify six different land cover categories in Changping, Beijing, China. Experimental results show that GA-RF can avoid arbitrariness in the selection of parameters. The experiments also compare land cover classification results obtained with the GA-RF method, the traditional RF method (with default parameters), and the support vector machine method. With GA-RF, classification accuracy improved by 1.02% and 6.64%, respectively, over the two other methods. The comparison shows that GA-RF is a feasible solution for land cover classification without compromising accuracy or incurring excessive time.
Waiguny, Martin KJ
2015-01-01
Background Physician-rating websites combine public reporting with social networking and offer an attractive means by which users can provide feedback on their physician and obtain information about other patients’ satisfaction and experiences. However, research on how users evaluate information on these portals is still scarce and only little knowledge is available about the potential influence of physician reviews on a patient’s choice. Objective Starting from the perspective of prospective patients, this paper sets out to explore how certain characteristics of physician reviews affect the evaluation of the review and users’ attitudes toward the rated physician. We propose a model that relates review style and review number to constructs of review acceptance and check it with a Web-based experiment. Methods We employed a randomized 2x2 between-subject, factorial experiment manipulating the style of a physician review (factual vs emotional) and the number of reviews for a certain physician (low vs high) to test our hypotheses. A total of 168 participants were presented with a Web-based questionnaire containing a short description of a dentist search scenario and the manipulated reviews for a fictitious dental physician. To investigate the proposed hypotheses, we carried out moderated regression analyses and a moderated mediation analysis using the PROCESS macro 2.11 for SPSS version 22. Results Our analyses indicated that a higher number of reviews resulted in a more positive attitude toward the rated physician. The results of the regression model for attitude toward the physician suggest a positive main effect of the number of reviews (mean [low] 3.73, standard error [SE] 0.13, mean [high] 4.15, SE 0.13). We also observed an interaction effect with the style of the review—if the physician received only a few reviews, fact-oriented reviews (mean 4.09, SE 0.19) induced a more favorable attitude toward the physician compared to emotional reviews (mean 3
Hui, Alison; Wong, Paul Wai-Ching; Fu, King-Wa
2015-01-01
A depression-awareness campaign delivered through the Internet has been recommended as a public health approach that would enhance mental health literacy and encourage help-seeking attitudes. However, the outcomes of such a campaign remain understudied. The main aim of this study was to evaluate the effectiveness of an online depression awareness campaign, informed by the theory of planned behavior, to encourage help-seeking attitudes for depression and to enhance mental health literacy in Hong Kong. The second aim was to examine click-through behaviors by varying the affective facial expressions of people in the Facebook advertisements. Potential participants were recruited through Facebook advertisements, using either a happy or sad face illustration. Volunteer participants registered for the study by clicking on the advertisement and were invited to leave their personal email addresses to receive educational content about depression. The participants were randomly assigned into two groups (campaign or control), and over four consecutive weeks received either the campaign material or official information developed by the Hospital Authority in Hong Kong. Pretests and posttests were conducted before and after the campaign to measure the differences in help-seeking attitudes and mental health literacy between the campaign and control groups. Of the 199 participants who registered and completed the pretest, 116 (55 campaign and 62 control) completed the campaign and the posttest. At the posttest, we found no significant changes in help-seeking attitudes between the campaign and control groups, but the campaign group participants demonstrated a statistically significant improvement in mental health literacy (P=.031) and a higher willingness to access additional information (P…). The happy face Facebook advertisement attracted more click-throughs by users into the website than did the sad face advertisement (P=.03). The present study provides evidence that an online
Online cognitive-behavioral treatment of bulimic symptoms: a randomized controlled trial
Ruwaard, J.; Lange, A.; Broeksteeg, J.; Renteria Agirre, A.; Schrieken, B.; Dolan, C.V.; Emmelkamp, P.
2013-01-01
Background: Manualized cognitive-behavioural treatment (CBT) is underutilized in the treatment of bulimic symptoms. Internet-delivered treatment may reduce current barriers. Objective: This study aimed to assess the efficacy of a new online CBT of bulimic symptoms. Method: Participants with bulimic
Utilization of genetic algorithm in on-line tuning of fluid power servos
Halme, J.
1997-12-31
This study describes a robust and plausible method based on genetic algorithms suitable for tuning a regulator. The main advantages of the presented method are its robustness and ease of use. In this thesis the method is demonstrated by searching for appropriate control parameters of a state-feedback controller in a fluid power environment. To corroborate the robustness of the tuning method, two earlier studies are also presented in the appendix, where the presented tuning method is used in different kinds of regulator tuning situations. (orig.) 33 refs.
van Leeuwen, Y; Rombouts, E K; Kruithof, C J; van der Meer, F J M; Rosendaal, F R
2007-01-01
BACKGROUND: Efforts to improve dosing quality in oral anticoagulant control include the use of computer algorithms. As current algorithms are simplistic and give dosage proposals in a small fraction of patients, we developed an algorithm based on principles of system and control engineering that giv
Using rapidly-exploring random tree-based algorithms to find smooth and optimal trajectories
Matebese, B
2012-10-01
Full Text Available feasible solution faster than other algorithms. The drawback of using RRT is that, as the number of samples increases, the probability that the algorithm converges to a sub-optimal solution increases. Furthermore, the path generated by this algorithm...
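A bare-bones RRT in 2D (stdlib only, no obstacles) illustrates the basic mechanism the abstract builds on. The step size, goal bias, and workspace bounds below are arbitrary illustrative choices; real planners add collision checking, and, as the abstract notes, post-processing for smoothness and optimality.

```python
import math
import random

def rrt(start, goal, step=0.5, iters=2000, seed=3):
    """Grow a tree toward random samples; stop when a node reaches the goal."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {start: None}
    for _ in range(iters):
        # 10% goal bias speeds up convergence toward the target
        sample = goal if rng.random() < 0.1 else (rng.uniform(0, 10),
                                                  rng.uniform(0, 10))
        near = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d == 0:
            continue
        t = min(1.0, step / d)  # steer at most one step toward the sample
        new = (near[0] + t * (sample[0] - near[0]),
               near[1] + t * (sample[1] - near[1]))
        if new in parent:
            continue
        parent[new] = near
        nodes.append(new)
        if math.dist(new, goal) < step:  # close enough: walk back up the tree
            path = [new]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
    return None

path = rrt((0.0, 0.0), (9.0, 9.0))
```

The sub-optimality the abstract mentions is visible here: the returned path zig-zags, since each edge is a fixed-length step toward a random sample rather than toward the goal.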
Maciej Goćwin
2008-01-01
Full Text Available The complexity of initial-value problems is well studied for systems of equations of first order. In this paper, we study the \(\varepsilon\)-complexity for initial-value problems for scalar equations of higher order. We consider two models of computation, the randomized model and the quantum model. We construct almost optimal algorithms adjusted to scalar equations of higher order, without passing to systems of first order equations. The analysis of these algorithms allows us to establish upper complexity bounds. We also show (almost matching) lower complexity bounds. The \(\varepsilon\)-complexity in the randomized and quantum setting depends on the regularity of the right-hand side function, but is independent of the order of equation. Comparing the obtained bounds with results known in the deterministic case, we see that randomized algorithms give us a speed-up by \(1/2\), and quantum algorithms by \(1\) in the exponent. Hence, the speed-up does not depend on the order of equation, and is the same as for the systems of equations of first order. We also include results of some numerical experiments which confirm theoretical results.
AN ADVANCED DYNAMIC FEEDBACK AND RANDOM DISPATCHING LOAD-BALANCING ALGORITHM FOR GMLC IN 3G
Liao Jianxin; Zhang Hao; Zhu Xiaomin
2006-01-01
Based on the system architecture and software structure of the GMLC (Gateway Mobile Location Center) in 3G (third generation) networks, a new dynamic load-balancing algorithm is proposed. It is based on dynamic feedback and incorporates the increment from admitting new requests into the load forecast. It dynamically adjusts the dispatching probability according to the remaining processing capacity of each node. Experiments on the performance of the algorithm have been carried out in the GMLC, and the algorithm is compared with the Pick-KX algorithm and the DFB (Dynamic FeedBack) algorithm in terms of average throughput and average response time. Experimental results show that the average throughput of the proposed algorithm is about five percent higher than that of the other two algorithms and the average response time is about four percent higher under high system load.
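The core dispatching rule, routing each request with probability proportional to a node's remaining capacity as reported by feedback, can be sketched as follows. This is only an illustration: the node names and capacities are invented, and the abstract's load-forecast increment for newly admitted requests is omitted.

```python
import random

class Dispatcher:
    """Route requests with probability proportional to each node's spare capacity."""
    def __init__(self, capacities, seed=7):
        self.capacity = dict(capacities)
        self.load = {n: 0 for n in self.capacity}
        self.rng = random.Random(seed)

    def report(self, node, load):
        self.load[node] = load  # periodic feedback from each node

    def dispatch(self):
        spare = {n: max(self.capacity[n] - self.load[n], 0)
                 for n in self.capacity}
        total = sum(spare.values())
        if total == 0:  # every node saturated: fall back to uniform choice
            return self.rng.choice(list(self.capacity))
        r = self.rng.uniform(0, total)  # roulette-wheel over spare capacity
        for node, s in spare.items():
            r -= s
            if r <= 0:
                return node
        return node

# Hypothetical two-node GMLC cluster with very uneven load.
d = Dispatcher({"gmlc1": 100, "gmlc2": 100})
d.report("gmlc1", 90)   # gmlc1 nearly saturated
d.report("gmlc2", 10)   # gmlc2 mostly idle
counts = {"gmlc1": 0, "gmlc2": 0}
for _ in range(1000):
    counts[d.dispatch()] += 1
```

With the loads above, roughly nine in ten requests go to the idle node, and the split tracks the feedback as new load reports arrive.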
Altarelli, Fabrizio; Zamponi, Francesco
2007-01-01
We study the performance of stochastic heuristic search algorithms on Uniquely Extendible Constraint Satisfaction Problems with random inputs. We show that, for any heuristic preserving the Poissonian nature of the underlying instance, the (heuristic-dependent) largest ratio $\alpha_a$ of constraints per variable for which a search algorithm is likely to find solutions is smaller than the critical ratio $\alpha_d$ above which solutions are clustered and highly correlated. In addition, we show that the clustering ratio can be reached, as the number k of variables per constraint goes to infinity, by the so-called Generalized Unit Clause heuristic.
Pan, Indranil; Das, Saptarshi; Gupta, Amitava
2011-01-01
An optimal PID and an optimal fuzzy PID controller have been tuned by minimizing the Integral of Time multiplied Absolute Error (ITAE) and the squared controller output for a networked control system (NCS). The tuning is attempted for a higher-order, time-delay system using stochastic algorithms, viz. the Genetic Algorithm (GA) and two variants of Particle Swarm Optimization (PSO), and the closed-loop performances are compared. The paper shows that random variation in network delay can be handled more efficiently with fuzzy logic based PID controllers than with conventional PID controllers.
AdBagging: Adaptive Sampling Parameters Online Bagging Algorithm
李小斌; 李世银
2011-01-01
By analyzing the concept drift problem in data stream classification, a new algorithm based on online bagging, named adaptive lambda bagging (AdBagging), is introduced. The new algorithm dynamically adjusts the sampling parameter of the Poisson distribution used in online bagging according to the number of misclassified samples in the data stream. In this way, misclassified samples receive a higher learning weight and correctly classified samples a lower one. The learning weight of each sample is further adjusted according to its temporal order, so the ensemble can handle concept drift in data stream classification while retaining the efficiency and simplicity of online bagging. Experiments on synthetic and real data sets show that the algorithm is effective.
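The adaptive-lambda online bagging scheme can be sketched in stdlib Python. The lambda values, the base learner (a per-class running centroid on a scalar feature), and the data stream are all illustrative stand-ins, and the abstract's time-order weighting is left out for brevity.

```python
import math
import random

def poisson(lam, rng):
    """Knuth's method for sampling from a Poisson distribution."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

class Centroid:
    """Toy incremental learner: predict the class with the nearest running mean."""
    def __init__(self):
        self.total, self.count = {}, {}
    def partial_fit(self, x, y):
        self.total[y] = self.total.get(y, 0.0) + x
        self.count[y] = self.count.get(y, 0) + 1
    def predict(self, x):
        if not self.count:
            return None
        return min(self.count, key=lambda y: abs(x - self.total[y] / self.count[y]))

class AdBagging:
    """Online bagging where misclassified samples get a larger Poisson lambda."""
    def __init__(self, learners, lam_hi=2.0, lam_lo=0.5, seed=5):
        self.learners, self.lam_hi, self.lam_lo = learners, lam_hi, lam_lo
        self.rng = random.Random(seed)
    def predict(self, x):
        votes = [l.predict(x) for l in self.learners]
        return max(set(votes), key=votes.count)
    def update(self, x, y):
        # misclassified -> high lambda (more weight), correct -> low lambda
        lam = self.lam_hi if self.predict(x) != y else self.lam_lo
        for learner in self.learners:
            for _ in range(poisson(lam, self.rng)):  # weight = Poisson draw
                learner.partial_fit(x, y)

bag = AdBagging([Centroid() for _ in range(5)])
for _ in range(10):
    for x, y in [(0.1, "lo"), (0.2, "lo"), (0.9, "hi"), (0.8, "hi")]:
        bag.update(x, y)
```

Standard online bagging uses a fixed Poisson(1) weight for every sample; the only change here is making lambda depend on whether the current ensemble already classifies the sample correctly.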
Predicting Solar Flares Using SDO/HMI Vector Magnetic Data Product and Random Forest Algorithm
Liu, Chang; Deng, Na; Wang, Jason; Wang, Haimin
2017-08-01
Adverse space weather effects can often be traced to solar flares, the prediction of which has drawn significant research interest. Many previous forecasting studies used physical parameters derived from photospheric line-of-sight field or ground-based vector field observations. The Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory produces full-disk vector magnetograms with a continuous high cadence, while flare prediction efforts utilizing this unprecedented data source are still limited. Here we report results of flare prediction using physical parameters provided by the Space-weather HMI Active Region Patches (SHARP) and related data products. We survey X-ray flares that occurred from 2010 May to 2016 December, and categorize their source regions into four classes (B, C, M, and X) according to the maximum GOES magnitude of the flares they generated. We then retrieve SHARP-related parameters for each selected region at the beginning of its flare date to build a database. Finally, we train a machine-learning algorithm, called random forest (RF), to predict the occurrence of a certain class of flares in a given active region within 24 hours, evaluate the classifier performance using the 10-fold cross-validation scheme, and characterize the results using standard performance metrics. Compared to previous works, our experiments indicate that using the HMI parameters and RF is a valid method for flare forecasting with fairly reasonable prediction performance. We also find that the total unsigned quantities of vertical current, current helicity, and flux near the polarity inversion line are among the most important parameters for classifying flaring regions into different classes.
Predicting Solar Flares Using SDO/HMI Vector Magnetic Data Products and the Random Forest Algorithm
Liu, Chang; Deng, Na; Wang, Jason T. L.; Wang, Haimin
2017-07-01
Adverse space-weather effects can often be traced to solar flares, the prediction of which has drawn significant research interests. The Helioseismic and Magnetic Imager (HMI) produces full-disk vector magnetograms with continuous high cadence, while flare prediction efforts utilizing this unprecedented data source are still limited. Here we report results of flare prediction using physical parameters provided by the Space-weather HMI Active Region Patches (SHARP) and related data products. We survey X-ray flares that occurred from 2010 May to 2016 December and categorize their source regions into four classes (B, C, M, and X) according to the maximum GOES magnitude of flares they generated. We then retrieve SHARP-related parameters for each selected region at the beginning of its flare date to build a database. Finally, we train a machine-learning algorithm, called random forest (RF), to predict the occurrence of a certain class of flares in a given active region within 24 hr, evaluate the classifier performance using the 10-fold cross-validation scheme, and characterize the results using standard performance metrics. Compared to previous works, our experiments indicate that using the HMI parameters and RF is a valid method for flare forecasting with fairly reasonable prediction performance. To our knowledge, this is the first time that RF has been used to make multiclass predictions of solar flares. We also find that the total unsigned quantities of vertical current, current helicity, and flux near the polarity inversion line are among the most important parameters for classifying flaring regions into different classes.
Marcos Cerqueira Lima
2014-09-01
Full Text Available This paper evaluates how the advice of experienced entrepreneurs to young start-up creators in an online community reflects entrepreneurship traits commonly found in conceptual typologies. The overall goal is to contrast and evaluate existing models based on evidence from an online community. This should facilitate future studies to improve current typologies by ranking entrepreneurial traits according to perceived relevance. In order to achieve these objectives, we conducted a “netnographic study” (i.e., a qualitative analysis of web-based content) of 96 answers to the question “What is the best advice for a young, first-time startup CEO?” on Quora.com. Relying on Quora’s ranking algorithm (based on crowdsourcing of votes and community prestige), we focused on the top 50% of answers (which we shall call the “above Quora 50” category), considered the most relevant by its 2000+ followers and 120,000+ viewers. We used Nvivo as a Qualitative Data Analysis Software to code all the entries into the literature categories. These codes were later retrieved using matrix queries to compare the incidence of traits and the perceived relevance of answers. We found that among the 50% highest-ranking answers on Quora, the following traits are perceived as the most important for young entrepreneurs to develop: management style, attitude in interpersonal relations, vision, self-concept, leadership style, marketing, market and customer knowledge, innovation, technical knowledge and skills, attitude to growth, ability to adapt, purpose and relations system. These results could lead to improving existing typologies and creating new models capable of better identifying people with the highest potential to succeed in new venture creation.
Deuerlein, Jochen; Meyer-Harries, Lea; Guth, Nicolai
2017-07-01
Drinking water distribution networks are part of critical infrastructures and are exposed to a number of different risks. One of them is the risk of unintended or deliberate contamination of the drinking water within the pipe network. Over the past decade research has focused on the development of new sensors that are able to detect malicious substances in the network and early warning systems for contamination. In addition to the optimal placement of sensors, the automatic identification of the source of a contamination is an important component of an early warning and event management system for security enhancement of water supply networks. Many publications deal with the algorithmic development; however, only little information exists about the integration within a comprehensive real-time event detection and management system. In the following the analytical solution and the software implementation of a real-time source identification module and its integration within a web-based event management system are described. The development was part of the SAFEWATER project, which was funded under FP 7 of the European Commission.
Bonfim Amaro Júnior
2017-01-01
Full Text Available The irregular strip packing problem (ISPP) is a class of cutting and packing (C&P) problems in which a set of items with arbitrary shapes must be placed in a container with variable length. The aim of this work is to minimize the area needed to accommodate the given demand. The ISPP is present in various types of industries, from manufacturers to exporters (e.g., shipbuilding, clothing, and glass). In this paper, we propose a parallel Biased Random-Key Genetic Algorithm (µ-BRKGA) with multiple populations for the ISPP, applying a collision-free region (CFR) concept as the positioning method in order to obtain an efficient and fast layout solution. For the proposed algorithm, a layout is represented by the placement order into the container and the corresponding orientations. In order to evaluate the proposed µ-BRKGA algorithm, computational tests using benchmark problems were applied, analyzed, and compared with different approaches.
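The random-key representation used by BRKGA-style algorithms decodes a vector of uniform keys into a discrete solution. The decoder below is a hedged sketch of the representation described in the abstract (placement order plus orientation); the number of items and orientations is illustrative, and the collision-free-region placement step itself is not reproduced.

```python
import random

def decode(chromosome, n_orientations=4):
    """Decode a random-key chromosome into a placement order and orientations.

    The first n genes are sorted to give the order in which items enter the
    container; the last n genes each select one of the discrete orientations.
    """
    n = len(chromosome) // 2
    order_keys, orient_keys = chromosome[:n], chromosome[n:]
    order = sorted(range(n), key=lambda i: order_keys[i])
    orientations = [int(k * n_orientations) % n_orientations
                    for k in orient_keys]
    return order, orientations

rng = random.Random(11)
chrom = [rng.random() for _ in range(8)]  # 4 items -> 8 genes in [0, 1)
order, orients = decode(chrom)
```

Because every key vector in [0, 1) decodes to a feasible order and orientation, the genetic operators (crossover, mutation) never have to repair solutions; this is the main appeal of the random-key encoding.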
Carlos J. Corrada Bravo
2017-04-01
Full Text Available We developed a web-based, cloud-hosted system that allows users to archive, listen to, visualize, and annotate recordings. The system also provides tools to convert these annotations into datasets that can be used to train a computer to detect the presence or absence of a species. The algorithm used by the system was selected after comparing the accuracy and efficiency of three variants of template-based detection. The algorithm computes a similarity vector by comparing a template of a species call with time increments across the spectrogram. Statistical features are extracted from this vector and used as input for a Random Forest classifier that predicts the presence or absence of the species in the recording. The fastest algorithm variant had the highest average accuracy and specificity; therefore, it was implemented in the ARBIMON web-based system.
Online Clustering Algorithm Based on Grid Structure
毛国君; 王欣; 竹翠
2011-01-01
As most existing stream clustering algorithms cannot generate online clustering results in real time, an online data stream clustering algorithm based on sliding windows and a density-based grid storage structure is proposed. The algorithm clusters the data stream online at high speed, provides users with real-time clustering results, and detects the dynamic evolution of data streams. Experimental results show that the proposed algorithm has a good capacity for dealing with rapidly evolving data streams while maintaining good clustering quality.
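A minimal sliding-window, density-grid clusterer conveys the idea in the abstract: points are hashed into grid cells over a fixed-length window, and clusters are the connected components of dense cells. The cell size, window length, and density threshold below are arbitrary illustrative parameters.

```python
from collections import deque, defaultdict

class GridStreamClusterer:
    """Sliding-window clustering: dense grid cells merged by 4-connectivity."""
    def __init__(self, cell=1.0, window=100, min_density=3):
        self.cell, self.min_density = cell, min_density
        self.window = deque(maxlen=window)  # oldest points fall out automatically

    def add(self, point):
        self.window.append(point)

    def clusters(self):
        counts = defaultdict(int)
        for x, y in self.window:  # hash each windowed point into a grid cell
            counts[(int(x // self.cell), int(y // self.cell))] += 1
        dense = {c for c, n in counts.items() if n >= self.min_density}
        seen, result = set(), []
        for c in dense:  # flood-fill over adjacent dense cells
            if c in seen:
                continue
            comp, stack = [], [c]
            seen.add(c)
            while stack:
                cx, cy = stack.pop()
                comp.append((cx, cy))
                for nb in ((cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)):
                    if nb in dense and nb not in seen:
                        seen.add(nb)
                        stack.append(nb)
            result.append(comp)
        return result

g = GridStreamClusterer()
for _ in range(5):
    g.add((0.5, 0.5))
    g.add((5.5, 5.5))
groups = g.clusters()  # two separated dense regions
```

The `deque(maxlen=...)` gives the sliding window for free: as new points arrive, old ones are evicted, so calling `clusters()` again later reflects the stream's evolution.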
Borui Li
2014-04-01
Full Text Available Traditional object tracking technology usually regards the target as a point-source object. However, this approximation is no longer appropriate for tracking extended objects such as large targets and closely spaced group objects. Bayesian extended object tracking (EOT) using a random symmetric positive definite (SPD) matrix is a very effective method to jointly estimate the kinematic state and physical extension of the target. The key issue in the application of this random matrix-based EOT approach is to model the physical extension and measurement noise accurately. Model-parameter-adaptive approaches for both the extension dynamics and the measurement noise are proposed in this study, based on the properties of the SPD matrix, to improve the performance of extension estimation. An interacting multi-model algorithm based on the model-parameter-adaptive filter using a random matrix is also presented. Simulation results demonstrate the effectiveness of the proposed adaptive approaches and the multi-model algorithm. The estimation performance for physical extension is better than that of the other algorithms, especially when the target maneuvers, and the kinematic state estimation error is lower as well.
Online Stochastic Ad Allocation: Efficiency and Fairness
Feldman, Jon; Korula, Nitish; Mirrokni, Vahab S; Stein, Cliff
2010-01-01
We study the efficiency and fairness of online stochastic display ad allocation algorithms from a theoretical and practical standpoint. In particular, we study the problem of maximizing efficiency in the presence of stochastic information. In this setting, each advertiser has a maximum demand for impressions of display ads that will arrive online. In our model, inspired by the concept of free disposal in economics, we assume that impressions that are given to an advertiser above her demand are given to her for free. Our main theoretical result is to present a training-based algorithm that achieves a (1-\\epsilon)-approximation guarantee in the random order stochastic model. In the corresponding online matching problem, we learn a dual variable for each advertiser, based on data obtained from a sample of impressions. We also discuss different fairness measures in online ad allocation, based on comparison to an ideal offline fair solution, and develop algorithms to compute "fair" allocations. We then discuss sev...
Chernyavskiy, A. Yu.
2013-01-01
The simple and universal global optimization method based on simplified multipopulation genetic algorithm is presented. The method is applied to quantum information problems. It is compared to the genetic algorithm on standard test functions, and also tested on the calculation of quantum discord and minimal entanglement entropy, which is an entanglement measure for pure multipartite states.
Subexponential lower bounds for randomized pivoting rules for the simplex algorithm
Friedmann, Oliver; Hansen, Thomas Dueholm; Zwick, Uri
2011-01-01
The simplex algorithm is among the most widely used algorithms for solving linear programs in practice. With essentially all deterministic pivoting rules it is known, however, to require an exponential number of steps to solve some linear programs. No non-polynomial lower bounds were known, prior...
Adaptive de-noising method based on wavelet and adaptive learning algorithm in on-line PD monitoring
王立欣; 诸定秋; 蔡惟铮
2002-01-01
It is an important step in the online monitoring of partial discharge (PD) to extract PD pulses from various background noises. An adaptive de-noising method is introduced for noise reduction during the detection of PD pulses. The method is based on the Wavelet Transform (WT), and in the wavelet domain the noise components decomposed at each level are reduced by independent thresholds. Instead of the standard hard thresholding function, a new type of hard thresholding function with a continuous derivative is employed. For the selection of thresholds, an unsupervised learning algorithm based on the gradient of the mean square error (MSE) is presented to search for the optimal threshold for noise reduction; the optimal threshold is selected when the minimum MSE is obtained. Processing simulated signals and on-site experimental data with this method shows that background noises such as narrowband noise can be reduced efficiently. Furthermore, compared with the conventional wavelet de-noising method, the adaptive de-noising method performs better at preserving the pulses and is more adaptive when suppressing the background noises of PD signals.
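The level-wise thresholding idea can be illustrated with a one-level Haar transform and a plain hard threshold. This is a simplified stand-in: the abstract's smoothed hard-thresholding function and MSE-gradient threshold search are not reproduced, and the signal values below are made up. Large transient pulses survive because their detail coefficients exceed the threshold, while small noise details are zeroed.

```python
def haar_denoise(signal, threshold):
    """One-level Haar transform; hard-threshold the details; reconstruct."""
    approx = [(signal[i] + signal[i + 1]) / 2
              for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / 2
              for i in range(0, len(signal) - 1, 2)]
    # hard threshold: keep large details (pulses), zero out small ones (noise)
    detail = [d if abs(d) > threshold else 0.0 for d in detail]
    out = []
    for a, d in zip(approx, detail):  # inverse one-level Haar transform
        out.extend([a + d, a - d])
    return out

# A sharp PD-like pulse (0.0 -> 2.0) riding on small background fluctuations.
noisy = [0.0, 2.0, 1.0, 1.1, 0.9, 1.05]
clean = haar_denoise(noisy, threshold=0.2)
```

After thresholding, the pulse pair is reconstructed exactly while each noisy pair collapses to its local mean, which is the "keep the pulses, suppress the noise" behavior the abstract describes.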
Using Conditional Random Fields to Extract Contexts and Answers of Questions from Online Forums
Ding, Shilin; Cong, Gao; Lin, Chin-Yew;
2008-01-01
on Conditional Random Fields (CRFs) to detect the contexts and answers of questions from forum threads. We improve the basic framework by Skip-chain CRFs and 2D CRFs to better accommodate the features of forums for better performance. Experimental results show that our techniques are very promising....
Wade, Shari L; Walz, Nicolay C; Carey, JoAnne; Williams, Kendra M; Cass, Jennifer; Herren, Luke; Mark, Erin; Yeates, Keith Owen
2010-01-01
To examine the efficacy of teen online problem solving (TOPS) in improving executive function (EF) deficits following traumatic brain injury (TBI) in adolescence. Families of adolescents (aged 11-18 years) with moderate to severe TBI were recruited from the trauma registry of 2 tertiary-care children's hospitals and then randomly assigned to receive TOPS (n = 20), a cognitive-behavioral, skill-building intervention, or access to online resources regarding TBI (Internet resource comparison; n = 21). Parent and teen reports of EF were assessed at baseline and a posttreatment follow-up (mean = 7.88 months later). Improvements in self-reported EF skills were moderated by TBI severity, with teens with severe TBI in the TOPS treatment reporting significantly greater improvements than did those with severe TBI in the Internet resource comparison. The treatment groups did not differ on parent ratings of EF at the follow up. Findings suggest that TOPS may be effective in improving EF skills among teens with severe TBI.
Sayer, Nina A; Noorbaloochi, Siamak; Frazier, Patricia A; Pennebaker, James W; Orazem, Robert J; Schnurr, Paula P; Murdoch, Maureen; Carlson, Kathleen F; Gravely, Amy; Litz, Brett T
2015-10-01
We examined the efficacy of a brief, accessible, nonstigmatizing online intervention: writing expressively about transitioning to civilian life. U.S. Afghanistan and Iraq war veterans with self-reported reintegration difficulty (N = 1,292, 39.3% female, M = 36.87, SD = 9.78 years) were randomly assigned to expressive writing (n = 508), factual control writing (n = 507), or no writing (n = 277). Using intention to treat, generalized linear mixed models demonstrated that 6 months postintervention, veterans who wrote expressively experienced greater reductions in physical complaints, anger, and distress compared with veterans who wrote factually (ds = 0.13 to 0.20; ps …) and greater reductions in reintegration difficulty compared with veterans who did not write at all (ds = 0.22 to 0.35; ps ≤ .001). Veterans who wrote expressively also experienced greater improvement in social support compared to those who did not write (d = 0.17). Relative to both control conditions, expressive writing did not lead to improved life satisfaction. Secondary analyses also found beneficial effects of expressive writing on clinically significant distress, PTSD screening, and employment status. Online expressive writing holds promise for improving health and functioning among veterans experiencing reintegration difficulty, albeit with small effect sizes.
Rachel Baker
2016-10-01
Full Text Available An increasing number of students are taking classes offered online through open-access platforms; however, the vast majority of students who start these classes do not finish. The incongruence of student intentions and subsequent engagement suggests that self-control is a major contributor to this stark lack of persistence. This study presents the results of a large-scale field experiment (N = 18,043) that examines the effects of a self-directed scheduling nudge designed to promote student persistence in a massive open online course. We find that random assignment to treatment had no effects on near-term engagement and weakly significant negative effects on longer-term course engagement, persistence, and performance. Interestingly, these negative effects are highly concentrated in two groups of students: those who registered close to the first day of class and those with .edu e-mail addresses. We consider several explanations for these findings and conclude that theoretically motivated interventions may interact with the diverse motivations of individual students in possibly unintended ways.
Wasantha P. Jayawardene, MD, PhD
2017-03-01
Full Text Available Empirical evidence suggests that mind-body interventions can be effectively delivered online. This study aimed to examine whether preventive online mindfulness interventions (POMI) for non-clinical populations improve short- and long-term outcomes for perceived stress (primary) and mindfulness (secondary). A systematic search of four electronic databases, manuscript reference lists, and journal content lists was conducted in 2016, using 21 search terms. Eight randomized controlled trials (RCTs) evaluating the effects of POMI in non-clinical populations, with adequately reported perceived-stress and mindfulness measures pre- and post-intervention, were included. Random-effects models were utilized for all effect-size estimations, with meta-regression performed for mean age and %females. Participants were volunteers (adults; predominantly female) from academic, workplace, or community settings. Most interventions utilized simplified Mindfulness-Based Stress Reduction protocols over 2–12 week periods. Post-intervention, a significant medium effect was found for perceived stress (g = 0.432), with moderate heterogeneity, and a significant but small effect size for mindfulness (g = 0.275), with low heterogeneity; the highest effects were for middle-aged individuals. At follow-up, a significant large effect was found for perceived stress (g = 0.699), with low heterogeneity, and a significant medium effect (g = 0.466) for mindfulness, with high heterogeneity. No publication bias was found for perceived stress; publication bias found for mindfulness outcomes led to underestimation of effects, not overestimation. The number of eligible RCTs was low, with inadequate data reporting in some studies. POMI had substantial stress-reduction effects and some mindfulness-improvement effects. POMI can be a more convenient and cost-effective strategy, compared to traditional face-to-face interventions, especially in the context of busy, hard-to-reach, but digitally accessible populations.
Evaluation of Laser Based Alignment Algorithms Under Additive Random and Diffraction Noise
McClay, W A; Awwal, A; Wilhelmsen, K; Ferguson, W; McGee, M; Miller, M
2004-09-30
The purpose of the automatic alignment algorithm at the National Ignition Facility (NIF) is to determine the position of a laser beam based on the position of beam features from video images. The position information obtained is used to command motors and attenuators to adjust the beam lines to the desired position, which facilitates the alignment of all 192 beams. One of the goals of the algorithm development effort is to ascertain the performance, reliability, and uncertainty of the position measurement. This paper describes a method of evaluating the performance of algorithms using Monte Carlo simulation. In particular we show the application of this technique to the LM1_LM3 algorithm, which determines the position of a series of two beam light sources. The performance of the algorithm was evaluated for an ensemble of over 900 simulated images with varying image intensities and noise counts, as well as varying diffraction noise amplitude and frequency. The performance of the algorithm on the image data set had a tolerance well beneath the 0.5-pixel system requirement.
Craciun, Catrinel; Schüz, Natalie; Lippke, Sonia; Schwarzer, Ralf
2012-03-01
This study compares a motivational skin cancer prevention approach with a volitional planning and self-efficacy intervention to enhance regular sunscreen use. A randomized controlled trial (RCT) was conducted with 205 women (mean age 25 years) in three groups: motivational; volitional; and control. Sunscreen use, action planning, coping planning and coping self-efficacy were assessed at three points in time. The volitional intervention improved sunscreen use. Coping planning emerged as the only mediator between the intervention and sunscreen use at Time 3. Findings point to the role played by coping planning as an ingredient of sun protection interventions.
Feng, Ju; Shen, Wen Zhong; Xu, Chang
2016-09-01
A new algorithm for multi-objective wind farm layout optimization is presented. It formulates the wind turbine locations as continuous variables and is capable of optimizing the number of turbines and their locations in the wind farm simultaneously. Two objectives are considered. One is to maximize the total power production, which is calculated by considering the wake effects using the Jensen wake model combined with the local wind distribution. The other is to minimize the total electrical cable length. This length is assumed to be the total length of the minimal spanning tree that connects all turbines and is calculated using Prim's algorithm. Constraints on the wind farm boundary and wind turbine proximity are also considered. An ideal test case shows that the proposed algorithm largely outperforms the well-known multi-objective genetic algorithm NSGA-II. In a real test case based on the Horns Rev 1 wind farm, the algorithm also obtains useful Pareto frontiers and provides a wide range of Pareto-optimal layouts with different numbers of turbines for a real-life wind farm developer.
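The cable-length objective the abstract describes — the minimal spanning tree over all turbine positions, computed with Prim's algorithm — can be sketched directly; the wake-model power objective is omitted here, and turbine coordinates are illustrative.

```python
import math

def prim_cable_length(coords):
    """Total length of the minimal spanning tree connecting all turbine
    positions, via Prim's algorithm (O(n^2), fine for typical farm sizes)."""
    n = len(coords)
    if n < 2:
        return 0.0
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    in_tree = [False] * n
    best = [float("inf")] * n     # cheapest edge from the tree to each node
    best[0] = 0.0
    total = 0.0
    for _ in range(n):
        # pick the cheapest node not yet in the tree
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        total += best[u]
        for v in range(n):
            if not in_tree[v]:
                best[v] = min(best[v], dist(coords[u], coords[v]))
    return total

# Four turbines at the corners of a unit square: the MST uses three unit edges
print(prim_cable_length([(0, 0), (1, 0), (0, 1), (1, 1)]))  # → 3.0
```

In a layout optimizer this function would be re-evaluated for each candidate layout, alongside the power-production objective.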
Fales, Jessica; Palermo, Tonya M; Law, Emily F; Wilson, Anna C
2015-01-01
Sleep disturbances are commonly reported in youth with chronic pain. We examined whether online cognitive-behavioral therapy (CBT) for pain management would impact youth's sleep. Subjective sleep quality and actigraphic sleep were evaluated in 33 youth (M = 14.8 years; 70% female) with chronic pain participating in a larger randomized controlled trial of online-CBT. The Internet treatment condition (n = 17) received 8-10 weeks of online-CBT + standard care, and the wait-list control condition (n = 16) continued with standard care. Although pain improved with online-CBT, no changes were observed in sleep outcomes. Shorter pretreatment sleep duration was associated with less improvement in posttreatment functioning. Findings underscore the need for further development in psychological therapies to more intensively target sleep loss in youth with chronic pain.
Onoma, D P; Ruan, S; Thureau, S; Nkhali, L; Modzelewski, R; Monnehan, G A; Vera, P; Gardin, I
2014-12-01
A segmentation algorithm based on the random walk (RW) method, called 3D-LARW, has been developed to delineate small tumors or tumors with a heterogeneous distribution of FDG on PET images. Based on the original RW algorithm [1], we propose an improved approach using new parameters that depend on the Euclidean distance between two adjacent voxels instead of a fixed one, and integrating probability densities of labels into the system of linear equations used in the RW. These improvements were evaluated and compared with the original RW method, thresholding with a fixed value (40% of the maximum in the lesion), an adaptive thresholding algorithm, and the FLAB method, on uniform spheres filled with FDG, on simulated heterogeneous spheres, and on clinical data (14 patients). On these three different data sets, 3D-LARW showed better segmentation results than the original RW algorithm and the three other methods. As expected, the improvements are more pronounced for the segmentation of small tumors or tumors having heterogeneous FDG uptake.
The generalization ability of online SVM classification based on Markov sampling.
Xu, Jie; Yan Tang, Yuan; Zou, Bin; Xu, Zongben; Li, Luoqing; Lu, Yang
2015-03-01
In this paper, we consider online support vector machine (SVM) classification learning algorithms with uniformly ergodic Markov chain (u.e.M.c.) samples. We establish a bound on the misclassification error of an online SVM classification algorithm with u.e.M.c. samples based on reproducing kernel Hilbert spaces and obtain a satisfactory convergence rate. We also introduce a novel online SVM classification algorithm based on Markov sampling, and present numerical studies of the learning ability of online SVM classification based on Markov sampling on benchmark repository datasets. The numerical studies show that the learning performance of the online SVM classification algorithm based on Markov sampling is better than that of classical online SVM classification based on random sampling as the size of the training set grows.
Algorithm for Tree Growth Modeling Based on Random Parameters and ARMA
Lichun Jiang
2013-08-01
The Chapman-Richards function is used to model growth data of dahurian larch (Larix gmelinii Rupr.) from longitudinal measurements using a nonlinear mixed-effects modeling approach. The parameter variation in the model was divided into random effects, fixed effects, and a variance-covariance structure. The values of the fixed-effects parameters and the variance-covariance matrix of the random effects were estimated using the NLME function in S-Plus software. An autocorrelation structure was considered to explain the dependency among multiple measurements within individuals. Information criterion statistics (AIC, BIC) and the likelihood ratio test are used for comparing different structures of the random-effects components. These methods are illustrated using the nonlinear mixed-effects methods in S-Plus software. Results showed that the Chapman-Richards model with three random parameters could typically depict dahurian larch tree growth in northeastern China. The mixed-effects model provided better performance and more precise estimations than the fixed-effects model.
Frisch, Anne-Linda; Camerini, Luca; Schulz, Peter J
2013-01-01
The Internet plays an increasingly important role in health education, providing laypeople with information about health-related topics that range from disease-specific contexts to general health promotion. Compared to traditional health education, the Internet allows the use of multimedia applications that offer promise to enhance individuals' health knowledge and literacy. This study aims at testing the effect of multimedia presentation of health information on learning. Relying on an experimental design, it investigates how retention of information differs for text-only presentation, image-only presentation, and multimedia (text and image) presentation of online health information. Two hundred and forty students were randomly assigned to four groups, each exposed to a different website version. Three groups were exposed to the same information using text-only, image-only, or text-and-image presentation. A fourth group received unrelated information (control group). Retention was assessed by means of a recognition test. To examine a possible interaction between website version and recognition test, half of the students received a recognition test in text form and half of them received a recognition test in imagery form. In line with assumptions from Dual Coding Theory, students exposed to the multimedia (text and image) presentation recognized significantly more information than students exposed to the text-only presentation. This did not hold for students exposed to the image-only presentation. The impact of presentation style on retention scores was moderated by the way retention was assessed for image-only presentation, but not for text-only or multimedia presentation. Possible explanations and implications for the design of online health education interventions are discussed.
Myers, Nicholas D; Prilleltensky, Isaac; Prilleltensky, Ora; McMahon, Adam; Dietz, Samantha; Rubenstein, Carolyn L
2017-03-16
Subjective well-being refers to people's level of satisfaction with life as a whole and with multiple dimensions within it. Interventions that promote subjective well-being are important because there is evidence that physical health, mental health, substance use, and health care costs may be related to subjective well-being. Fun For Wellness (FFW) is a new online universal intervention designed to promote growth in multiple dimensions of subjective well-being. The purpose of this study was to provide an initial evaluation of the efficacy of FFW to increase subjective well-being in multiple dimensions in a universal sample. The study design was a prospective, double-blind, parallel group randomized controlled trial. Data were collected at baseline and 30 and 60 days-post baseline. A total of 479 adult employees at a major university in the southeast of the USA were enrolled. Recruitment, eligibility verification, and data collection were conducted online. Measures of interpersonal, community, occupational, physical, psychological, economic (i.e., I COPPE), and overall subjective well-being were constructed based on responses to the I COPPE Scale. A two-class linear regression model with complier average causal effect estimation was imposed for each dimension of subjective well-being. Participants who complied with the FFW intervention had significantly higher subjective well-being, as compared to potential compliers in the Usual Care group, in the following dimensions: interpersonal at 60 days, community at 30 and 60 days, psychological at 60 days, and economic at 30 and 60 days. Results from this study provide some initial evidence for both the efficacy of, and possible revisions to, the FFW intervention.
Frasca, Paolo; Ishii, Hideaki; Ravazzi, Chiara; Tempo, Roberto
2015-01-01
In this tutorial paper, we study three specific applications: opinion formation in social networks, centrality measures in complex networks and estimation problems in large-scale power systems. These applications fall under a general framework which aims at the construction of algorithms for distrib
Sabina Hirshfield
As HIV infection continues unabated, there is a need for effective interventions targeting at-risk men who have sex with men (MSM). Engaging MSM online where they meet sexual partners is critical for HIV prevention efforts. A randomized controlled trial (RCT) conducted online among U.S. MSM recruited from several gay sexual networking websites assessed the impact of 2 HIV prevention videos and an HIV prevention webpage, compared to a control condition, on the study outcomes HIV testing, serostatus disclosure, and unprotected anal intercourse (UAI) at 60-day follow-up. Video conditions were pooled due to reduced power from low retention (53%, n = 1,631). No participant incentives were provided. Follow-up was completed by 1,631 (53%) of 3,092 eligible men. In the 60 days after the intervention, men in the pooled video condition were significantly more likely than men in the control to report full serostatus disclosure ('asked and told') with their last sexual partner (OR 1.32, 95% CI 1.01-1.74). Comparing baseline to follow-up, HIV-negative men in the pooled video (OR 0.70, 95% CI 0.54-0.91) and webpage conditions (OR 0.43, 95% CI 0.25-0.72) significantly reduced UAI at follow-up. HIV-positive men in the pooled video condition significantly reduced UAI (OR 0.38, 95% CI 0.20-0.67) and serodiscordant UAI (OR 0.53, 95% CI 0.28-0.96) at follow-up. Findings from this online RCT of MSM recruited from sexual networking websites suggest that a low-cost, brief digital media intervention designed to engage critical thinking can increase HIV disclosure to sexual partners and decrease sexual risk. Effective, brief HIV prevention interventions featuring digital media that are made widely available may serve as a complementary part of an overall behavioral and biomedical strategy for reducing sexual risk by addressing the specific needs and circumstances of the target population, and by changing individual knowledge, motivations, and community norms. ClinicalTrials.gov NCT
SLAM algorithm based on random finite set
杜航原; 赵玉新; 杨永鹏; 韩庆楠
2012-01-01
A novel simultaneous localization and mapping (SLAM) algorithm based on random finite set (RFS) theory is proposed. It models the environmental map and the sensor observations with RFSs and establishes the RFS of the joint target state variable. Within a Bayesian estimation framework, a probability hypothesis density filter, realized by a particle filter, estimates the robot pose and the environmental map simultaneously. The new algorithm avoids data association and expresses the multi-feature/multi-observation character of the SLAM problem, as well as information from multiple sensors, more accurately and naturally. Simulations comparing the new algorithm with FastSLAM 2.0 verify the superiority of the new algorithm.
Mass weighted urn design--A new randomization algorithm for unequal allocations.
Zhao, Wenle
2015-07-01
Unequal allocations have been used in clinical trials motivated by ethical, efficiency, or feasibility concerns. The commonly used permuted block randomization faces a tradeoff between effective imbalance control with a small block size and an accurate allocation target with a large block size. Few other unequal-allocation randomization designs have been proposed in the literature, and applications in real trials have hardly ever been reported, partly due to their complexity in implementation compared to permuted block randomization. Proposed in this paper is the mass weighted urn design, in which the number of balls in the urn equals the number of treatments and remains unchanged during the study. The chance of a ball being randomly selected is proportional to the mass of the ball. After each treatment assignment, a part of the mass of the selected ball is redistributed to all balls based on the target allocation ratio. This design allows any desired optimal unequal allocation to be accurately targeted without approximation, and provides consistent imbalance control throughout the allocation sequence. The statistical properties of this new design are evaluated with the Euclidean distance between the observed treatment distribution and the desired treatment distribution as the treatment imbalance measure, and the Euclidean distance between the conditional allocation probability and the target allocation probability as the allocation predictability measure. Computer simulation results are presented comparing the mass weighted urn design with other randomization designs currently available for unequal allocations.
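The urn mechanics described above can be simulated with a short sketch. The redistribution rule below is one plausible reading of the abstract: the drawn ball loses one unit of mass and every ball gains back its target share, so the total mass (here called `alpha`, an assumed parameter name) never changes; the exact fraction redistributed in the published design may differ.

```python
import random

def mwud_sequence(targets, n, alpha=4.0, seed=1):
    """Simulate a mass weighted urn design (one plausible reading of the
    redistribution rule). Ball i starts with mass alpha*targets[i]; a ball is
    drawn with probability proportional to its non-negative mass; the drawn
    ball loses one unit of mass and every ball gains its target share back."""
    rng = random.Random(seed)
    mass = [alpha * r for r in targets]
    assignments = []
    for _ in range(n):
        weights = [max(m, 0.0) for m in mass]   # negative-mass balls sit out
        k = rng.choices(range(len(targets)), weights=weights)[0]
        assignments.append(k)
        for i, r in enumerate(targets):
            mass[i] += r - (1.0 if i == k else 0.0)
        assert abs(sum(mass) - alpha) < 1e-9    # total mass is invariant
    return assignments

# Target a 2:1 allocation between two treatments
seq = mwud_sequence([2 / 3, 1 / 3], 3000)
print(seq.count(0) / len(seq))   # stays very close to 2/3
```

Because the mass of each ball is bounded, the running imbalance between observed and target allocation is bounded by a constant (of order `alpha`), which is exactly the consistent imbalance control the abstract claims.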
Hasuike, Takashi; Katagiri, Hideki
2010-10-01
This paper proposes a portfolio selection problem that considers an investor's subjectivity, together with a sensitivity analysis for changes in that subjectivity. Since the proposed problem is formulated as a random fuzzy programming problem, due to both randomness and subjectivity represented by fuzzy numbers, it is not well-defined. Therefore, by introducing the Sharpe ratio, one of the most important performance measures of portfolio models, the main problem is transformed into a standard fuzzy programming problem. Furthermore, using the sensitivity analysis for fuzziness, the analytical optimal portfolio with the sensitivity factor is obtained.
AN EVENT DRIVEN SIMULATION FOR ADAPTIVE GENTLE RANDOM EARLY DETECTION (AGRED) ALGORITHM
Omid Seifaddini
2014-01-01
Simulations are used to find optimum answers for problems in wide-ranging areas. Active queue management algorithms such as RED and GRED typically use simulators like ns2, which is an open-source simulator, or commercial simulators such as OPNET and OMNET. However, besides the benefits of using simulators, such as having predefined modules and parameters, there are problems such as complexity, large integrated components, and licensing cost. To strike an ideal balance between these benefits and problems, and to further complement the repository of simulators, this study presents a discrete event simulation for active queue management based on a general-purpose programming language. This research has focused on developing a discrete event simulator implementing one of the active queue management algorithms, called AGRED. The results showed that the developed simulator successfully produced the same results as the previous simulator, with an average deviation of 1.5%.
肖鸣宇; 沈正翔
2014-01-01
Based on an analysis of the off-line problems, it proposes a 2-competitive online algorithm for the regular cost subproblem and proves the optimality of this competitive ratio. Building on the online algorithm for the regular cost subproblem, it presents a 4-competitive online algorithm for the ski-rental problem with multiple discount options, which is also optimal. Experimental results on real and random data are also given to show the effectiveness of the algorithms.
Friend Recommendation Algorithm Based on Mixed Graph in Online Social Networks
俞琰; 邱广华; 陈爱萍
2011-01-01
Aiming at friend recommendation in online social networks, this paper fuses multiple social networks into one mixed graph, on which a random walk with restart is implemented to provide personalized friend recommendation for users. The different roles of these networks are adjusted by weight parameters. Experiments demonstrate that this algorithm improves the accuracy of friend recommendation in online social networks.
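The core of the recommender, a random walk with restart over the fused graph, can be sketched as below; the toy graph, restart probability, and tolerance are illustrative assumptions, and in the paper's setting the adjacency matrix would carry per-network weights.

```python
import numpy as np

def random_walk_with_restart(adj, seed_node, restart=0.15, tol=1e-10):
    """Score all nodes of a graph by a random walk that restarts at seed_node;
    high-scoring non-neighbors of seed_node are the friend recommendations.
    Assumes every node has at least one edge (no zero columns)."""
    A = np.asarray(adj, dtype=float)
    P = A / A.sum(axis=0, keepdims=True)        # column-stochastic transitions
    e = np.zeros(A.shape[0]); e[seed_node] = 1.0
    p = e.copy()
    while True:
        p_next = (1 - restart) * (P @ p) + restart * e
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next

# Toy undirected graph: edges 0-1, 0-2, 2-3; node 3 is node 0's only non-neighbor.
adj = [[0, 1, 1, 0],
       [1, 0, 0, 0],
       [1, 0, 0, 1],
       [0, 0, 1, 0]]
scores = random_walk_with_restart(adj, seed_node=0)
print(scores.round(3))
```

To produce recommendations for `seed_node`, the scores are sorted and existing neighbors filtered out; in the toy graph above, node 3 is the (only) candidate friend.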
Algorithms for White-box Obfuscation Using Randomized Subcircuit Selection and Replacement
2008-03-27
CORGI is a Java application that employs a model-view-controller (MVC) architecture. One objective of the research is to develop a software architecture for developing and testing random selection schemas for obfuscating a circuit via subcircuit selection and replacement.
[Segmentation of Winter Wheat Canopy Image Based on Visual Spectral and Random Forest Algorithm].
Liu, Ya-dong; Cui, Ri-xian
2015-12-01
Digital image analysis has been widely used in non-destructive monitoring of crop growth and nitrogen nutrition status due to its simplicity and efficiency. It is necessary to segment the winter wheat plant from the soil background for assessing canopy cover, the intensity levels of the visible spectrum (R, G, and B), and other color indices derived from RGB. In the present study, according to the variation in the R, G, and B components of the sRGB color space and the L*, a*, and b* components of the CIEL*a*b* color space between wheat plant and soil background, segmentation of the wheat plant from the soil background was conducted by Otsu's method based on the a* component of the CIEL*a*b* color space, by an RGB-based random forest method, and by a CIEL*a*b*-based random forest method, respectively. The ability to segment wheat plant from soil background was evaluated by the segmentation accuracy. The results showed that all three methods have good ability to segment wheat plant from soil background. Otsu's method had the lowest segmentation accuracy in comparison with the other two methods, while there was only a small difference in segmentation error between the two random forest methods. In conclusion, the random forest method is capable of segmenting wheat plant from soil background using only the visible spectral information of the canopy image, without any color-component combinations or any color space transformation.
An Overview on Randomized Algorithms for Analysis and Control of Uncertain Systems
2003-05-01
ALTINOZ, O. T.
2014-08-01
Nature-inspired optimization algorithms obtain the optimum by updating the position of each member of the population. At the beginning of the algorithm, the particles of the population are spread over the search space. The initial distribution of particles corresponds to the starting points of the search process. Hence, the aim is to alter the position of each particle, beginning from this initial position, until the optimum solution is found with respect to pre-determined conditions such as the maximum iteration count or a specific error value for the fitness function. Therefore, the initial positions of the population have a direct effect on both the accuracy of the optimum and the computational cost. If any member of the population is close enough to the optimum, this eases the achievement of the exact solution. On the contrary, individuals grouped far away from the optimum might yield pointless effort. In this study, a low-discrepancy quasi-random number sequence is preferred for the placement of the population in the initialization phase. In this way, the population is distributed over the search space in a more uniform manner at initialization. The technique is applied to the Gravitational Search Algorithm and compared via performance on benchmark function solutions.
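A low-discrepancy initializer of the kind described can be sketched with a Halton sequence (the specific sequence used in the paper is not stated, so Halton is an assumption); each dimension uses a van der Corput sequence in a distinct prime base.

```python
def van_der_corput(i, base):
    """i-th element of the van der Corput low-discrepancy sequence in `base`:
    reflect the base-`base` digits of i about the radix point."""
    x, denom = 0.0, 1.0
    while i > 0:
        i, digit = divmod(i, base)
        denom *= base
        x += digit / denom
    return x

def halton_population(n, dim, lo=0.0, hi=1.0):
    """Initialize n agents in `dim` dimensions (dim <= 6 here) with a Halton
    sequence, one prime base per dimension, instead of pseudo-random draws."""
    primes = [2, 3, 5, 7, 11, 13][:dim]
    return [[lo + (hi - lo) * van_der_corput(i, b) for b in primes]
            for i in range(1, n + 1)]

# First point of the 2-D Halton sequence is (1/2, 1/3)
pop = halton_population(8, 2)
print(pop[0])
```

Replacing the usual uniform-random initialization with `halton_population` gives a population whose points cover the search space far more evenly for the same population size.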
Tamascelli, D; Plenio, M B
2015-01-01
When the amount of entanglement in a quantum system is limited, the relevant dynamics of the system is restricted to a very small part of the state space. When restricted to this subspace, the description of the system becomes efficient in the system size. A class of algorithms, exemplified by the Time-Evolving Block-Decimation (TEBD) algorithm, makes use of this observation by selecting the relevant subspace through a decimation technique relying on the Singular Value Decomposition (SVD). In these algorithms, the complexity of each time-evolution step is dominated by the SVD. Here we show that, by applying a randomized version of the SVD routine (RRSVD), the power law governing the computational complexity of TEBD is lowered by one degree, resulting in a considerable speed-up. We exemplify the potential gains in efficiency on some real-world examples to which TEBD can be successfully applied, and demonstrate that for those systems RRSVD delivers results as accurate as the state-of-the-art deterministic...
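The speed-up comes from replacing the exact SVD with a randomized one. A minimal randomized range-finder SVD in the spirit of RRSVD is sketched below; the oversampling and power-iteration parameters are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def randomized_svd(A, rank, oversample=10, n_iter=2, seed=0):
    """Randomized range-finder SVD: project A onto a random low-dimensional
    subspace, orthonormalize, then take the exact SVD of the much smaller
    projected matrix (Halko-style sketch)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    k = min(rank + oversample, n)
    Q = np.linalg.qr(A @ rng.standard_normal((n, k)))[0]
    for _ in range(n_iter):                 # power iterations sharpen the basis
        Q = np.linalg.qr(A.T @ Q)[0]
        Q = np.linalg.qr(A @ Q)[0]
    Ub, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]

# A matrix of exact rank 3 is recovered almost exactly from the sketch
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 3)) @ rng.standard_normal((3, 150))
U, s, Vt = randomized_svd(A, rank=3)
print(np.allclose(A, (U * s) @ Vt, atol=1e-8))  # → True
```

The cost is dominated by multiplying A with thin matrices rather than by a full SVD, which is the one-degree reduction in the complexity power law that the abstract refers to.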
Polan, D [University of Michigan, Ann Arbor, MI (United States); Brady, S; Kaufman, R [St. Jude Children’s Research Hospital, Memphis, TN (United States)
2016-06-15
Purpose: Develop an automated Random Forest algorithm for tissue segmentation of CT examinations. Methods: Seven materials were classified for segmentation: background, lung/internal gas, fat, muscle, solid organ parenchyma, blood/contrast, and bone, using Matlab and the Trainable Weka Segmentation (TWS) plugin of FIJI. The following classifier feature filters of TWS were investigated: minimum, maximum, mean, and variance, each evaluated over a pixel radius of 2^n (n = 0–4). Noise-reduction and edge-preserving filters (Gaussian, bilateral, Kuwahara, and anisotropic diffusion) were also evaluated. The algorithm used 200 trees with 2 features per node. A training data set was established using an anonymized patient's (male, 20 yr, 72 kg) chest-abdomen-pelvis CT examination. To establish segmentation ground truth, the training data were manually segmented using Eclipse planning software, and an intra-observer reproducibility test was conducted. Six additional patient data sets were segmented based on classifier data generated from the training data. Accuracy of segmentation was determined by calculating the Dice similarity coefficient (DSC) between manually and auto-segmented images. Results: The optimized auto-segmentation algorithm resulted in 16 features calculated using maximum, mean, variance, and Gaussian blur filters with kernel radii of 1, 2, and 4 pixels, in addition to the original CT number, and a Kuwahara filter (linear kernel of 19 pixels). Ground truth had a DSC of 0.94 (range: 0.90–0.99) for adult and 0.92 (range: 0.85–0.99) for pediatric data sets across all seven segmentation classes. The automated algorithm produced segmentation with an average DSC of 0.85 ± 0.04 (range: 0.81–1.00) for the adult patients, and 0.86 ± 0.03 (range: 0.80–0.99) for the pediatric patients. Conclusion: The TWS Random Forest auto-segmentation algorithm was optimized for the CT environment, and was able to segment seven material classes over a range of body habitus and CT
Diehl, Leandro Arthur; Souza, Rodrigo Martins; Gordan, Pedro Alejandro; Esteves, Roberto Zonato; Coelho, Izabel Cristina Meister
2017-03-09
Most patients with diabetes mellitus (DM) are followed by primary care physicians, who often lack the knowledge or confidence to prescribe insulin properly. This contributes to clinical inertia and poor glycemic control. The effectiveness of traditional continuing medical education (CME) to solve that is limited, so new approaches are required. Electronic games are a good option, as they can be very effective and easily disseminated. The objective of our study was to assess the applicability, user acceptance, and educational effectiveness of InsuOnline, an electronic serious game for medical education on insulin therapy for DM, compared with a traditional CME activity. Primary care physicians (PCPs) from the South of Brazil were invited by phone or email to participate in an unblinded randomized controlled trial and randomly allocated either to play the game InsuOnline, installed as an app on their own computers, at the time of their choice, with minimal or no external guidance, or to participate in a traditional CME session composed of onsite lectures and case discussions. Both interventions had the same content and duration (~4 h). Applicability was assessed by the number of subjects who completed the assigned intervention in each group. Insulin-prescribing competence (factual knowledge, problem-solving skills, and attitudes) was self-assessed through a questionnaire applied before, immediately after, and 3 months after the interventions. Acceptance of the intervention (satisfaction and perceived importance for clinical practice) was also assessed immediately after and 3 months after the interventions, respectively. Subjects' characteristics were similar between groups (mean age 38, 51.4% [69/134] male). In the game group, 69 of 88 (78%) completed the intervention, compared with 65 of 73 (89%) in the control group, with no difference in applicability. The percentage of right answers on the competence subscale, which was 52% at baseline in both groups, significantly improved
Online Learning Based Video Segmentation Algorithm
王爱平; 潘衡岳; 李思昆
2012-01-01
A novel online-learning video segmentation algorithm is proposed, combining the global and local information of video images to achieve accurate segmentation. The video frames are first pre-segmented by an unsupervised image segmentation method, and a coarse pixel-level foreground/background map is then extracted by applying a classifier to the pre-segmentation result. After that, the final accurate pixel-level foreground/background segmentation is obtained by local smoothing through spatial-temporal conditional random field optimization, and the classifier is updated under the constraints of the segmentation result. Meanwhile, a balanced sampling strategy and a segmentation-supervised sample-updating approach are proposed, to ensure accurate initialization and efficient, stable online learning of the classifier, respectively. Experiments on challenging real video sequences show that, compared to existing methods, the proposed algorithm markedly improves the accuracy and stability of segmentation at low time cost.
A new logistic dynamic particle swarm optimization algorithm based on random topology.
Ni, Qingjian; Deng, Jianming
2013-01-01
The population topology of particle swarm optimization (PSO) directly affects the dissemination of optimal information during the evolutionary process and has a significant impact on the performance of PSO. Classic static population topologies are usually used in PSO, such as the fully connected, ring, star, and square topologies. In this paper, the performance of PSO with the proposed random topologies is analyzed, and the relationship between population topology and the performance of PSO is also explored from the perspective of graph-theoretic characteristics of population topologies. Further, in a relatively new PSO variant named logistic dynamic particle swarm optimization, an extensive simulation study is presented to discuss the effectiveness of the random topology and the design strategies of population topology. Finally, the experimental data are analyzed and discussed, and some useful conclusions about the design and use of population topology in PSO are proposed, which can provide a basis for further discussion and research.
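A random population topology of the general kind discussed can be constructed in a few lines. The scheme below, each particle's neighborhood being itself plus k uniformly chosen other particles, is a common construction and an assumption here, not necessarily the paper's exact scheme.

```python
import random

def random_topology(n, k=3, seed=0):
    """Random PSO neighborhood structure: neighbors[i] is the set of particles
    whose best-known positions inform particle i's velocity update; each set
    contains i itself plus k distinct particles chosen uniformly at random."""
    rng = random.Random(seed)
    return [{i} | set(rng.sample([j for j in range(n) if j != i], k))
            for i in range(n)]

# 20 particles, each informed by itself and 3 random others
nbrs = random_topology(20, k=3)
print(sorted(nbrs[0]))
```

During the PSO velocity update, particle i would then track the best position found so far among `nbrs[i]` rather than the global best, so k interpolates between a sparse ring-like topology (small k) and the fully connected topology (k = n - 1).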
The adaptive dynamic community detection algorithm based on the non-homogeneous random walking
Xin, Yu; Xie, Zhi-Qiang; Yang, Jing
2016-05-01
As habits and customs change, people's social activity tends to vary, so a community evolution analysis method is required to mine the dynamic information in social networks. To this end, we design a random-walk probability function and a topology gain function to calculate the global influence matrix of the nodes. Analyzing the global influence matrix yields the clustering directions of the nodes, from which the NRW (Non-Homogeneous Random Walk) method for detecting static overlapping communities is established. We then design the ANRW (Adaptive Non-Homogeneous Random Walk) method, which adapts the nodes affected by dynamic events on top of NRW. ANRW combines local community detection with dynamic adaptive adjustment to decrease its computational cost. Furthermore, because ANRW treats each node as the unit of computation, it is well suited to parallel computing and can meet the requirements of large-scale data mining. Finally, experimental analysis verifies the efficiency of ANRW for dynamic community detection.
Huang, Alison J.; Hess, Rachel; Arya, Lily A.; Richter, Holly E.; Subak, Leslee L.; Bradley, Catherine S.; Rogers, Rebecca G.; Myers, Deborah L.; Johnson, Karen C.; Gregory, W. Thomas; Kraus, Stephen R.; Schembri, Michael; Brown, Jeanette S.
2013-01-01
Objective The purpose of this study was to evaluate clinical outcomes associated with the initiation of treatment for urgency-predominant incontinence in women diagnosed by a simple 3-item questionnaire. Study Design We conducted a multicenter, double-blinded, 12-week randomized trial of pharmacologic therapy for urgency-predominant incontinence in ambulatory women diagnosed by the simple 3-item questionnaire. Participants (N = 645) were assigned randomly to fesoterodine therapy (4-8 mg daily) or placebo. Urinary incontinence was assessed with the use of voiding diaries; postvoid residual volume was measured after treatment. Results After 12 weeks, women who had been assigned randomly to fesoterodine therapy reported 0.9 fewer urgency and 1.0 fewer total incontinence episodes/day, compared with placebo (P ≤ .001). Four serious adverse events occurred in each group, none of which was related to treatment. No participant had postvoid residual volume of ≥250 mL after treatment. Conclusion Among ambulatory women with urgency-predominant incontinence diagnosed with a simple 3-item questionnaire, pharmacologic therapy resulted in a moderate decrease in incontinence frequency without increasing significant urinary retention or serious adverse events, which provides support for a streamlined algorithm for diagnosis and treatment of female urgency-predominant incontinence. PMID:22542122
Robust 3D object localization and pose estimation for random bin picking with the 3DMaMa algorithm
Skotheim, Øystein; Thielemann, Jens T.; Berge, Asbjørn; Sommerfelt, Arne
2010-02-01
Enabling robots to automatically locate and pick up randomly placed and oriented objects from a bin is an important challenge in factory automation, replacing tedious and heavy manual labor. A system should be able to recognize and locate objects with a predefined shape and estimate the position with the precision necessary for a gripping robot to pick it up. We describe a system that consists of a structured light instrument for capturing 3D data and a robust approach for object location and pose estimation. The method does not depend on segmentation of range images, but instead searches through pairs of 2D manifolds to localize candidates for object match. This leads to an algorithm that is not very sensitive to scene complexity or the number of objects in the scene. Furthermore, the strategy for candidate search is easily reconfigurable to arbitrary objects. Experiments reported in this paper show the utility of the method on a general random bin picking problem, in this paper exemplified by localization of car parts with random position and orientation. Full pose estimation is done in less than 380 ms per image. We believe that the method is applicable for a wide range of industrial automation problems where precise localization of 3D objects in a scene is needed.
张卫国; 张永; 徐维军; 杨兴雨
2013-01-01
Based on the characteristics of depreciable equipment, this paper first studies an online algorithm that acquires equipment by randomly choosing between leasing and buying. The diversification of commodities means that equipment with the same function can have different depreciation rates and purchase prices. In view of this, the paper further presents a randomized transformation strategy for the online leasing of depreciable equipment: it reduces the randomized choice among several ways of acquiring depreciable equipment to a randomized choice between two, and obtains a depreciation-dependent upper bound on the competitive ratio. Compared with the randomized strategies for the classic leasing problem, introducing depreciation and the transformation strategy decreases the competitive ratio and thus improves competitive performance.
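The randomized lease-or-buy idea underlying such strategies can be illustrated with the classic ski-rental sketch below. This ignores depreciation entirely, so it is not the paper's strategy; `b` is the purchase price with unit rent per period, and the buy time is drawn from the density that makes the strategy e/(e-1)-competitive:

```python
import math
import random

def sample_buy_threshold(b, rng):
    """Draw the rent-until time T from the density p(t) = e^(t/b) / (b(e-1))
    on [0, b] by inverse-transform sampling; buying at time T gives the
    classic e/(e-1)-competitive randomized rent-or-buy strategy."""
    u = rng.random()
    return b * math.log(1 + u * (math.e - 1))

def online_cost(duration, b, threshold):
    """Cost of renting (1 per unit time) until `threshold`, then buying
    at price b, when the equipment is actually needed for `duration`."""
    if duration <= threshold:
        return duration
    return threshold + b
```

For any usage duration, the expected cost of this strategy is e/(e-1) times the offline optimum min(duration, b), which Monte Carlo simulation confirms.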
余世明; 王海清
2003-01-01
An improved generalized predictive control algorithm is presented in this paper by incorporating offline identification into online identification. Unlike existing generalized predictive control algorithms, the proposed approach divides the parameters of the predictive model into time-invariant and time-varying ones, which are handled by offline and online identification algorithms, respectively. Both the reliability and the accuracy of the predictive model are thereby improved. Two simulation examples of the control of a fixed-bed reactor show that the new algorithm is not only reliable and stable under uncertainties and abnormal disturbances, but also adapts to slowly time-varying processes.
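Online identification of the time-varying parameters is typically done with recursive least squares; the sketch below shows one standard RLS step with a forgetting factor. This is a generic textbook update under assumed notation, not the paper's specific algorithm:

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One recursive least-squares step with forgetting factor lam.
    theta: current parameter estimate; P: covariance matrix;
    phi: regressor vector; y: measured output."""
    e = y - phi @ theta              # prediction error
    Pphi = P @ phi
    K = Pphi / (lam + phi @ Pphi)    # gain vector
    theta = theta + K * e
    P = (P - np.outer(K, Pphi)) / lam
    return theta, P
```

In a noise-free setting the estimate converges to the true parameters after a few hundred regressor samples; the time-invariant parameters identified offline would simply be held fixed outside this loop.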
Gottlieb, Michael M; Arenillas, David J; Maithripala, Savanie; Maurer, Zachary D; Tarailo Graovac, Maja; Armstrong, Linlea; Patel, Millan; van Karnebeek, Clara; Wasserman, Wyeth W
2015-04-01
Advances in next-generation sequencing (NGS) technologies have helped reveal causal variants for genetic diseases. In order to establish causality, it is often necessary to compare genomes of unrelated individuals with similar disease phenotypes to identify common disrupted genes. When working with cases of rare genetic disorders, finding similar individuals can be extremely difficult. We introduce a web tool, GeneYenta, which facilitates the matchmaking process, allowing clinicians to coordinate detailed comparisons for phenotypically similar cases. Importantly, the system is focused on phenotype annotation, with explicit limitations on highly confidential data that create barriers to participation. The procedure for matching of patient phenotypes, inspired by online dating services, uses an ontology-based semantic case matching algorithm with attribute weighting. We evaluate the capacity of the system using a curated reference data set and 19 clinician entered cases comparing four matching algorithms. We find that the inclusion of clinician weights can augment phenotype matching.
Self-Tuning Random Early Detection Algorithm to Improve Performance of Network Transmission
Jianyong Chen
2011-01-01
We use a discrete-time dynamical feedback system model of TCP/RED to study the performance of Random Early Detection (RED) for different values of its control parameters. Our analysis shows that the queue length can be kept stable at a given target if the maximum probability pmax and the exponential averaging weight w satisfy certain conditions. Based on this mathematical analysis, a new self-tuning RED is proposed to improve the performance of TCP-RED networks. An appropriate pmax is obtained dynamically from the history of both pmax and the average queue size over a period of time, and w is chosen according to a linear stability condition on the average queue length. Simulations with ns-2 show that the self-tuning RED is more robust in stabilizing the queue length, with less deviation from the target and smaller fluctuation amplitude, than adaptive RED, Random Early Marking (REM), and the Proportional-Integral (PI) controller.
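The RED mechanism being tuned can be summarized in a few lines: an EWMA of the queue length and a piecewise-linear drop probability. The `tune_pmax` rule is only an illustrative nudge; the paper derives its actual update of pmax from history and a stability condition on w, which is not reproduced here:

```python
def ewma(avg, q, w):
    """Exponentially weighted moving average of the instantaneous queue q."""
    return (1 - w) * avg + w * q

def red_drop_prob(avg, min_th, max_th, p_max):
    """Classic RED drop/mark probability as a function of the average queue."""
    if avg < min_th:
        return 0.0
    if avg >= max_th:
        return 1.0
    return p_max * (avg - min_th) / (max_th - min_th)

def tune_pmax(p_max, avg, target, step=0.01, lo=0.01, hi=0.5):
    """Toy self-tuning step: push p_max up when the average queue exceeds
    the target and decay it otherwise (hypothetical rule, for illustration)."""
    if avg > target:
        return min(hi, p_max + step)
    if avg < target:
        return max(lo, p_max * 0.9)
    return p_max
```

Between min_th and max_th the drop probability rises linearly to p_max, which is exactly the knob the self-tuning scheme adjusts.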
Minimum-energy broadcast in random-grid ad-hoc networks: approximation and distributed algorithms
Calamoneri, Tiziana; Monti, Angelo; Rossi, Gianluca; Silvestri, Riccardo
2008-01-01
The Min Energy broadcast problem consists in assigning transmission ranges to the nodes of an ad-hoc network in order to guarantee a directed spanning tree from a given source node and, at the same time, to minimize the energy consumption (i.e. the energy cost) yielded by the range assignment. Min energy broadcast is known to be NP-hard. We consider random-grid networks where nodes are chosen independently at random from the $n$ points of a $\\sqrt n \\times \\sqrt n$ square grid in the plane. The probability of the existence of a node at a given point of the grid does depend on that point, that is, the probability distribution can be non-uniform. By using information-theoretic arguments, we prove a lower bound $(1-\\epsilon) \\frac n{\\pi}$ on the energy cost of any feasible solution for this problem. Then, we provide an efficient solution of energy cost not larger than $1.1204 \\frac n{\\pi}$. Finally, we present a fully-distributed protocol that constructs a broadcast range assignment of energy cost not larger tha...
Online learning algorithm of Gaussian process based on adaptive natural gradient
申倩倩; 孙宗海
2011-01-01
To satisfy the real-time requirements of online modeling, this paper proposes using the adaptive natural gradient (ANG) method in online Gaussian process training, yielding an online Gaussian process learning algorithm based on the adaptive natural gradient. The algorithm is applied to modeling the Mackey-Glass system and a continuous stirred tank reactor (CSTR), and is compared with the sparse online Gaussian process algorithm. The simulation results show that the algorithm meets the real-time and accuracy requirements of nonlinear system modeling, while overcoming the drawbacks of other online algorithms, which demand substantial computational resources and fail to meet real-time requirements.
Akkoç, Betül; Arslan, Ahmet; Kök, Hatice
2016-06-01
Gender is one of the intrinsic properties of identity, and determining it narrows the candidate set when a search is performed. Teeth have a durable and resistant structure, and as such are important sources of identification in disasters (accidents, fires, etc.). In this study, gender determination is performed on maxillary tooth plaster models of 40 people (20 males and 20 females). Images of the tooth plaster models are taken with a lighting mechanism set-up. A gray-level co-occurrence matrix of the segmented image is formed and classified via a Random Forest (RF) algorithm using pertinent features extracted from the matrix. Automatic gender determination achieves a 90% success rate, yielding a system applicable to determining gender from maxillary tooth plaster images.
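The feature pipeline above can be sketched as follows: a gray-level co-occurrence matrix (GLCM) for one pixel offset, plus a few Haralick-style statistics of the kind typically fed to a Random Forest. This is a minimal illustration, not the study's implementation; the function names are hypothetical:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one offset (dx, dy),
    normalized so its entries sum to 1."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def haralick_features(m):
    """Contrast, homogeneity, and energy of a normalized GLCM --
    typical inputs to a Random Forest classifier."""
    i, j = np.indices(m.shape)
    contrast = ((i - j) ** 2 * m).sum()
    homogeneity = (m / (1 + np.abs(i - j))).sum()
    energy = (m ** 2).sum()
    return contrast, homogeneity, energy
```

A perfectly uniform image has zero contrast and maximal homogeneity and energy, which makes the features easy to sanity-check.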
Paul Wallace
BACKGROUND: Interventions delivered via the Internet have the potential to address the problem of hazardous alcohol consumption at minimal incremental cost, with potentially major public health implications. It was hypothesised that providing access to a psychologically enhanced website would result in greater reductions in drinking and related problems than giving access to a typical alcohol website simply providing information on the potential harms of alcohol. DYD-RCT Trial registration: ISRCTN 31070347. METHODOLOGY/PRINCIPAL FINDINGS: A two-arm randomised controlled trial was conducted entirely online through the Down Your Drink (DYD) website. A total of 7935 individuals who screened positive for hazardous alcohol consumption were recruited and randomized. At entry to the trial, the geometric mean reported past-week alcohol consumption was 46.0 (SD 31.2) units. Consumption levels reduced substantially in both groups at the principal 3-month assessment point, to an average of 26.0 (SD 22.3) units. Similar changes were reported at 1 month and 12 months. There were no significant differences between the groups for either alcohol consumption at 3 months (intervention:control ratio of geometric means 1.03, 95% CI 0.97 to 1.10) or for this outcome and the main secondary outcomes at any of the assessments. The results were not materially changed following imputation of missing values, nor was there any evidence that the impact of the intervention varied with baseline measures or level of exposure to the intervention. CONCLUSIONS/SIGNIFICANCE: The findings did not support the hypothesis that access to a psychologically enhanced website confers additional benefit over standard practice, and indicate the need for further research to optimise the effectiveness of Internet-based behavioural interventions. The trial demonstrates a widespread and potentially sustainable demand for Internet-based interventions for people with hazardous alcohol consumption.
Mickevicius, Nikolai J.; Paulson, Eric S.
2017-04-01
The purpose of this work is to investigate the effects of undersampling and reconstruction algorithm on the total processing time and image quality of respiratory phase-resolved 4D MRI data. Specifically, the goal is to obtain quality 4D-MRI data with a combined acquisition and reconstruction time of five minutes or less, which we reasoned would be satisfactory for pre-treatment 4D-MRI in online MRI-gRT. A 3D stack-of-stars, self-navigated, 4D-MRI acquisition was used to scan three healthy volunteers at three image resolutions and two scan durations. The NUFFT, CG-SENSE, SPIRiT, and XD-GRASP reconstruction algorithms were used to reconstruct each dataset on a high performance reconstruction computer. The overall image quality, reconstruction time, artifact prevalence, and motion estimates were compared. The CG-SENSE and XD-GRASP reconstructions provided superior image quality over the other algorithms. The combination of a 3D SoS sequence and parallelized reconstruction algorithms using computing hardware more advanced than those typically seen on product MRI scanners, can result in acquisition and reconstruction of high quality respiratory correlated 4D-MRI images in less than five minutes.
Rochman, Auliya Noor; Prasetyo, Hari; Nugroho, Munajat Tri
2017-06-01
The Vehicle Routing Problem (VRP) often occurs when manufacturers need to distribute their product to customers/outlets. The distribution process is typically restricted by the capacity of the vehicle and the working hours at the distributor; this type of VRP is known as the Capacitated Vehicle Routing Problem with Time Windows (CVRPTW). A Biased Random Key Genetic Algorithm (BRKGA) was designed and coded in MATLAB to solve a CVRPTW case of soft drink distribution. The standard BRKGA was then modified by applying chromosome insertion into the initial population and by defining chromosome gender for parents undergoing the crossover operation. The performance of the resulting algorithms was compared to a heuristic procedure for the same soft drink distribution problem. Some findings are revealed: (1) the total distribution cost of BRKGA with insertion (BRKGA-I) yields a cost saving of 39% compared to the total cost of the heuristic method; (2) BRKGA with gender selection (BRKGA-GS) can further improve on the heuristic method, although it tends to yield worse results than the standard BRKGA.
Zhu, Yanjie; Tan, Yongqing; Hua, Yanqing; Zhang, Guozhen; Zhang, Jianguo
2012-06-01
Chest radiologists rely on the segmentation and quantificational analysis of ground-glass opacities (GGO) to perform imaging diagnoses that evaluate the disease severity or recovery stages of diffuse parenchymal lung diseases. However, it is computationally difficult to segment and analyze patterns of GGO while compared with other lung diseases, since GGO usually do not have clear boundaries. In this paper, we present a new approach which automatically segments GGO in lung computed tomography (CT) images using algorithms derived from Markov random field theory. Further, we systematically evaluate the performance of the algorithms in segmenting GGO in lung CT images under different situations. CT image studies from 41 patients with diffuse lung diseases were enrolled in this research. The local distributions were modeled with both simple and adaptive (AMAP) models of maximum a posteriori (MAP). For best segmentation, we used the simulated annealing algorithm with a Gibbs sampler to solve the combinatorial optimization problem of MAP estimators, and we applied a knowledge-guided strategy to reduce false positive regions. We achieved AMAP-based GGO segmentation results of 86.94%, 94.33%, and 94.06% in average sensitivity, specificity, and accuracy, respectively, and we evaluated the performance using radiologists' subjective evaluation and quantificational analysis and diagnosis. We also compared the results of AMAP-based GGO segmentation with those of support vector machine-based methods, and we discuss the reliability and other issues of AMAP-based GGO segmentation. Our research results demonstrate the acceptability and usefulness of AMAP-based GGO segmentation for assisting radiologists in detecting GGO in high-resolution CT diagnostic procedures.
Diehl, Leandro Arthur; Souza, Rodrigo Martins; Alves, Juliano Barbosa; Gordan, Pedro Alejandro; Esteves, Roberto Zonato; Jorge, Maria Lúcia Silva Germano; Coelho, Izabel Cristina Meister
2013-01-21
Physicians' lack of knowledge contributes to underuse of insulin and poor glycemic control in adults with diabetes mellitus (DM). Traditional continuing medical education has limited efficacy, so new approaches are required. We report the design of a trial to assess the educational efficacy of InsuOnline, a game for the education of primary care physicians (PCPs). The goal of InsuOnline was to improve appropriate initiation and adjustment of insulin for the treatment of DM. InsuOnline was designed to be educationally adequate, self-motivating, and attractive. A multidisciplinary team of endocrinologists, experts in medical education, and programmers was assembled for the design and development of InsuOnline. Currently, we are conducting usability and playability tests, with PCPs and medical students playing the game on a desktop computer. Adjustments will be made based on these results. An unblinded randomized controlled trial with PCPs who work in the city of Londrina, Brazil, will be conducted to assess the educational validity of InsuOnline on the Web. In this trial, 64 PCPs will play InsuOnline, and 64 PCPs will undergo traditional instructional activities (lecture and group discussion). Knowledge of how to initiate and adjust insulin will be assessed by a Web-based multiple-choice questionnaire, and attitudes regarding diabetes/insulin will be assessed by the Diabetes Attitude Scale 3 at 3 time points: before, immediately after, and 6 months after the intervention. Subjects' general impressions of the interventions will be assessed by a questionnaire. Software logs will be reviewed. To our knowledge, this is the first research aiming to assess the educational efficacy of a computer game for teaching PCPs about insulin therapy in DM. We describe the development criteria used for creating InsuOnline. Evaluation of the game using a randomized controlled trial design will be done in future studies. We demonstrated that the design and development of a game for
A Random Forest Algorithm Based on Locally Linear Embedding
陈树娟
2013-01-01
Random forest is an excellent classification algorithm, but it cannot effectively identify redundant attributes, so its classification results suffer on data sets containing them. To solve this problem, this paper presents a random forest algorithm based on locally linear embedding. The algorithm uses locally linear embedding to reduce the dimensionality of the redundant-attribute data set, and then applies the random forest algorithm for classification. Simulation experiments on UCI standard data sets show that the proposed algorithm is an excellent classifier for data sets containing redundant attributes.
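The two-stage design above maps naturally onto a scikit-learn pipeline, sketched below under the assumption that scikit-learn is available; the hyperparameter values are placeholders, not the paper's settings:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.pipeline import make_pipeline

def lle_random_forest(n_components=5, n_neighbors=10, n_trees=100, seed=0):
    """Locally linear embedding to strip redundant dimensions,
    followed by a random forest trained on the embedding."""
    return make_pipeline(
        LocallyLinearEmbedding(n_components=n_components,
                               n_neighbors=n_neighbors,
                               random_state=seed),
        RandomForestClassifier(n_estimators=n_trees, random_state=seed),
    )
```

On synthetic data with many redundant features the pipeline classifies well above chance, which matches the abstract's motivation for combining the two methods.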
Yang, Yue; Wang, Lei; Wu, Yongjiang; Liu, Xuesong; Bi, Yuan; Xiao, Wei; Chen, Yong
2017-07-01
There is a growing need for effective online process monitoring during the manufacture of traditional Chinese medicine (TCM) to ensure quality consistency. In this study, the potential of near-infrared (NIR) spectroscopy to monitor the extraction process of Flos Lonicerae Japonicae was investigated. A new algorithm, synergy interval PLS with a genetic algorithm (Si-GA-PLS), was proposed for modeling. Four PLS models, namely Full-PLS, Si-PLS, GA-PLS, and Si-GA-PLS, were established, and their performance in predicting two quality parameters (total acid and soluble solids contents) was compared. The Si-GA-PLS model gave the best results because it combines the strengths of Si-PLS and GA. For Si-GA-PLS, the determination coefficient (Rp2) and root-mean-square error of prediction (RMSEP) were 0.9561 and 147.6544 μg/ml for total acid, and 0.9062 and 0.1078% for soluble solids contents, respectively. The overall results demonstrate that NIR spectroscopy combined with Si-GA-PLS calibration is a reliable and non-destructive alternative for online monitoring of the extraction process of TCM at production scale.
Modeling Slotted Aloha as a Stochastic Game with Random Discrete Power Selection Algorithms
Rachid El-Azouzi
2009-01-01
We consider the uplink of a cellular system in which bufferless mobiles transmit over a common channel to a base station using the slotted aloha medium access protocol. We study the performance of this system under several power differentiation schemes. Specifically, we consider a random set of selectable transmission powers and study the impact of giving priority either to newly arrived packets or to backlogged ones. We then address a general capture model in which a mobile transmits a packet successfully if its instantaneous SINR (signal to interference plus noise ratio) is larger than some fixed threshold. Under this capture model, we analyze both the cooperative team setting, in which a common goal is jointly optimized, and the noncooperative game, in which mobiles seek to optimize their own objectives. Furthermore, we derive the throughput and the expected delay, use them as the objectives to optimize, and provide a stability analysis as an alternative study. Exhaustive performance evaluations show that schemes with power differentiation significantly improve individual as well as global performance, and can in some cases eliminate the bi-stable behavior of slotted aloha.
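A bare-bones slotted aloha simulation, without capture or power levels, illustrates the throughput objective being optimized; on the pure collision channel the throughput peaks near 1/e when each of n saturated nodes transmits with probability 1/n:

```python
import random

def slotted_aloha_throughput(n, p, slots, seed=0):
    """Fraction of slots containing exactly one transmission when each of
    n saturated nodes transmits independently with probability p per slot
    (a slot with 0 or >1 transmissions carries no packet)."""
    rng = random.Random(seed)
    success = 0
    for _ in range(slots):
        tx = sum(rng.random() < p for _ in range(n))
        success += (tx == 1)
    return success / slots
```

The paper's capture model relaxes the "exactly one" condition: a collided slot can still succeed if one mobile's SINR exceeds the threshold, which is what power differentiation exploits.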
Study on the Stock Algorithmic Trading Strategy Based on Online Theory
朱莹; 茹少峰; 张文明
2015-01-01
Online theory is applied to study multi-stock algorithmic trading strategies. Building on the work of El-Yaniv et al., an online buying strategy for a single stock is constructed and proved to be the optimal online strategy. The single-stock strategy is then applied to the multi-stock setting: a multi-stock trading algorithm is designed, and the investment portfolio is formed by weighting each stock by its yield. Finally, price data from 2009 to 2012 for twenty A-share stocks on the Shanghai Stock Exchange are used to validate the proposed strategy. Ten of the twenty stocks are randomly drawn to form a group, and four such groups are tested separately; the results indicate that the strategy obtains good returns for arbitrarily chosen baskets of stocks. Testing ten even-valued transaction-cycle lengths shows that the average return is maximized at a cycle of 18 days, with an average yield of 5.2%.
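El-Yaniv's classic reservation-price policy, on which single-stock online trading strategies of this kind build, can be sketched as follows. This assumes one-way search with prices known to lie in [m, M]; it is not the paper's full multi-stock portfolio rule:

```python
import math

def reservation_price(M, m):
    """El-Yaniv's reservation price p* = sqrt(M*m) for one-way search with
    prices in [m, M]; accepting the first quote >= p* is
    sqrt(M/m)-competitive against the best price in hindsight."""
    return math.sqrt(M * m)

def run_policy(prices, M, m):
    """Trade at the first price reaching the reservation level, or at the
    last quoted price if the level is never reached (forced trade)."""
    p_star = reservation_price(M, m)
    for p in prices:
        if p >= p_star:
            return p
    return prices[-1]
```

For M = 400 and m = 100 the threshold is 200, so the policy accepts the first quote of 210 in the sequence 120, 150, 210, 390 and guarantees a ratio to the best price of at most sqrt(M/m) = 2.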
周柏; 陶卿; 储德军
2015-01-01
Almost all sparse stochastic algorithms are derived from the online setting, for which only the convergence rate of the averaged output can be obtained, and the optimal individual (last-iterate) rate for strongly convex problems cannot be reached. This paper bypasses the online-to-batch conversion and studies stochastic optimization algorithms directly. First, an L2 regularizer is added to the L1-regularized sparse optimization problem to make it strongly convex. Then, the random step-size strategy from black-box optimization is introduced into the state-of-the-art composite objective mirror descent (COMID) algorithm, yielding a sparse stochastic optimization algorithm based on random step-size hybrid regularized mirror descent (RS-HMD). Finally, by analyzing how the soft-thresholding method solves the L1-regularized problem, the optimal individual convergence rate is proved. Experimental results demonstrate that RS-HMD achieves better sparsity than COMID.
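The soft-thresholding operator at the heart of such methods has a closed form, and the L2 term folds into the proximal step as a simple shrinkage. The sketch below shows one generic COMID-style composite step for the L1+L2 objective; it illustrates the mechanism only and is not the exact RS-HMD update with its random step sizes:

```python
import numpy as np

def soft_threshold(v, lam):
    """Soft-thresholding operator: the closed-form prox of lam * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def composite_step(w, grad, eta, lam1, lam2):
    """One composite-objective step for lam1*||w||_1 + (lam2/2)*||w||^2:
    gradient step on the smooth loss, then the exact prox of the
    regularizers, i.e. soft-threshold followed by L2 shrinkage."""
    z = w - eta * grad
    return soft_threshold(z, eta * lam1) / (1.0 + eta * lam2)
```

Entries whose magnitude falls below eta*lam1 are set exactly to zero, which is where the sparsity of the iterates comes from.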
On-line Detection of Gas Pipeline Based on the Real-Time Algorithm and Network Technology with Robot
YAN Bo; YAN Guo-zheng; DING Guo-qing; ZHOU Bing; FU Xi-guang; ZUO Jian-yong
2004-01-01
The detection system integrates control technology, network technology, video encoding and decoding, video transmission, multi-single-chip-microcomputer communication, database technology, computer software, and robot technology. The robot can adaptively adjust its attitude to pipeline diameters from 400 mm to 650 mm. The maximum detection distance is up to 1000 m. Video coding in the system is based on fractal transformation; experiments show that the coding scheme is fast and achieves good PSNR. The precision of online detection reaches 3% of the pipeline wall thickness, and the robot locates itself with a precision of up to 0.03 m. The control method is network-based and operates online in real time. An experiment in a real gas pipeline shows that the detection system performs well.
Muslim, A; Karyati, C M; Musa, P
2010-01-01
Multimedia data is a form of data that can represent all other types of data (images, sound, and text). Using multimedia data in online applications requires a more comprehensive database for storage, sorting/indexing, and searching; this is necessary to help providers and users access multimedia data online. Systems that use image indexes as references require large storage media as well as special expertise to obtain the desired file. Converting multimedia data into a series of stories in textual form (a storyboard) helps reduce storage consumption and simplifies indexing, sorting, and search. Oriented Movement is one method being developed to convert multimedia data into a storyboard.
Hibert, Clement; Malet, Jean-Philippe; Provost, Floriane; Michéa, David; Geertsema, Marten
2017-04-01
Detection of landslide occurrences and measurement of their dynamic properties during run-out is a high research priority but a logistical and technical challenge. Seismology has started to help in several important ways. Taking advantage of the densification of global, regional, and local networks of broadband seismic stations, recent advances now permit the seismic detection and location of landslides in near-real time. Seismic detection could greatly increase the spatio-temporal resolution at which landslide triggering is studied, which is critical to better understanding the influence of external forcings such as rainfall and earthquakes. However, automatically detecting seismic signals generated by landslides remains a challenge, especially for events with volumes below one million cubic meters. The low signal-to-noise ratio typical of landslide-generated seismic signals, and the difficulty of discriminating them from signals generated by regional earthquakes or by anthropogenic and natural noise, are some of the obstacles to be circumvented. We present a new method for automatically constructing instrumental landslide catalogues from continuous seismic data. We developed a robust and versatile solution that can be implemented in any context where seismic detection of landslides or other mass movements is relevant. The method is based on spectral detection of the seismic signals and identification of the sources with a Random Forest algorithm. The spectral detection finds signals with low signal-to-noise ratio, while the Random Forest classifier achieves a high rate of positive identification of seismic signals generated by landslides and other sources. We present preliminary results from applying this processing chain in two contexts: i) in the Himalaya, with data acquired between 2002 and 2005 by the Hi-Climb network; ii) in Alaska, using data recorded by the
Tsai Non-coplanar Calibration Algorithm for Wheel Set Online Detection
吴开华; 陈康
2015-01-01
In wheel set online detection based on structured-light projection imaging, the wheel set light-section curves acquired by the area-array CCD camera are heavily distorted, and the camera calibration method and its precision directly affect the wheel set detection precision. Based on a nonlinear distortion imaging model, a Tsai non-coplanar calibration algorithm using a movable planar calibration plate is proposed. Using the radial alignment constraint and the first-order radial distortion formula, the least-squares method and the Levenberg-Marquardt optimization algorithm are used to calibrate the internal and external parameters of the camera. According to the on-site installation requirements and the regional distribution of the key points of the light-section curves, the calibration plate and its three installation positions were designed, and the extraction of image feature points and the calibration of camera parameters were realized. Experimental results show that the calibration precision of the Tsai non-coplanar calibration algorithm is within 0.1 mm, which meets the demands of wheel set online detection.
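The first-order radial distortion model referred to above can be sketched as follows, using one common sign convention; the actual Tsai calibration of intrinsic/extrinsic parameters via the radial alignment constraint and Levenberg-Marquardt is not reproduced here:

```python
def apply_radial_distortion(xu, yu, k1):
    """First-order radial distortion: (xd, yd) = (xu, yu) * (1 + k1*r^2),
    where r is the undistorted radius from the principal point."""
    r2 = xu * xu + yu * yu
    s = 1.0 + k1 * r2
    return xu * s, yu * s

def undistort(xd, yd, k1, iters=10):
    """Invert the model by fixed-point iteration (no closed form exists);
    converges quickly when k1*r^2 is small, as in typical lenses."""
    xu, yu = xd, yd
    for _ in range(iters):
        r2 = xu * xu + yu * yu
        xu = xd / (1.0 + k1 * r2)
        yu = yd / (1.0 + k1 * r2)
    return xu, yu
```

Round-tripping a point through distortion and back recovers it to high precision, which is the property a calibration residual is built on.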
Genuer, Robin; Poggi, Jean-Michel; Tuleau-Malot, Christine; Villa-Vialaneix, Nathalie
2017-01-01
Big Data is one of the major challenges of statistical science and has numerous consequences from algorithmic and theoretical viewpoints. Big Data always involves massive data, but it also often includes online data and data heterogeneity. Recently, some statistical methods have been adapted to process Big Data, such as linear regression models, clustering methods and bootstrapping schemes. Based on decision trees combined with aggregation and bootstrap ideas, random forests were introduced by Bre...
Espie, Colin A; Kyle, Simon D; Williams, Chris; Ong, Jason C; Douglas, Neil J; Hames, Peter; Brown, June S L
2012-06-01
The internet provides a pervasive milieu for healthcare delivery. The purpose of this study was to determine the effectiveness of a novel web-based cognitive behavioral therapy (CBT) course delivered by an automated virtual therapist, when compared with a credible placebo; an approach required because web products may be intrinsically engaging, and vulnerable to placebo response. Randomized, placebo-controlled trial comprising 3 arms: CBT, imagery relief therapy (IRT: placebo), treatment as usual (TAU). Online community of participants in the UK. One hundred sixty-four adults (120 F; mean age 49 y, range 18-78 y) meeting proposed DSM-5 criteria for Insomnia Disorder, randomly assigned to CBT (n = 55; 40 F), IRT placebo (n = 55; 42 F) or TAU (n = 54; 38 F). CBT and IRT each comprised 6 online sessions delivered by an animated personal therapist, with automated web and email support. Participants also had access to a video library/back catalogue of session content and Wikipedia-style articles. Online CBT users had access to a moderated social network/community of users. TAU comprised no restrictions on usual care and access to an online sleep diary. Major assessments took place at baseline, post-treatment, and at follow-up 8 weeks post-treatment; outcomes were appraised by online sleep diaries and clinical status. On the primary endpoint of sleep efficiency (SE; total time asleep expressed as a percentage of the total time spent in bed), online CBT was associated with sustained improvement at post-treatment (+20%) relative to both TAU (+6%; d = 0.95) and IRT (+6%; d = 1.06), and at 8 weeks (+20%) relative to IRT (+7%; d = 1.00) and TAU (+9%; d = 0.69). These findings were mirrored across a range of sleep diary measures. Clinical benefits of CBT were evidenced by modest superiority over placebo on daytime outcomes (d = 0.23-0.37) and by substantially improved sleep-wake functioning on the Sleep Condition Indicator (range of d = 0.77-1.20). Three-quarters of CBT participants (76% [CBT] vs. 29
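The primary endpoint above, sleep efficiency, is a simple ratio of total sleep time to time in bed; a minimal sketch with illustrative diary numbers, not trial data:

```python
# Sleep efficiency (SE) as defined in the abstract: total time asleep
# expressed as a percentage of total time spent in bed.
def sleep_efficiency(total_sleep_min, time_in_bed_min):
    return 100.0 * total_sleep_min / time_in_bed_min

print(sleep_efficiency(360, 480))  # 6 h asleep of 8 h in bed -> 75.0
```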
van der Zweerde, Tanja; Lancee, Jaap; Slottje, Pauline; Bosmans, Judith; Van Someren, Eus; Reynolds, Charles; Cuijpers, Pim; van Straten, Annemieke
2016-04-02
Insomnia is a highly prevalent disorder causing clinically significant distress and impairment. Furthermore, insomnia is associated with high societal and individual costs. Although cognitive behavioural treatment for insomnia (CBT-I) is the preferred treatment, it is not used often. Offering CBT-I in an online format may increase access. Many studies have shown that online CBT for insomnia is effective. However, these studies have all been performed in general population samples recruited through media. This protocol article presents the design of a study aimed at establishing the feasibility, effectiveness and cost-effectiveness of a guided online intervention (i-Sleep) for patients suffering from insomnia who seek help from their general practitioner, as compared to care-as-usual. In a pragmatic randomized controlled trial, adult patients with insomnia disorder recruited through general practices are randomized to a 5-session guided online treatment, called "i-Sleep", or to care-as-usual. Patients in the care-as-usual condition will be offered i-Sleep 6 months after inclusion. An ancillary clinician working in the GP practice, known as the psychological well-being practitioner (PWP; in Dutch: POH-GGZ), will offer online support after every session. Our aim is to recruit one hundred sixty patients. Questionnaires, a sleep diary and wrist actigraphy will be administered at baseline, post-intervention (at 8 weeks), and at 6-month and 12-month follow-up. Effectiveness will be established using insomnia severity as the main outcome. Cost-effectiveness and cost-utility analyses (using costs per quality-adjusted life year (QALY) as outcome) will be conducted from a societal perspective. Secondary measures are: sleep diary, daytime consequences, fatigue, work and social adjustment, anxiety, alcohol use, depression and quality of life. The results of this trial will help establish whether online CBT-I is (cost-) effective and feasible in general practice as compared
Jin, Curtis; Michielssen, Eric; Rand, Stephen
2014-01-01
Recent theoretical and experimental advances have shed light on the existence of so-called `perfectly transmitting' wavefronts with transmission coefficients close to 1 in strongly backscattering random media. These perfectly transmitting eigen-wavefronts can be synthesized by spatial amplitude and phase modulation. Here, we consider the problem of transmission enhancement using phase-only modulated wavefronts. We develop physically realizable iterative and non-iterative algorithms for increasing the transmission through such random media using backscatter analysis. We theoretically show that, despite the phase-only modulation constraint, the non-iterative algorithms will achieve at least about 25$\\pi$% or about 78.5% transmission assuming there is at least one perfectly transmitting eigen-wavefront and that the singular vectors of the transmission matrix obey a maximum entropy principle so that they are isotropically random. We numerically analyze the limits of phase-only modulated transmission in 2-D with f...
Design and Implementation of an Online Power Metering and Analysis Algorithm
杨福刚
2011-01-01
Adopting the TMS320F2812 DSP as the core processor and using the IQmath library provided by TI, this paper presents a practical algorithm to accurately meter and analyze power parameters online. First, hardware phase-locked loop technology is used to achieve synchronous sampling of the power network signal. Then, an FIR low-pass digital anti-aliasing filter is designed to filter out noise, harmonics and other high-order interference. Finally, a modified FFT algorithm is used to separate the grid voltage and current into fundamental and higher-order harmonic components, allowing high-precision measurement and analysis of the relevant energy parameters. In the implementation, a Q-format scaling method is used to perform high-speed floating-point operations on the fixed-point processor, and the properties of the real DFT are exploited to greatly reduce computational complexity and improve computing speed. The test results show that the proposed algorithm achieves high accuracy in online power analysis and measurement. This work is supported by the National Natural Science Foundation of China (No. 60673153 and No. 60970105).
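The harmonic-separation step can be sketched with a real FFT: synchronous sampling (an integer number of cycles per analysis window) places each harmonic exactly on an FFT bin, so its amplitude can be read off directly. The signal and amplitudes below are illustrative, not the paper's fixed-point DSP implementation.

```python
# Sketch: separating fundamental and harmonic amplitudes of a
# synchronously sampled grid voltage with NumPy's real FFT.
import numpy as np

f0 = 50.0                       # fundamental frequency (Hz)
fs = 64 * f0                    # 64 samples per cycle
cycles = 10                     # integer cycles per window: no leakage
t = np.arange(int(fs / f0 * cycles)) / fs

# Test signal: 220 V fundamental plus 3rd and 5th harmonics.
v = (220 * np.sin(2 * np.pi * f0 * t)
     + 15 * np.sin(2 * np.pi * 3 * f0 * t)
     + 8 * np.sin(2 * np.pi * 5 * f0 * t))

spectrum = np.fft.rfft(v)
amps = 2 * np.abs(spectrum) / len(v)   # single-sided amplitudes

def harmonic(k):
    # The k-th harmonic falls on bin k * cycles by construction.
    return amps[k * cycles]

print(harmonic(1), harmonic(3), harmonic(5))
```

In the fixed-point implementation the same decomposition is computed with Q-format arithmetic, and the real-input symmetry of the DFT halves the work, as the abstract notes.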
Jia, J H; Liu, Z; Chen, X; Xiao, X; Liu, B X
2015-10-02
Studying the network of protein-protein interactions (PPIs) will provide valuable insights into the inner workings of cells. It is vitally important to develop an automated, high-throughput tool that efficiently predicts protein-protein interactions. This study proposes a new model for PPI prediction based on the concept of chaos game representation and the wavelet transform, which means that a considerable amount of sequence-order information can be incorporated into a set of discrete numbers. The advantage of using chaos game representation and the wavelet transform to formulate the protein sequence is that it can reflect the overall sequence-order characteristics more effectively than the conventional correlation factors. Using such a formulation frame to represent the protein sequences means that the random forest algorithm can be used to conduct the prediction. The results for a large-scale independent test dataset show that the proposed model can achieve an excellent performance, with an accuracy value of about 0.86 and a geometric mean value of about 0.85. The model is therefore a useful supplementary tool for PPI predictions. The predictor used in this article is freely available at http://www.jci-bioinfo.cn/PPI.
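The chaos game representation (CGR) places each symbol of a sequence at the midpoint between the current point and the vertex assigned to that symbol. A minimal sketch of the classic four-corner (nucleotide) construction follows; the paper's protein variant assigns residues to a different vertex set, but the midpoint recursion is the same.

```python
# Sketch of the chaos game representation: each symbol moves the
# current point halfway toward that symbol's corner of the unit square.
import numpy as np

VERTICES = {"A": (0, 0), "C": (0, 1), "G": (1, 1), "T": (1, 0)}

def cgr(sequence):
    points = []
    pos = np.array([0.5, 0.5])           # start at the square's center
    for symbol in sequence:
        pos = (pos + np.array(VERTICES[symbol])) / 2.0
        points.append(pos.copy())
    return np.array(points)

coords = cgr("ACGTAC")
print(coords)
```

The resulting point cloud encodes sequence-order information geometrically; in the paper this image-like representation is then passed through a wavelet transform to produce the fixed-length feature vector for the random forest.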
Odindi, John; Adam, Elhadi; Ngubane, Zinhle; Mutanga, Onisimo; Slotow, Rob
2014-01-01
Plant species invasion is known to be a major threat to socioeconomic and ecological systems. Due to the high cost and limited extent of urban green spaces, high mapping accuracy is necessary to optimize the management of such spaces. We compare the performance of the new-generation WorldView-2 (WV-2) and SPOT-5 images in mapping the bracken fern [Pteridium aquilinum (L.) Kuhn] in a conserved urban landscape. Using the random forest algorithm, grid-search approaches based on the out-of-bag error estimate were used to determine the optimal ntree and mtry combination. The variable importance and backward feature elimination techniques were further used to determine the influence of the image bands on mapping accuracy. Additionally, the value of commonly used vegetation indices in enhancing the classification accuracy was tested on the better-performing image data. Results show that the performance of the new WV-2 bands was better than that of the traditional bands. Overall classification accuracies of 84.72 and 72.22% were achieved for the WV-2 and SPOT images, respectively. Use of selected indices from the WV-2 bands increased the overall classification accuracy to 91.67%. The findings in this study show the suitability of the new-generation sensor for mapping the bracken fern within the often vulnerable urban natural vegetation cover types.
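The grid search over the (ntree, mtry) pair using the out-of-bag error can be sketched in scikit-learn terms, where the parameters correspond to n_estimators and max_features; the dataset below is a synthetic stand-in for the image band values.

```python
# Sketch: choosing (ntree, mtry) by minimizing the out-of-bag (OOB)
# error of a random forest, as in the grid-search approach above.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=8,
                           n_informative=5, random_state=0)

best = None
for ntree in (100, 300):                 # candidate n_estimators
    for mtry in (2, 3, 4):               # candidate max_features
        rf = RandomForestClassifier(n_estimators=ntree, max_features=mtry,
                                    oob_score=True, random_state=0).fit(X, y)
        oob_error = 1.0 - rf.oob_score_
        if best is None or oob_error < best[0]:
            best = (oob_error, ntree, mtry)

print(f"best OOB error {best[0]:.3f} at ntree={best[1]}, mtry={best[2]}")
```

Because the OOB error is computed from samples left out of each bootstrap, it serves as a built-in validation estimate, so no separate hold-out split is needed for the grid search.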
Research on a Disjunctive Random Forest Algorithm
王欣欣; 王展青
2016-01-01
In this paper, a disjunctive random forest algorithm is proposed by introducing the concept of a feed-forward model. The method improves the decision-tree learning procedure by introducing a global loss function, thereby increasing the connections between the nodes of an individual decision tree so that each node influences the classification at the following node. The improved model reduces training time, makes the trained random forest converge faster, and yields more accurate predictions.
Shukla, Gaurav; Garg, Rahul Dev; Srivastava, Hari Shanker; Garg, Pradeep Kumar
2017-04-01
The purpose of this study is to effectively implement the random forest algorithm for crop classification of large areas and to assess the classification capability of different variables. To incorporate the dependency of crops on different variables, namely texture, phenology, parent material and soil, soil moisture, topography, vegetation, and climate, 35 digital layers are prepared using different satellite data (ALOS DEM, Landsat-8, MODIS NDVI, RISAT-1, and Sentinel-1A) and climatic data (precipitation and temperature). The importance of the variables is also calculated, based on the mean decrease in accuracy and the mean decrease in Gini score, and the importance and capabilities of the variables for crop mapping are discussed. Variables associated with spectral responses showed greater importance than topographic and climate variables. The spectral range (0.85 to 0.88 μm) of the near-infrared band is the most useful variable, with the highest scores. The topographic variable elevation ranked second in both scores. This indicates the importance of spectral responses as well as of topography in model development. Climate variables have not shown as much importance as the others, but in association with others they decrease the out-of-bag (OOB) error rate. In addition to the OOB data, a 20% independent subset of the training samples is used to evaluate the RF model. Results show that RF has good capability for crop classification.
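The two importance scores map onto common library terms roughly as follows: mean decrease in Gini corresponds to impurity-based importances, and mean decrease in accuracy to permutation importance. A hedged scikit-learn sketch on synthetic layers (not the study's 35 raster variables):

```python
# Sketch: the two random forest variable-importance measures named
# above, computed on synthetic data with 2 informative and 3 noise
# features standing in for the raster layers.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 600
informative = rng.normal(size=(n, 2))
noise = rng.normal(size=(n, 3))
X = np.hstack([informative, noise])
y = (informative[:, 0] + informative[:, 1] > 0).astype(int)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

gini = rf.feature_importances_                 # ~ mean decrease in Gini
perm = permutation_importance(rf, X, y, n_repeats=5,
                              random_state=0).importances_mean
                                               # ~ mean decrease in accuracy
print("Gini importance:", np.round(gini, 3))
print("Permutation importance:", np.round(perm, 3))
```

Both scores should rank the two informative features above the noise features, mirroring how the study ranks the near-infrared band and elevation above the climate layers.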
Fabian Gander; René T. Proyer; Willibald Ruch
2016-01-01
Objective: Seligman (2002) suggested three paths to well-being: the pursuit of pleasure, the pursuit of meaning, and the pursuit of engagement, later adding two more, positive relationships and accomplishment, in his 2011 version. The contribution of these new components to well-being has yet to be addressed. Method: In an online positive psychology intervention study, we randomly assigned 1,624 adults aged 18 to 78 (M = 46.13; 79.2% women) to seven conditions. Participants wrote down three th...
Kang, Qian; Ru, Qingguo; Liu, Yan; Xu, Lingyan; Liu, Jia; Wang, Yifei; Zhang, Yewen; Li, Hui; Zhang, Qing; Wu, Qing
2016-01-01
An on-line near-infrared (NIR) spectroscopy monitoring method with an appropriate multivariate calibration method was developed for the extraction process of Fu-fang Shuanghua oral solution (FSOS). On-line NIR spectra were collected through two fiber-optic probes, which were designed to transmit NIR radiation through a 2 mm flange. Partial least squares (PLS), interval PLS (iPLS) and synergy interval PLS (siPLS) algorithms were compared for building the calibration regression models. During the extraction process, the feasibility of NIR spectroscopy was assessed for determining the chlorogenic acid (CA) content, total phenolic acid contents (TPC), total flavonoid contents (TFC) and soluble solid contents (SSC). High-performance liquid chromatography (HPLC), ultraviolet spectrophotometry (UV) and loss-on-drying methods were employed as reference methods. Experimental results showed that the siPLS model performed best compared with PLS and iPLS. The calibration models for CA, TPC, TFC and SSC had high determination coefficients (R2) (0.9948, 0.9992, 0.9950 and 0.9832) and low root mean square errors of cross-validation (RMSECV) (0.0113, 0.0341, 0.1787 and 1.2158), which indicate a good correlation between reference values and NIR-predicted values. The overall results show that the on-line detection method is feasible in real applications and would be of great value for monitoring the mixed decoction process of FSOS and other Chinese patent medicines.
Liu, Chunfang; Hayakawa, Yoshikazu; Nakashima, Akira
This paper proposes an on-line method for estimating both the translational and rotational velocities of a table tennis ball using only a few consecutive frames of image data sensed by two high-speed cameras. To estimate the translational velocity, the three-dimensional (3D) position of the ball's center at each camera frame is obtained; an on-line method is proposed for reconstructing the 3D position from the two-dimensional (2D) image data of the two cameras without a pattern-matching process. The proposed method for estimating the rotational velocity belongs to the class of image registration methods: to likewise avoid pattern matching, a rotation model of the ball is used to predict an image from the data sensed at the previous camera frame, and the predicted image is compared with the image sensed at the next frame to obtain the most plausible rotational velocity using least squares and the conjugate gradient method. The effectiveness of the proposed method is shown by experimental results both for a ball spun by a rotation machine and for a flying ball shot from a catapult machine.
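Reconstructing the ball center from two calibrated cameras amounts to a least-squares intersection of the two viewing rays. A minimal sketch with synthetic camera geometry follows; the paper's pattern-matching-free image processing that produces the rays is not reproduced here.

```python
# Sketch: 3D point from two camera rays as the midpoint of their
# common perpendicular (least-squares ray intersection).
import numpy as np

def triangulate(c1, d1, c2, d2):
    """Closest point to two rays (camera center c, unit direction d)."""
    # Solve (p1 - p2) . d1 = 0 and (p1 - p2) . d2 = 0 for t1, t2.
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    return ((c1 + t1 * d1) + (c2 + t2 * d2)) / 2.0

# Synthetic setup: a ball and two camera centers with exact rays.
ball = np.array([0.5, 1.0, 2.0])
c1, c2 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
d1 = (ball - c1) / np.linalg.norm(ball - c1)
d2 = (ball - c2) / np.linalg.norm(ball - c2)

print(triangulate(c1, d1, c2, d2))
```

Differencing such reconstructed positions across consecutive frames then yields the translational velocity estimate.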
Raju Barskar; Anjana Jayant Deen; Jyoti Bharti; Gulfishan Firdose Ahmed
2010-01-01
E-Commerce offers the banking industry great opportunity, but it also creates a set of new risks and vulnerabilities, such as security threats. Information security is therefore an essential management and technical requirement for efficient and effective payment transactions over the internet. Still, its definition is a complex endeavor due to constant technological and business change, and it requires a coordinated match of algorithmic and technical solutions. Ecommerce is not appro...
An Online Fast Obstacle Detection Algorithm Based on the Kinect Depth Technique
朱涛; 芦利斌; 金国栋
2014-01-01
To address the real-time obstacle detection problem faced by mobile robots during autonomous navigation in unknown environments, this paper proposes an online rapid obstacle detection algorithm based on the Kinect depth technique. After calibrating the depth camera, the paper analyzes the scene changes in the video stream caused by camera motion and focuses on the characteristics of Kinect depth images and on online rapid obstacle detection against a dynamic indoor background. By building an indoor dynamic background model and using background subtraction and connected-component analysis to extract and classify obstacles, online rapid detection on the video image sequence is achieved. Experiments on a wheeled mobile robot platform verify the real-time performance, accuracy and robustness of the proposed algorithm.
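The background-subtraction and connected-component steps can be sketched on tiny synthetic depth frames; the threshold and minimum-size values below are illustrative assumptions, not the paper's parameters.

```python
# Sketch: depth frames are compared against a background model, the
# difference is thresholded, and connected components above a minimum
# size are kept as obstacle candidates.
import numpy as np
from scipy import ndimage

background = np.full((8, 8), 4000, dtype=np.int32)  # depth in mm

frame = background.copy()
frame[2:5, 2:5] = 1500        # an obstacle 1.5 m away
frame[6, 7] = 3500            # isolated noise pixel: should be rejected

diff = np.abs(frame - background)
mask = diff > 300             # depth-change threshold in mm

labels, count = ndimage.label(mask)   # connected-component analysis
min_pixels = 4
obstacles = [np.argwhere(labels == i)
             for i in range(1, count + 1)
             if (labels == i).sum() >= min_pixels]

print(f"{len(obstacles)} obstacle(s) detected")
```

In the paper's dynamic-background setting, `background` is itself updated as the camera moves, which is the part this static sketch omits.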
Lancee, J.; Eisma, M.C.; van Straten, A.; Kamphuis, J.H.
2015-01-01
Several trials have demonstrated the efficacy of online cognitive behavioral therapy (CBT) for insomnia. However, few studies have examined putative mechanisms of change based on the cognitive model of insomnia. Identification of modifiable mechanisms by which the treatment works may guide efforts t
Meghanathan, Natarajan; Isokpehi, Raphael; Cohly, Hari; 10.5121/ijcsit.2010.2307
2010-01-01
The high-level contribution of this paper is the development and implementation of an algorithm to self-extract secondary keywords and their combinations (combo words) from abstracts collected using standard primary keywords for research areas from reputable online digital libraries such as IEEE Xplore and PubMed Central. Given a collection of N abstracts, we arbitrarily select M abstracts (M << N; M/N as low as 0.15) and parse each of the M abstracts word by word. Upon the first-time appearance of a word, we query the user to classify the word into an Accept-List or non-Accept-List. The effectiveness of the training approach is evaluated by measuring the percentage of words for which the user is queried for classification as the algorithm parses the words of each of the M abstracts. We observed that as M grows larger, the percentage of words for which the user is queried reduces drastically. After the list of acceptable words is built by parsing the M abstracts, we ...
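The training loop can be sketched with a toy oracle standing in for the human user; the abstracts and the stop-word rule below are illustrative placeholders, not the paper's data.

```python
# Sketch: parse abstracts word by word, query the "user" only on
# first-time words, and record how the query rate falls as the
# accept/reject lists fill up.
abstracts = [
    "random forest improves keyword extraction accuracy",
    "random forest accuracy for keyword classification",
    "keyword extraction with random classification accuracy",
]
STOP = {"with", "for", "improves"}   # stand-in for the user's judgment

accept, reject = set(), set()
query_rates = []
for abstract in abstracts:
    words = abstract.split()
    queries = 0
    for w in words:
        if w not in accept and w not in reject:
            queries += 1             # first appearance: ask the user
            (reject if w in STOP else accept).add(w)
    query_rates.append(queries / len(words))

print(query_rates)
```

The falling query rate across abstracts mirrors the paper's observation that the user is consulted less and less as M grows.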
A Random Key Generation Algorithm Based on Fractal Topology Transformation
何增颖
2013-01-01
To overcome the low randomness and high computational complexity of existing random key generation methods, this paper introduces a fractal transformation operating on topological group objects and constructs a new, simple and efficient fractal-transform ring operation; on this basis, a simple and efficient topological-group fractal-transform random key generation algorithm is proposed. The algorithm first partitions the image data into sets and hashes the resulting subsets to form the input for key generation, then applies the fractal ring transform to obtain the coordinate values of the subset points after the ring operation, and finally outputs the entire pseudo-random sequence. Experimental results show that the algorithm is efficient and feasible, has strong randomness, and has low time complexity.
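The partition-hash-expand pipeline can be sketched as follows. The fractal ring transform itself is not specified in enough detail in the abstract, so it is replaced here by plain SHA-256 chaining; only the overall structure (partition the image, hash the subsets, expand into a pseudo-random stream) is illustrated.

```python
# Sketch: partition image data into subsets, hash each subset, and
# chain the digests into a pseudo-random key stream.
import hashlib
import numpy as np

rng = np.random.default_rng(7)
image = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)

# Partition into four 8x8 blocks and hash each one.
blocks = [image[i:i + 8, j:j + 8] for i in (0, 8) for j in (0, 8)]
digests = [hashlib.sha256(b.tobytes()).digest() for b in blocks]

# Expand the concatenated digests into a key stream by hash chaining
# (standing in for the paper's fractal ring operation).
state = b"".join(digests)
key = b""
while len(key) < 64:
    state = hashlib.sha256(state).digest()
    key += state

print(key[:64].hex())
```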
Lovecchio, Catherine P; Wyatt, Todd M; DeJong, William
2010-10-01
A randomized control trial was conducted at a midsized private university in the Northeast to evaluate the short-term impact of AlcoholEdu for College 8.0, an online alcohol course for first-year students. In September 2007, 1,620 matriculated first-year students were randomly assigned to either a treatment group or an assessment-only control group. Both groups of students completed a baseline survey and knowledge test. Treatment group students finished the course, took a second knowledge test, and 30 days later completed a postintervention survey. Control group students completed the postintervention survey and knowledge test during the same time period. Compared with the control group, treatment group students reported a significantly lower level of alcohol use, fewer negative drinking consequences, and less positive alcohol-related attitudes. AlcoholEdu 8.0 had a positive impact on the first-year students' alcohol-related attitudes, behaviors, and consequences. Additional investigations of online alcohol education courses are warranted.
You-Ten, Kong Eric; Bould, M Dylan; Friedman, Zeev; Riem, Nicole; Sydor, Devin; Boet, Sylvain
2015-05-01
Non-adherence to airway guidelines in a 'cannot intubate-cannot oxygenate' (CICO) crisis situation is associated with adverse patient outcomes. This study investigated the effects of hands-on training in cricothyrotomy on adherence to the American Society of Anesthesiologists difficult airway algorithm (ASA-DAA) during a simulated CICO scenario. A total of 21 postgraduate second-year anesthesia residents completed a pre-test teaching session during which they reviewed the ASA-DAA, became familiarized with the Melker cricothyrotomy kit, and watched a video on cricothyrotomy. Participants were randomized to either the intervention 'Trained' group (n = 10) (taught and practiced cricothyrotomy) or the control 'Non-Trained' group (n = 11) (no extra training). After two to three weeks, performances of the groups were assessed in a simulated CICO scenario. The primary outcome measure was major deviation from the ASA-DAA. Secondary outcome measures were (1) performance of the four categories of non-technical behaviours using the validated Anaesthetists' Non-Technical Skills scale (ANTS) and (2) time to perform specific tasks. Significantly more non-trained than trained participants (6/11 vs 0/10, P = 0.012) committed at least one major ASA-DAA deviation, including failure to insert an oral airway, failure to call for help, bypassing the laryngeal mask airway, and attempting fibreoptic intubation. ANTS scores for all four categories of behaviours, however, were similar between the groups. Trained participants called for help faster [26 (2) vs 63 (48) sec, P = 0.012] but delayed opening of the cricothyrotomy kit [130 (50) vs 74 (36) sec, P = 0.014]. Hands-on training in cricothyrotomy resulted in fewer major ASA-DAA deviations in a simulated CICO scenario. Training in cricothyrotomy may play an important role in complying with the ASA-DAA in a CICO situation but does not appear to affect non-technical behaviours such as decision-making.
Baez, Marcos; Khaghani Far, Iman; Ibarra, Francisco; Ferron, Michela; Didino, Daniele; Casati, Fabio
2017-01-01
Intervention programs to promote physical activity in older adults, whether in group or home settings, have shown equivalent health outcomes but different results when considering adherence. Group-based interventions seem to achieve higher participation in the long term. However, many factors can make group exercises a challenging setting for older adults. A major one, due to the heterogeneity of this particular population, is the difference in skill levels. In this paper we report on the physical, psychological and social wellbeing outcomes of a technology-based intervention that enables online group exercises among older adults with different levels of skill. A total of 37 older adults between 65 and 87 years old followed a personalized exercise program based on the OTAGO fall-prevention program for a period of eight weeks. Participants could join online group exercises using a tablet-based application. Participants were assigned either to the Control group, representing the traditional individual home-based training program, or to the Social group, representing online group exercising. Pre- and post-measurements were taken to analyze the physical, psychological and social wellbeing outcomes. After the eight-week training program there were improvements in both the Social and Control groups in terms of physical outcomes, given the high level of adherence of both groups. Considering the baseline measures, however, the results suggest that while in the Control group fitter individuals tended to adhere more to the training, this was not the case in the Social group, where the initial level had no effect on adherence. For psychological outcomes there were improvements in both groups, regardless of the application used. There was no significant difference between groups in social wellbeing outcomes, with both groups seeing a decrease in loneliness despite the presence of social features in the Social group. However, online social interactions
Khaghani Far, Iman; Ibarra, Francisco; Ferron, Michela; Didino, Daniele; Casati, Fabio
2017-01-01
Background Intervention programs to promote physical activity in older adults, whether in group or home settings, have shown equivalent health outcomes but different results when considering adherence. Group-based interventions seem to achieve higher participation in the long term. However, many factors can make group exercises a challenging setting for older adults. A major one, due to the heterogeneity of this particular population, is the difference in skill levels. In this paper we report on the physical, psychological and social wellbeing outcomes of a technology-based intervention that enables online group exercises among older adults with different levels of skill. Methods A total of 37 older adults between 65 and 87 years old followed a personalized exercise program based on the OTAGO fall-prevention program for a period of eight weeks. Participants could join online group exercises using a tablet-based application. Participants were assigned either to the Control group, representing the traditional individual home-based training program, or to the Social group, representing online group exercising. Pre- and post-measurements were taken to analyze the physical, psychological and social wellbeing outcomes. Results After the eight-week training program there were improvements in both the Social and Control groups in terms of physical outcomes, given the high level of adherence of both groups. Considering the baseline measures, however, the results suggest that while in the Control group fitter individuals tended to adhere more to the training, this was not the case in the Social group, where the initial level had no effect on adherence. For psychological outcomes there were improvements in both groups, regardless of the application used. There was no significant difference between groups in social wellbeing outcomes, with both groups seeing a decrease in loneliness despite the presence of social features in the Social group. However
Neelke C van der Weerd
Resistance to erythropoiesis-stimulating agents (ESA) is common in patients undergoing chronic hemodialysis (HD) treatment. ESA responsiveness might be improved by enhanced clearance of uremic toxins of middle molecular weight, as can be obtained by hemodiafiltration (HDF). In this analysis of the randomized controlled CONvective TRAnsport STudy (CONTRAST; NCT00205556), the effect of online HDF on ESA resistance and iron parameters was studied. This was a pre-specified secondary endpoint of the main trial. A 12-month analysis of 714 patients randomized to either treatment with online post-dilution HDF or continuation of low-flux HD was performed. Both groups were treated with ultrapure dialysis fluids. ESA resistance, measured every three months, was expressed as the ESA index (weight-adjusted weekly ESA dose in daily defined doses [DDD] / hematocrit). The mean ESA index during 12 months was not different between patients treated with HDF or HD (mean difference HDF versus HD over time 0.029 DDD/kg/Hct/week [-0.024 to 0.081]; P = 0.29). Mean transferrin saturation ratio and ferritin levels during the study tended to be lower in patients treated with HDF (-2.52% [-4.72 to -0.31]; P = 0.02, and -49 ng/mL [-103 to 4]; P = 0.06, respectively), although there was a trend for those patients to receive slightly more iron supplementation (7.1 mg/week [-0.4 to 14.5]; P = 0.06). In conclusion, compared to low-flux HD with ultrapure dialysis fluid, treatment with online HDF did not result in a decrease in ESA resistance. ClinicalTrials.gov NCT00205556.
Preschl Barbara
2011-12-01
Background Although numerous efficacy studies in recent years have found internet-based interventions for depression to be effective, there has been scant consideration of therapeutic process factors in the online setting. In face-to-face therapy, the quality of the working alliance explains variance in treatment outcome. However, little is yet known about the impact of the working alliance in internet-based interventions, particularly as compared with face-to-face therapy. Methods This study explored the working alliance between client and therapist in the middle and at the end of a cognitive-behavioral intervention for depression. The participants were randomized to an internet-based treatment group (n = 25) or a face-to-face group (n = 28). Both groups received the same cognitive behavioral therapy over an 8-week timeframe. Participants completed the Beck Depression Inventory (BDI) post-treatment and the Working Alliance Inventory at mid- and post-treatment. Therapists completed the therapist version of the Working Alliance Inventory at post-treatment. Results With the exception of therapists' ratings of the tasks subscale, which were significantly higher in the online group, the two groups' ratings of the working alliance did not differ significantly. Further, significant correlations were found between clients' ratings of the working alliance and therapy outcome at post-treatment in the online group and at both mid- and post-treatment in the face-to-face group. Correlation analysis revealed that the working alliance ratings did not significantly predict the BDI residual gain score in either group. Conclusions Contrary to what might have been expected, the working alliance in the online group was comparable to that in the face-to-face group. However, the results showed no significant relations between the BDI residual gain score and the working alliance ratings in either group. Trial registration ACTRN12611000563965
Preschl, Barbara; Maercker, Andreas; Wagner, Birgit
2011-12-06
Although numerous efficacy studies in recent years have found internet-based interventions for depression to be effective, there has been scant consideration of therapeutic process factors in the online setting. In face-to-face therapy, the quality of the working alliance explains variance in treatment outcome. However, little is yet known about the impact of the working alliance in internet-based interventions, particularly as compared with face-to-face therapy. This study explored the working alliance between client and therapist in the middle and at the end of a cognitive-behavioral intervention for depression. The participants were randomized to an internet-based treatment group (n = 25) or a face-to-face group (n = 28). Both groups received the same cognitive behavioral therapy over an 8-week timeframe. Participants completed the Beck Depression Inventory (BDI) post-treatment and the Working Alliance Inventory at mid- and post-treatment. Therapists completed the therapist version of the Working Alliance Inventory at post-treatment. With the exception of therapists' ratings of the tasks subscale, which were significantly higher in the online group, the two groups' ratings of the working alliance did not differ significantly. Further, significant correlations were found between clients' ratings of the working alliance and therapy outcome at post-treatment in the online group and at both mid- and post-treatment in the face-to-face group. Correlation analysis revealed that the working alliance ratings did not significantly predict the BDI residual gain score in either group. Contrary to what might have been expected, the working alliance in the online group was comparable to that in the face-to-face group. However, the results showed no significant relations between the BDI residual gain score and the working alliance ratings in either group. ACTRN12611000563965. © 2011 Preschl et al; licensee BioMed Central Ltd.
Online Learning Algorithms for Big Data Analytics: A Survey
李志杰; 李元香; 王峰; 何国良; 匡立
2015-01-01
The advent of big data has presented a large array of applications that require real-time processing of massive data arriving at high velocity. How to mine big data streams and turn them into useful information across a wide range of real-world applications is becoming more and more important. Conventional batch machine learning techniques suffer from many limitations when applied to big data analytics tasks. Online learning, which adopts a stream computing model and performs real-time computation on data directly in memory, is a promising tool for data stream learning. This survey first introduces the motivation and background of big data analytics, and then presents the family of classical and state-of-the-art online learning methods and algorithms, which are promising for tackling the emerging challenges of mining big data in a wide range of real-world applications. The main technical content consists of three parts: 1) online learning for linear models; 2) kernel-based online learning for nonlinear models; and 3) non-traditional online learning methods. Detailed models and pseudocode are given for each class of methods where possible, key issues in large-scale machine learning research and applications for big data analytics are discussed, three typical application scenarios of online learning for big data are described, and current and future research directions in online learning are explored.
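The survey's first category, online learning of a linear model, is exemplified by the classic perceptron: each arriving example is touched exactly once and the model is updated in time linear in the number of features. A minimal sketch (the toy data stream and learning rate are illustrative assumptions, not taken from the survey):

```python
import random

def online_perceptron(stream, n_features, lr=1.0):
    """Classic perceptron: one mistake-driven update per arriving example,
    O(d) time per example for d features."""
    w = [0.0] * n_features
    b = 0.0
    mistakes = 0
    for x, y in stream:                     # labels y are in {-1, +1}
        score = sum(wi * xi for wi, xi in zip(w, x)) + b
        if y * score <= 0:                  # misclassified: update
            mistakes += 1
            w = [wi + lr * y * xi for wi, xi in zip(w, x)]
            b += lr * y
    return w, b, mistakes

# Linearly separable toy stream: label = sign(x0 - x1)
random.seed(0)
stream = []
for _ in range(500):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    y = 1 if x[0] - x[1] > 0 else -1
    stream.append((x, y))

w, b, mistakes = online_perceptron(stream, 2)
```

The point of the sketch is the cost profile: memory is O(d) regardless of stream length, which is exactly why online methods suit big data streams better than batch training.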
(no author listed)
2007-01-01
The Dijkstra algorithm is a basic algorithm for analyzing the vehicle routing problem (VRP) in the terminal distribution of a logistics center. To meet actual client demands for service speed and quality, the concepts of economical delivery distance and a best-routing algorithm were developed on the basis of the Dijkstra algorithm, taking into account a coefficient for the degree of road congestion. The economical delivery distance is the shortest effective distance between two customers: the cost of delivering goods over the shortest distance when factors such as road length, congestion level, number of lanes, and road type are considered. The improved algorithm has been used in the development and application of a distribution path information system for the terminal distribution of a logistics center. Simulation and a practical case show that the algorithm is effective and reasonable.
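The core idea, running Dijkstra over edge costs scaled by a road-congestion coefficient, can be sketched as follows; the toy network and coefficient values are illustrative assumptions, not the paper's data:

```python
import heapq

def dijkstra(graph, source):
    """graph: {node: [(neighbor, length_km, congestion_coeff), ...]}.
    The effective cost of an edge is length * congestion_coeff, so a short
    but congested road can lose to a longer free-flowing detour."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                    # stale heap entry
        for v, length, congestion in graph.get(u, []):
            nd = d + length * congestion
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy network: direct road A->C is shorter but congested (coeff 2.0);
# the detour via B is longer but free-flowing (coeff 1.0).
graph = {
    "A": [("C", 4.0, 2.0), ("B", 3.0, 1.0)],
    "B": [("C", 3.0, 1.0)],
    "C": [],
}
dist = dijkstra(graph, "A")
# Effective A->C: direct 4 * 2.0 = 8.0, via B 3.0 + 3.0 = 6.0
```

With the congestion coefficients the route via B wins (cost 6.0 vs. 8.0), even though its physical distance is longer.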
Recommendation in evolving online networks
Hu, Xiao; Zeng, An; Shang, Ming-Sheng
2016-02-01
Recommender systems are an effective tool for finding the most relevant information for online users. By analyzing users' historical selection records, a recommender system predicts the most likely future links in the user-item network and accordingly constructs a personalized recommendation list for each user. So far, the recommendation process has mostly been investigated in static user-item networks. In this paper, we propose a model that allows us to examine the performance of state-of-the-art recommendation algorithms in evolving networks. We find that recommendation accuracy in general decreases with time if the evolution of the online network depends fully on the recommendation. Interestingly, some randomness in users' choices can significantly improve the long-term accuracy of the recommendation algorithm. When a hybrid recommendation algorithm is applied, we find that the optimal parameter gradually shifts towards the diversity-favoring recommendation algorithm, indicating that recommendation diversity is essential for maintaining high long-term recommendation accuracy. Finally, we confirm our conclusions by studying recommendation on networks with real evolution data.
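The feedback loop described above (recommendations shaping the very network they are trained on, with randomness in users' choices restoring diversity) can be illustrated with a toy simulation. The popularity-based recommender and the ε-mixing rule below are simplified stand-ins for the algorithms studied in the paper; all parameters are illustrative:

```python
import random

def simulate(n_users, n_items, steps, epsilon, seed=1):
    """Each step one user consumes an item: with probability epsilon an
    exploratory random pick, otherwise the most popular unseen item
    (i.e. following a popularity-based recommendation)."""
    rng = random.Random(seed)
    counts = [0] * n_items                    # global popularity
    seen = [set() for _ in range(n_users)]
    for _ in range(steps):
        u = rng.randrange(n_users)
        unseen = [i for i in range(n_items) if i not in seen[u]]
        if not unseen:
            continue
        if rng.random() < epsilon:
            item = rng.choice(unseen)                         # explore
        else:
            item = max(unseen, key=lambda i: counts[i])       # follow rec.
        counts[item] += 1
        seen[u].add(item)
    return sum(1 for c in counts if c > 0)    # item coverage

cov_greedy = simulate(20, 50, 400, epsilon=0.0)   # fully rec.-driven
cov_mixed = simulate(20, 50, 400, epsilon=0.3)    # with user randomness
```

When evolution fully follows the recommendation, consumption concentrates on a short popularity prefix; a modest ε spreads attention over many more items, mirroring the paper's observation that diversity sustains long-term accuracy.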
Barskar, Raju; Bharti, Jyoti; Ahmed, Gulfishan Firdose
2010-01-01
E-Commerce offers the banking industry great opportunity, but also creates a set of new risks and vulnerabilities, such as security threats. Information security is therefore an essential management and technical requirement for efficient and effective payment transaction activities over the internet. Still, defining it is a complex endeavor due to constant technological and business change, and it requires a coordinated match of algorithms and technical solutions. E-commerce is not appropriate to all business transactions, and within e-commerce there is no one technology that can or should be appropriate to all requirements. E-commerce is not a new phenomenon: electronic markets, electronic data interchange, and consumer e-commerce have existed for years, with electronic data interchange long serving as a universal and non-proprietary way of doing business. In electronic transactions, security is the most important concern for safeguarding banking payment transactions.
Hromkovic, Juraj
2009-01-01
Explores the science of computing. This book starts with the development of computer science, algorithms and programming, and then explains and shows how to exploit the concepts of infinity, computability, computational complexity, nondeterminism and randomness.
Bayesian online compressed sensing
Rossi, Paulo V.; Kabashima, Yoshiyuki; Inoue, Jun-ichi
2016-08-01
In this paper, we explore the possibilities and limitations of recovering sparse signals in an online fashion. Employing a mean field approximation to the Bayes recursion formula yields an online signal recovery algorithm that can be performed with a computational cost that is linearly proportional to the signal length per update. Analysis of the resulting algorithm indicates that the online algorithm asymptotically saturates the optimal performance limit achieved by the offline method in the presence of Gaussian measurement noise, while differences in the allowable computational costs may result in fundamental gaps of the achievable performance in the absence of noise.
De Voogd, E L; Wiers, R W; Salemink, E
2017-05-01
Anxiety and depression, which are highly prevalent in adolescence, are both characterized by a negative attentional bias. As Attentional Bias Modification (ABM) can reduce such a bias, and might also affect emotional reactivity, it could be a promising early intervention. However, a growing number of studies also report comparable improvements in both active and placebo groups. The current study investigated the effects of eight online sessions of visual search (VS) ABM compared to both a VS placebo-training and a no-training control group in adolescents with heightened symptoms of anxiety and/or depression (n = 108). Attention bias, interpretation bias, and stress-reactivity were assessed pre- and post-training. Primary outcomes of anxiety and depressive symptoms, and secondary measures of emotional resilience were assessed pre- and post-training and at three and six months follow-up. Results revealed that VS training reduced attentional bias compared to both control groups, with stronger effects for participants who completed more training sessions. Irrespective of training condition, an overall reduction in symptoms of anxiety and depression and an increase in emotional resilience were observed up to six months later. The training was evaluated relatively negatively. Results suggest that online ABM as employed in the current study has no added value as an early intervention in adolescents with heightened symptoms.
Paszkowicz, Wojciech [Institute of Physics, Polish Academy of Sciences, Al. Lotnikow 32/46, PL-02-668 Warsaw (Poland)]. E-mail: paszk@ifpan.edu.pl
2006-04-27
Genetic algorithms represent a powerful global-optimisation tool applicable to solving tasks of high complexity in science, technology, medicine, communication, etc. The usual genetic-algorithm calculation scheme is extended here by the introduction of a quadratic self-learning operator, which performs a partial local search for randomly selected representatives of the population. This operator is intended as a minor deterministic contribution to the (stochastic) genetic search. The population representing the trial solutions is split into two equal subpopulations allowed to exhibit different mutation rates (so-called asymmetric mutation). The convergence is studied in detail using a crystallographic test example, the indexing of powder diffraction data of orthorhombic lithium copper oxide, varying such parameters as the mutation rates and the learning rate. It is shown, through the averaged (over the subpopulation) fitness behaviour, how the genetic diversity in the population depends on the mutation rate of the given subpopulation. Conditions and algorithm parameter values favourable for convergence within the proposed approach are discussed using the results for the mentioned example. Further data are studied with a somewhat modified algorithm using periodically varying mutation rates and a problem-specific operator. The chance of finding the global optimum and the convergence speed are observed to depend strongly on the effective mutation level and on the self-learning level. The optimal values of these two parameters are about 6% and 5%, respectively. The periodic changes of mutation rate are found to improve the explorative abilities of the algorithm. The results of the study confirm that the applied methodology improves on the classical genetic algorithm and is therefore expected to be helpful in constructing algorithms for solving similar tasks of higher complexity.
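The splitting of the population into two subpopulations with different mutation rates (asymmetric mutation) can be sketched on a toy fitness function. The OneMax objective, tournament selection, and all parameter values below are illustrative assumptions, not the crystallographic setup of the paper:

```python
import random

def ga_asymmetric(n_bits=30, pop=40, gens=60, rates=(0.06, 0.15), seed=3):
    """Toy GA on OneMax (maximize the number of 1-bits). The population is
    split into two halves, each evolving with its own per-bit mutation
    rate: the low-rate half exploits, the high-rate half explores."""
    rng = random.Random(seed)
    fitness = lambda ind: sum(ind)
    population = [[rng.randint(0, 1) for _ in range(n_bits)]
                  for _ in range(pop)]
    for _ in range(gens):
        nxt = []
        halves = (population[:pop // 2], population[pop // 2:])
        for half, rate in zip(halves, rates):
            for _ in range(len(half)):
                a, b = rng.sample(half, 2)          # tournament of two
                parent = max(a, b, key=fitness)
                # flip each bit independently with the half's own rate
                child = [bit ^ (rng.random() < rate) for bit in parent]
                nxt.append(child)
        population = nxt
    return max(map(fitness, population))

best = ga_asymmetric()
```

The interesting knob is `rates`: raising the second rate keeps that subpopulation diverse (as the paper observes via averaged fitness), while the low-rate subpopulation drives convergence.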
Optimization of order allocation for online retailing in a random environment
吴限; 陈淮莉
2013-01-01
This paper addresses the problem that randomness and delivery delays affect order fulfillment in an online shopping environment. Building on the (Q, R) inventory model, a drop-shipping fulfillment option is added under the assumption that both customer demand and the order lead time are stochastic. Customers are divided into two grades, high and low, and an order allocation strategy is designed that guarantees priority for high-grade customers. The Monte Carlo simulation method is used to calculate the profit-maximizing threshold under given conditions, and a sensitivity analysis examines how changes in the order demands of different customers affect this threshold. The results indicate that the threshold strategy improves the online retailer's expected profit and customer service level, and that variations in the random factors have a significant influence on the profit-maximizing threshold.
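The Monte Carlo threshold search can be sketched as follows. The single-period profit model, cost figures, and demand distribution are illustrative assumptions standing in for the paper's (Q, R) model with lead times and customer grades:

```python
import random

def expected_profit(threshold, n_sims=2000, seed=7):
    """Monte Carlo estimate of expected profit for one review period.
    Demand up to `threshold` is filled from on-hand stock (cheaper unit
    cost); the remainder is drop-shipped (higher unit cost); leftover
    stock incurs a holding cost. All figures are illustrative, not from
    the paper. The fixed seed gives common random numbers across
    thresholds, so the comparison is not polluted by sampling noise."""
    rng = random.Random(seed)
    price, stock_cost, drop_cost, holding = 10.0, 4.0, 7.0, 0.5
    total = 0.0
    for _ in range(n_sims):
        demand = rng.randint(0, 40)               # random period demand
        from_stock = min(demand, threshold)
        dropped = demand - from_stock
        revenue = price * demand
        cost = stock_cost * from_stock + drop_cost * dropped
        cost += holding * max(threshold - demand, 0)   # leftover stock
        total += revenue - cost
    return total / n_sims

# Scan candidate thresholds and keep the profit-maximizing one
best = max(range(0, 41, 5), key=expected_profit)
```

A sensitivity analysis like the paper's amounts to re-running the scan while varying the demand distribution or cost parameters and watching how `best` moves.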
Gotink, Rinske A; Younge, John O; Wery, Machteld F; Utens, Elisabeth M W J; Michels, Michelle; Rizopoulos, Dimitris; van Rossum, Liesbeth F C; Roos-Hesselink, Jolien W; Hunink, Myriam M G
2017-01-01
There is increasing evidence that mindfulness can reduce stress and thereby affect other psychological and physiological outcomes as well. Earlier, we reported the direct 3-month results of an online modified mindfulness-based stress reduction training in patients with heart disease; here we evaluate the effect at 12-month follow-up. 324 patients (mean age 43.2 years, 53.7% male) were randomized in a 2:1 ratio to an additional 3-month online mindfulness training or to usual care alone. The primary outcome was exercise capacity measured with the 6-minute walk test (6MWT). Secondary outcomes were blood pressure, heart rate, respiratory rate, NT-proBNP, cortisol levels (scalp hair sample), mental and physical functioning (SF-36), anxiety and depression (HADS), perceived stress (PSS), and social support (PSSS12). Differences between groups on the repeated outcome measures were analyzed with linear mixed models. At 12-month follow-up, participants showed a trend toward improved exercise capacity (6MWT: 17.9 meters, p = 0.055) compared to usual care. Cohen's d showed significant but small improvements in exercise capacity (d = 0.22; 95% CI 0.05 to 0.39), systolic blood pressure (d = 0.19; 95% CI 0.03 to 0.36), mental functioning (d = 0.22; 95% CI 0.05 to 0.38), and depressive symptomatology (d = 0.18; 95% CI 0.02 to 0.35). All other outcome measures did not change statistically significantly. In the as-treated analysis, systolic blood pressure decreased significantly by 5.5 mmHg (p = 0.045; d = 0.23; 95% CI 0.05 to 0.41). Online mindfulness training shows favorable albeit small long-term effects on exercise capacity, systolic blood pressure, mental functioning, and depressive symptomatology in patients with heart disease and might therefore be a beneficial addition to current clinical care. www.trialregister.nl NTR3453.
Kamchevska, Valerija; Cristofori, Valentina; Da Ros, Francesco
2016-01-01
We propose and demonstrate an algorithm that allows for automatic synchronization of SDN-controlled all-optical TDM switching nodes connected in a ring network. We experimentally show successful WDM-SDM transmission of data bursts between all ring nodes.
A Rapidly-exploring Random Tree Algorithm Based on Dynamic Step Size
王道威; 朱明富; 刘慧
2016-01-01
Although the traditional Rapidly-exploring Random Tree (RRT) algorithm has many good features, its planned paths exhibit considerable randomness because the expansion vertices are selected at random. Building on an improvement of the RRT algorithm, this paper proposes a new RRT path planning algorithm with a dynamic step size, where the step size is the minimum unit length of tree growth. Adding a dynamic step size to the traditional RRT reduces the uncertainty of the rapidly exploring random tree and improves its obstacle avoidance capability, so that the resulting RRT path planning algorithm combines high determinism with strong obstacle avoidance. Simulation results show that the algorithm plans deterministic paths quickly and with good obstacle avoidance.
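A dynamic-step RRT can be sketched as below, with the step size shrinking near obstacles and growing in free space. The clearance-based step rule and all parameters are illustrative assumptions; the paper's exact rule is not given in the abstract:

```python
import math
import random

def rrt(start, goal, obstacles, step_min=0.2, step_max=1.0,
        goal_tol=0.8, max_iter=4000, seed=5):
    """RRT on the 2-D square [0, 10]^2 with circular obstacles
    (cx, cy, radius). The extension step is proportional to the nearest
    node's obstacle clearance: short cautious steps near obstacles,
    long steps in open space (a simple 'dynamic step' heuristic)."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {start: None}

    def clearance(p):
        return min(math.dist(p, (ox, oy)) - r for ox, oy, r in obstacles)

    for _ in range(max_iter):
        sample = (rng.uniform(0, 10), rng.uniform(0, 10))
        near = min(nodes, key=lambda n: math.dist(n, sample))
        step = max(step_min, min(step_max, 0.5 * clearance(near)))
        d = math.dist(near, sample)
        if d == 0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if clearance(new) <= 0:
            continue                       # extension would collide
        nodes.append(new)
        parent[new] = near
        if math.dist(new, goal) < goal_tol:
            path = [new]                   # walk back to the root
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
    return None

obstacles = [(5.0, 5.0, 1.5)]
path = rrt((1.0, 1.0), (9.0, 9.0), obstacles)
```

The `0.5 * clearance` factor is the illustrative dynamic-step rule: it caps growth near the obstacle while allowing `step_max` strides through free space.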
Hackworth, Naomi J; Matthews, Jan; Burke, Kylie; Petrovic, Zvezdana; Klein, Britt; Northam, Elisabeth A; Kyrios, Michael; Chiechomski, Lisa; Cameron, Fergus J
2013-12-17
Management of Type 1 diabetes comes with substantial personal and psychological demands particularly during adolescence, placing young people at significant risk for mental health problems. Supportive parenting can mitigate these risks, however the challenges associated with parenting a child with a chronic illness can interfere with a parent's capacity to parent effectively. Interventions that provide support for both the adolescent and their parents are needed to prevent mental health problems in adolescents; to support positive parent-adolescent relationships; and to empower young people to better self-manage their illness. This paper presents the research protocol for a study evaluating the efficacy of the Nothing Ventured Nothing Gained online adolescent and parenting intervention which aims to improve the mental health outcomes of adolescents with Type 1 diabetes. A randomized controlled trial using repeated measures with two arms (intervention and wait-list control) will be used to evaluate the efficacy and acceptability of the online intervention. Approximately 120 adolescents with Type 1 diabetes, aged 13-18 years and one of their parents/guardians will be recruited from pediatric diabetes clinics across Victoria, Australia. Participants will be randomized to receive the intervention immediately or to wait 6 months before accessing the intervention. Adolescent, parent and family outcomes will be assessed via self-report questionnaires at three time points (baseline, 6 weeks and 6 months). The primary outcome is improved adolescent mental health (depression and anxiety). Secondary outcomes include adolescent behavioral (diabetes self-management and risk taking behavior), psychosocial (diabetes relevant quality of life, parent reported child well-being, self-efficacy, resilience, and perceived illness benefits and burdens); metabolic (HbA1c) outcomes; parent psychosocial outcomes (negative affect and fatigue, self-efficacy, and parent experience of child
Clare L Atzema
BACKGROUND: Emergency department discharge instructions are variably understood by patients, and in the setting of emergency department crowding, innovations are needed to counteract shortened interaction times with the physician. We evaluated the effect of viewing an online video of diagnosis-specific discharge instructions on patient comprehension and recall of instructions. METHODS: In this prospective, single-center, randomized controlled trial conducted between November 2011 and January 2012, we randomized emergency department patients who were discharged with one of 38 diagnoses to either view (after they left the emergency department) a vetted online video of diagnosis-specific discharge instructions, or to usual care. Patients were subsequently contacted by telephone and asked three standardized questions about their discharge instructions; one point was awarded for each correct answer. Using an intention-to-treat analysis, differences between groups were assessed using univariate testing, and with logistic regression that accounted for clustering on managing physician. A secondary outcome measure was patient satisfaction with the videos, on a 10-point scale. RESULTS: Among 133 patients enrolled, mean age was 46.1 (s.d. 21.5) and 55% were female. Patients in the video group had 19% higher mean scores (2.5, s.d. 0.7) than patients in the control group (2.1, s.d. 0.8) (p = 0.002). After adjustment for patient age, sex, first language, triage acuity score, and clustering, the odds of achieving a fully correct score (3 out of 3) were 3.5 (95% CI, 1.7 to 7.2) times higher in the video group, compared to the control group. Among those who viewed the videos, the median rating of the videos was 10 (IQR 8 to 10). CONCLUSIONS: In this single-center trial, patients who viewed an online video of their discharge instructions scored higher on their understanding of key concepts around their diagnosis and subsequent care. Those who viewed the videos found them to
Context-Aware Online Commercial Intention Detection
Hu, Derek Hao; Shen, Dou; Sun, Jian-Tao; Yang, Qiang; Chen, Zheng
With more and more commercial activities moving onto the Internet, people tend to purchase what they need through the Internet or conduct some online research before the actual transactions happen. For many Web users, their online commercial activities start from submitting a search query to search engines. Just like common Web search queries, queries with commercial intention are usually very short. Recognizing queries with commercial intention among common queries will help search engines provide proper search results and advertisements, help Web users obtain the right information they desire, and help advertisers benefit from the potential transactions. However, the intentions behind a query vary a lot for users with different backgrounds and interests. The intentions can even be different for the same user when the query is issued in different contexts. In this paper, we present a new algorithm framework based on skip-chain conditional random fields (SCCRF) for automatically classifying Web queries according to context-based online commercial intention. We analyze our algorithm's performance both theoretically and empirically. Extensive experiments on several real search engine log datasets show that our algorithm can improve the F1 score by more than 10% over previous algorithms for commercial intention detection.
Distributed Consensus Algorithms in Sensor Networks: Link Failures and Channel Noise
Kar, Soummya
2007-01-01
We study average consensus when, simultaneously, the topology is random (links are offline or online at random times) and the communication among sensors is corrupted by additive noise. Additive noise causes the states of the standard average consensus algorithm to diverge. To overcome this, we consider two modifications to average consensus: 1) the A-ND algorithm, with weights decaying to zero (slowly, satisfying a persistence condition); and 2) the A-NC algorithm, with time-invariant weights, which averages successive runs restarted with the same initial conditions. To study the behavior of these two algorithms under simultaneous random link failures and additive noise, we use controlled Markov processes and stochastic approximation results. With respect to the A-ND algorithm, we show that the states reach a.s. consensus to a finite random variable, whose variance can be made arbitrarily small, and the expected value of t...
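The A-ND idea, decaying weights that suppress the additive noise while their divergent sum still forces agreement, can be sketched on a small directed ring. The topology, link probability, and noise level below are illustrative assumptions, not the paper's setup:

```python
import random

def consensus_decaying(x0, steps=3000, p_link=0.7, noise=0.1, seed=11):
    """Consensus on a directed ring where each link is online with
    probability p_link per round and every exchanged value is corrupted
    by additive Gaussian noise. Weights alpha_t = 1/(t+1) decay to zero
    but sum to infinity (the persistence condition), so agreement is
    still reached while the injected noise is progressively damped."""
    rng = random.Random(seed)
    n = len(x0)
    x = list(x0)
    for t in range(steps):
        alpha = 1.0 / (t + 1)
        new = list(x)                      # synchronous update
        for i in range(n):
            j = (i + 1) % n                # ring neighbour
            if rng.random() < p_link:      # link online this round
                noisy_diff = (x[j] - x[i]) + rng.gauss(0, noise)
                new[i] += alpha * noisy_diff
        x = new
    return x

x0 = [0.0, 2.0, 4.0, 6.0]
x = consensus_decaying(x0)
spread = max(x) - min(x)                   # disagreement after 3000 rounds
```

With constant weights the same noise would make the states diverge; with the decaying weights the spread shrinks well below the initial disagreement of 6, at the price of slower convergence.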
On the Value of Job Migration in Online Makespan Minimization
Albers, Susanne
2011-01-01
Makespan minimization on identical parallel machines is a classical scheduling problem. We consider the online scenario where a sequence of $n$ jobs has to be scheduled non-preemptively on $m$ machines so as to minimize the maximum completion time of any job. The best competitive ratio that can be achieved by deterministic online algorithms is in the range $[1.88, 1.9201]$. Currently no randomized online algorithm with a smaller competitiveness is known, for general $m$. In this paper we explore the power of job migration, i.e., an online scheduler is allowed to perform a limited number of job reassignments. Migration is a common technique used in theory and practice to balance load in parallel processing environments. As our main result we settle the performance that can be achieved by deterministic online algorithms. We develop an algorithm that is $\alpha_m$-competitive, for any $m \geq 2$, where $\alpha_m$ is the solution of a certain equation. For $m = 2$, $\alpha_2 = 4/3$ and $\lim_{m \rightarrow \infty} \al...
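For context, the classical deterministic baseline without migration is Graham's list scheduling, which assigns each arriving job to the currently least-loaded machine and is $(2 - 1/m)$-competitive. A minimal sketch; the instance is the standard worst case for $m = 2$, not an example from the paper:

```python
import heapq

def greedy_list_schedule(jobs, m):
    """Graham's online rule: each arriving job (processing time p) goes
    to the currently least-loaded of the m machines. Returns the
    per-job machine assignment and the resulting makespan."""
    loads = [(0, i) for i in range(m)]      # (load, machine) min-heap
    heapq.heapify(loads)
    assignment = [0] * len(jobs)
    for k, p in enumerate(jobs):
        load, i = heapq.heappop(loads)      # least-loaded machine
        assignment[k] = i
        heapq.heappush(loads, (load + p, i))
    return assignment, max(load for load, _ in loads)

# Worst case for m = 2: greedy reaches makespan 3, while an offline
# optimum pairs the two unit jobs and achieves 2 (ratio 3/2 = 2 - 1/m).
jobs = [1, 1, 2]
assignment, makespan = greedy_list_schedule(jobs, 2)
```

Migration attacks exactly this weakness: reassigning one of the unit jobs after the length-2 job arrives recovers the optimal schedule, which is why limited migration lets the paper's algorithm beat the no-migration deterministic bounds.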
Zernicke Kristin A
2013-02-01
Abstract. Background: Elevated stress can exacerbate cancer symptom severity, and after completion of primary cancer treatments, many individuals continue to have significant distress. Mindfulness-Based Cancer Recovery (MBCR) is an 8-week group psychosocial intervention consisting of training in mindfulness meditation and yoga designed to mitigate stress, pain, and chronic illness. Efficacy research shows face-to-face (F2F) MBCR programs have positive benefits for cancer patients; however, barriers exist that impede participation in F2F groups. While online MBCR groups are available to the public, none have been evaluated. Primary objective: determine whether underserved patients are willing to participate in and complete an online MBCR program. Secondary objectives: determine whether online MBCR will mirror previous efficacy findings from F2F MBCR groups on patient-reported outcomes. Method/design: The study includes cancer patients in Alberta, exhibiting moderate distress, who do not have access to F2F MBCR. Participants will be randomized to either online MBCR or waiting for the next available group. An anticipated sample size of 64 participants will complete measures online pre and post treatment or waiting period. Feasibility will be tracked by monitoring the numbers eligible and participating at each stage of the protocol. Discussion: To date, 47 participants have completed or are completing the intervention. Data suggest it is possible to conduct a randomized waitlist-controlled trial of online MBCR to reach underserved cancer survivors. Trial registration: ClinicalTrials.gov identifier NCT01476891
Kamalanaban Ethala
2013-01-01
In day-to-day information security infrastructure, intrusion detection is indispensable. Signature-based intrusion detection mechanisms are effective at detecting many types of attacks, but this mechanism alone is not sufficient in many cases. Another intrusion detection method, K-means, is therefore employed for clustering and classifying unlabelled data. An IDS is a special embedded device or dedicated software package that monitors the events occurring in a computer system or network (WLAN (Wi-Fi, WiMAX) or LAN (Ethernet, FDDI, ADSL, token-ring based)) and analyses them for signs of possible incidents, which are violations or forthcoming threats of violations of computer security policies or standard security policies (i.e., DMA acts). We propose a new methodology for detecting intrusions by means of clustering and classification algorithms: correlation clustering and the K-means clustering algorithm for clustering, and the random forest algorithm for classification. This extension establishes a layer that refines the escalated alerts using signature-based correlation. In this study, a signature-based intrusion detection system with an optimised algorithm for better prediction of intrusions is addressed. Results are presented and discussed.
Online Structure Learning Algorithm for Weighted Networks%加权网络的在线结构学习算法
蒋晓娟; 张文生
2016-01-01
With the continuous development of internet technology, the scale of network datasets has increased massively. Analyzing the structure of network data is a research hotspot in machine learning and network applications. In this paper, a scalable online learning algorithm is proposed to speed up the inference procedure for the latent structure of weighted networks. Firstly, an exponential family distribution is utilized to represent the generative process of weighted networks. Then, using stochastic variational inference, the online weighted stochastic block model (ON-WSBM) is developed to efficiently approximate the posterior distribution of the underlying block structure. ON-WSBM adopts an incremental approach based on subsampling to reduce the time complexity of optimization, and then employs stochastic optimization using the natural gradient to simplify the calculation and further accelerate the learning procedure. Extensive experiments on four popular datasets demonstrate that ON-WSBM can efficiently capture the community structure of complex weighted networks and can achieve comparatively high prediction accuracy in a short time.
Moussa, Jonathan E
2014-01-07
The random-phase approximation with second-order screened exchange (RPA+SOSEX) is a model of electron correlation energy with two caveats: its accuracy depends on an arbitrary choice of mean field, and it scales as O(n^5) operations and O(n^3) memory for n electrons. We derive a new algorithm that reduces its scaling to O(n^3) operations and O(n^2) memory using controlled approximations and a new self-consistent field that approximates Brueckner coupled-cluster doubles theory with RPA+SOSEX, referred to as Brueckner RPA theory. The algorithm comparably reduces the scaling of second-order Møller-Plesset perturbation theory with smaller cost prefactors than RPA+SOSEX. Within a semiempirical model, we study H2 dissociation to test accuracy and Hn rings to verify scaling.
Jin, Curtis; Michielssen, Eric; Rand, Stephen
2013-01-01
Scattering hinders the passage of light through random media and consequently limits the usefulness of optical techniques for sensing and imaging. Thus, methods for increasing the transmission of light through such random media are of interest. Against this backdrop, Dorokhov, Pendry and others theoretically predicted the existence of a few highly transmitting eigen-wavefronts with transmission coefficients close to one in strongly backscattering random media. The breakthrough experiments of Vellekoop and Mosk confirmed the existence of these highly transmitting eigen-wavefronts and demonstrated that they could be discovered by using information from the far side of the scattering medium. Here, we numerically analyze this phenomenon in 2-D with fully spectrally accurate simulators and provide rigorous numerical evidence confirming the existence of these highly transmitting eigen-wavefronts in random media composed of hundreds of thousands of non-absorbing scatterers. We then develop physically realizable algo...
Muuro, Maina Elizaphan; Oboko, Robert; Wagacha, Waiganjo Peter
2016-01-01
In this paper we explore the impact of an intelligent grouping algorithm based on learners' collaborative competency, compared with (a) an instructor-based Grade Point Average (GPA) method and (b) a random method, on group outcomes and group collaboration problems in an online collaborative learning environment. An intelligent grouping…
Dipak Kumar Jana
2013-01-01
An inventory model for a deteriorating item is considered over a random planning horizon under inflation and the time value of money. The model is described in two different environments: random and fuzzy random. The proposed model allows a stock-dependent consumption rate and shortages with partial backlogging. In the fuzzy stochastic model, possibility chance constraints are used for defuzzification of the imprecise expected total profit. Finally, a genetic algorithm (GA) and a fuzzy simulation-based genetic algorithm (FSGA) are used to make decisions for the above inventory models. The models are illustrated with some numerical data, and a sensitivity analysis on the expected profit function is also presented. Scope and purpose: The traditional inventory model considers the ideal case in which depletion of inventory is caused by a constant demand rate. However, to keep sales higher, the inventory level would need to remain high, which would also result in higher holding or procurement costs. Also, in many real situations, some customers may turn away during a longer shortage period. For instance, for fashionable commodities and high-tech products with short product life cycles, a customer's willingness to wait for backlogging diminishes with the length of the waiting time. Most classical inventory models did not take into account the effects of inflation and the time value of money. But the economic situation of most countries has changed to such an extent, due to large-scale inflation and the consequent sharp decline in the purchasing power of money, that these effects can no longer be ignored. The purpose of this paper is to maximize the expected profit over the random planning horizon.
Drozdov, Daniel; Schwarz, Stefanie; Kutz, Alexander; Grolimund, Eva; Rast, Anna Christina; Steiner, Deborah; Regez, Katharina; Schild, Ursula; Guglielmetti, Merih; Conca, Antoinette; Reutlinger, Barbara; Ottiger, Cornelia; Buchkremer, Florian; Haubitz, Sebastian; Blum, Claudine
2015-01-01
Background Urinary tract infections (UTIs) are common drivers of antibiotic use. The minimal effective duration of antibiotic therapy for UTIs is unknown, but any reduction is important to diminish selection pressure for antibiotic resistance, costs, and drug-related side-effects. The aim of this study was to investigate whether an algorithm based on procalcitonin (PCT) and quantitative pyuria reduces antibiotic exposure. Methods From April 2012 to March 2014, we conducted a factorial design ...
2014-01-01
Background Arthritis and musculoskeletal conditions are the leading cause of long-term work disability (WD), an outcome with a major impact on quality of life and a high cost to society. The importance of decreased at-work productivity has also recently been recognized. Despite the importance of these problems, few interventions have been developed to reduce the impact of arthritis on employment. We have developed a novel intervention called “Making It Work”, a program to help people with inflammatory arthritis (IA) deal with employment issues, prevent WD and improve at-work productivity. After favorable results in a proof-of-concept study, we converted the program to a web-based format for broader dissemination and improved accessibility. The objectives of this study are: 1) to evaluate in a randomized controlled trial (RCT) the effectiveness of the program at preventing work cessation and improving at-work productivity; 2) to perform a cost-utility analysis of the intervention. Methods/Design 526 participants with IA will be recruited from British Columbia, Alberta, and Ontario in Canada. The intervention consists of a) 5 online group sessions; b) 5 web-based e-learning modules; c) consultations with an occupational therapist for an ergonomic work assessment and a vocational rehabilitation counselor. Questionnaires will be administered online at baseline and every 6 months to collect information about demographics, disease measures, costs, work-related risk factors for WD, quality of life, and work outcomes. Primary outcomes include at-work productivity and time to work cessation of > 6 months for any reason. Secondary outcomes include temporary work cessation, number of days missed from work per year, reduction in hours worked per week, quality adjusted life year for the cost utility analysis, and changes from baseline in employment risk factors. Analysis of Variance will evaluate the intervention’s effect on at-work productivity, and multivariable
Passive Queue Management Algorithm Based on Twice Randomly Dropping Packets
姜文刚; 孙金生; 王执铨
2011-01-01
Active queue management increases hardware-resource and computation overhead, suffers from sensitive parameter settings, and responds with a lag behind the actual network state, so it has not been widely deployed in real networks. We therefore improve drop-tail, the most widely used passive queue management scheme, and propose a passive queue management algorithm based on twice randomly dropping packets: when the queue is full, two packets are dropped at random, which mitigates the defects of drop-tail and improves the transmission performance of the network. We also put forward the concept of rate fairness. The proposed algorithm penalizes TCP connections that occupy more queue space, effectively improving both RTT fairness and rate fairness, and its computational cost is small. NS2 simulations show the effectiveness of the algorithm.
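The core dropping rule can be sketched in a few lines. The list-based queue and the choice to admit the arriving packet after evicting two random residents are my reading of the abstract, not code from the paper:

```python
import random

def enqueue_twice_random_drop(queue, capacity, packet, rng=random):
    """Twice-random-drop variant of drop-tail: when the queue is full,
    evict two randomly chosen queued packets (which statistically hits
    flows occupying more queue space harder), then admit the arrival."""
    if len(queue) >= capacity:
        for _ in range(2):                      # the two random drops
            if queue:
                del queue[rng.randrange(len(queue))]
    queue.append(packet)
    return queue
```

Because victims are chosen uniformly over queued packets, a flow holding more slots is proportionally more likely to lose packets, which is the fairness mechanism the abstract describes.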
Guais, J.C. [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires
1967-07-01
After a brief survey of classical techniques for static optimization, we present a random-seeking method for minimizing (or maximizing) an arbitrary function of any number of variables, with constraints. The resulting program is shown and illustrated by some examples. Comparison with classical methods points out the advantages of the RANDOM program in cases where analytic procedures fail or require too much calculation time. (author)
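A minimal version of such a random-seeking program, assuming box bounds plus an arbitrary feasibility predicate for the constraints (the original RANDOM program's sampling scheme is not specified in the abstract):

```python
import random

def random_search(f, bounds, feasible=lambda x: True, iters=10000, rng=random):
    """Minimise f over the box 'bounds' by pure random sampling,
    keeping only points that satisfy the constraint predicate."""
    best_x, best_f = None, float("inf")
    for _ in range(iters):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        if not feasible(x):
            continue                      # constraint violated: reject sample
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f
```

As the abstract notes, this needs no derivatives or analytic structure, which is exactly why it still works where classical procedures fail.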
Olivennes, F.; Trew, G.; Borini, A.; Broekmans, F.; Arriagada, P.; Warne, D. W.; Howles, C. M.
2015-01-01
In this randomized, controlled, open-label, phase IV study, ovarian response after a follitropin alfa starting dose determined by the CONSORT calculator was compared with a standard dose (150 IU). Normo-ovulatory women (aged 18-34 years) eligible for assisted reproductive techniques were recruited (
Loosvelt, Lien; Peters, Jan; Skriver, Henning
2012-01-01
acquired by the Danish EMISAR on four dates within the period April to July in 1998. The predictive capacity of each feature is analyzed by the importance score generated by random forests (RF). Results show that according to the variation in importance score over time, a distinction can be made between...
Duhem Stephane
2011-01-01
Abstract Background Suicide attempts (SA) constitute a serious clinical problem. People who attempt suicide are at high risk of further repetition. However, no interventions have been shown to be effective in reducing repetition in this group of patients. Methods/Design Multicentre randomized controlled trial. We examine the effectiveness of the «ALGOS algorithm»: an intervention based on a decision tree of contact types which aims at reducing the incidence of repeated suicide attempts during 6 months. This case-management algorithm combines the two intervention strategies that have shown a significant reduction in the number of SA repeaters: systematic telephone contact (ineffective in first-attempters) and the «Crisis card» (effective only in first-attempters). Participants who are lost from contact, and those refusing healthcare, can then benefit from «short letters» or «postcards». Discussion The ALGOS algorithm is an easily reproducible and inexpensive intervention that will supply guidelines for the assessment and management of a population sometimes in difficulty with healthcare compliance. Furthermore, it will target subgroups of patients by providing specific interventions to optimize the benefits of the case-management strategy. Trial Registration The study was registered with the ClinicalTrials.gov Registry; number: NCT01123174.
Ramadhani, T.; Hertono, G. F.; Handari, B. D.
2017-07-01
The Multiple Traveling Salesman Problem (MTSP) is the extension of the Traveling Salesman Problem (TSP) in which the shortest routes of m salesmen, all of whom start and finish in a single city (depot), are determined. If there is more than one depot and each salesman starts from and returns to the same depot, the problem is called the Fixed Destination Multi-depot Multiple Traveling Salesman Problem (MMTSP). In this paper, the MMTSP is solved using the Ant Colony Optimization (ACO) algorithm, a metaheuristic derived from the behavior of ants finding the shortest route(s) from the anthill to a food source. In solving the MMTSP, the algorithm is evaluated with respect to different choices of cities as depots and non-random settings of three MMTSP parameters: m, K, and L, which represent the number of salesmen, the fewest cities that must be visited by a salesman, and the most cities that can be visited by a salesman, respectively. The implementation is tested on four datasets from TSPLIB. The results show that the choice of depot cities and the three MMTSP parameters, of which m is the most important, affect the solution.
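For the single-salesman, single-depot core that the MMTSP builds on, a compact ACO sketch looks like the following. The pheromone parameters and the absence of the m/K/L route-splitting logic are simplifications relative to the paper's setup:

```python
import random

def aco_tsp(dist, n_ants=20, n_iters=50, alpha=1.0, beta=3.0, rho=0.5, rng=None):
    """Minimal Ant Colony Optimization sketch for a single-depot TSP.
    dist: symmetric matrix of pairwise distances; city 0 is the depot."""
    rng = rng or random.Random()
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]            # pheromone trails
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour, unvisited = [0], set(range(1, n))
            while unvisited:                       # probabilistic city choice
                i = tour[-1]
                w = [(j, tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta)
                     for j in unvisited]
                r, acc = rng.random() * sum(p for _, p in w), 0.0
                for j, p in w:
                    acc += p
                    if acc >= r:
                        break
                tour.append(j)
                unvisited.remove(j)
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        for i in range(n):                         # evaporation
            for j in range(n):
                tau[i][j] *= (1 - rho)
        for tour, length in tours:                 # deposit, shorter = stronger
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                tau[i][j] += 1.0 / length
                tau[j][i] += 1.0 / length
    return best_tour, best_len
```

In the MMTSP setting, each of the m ants' partial routes would additionally be constrained to visit between K and L cities before returning to its depot.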
唐昊; 奚宏生; 殷保群
2004-01-01
Based on the theory of Markov performance potentials and the neuro-dynamic programming (NDP) methodology, we study a simulation optimization algorithm for a class of continuous-time Markov decision processes (CTMDPs) under randomized stationary policies. The proposed algorithm estimates the gradient of the average-cost performance measure with respect to the policy parameters by transforming the continuous-time Markov process into its uniformized Markov chain and simulating a single sample path of that chain. The goal is to find a suboptimal randomized stationary policy. The algorithm can meet the needs of performance optimization for many difficult systems with large state spaces. Finally, a numerical example for a controlled Markov process is provided.
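The uniformization step the abstract relies on is standard: with Λ ≥ max_i −Q_ii, the discrete-time chain with transition matrix P = I + Q/Λ, observed at Poisson(Λ) event times, reproduces the CTMC. A sketch of the construction (my own, not the paper's code):

```python
def uniformize(Q, Lam=None):
    """Uniformization: turn a CTMC generator matrix Q into the
    transition matrix P = I + Q/Lam of its uniformized chain.
    The default Lam = max_i(-Q[i][i]) is the smallest valid rate."""
    n = len(Q)
    if Lam is None:
        Lam = max(-Q[i][i] for i in range(n))
    return [[(1.0 if i == j else 0.0) + Q[i][j] / Lam for j in range(n)]
            for i in range(n)]
```

Simulating a single sample path of this chain is what lets the algorithm estimate the performance gradient without solving the CTMDP analytically.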
Gnagnarella, Patrizia; Misotti, Alessandro Maria; Santoro, Luigi; Akoumianakis, Demosthenes; Del Campo, Laura; De Lorenzo, Francesco; Lombardo, Claudio; Milolidakis, Giannis; Sullivan, Richard; McVie, John Gordon
2016-09-01
We hypothesized that cancer patients using an Internet website would show an improvement in knowledge about healthy eating habits, and that this might be enhanced by social media interaction. A 6-month randomized intervention was set up. Eligible subjects were allocated to intervention (IG) and control groups (CG). The IG had access to the website, and the CG was provided with printed versions. All enrolled participants filled in the Nutrition Questionnaire (NQ), Quality of Life Questionnaire (QoL) and Psychological Distress Inventory (PDI) at baseline and after 6 months, and the difference between post- and pre-questionnaires was calculated. Seventy-four subjects (CG 39; IG 35) completed the study. There was an increase in the NQ score after the intervention in both groups, even if not statistically significant. Dividing the IG into three categories, no (NI), low (LI) and high interaction (HI), the analysis of the PDI found a decreased score (improvement) in the CG (-0.2) and in the HI (-1.7), and an increased score (worsening) in the NI (+3.3) (p = NS). We found an increased QoL score in both the CG and IG (adjusted LSMeans +3.5 and +2.8 points, respectively; p = NS). This study represents an example of how to support cancer patients. Despite the lack of significant effects, the critical points and problems encountered may be of interest to researchers and organizations working in the cancer setting. Intervention strategies to support patients during the care process are needed in order to attain the full potential of patient-centred care on cancer outcomes.
Near-Optimal Random Walk Sampling in Distributed Networks
Sarma, Atish Das; Pandurangan, Gopal
2012-01-01
Performing random walks in networks is a fundamental primitive that has found numerous applications in communication networks, such as token management, load balancing, network topology discovery and construction, search, and peer-to-peer membership management. While several such algorithms are ubiquitous and use numerous random walk samples, the walks themselves have always been performed naively. In this paper, we focus on the problem of performing random walk sampling efficiently in a distributed network. Given bandwidth constraints, the goal is to minimize the number of rounds and messages required to obtain several random walk samples in a continuous online fashion. We present the first round- and message-optimal distributed algorithms, a significant improvement on all previous approaches. The theoretical analysis and comprehensive experimental evaluation of our algorithms show that they perform very well in different types of networks of differing topologies. In particular, our results show h...
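The naive baseline the paper improves on is easy to state: an ℓ-step walk forwards a token ℓ times, costing one message per step. A sequential sketch of that baseline (the distributed speed-up itself is not reproduced here):

```python
import random

def random_walk_sample(adj, start, length, rng=random):
    """Naive length-step random walk on an adjacency-list graph.
    In the distributed setting each hop is one message, so this
    baseline costs 'length' messages and rounds per sample."""
    node = start
    for _ in range(length):
        node = rng.choice(adj[node])   # move to a uniformly random neighbor
    return node
```

The paper's algorithms obtain the same sample distribution with sublinear (in ℓ) rounds by stitching together precomputed short walks rather than forwarding a token hop by hop.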
Aaron Fisher
2014-10-01
Scatterplots are the most common way for statisticians, scientists, and the public to visually detect relationships between measured variables. At the same time, and despite widely publicized controversy, P-values remain the most commonly used measure to statistically justify relationships identified between variables. Here we measure the ability to detect statistically significant relationships from scatterplots in a randomized trial of 2,039 students in a statistics massive open online course (MOOC). Each subject was shown a random set of scatterplots and asked to visually determine if the underlying relationships were statistically significant at the P < 0.05 level. Subjects correctly classified only 47.4% (95% CI [45.1%–49.7%]) of statistically significant relationships, and 74.6% (95% CI [72.5%–76.6%]) of non-significant relationships. Adding visual aids such as a best fit line or scatterplot smooth increased the probability a relationship was called significant, regardless of whether the relationship was actually significant. Classification of statistically significant relationships improved on repeat attempts of the survey, although classification of non-significant relationships did not. Our results suggest: (1) that evidence-based data analysis can be used to identify weaknesses in theoretical procedures in the hands of average users, (2) data analysts can be trained to improve detection of statistically significant results with practice, but (3) data analysts have incorrect intuition about what statistically significant relationships look like, particularly for small effects. We have built a web tool for people to compare scatterplots with their corresponding p-values which is available here: http://glimmer.rstudio.com/afisher/EDA/.
Image Segmentation Algorithm Based on Mean Shift and Random Walk
穆克; 程伟; 褚俊霞
2012-01-01
An improved random walk algorithm is proposed. First, the Mean Shift algorithm is adopted to preprocess the image, partitioning it into a series of homogeneous regions; using these regions as the nodes of the random walk suppresses noise while reducing the number of nodes. Second, the Mahalanobis distance is used to define the weight between regions. Third, the seeding is improved by adding auxiliary seeds; the random walk is performed from both the auxiliary and the user-marked seeds, realizing region merging and producing the final segmentation. Experimental results show that the proposed method improves segmentation accuracy.
Niazi, Ahtsham U; Tait, Gordon; Carvalho, Jose C A; Chan, Vincent W
2013-05-01
The use of ultrasound for neuraxial blockade is a new application of technology that is rapidly becoming accepted as a standard of care. This new skill has been shown to improve success, but it is a challenge to teach. To assist with teaching the use of ultrasound in regional anesthesia of the lumbar spine, we have developed an interactive educational model ( http://pie.med.utoronto.ca/vspine or http://www.usra.ca/vspine.php ). In this study, we aimed to determine whether use of this model for a two-week period would improve the performance of novice operators in determining defined landmarks during real-time ultrasound imaging of the lumbar spine. We evaluated the educational benefit of the ultrasound module by randomly assigning 16 postgraduate first-year (PGY1) anesthesia residents to either a control group with password-protected access to only the lumbar anatomy module or an intervention group with access to the complete module. All residents had access to the module for two weeks following a full-day workshop, part of the university teaching program, consisting of a didactic lecture on ultrasound-facilitated neuraxial anesthesia, mentored teaching on cadaveric spine dissections, and hands-on ultrasound scanning of live models. At the end of the two weeks, the performance of the residents was evaluated using a 12-item task-specific checklist while carrying out a scout scan on a live model. The control group had a median score of 5.5 (25th percentile: 4, 75th percentile: 18), while the intervention group had a median score of 11.5 (25th percentile: 8, 75th percentile: 12) on the task-specific checklist, with a significant difference of 6 (confidence interval 1.5 to 10.5) between groups (P = 0.021). Our results show superior performance by the residents who had access to both components of the module, indicating that access to the interactive ultrasound spine module improves knowledge and skills prior to clinical care.
ZHANG Ning-Yu; TENG Shu-Yun; SONG Hong-Sheng; LIU Gui-Yuan; CHENG Chuan-Fu
2009-01-01
A method for simultaneously extracting the parameters of self-affine fractal surfaces from a single experimental profile of scattered intensity data is proposed. The Levenberg-Marquardt algorithm is introduced to fit the theoretical equation for the scattering intensity profile to the experimental data. A precision system is designed for acquisition of scattering intensity data using the Boxcar integration technique. The extracted surface parameters (root-mean-square roughness w, lateral correlation length ξ, and roughness exponent a) are compared to those obtained using atomic force microscopy.
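The Levenberg-Marquardt loop itself is compact. The sketch below fits a generic two-parameter model with a forward-difference Jacobian and the standard damped normal equations; the paper's actual intensity equation and three-parameter fit are not reproduced here:

```python
import math

def levenberg_marquardt(f, xs, ys, p0, iters=200, lam=1e-3):
    """Tiny two-parameter Levenberg-Marquardt sketch: damp the
    Gauss-Newton normal equations, accept steps that reduce the
    sum of squared residuals, and adapt the damping factor lam."""
    p = list(p0)
    residuals = lambda p: [f(x, p) - y for x, y in zip(xs, ys)]
    sse = lambda r: sum(ri * ri for ri in r)
    r = residuals(p)
    for _ in range(iters):
        h, m = 1e-6, len(p)
        # forward-difference Jacobian J[i][j] = d r_i / d p_j
        J = []
        for x in xs:
            row = []
            for j in range(m):
                q = list(p)
                q[j] += h
                row.append((f(x, q) - f(x, p)) / h)
            J.append(row)
        # damped normal equations: (J^T J + lam*diag(J^T J)) dp = -J^T r
        JTJ = [[sum(J[i][a] * J[i][b] for i in range(len(xs))) for b in range(m)]
               for a in range(m)]
        JTr = [sum(J[i][a] * r[i] for i in range(len(xs))) for a in range(m)]
        A = [[JTJ[a][b] + (lam * JTJ[a][a] if a == b else 0.0) for b in range(m)]
             for a in range(m)]
        det = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # 2x2 solve by hand
        dp = [(-JTr[0] * A[1][1] + JTr[1] * A[0][1]) / det,
              (-JTr[1] * A[0][0] + JTr[0] * A[1][0]) / det]
        trial = [pi + di for pi, di in zip(p, dp)]
        r_trial = residuals(trial)
        if sse(r_trial) < sse(r):      # accept: trust the model more
            p, r, lam = trial, r_trial, lam * 0.5
        else:                          # reject: damp harder
            lam *= 2.0
    return p
```

Fitting the scattering profile in the paper follows the same loop with the three surface parameters (w, ξ, a) in place of the two illustrative ones.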
Hande Küçükönder
2015-02-01
This study was conducted to determine, using supervised data-mining algorithms, the effect of mechanical properties such as the maximum force at the skin rupture point, the energy at the skin rupture point, and skin firmness on the color maturity of tomato. A total of 88 tomato samples were used; color was measured at 4 equatorial regions of each tomato, giving 352 color measurements in total. For classification according to these mechanical properties, the K-Star, Random Forest and Decision Tree (C4.5) algorithms were used. In comparing the resulting classification models, the Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Root Relative Squared Error (RRSE) and Relative Absolute Error (RAE), all criteria of error variance, were required to be low, while the classification accuracy rate was required to be high. As a result of the comparison, the classification model formed by the instance-based K-Star algorithm [MAE: 0.004, RMSE: 0.006, RAE: 1.73%, RRSE: 1.70%] was found to be a better classifier than the others. According to the K-Star classification, the effects of the maximum force at the skin rupture point and of skin firmness on the degree of maturity of tomato were non-significant during the green, light red, and color-conversion periods and significant during the other periods, while the energy at the skin rupture point was significant only during the pink and color-conversion stages and non-significant during the other stages.
Ruzek, Josef I; Rosen, Raymond C; Marceau, Lisa; Larson, Mary Jo; Garvert, Donn W; Smith, Lauren; Stoddard, Anne
2012-05-14
This paper presents the rationale and methods for a randomized controlled evaluation of web-based training in motivational interviewing, goal setting, and behavioral task assignment. Web-based training may be a practical and cost-effective way to address the need for large-scale mental health training in evidence-based practice; however, there is a dearth of well-controlled outcome studies of these approaches. For the current trial, 168 mental health providers treating post-traumatic stress disorder (PTSD) were assigned to web-based training plus supervision, web-based training, or training-as-usual (control). A novel standardized patient (SP) assessment was developed and implemented for objective measurement of changes in clinical skills, while on-line self-report measures were used for assessing changes in knowledge, perceived self-efficacy, and practice related to cognitive behavioral therapy (CBT) techniques. Eligible participants were all actively involved in mental health treatment of veterans with PTSD. Study methodology illustrates ways of developing training content, recruiting participants, and assessing knowledge, perceived self-efficacy, and competency-based outcomes, and demonstrates the feasibility of conducting prospective studies of training efficacy or effectiveness in large healthcare systems.
Gander, Fabian; Proyer, René T; Ruch, Willibald
2016-01-01
Seligman (2002) suggested three paths to well-being, the pursuit of pleasure, the pursuit of meaning, and the pursuit of engagement, later adding two more, positive relationships and accomplishment, in his 2011 version. The contribution of these new components to well-being has yet to be addressed. In an online positive psychology intervention study, we randomly assigned 1624 adults aged 18-78 (M = 46.13; 79.2% women) to seven conditions. Participants wrote down three things they related to either one of the five components of Seligman's Well-Being theory (Conditions 1-5), all of the five components (Condition 6) or early childhood memories (placebo control condition). We assessed happiness (AHI) and depression (CES-D) before and after the intervention, and 1-, 3-, and 6 months afterwards. Additionally, we considered moderation effects of well-being levels at baseline. Results confirmed that all interventions were effective in increasing happiness and most ameliorated depressive symptoms. The interventions worked best for those in the middle-range of the well-being continuum. We conclude that interventions based on pleasure, engagement, meaning, positive relationships, and accomplishment are effective strategies for increasing well-being and ameliorating depressive symptoms and that positive psychology interventions are most effective for those people in the middle range of the well-being continuum.
Ma, Xin; Guo, Jing; Xiao, Ke; Sun, Xiao
2015-01-01
The prediction of RNA-binding proteins is an incredibly challenging problem in computational biology. Although great progress has been made using various machine learning approaches with numerous features, the problem is still far from being solved. In this study, we attempt to predict RNA-binding proteins directly from amino acid sequences. A novel approach, PRBP, predicts RNA-binding proteins using the information of predicted RNA-binding residues in conjunction with a random forest based method. For a given protein, we first predict its RNA-binding residues and then judge whether the protein binds RNA or not based on information from that prediction. If the protein cannot be identified by the information associated with its predicted RNA-binding residues, then a novel random forest predictor is used to determine if the query protein is an RNA-binding protein. We incorporated features of evolutionary information combined with physicochemical features (EIPP) and an amino acid composition feature to establish the random forest predictor. Feature analysis showed that EIPP contributed the most to the prediction of RNA-binding proteins. The results also showed that the information from the RNA-binding residue prediction improved the overall performance of our RNA-binding protein prediction. It is anticipated that the PRBP method will become a useful tool for identifying RNA-binding proteins. A PRBP Web server implementation is freely available at http://www.cbi.seu.edu.cn/PRBP/.
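Of the feature groups PRBP feeds to its random forest, the amino-acid composition is simple to reproduce; a sketch is given below (the EIPP evolutionary features require PSSM profiles and are not shown):

```python
def amino_acid_composition(seq):
    """20-dimensional amino-acid composition feature vector:
    the relative frequency of each standard residue in the
    protein sequence. Frequencies sum to 1 for valid sequences."""
    alphabet = "ACDEFGHIKLMNPQRSTVWY"
    n = len(seq)
    return [seq.count(a) / n for a in alphabet]
```

Vectors like this one, concatenated with the EIPP features, form the input rows on which the random forest predictor is trained.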
张月; 张奕
2012-01-01
Based on the development of an online examination system for the course "Basics of Computer Application", this paper discusses the process of using object-oriented analysis and design to establish and improve the data model of the question bank and map it to a relational model. The question bank is the core of the item-bank subsystem, a key component of the online examination system. Finally, for the system's paper-generation function, a concrete implementation of a randomized paper-generation algorithm is given.
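The randomized paper-generation step reduces to sampling without replacement against a blueprint. The bank/blueprint shapes below are illustrative assumptions, not the paper's actual schema:

```python
import random

def generate_paper(bank, blueprint, rng=random):
    """Randomly assemble an exam paper from a question bank.
    bank: {topic: [question_id, ...]}; blueprint: {topic: count}.
    Sampling without replacement keeps each paper duplicate-free."""
    paper = []
    for topic, count in blueprint.items():
        pool = bank[topic]
        if count > len(pool):
            raise ValueError(f"not enough questions for topic {topic!r}")
        paper.extend(rng.sample(pool, count))
    rng.shuffle(paper)                 # interleave topics in the final paper
    return paper
```

A production system would typically also constrain difficulty and point totals per topic, which this sketch omits.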
Double Motor Coordinated Control Based on Hybrid Genetic Algorithm and CMAC
Cao, Shaozhong; Tu, Ji
A novel hybrid cerebellar model articulation controller (CMAC) and online adaptive genetic algorithm (GA) controller is introduced to control two brushless DC motors (BLDCMs) applied in a biped robot. The genetic algorithm simulates random learning among the individuals of a group, while the CMAC simulates the self-learning of an individual. To validate the ability and superiority of the novel algorithm, experiments have been done in MATLAB/SIMULINK, and a comparison among GA, hybrid GA-CMAC and CMAC feed-forward control is given. The results prove that the torque ripple of the coordinated control system is eliminated by the hybrid GA-CMAC algorithm.
Rohrbach, F; Vesztergombi, G
1997-01-01
In the near future, computer performance will be completely determined by how long it takes to access memory. There are bottlenecks in memory latency and in memory-to-processor interface bandwidth. The IRAM initiative could be the answer, by putting the Processor-In-Memory (PIM). Starting from the massively parallel processing concept, one reaches a similar conclusion. The MPPC (Massively Parallel Processing Collaboration) project and the 8K-processor ASTRA machine (Associative String Test bench for Research \& Applications) developed at CERN \cite{kuala} can be regarded as forerunners of the IRAM concept. The computing power of the ASTRA machine, regarded as an IRAM with 64 one-bit processors on a 64$\times$64 bit-matrix memory chip, has been demonstrated by running statistical physics algorithms: one-dimensional stochastic cellular automata, as a simple model for dynamical phase transitions. As a result relevant for physics, the damage spreading of this model has been investigated.
Research on online blind source separation algorithms for time-varying systems
邓大鹏; 李骏; 丁德强; 贺翥祯
2016-01-01
Aiming at the contradiction between the convergence rate and the steady-state error of the traditional EASI algorithm, this paper presents a new adaptive-step-size EASI algorithm based on the expectation of the estimating function. To enable the algorithm to better solve blind source separation (BSS) problems under the different conditions of a time-varying system and to improve the accuracy of the separated signals, an online detection mechanism for changes in the mixing matrix is established and incorporated into the adaptive-step-size algorithm. Simulation results show that the modified algorithm improves the quality of the separated signals in the initial stage of separation, or just after the channel conditions change, separates nonzero-mean mixed signals online, and accurately estimates the number of source signals online, achieving blind source separation even when the number of sources varies during the separation process.
Underwater SLAM navigation algorithm based on random beacons
刘明雍; 董婷婷; 张立川
2015-01-01
An underwater simultaneous localization and mapping (SLAM) navigation algorithm based on random beacons is studied. Beacon navigation is extensively studied in the navigation field, but it usually requires the beacon positions to be calibrated in advance. This paper proposes a random-beacon navigation method that needs no position calibration: with beacons randomly dispersed, the SLAM method estimates the positions of the random beacons from measurements of the distance and orientation between the beacons and the autonomous underwater vehicle (AUV), thereby navigating the vehicle. The navigation accuracy is also analyzed under different beacon densities and observation errors. Simulation results show that the algorithm performs well in both convergence and positioning precision.
Chow, Sy-Miin; Lu, Zhaohua; Sherwood, Andrew; Zhu, Hongtu
2016-03-01
The past decade has evidenced the increased prevalence of irregularly spaced longitudinal data in social sciences. Clearly lacking, however, are modeling tools that allow researchers to fit dynamic models to irregularly spaced data, particularly data that show nonlinearity and heterogeneity in dynamical structures. We consider the issue of fitting multivariate nonlinear differential equation models with random effects and unknown initial conditions to irregularly spaced data. A stochastic approximation expectation-maximization algorithm is proposed and its performance is evaluated using a benchmark nonlinear dynamical systems model, namely, the Van der Pol oscillator equations. The empirical utility of the proposed technique is illustrated using a set of 24-h ambulatory cardiovascular data from 168 men and women. Pertinent methodological challenges and unresolved issues are discussed.
Kaplan, Sezgin; Rabadi, Ghaith
2013-01-01
This article addresses the aerial refuelling scheduling problem (ARSP), where a set of fighter jets (jobs) with certain ready times must be refuelled from tankers (machines) by their due dates; otherwise, they reach a low fuel level (deadline) incurring a high cost. ARSP is an identical parallel machine scheduling problem with release times and due date-to-deadline windows to minimize the total weighted tardiness. A simulated annealing (SA) and metaheuristic for randomized priority search (Meta-RaPS) with the newly introduced composite dispatching rule, apparent piecewise tardiness cost with ready times (APTCR), are applied to the problem. Computational experiments compared the algorithms' solutions to optimal solutions for small problems and to each other for larger problems. To obtain optimal solutions, a mixed integer program with a piecewise weighted tardiness objective function was solved for up to 12 jobs. The results show that Meta-RaPS performs better in terms of average relative error but SA is more efficient.
余世明; 王海青
2003-01-01
An improved generalized predictive control algorithm is presented in this paper by incorporating offline identification into online identification. Unlike existing generalized predictive control algorithms, the proposed approach divides the parameters of a predictive model into time-invariant and time-varying ones, which are treated respectively by offline and online identification algorithms. Therefore, both the reliability and accuracy of the predictive model are improved. Two simulation examples of control of a fixed bed reactor show that this new algorithm is not only reliable and stable in the case of uncertainties and abnormal disturbances, but also adaptable to slowly time-varying processes.
Online Coregularization for Multiview Semisupervised Learning
Boliang Sun
2013-01-01
We propose a novel online coregularization framework for multiview semisupervised learning based on the notion of duality in constrained optimization. Using the weak duality theorem, we reduce online coregularization to the task of increasing the dual function. We demonstrate that existing online coregularization algorithms in previous work can be viewed as approximations of our dual ascending process using gradient ascent. New algorithms are derived based on the idea of ascending the dual function more aggressively. For practical purposes, we also propose two sparse approximation approaches for the kernel representation to reduce the computational complexity. Experiments show that our derived online coregularization algorithms achieve risk and accuracy comparable to offline algorithms while consuming less time and memory. In particular, our online coregularization algorithms are able to deal with concept drift while maintaining a much smaller error rate. This paper paves the way for the design and analysis of online coregularization algorithms.
Denoising and Trend Terms Elimination Algorithm of Accelerometer Signals
Peng Zhang
2016-01-01
The acceleration-based displacement measurement approach is often used to measure the polished rod displacement in oilfield pumping wells. Random noise and trend terms in the accelerometer signals are the main factors that affect the measuring accuracy. In this paper, an efficient online learning algorithm is proposed to improve the measurement precision of polished rod displacement. To remove the random noise and eliminate the trend terms of the accelerometer signals, an ARIMA model and its parameters are first derived from the time series of acceleration sensor data. Secondly, the period of the accelerometer signal is estimated through the Rife-Jane frequency estimation approach based on the fast Fourier transform. With the obtained model and parameters, the random noise is removed by a Kalman filtering algorithm, and quadratic integration over the period yields the polished rod displacement. Moreover, a windowed recursive least squares algorithm is implemented to eliminate the trend terms. Simulation results demonstrate that the proposed online learning algorithm removes the random noise and trend terms effectively and greatly improves the measurement accuracy of the displacement.
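The pipeline above (model the signal, filter out the noise, then strip the residual trend) can be sketched in miniature. The ARIMA modelling and Rife-Jane period estimation are omitted here: a scalar random-walk Kalman filter stands in for the paper's model-based filter, and ordinary polynomial least squares stands in for the windowed RLS detrending. All signals and parameter values are synthetic illustrations, not the authors' data.

```python
import numpy as np

def kalman_denoise(z, q=0.01, r=0.04):
    """Scalar Kalman filter with a random-walk signal model.
    q: process noise variance, r: measurement noise variance."""
    x, p = z[0], 1.0
    out = np.empty_like(z)
    for i, zi in enumerate(z):
        p += q                       # predict
        k = p / (p + r)              # Kalman gain
        x += k * (zi - x)            # update with the new measurement
        p *= 1 - k
        out[i] = x
    return out

def remove_trend(x, t, deg=2):
    """Least-squares polynomial detrending (simplified stand-in for
    the paper's windowed recursive least squares)."""
    return x - np.polyval(np.polyfit(t, x, deg), t)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 4.0, 800)
clean = np.sin(2 * np.pi * 1.5 * t)        # "true" acceleration signal
trend = 0.3 * t + 0.05 * t**2              # slow drift (trend term)
noisy = clean + trend + 0.2 * rng.standard_normal(t.size)

denoised = remove_trend(kalman_denoise(noisy), t)
mse_noisy = np.mean((noisy - trend - clean) ** 2)   # raw noise power
mse_denoised = np.mean((denoised - clean) ** 2)
print(mse_noisy, mse_denoised)
```

With these synthetic settings the filtered, detrended signal tracks the clean oscillation noticeably better than the raw measurement.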
Hibert, Clément; Provost, Floriane; Malet, Jean-Philippe; Maggi, Alessia; Stumpf, André; Ferrazzini, Valérie
2017-06-01
Monitoring the endogenous seismicity of volcanoes helps to forecast eruptions and prevent their related risks, and also provides critical information on the eruptive processes. Due to the high number of events recorded during pre-eruptive periods by seismic monitoring networks, cataloging each event can be complex and time-consuming if done by human operators. Automatic seismic signal processing methods are thus essential to build consistent catalogs based on objective criteria. We evaluated the performance of the "Random Forests" (RF) machine-learning algorithm for classifying seismic signals recorded at the Piton de la Fournaise volcano, La Réunion Island (France). We focused on the discrimination of the dominant event types (rockfalls and volcano-tectonic earthquakes) using over 19,000 events covering two time periods: 2009-2011 and 2014-2015. We parametrized the seismic signals using 60 attributes that were then given to the RF algorithm. When the RF classifier was given enough training samples, its sensitivity (rate of correct identification) exceeded 99%, and its performance remained high (above 90%) even with few training samples. The sensitivity collapsed when an RF classifier trained with data from 2009-2011 was used to classify data from the 2014-2015 catalog, because the physical characteristics of the rockfalls, and hence their seismic signals, had evolved between the two time periods. The main attribute families (waveform, spectrum, spectrogram and polarization) were all found to be useful for event discrimination. Our work validates the performance of the RF algorithm and suggests it could be implemented at other volcanic observatories to perform automatic, near real-time classification of seismic events.
陈思; 苏松志; 李绍滋; 吕艳萍; 曹冬林
2014-01-01
Self-training based discriminative tracking methods use their own classification results to update the classifier. However, these methods easily suffer from drifting because classification errors accumulate during tracking. To overcome this disadvantage, a novel co-training tracking algorithm, termed Co-SemiBoost, is proposed based on online semi-supervised boosting. The proposed algorithm employs a new online co-training framework in which unlabeled samples are used to collaboratively train the classifiers built on two feature views. Moreover, the pseudo-labels and weights of unlabeled samples are iteratively predicted by combining the decisions of a prior model and an online classifier. The proposed algorithm effectively improves the discriminative ability of the classifier and is robust to occlusions, illumination changes, etc., and thus adapts well to object appearance changes. Experimental results on several challenging video sequences show that the proposed algorithm achieves promising tracking performance.
A Random Forest Algorithm Based on Training Set Splitting
吴华芹
2013-01-01
In this paper, a random forest algorithm based on training set splitting is proposed. Firstly, the majority class is divided into multiple disjoint subsets. Then each subset is combined with the minority class to train a decision tree. Finally, a random forest is constructed based on an average weighted strategy, and the final classification rules are obtained. The proposed method avoids the loss of original sample information and keeps the training set balanced for each decision tree. Experiments on artificial imbalanced data show that the method is very effective.
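A minimal sketch of the splitting idea, assuming a toy 2-D imbalanced dataset and a trivial nearest-centroid learner in place of a real decision tree (the `CentroidLearner` name and all numbers are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
# imbalanced 2-D data: 300 majority (class 0) vs 30 minority (class 1)
X_maj = rng.normal(loc=(0.0, 0.0), scale=1.0, size=(300, 2))
X_min = rng.normal(loc=(3.0, 3.0), scale=1.0, size=(30, 2))

class CentroidLearner:
    """Trivial stand-in for a decision tree: nearest class centroid."""
    def fit(self, X, y):
        self.c0 = X[y == 0].mean(axis=0)
        self.c1 = X[y == 1].mean(axis=0)
        return self
    def predict(self, X):
        d0 = np.linalg.norm(X - self.c0, axis=1)
        d1 = np.linalg.norm(X - self.c1, axis=1)
        return (d1 < d0).astype(int)

# split the majority class into disjoint subsets of minority size,
# pair each subset with the full minority class, train one learner each
k = len(X_maj) // len(X_min)
learners = []
for chunk in np.array_split(rng.permutation(X_maj), k):
    Xb = np.vstack([chunk, X_min])
    yb = np.r_[np.zeros(len(chunk), int), np.ones(len(X_min), int)]
    learners.append(CentroidLearner().fit(Xb, yb))

def forest_predict(X):
    votes = np.mean([l.predict(X) for l in learners], axis=0)
    return (votes >= 0.5).astype(int)    # equal-weight average voting

acc_min = forest_predict(X_min).mean()         # minority recall
acc_maj = 1.0 - forest_predict(X_maj).mean()   # majority recall
print(acc_min, acc_maj)
```

Each base learner sees a balanced subset, yet every majority sample is used exactly once across the ensemble, which is the paper's point about avoiding information loss.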
Hibert, Clément; Provost, Floriane; Malet, Jean-Philippe; Stumpf, André; Maggi, Alessia; Ferrazzini, Valérie
2016-04-01
In the past decades, the increasing quality of seismic sensors and the capability to transfer large quantities of data remotely have led to a fast densification of local, regional and global seismic networks for near real-time monitoring. This technological advance permits the use of seismology to document geological and natural/anthropogenic processes (volcanoes, ice calving, landslides, snow and rock avalanches, geothermal fields), but has also led to an ever-growing quantity of seismic data. This wealth of data makes the construction of complete seismicity catalogs, which include earthquakes but also other sources of seismic waves, more challenging and very time-consuming, as this critical pre-processing stage is classically done by human operators. To overcome this issue, the development of automatic methods for processing continuous seismic data appears to be a necessity. The classification algorithm must be robust, precise and versatile enough to be deployed to monitor seismicity in very different contexts. We propose a multi-class detection method based on the Random Forests algorithm to automatically classify the sources of seismic signals. Random Forests is a supervised machine-learning technique based on the computation of a large number of decision trees. The decision trees are constructed from training sets that include each of the target classes, described by a set of signal attributes. In the case of seismic signals, these attributes may encompass spectral features but also waveform characteristics, multi-station observations and other relevant information. The Random Forests classifier is used because it provides state-of-the-art performance compared with other machine-learning techniques (e.g. SVM, neural networks) and requires no fine tuning. Furthermore, it is relatively fast, robust, easy to parallelize, and inherently suitable for multi-class problems. In this work, we present the first results of the classification method applied
何昌保; 马秀丽; 余长明
2016-01-01
For dual-source CT images with contrast media, the uneven density of heart soft tissue and the uneven distribution of the contrast agent make the CT values of heart tissues inhomogeneous and the boundaries fuzzy, so a single image segmentation algorithm can hardly obtain satisfactory results. A hybrid method combining morphological reconstruction and random walks is therefore proposed in this paper. Firstly, morphological reconstruction (opening and closing operations) is used to smooth and simplify the image, which makes the gray levels of the heart cavity converge, increases the gray-level difference with the surrounding tissue, and yields a left atrium region with fuzzy boundaries. Then the random walks algorithm sets seed points for each region of the image, assigns a weight to each edge, and uses the edge weights as transition probabilities; for each unlabeled pixel, the probability of first reaching each seed point is computed. Finally, each unlabeled pixel is assigned to the class with the maximum first-arrival probability, and an accurate segmentation of the left atrium is obtained. Experiments show that the method achieves the expected results, improving segmentation speed and accuracy while requiring fewer marked seed points.
On-line methods for rotorcraft aeroelastic mode identification
Molusis, J. A.; Kleinman, D. L.
1982-01-01
The requirements for the on-line identification of rotorcraft aeroelastic blade modes from random response test data are presented. A recursive maximum likelihood (RML) technique is used in conjunction with a bandpass filter to identify isolated blade mode damping and frequency. The RML technique is demonstrated to have excellent convergence characteristics under random measurement noise and random process noise excitation. It uses an ARMA representation of the aeroelastic stochastic system and requires virtually no user interaction while providing accurate confidence bands on the parameter estimates. Comparisons are made with an off-line Newton-type maximum likelihood algorithm which uses a state variable model representation. Results are presented for simulated random response data which quantify the identified parameter convergence behavior for various levels of random excitation typical of wind tunnel turbulence. The RML technique is applied to hingeless rotor test data from the NASA Langley Research Center Helicopter Hover Facility.
Research on Parallel On-Line Analytical Processing Algorithms on Multi-Core CPUs
周国亮; 王桂兰; 朱永利
2013-01-01
Computer hardware technology has greatly developed, especially large memories and multi-core CPUs, but algorithm efficiency has not improved along with the hardware. The fundamental reasons are the insufficient utilization of the CPU cache and the limitations of single-threaded programming. In the field of OLAP (on-line analytical processing), data cube computation is an important and time-consuming operation, so improving the performance of cube computation is a difficult research point in this field. Based on the characteristics of multi-core CPUs, this paper proposes two parallel algorithms, MT-Multi-Way (multi-threading multi-way) and MT-BUC (multi-threading bottom-up computation), which exploit data partitioning and multi-thread cooperation. These algorithms avoid cache contention between threads and keep the load balanced, and thus obtain near-linear speedup. Based on these algorithms, the paper suggests a unified framework for cube computation on multi-core CPUs, covering how to partition data and handle recursion on multi-core CPUs, to guide the parallelization of cube computation.
刘森; 于赞梅; 桑天松
2014-01-01
As the traditional algorithm is sensitive to noise and computationally inefficient, this paper introduces the basic principle of the Prony algorithm and the selection strategy for its main parameters, and proposes a new Prony-based method for the on-line identification of low-frequency power oscillations between generators. The calculation steps of the method are described in terms of identifying the effective order of the model, estimating the AR parameters, and preprocessing the signal. Simulation results show that the method can meet the needs of on-line identification of low-frequency oscillations in power systems.
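The core of the classical Prony step mentioned above (fit a linear-prediction model, then read each mode's frequency and damping off the roots of its characteristic polynomial) can be sketched as follows. The paper's order selection, noise preprocessing and power-system data are omitted; the test signal is a synthetic damped 0.8 Hz mode.

```python
import numpy as np

def prony_modes(x, p, dt):
    """Classical Prony step: fit a p-th order linear-prediction model,
    then recover mode frequencies (Hz) and damping factors (1/s) from
    the roots of the characteristic polynomial."""
    N = len(x)
    # x[n] = -a1*x[n-1] - ... - ap*x[n-p], solved in least squares
    A = np.column_stack([x[p - k - 1:N - k - 1] for k in range(p)])
    a, *_ = np.linalg.lstsq(A, -x[p:], rcond=None)
    roots = np.roots(np.r_[1.0, a])
    freqs = np.angle(roots) / (2 * np.pi * dt)
    damps = np.log(np.abs(roots)) / dt
    return freqs, damps

dt = 0.01
t = np.arange(0.0, 5.0, dt)
# synthetic low-frequency oscillation: 0.8 Hz mode decaying at 0.2 1/s
x = np.exp(-0.2 * t) * np.cos(2 * np.pi * 0.8 * t)
freqs, damps = prony_modes(x, p=2, dt=dt)
f_hat = max(freqs)            # the positive-frequency mode
print(f_hat, damps[0])
```

On this noiseless signal the recursion is exact, so the estimated frequency and damping match the true mode almost to machine precision; the paper's contribution lies in making this step usable on noisy measured data.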
An algorithm based on elliptic interpolation for generating random curves
李玲; 魏玮
2012-01-01
The building of geometric models with complex shapes and structures is a core problem in computer graphics. Random curves also play an important role in domains such as computer games, movies, architectural modeling, urban planning and virtual reality. This paper studies an algorithm for randomly generating curves in a binary image. The algorithm first generates a starting point and an ending point at random, then generates an interpolation point by random interpolation inside an ellipse; taking each newly generated point and its adjacent point as a new start-end pair, further interpolation points are produced in the same way, until the whole curve, composed of many interpolation points, is obtained. To ensure that the curve converges, it is assumed that earlier interpolation points have a larger influence on the trend of the curve, while later points have a smaller influence; moreover, during interpolation, new points are only generated between adjacent interpolation points. The random curve generation algorithm was implemented in Visual C++, the results were analyzed in detail, and corresponding explanations and conclusions are given to improve the user interaction system.
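The generation loop described above can be sketched as follows. The abstract does not specify the exact ellipse construction, so this sketch assumes the two adjacent points are the foci and that the ellipse (the `spread` factor) shrinks geometrically for later insertions, matching the stated assumption that earlier points influence the curve's trend more.

```python
import numpy as np

rng = np.random.default_rng(7)

def random_point_in_ellipse(p, q, spread):
    """Sample a point uniformly inside the ellipse whose foci are p and
    q; spread (> 1) scales the semi-major axis relative to |p - q|/2."""
    d = q - p
    c = np.linalg.norm(d) / 2             # half focal distance
    a = spread * c                         # semi-major axis
    b = np.sqrt(max(a * a - c * c, 0.0))   # semi-minor axis
    r, th = np.sqrt(rng.random()), 2 * np.pi * rng.random()
    local = np.array([a * r * np.cos(th), b * r * np.sin(th)])
    ang = np.arctan2(d[1], d[0])
    R = np.array([[np.cos(ang), -np.sin(ang)],
                  [np.sin(ang),  np.cos(ang)]])
    return (p + q) / 2 + R @ local

def random_curve(p, q, depth, spread=1.3, decay=0.85):
    """Recursively insert one interpolation point between each pair of
    adjacent points; later insertions use a flatter ellipse (decay < 1)
    so the earliest points dominate the curve's overall trend."""
    if depth == 0:
        return [p, q]
    m = random_point_in_ellipse(p, q, spread)
    nxt = 1 + (spread - 1) * decay
    left = random_curve(p, m, depth - 1, nxt, decay)
    right = random_curve(m, q, depth - 1, nxt, decay)
    return left + right[1:]               # m is shared by both halves

start, end = np.array([0.0, 0.0]), np.array([10.0, 0.0])
curve = random_curve(start, end, depth=6)
print(len(curve))                          # 2**6 + 1 points
```

Because the ellipse always contains the segment between its foci and shrinks with depth, the inserted points stay near the emerging curve, which is what makes the recursion converge.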
Random neural Q-learning for obstacle avoidance of a mobile robot in unknown environments
Jing Yang
2016-07-01
The article presents a random neural Q-learning strategy for the obstacle avoidance problem of an autonomous mobile robot in unknown environments. In the proposed strategy, two independent modules, namely obstacle avoidance without considering the target and goal-seeking without considering obstacles, are first trained using the proposed random neural Q-learning algorithm to obtain their best control policies. The two trained modules are then combined via a switching function to realize obstacle avoidance in unknown environments. In the proposed algorithm, a single-hidden-layer feedforward network is used to approximate the Q-function. The network parameters are modified using the recently proposed online sequential version of the extreme learning machine (OS-ELM), in which the hidden-node parameters are assigned randomly and the sample data can arrive one by one. Unlike the original OS-ELM algorithm, however, the initial output weights are estimated subject to a quadratic inequality constraint to improve the convergence speed. Finally, simulation results demonstrate that the proposed random neural Q-learning strategy successfully solves the obstacle avoidance problem, and that it achieves higher learning efficiency and better generalization than Q-learning based on back-propagation.
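The function-approximation core (a single-hidden-layer network with random, fixed hidden weights and output weights solved in closed form) can be sketched as below. This is the plain batch ELM: the paper's online sequential update and quadratic-inequality-constrained initialization are omitted, and the target function is a made-up stand-in for a Q-function.

```python
import numpy as np

rng = np.random.default_rng(1)

class ELM:
    """Single-hidden-layer feedforward net with a random, fixed hidden
    layer; only the output weights are learned, in closed form."""
    def __init__(self, n_in, n_hidden, reg=1e-3):
        self.W = rng.standard_normal((n_in, n_hidden))  # never trained
        self.b = rng.standard_normal(n_hidden)
        self.reg = reg
    def hidden(self, X):
        return np.tanh(X @ self.W + self.b)
    def fit(self, X, y):
        H = self.hidden(X)
        # ridge-regularized least squares for the output weights
        self.beta = np.linalg.solve(
            H.T @ H + self.reg * np.eye(H.shape[1]), H.T @ y)
        return self
    def predict(self, X):
        return self.hidden(X) @ self.beta

# fit a made-up stand-in for a Q-function over a 2-D (state, action) input
X = rng.uniform(-1, 1, size=(500, 2))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2
model = ELM(n_in=2, n_hidden=100).fit(X, y)
mse = np.mean((model.predict(X) - y) ** 2)
print(mse)
```

The closed-form solve is what makes ELM-style training fast; OS-ELM replaces it with a recursive least-squares update so samples can be absorbed one at a time, as the abstract describes.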
Online Vertex-Weighted Bipartite Matching and Single-bid Budgeted Allocations
Aggarwal, Gagan; Karande, Chinmay; Mehta, Aranyak
2010-01-01
We study the following vertex-weighted online bipartite matching problem: $G(U, V, E)$ is a bipartite graph. The vertices in $U$ have weights and are known ahead of time, while the vertices in $V$ arrive online in an arbitrary order and have to be matched upon arrival. The goal is to maximize the sum of weights of the matched vertices in $U$. When all the weights are equal, this reduces to the classic \\emph{online bipartite matching} problem for which Karp, Vazirani and Vazirani gave an optimal $\\left(1-\\frac{1}{e}\\right)$-competitive algorithm in their seminal work~\\cite{KVV90}. Our main result is an optimal $\\left(1-\\frac{1}{e}\\right)$-competitive randomized algorithm for general vertex weights. We use \\emph{random perturbations} of weights by appropriately chosen multiplicative factors. Our solution constitutes the first known generalization of the algorithm in~\\cite{KVV90} in this model and provides new insights into the role of randomization in online allocation problems. It also effectively solves the p...
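A sketch of the multiplicative-perturbation idea on a tiny made-up instance: each offline vertex $u$ draws $y_u \sim U[0,1]$ once, and each arriving online vertex greedily picks the free neighbor with the largest perturbed weight $w_u(1 - e^{y_u - 1})$. The perturbation function follows the form analyzed for this problem; the instance data and names below are invented for illustration, and this greedy skeleton omits the competitive analysis entirely.

```python
import math
import random

def perturbed_greedy_matching(weights, neighbors, arrivals, seed=0):
    """Online vertex-weighted bipartite matching with multiplicative
    random perturbations: draw y_u ~ U[0,1] per offline vertex once,
    then match each arriving vertex to the free neighbor maximizing
    the perturbed weight w_u * (1 - exp(y_u - 1))."""
    rng = random.Random(seed)
    perturbed = {u: w * (1 - math.exp(rng.random() - 1))
                 for u, w in weights.items()}
    matched = {}            # online vertex -> offline vertex
    for v in arrivals:
        free = [u for u in neighbors[v] if u not in matched.values()]
        if free:
            matched[v] = max(free, key=lambda u: perturbed[u])
    return matched

weights = {'u1': 10, 'u2': 1, 'u3': 5}           # hypothetical instance
neighbors = {'v1': ['u1', 'u2'], 'v2': ['u1', 'u3'], 'v3': ['u2']}
m = perturbed_greedy_matching(weights, neighbors, ['v1', 'v2', 'v3'])
total = sum(weights[u] for u in m.values())
print(m, total)
```

Note how the perturbation breaks ties consistently across arrivals: a plain greedy on raw weights is only 1/2-competitive in the worst case, and the one-shot randomization is what lifts the guarantee to $1-\frac{1}{e}$.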
Yan, Dong; de Beurs, Kirsten M.
2016-05-01
The objective of this paper is to demonstrate a new method to map the distributions of C3 and C4 grasses at 30 m resolution and over a 25-year period of time (1988-2013) by combining the Random Forest (RF) classification algorithm and patch stable areas identified using the spatial pattern analysis software FRAGSTATS. Predictor variables for RF classifications consisted of ten spectral variables, four soil edaphic variables and three topographic variables. We provided a confidence score in terms of obtaining pure land cover at each pixel location by retrieving the classification tree votes. Classification accuracy assessments and predictor variable importance evaluations were conducted based on a repeated stratified sampling approach. Results show that patch stable areas obtained from larger patches are more appropriate to be used as sample data pools to train and validate RF classifiers for historical land cover mapping purposes and it is more reasonable to use patch stable areas as sample pools to map land cover in a year closer to the present rather than years further back in time. The percentage of obtained high confidence prediction pixels across the study area ranges from 71.18% in 1988 to 73.48% in 2013. The repeated stratified sampling approach is necessary in terms of reducing the positive bias in the estimated classification accuracy caused by the possible selections of training and validation pixels from the same patch stable areas. The RF classification algorithm was able to identify the important environmental factors affecting the distributions of C3 and C4 grasses in our study area such as elevation, soil pH, soil organic matter and soil texture.
Cameron, David; Epton, Tracy; Norman, Paul; Sheeran, Paschal; Harris, Peter R; Webb, Thomas L; Julious, Steven A; Brennan, Alan; Thomas, Chloe; Petroczi, Andrea; Naughton, Declan; Shah, Iltaf
2015-12-07
This paper reports the results of a repeat trial assessing the effectiveness of an online theory-based intervention to promote healthy lifestyle behaviours in new university students. The original trial found that the intervention reduced the number of smokers at 6-month follow-up compared with the control condition, but had non-significant effects on the other targeted health behaviours. However, the original trial suffered from low levels of engagement, which the repeat trial sought to rectify. Three weeks before starting university, all incoming undergraduate students at a large university in the UK were sent an email inviting them to participate in the study. After completing a baseline questionnaire, participants were randomly allocated to intervention or control conditions. The intervention consisted of a self-affirmation manipulation, health messages based on the theory of planned behaviour, and implementation intention tasks. Participants were followed up 1 and 6 months after starting university. The primary outcome measures were portions of fruit and vegetables consumed, physical activity levels, units of alcohol consumed and smoking status at 6-month follow-up. The study recruited 2,621 students (intervention n=1346, control n=1275), of whom 1495 completed at least one follow-up (intervention n=696, control n=799). Intention-to-treat analyses indicated that the intervention had a non-significant effect on the primary outcomes, although the effect on fruit and vegetable intake was significant in the per-protocol analyses. Secondary analyses revealed that the intervention had significant effects on having smoked at university (self-report) and on a biochemical marker of alcohol use. Despite successfully increasing levels of engagement, the intervention did not have a significant effect on the primary outcome measures. The relatively weak effects of the intervention, found in both the original and repeat trials, may be due to the focus on
彭志红; 孙琳; 陈杰
2012-01-01
An improved differential evolution algorithm is proposed for solving the online path planning problem of unmanned aerial vehicle (UAV) low-altitude penetration in partially known hostile environments. The algorithm adopts a von Neumann topology and extends its structure to maintain the diversity of the population, preventing it from falling into local optima early in the evolution while speeding up convergence later. The mutation operator of differential evolution is improved to accelerate convergence so that the optimal solution of the multi-objective optimization problem can be found quickly, and a coding method combining absolute Cartesian coordinates with relative polar coordinates is used to improve search efficiency. Simulation experiments on online path planning for UAV low-altitude penetration show that the proposed algorithm outperforms the unimproved differential evolution algorithm.
Polyceptron: A Polyhedral Learning Algorithm
Manwani, Naresh
2011-01-01
In this paper we propose a new algorithm for learning polyhedral classifiers, which we call Polyceptron. It is a Perceptron-like algorithm that updates its parameters only when the current classifier misclassifies a training example. We give both batch and online versions of the Polyceptron algorithm. Finally, we give experimental results to show the effectiveness of our approach.
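The mistake-driven update at the heart of the method can be sketched with a single hyperplane, i.e. the classic online Perceptron that Polyceptron generalizes: the polyhedral version maintains several hyperplanes and applies this style of update to the hyperplane responsible for the mistake. The data below is synthetic and the single-hyperplane simplification is ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(3)

def perceptron_online(X, y, max_epochs=1000, lr=1.0):
    """Classic mistake-driven online Perceptron: the parameters change
    only when the current classifier misclassifies an example."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(max_epochs):
        mistakes = 0
        for x, t in zip(X, y):
            if t * (w @ x + b) <= 0:    # mistake-driven update
                w += lr * t * x
                b += lr * t
                mistakes += 1
        if mistakes == 0:               # training set perfectly separated
            break
    return w, b

# linearly separable toy data with an enforced margin
X = rng.uniform(-3, 3, size=(300, 2))
score = X[:, 0] - 0.5 * X[:, 1] - 0.4
keep = np.abs(score) > 0.5
X, y = X[keep], np.sign(score[keep])
w, b = perceptron_online(X, y)
acc = np.mean(np.sign(X @ w + b) == y)
print(acc)
```

On margin-separated data the classical mistake bound guarantees this loop terminates with zero training errors; a polyhedral classifier labels a point positive only when every one of its hyperplanes does, which is what the full Polyceptron update has to account for.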
屈耀红; 肖自兵; 袁冬莉
2012-01-01
To shorten the flight time of a UAV, we propose an algorithm for online UAV flight path planning under battlefield threats. Sections 1 through 3 of the full paper explain our method of flight path planning, which we believe is better than the existing ones and whose core is: the UAV estimates the wind field information online during the flight using our proposed method; the choice of the nodes to expand in the A* search algorithm then takes the wind direction into account, and the cost function is defined as the flight time. Section 3, entitled "Method of UAV Flight Path Planning On-Line in Wind Field Using an Improved A* Search Algorithm", is for convenience divided into four sub-sections: 3.1, 3.2, 3.3 and 3.4. Simulation results, presented in Figs. 4 and 5 and Table 2, and their analysis show preliminarily that the flight time is indeed less than that obtained with the traditional method based on minimizing the length of the flight path.
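The costing idea (expand A* nodes by flight time through the wind field rather than by path length) can be sketched on a small grid. The paper's threat model and online wind estimation are omitted, and all speeds, grid sizes and the `astar_wind` interface are invented for illustration.

```python
import heapq
import math
from itertools import count

def astar_wind(grid_w, grid_h, start, goal, wind, airspeed=2.0):
    """A* on a grid where an edge's cost is its traversal TIME:
    ground speed = airspeed + projection of the wind onto the motion
    direction, so downwind moves are cheaper."""
    def time_cost(dx, dy):
        dist = math.hypot(dx, dy)
        tail = (wind[0] * dx + wind[1] * dy) / dist  # tailwind component
        return dist / max(0.1, airspeed + tail)

    def h(p):  # admissible and consistent: straight-line time at top speed
        return math.hypot(goal[0] - p[0], goal[1] - p[1]) \
               / (airspeed + math.hypot(*wind))

    moves = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
             if (dx, dy) != (0, 0)]
    tie = count()                        # tie-breaker for the heap
    open_q = [(h(start), next(tie), 0.0, start, None)]
    came, g = {}, {start: 0.0}
    while open_q:
        _, _, gc, node, parent = heapq.heappop(open_q)
        if node in came:
            continue
        came[node] = parent
        if node == goal:
            break
        for dx, dy in moves:
            nxt = (node[0] + dx, node[1] + dy)
            if not (0 <= nxt[0] < grid_w and 0 <= nxt[1] < grid_h):
                continue
            ng = gc + time_cost(dx, dy)
            if ng < g.get(nxt, math.inf):
                g[nxt] = ng
                heapq.heappush(open_q, (ng + h(nxt), next(tie), ng, nxt, node))
    path, node = [], goal
    while node is not None:              # walk the parent links back
        path.append(node)
        node = came[node]
    return path[::-1], g[goal]

path, t_tail = astar_wind(20, 20, (0, 0), (19, 0), wind=(1.0, 0.0))
_, t_head = astar_wind(20, 20, (0, 0), (19, 0), wind=(-1.0, 0.0))
print(len(path), t_tail, t_head)
```

The two runs traverse the same straight corridor, yet the tailwind plan costs a third of the headwind plan, illustrating why a time-based cost function produces different (and shorter-in-time) plans than a distance-based one.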