WorldWideScience

Sample records for randomized online algorithms

  1. An Improved Fast Compressive Tracking Algorithm Based on Online Random Forest Classifier

    Directory of Open Access Journals (Sweden)

    Xiong Jintao

    2016-01-01

    Full Text Available The fast compressive tracking (FCT) algorithm is a simple and efficient algorithm proposed in recent years, but it has difficulty coping with factors such as occlusion, appearance changes, and pose variation. There are two reasons for this: first, although the naive Bayes classifier is fast to train, it is not robust to noise; second, its parameters must be tuned to each particular environment for accurate tracking. In this paper, we propose an improved fast compressive tracking algorithm based on an online random forest (FCT-ORF) for robust visual tracking. First, we combine ideas from adaptive compressive sensing theory, using a weighted random projection to exploit both local and discriminative information of the object. Second, we use an online random forest classifier for online tracking, which adapts more robustly to noise and has high computational efficiency. The experimental results show that the proposed algorithm performs better under occlusion, appearance changes, and pose variation than the fast compressive tracking algorithm.
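
    A minimal sketch of the compressive-sensing style measurement step that FCT-type trackers build on (the adaptive, weighted projection proposed in the paper is not reproduced; function names and parameters are illustrative):

      import numpy as np

      def sparse_random_projection(n_features, n_dims, density=0.1,
                                   rng=np.random.default_rng()):
          """Very sparse random matrix that maps a high-dimensional (e.g.
          Haar-like) feature vector of an image patch to a low-dimensional
          compressed feature vector, as used in compressive tracking."""
          R = np.zeros((n_dims, n_features))
          mask = rng.random(R.shape) < density      # keep roughly `density` of the entries
          R[mask] = rng.choice([-1.0, 1.0], size=mask.sum())
          return R

      # compressed = sparse_random_projection(10000, 50) @ patch_features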

  2. An algorithm for online optimization of accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Xiaobiao [SLAC National Accelerator Lab., Menlo Park, CA (United States); Corbett, Jeff [SLAC National Accelerator Lab., Menlo Park, CA (United States); Safranek, James [SLAC National Accelerator Lab., Menlo Park, CA (United States); Wu, Juhao [SLAC National Accelerator Lab., Menlo Park, CA (United States)

    2013-10-01

    We developed a general algorithm for online optimization of accelerator performance, i.e., online tuning, using the performance measure as the objective function. This method, named robust conjugate direction search (RCDS), combines the conjugate direction set approach of Powell's method with a robust line optimizer which considers the random noise in bracketing the minimum and uses parabolic fit of data points that uniformly sample the bracketed zone. Moreover, it is much more robust against noise than traditional algorithms and is therefore suitable for online application. Simulation and experimental studies have been carried out to demonstrate the strength of the new algorithm.
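
    The abstract describes the noise-tolerant line optimizer only at a high level; the following sketch (our own, hypothetical function names, not code from the paper) illustrates the basic idea of uniformly sampling a bracketed interval and moving to the minimum of a least-squares parabolic fit:

      import numpy as np

      def noisy_parabolic_line_min(f, x, direction, lo, hi, n_samples=10):
          """One noise-tolerant 1-D minimization step: sample the bracketed
          interval [lo, hi] along `direction`, fit a parabola to the noisy
          objective values and move to the fitted minimizer (clipped to the
          bracket). Falls back to the best sample if the fit is not convex."""
          alphas = np.linspace(lo, hi, n_samples)
          values = np.array([f(x + a * direction) for a in alphas])
          c2, c1, c0 = np.polyfit(alphas, values, 2)    # least-squares parabola
          if c2 <= 0:
              a_star = alphas[np.argmin(values)]
          else:
              a_star = np.clip(-c1 / (2.0 * c2), lo, hi)
          return x + a_star * direction

      # toy usage: minimize a noisy quadratic along the first coordinate
      f = lambda z: (z[0] - 1.0) ** 2 + 0.01 * np.random.randn()
      x_new = noisy_parabolic_line_min(f, np.zeros(2), np.array([1.0, 0.0]), -2.0, 2.0)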

  3. On the efficiency of a randomized mirror descent algorithm in online optimization problems

    Science.gov (United States)

    Gasnikov, A. V.; Nesterov, Yu. E.; Spokoiny, V. G.

    2015-04-01

    A randomized online version of the mirror descent method is proposed. It differs from the existing versions by the randomization method. Randomization is performed at the stage of the projection of a subgradient of the function being optimized onto the unit simplex rather than at the stage of the computation of a subgradient, which is common practice. As a result, a componentwise subgradient descent with a randomly chosen component is obtained, which admits an online interpretation. This observation, for example, has made it possible to uniformly interpret results on weighting expert decisions and propose the most efficient method for searching for an equilibrium in a zero-sum two-person matrix game with sparse matrix.
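
    For intuition only, here is a generic coordinate-sampled entropic mirror descent step on the unit simplex (a hedged illustration of how randomization can turn mirror descent into componentwise updates; it is not claimed to be the authors' exact randomization scheme):

      import numpy as np

      def randomized_md_step(x, subgrad, eta, rng):
          """Entropic mirror descent on the simplex using a single randomly
          chosen component of the subgradient, rescaled so that the estimate
          stays unbiased; x is a probability vector."""
          n = x.size
          i = rng.integers(n)                    # randomly chosen component
          g_hat = np.zeros(n)
          g_hat[i] = n * subgrad[i]              # unbiased subgradient estimate
          w = x * np.exp(-eta * g_hat)           # multiplicative-weights update
          return w / w.sum()                     # stay on the unit simplex

      rng = np.random.default_rng(0)
      x = np.full(4, 0.25)
      x = randomized_md_step(x, np.array([1.0, 0.5, 0.2, 0.8]), eta=0.1, rng=rng)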

  4. Seismic noise attenuation using an online subspace tracking algorithm

    Science.gov (United States)

    Zhou, Yatong; Li, Shuhua; Zhang, Dong; Chen, Yangkang

    2018-02-01

    We propose a new low-rank based noise attenuation method using an efficient algorithm for tracking subspaces from highly corrupted seismic observations. The subspace tracking algorithm requires only basic linear algebraic manipulations. The algorithm is derived by analysing incremental gradient descent on the Grassmannian manifold of subspaces. When the multidimensional seismic data are mapped to a low-rank space, the subspace tracking algorithm can be directly applied to the input low-rank matrix to estimate the useful signals. Since the subspace tracking algorithm is an online algorithm, it is more robust to random noise than the traditional truncated singular value decomposition (TSVD) based subspace tracking algorithm. Compared with state-of-the-art algorithms, the proposed denoising method obtains better performance. More specifically, the proposed method outperforms the TSVD-based singular spectrum analysis method, leaving less residual noise while requiring only half the computational cost. Several synthetic and field data examples with different levels of complexity demonstrate the effectiveness and robustness of the presented algorithm in rejecting different types of noise including random noise, spiky noise, blending noise, and coherent noise.
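
    A well-known example of incremental gradient descent on the Grassmannian is the GROUSE update; the sketch below shows a GROUSE-style rank-one step for one incoming data vector (given as a generic illustration of online subspace tracking, not as the exact update derived in the paper):

      import numpy as np

      def grouse_style_update(U, v, step=0.1):
          """One incremental step on the Grassmannian. U is n x d with
          orthonormal columns spanning the current signal subspace, v is a
          new (noisy) data vector, e.g. one column of the low-rank matrix."""
          w = U.T @ v                      # coefficients of v in the subspace
          p = U @ w                        # projection of v onto the subspace
          r = v - p                        # residual orthogonal to the subspace
          nr, npr, nw = np.linalg.norm(r), np.linalg.norm(p), np.linalg.norm(w)
          if nr < 1e-12 or nw < 1e-12:
              return U                     # v already lies in the subspace
          theta = step * nr * npr
          return U + ((np.cos(theta) - 1.0) * p / npr
                      + np.sin(theta) * r / nr)[:, None] @ (w / nw)[None, :]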

  5. Comparing Online Algorithms for Bin Packing Problems

    DEFF Research Database (Denmark)

    Epstein, Leah; Favrholdt, Lene Monrad; Kohrt, Jens Svalgaard

    2012-01-01

    The relative worst-order ratio is a measure of the quality of online algorithms. In contrast to the competitive ratio, this measure compares two online algorithms directly instead of using an intermediate comparison with an optimal offline algorithm. In this paper, we apply the relative worst-order ratio to online algorithms for several common variants of the bin packing problem. We mainly consider pairs of algorithms that are not distinguished by the competitive ratio and show that the relative worst-order ratio prefers the intuitively better algorithm of each pair.

  6. Online co-regularized algorithms

    NARCIS (Netherlands)

    Ruijter, T. de; Tsivtsivadze, E.; Heskes, T.

    2012-01-01

    We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks

  7. An algorithm for on-line price discrimination

    NARCIS (Netherlands)

    D.D.B. van Bragt; D.J.A. Somefun (Koye); E. Kutschinski; J.A. La Poutré (Han)

    2002-01-01

    textabstractThe combination of on-line dynamic pricing with price discrimination can be very beneficial for firms operating on the Internet. We therefore develop an on-line dynamic pricing algorithm that can adjust the price schedule for a good or service on behalf of a firm. This algorithm (a

  8. Online Algorithms for Parallel Job Scheduling and Strip Packing

    NARCIS (Netherlands)

    Hurink, Johann L.; Paulus, J.J.

    We consider the online scheduling problem of parallel jobs on parallel machines, $P|online{-}list, m_j|C_{max}$. For this problem we present a 6.6623-competitive algorithm. This improves the best known 7-competitive algorithm for this problem. The presented algorithm also applies to the problem

  9. Quantum random-walk search algorithm

    International Nuclear Information System (INIS)

    Shenvi, Neil; Whaley, K. Birgitta; Kempe, Julia

    2003-01-01

    Quantum random walks on graphs have been shown to display many interesting properties, including exponentially fast hitting times when compared with their classical counterparts. However, it is still unclear how to use these novel properties to gain an algorithmic speedup over classical algorithms. In this paper, we present a quantum search algorithm based on the quantum random-walk architecture that provides such a speedup. It will be shown that this algorithm performs an oracle search on a database of N items with O(√(N)) calls to the oracle, yielding a speedup similar to other quantum search algorithms. It appears that the quantum random-walk formulation has considerable flexibility, presenting interesting opportunities for development of other, possibly novel quantum algorithms

  10. A fast and accurate online sequential learning algorithm for feedforward networks.

    Science.gov (United States)

    Liang, Nan-Ying; Huang, Guang-Bin; Saratchandran, P; Sundararajan, N

    2006-11-01

    In this paper, we develop an online sequential learning algorithm for single hidden layer feedforward networks (SLFNs) with additive or radial basis function (RBF) hidden nodes in a unified framework. The algorithm is referred to as online sequential extreme learning machine (OS-ELM) and can learn data one-by-one or chunk-by-chunk (a block of data) with fixed or varying chunk size. The activation functions for additive nodes in OS-ELM can be any bounded nonconstant piecewise continuous functions and the activation functions for RBF nodes can be any integrable piecewise continuous functions. In OS-ELM, the parameters of hidden nodes (the input weights and biases of additive nodes or the centers and impact factors of RBF nodes) are randomly selected and the output weights are analytically determined based on the sequentially arriving data. The algorithm uses the ideas of ELM of Huang et al. developed for batch learning which has been shown to be extremely fast with generalization performance better than other batch training methods. Apart from selecting the number of hidden nodes, no other control parameters have to be manually chosen. Detailed performance comparison of OS-ELM is done with other popular sequential learning algorithms on benchmark problems drawn from the regression, classification and time series prediction areas. The results show that the OS-ELM is faster than the other sequential algorithms and produces better generalization performance.
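
    The chunk-by-chunk update in OS-ELM is a recursive least-squares style recursion on the output weights; a minimal sketch (initialization from a first full-rank batch is indicated in the comment):

      import numpy as np

      def os_elm_update(beta, P, H, T):
          """Fold one chunk into the OS-ELM solution. H: hidden-layer outputs
          for the chunk (chunk_size x n_hidden), T: targets, beta: current
          output weights, P: current inverse correlation matrix."""
          K = np.linalg.inv(np.eye(H.shape[0]) + H @ P @ H.T)
          P = P - P @ H.T @ K @ H @ P
          beta = beta + P @ H.T @ (T - H @ beta)
          return beta, P

      # Initialization from an initial batch (H0, T0), assuming full column rank:
      # P0 = np.linalg.inv(H0.T @ H0); beta0 = P0 @ H0.T @ T0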

  11. On-line Certification for All: The PINVOX Algorithm

    Directory of Open Access Journals (Sweden)

    E Canessa

    2012-09-01

    Full Text Available A prototype algorithm, PINVOX ("Personal Identification Number by Voice"), for on-line certification is introduced to guarantee that scholars have followed, i.e., listened to and watched, a complete recorded lecture, with the option of earning a certificate or diploma of completion after remotely attending courses. It is based on the injection of unique, randomly selected and pre-recorded integer numbers (or single letters or words) within the audio trace of a video stream at places where silence is automatically detected. The certificate of completion or "virtual attendance" is generated on-the-fly after the successful identification of the embedded PINVOX code by the student viewing the video.
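
    The key signal-processing step is locating silences long enough to host an injected PIN snippet; a rough energy-threshold sketch (frame sizes and thresholds are illustrative, not taken from the paper):

      import numpy as np

      def find_silence(audio, sr, frame_ms=20, rms_threshold=0.01, min_silence_ms=400):
          """Return start times (seconds) of low-energy runs at least
          min_silence_ms long, i.e. candidate insertion points for PIN audio."""
          frame = int(sr * frame_ms / 1000)
          n_frames = len(audio) // frame
          rms = np.array([np.sqrt(np.mean(audio[i*frame:(i+1)*frame] ** 2))
                          for i in range(n_frames)])
          silent = rms < rms_threshold
          starts, run = [], 0
          for i, s in enumerate(silent):
              run = run + 1 if s else 0
              if run * frame_ms >= min_silence_ms:
                  starts.append((i - run + 1) * frame_ms / 1000.0)
                  run = 0
          return starts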

  12. Universal Algorithm for Online Trading Based on the Method of Calibration

    OpenAIRE

    V'yugin, Vladimir; Trunov, Vladimir

    2012-01-01

    We present a universal algorithm for online trading in Stock Market which performs asymptotically at least as good as any stationary trading strategy that computes the investment at each step using a fixed function of the side information that belongs to a given RKHS (Reproducing Kernel Hilbert Space). Using a universal kernel, we extend this result for any continuous stationary strategy. In this learning process, a trader rationally chooses his gambles using predictions made by a randomized ...

  13. SOL: A Library for Scalable Online Learning Algorithms

    OpenAIRE

    Wu, Yue; Hoi, Steven C. H.; Liu, Chenghao; Lu, Jing; Sahoo, Doyen; Yu, Nenghai

    2016-01-01

    SOL is an open-source library for scalable online learning algorithms, and is particularly suitable for learning with high-dimensional data. The library provides a family of regular and sparse online learning algorithms for large-scale binary and multi-class classification tasks with high efficiency, scalability, portability, and extensibility. SOL was implemented in C++, and provided with a collection of easy-to-use command-line tools, python wrappers and library calls for users and develope...

  14. Algorithmic randomness and physical entropy

    International Nuclear Information System (INIS)

    Zurek, W.H.

    1989-01-01

    Algorithmic randomness provides a rigorous, entropylike measure of disorder of an individual, microscopic, definite state of a physical system. It is defined by the size (in binary digits) of the shortest message specifying the microstate uniquely up to the assumed resolution. Equivalently, algorithmic randomness can be expressed as the number of bits in the smallest program for a universal computer that can reproduce the state in question (for instance, by plotting it with the assumed accuracy). In contrast to the traditional definitions of entropy, algorithmic randomness can be used to measure disorder without any recourse to probabilities. Algorithmic randomness is typically very difficult to calculate exactly but relatively easy to estimate. In large systems, probabilistic ensemble definitions of entropy (e.g., coarse-grained entropy of Gibbs and Boltzmann's entropy H=lnW, as well as Shannon's information-theoretic entropy) provide accurate estimates of the algorithmic entropy of an individual system or its average value for an ensemble. One is thus able to rederive much of thermodynamics and statistical mechanics in a setting very different from the usual. Physical entropy, I suggest, is a sum of (i) the missing information measured by Shannon's formula and (ii) of the algorithmic information content---algorithmic randomness---present in the available data about the system. This definition of entropy is essential in describing the operation of thermodynamic engines from the viewpoint of information gathering and using systems. These Maxwell demon-type entities are capable of acquiring and processing information and therefore can ''decide'' on the basis of the results of their measurements and computations the best strategy for extracting energy from their surroundings. From their internal point of view the outcome of each measurement is definite
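
    In symbols (our paraphrase of the definition given in the abstract), the physical entropy attributed to a system given the available data d combines the two contributions as

      S(d) = H(d) + K(d),   with   H(d) = -sum_x p(x|d) log2 p(x|d),

    where H(d) is the remaining Shannon ignorance about the microstate x and K(d) is the algorithmic information content (algorithmic randomness) of the data d itself.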

  15. Online learning algorithm for ensemble of decision rules

    KAUST Repository

    Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2011-01-01

    We describe an online learning algorithm that builds a system of decision rules for a classification problem. Rules are constructed according to the minimum description length principle by a greedy algorithm or using the dynamic programming approach

  16. Online learning algorithm for ensemble of decision rules

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    We describe an online learning algorithm that builds a system of decision rules for a classification problem. Rules are constructed according to the minimum description length principle by a greedy algorithm or using the dynamic programming approach. © 2011 Springer-Verlag.

  17. The relationship between randomness and power-law distributed move lengths in random walk algorithms

    Science.gov (United States)

    Sakiyama, Tomoko; Gunji, Yukio-Pegio

    2014-05-01

    Recently, we proposed a new random walk algorithm, termed the REV algorithm, in which the agent alters the directional rule that governs it using the most recent four random numbers. Here, we examined how a non-bounded number, i.e., "randomness" regarding move direction, was important for optimal searching and power-law distributed step lengths in rule change. We proposed two algorithms: the REV and REV-bounded algorithms. In the REV algorithm, one of the four random numbers used to change the rule is non-bounded. In contrast, all four random numbers in the REV-bounded algorithm are bounded. We showed that the REV algorithm exhibited more consistent power-law distributed step lengths and flexible searching behavior.

  18. Dynamically Predicting the Quality of Service: Batch, Online, and Hybrid Algorithms

    Directory of Open Access Journals (Sweden)

    Ya Chen

    2017-01-01

    Full Text Available This paper studies the problem of dynamically modeling the quality of web service. The philosophy of designing practical web service recommender systems is delivered in this paper. A general system architecture for such systems continuously collects the user-service invocation records and includes both an online training module and an offline training module for quality prediction. In addition, we introduce matrix factorization-based online and offline training algorithms based on the gradient descent algorithms and demonstrate the fitness of this online/offline algorithm framework to the proposed architecture. The superiority of the proposed model is confirmed by empirical studies on a real-life quality of web service data set and comparisons with existing web service recommendation algorithms.
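
    A minimal sketch of one online training step for matrix-factorization-based QoS prediction via gradient descent (variable names and hyperparameters are illustrative; the paper's exact model and regularization are not reproduced):

      import numpy as np

      def mf_online_update(U, S, user, service, qos, lr=0.01, reg=0.02):
          """Update user and service latent factors from one newly observed
          user-service invocation record with measured QoS value `qos`."""
          err = qos - U[user] @ S[service]          # prediction error
          u_old = U[user].copy()
          U[user] += lr * (err * S[service] - reg * U[user])
          S[service] += lr * (err * u_old - reg * S[service])
          return U, S

      # The offline (batch) module would loop such updates over stored records for
      # several epochs, while the online module applies them as records arrive.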

  19. Decoherence in optimized quantum random-walk search algorithm

    International Nuclear Information System (INIS)

    Zhang Yu-Chao; Bao Wan-Su; Wang Xiang; Fu Xiang-Qun

    2015-01-01

    This paper investigates the effects of decoherence generated by broken-link-type noise in the hypercube on an optimized quantum random-walk search algorithm. When the hypercube occurs with random broken links, the optimized quantum random-walk search algorithm with decoherence is depicted through defining the shift operator which includes the possibility of broken links. For a given database size, we obtain the maximum success rate of the algorithm and the required number of iterations through numerical simulations and analysis when the algorithm is in the presence of decoherence. Then the computational complexity of the algorithm with decoherence is obtained. The results show that the ultimate effect of broken-link-type decoherence on the optimized quantum random-walk search algorithm is negative. (paper)

  20. On-line reconstruction algorithms for the CBM and ALICE experiments

    International Nuclear Information System (INIS)

    Gorbunov, Sergey

    2013-01-01

    This thesis presents various algorithms which have been developed for on-line event reconstruction in the CBM experiment at GSI, Darmstadt and the ALICE experiment at CERN, Geneva. Despite the fact that the experiments are different - CBM is a fixed target experiment with forward geometry, while ALICE has a typical collider geometry - they share common aspects where reconstruction is concerned. The thesis describes: - general modifications to the Kalman filter method, which allow one to accelerate, to improve, and to simplify existing fit algorithms; - algorithms developed for track fitting in the CBM and ALICE experiments, including a new method for track extrapolation in a non-homogeneous magnetic field; - algorithms developed for primary and secondary vertex fitting in both experiments, in particular a new method for the reconstruction of decayed particles; - a parallel algorithm developed for on-line tracking in the CBM experiment; - a parallel algorithm developed for on-line tracking in the High Level Trigger of the ALICE experiment; - the realisation of the track finders on modern hardware, such as SIMD CPU registers and GPU accelerators. All the presented methods have been developed by or with the direct participation of the author.

  1. Online Distributed Learning Over Networks in RKH Spaces Using Random Fourier Features

    Science.gov (United States)

    Bouboulis, Pantelis; Chouvardas, Symeon; Theodoridis, Sergios

    2018-04-01

    We present a novel diffusion scheme for online kernel-based learning over networks. So far, a major drawback of any online learning algorithm, operating in a reproducing kernel Hilbert space (RKHS), is the need for updating a growing number of parameters as time iterations evolve. Besides complexity, this leads to an increased need for communication resources in a distributed setting. In contrast, the proposed method approximates the solution as a fixed-size vector (of larger dimension than the input space) using Random Fourier Features. This paves the way to use standard linear combine-then-adapt techniques. To the best of our knowledge, this is the first time that a complete protocol for distributed online learning in RKHS is presented. Conditions for asymptotic convergence and boundedness of the networkwise regret are also provided. The simulated tests illustrate the performance of the proposed scheme.

  2. QUEST : Eliminating online supervised learning for efficient classification algorithms

    NARCIS (Netherlands)

    Zwartjes, Ardjan; Havinga, Paul J.M.; Smit, Gerard J.M.; Hurink, Johann L.

    2016-01-01

    In this work, we introduce QUEST (QUantile Estimation after Supervised Training), an adaptive classification algorithm for Wireless Sensor Networks (WSNs) that eliminates the necessity for online supervised learning. Online processing is important for many sensor network applications. Transmitting

  3. Dynamic connectivity algorithms for Monte Carlo simulations of the random-cluster model

    International Nuclear Information System (INIS)

    Elçi, Eren Metin; Weigel, Martin

    2014-01-01

    We review Sweeny's algorithm for Monte Carlo simulations of the random cluster model. Straightforward implementations suffer from the problem of computational critical slowing down, where the computational effort per edge operation scales with a power of the system size. By using a tailored dynamic connectivity algorithm we are able to perform all operations with a poly-logarithmic computational effort. This approach is shown to be efficient in keeping online connectivity information and is of use for a number of applications also beyond cluster-update simulations, for instance in monitoring droplet shape transitions. As the handling of the relevant data structures is non-trivial, we provide a Python module with a full implementation for future reference.
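
    For reference, a naive single-bond heat-bath sweep of Sweeny's dynamics looks as follows; the breadth-first connectivity check inside the loop is exactly the operation that the paper replaces with a poly-logarithmic dynamic connectivity structure (a sketch only, using the standard heat-bath probabilities for the random-cluster model):

      import random
      from collections import deque

      def sweeny_sweep(adj, edges, p, q, rng=random):
          """One sweep over all edges. adj: dict node -> set of currently
          occupied neighbours; edges: list of (u, v) lattice edges."""
          def connected(u, v):                      # is v reachable from u?
              seen, todo = {u}, deque([u])
              while todo:
                  w = todo.popleft()
                  if w == v:
                      return True
                  for z in adj[w] - seen:
                      seen.add(z)
                      todo.append(z)
              return False

          for (u, v) in edges:
              adj[u].discard(v); adj[v].discard(u)  # remove the bond, then redraw it
              # heat-bath probability of occupying the bond
              p_occ = p if connected(u, v) else p / (p + q * (1.0 - p))
              if rng.random() < p_occ:
                  adj[u].add(v); adj[v].add(u)
          return adj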

  4. On-Line Algorithms and Reverse Mathematics

    Science.gov (United States)

    Harris, Seth

    In this thesis, we classify the reverse-mathematical strength of sequential problems. If we are given a problem P of the form ∀X(α(X) → ∃Z β(X,Z)), then the corresponding sequential problem, SeqP, asserts the existence of infinitely many solutions to P: ∀X(∀n α(X_n) → ∃Z ∀n β(X_n, Z_n)). P is typically provable in RCA0 if all objects involved are finite. SeqP, however, is only guaranteed to be provable in ACA0. In this thesis we exactly characterize which sequential problems are equivalent to RCA0, WKL0, or ACA0. We say that a problem P is solvable by an on-line algorithm if P can be solved according to a two-player game, played by Alice and Bob, in which Bob has a winning strategy. Bob wins the game if Alice's sequence of plays 〈a_0, ..., a_k〉 and Bob's sequence of responses 〈b_0, ..., b_k〉 constitute a solution to P. Formally, an on-line algorithm A is a function that inputs an admissible sequence of plays 〈a_0, b_0, ..., a_j〉 and outputs a new play b_j for Bob. (This differs from the typical definition of "algorithm", though quite often a concrete set of instructions can be easily deduced from A.) We show that SeqP is provable in RCA0 precisely when P is solvable by an on-line algorithm. Schmerl proved this result specifically for the graph coloring problem; we generalize Schmerl's result to any problem that is on-line solvable. To prove our separation, we introduce a principle called Predict_k(r) that is equivalent to WKL0 for standard k, r. We show that WKL0 is sufficient to prove SeqP precisely when P has a solvable closed kernel. This means that a solution exists, and each initial segment of this solution is a solution to the corresponding initial segment of the problem. (Certain bounding conditions are necessary as well.) If no such solution exists, then SeqP is equivalent to ACA0 over RCA0 + IΣ02; RCA0 alone suffices if only sequences of standard length are considered. We use different techniques from Schmerl to prove

  5. Parallel algorithms for online trackfinding at PANDA

    Energy Technology Data Exchange (ETDEWEB)

    Bianchi, Ludovico; Ritman, James; Stockmanns, Tobias [IKP, Forschungszentrum Juelich GmbH (Germany); Herten, Andreas [JSC, Forschungszentrum Juelich GmbH (Germany); Collaboration: PANDA-Collaboration

    2016-07-01

    The PANDA experiment, one of the four scientific pillars of the FAIR facility currently under construction in Darmstadt, is a next-generation particle detector that will study collisions of antiprotons with beam momenta of 1.5-15 GeV/c on a fixed proton target. Because of the broad physics scope and the similar signature of signal and background events, PANDA's strategy for data acquisition is to continuously record data from the whole detector and use this global information to perform online event reconstruction and filtering. A real-time rejection factor of up to 1000 must be achieved to match the incoming data rate for offline storage, making all components of the data processing system computationally very challenging. Online particle track identification and reconstruction is an essential step, since track information is used as input in all following phases. Online tracking algorithms must ensure a delicate balance between high tracking efficiency and quality, and minimal computational footprint. For this reason, a massively parallel solution exploiting multiple Graphic Processing Units (GPUs) is under investigation. The talk presents the core concepts of the algorithms being developed for primary trackfinding, along with details of their implementation on GPUs.

  6. Belief-node Condensation for Online POMDP Algorithms

    CSIR Research Space (South Africa)

    Rens, G

    2013-09-01

    Full Text Available with an online POMDP algorithm. An interesting/unexpected result is that on average, more rewards are gained as the system’s dynamism increases. This could be explained by the possibility that the agent has to travel more when the items are stationary; the items tend... rewards returned and running-time (i.e., reactivity) for different levels of dynamism in the environment. Through experiments, we show that some of the condensation methods make algorithms significantly more effective. The paper is organized as follows...

  7. Text Clustering Algorithm Based on Random Cluster Core

    Directory of Open Access Journals (Sweden)

    Huang Long-Jun

    2016-01-01

    Full Text Available Nowadays clustering has become a popular text mining technique, but huge volumes of data place higher demands on the accuracy and performance of text mining. In view of the performance bottleneck of traditional text clustering algorithms, this paper proposes a text clustering algorithm with random features. It is a clustering algorithm based on text density; by using neighboring heuristic rules, the concept of a random cluster core is introduced, which effectively reduces the complexity of the distance calculation.

  8. Randomized algorithms in automatic control and data mining

    CERN Document Server

    Granichin, Oleg; Toledano-Kitai, Dvora

    2015-01-01

    In the fields of data mining and control, the huge amount of unstructured data and the presence of uncertainty in system descriptions have always been critical issues. The book Randomized Algorithms in Automatic Control and Data Mining introduces the readers to the fundamentals of randomized algorithm applications in data mining (especially clustering) and in automatic control synthesis. The methods proposed in this book guarantee that the computational complexity of classical algorithms and the conservativeness of standard robust control techniques will be reduced. It is shown that when a problem requires "brute force" in selecting among options, algorithms based on random selection of alternatives offer good results with certain probability for a restricted time and significantly reduce the volume of operations.

  9. Employing online quantum random number generators for generating truly random quantum states in Mathematica

    Science.gov (United States)

    Miszczak, Jarosław Adam

    2013-01-01

    The presented package for the Mathematica computing system allows the harnessing of quantum random number generators (QRNG) for investigating the statistical properties of quantum states. The described package implements a number of functions for generating random states. The new version of the package adds the ability to use the on-line quantum random number generator service and implements new functions for retrieving lists of random numbers. Thanks to the introduced improvements, the new version provides faster access to high-quality sources of random numbers and can be used in simulations requiring large amount of random data. New version program summaryProgram title: TRQS Catalogue identifier: AEKA_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEKA_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 18 134 No. of bytes in distributed program, including test data, etc.: 2 520 49 Distribution format: tar.gz Programming language: Mathematica, C. Computer: Any supporting Mathematica in version 7 or higher. Operating system: Any platform supporting Mathematica; tested with GNU/Linux (32 and 64 bit). RAM: Case-dependent Supplementary material: Fig. 1 mentioned below can be downloaded. Classification: 4.15. External routines: Quantis software library (http://www.idquantique.com/support/quantis-trng.html) Catalogue identifier of previous version: AEKA_v1_0 Journal reference of previous version: Comput. Phys. Comm. 183(2012)118 Does the new version supersede the previous version?: Yes Nature of problem: Generation of random density matrices and utilization of high-quality random numbers for the purpose of computer simulation. Solution method: Use of a physical quantum random number generator and an on-line service providing access to the source of true random

  10. Effects of Random Values for Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Hou-Ping Dai

    2018-02-01

    Full Text Available Particle swarm optimization (PSO) algorithm is generally improved by adaptively adjusting the inertia weight or combining with other evolution algorithms. However, in most modified PSO algorithms, the random values are always generated by uniform distribution in the range of [0, 1]. In this study, the random values, which are generated by uniform distribution in the ranges of [0, 1] and [-1, 1], and Gauss distribution with mean 0 and variance 1 (U[0,1], U[-1,1] and G(0,1)), are respectively used in the standard PSO and linear decreasing inertia weight (LDIW) PSO algorithms. For comparison, the deterministic PSO algorithm, in which the random values are set as 0.5, is also investigated in this study. Some benchmark functions and the pressure vessel design problem are selected to test these algorithms with different types of random values in three space dimensions (10, 30, and 100). The experimental results show that the standard PSO and LDIW-PSO algorithms with random values generated by U[-1,1] or G(0,1) are more likely to avoid falling into local optima and quickly obtain the global optima. This is because the large-scale random values can expand the range of particle velocity to make the particle more likely to escape from local optima and obtain the global optima. Although the random values generated by U[-1,1] or G(0,1) are beneficial to improve the global searching ability, the local searching ability for a low dimensional practical optimization problem may be decreased due to the finite particles.
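
    For concreteness, the velocity update being modified is the standard PSO rule below, with the random factors r1, r2 drawn from whichever distribution is under test (parameter values are illustrative):

      import numpy as np

      def pso_velocity_update(v, x, pbest, gbest, w=0.7, c1=2.0, c2=2.0,
                              rand="U[-1,1]", rng=np.random.default_rng()):
          """Standard PSO velocity update; `rand` selects how the random
          factors are generated: U[0,1], U[-1,1], G(0,1) or the fixed value
          0.5 used by the deterministic variant."""
          draw = {"U[0,1]":  lambda: rng.uniform(0, 1, size=x.shape),
                  "U[-1,1]": lambda: rng.uniform(-1, 1, size=x.shape),
                  "G(0,1)":  lambda: rng.normal(0, 1, size=x.shape),
                  "0.5":     lambda: np.full(x.shape, 0.5)}[rand]
          r1, r2 = draw(), draw()
          return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)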

  11. QUEST: Eliminating Online Supervised Learning for Efficient Classification Algorithms.

    Science.gov (United States)

    Zwartjes, Ardjan; Havinga, Paul J M; Smit, Gerard J M; Hurink, Johann L

    2016-10-01

    In this work, we introduce QUEST (QUantile Estimation after Supervised Training), an adaptive classification algorithm for Wireless Sensor Networks (WSNs) that eliminates the necessity for online supervised learning. Online processing is important for many sensor network applications. Transmitting raw sensor data puts high demands on the battery, reducing network life time. By merely transmitting partial results or classifications based on the sampled data, the amount of traffic on the network can be significantly reduced. Such classifications can be made by learning based algorithms using sampled data. An important issue, however, is the training phase of these learning based algorithms. Training a deployed sensor network requires a lot of communication and an impractical amount of human involvement. QUEST is a hybrid algorithm that combines supervised learning in a controlled environment with unsupervised learning on the location of deployment. Using the SITEX02 dataset, we demonstrate that the presented solution works with a performance penalty of less than 10% in 90% of the tests. Under some circumstances, it even outperforms a network of classifiers completely trained with supervised learning. As a result, the need for on-site supervised learning and communication for training is completely eliminated by our solution.

  12. Verification test for on-line diagnosis algorithm based on noise analysis

    International Nuclear Information System (INIS)

    Tamaoki, T.; Naito, N.; Tsunoda, T.; Sato, M.; Kameda, A.

    1980-01-01

    An on-line diagnosis algorithm was developed and its verification test was performed using a minicomputer. This algorithm identifies the plant state by analyzing various system noise patterns, such as power spectral densities, coherence functions etc., in three procedure steps. Each obtained noise pattern is examined by using the distances from its reference patterns prepared for various plant states. Then, the plant state is identified by synthesizing each result with an evaluation weight. This weight is determined automatically from the reference noise patterns prior to on-line diagnosis. The test was performed with 50 MW (th) Steam Generator noise data recorded under various controller parameter values. The algorithm performance was evaluated based on a newly devised index. The results obtained with one kind of weight showed the algorithm efficiency under the proper selection of noise patterns. Results for another kind of weight showed the robustness of the algorithm to this selection. (orig.)

  13. Non-Linguistic Vocal Event Detection Using Online Random

    DEFF Research Database (Denmark)

    Abou-Zleikha, Mohamed; Tan, Zheng-Hua; Christensen, Mads Græsbøll

    2014-01-01

    areas such as object detection, face recognition, and audio event detection. This paper proposes to use online random forest technique for detecting laughter and filler and for analyzing the importance of various features for non-linguistic vocal event classification through permutation. The results show that according to the Area Under Curve measure the online random forest achieved 88.1% compared to 82.9% obtained by the baseline support vector machines for laughter classification and 86.8% to 83.6% for filler classification.

  14. An on-line modified least-mean-square algorithm for training neurofuzzy controllers.

    Science.gov (United States)

    Tan, Woei Wan

    2007-04-01

    The problem hindering the use of data-driven modelling methods for training controllers on-line is the lack of control over the amount by which the plant is excited. As the operating schedule determines the information available on-line, the knowledge of the process may degrade if the setpoint remains constant for an extended period. This paper proposes an identification algorithm that alleviates "learning interference" by incorporating fuzzy theory into the normalized least-mean-square update rule. The ability of the proposed methodology to achieve faster learning is examined by employing the algorithm to train a neurofuzzy feedforward controller for controlling a liquid level process. Since the proposed identification strategy has similarities with the normalized least-mean-square update rule and the recursive least-square estimator, the on-line learning rates of these algorithms are also compared.
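
    The baseline being modified is the normalized least-mean-square rule; a bare sketch of that update (the fuzzy weighting that the paper adds on top is not reproduced here):

      import numpy as np

      def nlms_update(w, x, d, mu=0.5, eps=1e-6):
          """One NLMS step. w: parameter vector, x: regressor, d: desired
          output, mu: step size, eps: regularizer preventing division by zero."""
          e = d - w @ x                             # a-priori prediction error
          w = w + (mu / (eps + x @ x)) * e * x      # normalized gradient step
          return w, e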

  15. Online algorithms for optimal energy distribution in microgrids

    CERN Document Server

    Wang, Yu; Nelms, R Mark

    2015-01-01

    Presenting an optimal energy distribution strategy for microgrids in a smart grid environment, and featuring a detailed analysis of the mathematical techniques of convex optimization and online algorithms, this book provides readers with essential content on how to achieve multi-objective optimization that takes into consideration power subscribers, energy providers and grid smoothing in microgrids. Featuring detailed theoretical proofs and simulation results that demonstrate and evaluate the correctness and effectiveness of the algorithm, this text explains step-by-step how the problem can b

  16. Performance optimization of an online retailer by a unique online resilience engineering algorithm

    Science.gov (United States)

    Azadeh, A.; Salehi, V.; Salehi, R.; Hassani, S. M.

    2018-03-01

    Online shopping has become more attractive and competitive in electronic markets. Resilience engineering (RE) can help such systems return to the normal state when unexpected events are encountered. This study presents a unique online resilience engineering (ORE) approach for online shopping systems and customer service performance. Moreover, this study presents a new ORE algorithm for the performance optimisation of an actual online shopping system. The data are collected by standard questionnaires from both expert employees and customers. The problem is then formulated mathematically using data envelopment analysis (DEA). The results show that the design process which is based on ORE is more efficient than the conventional design approach. Moreover, on-time delivery is the most important factor from the personnel's perspective. In addition, according to the customers' view, trust, security and good quality assurance are the most effective factors during transactions. First, this is the first study to introduce ORE for electronic markets. Second, it investigates the impact of RE on online shopping through DEA and statistical methods. Third, a practical approach is employed in this study and it may be used for similar online shops. Fourth, the results are verified and validated through complete sensitivity analysis.

  17. QUEST: Eliminating Online Supervised Learning for Efficient Classification Algorithms

    Directory of Open Access Journals (Sweden)

    Ardjan Zwartjes

    2016-10-01

    Full Text Available In this work, we introduce QUEST (QUantile Estimation after Supervised Training, an adaptive classification algorithm for Wireless Sensor Networks (WSNs that eliminates the necessity for online supervised learning. Online processing is important for many sensor network applications. Transmitting raw sensor data puts high demands on the battery, reducing network life time. By merely transmitting partial results or classifications based on the sampled data, the amount of traffic on the network can be significantly reduced. Such classifications can be made by learning based algorithms using sampled data. An important issue, however, is the training phase of these learning based algorithms. Training a deployed sensor network requires a lot of communication and an impractical amount of human involvement. QUEST is a hybrid algorithm that combines supervised learning in a controlled environment with unsupervised learning on the location of deployment. Using the SITEX02 dataset, we demonstrate that the presented solution works with a performance penalty of less than 10% in 90% of the tests. Under some circumstances, it even outperforms a network of classifiers completely trained with supervised learning. As a result, the need for on-site supervised learning and communication for training is completely eliminated by our solution.

  18. Reinforcement Learning for Online Control of Evolutionary Algorithms

    NARCIS (Netherlands)

    Eiben, A.; Horvath, Mark; Kowalczyk, Wojtek; Schut, Martijn

    2007-01-01

    The research reported in this paper is concerned with assessing the usefulness of reinforcement learning (RL) for on-line calibration of parameters in evolutionary algorithms (EA). We are running an RL procedure and the EA simultaneously and the RL is changing the EA parameters on-the-fly. We

  19. Online Normalization Algorithm for Engine Turbofan Monitoring

    Science.gov (United States)

    2014-10-02

    Jérôme Lacaille (Snecma, 77550 Moissy-Cramayel, France), Anastasios Bellas. To understand the behavior of a turbofan engine, one first needs to deal with the variety of data acquisition contexts. Each time a set of measurements is... it auto-adapts itself with piecewise linear models. Turbofan engine abnormality diagnosis uses three steps: reduction of

  20. The generalization ability of online SVM classification based on Markov sampling.

    Science.gov (United States)

    Xu, Jie; Yan Tang, Yuan; Zou, Bin; Xu, Zongben; Li, Luoqing; Lu, Yang

    2015-03-01

    In this paper, we consider online support vector machine (SVM) classification learning algorithms with uniformly ergodic Markov chain (u.e.M.c.) samples. We establish the bound on the misclassification error of an online SVM classification algorithm with u.e.M.c. samples based on reproducing kernel Hilbert spaces and obtain a satisfactory convergence rate. We also introduce a novel online SVM classification algorithm based on Markov sampling, and present the numerical studies on the learning ability of online SVM classification based on Markov sampling for benchmark repository. The numerical studies show that the learning performance of the online SVM classification algorithm based on Markov sampling is better than that of classical online SVM classification based on random sampling as the size of training samples is larger.
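
    As a point of reference, a generic online linear SVM step driven by the hinge loss looks like the Pegasos-style update below; the paper itself works in a reproducing kernel Hilbert space and draws training samples from a uniformly ergodic Markov chain rather than i.i.d.:

      import numpy as np

      def pegasos_step(w, x, y, t, lam=0.01):
          """One SGD step on the regularized hinge loss. x: feature vector,
          y: label in {-1, +1}, t: 1-based step index, lam: regularization."""
          eta = 1.0 / (lam * t)
          if y * (w @ x) < 1:                       # margin violated
              w = (1 - eta * lam) * w + eta * y * x
          else:                                     # only shrink by regularization
              w = (1 - eta * lam) * w
          return w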

  1. On factoring RSA modulus using random-restart hill-climbing algorithm and Pollard’s rho algorithm

    Science.gov (United States)

    Budiman, M. A.; Rachmawati, D.

    2017-12-01

    The security of the widely-used RSA public key cryptography algorithm depends on the difficulty of factoring a big integer into two large prime numbers. For many years, the integer factorization problem has been intensively and extensively studied in the field of number theory. As a result, a lot of deterministic algorithms such as Euler's algorithm, Kraitchik's, and variants of Pollard's algorithms have been researched comprehensively. Our study takes a rather uncommon approach: rather than making use of intensive number theory, we attempt to factorize the RSA modulus n by using a random-restart hill-climbing algorithm, which belongs to the class of metaheuristic algorithms. The factorization time of RSA moduli with different lengths is recorded and compared with the factorization time of Pollard's rho algorithm, which is a deterministic algorithm. Our experimental results indicate that while the random-restart hill-climbing algorithm is an acceptable candidate to factorize smaller RSA moduli, the factorization speed is much slower than that of Pollard's rho algorithm.
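
    For comparison, the Pollard's rho baseline mentioned in the abstract can be written in a few lines (standard textbook form with Floyd cycle detection; intended for composite n):

      import math
      import random

      def pollards_rho(n):
          """Return a non-trivial factor of a composite integer n."""
          if n % 2 == 0:
              return 2
          while True:
              x = y = random.randrange(2, n)
              c = random.randrange(1, n)
              d = 1
              while d == 1:
                  x = (x * x + c) % n               # tortoise: one step
                  y = (y * y + c) % n               # hare: two steps
                  y = (y * y + c) % n
                  d = math.gcd(abs(x - y), n)
              if d != n:                            # found a proper factor
                  return d

      # pollards_rho(8051) returns 83 or 97, since 8051 = 83 * 97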

  2. An optimal algorithm for preemptive on-line scheduling

    NARCIS (Netherlands)

    Chen, B.; Vliet, van A.; Woeginger, G.J.

    1995-01-01

    We investigate the problem of on-line scheduling jobs on m identical parallel machines where preemption is allowed. The goal is to minimize the makespan. We derive an approximation algorithm with worst-case guarantee m^m/(m^m - (m-1)^m) for every m ≥ 2, which increasingly tends to e/(e-1) ≈ 1.58 as m → ∞.
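
    A quick numerical check of the reconstructed bound (assuming the formula above) shows how it rises from 4/3 at m = 2 towards e/(e-1):

      def preemptive_bound(m):
          """Worst-case guarantee m^m / (m^m - (m-1)^m)."""
          return m**m / (m**m - (m - 1)**m)

      # m = 2 -> 1.333, m = 5 -> ~1.487, m = 100 -> ~1.577 (limit e/(e-1) ~ 1.582)
      print([round(preemptive_bound(m), 3) for m in (2, 5, 100)])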

  3. A Gaussian Process Based Online Change Detection Algorithm for Monitoring Periodic Time Series

    Energy Technology Data Exchange (ETDEWEB)

    Chandola, Varun [ORNL; Vatsavai, Raju [ORNL

    2011-01-01

    Online time series change detection is a critical component of many monitoring systems, such as space and air-borne remote sensing instruments, cardiac monitors, and network traffic profilers, which continuously analyze observations recorded by sensors. Data collected by such sensors typically has a periodic (seasonal) component. Most existing time series change detection methods are not directly applicable to handle such data, either because they are not designed to handle periodic time series or because they cannot operate in an online mode. We propose an online change detection algorithm which can handle periodic time series. The algorithm uses a Gaussian process based non-parametric time series prediction model and monitors the difference between the predictions and actual observations within a statistically principled control chart framework to identify changes. A key challenge in using a Gaussian process in an online mode is the need to solve a large system of equations involving the associated covariance matrix, which grows with every time step. The proposed algorithm exploits the special structure of the covariance matrix and can analyze a time series of length T in O(T^2) time while maintaining an O(T) memory footprint, compared to the O(T^4) time and O(T^2) memory requirements of standard matrix manipulation methods. We experimentally demonstrate the superiority of the proposed algorithm over several existing time series change detection algorithms on a set of synthetic and real time series. Finally, we illustrate the effectiveness of the proposed algorithm for identifying land use land cover changes using Normalized Difference Vegetation Index (NDVI) data collected for an agricultural region in Iowa state, USA. Our algorithm is able to detect different types of changes in an NDVI validation data set (with ~80% accuracy) which occur due to crop type changes as well as disruptive changes (e.g., natural disasters).

  4. A comparison of performance measures for online algorithms

    DEFF Research Database (Denmark)

    Boyar, Joan; Irani, Sandy; Larsen, Kim Skak

    2009-01-01

    is to balance greediness and adaptability. We examine how these measures evaluate the Greedy Algorithm and Lazy Double Coverage, commonly studied algorithms in the context of server problems. We examine Competitive Analysis, the Max/Max Ratio, the Random Order Ratio, Bijective Analysis and Relative Worst Order Analysis and determine how they compare the two algorithms. We find that by the Max/Max Ratio and Bijective Analysis, Greedy is the better algorithm. Under the other measures Lazy Double Coverage is better, though Relative Worst Order Analysis indicates that Greedy is sometimes better. Our results also provide the first proof of optimality of an algorithm under Relative Worst Order Analysis.

  5. Privacy preserving randomized gossip algorithms

    KAUST Repository

    Hanzely, Filip; Konečný , Jakub; Loizou, Nicolas; Richtarik, Peter; Grishchenko, Dmitry

    2017-01-01

    In this work we present three different randomized gossip algorithms for solving the average consensus problem while at the same time protecting the information about the initial private values stored at the nodes. We give iteration complexity bounds for all methods, and perform extensive numerical experiments.

  6. Privacy preserving randomized gossip algorithms

    KAUST Repository

    Hanzely, Filip

    2017-06-23

    In this work we present three different randomized gossip algorithms for solving the average consensus problem while at the same time protecting the information about the initial private values stored at the nodes. We give iteration complexity bounds for all methods, and perform extensive numerical experiments.
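
    For orientation, plain (non-private) randomized pairwise gossip for average consensus looks as follows; the contribution of the paper is protecting the initial private values on top of schemes of this kind:

      import numpy as np

      def randomized_gossip(values, edges, n_iters=1000, rng=np.random.default_rng()):
          """Average consensus by repeatedly averaging the two endpoints of a
          randomly activated edge; on a connected graph x converges to the
          mean of the initial values."""
          x = np.array(values, dtype=float)
          for _ in range(n_iters):
              i, j = edges[rng.integers(len(edges))]   # activate a random edge
              x[i] = x[j] = (x[i] + x[j]) / 2.0        # both nodes take their average
          return x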

  7. A fast ergodic algorithm for generating ensembles of equilateral random polygons

    Science.gov (United States)

    Varela, R.; Hinson, K.; Arsuaga, J.; Diao, Y.

    2009-03-01

    Knotted structures are commonly found in circular DNA and along the backbone of certain proteins. In order to properly estimate properties of these three-dimensional structures it is often necessary to generate large ensembles of simulated closed chains (i.e. polygons) of equal edge lengths (such polygons are called equilateral random polygons). However, finding efficient algorithms that properly sample the space of equilateral random polygons is a difficult problem. Currently there are no proven algorithms that generate equilateral random polygons according to their theoretical distribution. In this paper we propose a method that generates equilateral random polygons in a 'step-wise uniform' way. We prove that this method is ergodic in the sense that any given equilateral random polygon can be generated by this method and we show that the time needed to generate an equilateral random polygon of length n is linear in terms of n. These two properties make this algorithm a big improvement over the existing generating methods. Detailed numerical comparisons of our algorithm with other widely used algorithms are provided.

  8. Development and validation of an online interactive, multimedia wound care algorithms program.

    Science.gov (United States)

    Beitz, Janice M; van Rijswijk, Lia

    2012-01-01

    To provide education based on evidence-based and validated wound care algorithms we designed and implemented an interactive, Web-based learning program for teaching wound care. A mixed methods quantitative pilot study design with qualitative components was used to test and ascertain the ease of use, validity, and reliability of the online program. A convenience sample of 56 RN wound experts (formally educated, certified in wound care, or both) participated. The interactive, online program consists of a user introduction, interactive assessment of 15 acute and chronic wound photos, user feedback about the percentage correct, partially correct, or incorrect algorithm and dressing choices and a user survey. After giving consent, participants accessed the online program, provided answers to the demographic survey, and completed the assessment module and photographic test, along with a posttest survey. The construct validity of the online interactive program was strong. Eighty-five percent (85%) of algorithm and 87% of dressing choices were fully correct even though some programming design issues were identified. Online study results were consistently better than previously conducted comparable paper-pencil study results. Using a 5-point Likert-type scale, participants rated the program's value and ease of use as 3.88 (valuable to very valuable) and 3.97 (easy to very easy), respectively. Similarly the research process was described qualitatively as "enjoyable" and "exciting." This digital program was well received indicating its "perceived benefits" for nonexpert users, which may help reduce barriers to implementing safe, evidence-based care. Ongoing research using larger sample sizes may help refine the program or algorithms while identifying clinician educational needs. Initial design imperfections and programming problems identified also underscored the importance of testing all paper and Web-based programs designed to educate health care professionals or guide

  9. GPU implementations of online track finding algorithms at PANDA

    Energy Technology Data Exchange (ETDEWEB)

    Herten, Andreas; Stockmanns, Tobias; Ritman, James [Institut fuer Kernphysik, Forschungszentrum Juelich GmbH (Germany); Adinetz, Andrew; Pleiter, Dirk [Juelich Supercomputing Centre, Forschungszentrum Juelich GmbH (Germany); Kraus, Jiri [NVIDIA GmbH (Germany); Collaboration: PANDA-Collaboration

    2014-07-01

    The PANDA experiment is a hadron physics experiment that will investigate antiproton annihilation in the charm quark mass region. The experiment is now being constructed as one of the main parts of the FAIR facility. At an event rate of 2 × 10^7/s a data rate of 200 GB/s is expected. A reduction of three orders of magnitude is required in order to save the data for further offline analysis. Since signal and background processes at PANDA have similar signatures, no hardware-level trigger is foreseen for the experiment. Instead, a fast online event filter is substituting this element. We investigate the possibility of using graphics processing units (GPUs) for the online tracking part of this task. Researched algorithms are a Hough Transform, a track finder involving Riemann surfaces, and the novel, PANDA-specific Triplet Finder. This talk shows selected advances in the implementations as well as performance evaluations of the GPU tracking algorithms to be used at the PANDA experiment.
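
    As a toy CPU-side illustration of the Hough-transform idea mentioned above (the GPU implementations at PANDA, as well as the Riemann-surface and Triplet Finder approaches, are considerably more involved), each hit votes for all straight-line parameters consistent with it and track candidates appear as accumulator peaks:

      import numpy as np

      def hough_lines(hits, n_theta=180, n_r=200, r_max=100.0):
          """Fill a (theta, r) accumulator from 2-D hit positions; peaks
          correspond to straight-line track candidates."""
          thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
          acc = np.zeros((n_theta, n_r), dtype=np.int32)
          for x, y in hits:
              r = x * np.cos(thetas) + y * np.sin(thetas)
              r_bins = np.clip(((r + r_max) / (2 * r_max) * n_r).astype(int), 0, n_r - 1)
              acc[np.arange(n_theta), r_bins] += 1
          return acc, thetas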

  10. Online Testing of Real-time Systems Using Uppaal

    DEFF Research Database (Denmark)

    Larsen, Kim Guldstrand; Mikucionis, Marius; Nielsen, Brian

    2005-01-01

    We present T-Uppaal -- a new tool for online black-box testing of real-time embedded systems from non-deterministic timed automata specifications. We describe a sound and complete randomized online testing algorithm and how to implement it using symbolic state representation and manipulation te...

  11. A polylogarithmic competitive algorithm for the k-server problem

    NARCIS (Netherlands)

    Bansal, N.; Buchbinder, N.; Madry, A.; Naor, J.

    2011-01-01

    We give the first polylogarithmic-competitive randomized online algorithm for the $k$-server problem on an arbitrary finite metric space. In particular, our algorithm achieves a competitive ratio of O(log^3 n log^2 k log log n) for any metric space on n points. Our algorithm improves upon the

  12. TH-E-BRE-04: An Online Replanning Algorithm for VMAT

    Energy Technology Data Exchange (ETDEWEB)

    Ahunbay, E; Li, X [Medical College of Wisconsin, Milwaukee, WI (United States); Moreau, M [Elekta, Inc, Verona, WI (Italy)

    2014-06-15

    Purpose: To develop a fast replanning algorithm based on segment aperture morphing (SAM) for online replanning of volumetric modulated arc therapy (VMAT) with flattening filtered (FF) and flattening filter free (FFF) beams. Methods: A software tool was developed to interface with a VMAT planning system (Monaco, Elekta), enabling the output of detailed beam/machine parameters of original VMAT plans generated based on planning CTs for FF or FFF beams. A SAM algorithm, previously developed for fixed-beam IMRT, was modified to allow the algorithm to correct for interfractional variations (e.g., setup error, organ motion and deformation) by morphing apertures based on the geometric relationship between the beam's eye view of the anatomy from the planning CT and that from the daily CT for each control point. The algorithm was tested using daily CTs acquired using an in-room CT during daily IGRT for representative prostate cancer cases along with their planning CTs. The algorithm allows for restricted MLC leaf travel distance between control points of the VMAT delivery to prevent SAM from increasing leaf travel, and therefore treatment delivery time. Results: The VMAT plans adapted to the daily CT by SAM were found to improve the dosimetry relative to the IGRT repositioning plans for both FF and FFF beams. For the adaptive plans, the changes in leaf travel distance between control points were < 1 cm for 80% of the control points with no restriction. When restricted to the original plans' maximum travel distance, the dosimetric effect was minimal. The adaptive plans were delivered successfully with similar delivery times as the original plans. The execution of the SAM algorithm took < 10 seconds. Conclusion: The SAM algorithm can quickly generate deliverable online-adaptive VMAT plans based on the anatomy of the day for both FF and FFF beams.

  13. TH-E-BRE-04: An Online Replanning Algorithm for VMAT

    International Nuclear Information System (INIS)

    Ahunbay, E; Li, X; Moreau, M

    2014-01-01

    Purpose: To develop a fast replanning algorithm based on segment aperture morphing (SAM) for online replanning of volumetric modulated arc therapy (VMAT) with flattening filtered (FF) and flattening filter free (FFF) beams. Methods: A software tool was developed to interface with a VMAT planning system (Monaco, Elekta), enabling the output of detailed beam/machine parameters of original VMAT plans generated based on planning CTs for FF or FFF beams. A SAM algorithm, previously developed for fixed-beam IMRT, was modified to allow the algorithm to correct for interfractional variations (e.g., setup error, organ motion and deformation) by morphing apertures based on the geometric relationship between the beam's eye view of the anatomy from the planning CT and that from the daily CT for each control point. The algorithm was tested using daily CTs acquired using an in-room CT during daily IGRT for representative prostate cancer cases along with their planning CTs. The algorithm allows for restricted MLC leaf travel distance between control points of the VMAT delivery to prevent SAM from increasing leaf travel, and therefore treatment delivery time. Results: The VMAT plans adapted to the daily CT by SAM were found to improve the dosimetry relative to the IGRT repositioning plans for both FF and FFF beams. For the adaptive plans, the changes in leaf travel distance between control points were < 1 cm for 80% of the control points with no restriction. When restricted to the original plans' maximum travel distance, the dosimetric effect was minimal. The adaptive plans were delivered successfully with similar delivery times as the original plans. The execution of the SAM algorithm took < 10 seconds. Conclusion: The SAM algorithm can quickly generate deliverable online-adaptive VMAT plans based on the anatomy of the day for both FF and FFF beams

  14. Parameterized Analysis of Paging and List Update Algorithms

    DEFF Research Database (Denmark)

    Dorrigiv, Reza; Ehmsen, Martin R.; López-Ortiz, Alejandro

    2015-01-01

    We apply Denning's working set model and express the performance of well known algorithms in terms of this parameter. This explicitly introduces parameterized-style analysis to online algorithms. The idea is that rather than normalizing the performance of an online algorithm by an (optimal) offline algorithm, we explicitly express the behavior of the algorithm in terms of two more natural parameters: the size of the cache and Denning's working set measure. This technique creates a performance hierarchy of paging algorithms which better reflects their experimentally observed relative strengths. It also reflects the intuition that a larger cache leads to a better performance. We also apply the parameterized analysis framework to list update and show that certain randomized algorithms which are superior to MTF in the classical model are not so in the parameterized case, which matches experimental results.

  15. FPGA Online Tracking Algorithm for the PANDA Straw Tube Tracker

    Science.gov (United States)

    Liang, Yutie; Ye, Hua; Galuska, Martin J.; Gessler, Thomas; Kuhn, Wolfgang; Lange, Jens Soren; Wagner, Milan N.; Liu, Zhen'an; Zhao, Jingzhou

    2017-06-01

    A novel FPGA-based online tracking algorithm for helix track reconstruction in a solenoidal field, developed for the PANDA spectrometer, is described. Employing the Straw Tube Tracker detector with 4636 straw tubes, the algorithm includes a complex track finder and a track fitter. Implemented in VHDL, the algorithm is tested on a Xilinx Virtex-4 FX60 FPGA chip with different types of events, at different event rates. A processing time of 7 μs per event for an average of 6 charged tracks is obtained. The momentum resolution is about 3% (4%) for p_t (p_z) at 1 GeV/c. Compared to the algorithm running on a CPU chip (single-core Intel Xeon E5520 at 2.26 GHz), an improvement of 3 orders of magnitude in processing time is obtained. The algorithm can handle severe overlapping of events, which is typical for interaction rates above 10 MHz.

  16. Research on machine learning framework based on random forest algorithm

    Science.gov (United States)

    Ren, Qiong; Cheng, Hui; Han, Hai

    2017-03-01

    With the continuous development of machine learning, industry and academia have released many machine learning frameworks based on distributed computing platforms, and these have been widely used. However, existing machine learning frameworks are limited by the machine learning algorithms themselves, for example by the choice of parameters, the interference of noise, and a high barrier to use. This paper introduces the research background of machine learning frameworks and, building on the random forest algorithm commonly used for classification, sets out the research objectives and content, proposes an improved adaptive random forest algorithm (referred to as ARF), and, on the basis of ARF, designs and implements a machine learning framework.

  17. Ecodriver. D23.2: Simulation and analysis document for on-line vehicle algorithms

    NARCIS (Netherlands)

    Seewald, P.; Orfila, O.; Saintpierre, G.

    2014-01-01

    This deliverable reports on the simulations and analysis of the on-line vehicle algorithms as well as the ecoDriver Android application. The simulation and field test results give an impression of how the algorithms will perform in the real world trials in SP3. Thus, it is possible to apply

  18. Effects of systematic phase errors on optimized quantum random-walk search algorithm

    International Nuclear Information System (INIS)

    Zhang Yu-Chao; Bao Wan-Su; Wang Xiang; Fu Xiang-Qun

    2015-01-01

    This study investigates the effects of systematic errors in phase inversions on the success rate and number of iterations in the optimized quantum random-walk search algorithm. Using the geometric description of this algorithm, a model of the algorithm with phase errors is established, and the relationship between the success rate of the algorithm, the database size, the number of iterations, and the phase error is determined. For a given database size, we obtain both the maximum success rate of the algorithm and the required number of iterations when phase errors are present in the algorithm. Analyses and numerical simulations show that the optimized quantum random-walk search algorithm is more robust against phase errors than Grover’s algorithm. (paper)

  19. Randomized Controlled Trial of Online Acceptance and Commitment Therapy for Fibromyalgia.

    Science.gov (United States)

    Simister, Heather D; Tkachuk, Gregg A; Shay, Barbara L; Vincent, Norah; Pear, Joseph J; Skrabek, Ryan Q

    2018-03-02

    In this study, 67 participants (95% female) with fibromyalgia (FM) were randomly assigned to an online acceptance and commitment therapy (online ACT) and treatment as usual (TAU; ACT + TAU) protocol or a TAU control condition. Online ACT + TAU participants were asked to complete 7 modules over an 8-week period. Assessments were completed at pre-treatment, post-treatment, and 3-month follow-up and included measures of FM impact (primary outcome), depression, pain, sleep, 6-minute walk, sit to stand, pain acceptance (primary process variable), mindfulness, cognitive fusion, valued living, kinesiophobia, and pain catastrophizing. The results indicated that online ACT + TAU participants significantly improved in FM impact relative to TAU. Improvements favoring online ACT + TAU were also found on measures of depression (P = .02), pain (P = .01), and kinesiophobia (P = .001). Although preliminary, this study highlights the potential for online ACT to be an efficacious, accessible, and cost-effective treatment for people with FM and other chronic pain conditions. Online ACT reduced FM impact relative to a TAU control condition in this randomized controlled trial. Reductions in FM impact were mediated by improvements in pain acceptance. Online ACT appears to be a promising intervention for FM. Copyright © 2018 The American Pain Society. Published by Elsevier Inc. All rights reserved.

  20. Subexponential lower bounds for randomized pivoting rules for the simplex algorithm

    DEFF Research Database (Denmark)

    Friedmann, Oliver; Hansen, Thomas Dueholm; Zwick, Uri

    2011-01-01

    The simplex algorithm is among the most widely used algorithms for solving linear programs in practice. With essentially all deterministic pivoting rules it is known, however, to require an exponential number of steps to solve some linear programs. No non-polynomial lower bounds were known, prior to this work, for randomized pivoting rules. We provide the first subexponential (i.e., of the form 2^Ω(n^α), for some α > 0) lower bounds for the two most natural, and most studied, randomized pivoting rules suggested to date. The first randomized pivoting rule considered is Random-Edge, which among all improving pivoting steps (or edges) from the current basic feasible solution (or vertex) chooses one uniformly at random. The second randomized pivoting rule considered is Random-Facet, a more complicated randomized pivoting rule suggested by Kalai and by Matousek, Sharir and Welzl. Our lower bound for the Random...

  1. An Event-Triggered Online Energy Management Algorithm of Smart Home: Lyapunov Optimization Approach

    Directory of Open Access Journals (Sweden)

    Wei Fan

    2016-05-01

    As an important component of the smart grid on the user side, a home energy management system is the core of optimal operation for a smart home. In this paper, the energy scheduling problem for a household equipped with photovoltaic devices was investigated. An online energy management algorithm based on event triggering was proposed. The Lyapunov optimization method was adopted to schedule controllable loads in the household. Without forecasting the related variables, real-time decisions were made based only on the current information. Energy could be rapidly regulated under fluctuations of distributed generation, electricity demand and market price. The event-triggering mechanism was adopted to trigger the execution of the online algorithm, so as to cut down the execution frequency and unnecessary calculation. Comprehensive simulation results show that the proposed algorithm can effectively decrease the electricity bills of users. Moreover, the required computational resources are small, which contributes to the low-cost energy management of a smart home.
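
    As a rough illustration of the drift-plus-penalty decision rule used in Lyapunov-optimization schedulers of this kind, the sketch below controls a single deferrable load against a price signal. The queue model, price range, power bound and the weight V are illustrative assumptions, not the authors' implementation.

      import random

      def drift_plus_penalty_step(Q, price, demand, p_max, V):
          # One slot of a drift-plus-penalty controller for a deferrable load.
          # Q: backlog of unserved demand, price: current electricity price,
          # demand: energy request arriving this slot, p_max: max power per slot,
          # V: weight trading off electricity cost against backlog growth.
          # Per-slot objective: minimize V*price*p - Q*p, which is linear in p,
          # so the optimum lies at a boundary of the feasible interval [0, p_max].
          p = p_max if Q > V * price else 0.0
          Q_next = max(Q + demand - p, 0.0)   # Lyapunov (virtual queue) update
          return p, Q_next

      # toy run with random prices and demands
      Q = 0.0
      for t in range(5):
          price, demand = random.uniform(0.1, 0.5), random.uniform(0.0, 2.0)
          p, Q = drift_plus_penalty_step(Q, price, demand, p_max=3.0, V=10.0)
          print(f"slot {t}: price={price:.2f} demand={demand:.2f} power={p:.1f} backlog={Q:.2f}")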

  2. Analysis of Online DBA Algorithm with Adaptive Sleep Cycle in WDM EPON

    Science.gov (United States)

    Pajčin, Bojan; Matavulj, Petar; Radivojević, Mirjana

    2018-05-01

    In order to manage Quality of Service (QoS) and energy efficiency in the optical access network, an online Dynamic Bandwidth Allocation (DBA) algorithm with an adaptive sleep cycle is presented. This DBA algorithm has the ability to allocate additional bandwidth to the end user within a single sleep cycle whose duration changes depending on the current buffer occupancy. The purpose of this DBA algorithm is to tune the duration of the sleep cycle depending on the network load in order to provide service to the end user without violating strict QoS requirements under all network operating conditions.

  3. Recruitment to Online Therapies for Depression: Pilot Cluster Randomized Controlled Trial

    OpenAIRE

    Jones, Ray B; Goldsmith, Lesley; Hewson, Paul; Williams, Christopher J

    2013-01-01

    Background Raising awareness of online cognitive behavioral therapy (CBT) could benefit many people with depression, but we do not know how purchasing online advertising compares to placing free links from relevant local websites in increasing uptake. Objective To pilot a cluster randomized controlled trial (RCT) comparing purchase of Google AdWords with placing free website links in raising awareness of online CBT resources for depression in order to better understand research design issues....

  4. Genetic Algorithm and Graph Theory Based Matrix Factorization Method for Online Friend Recommendation

    Directory of Open Access Journals (Sweden)

    Qu Li

    2014-01-01

    Online friend recommendation is a fast-developing topic in web mining. In this paper, we used SVD matrix factorization to model user and item feature vectors and used stochastic gradient descent to update the parameters and improve accuracy. To tackle the cold-start problem and data sparsity, we used a KNN model to influence the user feature vectors. At the same time, we used graph theory to partition communities with fairly low time and space complexity. Moreover, matrix factorization can combine online and offline recommendation. Experiments showed that the hybrid recommendation algorithm is able to recommend online friends with good accuracy.
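
    A minimal sketch of the SVD-style matrix factorization with stochastic gradient descent described above; the latent dimension, learning rate, regularization and toy interaction data are illustrative assumptions, and the KNN and community-partitioning components of the paper are not reproduced.

      import numpy as np

      def sgd_matrix_factorization(ratings, n_users, n_items, k=8, lr=0.01, reg=0.05, epochs=50):
          # SVD-style factorization trained by stochastic gradient descent.
          # ratings: list of (user, item, value) triples; k: latent dimension.
          rng = np.random.default_rng(0)
          P = 0.1 * rng.standard_normal((n_users, k))   # user feature vectors
          Q = 0.1 * rng.standard_normal((n_items, k))   # item feature vectors
          for _ in range(epochs):
              for u, i, r in ratings:
                  err = r - P[u] @ Q[i]                       # prediction error
                  P[u] += lr * (err * Q[i] - reg * P[u])      # gradient step, L2-regularized
                  Q[i] += lr * (err * P[u] - reg * Q[i])
          return P, Q

      # toy interaction data: (user, item, strength of the tie)
      data = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (2, 1, 1.0)]
      P, Q = sgd_matrix_factorization(data, n_users=3, n_items=2)
      print(P @ Q.T)   # predicted affinities, usable for ranking candidate friends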

  5. PREDICTIVE CONTROL OF A BATCH POLYMERIZATION SYSTEM USING A FEEDFORWARD NEURAL NETWORK WITH ONLINE ADAPTATION BY GENETIC ALGORITHM

    Directory of Open Access Journals (Sweden)

    A. Cancelier

    This study used a predictive controller based on an empirical nonlinear model comprising a three-layer feedforward neural network for temperature control of the suspension polymerization process. In addition to the offline training technique, an algorithm was also analyzed for online adaptation of its parameters. For the offline training, the network was statically trained and the genetic algorithm technique was used in combination with the least squares method. For online training, the network was trained on a recurring basis and only the genetic algorithm technique was used. In this case, only the weights and bias of the output layer neuron were modified, starting from the parameters obtained from the offline training. From the experimental results obtained in a pilot plant, good performance was observed for the proposed control system, with superior performance for the control algorithm with online adaptation of the model, particularly with respect to the offset present in the case of the fixed-parameter model.

  6. Generating log-normally distributed random numbers by using the Ziggurat algorithm

    International Nuclear Information System (INIS)

    Choi, Jong Soo

    2016-01-01

    Uncertainty analyses are usually based on the Monte Carlo method. Using an efficient random number generator (RNG) is a key element in the success of Monte Carlo simulations. Log-normally distributed variates are very typical in NPP PSAs. This paper proposes an approach to generate log-normally distributed variates based on the Ziggurat algorithm and evaluates the efficiency of the proposed Ziggurat RNG. The proposed RNG can be helpful to improve the uncertainty analysis of NPP PSAs. This paper focuses on evaluating the efficiency of the Ziggurat algorithm from an NPP PSA point of view. From this study, we can draw the following conclusions: the Ziggurat algorithm is an excellent random number generator for producing normally distributed variates, and it is computationally much faster than the most commonly used method, the Marsaglia polar method
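
    The underlying idea, generating standard normal variates with a ziggurat-type sampler and exponentiating them, can be sketched as follows. NumPy is assumed here only because recent versions of its Generator draw standard normals with a ziggurat-based method; this is not the author's implementation.

      import numpy as np

      def lognormal_via_ziggurat(mu, sigma, size, seed=0):
          # Draw log-normal variates by exponentiating ziggurat-generated normals.
          rng = np.random.default_rng(seed)
          z = rng.standard_normal(size)      # N(0, 1), ziggurat-based in recent NumPy
          return np.exp(mu + sigma * z)      # log-normal with parameters (mu, sigma)

      samples = lognormal_via_ziggurat(mu=0.0, sigma=0.5, size=100_000)
      print(samples.mean(), np.exp(0.5**2 / 2))   # empirical vs. theoretical mean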

  7. Online Identification of Photovoltaic Source Parameters by Using a Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Giovanni Petrone

    2017-12-01

    In this paper, an efficient method for the online identification of the photovoltaic single-diode model parameters is proposed. The combination of a genetic algorithm with explicit equations allows obtaining precise results without the direct measurement of the short-circuit current and open-circuit voltage that is typically used in offline identification methods. Since the proposed method requires only voltage and current values close to the maximum power point, it can be easily integrated into any photovoltaic system, and it operates online without compromising the power production. The proposed approach has been implemented and tested on an embedded system, and it exhibits good performance for monitoring/diagnosis applications.

  8. Decoding Algorithms for Random Linear Network Codes

    DEFF Research Database (Denmark)

    Heide, Janus; Pedersen, Morten Videbæk; Fitzek, Frank

    2011-01-01

    We consider the problem of efficient decoding of a random linear code over a finite field. In particular we are interested in the case where the code is random and relatively sparse, and we use the binary finite field as an example. The goal is to decode the data using fewer operations to potentially achieve a high coding throughput and reduce energy consumption. We use an on-the-fly version of the Gauss-Jordan algorithm as a baseline, and provide several simple improvements to reduce the number of operations needed to perform decoding. Our tests show that the improvements can reduce the number...
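
    A compact sketch of on-the-fly Gauss-Jordan decoding over the binary field GF(2), using Python integers as bit vectors; it is a baseline illustration only and does not include the paper's additional optimizations.

      def gf2_online_decoder(n):
          # Incremental Gauss-Jordan decoder over GF(2). Coded packets are
          # (coefficient_bits, payload_bits) pairs packed into Python ints; rows are
          # reduced as they arrive ("on the fly"), spreading the decoding cost out.
          pivots = {}                                  # pivot bit position -> (coeffs, payload)

          def add_packet(coeffs, payload):
              for pos, (pc, pp) in pivots.items():     # forward-eliminate known pivots
                  if coeffs >> pos & 1:
                      coeffs ^= pc
                      payload ^= pp
              if coeffs == 0:
                  return False                         # linearly dependent, discard
              pos = coeffs.bit_length() - 1            # new pivot position
              for k in list(pivots):                   # back-substitute into old rows
                  pc, pp = pivots[k]
                  if pc >> pos & 1:
                      pivots[k] = (pc ^ coeffs, pp ^ payload)
              pivots[pos] = (coeffs, payload)
              return len(pivots) == n                  # True once the block is decodable

          return add_packet

      add = gf2_online_decoder(n=2)
      p0, p1 = 0b1010, 0b0110                          # two source packets
      print(add(0b11, p0 ^ p1))                        # coded packet = p0 XOR p1 -> False
      print(add(0b01, p0))                             # p0 alone -> True, rank complete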

  9. Recommendation in evolving online networks

    Science.gov (United States)

    Hu, Xiao; Zeng, An; Shang, Ming-Sheng

    2016-02-01

    A recommender system is an effective tool to find the most relevant information for online users. By analyzing the historical selection records of users, a recommender system predicts the most likely future links in the user-item network and accordingly constructs a personalized recommendation list for each user. So far, the recommendation process has mostly been investigated in static user-item networks. In this paper, we propose a model which allows us to examine the performance of state-of-the-art recommendation algorithms in evolving networks. We find that the recommendation accuracy in general decreases with time if the evolution of the online network fully depends on the recommendation. Interestingly, some randomness in users' choices can significantly improve the long-term accuracy of the recommendation algorithm. When a hybrid recommendation algorithm is applied, we find that the optimal parameter gradually shifts towards the diversity-favoring recommendation algorithm, indicating that recommendation diversity is essential to keep a high long-term recommendation accuracy. Finally, we confirm our conclusions by studying the recommendation on networks with real evolution data.

  10. A Novel Algorithm of Quantum Random Walk in Server Traffic Control and Task Scheduling

    Directory of Open Access Journals (Sweden)

    Dong Yumin

    2014-01-01

    A quantum random walk optimization model and algorithm for network cluster server traffic control and task scheduling is proposed. In order to solve the problem of server load balancing, we research and discuss the distribution theory of the energy field in quantum mechanics and apply it to data clustering. We introduce the method of random walks and explain what a quantum random walk is. Here, we mainly study the standard model of the one-dimensional quantum random walk. For the data clustering problem in a high-dimensional space, we can decompose one m-dimensional quantum random walk into m one-dimensional quantum random walks. At the end of the paper, we compare the quantum random walk optimization method with GA (genetic algorithm), ACO (ant colony optimization), and SAA (simulated annealing algorithm). At the same time, we demonstrate its validity and rationality through analog and simulation experiments.
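
    For reference, the standard one-dimensional discrete-time quantum random walk mentioned above can be simulated directly; the sketch below uses a Hadamard coin and is a generic illustration, not the clustering algorithm of the paper.

      import numpy as np

      def hadamard_walk(steps):
          # One-dimensional discrete-time quantum random walk with a Hadamard coin.
          # State is psi[position, coin]; positions are offset by `steps`.
          n = 2 * steps + 1
          psi = np.zeros((n, 2), dtype=complex)
          psi[steps, 0] = 1.0                              # walker at the origin, coin "up"
          H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
          for _ in range(steps):
              psi = psi @ H.T                              # coin toss at every position
              shifted = np.zeros_like(psi)
              shifted[1:, 0] = psi[:-1, 0]                 # coin 0 component moves right
              shifted[:-1, 1] = psi[1:, 1]                 # coin 1 component moves left
              psi = shifted
          return (np.abs(psi) ** 2).sum(axis=1)            # position probabilities

      probs = hadamard_walk(steps=50)
      print(round(probs.sum(), 6))       # 1.0: probability is conserved
      print(int(probs.argmax()) - 50)    # peaks away from the origin (ballistic spread)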

  11. Semi-online preemptive scheduling: one algorithm for all variants

    Czech Academy of Sciences Publication Activity Database

    Ebenlendr, Tomáš; Sgall, J.

    2011-01-01

    Vol. 48, No. 3 (2011), pp. 577-613. ISSN 1432-4350. [26th International Symposium on Theoretical Aspects of Computer Science. Freiburg, 26.02.2009-28.02.2009] R&D Projects: GA AV ČR IAA100190902; GA MŠk(CZ) 1M0545. Institutional research plan: CEZ:AV0Z10190503. Keywords: online algorithms * scheduling * preemption * linear program. Subject RIV: BA - General Mathematics. Impact factor: 0.442, year: 2011. http://www.springerlink.com/content/k66u6tv1l7731654/

  12. Control and monitoring of On-line Trigger Algorithms using gaucho

    CERN Document Server

    Van Herwijnen, Eric

    2005-01-01

    In the LHCb experiment, the trigger decisions are computed by Gaudi (the LHCb software framework) algorithms running on an event filter farm of around 2000 PCs. The control and monitoring of these algorithms has to be integrated in the overall experiment control system (ECS). To enable and facilitate this integration Gaucho, the GAUdi Component Helping Online, was developed. Gaucho consists of three parts: a C++ package integrated with Gaudi, the communications package DIM, and a set of PVSS panels and libraries. PVSS is a commercial SCADA system chosen as the toolkit and framework for the LHCb controls system. The C++ package implements the monitor service interface (IMonitorSvc) following the Gaudi specifications, with methods to declare variables and histograms for monitoring. Algorithm writers use them to indicate which quantities should be monitored. Since the interface resides in the GaudiKernel the code does not need changing if the monitoring services are not present. The Gaudi main job implements a state machine...

  13. An Online Tilt Estimation and Compensation Algorithm for a Small Satellite Camera

    Science.gov (United States)

    Lee, Da-Hyun; Hwang, Jai-hyuk

    2018-04-01

    In the case of a satellite camera designed to execute an Earth observation mission, even after a pre-launch precision alignment process has been carried out, misalignment will occur due to external factors during the launch and in the operating environment. In particular, for high-resolution satellite cameras, which require submicron accuracy for alignment between optical components, misalignment is a major cause of image quality degradation. To compensate for this, most high-resolution satellite cameras undergo a precise realignment process called refocusing before and during the operation process. However, conventional Earth observation satellites only execute refocusing upon de-space. Thus, in this paper, an online tilt estimation and compensation algorithm is proposed that can be utilized after de-space correction is executed. Although the sensitivity of the optical performance degradation due to misalignment is highest for de-space, the MTF can be additionally increased by correcting tilt after refocusing. The algorithm proposed in this research can be used to estimate the amount of tilt that has occurred by taking star images, and it can also be used to carry out automatic tilt corrections by employing a compensation mechanism that gives angular motion to the secondary mirror. Crucially, this algorithm is developed using an online processing system so that it can operate without communication with the ground.

  14. Effects of a random noisy oracle on search algorithm complexity

    International Nuclear Information System (INIS)

    Shenvi, Neil; Brown, Kenneth R.; Whaley, K. Birgitta

    2003-01-01

    Grover's algorithm provides a quadratic speed-up over classical algorithms for unstructured database or library searches. This paper examines the robustness of Grover's search algorithm to a random phase error in the oracle and analyzes the complexity of the search process as a function of the scaling of the oracle error with database or library size. Both the discrete- and continuous-time implementations of the search algorithm are investigated. It is shown that unless the oracle phase error scales as O(N^(-1/4)), neither the discrete- nor the continuous-time implementation of Grover's algorithm is scalably robust to this error in the absence of error correction.

  15. PREDICTIVE CONTROL OF A BATCH POLYMERIZATION SYSTEM USING A FEEDFORWARD NEURAL NETWORK WITH ONLINE ADAPTATION BY GENETIC ALGORITHM

    OpenAIRE

    Cancelier, A.; Claumann, C. A.; Bolzan, A.; Machado, R. A. F.

    2016-01-01

    This study used a predictive controller based on an empirical nonlinear model comprising a three-layer feedforward neural network for temperature control of the suspension polymerization process. In addition to the offline training technique, an algorithm was also analyzed for online adaptation of its parameters. For the offline training, the network was statically trained and the genetic algorithm technique was used in combination with the least squares method. For online training, ...

  16. Online distribution channel increases article usage on Mendeley: a randomized controlled trial.

    Science.gov (United States)

    Kudlow, Paul; Cockerill, Matthew; Toccalino, Danielle; Dziadyk, Devin Bissky; Rutledge, Alan; Shachak, Aviv; McIntyre, Roger S; Ravindran, Arun; Eysenbach, Gunther

    2017-01-01

    Prior research shows that article reader counts (i.e., saves) on the online reference manager Mendeley correlate with future citations. There are currently no evidence-based distribution strategies that have been shown to increase article saves on Mendeley. We conducted a 4-week randomized controlled trial to examine how promotion of article links in a novel online cross-publisher distribution channel (TrendMD) affects article saves on Mendeley. Four hundred articles published in the Journal of Medical Internet Research were randomized to either the TrendMD arm (n = 200) or the control arm (n = 200) of the study. Our primary outcome compares the 4-week mean Mendeley saves of articles randomized to TrendMD versus control. Articles randomized to TrendMD showed a 77% increase in article saves on Mendeley relative to control. The difference in mean Mendeley saves for TrendMD articles versus control was 2.7, 95% CI (2.63, 2.77), and statistically significant (p < 0.01). There was a positive correlation between pageviews driven by TrendMD and article saves on Mendeley (Spearman's rho r = 0.60). This is the first randomized controlled trial to show how an online cross-publisher distribution channel (TrendMD) enhances article saves on Mendeley. While replication and further study are needed, these data suggest that cross-publisher article recommendations via TrendMD may enhance citations of scholarly articles.

  17. Context-Aware Online Commercial Intention Detection

    Science.gov (United States)

    Hu, Derek Hao; Shen, Dou; Sun, Jian-Tao; Yang, Qiang; Chen, Zheng

    With more and more commercial activities moving onto the Internet, people tend to purchase what they need through the Internet or conduct some online research before the actual transactions happen. For many Web users, their online commercial activities start from submitting a search query to search engines. Just like common Web search queries, queries with commercial intention are usually very short. Recognizing queries with commercial intention against common queries will help search engines provide proper search results and advertisements, help Web users obtain the right information they desire, and help advertisers benefit from the potential transactions. However, the intentions behind a query vary a lot for users with different backgrounds and interests. The intentions can even be different for the same user when the query is issued in different contexts. In this paper, we present a new algorithm framework based on skip-chain conditional random fields (SCCRF) for automatically classifying Web queries according to context-based online commercial intention. We analyze our algorithm's performance both theoretically and empirically. Extensive experiments on several real search engine log datasets show that our algorithm improves the F1 score by more than 10% over previous algorithms for commercial intention detection.

  18. Ecodriver. D23.1: Report on test scenarios for validation of on-line vehicle algorithms

    NARCIS (Netherlands)

    Seewald, P.; Ivens, T.W.T.; Spronkmans, S.

    2014-01-01

    This deliverable provides a description of test scenarios that will be used for validation of WP22’s on-line vehicle algorithms. These algorithms consist of the two modules VE³ (Vehicle Energy and Environment Estimator) and RSG (Reference Signal Generator) and will be tested using the

  19. Flow, transport and diffusion in random geometries I: a MLMC algorithm

    KAUST Repository

    Canuto, Claudio; Hoel, Haakon; Icardi, Matteo; Quadrio, Nathan; Tempone, Raul

    2015-01-01

    We propose a general-purpose algorithm and computational code for the solution of Partial Differential Equations (PDEs) on random geometry and with random parameters. We make use of the key idea of MLMC, based on different discretization levels, extending it in a more general context

  20. Online Tracking Algorithms on GPUs for the P̅ANDA Experiment at FAIR

    Science.gov (United States)

    Bianchi, L.; Herten, A.; Ritman, J.; Stockmanns, T.; Adinetz, A.; Kraus, J.; Pleiter, D.

    2015-12-01

    P̅ANDA is a future hadron and nuclear physics experiment at the FAIR facility under construction in Darmstadt, Germany. In contrast to the majority of current experiments, P̅ANDA's strategy for data acquisition is based on event reconstruction from free-streaming data, performed in real time entirely by software algorithms using global detector information. This paper reports the status of the development of algorithms for the reconstruction of charged particle tracks, optimized for online data processing applications, using General-Purpose Graphics Processing Units (GPUs). Two algorithms for track finding, the Triplet Finder and the Circle Hough, are described, and details of their GPU implementations are highlighted. Average track reconstruction times of less than 100 ns are obtained running the Triplet Finder on state-of-the-art GPU cards. In addition, a proof-of-concept system for the dispatch of data to tracking algorithms using Message Queues is presented.

  1. Random neural Q-learning for obstacle avoidance of a mobile robot in unknown environments

    Directory of Open Access Journals (Sweden)

    Jing Yang

    2016-07-01

    The article presents a random neural Q-learning strategy for the obstacle avoidance problem of an autonomous mobile robot in unknown environments. In the proposed strategy, two independent modules, namely, avoidance without considering the target and goal-seeking without considering obstacles, are first trained using the proposed random neural Q-learning algorithm to obtain their best control policies. Then, the two trained modules are combined based on a switching function to realize obstacle avoidance in unknown environments. For the proposed random neural Q-learning algorithm, a single-hidden-layer feedforward network is used to approximate the Q-function and estimate the Q-value. The parameters of the single-hidden-layer feedforward network are modified using the recently proposed neural algorithm named the online sequential version of the extreme learning machine, where the parameters of the hidden nodes are assigned randomly and the sample data can arrive one by one. However, different from the original online sequential extreme learning machine algorithm, the initial output weights are estimated subject to a quadratic inequality constraint to improve the convergence speed. Finally, the simulation results demonstrate that the proposed random neural Q-learning strategy can successfully solve the obstacle avoidance problem. Also, higher learning efficiency and better generalization ability are achieved by the proposed random neural Q-learning algorithm compared with Q-learning based on the back-propagation method.

  2. Human tracking in thermal images using adaptive particle filters with online random forest learning

    Science.gov (United States)

    Ko, Byoung Chul; Kwak, Joon-Young; Nam, Jae-Yeal

    2013-11-01

    This paper presents a fast and robust human tracking method for use with a moving long-wave infrared thermal camera under poor illumination with the existence of shadows and cluttered backgrounds. To improve the human tracking performance while minimizing the computation time, this study proposes online learning of classifiers based on particle filters and a combination of a local intensity distribution (LID) with oriented center-symmetric local binary patterns (OCS-LBP). Specifically, we design a real-time random forest (RF), which is an ensemble of decision trees for confidence estimation, and the confidences of the RF are converted into a likelihood function of the target state. First, the target model is selected by the user and particles are sampled. Then, RFs are generated using the positive and negative examples with LID and OCS-LBP features by online learning. The learned RF classifiers are used to detect the most likely target position in the subsequent frame in the next stage. Then, the RFs are learned again by means of fast retraining with the tracked object and background appearance in the new frame. The proposed algorithm is successfully applied to various thermal videos as tests and its tracking performance is better than that of other methods.

  3. Comparison of feature and classifier algorithms for online automatic sleep staging based on a single EEG signal

    NARCIS (Netherlands)

    Radha, M.; Garcia Molina, G.; Poel, M.; Tononi, G.

    2014-01-01

    Automatic sleep staging on an online basis has recently emerged as a research topic motivated by fundamental sleep research. The aim of this paper is to find optimal signal processing methods and machine learning algorithms to achieve online sleep staging on the basis of a single EEG signal. The

  4. Algorithmic randomness, physical entropy, measurements, and the second law

    International Nuclear Information System (INIS)

    Zurek, W.H.

    1989-01-01

    Algorithmic information content is equal to the size -- in the number of bits -- of the shortest program for a universal Turing machine which can reproduce a state of a physical system. In contrast to the statistical Boltzmann-Gibbs-Shannon entropy, which measures ignorance, the algorithmic information content is a measure of the available information. It is defined without recourse to probabilities and can be regarded as a measure of randomness of a definite microstate. I suggest that the physical entropy S -- that is, the quantity which determines the amount of work ΔW which can be extracted in the cyclic isothermal expansion process through the equation ΔW = k_B TΔS -- is a sum of two contributions: the missing information measured by the usual statistical entropy and the known randomness measured by the algorithmic information content. The sum of these two contributions is a ''constant of motion'' in the process of a dissipationless measurement on an equilibrium ensemble. This conservation under a measurement, which can be traced back to the noiseless coding theorem of Shannon, is necessary to rule out the existence of a successful Maxwell's demon. 17 refs., 3 figs

  5. Simulation of quantum systems with random walks: A new algorithm for charged systems

    International Nuclear Information System (INIS)

    Ceperley, D.

    1983-01-01

    Random walks with branching have been used to calculate exact properties of the ground state of quantum many-body systems. In this paper, a more general Green's function identity is derived which relates the potential energy, a trial wavefunction, and a trial density matrix to the rules of a branched random walk. It is shown that an efficient algorithm requires a good trial wavefunction, a good trial density matrix, and a good sampling of this density matrix. An accurate density matrix is constructed for Coulomb systems using the path integral formula. The random walks from this new algorithm diffuse through phase space an order of magnitude faster than in the previous Green's Function Monte Carlo method. In contrast to the simple diffusion Monte Carlo algorithm, it is an exact method. Representative results are presented for several molecules.

  6. An Algorithm for Online Inertia Identification and Load Torque Observation via Adaptive Kalman Observer-Recursive Least Squares

    Directory of Open Access Journals (Sweden)

    Ming Yang

    2018-03-01

    In this paper, an online parameter identification algorithm to iteratively compute the numerical values of inertia and load torque is proposed. Since inertia and load torque are strongly coupled variables due to the degenerate-rank problem, it is hard to estimate relatively accurate values for them in cases such as when load torque variation is present or when relatively accurate prior knowledge of inertia cannot be obtained. This paper eliminates this problem and realizes ideal online inertia identification regardless of load condition and initial error. The algorithm in this paper integrates a full-order Kalman observer and recursive least squares, and introduces adaptive controllers to enhance robustness. It has a better performance when iteratively computing load torque and moment of inertia. Theoretical sensitivity analysis of the proposed algorithm is conducted. Compared to traditional methods, the validity of the proposed algorithm is proved by simulation and experimental results.
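
    The regression structure behind such estimators can be illustrated with plain recursive least squares on a discretized motion equation J·dω/dt = T_e − T_L; the sketch below is a generic RLS fit under an assumed first-order model, not the paper's adaptive Kalman-observer/RLS combination.

      import numpy as np

      def rls_inertia_estimator(Ts, lam=0.99):
          # Fit the discretized motion equation  dw = (Ts/J)*(Te - TL):
          # dw = a*Te + b  with  a = Ts/J  and  b = -a*TL.
          theta = np.zeros(2)                 # parameters [a, b]
          P = np.eye(2) * 1e3                 # large initial covariance
          def update(Te, dw):
              nonlocal theta, P
              phi = np.array([Te, 1.0])                   # regressor
              K = P @ phi / (lam + phi @ P @ phi)         # RLS gain
              theta = theta + K * (dw - phi @ theta)      # parameter correction
              P = (P - np.outer(K, phi) @ P) / lam        # covariance update, forgetting lam
              a, b = theta
              J = Ts / a if a else float("inf")
              TL = -b / a if a else 0.0
              return J, TL
          return update

      # toy data: true J = 0.02 kg*m^2, TL = 1.5 N*m, sample time Ts = 1 ms
      Ts, J_true, TL_true = 1e-3, 0.02, 1.5
      update = rls_inertia_estimator(Ts)
      rng = np.random.default_rng(1)
      for _ in range(500):
          Te = rng.uniform(0.0, 5.0)                                 # measured torque
          dw = Ts / J_true * (Te - TL_true) + rng.normal(0, 1e-4)    # measured speed increment
          J_hat, TL_hat = update(Te, dw)
      print(J_hat, TL_hat)                                           # approaches 0.02 and 1.5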

  7. Neural network based online simultaneous policy update algorithm for solving the HJI equation in nonlinear H∞ control.

    Science.gov (United States)

    Wu, Huai-Ning; Luo, Biao

    2012-12-01

    It is well known that the nonlinear H∞ state feedback control problem relies on the solution of the Hamilton-Jacobi-Isaacs (HJI) equation, which is a nonlinear partial differential equation that has proven to be impossible to solve analytically. In this paper, a neural network (NN)-based online simultaneous policy update algorithm (SPUA) is developed to solve the HJI equation, in which knowledge of internal system dynamics is not required. First, we propose an online SPUA which can be viewed as a reinforcement learning technique for two players to learn their optimal actions in an unknown environment. The proposed online SPUA updates control and disturbance policies simultaneously; thus, only one iterative loop is needed. Second, the convergence of the online SPUA is established by proving that it is mathematically equivalent to Newton's method for finding a fixed point in a Banach space. Third, we develop an actor-critic structure for the implementation of the online SPUA, in which only one critic NN is needed for approximating the cost function, and a least-square method is given for estimating the NN weight parameters. Finally, simulation studies are provided to demonstrate the effectiveness of the proposed algorithm.
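
    For reference, one common statement of the HJI equation that such schemes solve, for dynamics ẋ = f(x) + g(x)u + k(x)w and cost ∫ (Q(x) + uᵀRu − γ²‖w‖²) dt, is given below; this generic form is an assumption and the paper's exact formulation may differ.

      \[
      0 = \nabla V^{\mathsf T} f(x) + Q(x)
        - \tfrac{1}{4}\,\nabla V^{\mathsf T} g(x) R^{-1} g(x)^{\mathsf T} \nabla V
        + \tfrac{1}{4\gamma^{2}}\,\nabla V^{\mathsf T} k(x) k(x)^{\mathsf T} \nabla V,
      \]
      \[
      u^{*}(x) = -\tfrac{1}{2} R^{-1} g(x)^{\mathsf T} \nabla V, \qquad
      w^{*}(x) = \tfrac{1}{2\gamma^{2}} k(x)^{\mathsf T} \nabla V .
      \]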

  8. A universal algorithm to generate pseudo-random numbers based on uniform mapping as homeomorphism

    International Nuclear Information System (INIS)

    Fu-Lai, Wang

    2010-01-01

    A specific uniform map is constructed as a homeomorphism mapping chaotic time series into [0,1] to obtain sequences of standard uniform distribution. With the uniform map, a chaotic orbit and the sequence orbit obtained are topologically equivalent to each other, so the map can preserve the most dynamic properties of chaotic systems, such as permutation entropy. Based on the uniform map, a universal algorithm to generate pseudo-random numbers is proposed and the pseudo-random series is tested to follow the standard 0–1 random distribution both theoretically and experimentally. The algorithm is not complex, does not impose high requirements on computer hardware, and thus its computation speed is fast. The method not only extends the parameter spaces but also avoids the drawback of a small function space caused by constraints on chaotic maps used to generate pseudo-random numbers. The algorithm can be applied to any chaotic system and can produce pseudo-random sequences of high quality, thus it can be a good universal pseudo-random number generator. (general)
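
    The generate-then-map idea can be illustrated with the logistic map at r = 4, for which the conjugacy u = (2/π)·arcsin(√x) is a well-known homeomorphism onto the uniform distribution on [0,1]; the paper's specific uniform map is more general and is not reproduced here.

      import numpy as np

      def chaotic_uniform_sequence(x0=0.3141592, n=100_000):
          # Iterate the logistic map x -> 4x(1-x), then apply the homeomorphism
          # u = (2/pi)*arcsin(sqrt(x)) that sends its invariant density to the
          # uniform density on [0, 1].
          x = x0
          out = np.empty(n)
          for i in range(n):
              x = 4.0 * x * (1.0 - x)                          # chaotic orbit
              out[i] = 2.0 / np.pi * np.arcsin(np.sqrt(x))     # uniform map
          return out

      u = chaotic_uniform_sequence()
      print(u.mean(), u.var())   # close to 0.5 and 1/12 for a standard uniform sequence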

  9. A universal algorithm to generate pseudo-random numbers based on uniform mapping as homeomorphism

    Science.gov (United States)

    Wang, Fu-Lai

    2010-09-01

    A specific uniform map is constructed as a homeomorphism mapping chaotic time series into [0,1] to obtain sequences of standard uniform distribution. With the uniform map, a chaotic orbit and the sequence orbit obtained are topologically equivalent to each other, so the map can preserve the most dynamic properties of chaotic systems, such as permutation entropy. Based on the uniform map, a universal algorithm to generate pseudo-random numbers is proposed and the pseudo-random series is tested to follow the standard 0-1 random distribution both theoretically and experimentally. The algorithm is not complex, does not impose high requirements on computer hardware, and thus its computation speed is fast. The method not only extends the parameter spaces but also avoids the drawback of a small function space caused by constraints on chaotic maps used to generate pseudo-random numbers. The algorithm can be applied to any chaotic system and can produce pseudo-random sequences of high quality, thus it can be a good universal pseudo-random number generator.

  10. Image compression-encryption algorithms by combining hyper-chaotic system with discrete fractional random transform

    Science.gov (United States)

    Gong, Lihua; Deng, Chengzhi; Pan, Shumin; Zhou, Nanrun

    2018-07-01

    Based on a hyper-chaotic system and the discrete fractional random transform, an image compression-encryption algorithm is designed. The original image is first transformed into a spectrum by the discrete cosine transform, and the resulting spectrum is compressed according to the method of spectrum cutting. The random matrix of the discrete fractional random transform is controlled by a chaotic sequence originating from the high-dimensional hyper-chaotic system. Then the compressed spectrum is encrypted by the discrete fractional random transform. The order of the DFrRT and the parameters of the hyper-chaotic system are the main keys of this image compression and encryption algorithm. The proposed algorithm can compress and encrypt image signals, and in particular can encrypt multiple images at once. To achieve the compression of multiple images, the images are transformed into spectra by the discrete cosine transform, and then the spectra are incised and spliced into a composite spectrum by Zigzag scanning. Simulation results demonstrate that the proposed image compression and encryption algorithm is of high security and good compression performance.

  11. Learning and anticipation in online dynamic optimization with evolutionary algorithms: The stochastic case

    NARCIS (Netherlands)

    P.A.N. Bosman (Peter); J.A. La Poutré (Han); D. Thierens (Dirk)

    2007-01-01

    The focus of this paper is on how to design evolutionary algorithms (EAs) for solving stochastic dynamic optimization problems online, i.e. as time goes by. For a proper design, the EA must not only be capable of tracking shifting optima, it must also take into account the future

  12. On-line statistical processing of radiation detector pulse trains with time-varying count rates

    International Nuclear Information System (INIS)

    Apostolopoulos, G.

    2008-01-01

    Statistical analysis is of primary importance for the correct interpretation of nuclear measurements, due to the inherent random nature of radioactive decay processes. This paper discusses the application of statistical signal processing techniques to the random pulse trains generated by radiation detectors. The aims of the presented algorithms are: (i) continuous, on-line estimation of the underlying time-varying count rate θ(t) and its first-order derivative dθ/dt; (ii) detection of abrupt changes in both of these quantities and estimation of their new value after the change point. Maximum-likelihood techniques, based on the Poisson probability distribution, are employed for the on-line estimation of θ and dθ/dt. Detection of abrupt changes is achieved on the basis of the generalized likelihood ratio statistical test. The properties of the proposed algorithms are evaluated by extensive simulations and possible applications for on-line radiation monitoring are discussed
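
    A minimal sketch of the Poisson-likelihood machinery involved: the maximum-likelihood rate over a window is simply counts divided by live time, and a generalized likelihood ratio compares a reference window against the most recent one. The window lengths, counts and detection threshold below are illustrative assumptions.

      import math

      def poisson_glr(n_ref, t_ref, n_cur, t_cur):
          # Generalized likelihood ratio statistic for an abrupt change in Poisson
          # count rate between a reference window and the current window
          # (n = counts, t = live time; the ML rate estimate is simply n/t).
          lam0 = (n_ref + n_cur) / (t_ref + t_cur)     # common rate under "no change"
          def ll(n, t, lam):                           # Poisson log-likelihood (up to n!)
              return n * math.log(lam) - lam * t if lam > 0 else -lam * t
          return 2.0 * (ll(n_ref, t_ref, n_ref / t_ref) + ll(n_cur, t_cur, n_cur / t_cur)
                        - ll(n_ref, t_ref, lam0) - ll(n_cur, t_cur, lam0))

      # compare the last second of counts against the preceding ten seconds
      g = poisson_glr(n_ref=1000, t_ref=10.0, n_cur=150, t_cur=1.0)   # ~100 cps vs ~150 cps
      threshold = 10.0                                 # illustrative decision threshold
      print(g, "change detected" if g > threshold else "no change")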

  13. Online Tracking Algorithms on GPUs for the P̅ANDA Experiment at FAIR

    International Nuclear Information System (INIS)

    Bianchi, L; Herten, A; Ritman, J; Stockmanns, T; Adinetz, A.; Pleiter, D; Kraus, J

    2015-01-01

    P̅ANDA is a future hadron and nuclear physics experiment at the FAIR facility under construction in Darmstadt, Germany. In contrast to the majority of current experiments, P̅ANDA's strategy for data acquisition is based on event reconstruction from free-streaming data, performed in real time entirely by software algorithms using global detector information. This paper reports the status of the development of algorithms for the reconstruction of charged particle tracks, optimized for online data processing applications, using General-Purpose Graphics Processing Units (GPUs). Two algorithms for track finding, the Triplet Finder and the Circle Hough, are described, and details of their GPU implementations are highlighted. Average track reconstruction times of less than 100 ns are obtained running the Triplet Finder on state-of-the-art GPU cards. In addition, a proof-of-concept system for the dispatch of data to tracking algorithms using Message Queues is presented. (paper)

  14. Flow, transport and diffusion in random geometries I: a MLMC algorithm

    KAUST Repository

    Canuto, Claudio

    2015-01-07

    Multilevel Monte Carlo (MLMC) is an efficient and flexible solution for the propagation of uncertainties in complex models, where an explicit parametrization of the input randomness is not available or too expensive. We propose a general-purpose algorithm and computational code for the solution of Partial Differential Equations (PDEs) on random geometry and with random parameters. We make use of the key idea of MLMC, based on different discretization levels, extending it in a more general context, making use of a hierarchy of physical resolution scales, solvers, models and other numerical/geometrical discretization parameters. Modifications of the classical MLMC estimators are proposed to further reduce variance in cases where analytical convergence rates and asymptotic regimes are not available. Spheres, ellipsoids and general convex-shaped grains are placed randomly in the domain with different placing/packing algorithms and the effective properties of the heterogeneous medium are computed. These are, for example, effective diffusivities, conductivities, and reaction rates. The implementation of the Monte Carlo estimators, the statistical samples and each single solver is done efficiently in parallel.
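
    A bare-bones sketch of the MLMC telescoping estimator described above; the sampler interface and the toy problem are illustrative assumptions, and none of the paper's geometry handling or variance-reduction modifications are included.

      import numpy as np

      def mlmc_estimate(sampler, n_samples, seed=0):
          # Multilevel Monte Carlo estimator based on the telescoping sum
          #   E[P_L] = E[P_0] + sum_{l=1..L} E[P_l - P_{l-1}],
          # with many samples on coarse levels and few on fine ones.
          # sampler(level, rng) must return (P_l, P_{l-1}) evaluated on the SAME
          # random input, with P_{-1} taken as 0 at level 0.
          rng = np.random.default_rng(seed)
          estimate = 0.0
          for level, n in enumerate(n_samples):
              diffs = [f - c for f, c in (sampler(level, rng) for _ in range(n))]
              estimate += float(np.mean(diffs))
          return estimate

      # toy problem: estimate E[X^2] for X ~ N(0,1); "finer" levels add less model noise
      def toy_sampler(level, rng):
          x = rng.standard_normal()
          fine = x**2 + rng.standard_normal() / 2**level
          coarse = 0.0 if level == 0 else x**2 + rng.standard_normal() / 2**(level - 1)
          return fine, coarse

      print(mlmc_estimate(toy_sampler, n_samples=[4000, 1000, 250]))   # close to 1.0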

  15. An Efficient Randomized Algorithm for Real-Time Process Scheduling in PicOS Operating System

    Science.gov (United States)

    Helmy*, Tarek; Fatai, Anifowose; Sallam, El-Sayed

    PicOS is an event-driven operating environment designed for use with embedded networked sensors. More specifically, it is designed to support the concurrency in intensive operations required by networked sensors with minimal hardware requirements. Existing process scheduling algorithms of PicOS, a commercial tiny, low-footprint, real-time operating system, have their associated drawbacks. An efficient, alternative algorithm, based on a randomized selection policy, has been proposed, demonstrated, confirmed for efficiency and fairness on average, and has been recommended for implementation in PicOS. Simulations were carried out and performance measures such as Average Waiting Time (AWT) and Average Turn-around Time (ATT) were used to assess the efficiency of the proposed randomized version over the existing ones. The results show that the randomized algorithm is the best and most attractive for implementation in PicOS, since it is the fairest and has the lowest AWT and ATT on average over the other non-preemptive scheduling algorithms implemented in this paper.

  16. Predicting Coastal Flood Severity using Random Forest Algorithm

    Science.gov (United States)

    Sadler, J. M.; Goodall, J. L.; Morsy, M. M.; Spencer, K.

    2017-12-01

    Coastal floods have become more common recently and are predicted to further increase in frequency and severity due to sea level rise. Predicting floods in coastal cities can be difficult due to the number of environmental and geographic factors which can influence flooding events. Built stormwater infrastructure and irregular urban landscapes add further complexity. This paper demonstrates the use of machine learning algorithms in predicting street flood occurrence in an urban coastal setting. The model is trained and evaluated using data from Norfolk, Virginia USA from September 2010 - October 2016. Rainfall, tide levels, water table levels, and wind conditions are used as input variables. Street flooding reports made by city workers after named and unnamed storm events, ranging from 1-159 reports per event, are the model output. Results show that Random Forest provides predictive power in estimating the number of flood occurrences given a set of environmental conditions with an out-of-bag root mean squared error of 4.3 flood reports and a mean absolute error of 0.82 flood reports. The Random Forest algorithm performed much better than Poisson regression. From the Random Forest model, total daily rainfall was by far the most important factor in flood occurrence prediction, followed by daily low tide and daily higher high tide. The model demonstrated here could be used to predict flood severity based on forecast rainfall and tide conditions and could be further enhanced using more complete street flooding data for model training.
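
    A hedged sketch of how such a model might be set up with scikit-learn's RandomForestRegressor; the synthetic features and targets below merely mimic the predictors named in the abstract (rainfall, tides, water table, wind) and are not the Norfolk data.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import train_test_split

      # Hypothetical feature matrix mirroring the predictors named in the abstract:
      # [daily rainfall, daily low tide, higher high tide, water table level, wind speed]
      rng = np.random.default_rng(42)
      X = rng.random((500, 5))
      y = np.round(np.maximum(0.0, 8 * X[:, 0] + 2 * X[:, 2] - 3 + rng.normal(0, 0.5, 500)))

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      model = RandomForestRegressor(n_estimators=300, oob_score=True, random_state=0)
      model.fit(X_tr, y_tr)

      print("OOB R^2:", model.oob_score_)                        # out-of-bag generalization check
      print("importances:", model.feature_importances_.round(2)) # rainfall should dominate here
      print("predicted flood reports:", model.predict(X_te[:3]))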

  17. An improved label propagation algorithm based on node importance and random walk for community detection

    Science.gov (United States)

    Ma, Tianren; Xia, Zhengyou

    2017-05-01

    Currently, with the rapid development of information technology, electronic media for social communication are becoming more and more popular. Discovery of communities is a very effective way to understand the properties of complex networks. However, traditional community detection algorithms consider only the structural characteristics of a social organization, so much information about nodes and edges is wasted. Meanwhile, these algorithms do not consider each node on its own merits. The label propagation algorithm (LPA) is a near-linear-time algorithm which aims to find communities in a network, and it attracts many scholars owing to its high efficiency. In recent years, many improved algorithms based on LPA have been put forward. In this paper, an improved LPA based on random walk and node importance (NILPA) is proposed. Firstly, a list of node importance is obtained through calculation, and the nodes in the network are sorted in descending order of importance. On the basis of random walk, a matrix is constructed to measure the similarity of nodes, which avoids the random choice in LPA. Secondly, a new metric, IAS (importance and similarity), is calculated from node importance and the similarity matrix; it is used to avoid the random selection in the original LPA and improve the algorithm's stability. Finally, a test on real-world and synthetic networks is given. The results show that this algorithm has better performance than existing methods in finding community structure.
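
    For context, the baseline label propagation algorithm that the paper improves upon can be written in a few lines; the sketch below is the classic asynchronous LPA and does not include the node-importance or random-walk similarity refinements (the IAS metric) proposed in the paper.

      import random
      from collections import Counter

      def label_propagation(adj, max_iter=100, seed=0):
          # Baseline LPA: every node repeatedly adopts the label most common among
          # its neighbours, with ties broken at random; adj maps node -> neighbours.
          rng = random.Random(seed)
          labels = {v: v for v in adj}                # one community per node to start
          nodes = list(adj)
          for _ in range(max_iter):
              rng.shuffle(nodes)                      # asynchronous, random update order
              changed = False
              for v in nodes:
                  counts = Counter(labels[u] for u in adj[v])
                  if not counts:
                      continue
                  best = max(counts.values())
                  choice = rng.choice([lab for lab, c in counts.items() if c == best])
                  if choice != labels[v]:
                      labels[v], changed = choice, True
              if not changed:
                  break
          return labels

      # two triangles joined by a single edge -> two communities expected
      g = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
      print(label_propagation(g))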

  18. Randomized Algorithms for Analysis and Control of Uncertain Systems With Applications

    CERN Document Server

    Tempo, Roberto; Dabbene, Fabrizio

    2013-01-01

    The presence of uncertainty in a system description has always been a critical issue in control. The main objective of Randomized Algorithms for Analysis and Control of Uncertain Systems, with Applications (Second Edition) is to introduce the reader to the fundamentals of probabilistic methods in the analysis and design of systems subject to deterministic and stochastic uncertainty. The approach propounded by this text guarantees a reduction in the computational complexity of classical control algorithms and in the conservativeness of standard robust control techniques. The second edition has been thoroughly updated to reflect recent research and new applications, with chapters on statistical learning theory, sequential methods for control and the scenario approach being completely rewritten. Features: a self-contained treatment explaining Monte Carlo and Las Vegas randomized algorithms from their genesis in the principles of probability theory to their use for system analysis; ...

  19. Response-only modal identification using random decrement algorithm with time-varying threshold level

    International Nuclear Information System (INIS)

    Lin, Chang Sheng; Tseng, Tse Chuan

    2014-01-01

    Modal identification from response data only is studied for structural systems under nonstationary ambient vibration. The topic of this paper is the estimation of modal parameters from nonstationary ambient vibration data by applying the random decrement algorithm with a time-varying threshold level. In the conventional random decrement algorithm, the threshold level for evaluating random dec signatures is defined as the standard deviation of the response data of the reference channel. In practice, however, the random dec signatures may be distorted by noise in the original response data. To improve the accuracy of identification, a modification of the sampling procedure in the random decrement algorithm is proposed for modal-parameter identification from nonstationary ambient response data. The time-varying threshold level is presented for the acquisition of available sample time histories to perform averaging analysis, and is defined as the temporal root-mean-square function of the structural response, which can appropriately describe a wide variety of nonstationary behaviors in reality, such as the time-varying amplitude (variance) of a nonstationary process in a seismic record. Numerical simulations confirm the validity and robustness of the proposed modal-identification method for nonstationary ambient response data under noisy conditions.

  20. A random forest algorithm for nowcasting of intense precipitation events

    Science.gov (United States)

    Das, Saurabh; Chakraborty, Rohit; Maitra, Animesh

    2017-09-01

    Automatic nowcasting of convective initiation and thunderstorms has potential applications in several sectors including aviation planning and disaster management. In this paper, a random forest based machine learning algorithm is tested for nowcasting of convective rain with a ground-based radiometer. Brightness temperatures measured at 14 frequencies (7 frequencies in the 22-31 GHz band and 7 frequencies in the 51-58 GHz band) are utilized as the inputs of the model. The lower frequency band is associated with water vapor absorption, whereas the upper frequency band relates to oxygen absorption; hence, the two bands provide information on the humidity and temperature of the atmosphere. The synthetic minority over-sampling technique is used to balance the data set and 10-fold cross validation is used to assess the performance of the model. Results indicate that the random forest algorithm with a fixed alarm generation time of 30 min or 60 min performs quite well (probability of detection for all types of weather conditions ∼90%) with low false alarms. It is, however, also observed that reducing the alarm generation time improves the threat score significantly and also decreases false alarms. The proposed model is found to be very sensitive to boundary layer instability, as indicated by the variable importance measure. The study shows the suitability of a random forest algorithm for nowcasting applications utilizing a large number of input parameters from diverse sources, and the approach can be utilized in other forecasting problems.

  1. A Streaming Algorithm for Online Estimation of Temporal and Spatial Extent of Delays

    Directory of Open Access Journals (Sweden)

    Kittipong Hiriotappa

    2017-01-01

    Knowing traffic congestion and its impact on travel time in advance is vital for proactive travel planning as well as advanced traffic management. This paper proposes a streaming algorithm to estimate the temporal and spatial extent of delays online, which can be deployed with roadside sensors. First, the proposed algorithm uses streaming input from individual sensors to detect deviations from normal traffic patterns, referred to as anomalies, which are used as an early indication of delay occurrence. Then, a group of consecutive sensors that detect anomalies is used to temporally and spatially estimate the extent of the delay associated with the detected anomalies. Performance evaluations are conducted using a real-world data set collected by roadside sensors in Bangkok, Thailand, and the NGSIM data set collected in California, USA. Using the NGSIM data, it is shown qualitatively that the proposed algorithm can detect consecutive occurrences of shockwaves and estimate their associated delays. Then, using the data set from Thailand, it is shown quantitatively that the proposed algorithm can detect and estimate delays associated with both recurring congestion and incident-induced nonrecurring congestion. The proposed algorithm also outperforms the previously proposed streaming algorithm.

  2. Online cognitive-behavioural treatment of bulimic symptoms: a randomized controlled trial.

    Science.gov (United States)

    Ruwaard, Jeroen; Lange, Alfred; Broeksteeg, Janneke; Renteria-Agirre, Aitziber; Schrieken, Bart; Dolan, Conor V; Emmelkamp, Paul

    2013-01-01

    Manualized cognitive-behavioural treatment (CBT) is underutilized in the treatment of bulimic symptoms. Internet-delivered treatment may reduce current barriers. This study aimed to assess the efficacy of a new online CBT for bulimic symptoms. Participants with bulimic symptoms (n = 105) were randomly allocated to an online CBT, bibliotherapy, or waiting list/delayed treatment condition. Data were gathered at pre-treatment, post-treatment and 1-year follow-up. The primary outcome measures were the Eating Disorder Examination Questionnaire (EDE-Q) and the frequency of binge eating and purging episodes. The secondary outcome measure was the Body Attitude Test. Dropout from Internet treatment was 26%. Intention-to-treat ANCOVAs of post-test data revealed that the EDE-Q scores and the frequency of binging and purging reduced more in the online CBT group compared with the bibliotherapy and waiting list groups (pooled between-group effect size: d = 0.9). At 1-year follow-up, improvements in the online CBT group had been sustained. This study identifies online CBT as a viable alternative in the treatment of bulimic symptoms. Copyright © 2012 John Wiley & Sons, Ltd.

  3. A Framework To Support Management Of HIV/AIDS Using K-Means And Random Forest Algorithm

    Directory of Open Access Journals (Sweden)

    Gladys Iseu

    2017-06-01

    Full Text Available The healthcare industry generates large amounts of complex data about patients, hospital resources, disease management, electronic patient records and medical devices, among others. The availability of these huge amounts of medical data creates a need for powerful mining tools to support health care professionals in the diagnosis, treatment and management of HIV/AIDS. Several data mining techniques have been used in the management of different data sets. Data mining techniques have been categorized into regression algorithms, segmentation algorithms, association algorithms, sequence analysis algorithms and classification algorithms. In the medical field there has not been a specific study that has incorporated two or more data mining algorithms, which limits the decision-making levels available to medical practitioners. This study identified the extent to which the K-means algorithm can cluster patient characteristics, evaluated the extent to which the random forest algorithm can classify the data for informed decision making, and designed a framework to support medical decision making in the treatment of HIV/AIDS-related diseases in Kenya. The paper further used the random forest classification algorithm to compute proximities between pairs of cases that can be used in clustering, locating outliers, or, by scaling, giving interesting views of the data.
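
    A minimal sketch of the two-stage idea (K-means clustering followed by a random forest), using synthetic placeholder data rather than any real patient records; the proximity computation at the end is the standard shared-leaf definition mentioned in the record, not the authors' implementation.

```python
# Sketch of the two-stage idea (hypothetical data): cluster patient records with K-means,
# then train a random forest to reproduce the cluster labels and inspect proximities.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 12))                      # placeholder patient features

X_std = StandardScaler().fit_transform(X)
clusters = KMeans(n_clusters=4, n_init=10, random_state=1).fit_predict(X_std)

forest = RandomForestClassifier(n_estimators=300, oob_score=True, random_state=1)
forest.fit(X_std, clusters)
print("out-of-bag accuracy of reproducing the clustering:", round(forest.oob_score_, 3))

# Random-forest proximity between two cases: fraction of trees in which they share a leaf.
leaves = forest.apply(X_std)                        # shape (n_samples, n_trees)
prox_01 = np.mean(leaves[0] == leaves[1])
print("proximity between case 0 and case 1:", round(prox_01, 3))
```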

  4. Beyond the "c" and the "x": Learning with algorithms in massive open online courses (MOOCs)

    Science.gov (United States)

    Knox, Jeremy

    2018-02-01

    This article examines how algorithms are shaping student learning in massive open online courses (MOOCs). Following the dramatic rise of MOOC platform organisations in 2012, over 4,500 MOOCs have been offered to date, in increasingly diverse languages, and with a growing requirement for fees. However, discussions of learning in MOOCs remain polarised around the "xMOOC" and "cMOOC" designations. In this narrative, the more recent extended or platform MOOC ("xMOOC") adopts a broadcast pedagogy, assuming a direct transmission of information to its largely passive audience (i.e. a teacher-centred approach), while the slightly older connectivist model ("cMOOC") offers only a simplistic reversal of the hierarchy, posing students as highly motivated, self-directed and collaborative learners (i.e. a learner-centred approach). The online nature of both models generates data (e.g. on how many times a particular resource was viewed, or the ways in which participants communicated with each other) which MOOC providers use for analysis, albeit only after these data have been selectively processed. Central to many learning analytics approaches is the desire to predict students' future behaviour. Educators need to be aware that MOOC learning is not just about teachers and students, but that it also involves algorithms: instructions which perform automated calculations on data. Education is becoming embroiled in an "algorithmic culture" that defines educational roles, forecasts attainment, and influences pedagogy. Established theories of learning appear wholly inadequate in addressing the agential role of algorithms in the educational domain of the MOOC. This article identifies and examines four key areas where algorithms influence the activities of the MOOC: (1) data capture and discrimination; (2) calculated learners; (3) feedback and entanglement; and (4) learning with algorithms. The article concludes with a call for further research in these areas to surface a critical

  5. Online identification algorithms for integrated dielectric electroactive polymer sensors and self-sensing concepts

    International Nuclear Information System (INIS)

    Hoffstadt, Thorben; Griese, Martin; Maas, Jürgen

    2014-01-01

    Transducers based on dielectric electroactive polymers (DEAP) use electrostatic pressure to convert electric energy into strain energy or vice versa. Besides this, they are also designed for sensor applications in monitoring the actual stretch state on the basis of the deformation dependent capacitive–resistive behavior of the DEAP. In order to enable an efficient and proper closed loop control operation of these transducers, e.g. in positioning or energy harvesting applications, on the one hand, sensors based on DEAP material can be integrated into the transducers and evaluated externally, and on the other hand, the transducer itself can be used as a sensor, also in terms of self-sensing. For this purpose the characteristic electrical behavior of the transducer has to be evaluated in order to determine the mechanical state. Also, adequate online identification algorithms with sufficient accuracy and dynamics are required, independent from the sensor concept utilized, in order to determine the electrical DEAP parameters in real time. Therefore, in this contribution, algorithms are developed in the frequency domain for identifications of the capacitance as well as the electrode and polymer resistance of a DEAP, which are validated by measurements. These algorithms are designed for self-sensing applications, especially if the power electronics utilized is operated at a constant switching frequency, and parasitic harmonic oscillations are induced besides the desired DC value. These oscillations can be used for the online identification, so an additional superimposed excitation is no longer necessary. For this purpose a dual active bridge (DAB) is introduced to drive the DEAP transducer. The capabilities of the real-time identification algorithm in combination with the DAB are presented in detail and discussed, finally. (paper)

  6. A Fast Algorithm of Convex Hull Vertices Selection for Online Classification.

    Science.gov (United States)

    Ding, Shuguang; Nie, Xiangli; Qiao, Hong; Zhang, Bo

    2018-04-01

    Reducing samples through convex hull vertices selection (CHVS) within each class is an important and effective method for online classification problems, since the classifier can be trained rapidly with the selected samples. However, the process of CHVS is NP-hard. In this paper, we propose a fast algorithm to select the convex hull vertices, based on the convex hull decomposition and the property of projection. In the proposed algorithm, the quadratic minimization problem of computing the distance between a point and a convex hull is converted into a linear equation problem with a low computational complexity. When the data dimension is high, an approximate, instead of exact, convex hull is allowed to be selected by setting an appropriate termination condition in order to delete more nonimportant samples. In addition, the impact of outliers is also considered, and the proposed algorithm is improved by deleting the outliers in the initial procedure. Furthermore, a dimension convention technique via the kernel trick is used to deal with nonlinearly separable problems. An upper bound is theoretically proved for the difference between the support vector machines based on the approximate convex hull vertices selected and all the training samples. Experimental results on both synthetic and real data sets show the effectiveness and validity of the proposed algorithm.

  7. On efficient randomized algorithms for finding the PageRank vector

    Science.gov (United States)

    Gasnikov, A. V.; Dmitriev, D. Yu.

    2015-03-01

    Two randomized methods are considered for finding the PageRank vector; in other words, the solution of the system p^T = p^T P with a stochastic n × n matrix P, where n ~ 10^7-10^9, is sought (in the class of probability distributions) with accuracy ε: ε ≫ n^{-1}. Thus, the possibility of brute-force multiplication of P by the column is ruled out in the case of dense objects. The first method is based on the idea of Markov chain Monte Carlo algorithms. This approach is efficient when the iterative process p_{t+1}^T = p_t^T P quickly reaches a steady state. Additionally, it takes into account another specific feature of P, namely, the nonzero off-diagonal elements of P are equal in rows (this property is used to organize a random walk over the graph with the matrix P). Based on modern concentration-of-measure inequalities, new bounds for the running time of this method are presented that take into account the specific features of P. In the second method, the search for a ranking vector is reduced to finding the equilibrium in an antagonistic matrix game, where S_n(1) is the unit simplex in ℝ^n and I is the identity matrix. The arising problem is solved by applying a slightly modified Grigoriadis-Khachiyan algorithm (1995). This technique, like the Nazin-Polyak method (2009), is a randomized version of Nemirovski's mirror descent method. The difference is that randomization in the Grigoriadis-Khachiyan algorithm is used when the gradient is projected onto the simplex rather than when the stochastic gradient is computed. For sparse matrices P, the method proposed yields noticeably better results.
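
    The first (Markov chain Monte Carlo) idea can be illustrated on a toy graph as below: short random walks with restart are run and visit frequencies are counted. This is only a generic random-walk PageRank estimator under a uniform teleport assumption, not the paper's estimator or its running-time bounds.

```python
# Sketch of the random-walk (MCMC) idea for estimating a PageRank-type vector on a small
# sparse graph: run many short walks with restart probability (1 - alpha) and count visit
# frequencies. Illustration only; not the paper's exact estimator.
import random
from collections import defaultdict

def mc_pagerank(adj, alpha=0.85, n_walks=20000, max_steps=50, seed=0):
    rng = random.Random(seed)
    nodes = list(adj)
    visits = defaultdict(int)
    total = 0
    for _ in range(n_walks):
        v = rng.choice(nodes)
        for _ in range(max_steps):
            visits[v] += 1
            total += 1
            if rng.random() > alpha or not adj[v]:
                break                      # restart (teleport) or dangling node
            v = rng.choice(adj[v])         # follow a random out-link
    return {u: visits[u] / total for u in nodes}

adj = {0: [1, 2], 1: [2], 2: [0], 3: [2]}  # toy link graph
print(mc_pagerank(adj))
```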

  8. Standard versus prosocial online support groups for distressed breast cancer survivors: a randomized controlled trial

    Directory of Open Access Journals (Sweden)

    Golant Mitch

    2011-08-01

    Full Text Available Abstract Background: The Internet can increase access to psychosocial care for breast cancer survivors through online support groups. This study will test a novel prosocial online group that emphasizes both opportunities for getting and giving help. Based on the helper therapy principle, it is hypothesized that the addition of structured helping opportunities and coaching on how to help others online will increase the psychological benefits of a standard online group. Methods/Design: A two-armed randomized controlled trial with pretest and posttest. Non-metastatic breast cancer survivors with elevated psychological distress will be randomized to either a standard facilitated online group or to a prosocial facilitated online group, which combines online exchanges of support with structured helping opportunities (blogging, breast cancer outreach) and coaching on how best to give support to others. Validated and reliable measures will be administered to women approximately one month before and after the interventions. Self-esteem, positive affect, and sense of belonging will be tested as potential mediators of the primary outcomes of depressive/anxious symptoms and sense of purpose in life. Discussion: This study will test an innovative approach to maximizing the psychological benefits of cancer online support groups. The theory-based prosocial online support group intervention model is sustainable, because it can be implemented by private non-profit or other organizations, such as cancer centers, which mostly offer face-to-face support groups with limited patient reach. Trial Registration: ClinicalTrials.gov: NCT01396174

  9. Standard versus prosocial online support groups for distressed breast cancer survivors: a randomized controlled trial.

    Science.gov (United States)

    Lepore, Stephen J; Buzaglo, Joanne S; Lieberman, Morton A; Golant, Mitch; Davey, Adam

    2011-08-25

    The Internet can increase access to psychosocial care for breast cancer survivors through online support groups. This study will test a novel prosocial online group that emphasizes both opportunities for getting and giving help. Based on the helper therapy principle, it is hypothesized that the addition of structured helping opportunities and coaching on how to help others online will increase the psychological benefits of a standard online group. A two-armed randomized controlled trial with pretest and posttest. Non-metastatic breast cancer survivors with elevated psychological distress will be randomized to either a standard facilitated online group or to a prosocial facilitated online group, which combines online exchanges of support with structured helping opportunities (blogging, breast cancer outreach) and coaching on how best to give support to others. Validated and reliable measures will be administered to women approximately one month before and after the interventions. Self-esteem, positive affect, and sense of belonging will be tested as potential mediators of the primary outcomes of depressive/anxious symptoms and sense of purpose in life. This study will test an innovative approach to maximizing the psychological benefits of cancer online support groups. The theory-based prosocial online support group intervention model is sustainable, because it can be implemented by private non-profit or other organizations, such as cancer centers, which mostly offer face-to-face support groups with limited patient reach. ClinicalTrials.gov: NCT01396174.

  10. An Optimal Online Resource Allocation Algorithm for Energy Harvesting Body Area Networks

    Directory of Open Access Journals (Sweden)

    Guangyuan Wu

    2018-01-01

    Full Text Available In Body Area Networks (BANs), how to achieve energy management to extend the lifetime of the body area network system is one of the most critical problems. In this paper, we design a body area network system powered by renewable energy, in which the sensors carried by the patient, equipped with an energy harvesting module, can transmit data to a personal device. We do not require any a priori knowledge of the stochastic nature of energy harvesting and energy consumption. We formulate a user utility optimization problem. We use Lyapunov Optimization techniques to decompose the problem into three sub-problems, i.e., battery management, collecting rate control and transmission power allocation. We propose an online resource allocation algorithm to achieve two major goals: (1) balancing sensors' energy harvesting and energy consumption while stabilizing the BANs system; and (2) maximizing the user utility. Performance analysis addresses the required battery capacity, bounded data queue length and optimality of the proposed algorithm. Simulation results verify the optimality of the algorithm.

  11. Research on the Random Shock Vibration Test Based on the Filter-X LMS Adaptive Inverse Control Algorithm

    Directory of Open Access Journals (Sweden)

    Wang Wei

    2016-01-01

    Full Text Available The theory and algorithms of adaptive inverse control are presented, and the research shows that an adaptive inverse control strategy can effectively eliminate the influence of noise on system control. A frequency-domain filter-X LMS adaptive inverse control algorithm is proposed and applied to the random shock vibration control process of a two-exciter hydraulic vibration test system, and the process of realizing a random shock vibration test with adaptive inverse control strategies is summarized. Self-closed-loop and field tests show that the frequency-domain filter-X LMS adaptive inverse control algorithm can realize high-precision control of random shock vibration tests.
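
    The record concerns a frequency-domain filtered-x LMS controller; as background, a plain time-domain FxLMS sketch is given below with a toy single-channel setup. The secondary-path model, signals and step size are all illustrative assumptions.

```python
# Time-domain filtered-x LMS sketch (the paper uses a frequency-domain variant; this only
# illustrates the core update). The "secondary path" model s_hat is assumed known.
import numpy as np

def fxlms(x, d, s, s_hat, n_taps=32, mu=1e-3):
    """x: reference signal, d: disturbance at the error sensor,
    s / s_hat: true and modelled secondary-path FIR responses (len(s_hat) <= n_taps)."""
    w = np.zeros(n_taps)                     # adaptive controller weights
    x_buf = np.zeros(n_taps)                 # recent reference samples (newest first)
    xf_buf = np.zeros(n_taps)                # recent filtered-reference samples
    y_buf = np.zeros(len(s))                 # recent controller outputs (newest first)
    e = np.zeros(len(x))
    for n in range(len(x)):
        x_buf = np.roll(x_buf, 1); x_buf[0] = x[n]
        y = w @ x_buf                        # controller output
        y_buf = np.roll(y_buf, 1); y_buf[0] = y
        e[n] = d[n] - s @ y_buf              # residual at the error sensor
        xf = s_hat @ x_buf[:len(s_hat)]      # reference filtered through the path model
        xf_buf = np.roll(xf_buf, 1); xf_buf[0] = xf
        w += mu * e[n] * xf_buf              # filtered-x LMS weight update
    return w, e

# toy demonstration: track a narrowband disturbance through a two-tap secondary path
rng = np.random.default_rng(0)
t = np.arange(4000)
x = np.sin(0.05 * t) + 0.05 * rng.normal(size=t.size)          # reference signal
s = np.array([0.8, 0.3])
d = np.convolve(x, [0.5, -0.2], mode="full")[: t.size]         # disturbance to cancel
w, e = fxlms(x, d, s, s_hat=s.copy())
print("error power, first vs last 200 samples:", np.mean(e[:200]**2), np.mean(e[-200:]**2))
```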

  12. On randomized algorithms for numerical solution of applied Fredholm integral equations of the second kind

    Science.gov (United States)

    Voytishek, Anton V.; Shipilov, Nikolay M.

    2017-11-01

    In this paper, the systematization of numerical (computer-implemented) randomized functional algorithms for approximating a solution of a Fredholm integral equation of the second kind is carried out. Three types of such algorithms are distinguished: the projection, the mesh and the projection-mesh methods. The possibilities of using these algorithms to solve practically important problems are investigated in detail. The disadvantages of the mesh algorithms, related to the necessity of calculating the values of the kernels of integral equations at fixed points, are identified. In practice, these kernels have integrable singularities, and calculation of their values is impossible. Thus, for applied problems related to solving Fredholm integral equations of the second kind, it is expedient to use not the mesh, but the projection and projection-mesh randomized algorithms.
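
    To make the mesh-free flavour of such randomized algorithms concrete, here is a sketch of a classical Neumann-series (random-walk) estimator for a toy equation u(x) = f(x) + ∫₀¹ k(x, y) u(y) dy with a smooth kernel; it is not one of the projection or projection-mesh methods systematized in the paper, and kernel, right-hand side and parameters are illustrative.

```python
# Sketch of a randomized (Neumann-series) estimator for a Fredholm equation of the second
# kind, u(x) = f(x) + ∫_0^1 k(x, y) u(y) dy, with a toy smooth kernel. Each walk samples
# y uniformly on [0, 1], carries a multiplicative weight k/(continuation prob.), and stops
# with probability q; the scores are unbiased when the Neumann series converges.
import random

def k(x, y): return 0.5 * x * y            # toy kernel with norm < 1
def f(x): return 1.0                       # toy right-hand side

def u_estimate(x0, n_walks=200000, q=0.3, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walks):
        x, w, score = x0, 1.0, f(x0)
        while rng.random() > q:            # continue with probability 1 - q
            y = rng.random()               # uniform transition density on [0, 1]
            w *= k(x, y) / (1.0 - q)
            score += w * f(y)
            x = y
        total += score
    return total / n_walks

# For this kernel the exact solution is u(x) = 1 + 0.3 x, so u(0.5) = 1.15.
print(u_estimate(0.5))
```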

  13. The Hidden Flow Structure and Metric Space of Network Embedding Algorithms Based on Random Walks.

    Science.gov (United States)

    Gu, Weiwei; Gong, Li; Lou, Xiaodan; Zhang, Jiang

    2017-10-13

    Network embedding, which encodes all vertices in a network as a set of numerical vectors in accordance with its local and global structures, has drawn widespread attention. Network embedding not only learns significant features of a network, such as clustering and link prediction, but also learns the latent vector representation of the nodes, which provides theoretical support for a variety of applications, such as visualization, link prediction, node classification, and recommendation. As the latest progress of the research, several algorithms based on random walks have been devised. Although those algorithms have drawn much attention for their high scores in learning efficiency and accuracy, there is still a lack of theoretical explanation, and the transparency of those algorithms has been doubted. Here, we propose an approach based on the open-flow network model to reveal the underlying flow structure and its hidden metric space of different random walk strategies on networks. We show that the essence of embedding based on random walks is the latent metric structure defined on the open-flow network. This not only deepens our understanding of random-walk-based embedding algorithms but also helps in finding new potential applications in network embedding.
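
    For context, the sketch below generates the truncated random walks that this family of embedding algorithms (e.g. DeepWalk) feeds into a skip-gram model; the open-flow analysis proposed in the paper is not reproduced, and the graph and walk parameters are illustrative.

```python
# Sketch of the random-walk corpus that embedding methods of this family (e.g. DeepWalk)
# are built on: truncated uniform random walks from every node, later fed to a skip-gram
# model such as gensim's Word2Vec. This illustrates the input the paper analyses, not the
# open-flow model itself.
import random

def generate_walks(adj, num_walks=10, walk_length=20, seed=0):
    rng = random.Random(seed)
    walks = []
    nodes = list(adj)
    for _ in range(num_walks):
        rng.shuffle(nodes)
        for start in nodes:
            walk, v = [start], start
            while len(walk) < walk_length and adj[v]:
                v = rng.choice(adj[v])
                walk.append(v)
            walks.append([str(u) for u in walk])   # string tokens for a word2vec model
    return walks

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
walks = generate_walks(adj)
# e.g. from gensim.models import Word2Vec; Word2Vec(walks, vector_size=16, window=5, sg=1)
```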

  14. Optimization of Proton CT Detector System and Image Reconstruction Algorithm for On-Line Proton Therapy.

    Directory of Open Access Journals (Sweden)

    Chae Young Lee

    Full Text Available The purposes of this study were to optimize a proton computed tomography (pCT) system for proton range verification and to confirm the pCT image reconstruction algorithm based on projection images generated with optimized parameters. For this purpose, we developed a new pCT scanner using the Geometry and Tracking (GEANT) 4.9.6 simulation toolkit. GEANT4 simulations were performed to optimize the geometric parameters representing the detector thickness and the distance between the detectors for pCT. The system consisted of four silicon strip detectors for particle tracking and a calorimeter to measure the residual energies of the individual protons. The optimized pCT system design was then adjusted to ensure that the solution to a CS-based convex optimization problem would converge to yield the desired pCT images after a reasonable number of iterative corrections. In particular, we used a total variation-based formulation that has been useful in exploiting prior knowledge about the minimal variations of proton attenuation characteristics in the human body. Examinations performed using our CS algorithm showed that high-quality pCT images could be reconstructed using sets of 72 projections within 20 iterations and without any streaks or noise, which can be caused by under-sampling and proton starvation. Moreover, the images yielded by this CS algorithm were found to be of higher quality than those obtained using other reconstruction algorithms. The optimized pCT scanner system demonstrated the potential to perform high-quality pCT during on-line image-guided proton therapy, without increasing the imaging dose, by applying our CS-based proton CT reconstruction algorithm. This makes our optimized detector system and CS-based proton CT reconstruction algorithm potentially useful for on-line proton therapy.

  15. Using rapidly-exploring random tree-based algorithms to find smooth and optimal trajectories

    CSIR Research Space (South Africa)

    Matebese, B

    2012-10-01

    Full Text Available Rapidly-exploring random tree-based algorithms are used to find smooth and optimal trajectories in complex environments. The RRT algorithm is the most popular and has the ability to find a feasible solution faster than other algorithms. The drawback of using RRT is that, as the number of samples increases, the probability that the algorithm converges...
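
    A minimal 2D RRT sketch on an obstacle-free unit square is given below to illustrate how the tree is grown; the smoothing and optimality extensions discussed in the record (e.g. RRT*-style rewiring) are not included, and all parameters are illustrative.

```python
# Minimal 2D RRT sketch on the unit square without obstacles, to illustrate how the tree
# is grown; smoothing / optimality extensions (e.g. RRT*) build on this basic loop.
import math, random

def rrt(start, goal, step=0.05, goal_tol=0.05, max_iters=5000, seed=0):
    rng = random.Random(seed)
    nodes, parent = [start], {0: None}
    for _ in range(max_iters):
        sample = goal if rng.random() < 0.05 else (rng.random(), rng.random())
        i_near = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], sample))
        near = nodes[i_near]
        d = math.dist(near, sample)
        if d == 0.0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        parent[len(nodes)] = i_near
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:          # reached the goal region: backtrack
            path, i = [], len(nodes) - 1
            while i is not None:
                path.append(nodes[i]); i = parent[i]
            return path[::-1]
    return None

print(rrt((0.1, 0.1), (0.9, 0.9)))
```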

  16. The Quantitative Analysis of User Behavior Online - Data, Models and Algorithms

    Science.gov (United States)

    Raghavan, Prabhakar

    By blending principles from mechanism design, algorithms, machine learning and massive distributed computing, the search industry has become good at optimizing monetization on sound scientific principles. This represents a successful and growing partnership between computer science and microeconomics. When it comes to understanding how online users respond to the content and experiences presented to them, we have more of a lacuna in the collaboration between computer science and certain social sciences. We will use a concrete technical example from image search results presentation, developing in the process some algorithmic and machine learning problems of interest in their own right. We then use this example to motivate the kinds of studies that need to grow between computer science and the social sciences; a critical element of this is the need to blend large-scale data analysis with smaller-scale eye-tracking and "individualized" lab studies.

  17. Path Planning Algorithms for Autonomous Border Patrol Vehicles

    Science.gov (United States)

    Lau, George Tin Lam

    This thesis presents an online path planning algorithm developed for unmanned vehicles in charge of autonomous border patrol. In this Pursuit-Evasion game, the unmanned vehicle is required to capture multiple trespassers on its own before any of them reach a target safe house where they are safe from capture. The problem formulation is based on Isaacs' Target Guarding problem, but extended to the case of multiple evaders. The proposed path planning method is based on Rapidly-exploring random trees (RRT) and is capable of producing trajectories within several seconds to capture 2 or 3 evaders. Simulations are carried out to demonstrate that the resulting trajectories approach the optimal solution produced by a nonlinear programming-based numerical optimal control solver. Experiments are also conducted on unmanned ground vehicles to show the feasibility of implementing the proposed online path planning algorithm on physical applications.

  18. AUTOCLASSIFICATION OF THE VARIABLE 3XMM SOURCES USING THE RANDOM FOREST MACHINE LEARNING ALGORITHM

    International Nuclear Information System (INIS)

    Farrell, Sean A.; Murphy, Tara; Lo, Kitty K.

    2015-01-01

    In the current era of large surveys and massive data sets, autoclassification of astrophysical sources using intelligent algorithms is becoming increasingly important. In this paper we present the catalog of variable sources in the Third XMM-Newton Serendipitous Source catalog (3XMM) autoclassified using the Random Forest machine learning algorithm. We used a sample of manually classified variable sources from the second data release of the XMM-Newton catalogs (2XMMi-DR2) to train the classifier, obtaining an accuracy of ∼92%. We also evaluated the effectiveness of identifying spurious detections using a sample of spurious sources, achieving an accuracy of ∼95%. Manual investigation of a random sample of classified sources confirmed these accuracy levels and showed that the Random Forest machine learning algorithm is highly effective at automatically classifying 3XMM sources. Here we present the catalog of classified 3XMM variable sources. We also present three previously unidentified unusual sources that were flagged as outlier sources by the algorithm: a new candidate supergiant fast X-ray transient, a 400 s X-ray pulsar, and an eclipsing 5 hr binary system coincident with a known Cepheid.

  19. A novel Random Walk algorithm with Compulsive Evolution for heat exchanger network synthesis

    International Nuclear Information System (INIS)

    Xiao, Yuan; Cui, Guomin

    2017-01-01

    Highlights: • A novel Random Walk Algorithm with Compulsive Evolution is proposed for HENS. • A simple and feasible evolution strategy is presented in the RWCE algorithm. • The integer and continuous variables of the HEN are optimized simultaneously in RWCE. • RWCE demonstrates a relatively strong global search ability in HEN optimization. - Abstract: Heat exchanger network (HEN) synthesis can be characterized as highly combinatorial, nonlinear and nonconvex, contributing to unmanageable computational time and a challenge in identifying the globally optimal network design. Stochastic methods are robust and show a powerful global optimizing ability. Based on the common characteristic of different stochastic methods, namely randomness, a novel Random Walk algorithm with Compulsive Evolution (RWCE) is proposed to achieve the best possible total annual cost of a heat exchanger network with a relatively simple and feasible evolution strategy. A population of heat exchanger networks is first randomly initialized. Next, the heat load of each heat exchanger in each individual is randomly expanded or contracted in order to optimize both the integer and continuous variables simultaneously and to obtain the lowest total annual cost. Besides, when individuals approach local optima, there is a certain probability for them to compulsively accept imperfect networks in order to keep the population diversity and the ability of global optimization. The presented method is then applied to heat exchanger network synthesis cases from the literature to compare with the best results published. RWCE consistently obtains a lower total annual cost than previously published results.

  20. Efficient Online Learning Algorithms Based on LSTM Neural Networks.

    Science.gov (United States)

    Ergen, Tolga; Kozat, Suleyman Serdar

    2017-09-13

    We investigate online nonlinear regression and introduce novel regression structures based on the long short term memory (LSTM) networks. For the introduced structures, we also provide highly efficient and effective online training methods. To train these novel LSTM-based structures, we put the underlying architecture in a state space form and introduce highly efficient and effective particle filtering (PF)-based updates. We also provide stochastic gradient descent and extended Kalman filter-based updates. Our PF-based training method guarantees convergence to the optimal parameter estimation in the mean square error sense provided that we have a sufficient number of particles and satisfy certain technical conditions. More importantly, we achieve this performance with a computational complexity in the order of the first-order gradient-based methods by controlling the number of particles. Since our approach is generic, we also introduce a gated recurrent unit (GRU)-based approach by directly replacing the LSTM architecture with the GRU architecture, where we demonstrate the superiority of our LSTM-based approach in the sequential prediction task via different real life data sets. In addition, the experimental results illustrate significant performance improvements achieved by the introduced algorithms with respect to the conventional methods over several different benchmark real life data sets.

  1. Beyond the "c" and the "x": Learning with Algorithms in Massive Open Online Courses (MOOCs)

    Science.gov (United States)

    Knox, Jeremy

    2018-01-01

    This article examines how algorithms are shaping student learning in massive open online courses (MOOCs). Following the dramatic rise of MOOC platform organisations in 2012, over 4,500 MOOCs have been offered to date, in increasingly diverse languages, and with a growing requirement for fees. However, discussions of "learning" in MOOCs…

  2. A Novel Online Data-Driven Algorithm for Detecting UAV Navigation Sensor Faults.

    Science.gov (United States)

    Sun, Rui; Cheng, Qi; Wang, Guanyu; Ochieng, Washington Yotto

    2017-09-29

    The use of Unmanned Aerial Vehicles (UAVs) has increased significantly in recent years. On-board integrated navigation sensors are a key component of UAVs' flight control systems and are essential for flight safety. In order to ensure flight safety, timely and effective navigation sensor fault detection capability is required. In this paper, a novel data-driven Adaptive Neuron Fuzzy Inference System (ANFIS)-based approach is presented for the detection of on-board navigation sensor faults in UAVs. Contrary to the classic UAV sensor fault detection algorithms, based on predefined or modelled faults, the proposed algorithm combines an online data training mechanism with the ANFIS-based decision system. The main advantages of this algorithm are that it allows real-time model-free residual analysis from Kalman Filter (KF) estimates and the ANFIS to build a reliable fault detection system. In addition, it allows fast and accurate detection of faults, which makes it suitable for real-time applications. Experimental results have demonstrated the effectiveness of the proposed fault detection method in terms of accuracy and misdetection rate.

  3. A Novel Online Data-Driven Algorithm for Detecting UAV Navigation Sensor Faults

    Directory of Open Access Journals (Sweden)

    Rui Sun

    2017-09-01

    Full Text Available The use of Unmanned Aerial Vehicles (UAVs) has increased significantly in recent years. On-board integrated navigation sensors are a key component of UAVs' flight control systems and are essential for flight safety. In order to ensure flight safety, timely and effective navigation sensor fault detection capability is required. In this paper, a novel data-driven Adaptive Neuron Fuzzy Inference System (ANFIS)-based approach is presented for the detection of on-board navigation sensor faults in UAVs. Contrary to the classic UAV sensor fault detection algorithms, based on predefined or modelled faults, the proposed algorithm combines an online data training mechanism with the ANFIS-based decision system. The main advantages of this algorithm are that it allows real-time model-free residual analysis from Kalman Filter (KF) estimates and the ANFIS to build a reliable fault detection system. In addition, it allows fast and accurate detection of faults, which makes it suitable for real-time applications. Experimental results have demonstrated the effectiveness of the proposed fault detection method in terms of accuracy and misdetection rate.

  4. A Particle Swarm Optimization Algorithm with Variable Random Functions and Mutation

    Institute of Scientific and Technical Information of China (English)

    ZHOU Xiao-Jun; YANG Chun-Hua; GUI Wei-Hua; DONG Tian-Xue

    2014-01-01

    The convergence analysis of the standard particle swarm optimization (PSO) has shown that changing the random functions, the personal best and the group best has the potential to improve the performance of the PSO. In this paper, a novel strategy with variable random functions and polynomial mutation is introduced into the PSO, which is called the particle swarm optimization algorithm with variable random functions and mutation (PSO-RM). Random functions are adjusted with the density of the population so as to manipulate the weights of the cognitive part and the social part. Mutation is executed on both the personal best particle and the group best particle to explore new areas. Experiment results have demonstrated the effectiveness of the strategy.
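
    The sketch below is a generic PSO loop with a simple Gaussian mutation of the global best, intended only to illustrate the kind of scheme the record describes; the specific variable-random-function rule of PSO-RM is not reproduced, and all parameter values are conventional defaults.

```python
# Generic PSO sketch with mutation of the global best particle; illustration only, not the
# PSO-RM update rule of the paper.
import numpy as np

def pso(obj, dim=10, n_particles=30, iters=200, w=0.72, c1=1.49, c2=1.49, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.apply_along_axis(obj, 1, x)
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update
        x = x + v
        trial = gbest + rng.normal(scale=0.1, size=dim)             # mutate the global best
        if obj(trial) < obj(gbest):
            gbest = trial
        vals = np.apply_along_axis(obj, 1, x)
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        if pbest_val.min() < obj(gbest):
            gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, obj(gbest)

sphere = lambda z: float(np.sum(z ** 2))
print(pso(sphere))
```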

  5. Ant Colony Clustering Algorithm and Improved Markov Random Fusion Algorithm in Image Segmentation of Brain Images

    Directory of Open Access Journals (Sweden)

    Guohua Zou

    2016-12-01

    Full Text Available New medical imaging technologies, such as Computed Tomography and Magnetic Resonance Imaging (MRI), have been widely used in all aspects of medical diagnosis. The purpose of these imaging techniques is to obtain various qualitative and quantitative data of the patient comprehensively and accurately, and to provide correct digital information for diagnosis, treatment planning and evaluation after surgery. MR has a good imaging diagnostic advantage for brain diseases. However, as the requirements for brain image definition and quantitative analysis keep increasing, better segmentation of MR brain images is needed. The FCM (Fuzzy C-means) algorithm is widely applied in image segmentation, but it has some shortcomings, such as long computation time and poor anti-noise capability. In this paper, firstly, the Ant Colony algorithm is used to determine the cluster centers and the number of clusters for the FCM algorithm so as to improve its running speed. Then an improved Markov random field model is used to improve the algorithm, so that its anti-noise ability can be improved. Experimental results show that the algorithm put forward in this paper has obvious advantages in image segmentation speed and segmentation effect.
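
    As a reference point, here is a plain fuzzy C-means sketch on placeholder pixel intensities; the ant-colony initialization and the Markov-random-field refinement described in the record are not reproduced.

```python
# Plain fuzzy C-means sketch on flattened pixel intensities; the ant-colony initialisation
# and the Markov-random-field refinement described above are not reproduced here.
import numpy as np

def fcm(x, c=3, m=2.0, iters=100, tol=1e-5, seed=0):
    """x: (n_samples, n_features) array, c: number of clusters, m: fuzzifier."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, len(x)))
    u /= u.sum(axis=0)                                    # random initial memberships
    for _ in range(iters):
        um = u ** m
        centers = um @ x / um.sum(axis=1, keepdims=True)  # weighted cluster centres
        d = np.linalg.norm(x[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        u_new = 1.0 / (d ** (2.0 / (m - 1)))
        u_new /= u_new.sum(axis=0)                        # normalise memberships per pixel
        if np.abs(u_new - u).max() < tol:
            u = u_new
            break
        u = u_new
    return centers, u

pixels = np.random.default_rng(1).normal(size=(1000, 1))  # placeholder intensities
centers, memberships = fcm(pixels)
labels = memberships.argmax(axis=0)                        # hard segmentation if needed
```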

  6. A novel image encryption algorithm based on synchronized random bit generated in cascade-coupled chaotic semiconductor ring lasers

    Science.gov (United States)

    Li, Jiafu; Xiang, Shuiying; Wang, Haoning; Gong, Junkai; Wen, Aijun

    2018-03-01

    In this paper, a novel image encryption algorithm based on synchronization of physical random bits generated in a cascade-coupled semiconductor ring laser (CCSRL) system is proposed, and a security analysis is performed. In both the transmitter and receiver parts, the CCSRL system is a master-slave configuration consisting of a master semiconductor ring laser (M-SRL) with cross-feedback and a solitary SRL (S-SRL). The proposed image encryption algorithm includes image preprocessing based on conventional chaotic maps, pixel confusion based on a control matrix extracted from the physical random bits, and pixel diffusion based on a random bit stream extracted from the physical random bits. Firstly, the preprocessing method is used to eliminate the correlation between adjacent pixels. Secondly, physical random bits with verified randomness are generated based on chaos in the CCSRL system and are used to simultaneously generate the control matrix and the random bit stream. Finally, the control matrix and random bit stream are used in the encryption algorithm to change the positions and the values of pixels, respectively. Simulation results and security analysis demonstrate that the proposed algorithm is effective and able to resist various typical attacks, and thus is an excellent candidate for secure image communication applications.
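
    The confusion and diffusion stages can be illustrated as below, with numpy's seeded PRNG standing in for the physical random bits produced by the chaotic laser system; the preprocessing stage and the actual CCSRL bit source are not modelled.

```python
# Sketch of the confusion + diffusion stages described above, with numpy's PRNG standing
# in for the physical random bits generated by the chaotic laser system.
import numpy as np

def encrypt(img, key=12345):
    rng = np.random.default_rng(key)
    flat = img.flatten()
    perm = rng.permutation(flat.size)                     # confusion: shuffle pixel positions
    keystream = rng.integers(0, 256, flat.size, dtype=np.uint8)
    return (flat[perm] ^ keystream).reshape(img.shape)    # diffusion: mask pixel values

def decrypt(cipher, key=12345):
    rng = np.random.default_rng(key)
    perm = rng.permutation(cipher.size)                   # regenerate the same key material
    keystream = rng.integers(0, 256, cipher.size, dtype=np.uint8)
    orig = np.empty(cipher.size, dtype=np.uint8)
    orig[perm] = cipher.flatten() ^ keystream
    return orig.reshape(cipher.shape)

img = np.random.default_rng(0).integers(0, 256, (8, 8), dtype=np.uint8)
assert np.array_equal(decrypt(encrypt(img)), img)
```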

  7. A lower bound on deterministic online algorithms for scheduling on related machines without preemption

    Czech Academy of Sciences Publication Activity Database

    Ebenlendr, Tomáš; Sgall, J.

    2015-01-01

    Roč. 56, č. 1 (2015), s. 73-81 ISSN 1432-4350 R&D Projects: GA ČR GBP202/12/G061; GA AV ČR IAA100190902 Institutional support: RVO:67985840 Keywords : online algorithms * scheduling * makespan Subject RIV: IN - Informatics, Computer Science Impact factor: 0.719, year: 2015 http://link.springer.com/article/10.1007%2Fs00224-013-9451-6

  8. Variable complexity online sequential extreme learning machine, with applications to streamflow prediction

    Science.gov (United States)

    Lima, Aranildo R.; Hsieh, William W.; Cannon, Alex J.

    2017-12-01

    In situations where new data arrive continually, online learning algorithms are computationally much less costly than batch learning ones in maintaining the model up-to-date. The extreme learning machine (ELM), a single hidden layer artificial neural network with random weights in the hidden layer, is solved by linear least squares, and has an online learning version, the online sequential ELM (OSELM). As more data become available during online learning, information on the longer time scale becomes available, so ideally the model complexity should be allowed to change, but the number of hidden nodes (HN) remains fixed in OSELM. A variable complexity VC-OSELM algorithm is proposed to dynamically add or remove HN in the OSELM, allowing the model complexity to vary automatically as online learning proceeds. The performance of VC-OSELM was compared with OSELM in daily streamflow predictions at two hydrological stations in British Columbia, Canada, with VC-OSELM significantly outperforming OSELM in mean absolute error, root mean squared error and Nash-Sutcliffe efficiency at both stations.
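
    For reference, the sketch below implements the baseline OS-ELM recursion with a fixed number of hidden nodes, i.e. the algorithm that VC-OSELM extends by adding or removing nodes online; the data, network size and regularization constant are placeholders.

```python
# Baseline OS-ELM sketch with a fixed number of hidden nodes (the algorithm that VC-OSELM
# extends). Hidden weights are random; output weights are updated recursively per chunk.
import numpy as np

class OSELM:
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(n_in, n_hidden))        # fixed random input weights
        self.b = rng.normal(size=n_hidden)                # fixed random biases
        self.beta = None                                  # output weights
        self.P = None                                     # inverse correlation matrix

    def _h(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))   # sigmoid hidden layer

    def fit_initial(self, X, y):
        H = self._h(X)
        self.P = np.linalg.inv(H.T @ H + 1e-6 * np.eye(H.shape[1]))
        self.beta = self.P @ H.T @ y
        return self

    def partial_fit(self, X, y):
        H = self._h(X)
        K = np.linalg.inv(np.eye(len(X)) + H @ self.P @ H.T)
        self.P = self.P - self.P @ H.T @ K @ H @ self.P   # recursive least-squares update
        self.beta = self.beta + self.P @ H.T @ (y - H @ self.beta)
        return self

    def predict(self, X):
        return self._h(X) @ self.beta

# toy streaming regression
rng = np.random.default_rng(1)
X = rng.normal(size=(600, 5)); y = np.sin(X).sum(axis=1) + 0.1 * rng.normal(size=600)
model = OSELM(5, 40).fit_initial(X[:100], y[:100])
for i in range(100, 600, 50):
    model.partial_fit(X[i:i + 50], y[i:i + 50])
print("RMSE:", float(np.sqrt(np.mean((model.predict(X) - y) ** 2))))
```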

  9. Random noise suppression of seismic data using non-local Bayes algorithm

    Science.gov (United States)

    Chang, De-Kuan; Yang, Wu-Yang; Wang, Yi-Hui; Yang, Qing; Wei, Xin-Jian; Feng, Xiao-Ying

    2018-02-01

    For random noise suppression of seismic data, we present a non-local Bayes (NL-Bayes) filtering algorithm. The NL-Bayes algorithm uses a Gaussian model instead of the weighted average of all similar patches used in the NL-means algorithm to reduce the blurring of structural details, thereby improving the denoising performance. In the denoising process of seismic data, the size and the number of patches in the Gaussian model are adaptively calculated according to the standard deviation of the noise. The NL-Bayes algorithm requires two iterations to complete seismic data denoising; the second iteration makes use of the denoised seismic data from the first iteration to calculate better estimates of the mean and covariance of the patch Gaussian model, improving the similarity of patches and achieving the purpose of denoising. Tests with synthetic and real data sets demonstrate that the NL-Bayes algorithm can effectively improve the SNR and preserve the fidelity of seismic data.

  10. Robust video watermarking via optimization algorithm for quantization of pseudo-random semi-global statistics

    Science.gov (United States)

    Kucukgoz, Mehmet; Harmanci, Oztan; Mihcak, Mehmet K.; Venkatesan, Ramarathnam

    2005-03-01

    In this paper, we propose a novel semi-blind video watermarking scheme, where we use pseudo-random robust semi-global features of video in the three dimensional wavelet transform domain. We design the watermark sequence via solving an optimization problem, such that the features of the mark-embedded video are the quantized versions of the features of the original video. The exact realizations of the algorithmic parameters are chosen pseudo-randomly via a secure pseudo-random number generator, whose seed is the secret key, that is known (resp. unknown) by the embedder and the receiver (resp. by the public). We experimentally show the robustness of our algorithm against several attacks, such as conventional signal processing modifications and adversarial estimation attacks.

  11. A Comparison of Online versus On-site Training in Health Research Methodology: A Randomized Study

    Directory of Open Access Journals (Sweden)

    Kanchanaraksa Sukon

    2011-06-01

    Full Text Available Abstract Background: Distance learning may be useful for building health research capacity. However, evidence that it can improve knowledge and skills in health research, particularly in resource-poor settings, is limited. We compared the impact and acceptability of teaching two distinct content areas, Biostatistics and Research Ethics, through either an on-line distance learning format or traditional on-site training, in a randomized study in India. Our objective was to determine whether on-line courses in Biostatistics and Research Ethics could achieve similar improvements in knowledge as traditional on-site, classroom-based courses. Methods: Subjects: Volunteer Indian scientists were randomly assigned to one of two arms. Intervention: Students in Arm 1 attended a 3.5-day on-site course in Biostatistics and completed a 3.5-week on-line course in Research Ethics. Students in Arm 2 attended a 3.5-week on-line course in Biostatistics and a 3.5-day on-site course in Research Ethics. For the two course formats, learning objectives, course contents and knowledge tests were identical. Main Outcome Measures: Improvement in knowledge immediately and 3 months after course completion, compared to baseline. Results: Baseline characteristics were similar in both arms (n = 29 each). Median knowledge score for Biostatistics increased from a baseline of 49% to 64% (p …). Conclusion: On-line and on-site training formats led to marked and similar improvements of knowledge in Biostatistics and Research Ethics. This, combined with the logistical and cost advantages of on-line training, may make on-line courses particularly useful for expanding health research capacity in resource-limited settings.

  12. Personalized PageRank Clustering: A graph clustering algorithm based on random walks

    Science.gov (United States)

    A. Tabrizi, Shayan; Shakery, Azadeh; Asadpour, Masoud; Abbasi, Maziar; Tavallaie, Mohammad Ali

    2013-11-01

    Graph clustering has been an essential part in many methods and thus its accuracy has a significant effect on many applications. In addition, exponential growth of real-world graphs such as social networks, biological networks and electrical circuits demands clustering algorithms with nearly-linear time and space complexity. In this paper we propose Personalized PageRank Clustering (PPC) that employs the inherent cluster exploratory property of random walks to reveal the clusters of a given graph. We combine random walks and modularity to precisely and efficiently reveal the clusters of a graph. PPC is a top-down algorithm so it can reveal inherent clusters of a graph more accurately than other nearly-linear approaches that are mainly bottom-up. It also gives a hierarchy of clusters that is useful in many applications. PPC has a linear time and space complexity and has been superior to most of the available clustering algorithms on many datasets. Furthermore, its top-down approach makes it a flexible solution for clustering problems with different requirements.

  13. Stochastic geometry, spatial statistics and random fields models and algorithms

    CERN Document Server

    2015-01-01

    Providing a graduate level introduction to various aspects of stochastic geometry, spatial statistics and random fields, this volume places a special emphasis on fundamental classes of models and algorithms as well as on their applications, for example in materials science, biology and genetics. This book has a strong focus on simulations and includes extensive codes in Matlab and R, which are widely used in the mathematical community. It can be regarded as a continuation of the recent volume 2068 of Lecture Notes in Mathematics, where other issues of stochastic geometry, spatial statistics and random fields were considered, with a focus on asymptotic methods.

  14. Precise algorithm to generate random sequential adsorption of hard polygons at saturation

    Science.gov (United States)

    Zhang, G.

    2018-04-01

    Random sequential adsorption (RSA) is a time-dependent packing process, in which particles of certain shapes are randomly and sequentially placed into an empty space without overlap. In the infinite-time limit, the density approaches a "saturation" limit. Although this limit has attracted particular research interest, the majority of past studies could only probe this limit by extrapolation. We have previously found an algorithm to reach this limit using finite computational time for spherical particles and could thus determine the saturation density of spheres with high accuracy. In this paper, we generalize this algorithm to generate saturated RSA packings of two-dimensional polygons. We also calculate the saturation density for regular polygons of three to ten sides and obtain results that are consistent with previous, extrapolation-based studies.
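
    A much simpler illustration of the RSA process is sketched below for hard disks in a periodic unit square, using a fixed number of insertion attempts; it does not implement the exact-saturation algorithm for polygons described in the record.

```python
# RSA sketch for hard disks in a periodic unit square; it simply runs a fixed number of
# insertion attempts, unlike the exact-saturation algorithm described above, and uses
# disks rather than polygons.
import random, math

def rsa_disks(radius=0.03, attempts=200000, seed=0):
    rng = random.Random(seed)
    centers = []
    d2 = (2 * radius) ** 2
    for _ in range(attempts):
        x, y = rng.random(), rng.random()
        ok = True
        for cx, cy in centers:
            dx = min(abs(x - cx), 1 - abs(x - cx))     # periodic boundary conditions
            dy = min(abs(y - cy), 1 - abs(y - cy))
            if dx * dx + dy * dy < d2:                 # would overlap an existing disk
                ok = False
                break
        if ok:
            centers.append((x, y))
    coverage = len(centers) * math.pi * radius ** 2
    return centers, coverage

centers, phi = rsa_disks()
print(len(centers), "disks placed, packing fraction ≈", round(phi, 3))
```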

  15. Online Estimation of Time-Varying Volatility Using a Continuous-Discrete LMS Algorithm

    Directory of Open Access Journals (Sweden)

    Jacques Oksman

    2008-09-01

    Full Text Available The following paper addresses a problem of inference in financial engineering, namely, online time-varying volatility estimation. The proposed method is based on an adaptive predictor for the stock price, built from an implicit integration formula. An estimate for the current volatility value which minimizes the mean square prediction error is calculated recursively using an LMS algorithm. The method is then validated on several synthetic examples as well as on real data. Throughout the illustration, the proposed method is compared with both UKF and offline volatility estimation.
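
    As a stand-in for the record's implicit-integration predictor, the sketch below uses a plain LMS-style recursion on squared log-returns: the squared return is a noisy proxy for the local variance, and the estimate is nudged toward it at each step. The data and step size are illustrative, and this is not the paper's estimator.

```python
# LMS-style online variance recursion on squared log-returns (illustration only).
import numpy as np

def online_volatility(prices, mu=0.05, dt=1.0 / 252):
    r = np.diff(np.log(prices))
    var = r[0] ** 2 / dt                         # initial annualised variance estimate
    est = []
    for x in r:
        err = x ** 2 / dt - var                  # prediction error on the variance proxy
        var += mu * err                          # LMS update
        est.append(np.sqrt(max(var, 0.0)))
    return np.array(est)                         # annualised volatility path

# synthetic geometric Brownian motion with true volatility 0.2
rng = np.random.default_rng(0)
steps = -0.5 * 0.2 ** 2 / 252 + 0.2 * np.sqrt(1 / 252) * rng.normal(size=1000)
prices = 100 * np.exp(np.cumsum(steps))
print(online_volatility(prices)[-1])
```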

  16. Guided online or face-to-face cognitive behavioral treatment for insomnia: A randomized wait-list controlled trial

    NARCIS (Netherlands)

    Lancee, J.; van Straten, A.; Morina, N.; Kaldo, V.; Kamphuis, J.H.

    2016-01-01

    Study Objectives: To compare the efficacy of guided online and individual face-to-face cognitive behavioral treatment for insomnia (CBT-I) to a wait-list condition. Methods: A randomized controlled trial comparing three conditions: guided online; face-to-face; wait-list. Posttest measurements were

  17. Improving Stability and Convergence for Adaptive Radial Basis Function Neural Networks Algorithm. (On-Line Harmonics Estimation Application

    Directory of Open Access Journals (Sweden)

    Eyad K Almaita

    2017-03-01

    Keywords: energy efficiency, power quality, radial basis function, neural networks, adaptive, harmonics. Citation: Almaita, E.K. and Shawawreh, J.Al. (2017) Improving Stability and Convergence for Adaptive Radial Basis Function Neural Networks Algorithm (On-Line Harmonics Estimation Application). International Journal of Renewable Energy Development, 6(1), 9-17. http://dx.doi.org/10.14710/ijred.6.1.9-17

  18. A new approach of optimal control for a class of continuous-time chaotic systems by an online ADP algorithm

    Science.gov (United States)

    Song, Rui-Zhuo; Xiao, Wen-Dong; Wei, Qing-Lai

    2014-05-01

    We develop an online adaptive dynamic programming (ADP) based optimal control scheme for continuous-time chaotic systems. The idea is to use the ADP algorithm to obtain the optimal control input that makes the performance index function reach an optimum. The expression of the performance index function for the chaotic system is first presented. The online ADP algorithm is presented to achieve optimal control. In the ADP structure, neural networks are used to construct a critic network and an action network, which can obtain an approximate performance index function and the control input, respectively. It is proven that the critic parameter error dynamics and the closed-loop chaotic systems are uniformly ultimately bounded exponentially. Our simulation results illustrate the performance of the established optimal control method.

  19. Online CBT life skills programme for low mood and anxiety: study protocol for a pilot randomized controlled trial.

    Science.gov (United States)

    Williams, Christopher; McClay, Carrie-Anne; Martinez, Rebeca; Morrison, Jill; Haig, Caroline; Jones, Ray; Farrand, Paul

    2016-04-27

    Low mood is a common mental health problem with significant health consequences. Studies have shown that cognitive behavioural therapy (CBT) is an effective treatment for low mood and anxiety when delivered one-to-one by an expert practitioner. However, access to this talking therapy is often limited and waiting lists can be long, although a range of low-intensity interventions that can increase access to services are available. These include guided self-help materials delivered via books, classes and online packages. This project aims to pilot a randomized controlled trial of an online CBT-based life skills course with community-based individuals experiencing low mood and anxiety. Individuals with elevated symptoms of depression will be recruited directly from the community via online and newspaper advertisements. Participants will be remotely randomized to receive either immediate access or delayed access to the Living Life to the Full guided online CBT-based life skills package, with telephone or email support provided whilst they use the online intervention. The primary end point will be at 3 months post-randomization, at which point the delayed-access group will be offered the intervention. Levels of depression, anxiety, social functioning and satisfaction will be assessed. This pilot study will test the trial design, and ability to recruit and deliver the intervention. Drop-out rates will be assessed and the completion and acceptability of the package will be investigated. The study will also inform a sample size power calculation for a subsequent substantive randomized controlled trial. ISRCTN ISRCTN12890709.

  20. Functional Principal Component Analysis and Randomized Sparse Clustering Algorithm for Medical Image Analysis

    Science.gov (United States)

    Lin, Nan; Jiang, Junhai; Guo, Shicheng; Xiong, Momiao

    2015-01-01

    Due to advances in sensor technology, growing large medical image data sets make it possible to visualize anatomical changes in biological tissues. As a consequence, medical images have the potential to enhance the diagnosis of disease, the prediction of clinical outcomes and the characterization of disease progression. But in the meantime, the growing data dimensions pose great methodological and computational challenges for the representation and selection of features in image cluster analysis. To address these challenges, we first extend functional principal component analysis (FPCA) from one dimension to two dimensions to fully capture the spatial variation of the image signals. The image signals contain a large number of redundant features which provide no additional information for clustering analysis. The widely used methods for removing the irrelevant features are sparse clustering algorithms using a lasso-type penalty to select the features. However, the accuracy of clustering using a lasso-type penalty depends on the selection of the penalty parameters and the threshold value. In practice, they are difficult to determine. Recently, randomized algorithms have received a great deal of attention in big data analysis. This paper presents a randomized algorithm for accurate feature selection in image clustering analysis. The proposed method is applied to both the liver and kidney cancer histology image data from the TCGA database. The results demonstrate that the randomized feature selection method coupled with functional principal component analysis substantially outperforms the current sparse clustering algorithms in image cluster analysis. PMID:26196383

  1. An efficient randomized algorithm for contact-based NMR backbone resonance assignment.

    Science.gov (United States)

    Kamisetty, Hetunandan; Bailey-Kellogg, Chris; Pandurangan, Gopal

    2006-01-15

    Backbone resonance assignment is a critical bottleneck in studies of protein structure, dynamics and interactions by nuclear magnetic resonance (NMR) spectroscopy. A minimalist approach to assignment, which we call 'contact-based', seeks to dramatically reduce experimental time and expense by replacing the standard suite of through-bond experiments with the through-space (nuclear Overhauser enhancement spectroscopy, NOESY) experiment. In the contact-based approach, spectral data are represented in a graph with vertices for putative residues (of unknown relation to the primary sequence) and edges for hypothesized NOESY interactions, such that observed spectral peaks could be explained if the residues were 'close enough'. Due to experimental ambiguity, several incorrect edges can be hypothesized for each spectral peak. An assignment is derived by identifying consistent patterns of edges (e.g. for alpha-helices and beta-sheets) within a graph and by mapping the vertices to the primary sequence. The key algorithmic challenge is to be able to uncover these patterns even when they are obscured by significant noise. This paper develops, analyzes and applies a novel algorithm for the identification of polytopes representing consistent patterns of edges in a corrupted NOESY graph. Our randomized algorithm aggregates simplices into polytopes and fixes inconsistencies with simple local modifications, called rotations, that maintain most of the structure already uncovered. In characterizing the effects of experimental noise, we employ an NMR-specific random graph model in proving that our algorithm gives optimal performance in expected polynomial time, even when the input graph is significantly corrupted. We confirm this analysis in simulation studies with graphs corrupted by up to 500% noise. Finally, we demonstrate the practical application of the algorithm on several experimental beta-sheet datasets. Our approach is able to eliminate a large majority of noise edges and to

  2. An efficient algorithm for generating random number pairs drawn from a bivariate normal distribution

    Science.gov (United States)

    Campbell, C. W.

    1983-01-01

    An efficient algorithm for generating random number pairs from a bivariate normal distribution was developed. Any desired values of the two means, two standard deviations, and correlation coefficient can be selected. Theoretically the technique is exact, and in practice its accuracy is limited only by the quality of the uniform random number generator, inaccuracies in computer function evaluation, and arithmetic. A FORTRAN routine was written to check the algorithm, and good accuracy was obtained. Some small errors in the correlation coefficient were observed to vary in a surprisingly regular manner. A simple model was developed which explained the qualitative aspects of the errors.
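
    The construction described in the record can be sketched as follows (in Python rather than the original FORTRAN): two independent standard normals are drawn via Box-Muller and then mixed to impose the desired means, standard deviations and correlation coefficient.

```python
# Standard bivariate normal pair generation: Box-Muller for two independent standard
# normals, then a linear mix to impose the means, standard deviations and correlation.
import math, random

def bivariate_normal(mu1, mu2, s1, s2, rho, rng=random.Random(0)):
    u1, u2 = 1.0 - rng.random(), rng.random()      # 1 - u avoids log(0)
    z1 = math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)   # Box-Muller
    z2 = math.sqrt(-2.0 * math.log(u1)) * math.sin(2.0 * math.pi * u2)
    x1 = mu1 + s1 * z1
    x2 = mu2 + s2 * (rho * z1 + math.sqrt(1.0 - rho * rho) * z2)
    return x1, x2

pairs = [bivariate_normal(0.0, 1.0, 1.0, 2.0, 0.7) for _ in range(100000)]
```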

  3. A Framework for the Comparative Assessment of Neuronal Spike Sorting Algorithms towards More Accurate Off-Line and On-Line Microelectrode Arrays Data Analysis.

    Science.gov (United States)

    Regalia, Giulia; Coelli, Stefania; Biffi, Emilia; Ferrigno, Giancarlo; Pedrocchi, Alessandra

    2016-01-01

    Neuronal spike sorting algorithms are designed to retrieve neuronal network activity on a single-cell level from extracellular multiunit recordings with Microelectrode Arrays (MEAs). In typical analysis of MEA data, one spike sorting algorithm is applied indiscriminately to all electrode signals. However, this approach neglects the dependency of algorithms' performances on the neuronal signals properties at each channel, which require data-centric methods. Moreover, sorting is commonly performed off-line, which is time and memory consuming and prevents researchers from having an immediate glance at ongoing experiments. The aim of this work is to provide a versatile framework to support the evaluation and comparison of different spike classification algorithms suitable for both off-line and on-line analysis. We incorporated different spike sorting "building blocks" into a Matlab-based software, including 4 feature extraction methods, 3 feature clustering methods, and 1 template matching classifier. The framework was validated by applying different algorithms on simulated and real signals from neuronal cultures coupled to MEAs. Moreover, the system has been proven effective in running on-line analysis on a standard desktop computer, after the selection of the most suitable sorting methods. This work provides a useful and versatile instrument for a supported comparison of different options for spike sorting towards more accurate off-line and on-line MEA data analysis.

  4. Cryptographic analysis on the key space of optical phase encryption algorithm based on the design of discrete random phase mask

    Science.gov (United States)

    Lin, Chao; Shen, Xueju; Li, Zengyan

    2013-07-01

    The key space of phase encryption algorithm using discrete random phase mask is investigated by numerical simulation in this paper. Random phase mask with finite and discrete phase levels is considered as the core component in most practical optical encryption architectures. The key space analysis is based on the design criteria of discrete random phase mask. The role of random amplitude mask and random phase mask in optical encryption system is identified from the perspective of confusion and diffusion. The properties of discrete random phase mask in a practical double random phase encoding scheme working in both amplitude encoding (AE) and phase encoding (PE) modes are comparably analyzed. The key space of random phase encryption algorithm is evaluated considering both the encryption quality and the brute-force attack resistibility. A method for enlarging the key space of phase encryption algorithm is also proposed to enhance the security of optical phase encryption techniques.

  5. A Novel Path Planning for Robots Based on Rapidly-Exploring Random Tree and Particle Swarm Optimizer Algorithm

    Directory of Open Access Journals (Sweden)

    Zhou Feng

    2013-09-01

    Full Text Available A path-planning method for mobile robots based on the Rapidly-exploring Random Tree (RRT) and the Particle Swarm Optimizer (PSO) is proposed. First, the grid method is built to describe the working space of the mobile robot; then the Rapidly-exploring Random Tree algorithm is used to obtain a global navigation path, and the Particle Swarm Optimizer algorithm is adopted to refine it into a better path. Computer experiment results demonstrate that this novel algorithm can plan an optimal path rapidly in a cluttered environment. Successful obstacle avoidance is achieved, and the model is robust and performs reliably.
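
    The abstract gives only a verbal outline of the planner; the sketch below shows the core RRT expansion loop on a 2-D workspace (the grid representation and the PSO refinement stage are omitted). The function name, the bounds and the is_free collision test are illustrative assumptions.

        import math
        import random

        def rrt(start, goal, is_free, step=1.0, goal_tol=1.0, max_iters=5000,
                bounds=(0.0, 100.0, 0.0, 100.0)):
            """Grow a rapidly-exploring random tree from start towards goal.

            is_free(p) should return True when point p = (x, y) lies in free space.
            Returns a list of points from start to the node that reached the goal
            region, or None if no path was found within max_iters expansions.
            """
            nodes, parent = [start], {0: None}
            xmin, xmax, ymin, ymax = bounds
            for _ in range(max_iters):
                sample = (random.uniform(xmin, xmax), random.uniform(ymin, ymax))
                i_near = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], sample))
                near = nodes[i_near]
                d = math.dist(near, sample)
                if d == 0.0:
                    continue
                # Steer one fixed step from the nearest node towards the random sample.
                new = (near[0] + step * (sample[0] - near[0]) / d,
                       near[1] + step * (sample[1] - near[1]) / d)
                if not is_free(new):
                    continue
                nodes.append(new)
                parent[len(nodes) - 1] = i_near
                if math.dist(new, goal) <= goal_tol:
                    path, i = [], len(nodes) - 1
                    while i is not None:    # walk parent pointers back to the root
                        path.append(nodes[i])
                        i = parent[i]
                    return list(reversed(path))
            return None

    In the paper's pipeline, a path returned by such a tree would then be handed to the PSO stage for smoothing and shortening.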

  6. A partially reflecting random walk on spheres algorithm for electrical impedance tomography

    Energy Technology Data Exchange (ETDEWEB)

    Maire, Sylvain, E-mail: maire@univ-tln.fr [Laboratoire LSIS Equipe Signal et Image, Université du Sud Toulon-Var, Av. Georges Pompidou, BP 56, 83162 La Valette du Var Cedex (France); Simon, Martin, E-mail: simon@math.uni-mainz.de [Institute of Mathematics, Johannes Gutenberg University, 55099 Mainz (Germany)

    2015-12-15

    In this work, we develop a probabilistic estimator for the voltage-to-current map arising in electrical impedance tomography. This novel so-called partially reflecting random walk on spheres estimator enables Monte Carlo methods to compute the voltage-to-current map in an embarrassingly parallel manner, which is an important issue with regard to the corresponding inverse problem. Our method uses the well-known random walk on spheres algorithm inside subdomains where the diffusion coefficient is constant and employs replacement techniques motivated by finite difference discretization to deal with both mixed boundary conditions and interface transmission conditions. We analyze the global bias and the variance of the new estimator both theoretically and experimentally. Subsequently, the variance of the new estimator is considerably reduced via a novel control variate conditional sampling technique which yields a highly efficient hybrid forward solver coupling probabilistic and deterministic algorithms.
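
    The partially reflecting estimator itself is not spelled out in the record; the sketch below shows only its well-known building block, the plain walk-on-spheres estimator for a 2-D Laplace problem with Dirichlet data. The callbacks dist_to_boundary and boundary_value are assumed interfaces, not part of the authors' method.

        import math
        import random

        def walk_on_spheres(x0, dist_to_boundary, boundary_value, eps=1e-3, rng=random):
            """One walk-on-spheres sample of u(x0) for Laplace's equation in 2-D.

            dist_to_boundary(p) returns the distance from p to the domain boundary;
            boundary_value(p) returns the Dirichlet data at (a point near) the boundary.
            Each walk is one unbiased sample; average many walks for the estimate.
            """
            x, y = x0
            while True:
                r = dist_to_boundary((x, y))
                if r < eps:
                    # Close enough to the boundary: stop and read the boundary data.
                    return boundary_value((x, y))
                # Jump to a uniformly random point on the largest inscribed sphere.
                theta = rng.uniform(0.0, 2.0 * math.pi)
                x, y = x + r * math.cos(theta), y + r * math.sin(theta)

        # Example: unit disc with harmonic boundary data g(x, y) = x, so u(x, y) = x.
        # est = sum(walk_on_spheres((0.3, 0.1), lambda p: 1.0 - math.hypot(*p),
        #                           lambda p: p[0]) for _ in range(10000)) / 10000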

  7. Randomized and quantum algorithms for solving initial-value problems in ordinary differential equations of order k

    Directory of Open Access Journals (Sweden)

    Maciej Goćwin

    2008-01-01

    Full Text Available The complexity of initial-value problems is well studied for systems of equations of first order. In this paper, we study the $\varepsilon$-complexity for initial-value problems for scalar equations of higher order. We consider two models of computation, the randomized model and the quantum model. We construct almost optimal algorithms adjusted to scalar equations of higher order, without passing to systems of first order equations. The analysis of these algorithms allows us to establish upper complexity bounds. We also show (almost) matching lower complexity bounds. The $\varepsilon$-complexity in the randomized and quantum setting depends on the regularity of the right-hand side function, but is independent of the order of equation. Comparing the obtained bounds with results known in the deterministic case, we see that randomized algorithms give us a speed-up by $1/2$, and quantum algorithms by $1$, in the exponent. Hence, the speed-up does not depend on the order of equation, and is the same as for the systems of equations of first order. We also include results of some numerical experiments which confirm theoretical results.

  8. Randomized Filtering Algorithms

    DEFF Research Database (Denmark)

    Katriel, Irit; Van Hentenryck, Pascal

    2008-01-01

    of AllDifferent and its generalization, the Global Cardinality Constraint. The first delayed filtering scheme is a Monte Carlo algorithm: its running time is superior, in the worst case, to that of enforcing arc consistency after every domain event, while its filtering effectiveness is analyzed...... in the expected sense. The second scheme is a Las Vegas algorithm using filtering triggers: Its effectiveness is the same as enforcing arc consistency after every domain event, while in the expected case it is faster by a factor of m/n, where n and m are, respectively, the number of nodes and edges...

  9. Cluster randomized controlled trial of a consumer behavior intervention to improve healthy food purchases from online canteens.

    Science.gov (United States)

    Delaney, Tessa; Wyse, Rebecca; Yoong, Sze Lin; Sutherland, Rachel; Wiggers, John; Ball, Kylie; Campbell, Karen; Rissel, Chris; Lecathelinais, Christophe; Wolfenden, Luke

    2017-11-01

    Background: School canteens represent an opportune setting in which to deliver public health nutrition strategies because of their wide reach and frequent use by children. Online school-canteen ordering systems, where students order and pay for their lunch online, provide an avenue to improve healthy canteen purchases through the application of consumer-behavior strategies that have an impact on purchasing decisions. Objective: We assessed the efficacy of a consumer-behavior intervention implemented in an online school-canteen ordering system in reducing the energy, saturated fat, sugar, and sodium contents of primary student lunch orders. Design: A cluster-randomized controlled trial was conducted that involved 2714 students (aged 5-12 y) from 10 primary schools in New South Wales, Australia, who were currently using an online canteen ordering system. Schools were randomized in a 1:1 ratio to receive either the intervention (enhanced system) or the control (standard online ordering only). The intervention included consumer-behavior strategies that were integrated into the online ordering system (targeting menu labeling, healthy food availability, placement, and prompting). Results: Mean energy (difference: -567.25 kJ; 95% CI: -697.95, -436.55 kJ; P consumer-behavior intervention using an existing online canteen infrastructure to improve purchasing behavior from primary school canteens. Such an intervention may represent an appealing policy option as part of a broader government strategy to improve child public health nutrition. This trial was registered at www.anzctr.org.au as ACTRN12616000499482. © 2017 American Society for Nutrition.

  10. On the Runtime of Randomized Local Search and Simple Evolutionary Algorithms for Dynamic Makespan Scheduling

    DEFF Research Database (Denmark)

    Neumann, Frank; Witt, Carsten

    2015-01-01

    combinatorial optimization problem, namely makespan scheduling. We study the model of a strong adversary which is allowed to change one job at regular intervals. Furthermore, we investigate the setting of random changes. Our results show that randomized local search and a simple evolutionary algorithm are very...

  11. An online community improves adherence in an internet-mediated walking program. Part 1: results of a randomized controlled trial.

    Science.gov (United States)

    Richardson, Caroline R; Buis, Lorraine R; Janney, Adrienne W; Goodrich, David E; Sen, Ananda; Hess, Michael L; Mehari, Kathleen S; Fortlage, Laurie A; Resnick, Paul J; Zikmund-Fisher, Brian J; Strecher, Victor J; Piette, John D

    2010-12-17

    Approximately half of American adults do not meet recommended physical activity guidelines. Face-to-face lifestyle interventions improve health outcomes but are unlikely to yield population-level improvements because they can be difficult to disseminate, expensive to maintain, and inconvenient for the recipient. In contrast, Internet-based behavior change interventions can be disseminated widely at a lower cost. However, the impact of some Internet-mediated programs is limited by high attrition rates. Online communities that allow participants to communicate with each other by posting and reading messages may decrease participant attrition. Our objective was to measure the impact of adding online community features to an Internet-mediated walking program on participant attrition and average daily step counts. This randomized controlled trial included sedentary, ambulatory adults who used email regularly and had at least 1 of the following: overweight (body mass index [BMI] ≥ 25), type 2 diabetes, or coronary artery disease. All participants (n = 324) wore enhanced pedometers throughout the 16-week intervention and uploaded step-count data to the study server. Participants could log in to the study website to view graphs of their walking progress, individually-tailored motivational messages, and weekly calculated goals. Participants were randomized to 1 of 2 versions of a Web-based walking program. Those randomized to the "online community" arm could post and read messages with other participants while those randomized to the "no online community" arm could not read or post messages. The main outcome measures were participant attrition and average daily step counts over 16 weeks. Multiple regression analyses assessed the effect of the online community access controlling for age, sex, disease status, BMI, and baseline step counts. Both arms significantly increased their average daily steps between baseline and the end of the intervention period, but there were no

  12. Modified truncated randomized singular value decomposition (MTRSVD) algorithms for large scale discrete ill-posed problems with general-form regularization

    Science.gov (United States)

    Jia, Zhongxiao; Yang, Yanfei

    2018-05-01

    In this paper, we propose new randomization based algorithms for large scale linear discrete ill-posed problems with general-form regularization: $\min \|Lx\|$ subject to $x \in \{x \mid \|Ax-b\| = \min\}$, where L is a regularization matrix. Our algorithms are inspired by the modified truncated singular value decomposition (MTSVD) method, which suits only small to medium scale problems, and randomized SVD (RSVD) algorithms that generate good low rank approximations to A. We use rank-k truncated randomized SVD (TRSVD) approximations to A, obtained by truncating rank-(k+q) RSVD approximations to A, where q is an oversampling parameter. The resulting algorithms are called modified TRSVD (MTRSVD) methods. At every step, we use the LSQR algorithm to solve the resulting inner least squares problem, which is proved to become better conditioned as k increases so that LSQR converges faster. We present sharp bounds for the approximation accuracy of the RSVDs and TRSVDs for severely, moderately and mildly ill-posed problems, and substantially improve a known basic bound for TRSVD approximations. We prove how to choose the stopping tolerance for LSQR in order to guarantee that the computed and exact best regularized solutions have the same accuracy. Numerical experiments illustrate that the best regularized solutions by MTRSVD are as accurate as the ones by the truncated generalized singular value decomposition (TGSVD) algorithm, and at least as accurate as those by some existing truncated randomized generalized singular value decomposition (TRGSVD) algorithms. This work was supported in part by the National Science Foundation of China (Nos. 11771249 and 11371219).
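
    The MTRSVD details (the L factor and the LSQR inner solves) go beyond a short sketch, but the rank-k TRSVD building block the abstract relies on can be illustrated with a basic randomized SVD. The implementation below is a common Halko-style variant with assumed parameter names, not the authors' code.

        import numpy as np

        def randomized_svd(A, k, q_oversample=10, n_power_iter=1, seed=None):
            """Rank-k truncated randomized SVD of A.

            Builds a (k + q_oversample)-dimensional random range sketch of A,
            optionally sharpened by power iterations, then truncates to rank k.
            """
            rng = np.random.default_rng(seed)
            m, n = A.shape
            ell = min(n, k + q_oversample)
            Omega = rng.standard_normal((n, ell))      # random test matrix
            Y = A @ Omega                              # sample the range of A
            for _ in range(n_power_iter):              # optional power iterations
                Y = A @ (A.T @ Y)
            Q, _ = np.linalg.qr(Y)                     # orthonormal range basis
            B = Q.T @ A                                # small (ell x n) matrix
            Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
            U = Q @ Ub
            return U[:, :k], s[:k], Vt[:k, :]          # rank-k TRSVD factors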

  13. Evaluation of Intelligent Grouping Based on Learners' Collaboration Competence Level in Online Collaborative Learning Environment

    Science.gov (United States)

    Muuro, Maina Elizaphan; Oboko, Robert; Wagacha, Waiganjo Peter

    2016-01-01

    In this paper we explore the impact of an intelligent grouping algorithm based on learners' collaborative competency when compared with (a) instructor based Grade Point Average (GPA) method level and (b) random method, on group outcomes and group collaboration problems in an online collaborative learning environment. An intelligent grouping…

  14. An Online Dictionary Learning-Based Compressive Data Gathering Algorithm in Wireless Sensor Networks.

    Science.gov (United States)

    Wang, Donghao; Wan, Jiangwen; Chen, Junying; Zhang, Qiang

    2016-09-22

    To adapt to sense signals of enormous diversities and dynamics, and to decrease the reconstruction errors caused by ambient noise, a novel online dictionary learning method-based compressive data gathering (ODL-CDG) algorithm is proposed. The proposed dictionary is learned from a two-stage iterative procedure, alternately changing between a sparse coding step and a dictionary update step. The self-coherence of the learned dictionary is introduced as a penalty term during the dictionary update procedure. The dictionary is also constrained with sparse structure. It's theoretically demonstrated that the sensing matrix satisfies the restricted isometry property (RIP) with high probability. In addition, the lower bound of necessary number of measurements for compressive sensing (CS) reconstruction is given. Simulation results show that the proposed ODL-CDG algorithm can enhance the recovery accuracy in the presence of noise, and reduce the energy consumption in comparison with other dictionary based data gathering methods.

  15. Experimental implementation of a quantum random-walk search algorithm using strongly dipolar coupled spins

    International Nuclear Information System (INIS)

    Lu Dawei; Peng Xinhua; Du Jiangfeng; Zhu Jing; Zou Ping; Yu Yihua; Zhang Shanmin; Chen Qun

    2010-01-01

    An important quantum search algorithm based on the quantum random walk performs an oracle search on a database of N items with O(√N) calls, yielding a speedup similar to the Grover quantum search algorithm. The algorithm was implemented on a quantum information processor of three-qubit liquid-crystal nuclear magnetic resonance (NMR) in the case of finding 1 out of 4, and tomography of the diagonal elements of all the final density matrices was completed with comprehensible one-dimensional NMR spectra. The experimental results agree well with the theoretical predictions.

  16. A Randomized Exchange Algorithm for Computing Optimal Approximate Designs of Experiments

    KAUST Repository

    Harman, Radoslav; Filová , Lenka; Richtarik, Peter

    2018-01-01

    We propose a class of subspace ascent methods for computing optimal approximate designs that covers both existing as well as new and more efficient algorithms. Within this class of methods, we construct a simple, randomized exchange algorithm (REX). Numerical comparisons suggest that the performance of REX is comparable or superior to the performance of state-of-the-art methods across a broad range of problem structures and sizes. We focus on the most commonly used criterion of D-optimality that also has applications beyond experimental design, such as the construction of the minimum volume ellipsoid containing a given set of data-points. For D-optimality, we prove that the proposed algorithm converges to the optimum. We also provide formulas for the optimal exchange of weights in the case of the criterion of A-optimality. These formulas enable one to use REX for computing A-optimal and I-optimal designs.

  17. Solving the wind farm layout optimization problem using random search algorithm

    DEFF Research Database (Denmark)

    Feng, Ju; Shen, Wen Zhong

    2015-01-01

    , in which better results than the genetic algorithm (GA) and the old version of the RS algorithm are obtained. Second it is applied to the Horns Rev 1 WF, and the optimized layouts obtain a higher power production than its original layout, both for the real scenario and for two constructed scenarios......Wind farm (WF) layout optimization is to find the optimal positions of wind turbines (WTs) inside a WF, so as to maximize and/or minimize a single objective or multiple objectives, while satisfying certain constraints. In this work, a random search (RS) algorithm based on continuous formulation....... In this application, it is also found that in order to get consistent and reliable optimization results, up to 360 or more sectors for wind direction have to be used. Finally, considering the inevitable inter-annual variations in the wind conditions, the robustness of the optimized layouts against wind condition...

  18. A Randomized Exchange Algorithm for Computing Optimal Approximate Designs of Experiments

    KAUST Repository

    Harman, Radoslav

    2018-01-17

    We propose a class of subspace ascent methods for computing optimal approximate designs that covers both existing as well as new and more efficient algorithms. Within this class of methods, we construct a simple, randomized exchange algorithm (REX). Numerical comparisons suggest that the performance of REX is comparable or superior to the performance of state-of-the-art methods across a broad range of problem structures and sizes. We focus on the most commonly used criterion of D-optimality that also has applications beyond experimental design, such as the construction of the minimum volume ellipsoid containing a given set of data-points. For D-optimality, we prove that the proposed algorithm converges to the optimum. We also provide formulas for the optimal exchange of weights in the case of the criterion of A-optimality. These formulas enable one to use REX for computing A-optimal and I-optimal designs.

  19. Enhancing inpatient psychotherapeutic treatment with online self-help: study protocol for a randomized controlled trial.

    Science.gov (United States)

    Zwerenz, Rüdiger; Becker, Jan; Knickenberg, Rudolf J; Hagen, Karin; Dreier, Michael; Wölfling, Klaus; Beutel, Manfred E

    2015-03-17

    Depression is one of the most debilitating and costly mental disorders. There is increasing evidence for the efficacy of online self-help in alleviating depression. Knowledge regarding the options of combining online self-help with inpatient psychotherapy is still limited. Therefore, we plan to evaluate an evidence-based self-help program (deprexis®; Gaia AG, Hamburg, Germany) to improve the efficacy of inpatient psychotherapy and to maintain treatment effects in the aftercare period. Depressed patients (n = 240) with private internet access aged between 18 and 65 are recruited during psychosomatic inpatient treatment. Participants are randomized to an intervention or control group at the beginning of inpatient treatment. The intervention group (n = 120) is offered an online self-help program with 12 weekly tasks, beginning during the inpatient treatment. The control group (n = 120) obtains access to an online platform with weekly updated information on depression for the same duration. Assessments are conducted at the beginning (T0) and the end of inpatient treatment (T1), at the end of intervention (T2) and 6 months after randomization (T3). The primary outcome is the depression score measured by the Beck Depression Inventory-II at T2. Secondary outcome measures include anxiety, self-esteem, quality of life, dysfunctional cognitions and work ability. We expect the intervention group to benefit from additional online self-help during inpatient psychotherapy and to maintain the benefits during follow-up. This could be an important approach to develop future concepts of inpatient psychotherapy. ClinicalTrials.gov Identifier: NCT02196896 (registered on 16 July 2014).

  20. Guided Online or Face-to-Face Cognitive Behavioral Treatment for Insomnia: A Randomized Wait-List Controlled Trial.

    Science.gov (United States)

    Lancee, Jaap; van Straten, Annemieke; Morina, Nexhmedin; Kaldo, Viktor; Kamphuis, Jan H

    2016-01-01

    To compare the efficacy of guided online and individual face-to-face cognitive behavioral treatment for insomnia (CBT-I) to a wait-list condition. A randomized controlled trial comparing three conditions: guided online; face-to-face; wait-list. Posttest measurements were administered to all conditions, along with 3- and 6-mo follow-up assessments to the online and face-to-face conditions. Ninety media-recruited participants meeting the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) criteria for insomnia were randomly allocated to either guided online CBT-I (n = 30), individual face-to-face CBT-I (n = 30), or wait-list (n = 30). At post-assessment, the online (Cohen d = 1.2) and face-to-face (Cohen d = 2.3) intervention groups showed significantly larger treatment effects than the wait-list group on insomnia severity (insomnia severity index). Large treatment effects were also found for the sleep diary estimates (except for total sleep time), and anxiety and depression measures (for depression only in the face-to-face condition). Face-to-face treatment yielded a statistically larger treatment effect (Cohen d = 0.9) on insomnia severity than the online condition at all time points. In addition, a moderate differential effect size favoring face-to-face treatment emerged at the 3- and 6-mo follow-up on all sleep diary estimates. Face-to-face treatment further outperformed online treatment on depression and anxiety outcomes. These data show superior performance of face-to-face treatment relative to online treatment. Yet, our results also suggest that online treatment may offer a potentially cost-effective alternative to and complement face-to-face treatment. Clinicaltrials.gov, NCT01955850. A commentary on this article appears in this issue on page 13. © 2016 Associated Professional Sleep Societies, LLC.

  1. Development of an On-Line Self-Tuning FPGA-PID-PWM Control Algorithm Design for DC-DC Buck Converter in Mobile Applications

    Directory of Open Access Journals (Sweden)

    Ahmed Sabah Al-Araji

    2017-08-01

    Full Text Available This paper presents a new development of an on-line hybrid self-tuning control algorithm for the Field Programmable Gate Array - Proportional Integral Derivative - Pulse Width Modulation (FPGA-PID-PWM) controller of a DC-DC buck converter used in battery-powered mobile applications. The main goal of this work is to propose a structure for the hybrid Bees-PSO tuning control algorithm that can quickly and precisely search the global regions in order to obtain optimal gain parameters for the proposed controller, generating the best voltage control action to achieve the desired performance of the buck converter output. Matlab simulation results and experimental work with the Xilinx Integrated Software Environment (ISE) development tool show the robustness and effectiveness of the proposed on-line hybrid Bees-PSO tuning control algorithm in terms of obtaining a smooth and unsaturated voltage control action and minimizing the tracking error of the buck converter output voltage. Moreover, the number of fitness evaluations is reduced.
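
    The record focuses on how the controller gains are tuned (the hybrid Bees-PSO search); for context, the sketch below shows only a generic discrete PID law of the kind those gains feed, producing a clamped duty-cycle command for the PWM stage. It is a textbook form with assumed names, not the FPGA implementation.

        def pid_step(error, state, kp, ki, kd, dt, u_min=0.0, u_max=1.0):
            """One discrete PID update returning a PWM duty cycle in [u_min, u_max].

            state is a dict carrying the integral term and the previous error
            between calls; the integrator is frozen while the output saturates.
            """
            integral = state.get("integral", 0.0) + error * dt
            derivative = (error - state.get("prev_error", error)) / dt
            u = kp * error + ki * integral + kd * derivative
            u_sat = min(max(u, u_min), u_max)
            if u_sat == u:                  # simple anti-windup
                state["integral"] = integral
            state["prev_error"] = error
            return u_sat

    In the paper's setting, kp, ki and kd would be the candidate gains proposed by the Bees-PSO search and scored by a tracking-error fitness.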

  2. Robust online tracking via adaptive samples selection with saliency detection

    Science.gov (United States)

    Yan, Jia; Chen, Xi; Zhu, QiuPing

    2013-12-01

    Online tracking has shown to be successful in tracking previously unknown objects. However, two important factors lead to the drift problem of online tracking: one is how to select exactly labeled samples even when the target locations are inaccurate, and the other is how to handle confusors which have features similar to the target. In this article, we propose a robust online tracking algorithm with adaptive sample selection based on saliency detection to overcome the drift problem. To deal with the problem of degrading the classifiers using misaligned samples, we introduce a saliency detection method into our tracking problem. Saliency maps and the strong classifiers are combined to extract the most correct positive samples. Our approach employs a simple yet effective saliency detection algorithm based on image spectral residual analysis. Furthermore, instead of using random patches as the negative samples, we propose a reasonable selection criterion, in which both the saliency confidence and similarity are considered, with the benefit that confusors in the surrounding background are incorporated into the classifier update process before drift occurs. The tracking task is formulated as binary classification via an online boosting framework. Experimental results on several challenging video sequences demonstrate the accuracy and stability of our tracker.

  3. A subexponential lower bound for the Random Facet algorithm for Parity Games

    DEFF Research Database (Denmark)

    Friedmann, Oliver; Hansen, Thomas Dueholm; Zwick, Uri

    2011-01-01

    Parity Games form an intriguing family of infinite duration games whose solution is equivalent to the solution of important problems in automatic verification and automata theory. They also form a very natural subclass of Deterministic Mean Payoff Games, which in turn is a very natural subclass...... of turn-based Stochastic Mean Payoff Games. It is a major open problem whether these game families can be solved in polynomial time. The currently theoretically fastest algorithms for the solution of all these games are adaptations of the randomized algorithms of Kalai and of Matoušek, Sharir and Welzl...... for LP-type problems, an abstract generalization of linear programming. The expected running time of both algorithms is subexponential in the size of the game, i.e., $2^{O(\sqrt{n \log n})}$, where n is the number of vertices in the game. We focus in this paper on the algorithm of Matoušek, Sharir and Welzl...

  4. Evolution of online algorithms in ATLAS and CMS in Run2

    CERN Document Server

    Tomei, Thiago R. F. P.

    2017-01-01

    The Large Hadron Collider has entered a new era in Run 2, with a centre-of-mass energy of 13 TeV and instantaneous luminosity reaching $\mathcal{L}_\textrm{inst} = 1.4\times 10^{34}$ cm$^{-2}$ s$^{-1}$ for pp collisions. In order to cope with those harsher conditions, the ATLAS and CMS collaborations have improved their online selection infrastructure to keep a high efficiency for important physics processes -- like W, Z and Higgs bosons in their leptonic and diphoton modes -- whilst keeping the size of the data stream compatible with the bandwidth and disk resources available. In this note, we describe some of the trigger improvements implemented for Run 2, including algorithms for the selection of electrons, photons, muons and hadronic final states.

  5. An Online Scheduling Algorithm with Advance Reservation for Large-Scale Data Transfers

    Energy Technology Data Exchange (ETDEWEB)

    Balman, Mehmet; Kosar, Tevfik

    2010-05-20

    Scientific applications and experimental facilities generate massive data sets that need to be transferred to remote collaborating sites for sharing, processing, and long term storage. In order to support increasingly data-intensive science, next generation research networks have been deployed to provide high-speed on-demand data access between collaborating institutions. In this paper, we present a practical model for online data scheduling in which data movement operations are scheduled in advance for end-to-end high performance transfers. In our model, the data scheduler interacts with reservation managers and data transfer nodes in order to reserve available bandwidth to guarantee completion of jobs that are accepted and confirmed to satisfy the preferred time constraint given by the user. Our methodology improves current systems by allowing researchers and higher level meta-schedulers to use data placement as a service, where they can plan ahead and reserve scheduler time in advance for their data movement operations. We have implemented our algorithm and examined possible techniques for incorporation into current reservation frameworks. Performance measurements confirm that the proposed algorithm is efficient and scalable.
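
    The scheduling model is described only in prose; the admission check below is a small, assumption-laden illustration of the advance-reservation idea: a transfer is accepted on a link only if the bandwidth already committed to overlapping reservations leaves room for it over the whole requested window. It is not the authors' scheduler.

        def can_reserve(existing, start, end, bandwidth, link_capacity):
            """Check whether `bandwidth` can be booked on the window [start, end).

            existing: list of (start, end, bandwidth) reservations already accepted.
            Committed bandwidth is piecewise constant, so it suffices to test the
            window start and every reservation start inside the window.
            """
            points = sorted({start} | {s for s, e, b in existing if start <= s < end})
            for t in points:
                committed = sum(b for s, e, b in existing if s <= t < e)
                if committed + bandwidth > link_capacity:
                    return False
            return True

        # can_reserve([(0, 10, 5)], 5, 15, 6, 10) -> False (5 + 6 exceeds capacity until t = 10)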

  6. An Online Dictionary Learning-Based Compressive Data Gathering Algorithm in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Donghao Wang

    2016-09-01

    Full Text Available To adapt to sense signals of enormous diversities and dynamics, and to decrease the reconstruction errors caused by ambient noise, a novel online dictionary learning method-based compressive data gathering (ODL-CDG) algorithm is proposed. The proposed dictionary is learned from a two-stage iterative procedure, alternately changing between a sparse coding step and a dictionary update step. The self-coherence of the learned dictionary is introduced as a penalty term during the dictionary update procedure. The dictionary is also constrained with sparse structure. It’s theoretically demonstrated that the sensing matrix satisfies the restricted isometry property (RIP) with high probability. In addition, the lower bound of necessary number of measurements for compressive sensing (CS) reconstruction is given. Simulation results show that the proposed ODL-CDG algorithm can enhance the recovery accuracy in the presence of noise, and reduce the energy consumption in comparison with other dictionary based data gathering methods.

  7. SU-D-201-06: Random Walk Algorithm Seed Localization Parameters in Lung Positron Emission Tomography (PET) Images

    Energy Technology Data Exchange (ETDEWEB)

    Soufi, M [Shahid Beheshti University, Tehran, Tehran (Iran, Islamic Republic of); Asl, A Kamali [Shahid Beheshti University, Tehran, Iran., Tehran, Tehran (Iran, Islamic Republic of); Geramifar, P [Shariati Hospital, Tehran, Iran., Tehran, Tehran (Iran, Islamic Republic of)

    2015-06-15

    Purpose: The objective of this study was to find the best seed localization parameters in random walk algorithm application to lung tumor delineation in Positron Emission Tomography (PET) images. Methods: PET images suffer from statistical noise and therefore tumor delineation in these images is a challenging task. The random walk algorithm, a graph-based image segmentation technique, has reliable image noise robustness. Also, its fast computation and fast editing characteristics make it powerful for clinical purposes. We implemented the random walk algorithm using MATLAB code. The validation and verification of the algorithm have been done by a 4D-NCAT phantom with spherical lung lesions of different diameters from 20 to 90 mm (with incremental steps of 10 mm) and different tumor-to-background ratios of 4:1 and 8:1. STIR (Software for Tomographic Image Reconstruction) has been applied to reconstruct the phantom PET images with different pixel sizes of 2×2×2 and 4×4×4 mm³. For seed localization, we selected pixels with different maximum Standardized Uptake Value (SUVmax) percentages, at least (70%, 80%, 90% and 100%) SUVmax for foreground seeds and up to (20% to 55%, 5% increment) SUVmax for background seeds. Also, for investigation of algorithm performance on clinical data, 19 patients with lung tumor were studied. The resulting contours from the algorithm have been compared with manual contouring by a nuclear medicine expert as ground truth. Results: Phantom and clinical lesion segmentation have shown that the best segmentation results were obtained by selecting the pixels with at least 70% SUVmax as foreground seeds and pixels up to 30% SUVmax as background seeds, respectively. A mean Dice Similarity Coefficient of 94% ± 5% (83% ± 6%) and a mean Hausdorff Distance of 1 (2) pixels have been obtained for the phantom (clinical) study. Conclusion: The accurate results of the random walk algorithm in PET image segmentation assure its application for radiation treatment planning and

  8. SU-D-201-06: Random Walk Algorithm Seed Localization Parameters in Lung Positron Emission Tomography (PET) Images

    International Nuclear Information System (INIS)

    Soufi, M; Asl, A Kamali; Geramifar, P

    2015-01-01

    Purpose: The objective of this study was to find the best seed localization parameters in random walk algorithm application to lung tumor delineation in Positron Emission Tomography (PET) images. Methods: PET images suffer from statistical noise and therefore tumor delineation in these images is a challenging task. The random walk algorithm, a graph-based image segmentation technique, has reliable image noise robustness. Also, its fast computation and fast editing characteristics make it powerful for clinical purposes. We implemented the random walk algorithm using MATLAB code. The validation and verification of the algorithm have been done by a 4D-NCAT phantom with spherical lung lesions of different diameters from 20 to 90 mm (with incremental steps of 10 mm) and different tumor-to-background ratios of 4:1 and 8:1. STIR (Software for Tomographic Image Reconstruction) has been applied to reconstruct the phantom PET images with different pixel sizes of 2×2×2 and 4×4×4 mm³. For seed localization, we selected pixels with different maximum Standardized Uptake Value (SUVmax) percentages, at least (70%, 80%, 90% and 100%) SUVmax for foreground seeds and up to (20% to 55%, 5% increment) SUVmax for background seeds. Also, for investigation of algorithm performance on clinical data, 19 patients with lung tumor were studied. The resulting contours from the algorithm have been compared with manual contouring by a nuclear medicine expert as ground truth. Results: Phantom and clinical lesion segmentation have shown that the best segmentation results were obtained by selecting the pixels with at least 70% SUVmax as foreground seeds and pixels up to 30% SUVmax as background seeds, respectively. A mean Dice Similarity Coefficient of 94% ± 5% (83% ± 6%) and a mean Hausdorff Distance of 1 (2) pixels have been obtained for the phantom (clinical) study. Conclusion: The accurate results of the random walk algorithm in PET image segmentation assure its application for radiation treatment planning and
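
    As a small illustration of the reported seed-localization rule (at least 70% of SUVmax for foreground, up to 30% for background), the sketch below builds boolean seed masks from a PET volume; the random-walk segmentation itself is not shown and the function name is an assumption.

        import numpy as np

        def select_seeds(pet_volume, fg_fraction=0.70, bg_fraction=0.30):
            """Pick foreground/background seed masks for a random-walk segmentation.

            Voxels at or above fg_fraction * SUVmax become foreground seeds and
            voxels at or below bg_fraction * SUVmax become background seeds,
            mirroring the thresholds reported as best in the study.
            """
            suv_max = float(pet_volume.max())
            foreground = pet_volume >= fg_fraction * suv_max
            background = pet_volume <= bg_fraction * suv_max
            return foreground, background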

  9. The gradient boosting algorithm and random boosting for genome-assisted evaluation in large data sets.

    Science.gov (United States)

    González-Recio, O; Jiménez-Montero, J A; Alenda, R

    2013-01-01

    In the next few years, with the advent of high-density single nucleotide polymorphism (SNP) arrays and genome sequencing, genomic evaluation methods will need to deal with a large number of genetic variants and an increasing sample size. The boosting algorithm is a machine-learning technique that may alleviate the drawbacks of dealing with such large data sets. This algorithm combines different predictors in a sequential manner with some shrinkage on them; each predictor is applied consecutively to the residuals from the committee formed by the previous ones to form a final prediction based on a subset of covariates. Here, a detailed description is provided and examples using a toy data set are included. A modification of the algorithm called "random boosting" was proposed to increase predictive ability and decrease computation time of genome-assisted evaluation in large data sets. Random boosting uses a random selection of markers to add a subsequent weak learner to the predictive model. These modifications were applied to a real data set composed of 1,797 bulls genotyped for 39,714 SNP. Deregressed proofs of 4 yield traits and 1 type trait from January 2009 routine evaluations were used as dependent variables. A 2-fold cross-validation scenario was implemented. Sires born before 2005 were used as a training sample (1,576 and 1,562 for production and type traits, respectively), whereas younger sires were used as a testing sample to evaluate predictive ability of the algorithm on yet-to-be-observed phenotypes. Comparison with the original algorithm was provided. The predictive ability of the algorithm was measured as Pearson correlations between observed and predicted responses. Further, estimated bias was computed as the average difference between observed and predicted phenotypes. The results showed that the modification of the original boosting algorithm could be run in 1% of the time used with the original algorithm and with negligible differences in accuracy
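
    The abstract describes "random boosting" as residual boosting in which each weak learner is fitted on a random subset of markers; the sketch below is a generic illustration of that idea, with shallow regression trees from scikit-learn as an assumed weak learner, not the authors' implementation.

        import numpy as np
        from sklearn.tree import DecisionTreeRegressor

        def random_boosting(X, y, n_rounds=100, shrinkage=0.1, marker_fraction=0.1,
                            max_depth=3, seed=0):
            """Residual boosting where every weak learner sees a random marker subset."""
            rng = np.random.default_rng(seed)
            residual = y.astype(float).copy()
            n_markers = X.shape[1]
            m = max(1, int(marker_fraction * n_markers))
            ensemble = []
            for _ in range(n_rounds):
                cols = rng.choice(n_markers, size=m, replace=False)   # random marker subset
                learner = DecisionTreeRegressor(max_depth=max_depth).fit(X[:, cols], residual)
                residual -= shrinkage * learner.predict(X[:, cols])   # shrink and update residuals
                ensemble.append((cols, learner))
            return ensemble

        def boosting_predict(ensemble, X_new, shrinkage=0.1):
            """Sum the shrunken contributions of all weak learners."""
            pred = np.zeros(X_new.shape[0])
            for cols, learner in ensemble:
                pred += shrinkage * learner.predict(X_new[:, cols])
            return pred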

  10. Random projections and the optimization of an algorithm for phase retrieval

    International Nuclear Information System (INIS)

    Elser, Veit

    2003-01-01

    Iterative phase retrieval algorithms typically employ projections onto constraint subspaces to recover the unknown phases in the Fourier transform of an image, or, in the case of x-ray crystallography, the electron density of a molecule. For a general class of algorithms, where the basic iteration is specified by the difference map, solutions are associated with fixed points of the map, the attractive character of which determines the effectiveness of the algorithm. The behaviour of the difference map near fixed points is controlled by the relative orientation of the tangent spaces of the two constraint subspaces employed by the map. Since the dimensionalities involved are always large in practical applications, it is appropriate to use random matrix theory ideas to analyse the average-case convergence at fixed points. Optimal values of the γ parameters of the difference map are found which differ somewhat from the values previously obtained on the assumption of orthogonal tangent spaces

  11. Focusing light through random photonic layers by four-element division algorithm

    Science.gov (United States)

    Fang, Longjie; Zhang, Xicheng; Zuo, Haoyi; Pang, Lin

    2018-02-01

    The propagation of waves in turbid media is a fundamental problem of optics with vast applications. Optical phase optimization approaches for focusing light through turbid media using phase control algorithms have been widely studied in recent years due to the rapid development of spatial light modulators. The existing approaches include element-based algorithms (the stepwise sequential algorithm and the continuous sequential algorithm) and whole-element optimization approaches (the partitioning algorithm, the transmission matrix approach and the genetic algorithm). The advantage of element-based approaches is that the phase contribution of each element is very clear; however, because the intensity contribution of each element to the focal point is small, especially for a large number of elements, determining the optimal phase for a single element is difficult. In other words, the signal-to-noise ratio of the measurement is weak, possibly leading to local maxima during the optimization. As for whole-element optimization approaches, all elements are employed in the optimization, so the signal-to-noise ratio during the optimization is improved. However, because more randomness is introduced into the process, such optimizations take more time to converge than single-element-based approaches. Building on the advantages of both single-element-based and whole-element optimization approaches, we propose the FEDA approach. Comparisons with the existing approaches show that FEDA takes only one third of the measurement time to reach the optimum, which means that FEDA is promising in practical applications such as deep tissue imaging.
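
    For reference, the sketch below illustrates the element-based baseline mentioned above, the stepwise sequential algorithm: each SLM element is swept over a set of trial phases while the rest stay fixed, and the phase giving the brightest focus is kept. FEDA itself (which works on groups of four elements) is not reproduced here; measure_intensity is an assumed measurement callback.

        import numpy as np

        def stepwise_sequential_optimize(measure_intensity, n_elements, n_phase_steps=8):
            """Element-based phase optimisation (stepwise sequential algorithm).

            measure_intensity(phases) returns the focal intensity for a phase mask
            given as an array of length n_elements (radians).
            """
            phases = np.zeros(n_elements)
            trial = np.linspace(0.0, 2.0 * np.pi, n_phase_steps, endpoint=False)
            for i in range(n_elements):
                intensities = []
                for phi in trial:
                    phases[i] = phi
                    intensities.append(measure_intensity(phases))
                phases[i] = trial[int(np.argmax(intensities))]   # keep the best phase
            return phases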

  12. Research on electricity consumption forecast based on mutual information and random forests algorithm

    Science.gov (United States)

    Shi, Jing; Shi, Yunli; Tan, Jian; Zhu, Lei; Li, Hu

    2018-02-01

    Traditional power forecasting models cannot efficiently take various factors into account, nor can they identify the relevant factors. In this paper, mutual information from information theory and the random forests algorithm from artificial intelligence are introduced into medium and long-term electricity demand prediction. Mutual information can identify highly related factors based on the value of the average mutual information between a variety of variables and electricity demand; different industries may be highly associated with different variables. The random forests algorithm was then used to build forecasting models for the different industries according to their correlated factors. Electricity consumption data from Jiangsu Province are taken as a practical example, and the above method is compared with methods that do not use mutual information or the industry breakdown. The simulation results show that the above method is scientific, effective, and can provide higher prediction accuracy.
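
    A compact way to see the pipeline described above is the hypothetical scikit-learn sketch below: mutual information ranks the candidate drivers, and a random forest is then fitted on the top-ranked ones for one industry's demand series. The top_k cutoff and hyper-parameters are assumptions, not values from the paper.

        import numpy as np
        from sklearn.feature_selection import mutual_info_regression
        from sklearn.ensemble import RandomForestRegressor

        def mi_rf_forecast(X, y, X_future, top_k=5, random_state=0):
            """Select the most informative drivers by mutual information,
            then fit a random forest on them to forecast demand."""
            mi = mutual_info_regression(X, y, random_state=random_state)
            keep = np.argsort(mi)[::-1][:top_k]         # indices of high-MI factors
            model = RandomForestRegressor(n_estimators=500, random_state=random_state)
            model.fit(X[:, keep], y)
            return model.predict(X_future[:, keep]), keep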

  13. Feasibility randomized-controlled trial of online Acceptance and Commitment Therapy for patients with complex chronic pain in the United Kingdom.

    Science.gov (United States)

    Scott, W; Chilcot, J; Guildford, B; Daly-Eichenhardt, A; McCracken, L M

    2018-04-28

    Acceptance and Commitment Therapy (ACT) has growing support for chronic pain. However, more accessible treatment delivery is needed. This study evaluated the feasibility of online ACT for patients with complex chronic pain in the United Kingdom to determine whether a larger trial is justified. Participants with chronic pain and clinically meaningful disability and distress were randomly assigned to ACT online plus specialty medical pain management, or specialty medical management alone. Participants completed questionnaires at baseline, and 3- and 9-month post-randomization. Primary feasibility outcomes included recruitment, retention and treatment completion rates. Secondary outcomes were between-groups effects on treatment outcomes and psychological flexibility. Of 139 potential participants, 63 were eligible and randomized (45% recruitment rate). Retention rates were 76-78% for follow-up assessments. Sixty-one per cent of ACT online participants completed treatment. ACT online was less often completed by employed (44%) compared to unemployed (80%) participants. Fifty-six per cent of ACT online participants rated themselves as 'much improved' or better on a global impression of change rating, compared to only 20 per cent of control participants. Three-month effects favouring ACT online were small for functioning, medication and healthcare use, committed action and decentring, medium for mood, and large for acceptance. Small-to-medium effects were maintained for functioning, healthcare use and committed action at 9 months. Online ACT for patients with chronic pain in the United Kingdom appears feasible to study in a larger efficacy trial. Some adjustments to treatment and trial procedures are warranted, particularly to enhance engagement among employed participants. This study supports the feasibility of online Acceptance and Commitment Therapy for chronic pain in the United Kingdom and a larger efficacy trial. Refinements to treatment delivery, particularly to

  14. An ELM Based Online Soft Sensing Approach for Alumina Concentration Detection

    Directory of Open Access Journals (Sweden)

    Sen Zhang

    2015-01-01

    Full Text Available The concentration of alumina in the electrolyte is of great significance during the production of aluminum; it may affect the stability of the aluminum reduction cell and the current efficiency. However, the concentration of alumina is hard to detect online because of the special conditions in the aluminum reduction cell. At present, there is a lack of fast and accurate soft sensing methods for alumina concentration, and existing methods cannot meet the needs of online measurement. In this paper, a novel soft sensing method based on a modified extreme learning machine (MELM) for online measurement of the alumina concentration is proposed. The modified ELM algorithm is based on enhanced random search, which is called the incremental extreme learning machine in some references. It randomly chooses the input weights and analytically determines the output weights without manual intervention. The simulation results show that the approach can give more accurate estimations of alumina concentration with faster learning speed compared with other methods such as BP and SVM.
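
    The incremental, enhanced-random-search variant in the record is not reproduced here; the sketch below shows only the basic ELM recipe it builds on: random input weights and biases, a nonlinear hidden layer, and output weights obtained analytically by a least-squares (pseudoinverse) solve.

        import numpy as np

        def train_elm(X, y, n_hidden=100, seed=0):
            """Basic extreme learning machine: random hidden layer, analytic output weights."""
            rng = np.random.default_rng(seed)
            W = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_hidden))   # random input weights
            b = rng.uniform(-1.0, 1.0, size=n_hidden)                 # random biases
            H = np.tanh(X @ W + b)                                    # hidden-layer outputs
            beta = np.linalg.pinv(H) @ y                              # least-squares output weights
            return W, b, beta

        def elm_predict(X, W, b, beta):
            return np.tanh(X @ W + b) @ beta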

  15. Online Planning Algorithm

    Science.gov (United States)

    Rabideau, Gregg R.; Chien, Steve A.

    2010-01-01

    AVA v2 software selects goals for execution from a set of goals that oversubscribe shared resources. The term goal refers to a science or engineering request to execute a possibly complex command sequence, such as image targets or ground-station downlinks. Developed as an extension to the Virtual Machine Language (VML) execution system, the software enables onboard and remote goal triggering through the use of an embedded, dynamic goal set that can oversubscribe resources. From the set of conflicting goals, a subset must be chosen that maximizes a given quality metric, which in this case is strict priority selection. A goal can never be pre-empted by a lower priority goal, and high-level goals can be added, removed, or updated at any time, and the "best" goals will be selected for execution. The software addresses the issue of re-planning that must be performed in a short time frame by the embedded system where computational resources are constrained. In particular, the algorithm addresses problems with well-defined goal requests without temporal flexibility that oversubscribes available resources. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion allowing requests to be changed or added at the last minute. Thereby enabling shorter response times and greater autonomy for the system under control.
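
    The quality metric described above is strict priority selection over goals that oversubscribe shared resources; the toy sketch below shows that selection rule in isolation (one pass in priority order, accepting a goal only if every resource it needs is still available). The data layout and field names are assumptions; the flight software also handles timing and incremental re-planning, which are omitted.

        def select_goals(goals, capacities):
            """Greedy strict-priority selection of goals sharing limited resources.

            goals: list of dicts like {"name": ..., "priority": int, "usage": {res: amount}};
            capacities: dict mapping resource name -> available amount.
            A goal is accepted only if it fits in what higher-priority goals left over.
            """
            remaining = dict(capacities)
            accepted = []
            for goal in sorted(goals, key=lambda g: g["priority"]):   # lower number = higher priority
                if all(goal["usage"].get(r, 0) <= remaining.get(r, 0) for r in goal["usage"]):
                    for r, amount in goal["usage"].items():
                        remaining[r] -= amount
                    accepted.append(goal["name"])
            return accepted

        # goals = [{"name": "image_A", "priority": 1, "usage": {"power": 3}},
        #          {"name": "downlink", "priority": 2, "usage": {"power": 8}}]
        # select_goals(goals, {"power": 10}) -> ["image_A"]  (downlink no longer fits)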

  16. Online evolution reconstruction from a single measurement record with random time intervals for quantum communication

    Science.gov (United States)

    Zhou, Hua; Su, Yang; Wang, Rong; Zhu, Yong; Shen, Huiping; Pu, Tao; Wu, Chuanxin; Zhao, Jiyong; Zhang, Baofu; Xu, Zhiyong

    2017-10-01

    Online reconstruction of a time-variant quantum state from the encoding/decoding results of quantum communication is addressed by developing a method of evolution reconstruction from a single measurement record with random time intervals. A time-variant two-dimensional state is reconstructed on the basis of recovering its expectation value functions of three nonorthogonal projectors from a random single measurement record, which is composed from the discarded qubits of the six-state protocol. The simulated results prove that our method is robust to typical metro quantum channels. Our work extends the Fourier-based method of evolution reconstruction from the version for a regular single measurement record with equal time intervals to a unified one, which can be applied to arbitrary single measurement records. The proposed protocol of evolution reconstruction runs concurrently with the one of quantum communication, which can facilitate the online quantum tomography.

  17. Effectiveness of the random sequential absorption algorithm in the analysis of volume elements with nanoplatelets

    DEFF Research Database (Denmark)

    Pontefisso, Alessandro; Zappalorto, Michele; Quaresimin, Marino

    2016-01-01

    In this work, a study of the Random Sequential Absorption (RSA) algorithm in the generation of nanoplatelet Volume Elements (VEs) is carried out. The effect of the algorithm input parameters on the reinforcement distribution is studied through the implementation of statistical tools, showing...... that the platelet distribution is systematically affected by these parameters. The consequence is that a parametric analysis of the VE input parameters may be biased by hidden differences in the filler distribution. The same statistical tools used in the analysis are implemented in a modified RSA algorithm...

  18. Algorithms for the on-line travelling salesman

    NARCIS (Netherlands)

    Ausiello, G.; Feuerstein, E.; Leonardi, S.; Stougie, L.; Talamo, M.

    1999-01-01

    In this paper the problem of efficiently serving a sequence of requests presented in an on-line fashion located at points of a metric space is considered. We call this problem the On-Line Travelling Salesman Problem (OLTSP). It has a variety of relevant applications in logistics and robotics. We

  19. Baseline correction combined partial least squares algorithm and its application in on-line Fourier transform infrared quantitative analysis.

    Science.gov (United States)

    Peng, Jiangtao; Peng, Silong; Xie, Qiong; Wei, Jiping

    2011-04-01

    In order to eliminate the lower order polynomial interferences, a new quantitative calibration algorithm "Baseline Correction Combined Partial Least Squares (BCC-PLS)", which combines baseline correction and conventional PLS, is proposed. By embedding baseline correction constraints into PLS weights selection, the proposed calibration algorithm overcomes the uncertainty in baseline correction and can meet the requirement of on-line attenuated total reflectance Fourier transform infrared (ATR-FTIR) quantitative analysis. The effectiveness of the algorithm is evaluated by the analysis of glucose and marzipan ATR-FTIR spectra. BCC-PLS algorithm shows improved prediction performance over PLS. The root mean square error of cross-validation (RMSECV) on marzipan spectra for the prediction of the moisture is found to be 0.53%, w/w (range 7-19%). The sugar content is predicted with a RMSECV of 2.04%, w/w (range 33-68%). Copyright © 2011 Elsevier B.V. All rights reserved.

  20. A hybrid flower pollination algorithm based modified randomized location for multi-threshold medical image segmentation.

    Science.gov (United States)

    Wang, Rui; Zhou, Yongquan; Zhao, Chengyan; Wu, Haizhou

    2015-01-01

    Multi-threshold image segmentation is a powerful image processing technique that is used for the preprocessing of pattern recognition and computer vision. However, traditional multilevel thresholding methods are computationally expensive because they involve exhaustively searching the optimal thresholds to optimize the objective functions. To overcome this drawback, this paper proposes a flower pollination algorithm with a randomized location modification. The proposed algorithm is used to find optimal threshold values for maximizing Otsu's objective functions with regard to eight medical grayscale images. When benchmarked against other state-of-the-art evolutionary algorithms, the new algorithm proves itself to be robust and effective through numerical experimental results including Otsu's objective values and standard deviations.
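
    The objective being maximized is Otsu's between-class variance over multiple thresholds; the sketch below spells that objective out on a grey-level histogram and, purely for illustration, finds the optimum by brute force, which is exactly the exhaustive search the flower pollination algorithm is meant to avoid on larger problems. Names and the two-threshold default are assumptions.

        import numpy as np
        from itertools import combinations

        def otsu_between_class_variance(hist, thresholds):
            """Otsu's objective for a histogram of grey-level counts and a threshold tuple."""
            p = np.asarray(hist, dtype=float)
            p = p / p.sum()
            levels = np.arange(len(p))
            edges = [0, *thresholds, len(p)]
            mu_total = (p * levels).sum()
            sigma_b = 0.0
            for lo, hi in zip(edges[:-1], edges[1:]):
                w = p[lo:hi].sum()                      # class probability
                if w > 0:
                    mu = (p[lo:hi] * levels[lo:hi]).sum() / w
                    sigma_b += w * (mu - mu_total) ** 2
            return sigma_b

        def exhaustive_multi_otsu(hist, n_thresholds=2):
            """Brute-force maximiser of the Otsu objective (what a metaheuristic replaces)."""
            return max(combinations(range(1, len(hist)), n_thresholds),
                       key=lambda t: otsu_between_class_variance(hist, t))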

  1. Moving-Horizon Modulating Functions-Based Algorithm for Online Source Estimation in a First Order Hyperbolic PDE

    KAUST Repository

    Asiri, Sharefa M.; Elmetennani, Shahrazed; Laleg-Kirati, Taous-Meriem

    2017-01-01

    In this paper, an on-line estimation algorithm of the source term in a first order hyperbolic PDE is proposed. This equation describes heat transport dynamics in concentrated solar collectors where the source term represents the received energy. This energy depends on the solar irradiance intensity and the collector characteristics affected by the environmental changes. Control strategies are usually used to enhance the efficiency of heat production; however, these strategies often depend on the source term which is highly affected by the external working conditions. Hence, efficient source estimation methods are required. The proposed algorithm is based on modulating functions method where a moving horizon strategy is introduced. Numerical results are provided to illustrate the performance of the proposed estimator in open and closed loops.

  2. Moving-Horizon Modulating Functions-Based Algorithm for Online Source Estimation in a First Order Hyperbolic PDE

    KAUST Repository

    Asiri, Sharefa M.

    2017-08-22

    In this paper, an on-line estimation algorithm of the source term in a first order hyperbolic PDE is proposed. This equation describes heat transport dynamics in concentrated solar collectors where the source term represents the received energy. This energy depends on the solar irradiance intensity and the collector characteristics affected by the environmental changes. Control strategies are usually used to enhance the efficiency of heat production; however, these strategies often depend on the source term which is highly affected by the external working conditions. Hence, efficient source estimation methods are required. The proposed algorithm is based on modulating functions method where a moving horizon strategy is introduced. Numerical results are provided to illustrate the performance of the proposed estimator in open and closed loops.

  3. Engineering Online and In-person Social Networks for Physical Activity: A Randomized Trial

    Science.gov (United States)

    Rovniak, Liza S.; Kong, Lan; Hovell, Melbourne F.; Ding, Ding; Sallis, James F.; Ray, Chester A.; Kraschnewski, Jennifer L.; Matthews, Stephen A.; Kiser, Elizabeth; Chinchilli, Vernon M.; George, Daniel R.; Sciamanna, Christopher N.

    2016-01-01

    Background Social networks can influence physical activity, but little is known about how best to engineer online and in-person social networks to increase activity. Purpose To conduct a randomized trial based on the Social Networks for Activity Promotion model to assess the incremental contributions of different procedures for building social networks on objectively-measured outcomes. Methods Physically inactive adults (n = 308, age, 50.3 (SD = 8.3) years, 38.3% male, 83.4% overweight/obese) were randomized to 1 of 3 groups. The Promotion group evaluated the effects of weekly emailed tips emphasizing social network interactions for walking (e.g., encouragement, informational support); the Activity group evaluated the incremental effect of adding an evidence-based online fitness walking intervention to the weekly tips; and the Social Networks group evaluated the additional incremental effect of providing access to an online networking site for walking, and prompting walking/activity across diverse settings. The primary outcome was mean change in accelerometer-measured moderate-to-vigorous physical activity (MVPA), assessed at 3 and 9 months from baseline. Results Participants increased their MVPA by 21.0 mins/week, 95% CI [5.9, 36.1], p = .005, at 3 months, and this change was sustained at 9 months, with no between-group differences. Conclusions Although the structure of procedures for targeting social networks varied across intervention groups, the functional effect of these procedures on physical activity was similar. Future research should evaluate if more powerful reinforcers improve the effects of social network interventions. Trial Registration Number NCT01142804 PMID:27405724

  4. An online spaced-education game for global continuing medical education: a randomized trial.

    Science.gov (United States)

    Kerfoot, B Price; Baker, Harley

    2012-07-01

    To assess the efficacy of a "spaced-education" game as a method of continuing medical education (CME) among physicians across the globe. The efficacy of educational games for the CME has yet to be established. We created a novel online educational game by incorporating game mechanics into "spaced education" (SE), an evidence-based method of online CME. This 34-week randomized trial enrolled practicing urologists across the globe. The SE game consisted of 40 validated multiple-choice questions and explanations on urology clinical guidelines. Enrollees were randomized to 2 cohorts: cohort A physicians were sent 2 questions via an automated e-mail system every 2 days, and cohort B physicians were sent 4 questions every 4 days. Adaptive game mechanics re-sent the questions in 12 or 24 days if answered incorrectly and correctly, respectively. Questions expired if not answered on time (appointment dynamic). Physicians retired questions by answering each correctly twice-in-a-row (progression dynamic). Competition was fostered by posting relative performance among physicians. Main outcome measures were baseline scores (percentage of questions answered correctly upon initial presentation) and completion scores (percentage of questions retired). A total of 1470 physicians from 63 countries enrolled. Median baseline score was 48% (interquartile range [IQR] 17) and, in multivariate analyses, was found to vary significantly by region (Cohen dmax = 0.31, P = 0.001) and age (dmax = 0.41, P games. An online SE game can substantially improve guidelines knowledge and is a well-accepted method of global CME delivery.

  5. Lattice Boltzmann simulation of flow and heat transfer in random porous media constructed by simulated annealing algorithm

    International Nuclear Information System (INIS)

    Liu, Minghua; Shi, Yong; Yan, Jiashu; Yan, Yuying

    2017-01-01

    Highlights: • A numerical capability combining the lattice Boltzmann method with simulated annealing algorithm is developed. • Digitized representations of random porous media are constructed using limited but meaningful statistical descriptors. • Pore-scale flow and heat transfer information in random porous media is obtained by the lattice Boltzmann simulation. • The effective properties at the representative elementary volume scale are well specified using appropriate upscale averaging. - Abstract: In this article, the lattice Boltzmann (LB) method for transport phenomena is combined with the simulated annealing (SA) algorithm for digitized porous-medium construction to study flow and heat transfer in random porous media. Importantly, in contrast to previous studies which simplify porous media as arrays of regularly shaped objects or effective pore networks, the LB + SA method in this article can model statistically meaningful random porous structures in irregular morphology, and simulate pore-scale transport processes inside them. Pore-scale isothermal flow and heat conduction in a set of constructed random porous media characterized by statistical descriptors were then simulated through use of the LB + SA method. The corresponding averages over the computational volumes and the related effective transport properties were also computed based on these pore scale numerical results. Good agreement between the numerical results and theoretical predictions or experimental data on the representative elementary volume scale was found. The numerical simulations in this article demonstrate combination of the LB method with the SA algorithm is a viable and powerful numerical strategy for simulating transport phenomena in random porous media in complex geometries.

  6. Extracting random numbers from quantum tunnelling through a single diode.

    Science.gov (United States)

    Bernardo-Gavito, Ramón; Bagci, Ibrahim Ethem; Roberts, Jonathan; Sexton, James; Astbury, Benjamin; Shokeir, Hamzah; McGrath, Thomas; Noori, Yasir J; Woodhead, Christopher S; Missous, Mohamed; Roedig, Utz; Young, Robert J

    2017-12-19

    Random number generation is crucial in many aspects of everyday life, as online security and privacy depend ultimately on the quality of random numbers. Many current implementations are based on pseudo-random number generators, but information security requires true random numbers for sensitive applications like key generation in banking, defence or even social media. True random number generators are systems whose outputs cannot be determined, even if their internal structure and response history are known. Sources of quantum noise are thus ideal for this application due to their intrinsic uncertainty. In this work, we propose using resonant tunnelling diodes as practical true random number generators based on a quantum mechanical effect. The output of the proposed devices can be directly used as a random stream of bits or can be further distilled using randomness extraction algorithms, depending on the application.
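
    As a concrete illustration of the "further distilled using randomness extraction algorithms" step, the sketch below applies the classic von Neumann extractor to a biased bit stream. The simulated `raw` bits merely stand in for digitized tunnelling-current noise; the actual post-processing used with resonant tunnelling diodes may differ.

```python
import random

def von_neumann_extract(raw_bits):
    """Classic von Neumann extractor: maps bit pairs 01 -> 0, 10 -> 1 and discards 00 and 11.
    Removes bias from independent but biased raw bits at the cost of output rate."""
    out = []
    it = iter(raw_bits)
    for a, b in zip(it, it):          # consume the stream two bits at a time
        if a != b:
            out.append(a)             # 01 -> 0, 10 -> 1
    return out

# a simulated biased source standing in for the digitized quantum noise
random.seed(1)
raw = [1 if random.random() < 0.7 else 0 for _ in range(10000)]   # about 70% ones
clean = von_neumann_extract(raw)
print(len(clean), "bits kept, fraction of ones:", sum(clean) / len(clean))  # close to 0.5
```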

  7. Randomized Clinical Trial of Online Parent Training for Behavior Problems After Early Brain Injury.

    Science.gov (United States)

    Wade, Shari L; Cassedy, Amy E; Shultz, Emily L; Zang, Huaiyu; Zhang, Nanhua; Kirkwood, Michael W; Stancin, Terry; Yeates, Keith O; Taylor, H Gerry

    2017-11-01

    To evaluate the effectiveness of Internet-based Interacting Together Everyday: Recovery After Childhood TBI (I-InTERACT) versus abbreviated parent training (Express) or access to online resources (internet resources comparison [IRC]) in improving parenting skills and decreasing behavior problems after early traumatic brain injury (TBI). In this randomized, controlled, clinical trial, 113 children 3 to 9 years old previously hospitalized for moderate to severe TBI were randomly assigned to receive Express (n = 36), I-InTERACT (n = 39), or IRC (n = 38). Express included 7 online parent skills sessions, and I-InTERACT delivered 10 to 14 sessions addressing parenting skills, TBI education, stress, and anger management. The 2 interventions coupled online modules with therapist coaching through a Health Insurance Portability and Accountability Act-compliant Skype link. The IRC group received access to online TBI and parent skills resources. Co-primary outcomes were blinded ratings of parenting skills and parent report of behavior problems and problem intensity on the Eyberg Child Behavior Inventory (ECBI). Outcomes were assessed before treatment and 3 and 6 months after treatment, with the latter constituting the primary endpoint. The Express and I-InTERACT groups displayed higher levels of positive parenting at follow-up. Only the I-InTERACT group had lower levels of negative parenting at 6 months. The Express group had lower ECBI intensity scores than the IRC group. Baseline symptom levels moderated improvements; children in the Express and I-InTERACT groups with higher baseline symptoms demonstrated greater improvements than those in the IRC group. Changes in parenting skills mediated improvements in behavior in those with higher baseline symptoms. Brief online parent skills training can effectively decrease behavior problems after early TBI in children with existing behavioral symptoms. Clinical trial registration information-Internet-based Interacting Together

  8. Modified Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Surafel Luleseged Tilahun

    2012-01-01

    Full Text Available Firefly algorithm is one of the new metaheuristic algorithms for optimization problems. The algorithm is inspired by the flashing behavior of fireflies. In the algorithm, randomly generated solutions are considered as fireflies, and brightness is assigned depending on their performance on the objective function. One of the rules used to construct the algorithm is that a firefly will be attracted to a brighter firefly and, if there is no brighter firefly, it will move randomly. In this paper we modify this random movement of the brightest firefly by generating random directions in order to determine the best direction in which the brightness increases; if no such direction is generated, the firefly remains in its current position. Furthermore, the assignment of attractiveness is modified in such a way that the effect of the objective function is magnified. The simulation results show that the modified firefly algorithm performs better than the standard one in finding the best solution with smaller CPU time.
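
    A minimal sketch of the described modification of the brightest firefly's move is given below: several random unit directions are sampled, the best candidate is taken only if it actually improves the objective, and otherwise the firefly stays put. The toy objective `sphere`, the step size and the number of trial directions are illustrative choices, not values from the paper, and the full swarm update is omitted.

```python
import numpy as np

rng = np.random.default_rng(42)

def sphere(x):
    """Toy objective to minimize; a brighter firefly has a smaller objective value."""
    return float(np.sum(x ** 2))

def best_firefly_move(x_best, f, step=0.1, n_dirs=10):
    """Sample several random unit directions for the brightest firefly and move along the
    best one only if it improves brightness; otherwise keep the current position."""
    f_best = f(x_best)
    candidates = []
    for _ in range(n_dirs):
        d = rng.normal(size=x_best.shape)
        d /= np.linalg.norm(d)                       # random unit direction
        candidates.append(x_best + step * d)
    values = [f(c) for c in candidates]
    k = int(np.argmin(values))
    return candidates[k] if values[k] < f_best else x_best

x = rng.uniform(-5.0, 5.0, size=3)                   # stand-in for the current best firefly
for _ in range(500):
    x = best_firefly_move(x, sphere)
print("objective after 500 modified moves:", sphere(x))
```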

  9. Online Feature Transformation Learning for Cross-Domain Object Category Recognition.

    Science.gov (United States)

    Zhang, Xuesong; Zhuang, Yan; Wang, Wei; Pedrycz, Witold

    2017-06-09

    In this paper, we introduce a new research problem termed online feature transformation learning in the context of multiclass object category recognition. The learning of a feature transformation is viewed as learning a global similarity metric function in an online manner. We first consider the problem of online learning a feature transformation matrix expressed in the original feature space and propose an online passive aggressive feature transformation algorithm. Then these original features are mapped to kernel space and an online single kernel feature transformation (OSKFT) algorithm is developed to learn a nonlinear feature transformation. Based on the OSKFT and the existing Hedge algorithm, a novel online multiple kernel feature transformation algorithm is also proposed, which can further improve the performance of online feature transformation learning in large-scale application. The classifier is trained with k nearest neighbor algorithm together with the learned similarity metric function. Finally, we experimentally examined the effect of setting different parameter values in the proposed algorithms and evaluate the model performance on several multiclass object recognition data sets. The experimental results demonstrate the validity and good performance of our methods on cross-domain and multiclass object recognition application.

  10. Implementation of on-line data reduction algorithms in the CMS Endcap Preshower Data Concentrator Cards

    CERN Document Server

    Barney, D; Kokkas, P; Manthos, N; Sidiropoulos, G; Reynaud, S; Vichoudis, P

    2007-01-01

    The CMS Endcap Preshower (ES) sub-detector comprises 4288 silicon sensors, each containing 32 strips. The data are transferred from the detector to the counting room via 1208 optical fibres running at 800Mbps. Each fibre carries data from two, three or four sensors. For the readout of the Preshower, a VME-based system, the Endcap Preshower Data Concentrator Card (ES-DCC), is currently under development. The main objective of each readout board is to acquire on-detector data from up to 36 optical links, perform on-line data reduction via zero suppression and pass the concentrated data to the CMS event builder. This document presents the conceptual design of the Reduction Algorithms as well as their implementation in the ES-DCC FPGAs. These algorithms, as implemented in the ES-DCC, result in a data-reduction factor of 20.
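
    The zero-suppression idea itself can be sketched in a few lines: subtract a per-strip pedestal and keep only (strip index, amplitude) pairs above a threshold. The threshold, data layout and pedestal handling below are illustrative assumptions, not the actual ES-DCC firmware logic, which is implemented in FPGA fabric rather than software.

```python
def zero_suppress(strip_adc, pedestals, threshold=3):
    """Keep only (strip_index, amplitude) pairs whose pedestal-subtracted ADC value exceeds
    the threshold; everything else is dropped, which is what shrinks the event size."""
    reduced = []
    for i, (adc, ped) in enumerate(zip(strip_adc, pedestals)):
        amp = adc - ped
        if amp > threshold:
            reduced.append((i, amp))
    return reduced

# a 32-strip sensor with mostly pedestal-level noise and two genuine hits
adc = [100, 101, 99, 180, 100, 98, 102, 100] + [100] * 23 + [150]
ped = [100] * 32
print(zero_suppress(adc, ped))   # [(3, 80), (31, 50)]
```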

  11. Implementation of On-Line Data Reduction Algorithms in the CMS Endcap Preshower Data Concentrator Card

    CERN Document Server

    Barney, David; Kokkas, Panagiotis; Manthos, Nikolaos; Reynaud, Serge; Sidiropoulos, Georgios; Vichoudis, Paschalis

    2006-01-01

    The CMS Endcap Preshower (ES) sub-detector comprises 4288 silicon sensors, each containing 32 strips. The data are transferred from the detector to the counting room via 1208 optical fibres running at 800Mbps. Each fibre carries data from 2, 3 or 4 sensors. For the readout of the Preshower, a VME-based system - the Endcap Preshower Data Concentrator Card (ES-DCC) - is currently under development. The main objective of each readout board is to acquire on-detector data from up to 36 optical links, perform on-line data reduction (zero suppression) and pass the concentrated data to the CMS event builder. This document presents the conceptual design of the Reduction Algorithms as well as their implementation in the ES-DCC FPGAs. The algorithms implemented in the ES-DCC resulted in a reduction factor of ~20.

  12. Generalized random walk algorithm for the numerical modeling of complex diffusion processes

    CERN Document Server

    Vamos, C; Vereecken, H

    2003-01-01

    A generalized form of the random walk algorithm to simulate diffusion processes is introduced. Unlike the usual approach, at a given time all the particles from a grid node are simultaneously scattered using the Bernoulli repartition. This procedure saves memory and computing time and no restrictions are imposed for the maximum number of particles to be used in simulations. We prove that for simple diffusion the method generalizes the finite difference scheme and gives the same precision for large enough number of particles. As an example, simulations of diffusion in random velocity field are performed and the main features of the stochastic mathematical model are numerically tested.
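
    The key difference from a particle-by-particle walk can be sketched as follows: at each step, all particles sitting at a node are split among "stay", "left" and "right" in a single multinomial draw (the Bernoulli repartition), so the cost per step depends on the number of occupied nodes rather than on the number of particles. The 1-D geometry, boundary handling and probabilities below are illustrative, not the exact scheme of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def grw_step(counts, stay=0.5):
    """One step of a 1-D global random walk: all particles at each node are scattered at once,
    with the stay/left/right split drawn from a single multinomial distribution."""
    new = np.zeros_like(counts)
    p = [stay, (1.0 - stay) / 2.0, (1.0 - stay) / 2.0]
    for i, n in enumerate(counts):
        if n == 0:
            continue
        n_stay, n_left, n_right = rng.multinomial(n, p)
        new[i] += n_stay
        new[max(i - 1, 0)] += n_left                 # reflecting boundaries for the sketch
        new[min(i + 1, len(counts) - 1)] += n_right
    return new

# a delta of one million particles spreads into a Gaussian-like diffusion profile
counts = np.zeros(101, dtype=np.int64)
counts[50] = 1_000_000
for _ in range(200):
    counts = grw_step(counts)
print(counts[45:56])
```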

  13. Generalized random walk algorithm for the numerical modeling of complex diffusion processes

    International Nuclear Information System (INIS)

    Vamos, Calin; Suciu, Nicolae; Vereecken, Harry

    2003-01-01

    A generalized form of the random walk algorithm to simulate diffusion processes is introduced. Unlike the usual approach, at a given time all the particles from a grid node are simultaneously scattered using the Bernoulli repartition. This procedure saves memory and computing time and no restrictions are imposed for the maximum number of particles to be used in simulations. We prove that for simple diffusion the method generalizes the finite difference scheme and gives the same precision for large enough number of particles. As an example, simulations of diffusion in random velocity field are performed and the main features of the stochastic mathematical model are numerically tested

  14. Fault diagnosis in spur gears based on genetic algorithm and random forest

    Science.gov (United States)

    Cerrada, Mariela; Zurita, Grover; Cabrera, Diego; Sánchez, René-Vinicio; Artés, Mariano; Li, Chuan

    2016-03-01

    There are growing demands for condition-based monitoring of gearboxes, and therefore new methods to improve the reliability, effectiveness and accuracy of gear fault detection ought to be evaluated. Feature selection is still an important aspect of machine learning-based diagnosis in order to reach good performance of the diagnostic models. On the other hand, random forest classifiers are suitable models in industrial environments where large data samples are not usually available for training such diagnostic models. The main aim of this research is to build up a robust system for multi-class fault diagnosis in spur gears by selecting the best set of condition parameters in the time, frequency and time-frequency domains, which are extracted from vibration signals. The diagnostic system is built using genetic algorithms and a classifier based on random forest, in a supervised environment. The original set of condition parameters is reduced by around 66% of its initial size by using genetic algorithms, while still achieving an acceptable classification precision of over 97%. The approach is tested on real vibration signals by considering several fault classes, one of them being an incipient fault, under different running conditions of load and velocity.
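
    A generic wrapper of this kind (not the authors' implementation) can be sketched with a binary-mask genetic algorithm whose fitness is the cross-validated accuracy of a random forest trained on the selected columns. The synthetic dataset below stands in for the vibration condition parameters, and the GA operators and sizes are illustrative choices.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# synthetic stand-in for time/frequency/time-frequency condition parameters
X, y = make_classification(n_samples=300, n_features=30, n_informative=8, random_state=0)

def fitness(mask):
    """Cross-validated random-forest accuracy using only the selected columns."""
    if mask.sum() == 0:
        return 0.0
    clf = RandomForestClassifier(n_estimators=30, random_state=0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

def ga_select(pop_size=12, n_gen=10, p_mut=0.05):
    pop = rng.integers(0, 2, size=(pop_size, X.shape[1]))        # binary selection masks
    for _ in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        # binary tournament selection
        winners = [max(rng.choice(pop_size, 2, replace=False), key=lambda i: scores[i])
                   for _ in range(pop_size)]
        parents = pop[winners]
        # one-point crossover on consecutive pairs
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            c = rng.integers(1, X.shape[1])
            children[i, c:], children[i + 1, c:] = parents[i + 1, c:], parents[i, c:]
        # bit-flip mutation
        children[rng.random(children.shape) < p_mut] ^= 1
        pop = children
    scores = np.array([fitness(ind) for ind in pop])
    return pop[scores.argmax()].astype(bool), scores.max()

mask, acc = ga_select()
print(f"{mask.sum()} of {X.shape[1]} parameters kept, CV accuracy {acc:.3f}")
```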

  15. Engineering Online and In-Person Social Networks for Physical Activity: A Randomized Trial.

    Science.gov (United States)

    Rovniak, Liza S; Kong, Lan; Hovell, Melbourne F; Ding, Ding; Sallis, James F; Ray, Chester A; Kraschnewski, Jennifer L; Matthews, Stephen A; Kiser, Elizabeth; Chinchilli, Vernon M; George, Daniel R; Sciamanna, Christopher N

    2016-12-01

    Social networks can influence physical activity, but little is known about how best to engineer online and in-person social networks to increase activity. The purpose of this study was to conduct a randomized trial based on the Social Networks for Activity Promotion model to assess the incremental contributions of different procedures for building social networks on objectively measured outcomes. Physically inactive adults (n = 308; mean age 50.3 (SD = 8.3) years; 38.3% male; 83.4% overweight/obese) were randomized to one of three groups. The Promotion group evaluated the effects of weekly emailed tips emphasizing social network interactions for walking (e.g., encouragement, informational support); the Activity group evaluated the incremental effect of adding an evidence-based online fitness walking intervention to the weekly tips; and the Social Networks group evaluated the additional incremental effect of providing access to an online networking site for walking as well as prompting walking/activity across diverse settings. The primary outcome was mean change in accelerometer-measured moderate-to-vigorous physical activity (MVPA), assessed at 3 and 9 months from baseline. Participants increased their MVPA by 21.0 min/week, 95% CI [5.9, 36.1], p = .005, at 3 months, and this change was sustained at 9 months, with no between-group differences. Although the structure of procedures for targeting social networks varied across intervention groups, the functional effect of these procedures on physical activity was similar. Future research should evaluate whether more powerful reinforcers improve the effects of social network interventions. The trial was registered with ClinicalTrials.gov (NCT01142804).

  16. Online Problem Solving for Adolescent Brain Injury: A Randomized Trial of 2 Approaches.

    Science.gov (United States)

    Wade, Shari L; Taylor, Hudson Gerry; Yeates, Keith Owen; Kirkwood, Michael; Zang, Huaiyu; McNally, Kelly; Stancin, Terry; Zhang, Nanhua

    Adolescent traumatic brain injury (TBI) contributes to deficits in executive functioning and behavior, but few evidence-based treatments exist. We conducted a randomized clinical trial comparing Teen Online Problem Solving with Family (TOPS-Family) with Teen Online Problem Solving with Teen Only (TOPS-TO) or the access to Internet Resources Comparison (IRC) group. Children, aged 11 to 18 years, who sustained a complicated mild-to-severe TBI in the previous 18 months were randomly assigned to the TOPS-Family (49), TOPS-TO (51), or IRC group (52). Parent and self-report measures of externalizing behaviors and executive functioning were completed before treatment and 6 months later. Treatment effects were examined using linear regression models, adjusting for baseline symptom levels. Age, maternal education, and family stresses were examined as moderators. The TOPS-Family group had lower levels of parent-reported executive dysfunction at follow-up than the TOPS-TO group, and differences between the TOPS-Family and IRC groups approached significance. Maternal education moderated improvements in parent-reported externalizing behaviors, with less educated parents in the TOPS-Family group reporting fewer symptoms. On the self-report Behavior Rating Inventory of Executive Functions, treatment efficacy varied with the level of parental stresses. The TOPS-Family group reported greater improvements at low stress levels, whereas the TOPS-TO group reported greater improvement at high-stress levels. The TOPS-TO group did not have significantly lower symptoms than the IRC group on any comparison. Findings support the efficacy of online family problem solving to address executive dysfunction and improve externalizing behaviors among youth with TBI from less advantaged households. Treatment with the teen alone may be indicated in high-stress families.

  17. Online Manifold Regularization by Dual Ascending Procedure

    Directory of Open Access Journals (Sweden)

    Boliang Sun

    2013-01-01

    Full Text Available We propose a novel online manifold regularization framework based on the notion of duality in constrained optimization. The Fenchel conjugate of hinge functions is key to transferring manifold regularization from the offline to the online setting in this paper. Our algorithms are derived by gradient ascent in the dual function. For practical purposes, we propose two buffering strategies and two sparse approximations to reduce the computational complexity. Detailed experiments verify the utility of our approaches. An important conclusion is that our online MR algorithms can handle the settings where the target hypothesis is not fixed but drifts with the sequence of examples. We also recap and draw connections to earlier works. This paper paves the way for the design and analysis of online manifold regularization algorithms.

  18. Recruitment to online therapies for depression: pilot cluster randomized controlled trial.

    Science.gov (United States)

    Jones, Ray B; Goldsmith, Lesley; Hewson, Paul; Williams, Christopher J

    2013-03-05

    Raising awareness of online cognitive behavioral therapy (CBT) could benefit many people with depression, but we do not know how purchasing online advertising compares to placing free links from relevant local websites in increasing uptake. To pilot a cluster randomized controlled trial (RCT) comparing purchase of Google AdWords with placing free website links in raising awareness of online CBT resources for depression in order to better understand research design issues. We compared two online interventions with a control without intervention. The pilot RCT had 4 arms, each with 4 British postcode areas: (A) geographically targeted AdWords, (B) adverts placed on local websites by contacting website owners and requesting links be added, (C) both interventions, (D) control. Participants were directed to our research project website linking to two freely available online CBT resource sites (Moodgym and Living Life To The Full (LLTTF)) and two other depression support sites. We used data from (1) AdWords, (2) Google Analytics for our project website and for LLTTF, and (3) research project website. We compared two outcomes: (1) numbers with depression accessing the research project website, and then chose an onward link to one of the two CBT websites, and (2) numbers registering with LLTTF. We documented costs, and explored intervention and assessment methods to make general recommendations to inform researchers aiming to use similar methodologies in future studies. Trying to place local website links appeared much less cost effective than AdWords and although may prove useful for service delivery, was not worth pursuing in the context of the current study design. Our AdWords intervention was effective in recruiting people to the project website but our location targeting "leaked" and was not as geographically specific as claimed. The impact on online CBT was also diluted by offering participants other choices of destinations. Measuring the impact on LLTTF use was

  19. Scientific writing: a randomized controlled trial comparing standard and on-line instruction

    Directory of Open Access Journals (Sweden)

    Phadtare Amruta

    2009-05-01

    Full Text Available Abstract Background Writing plays a central role in the communication of scientific ideas and is therefore a key aspect of researcher education, ultimately determining the success and long-term sustainability of their careers. Despite the growing popularity of e-learning, we are not aware of any existing study comparing on-line vs. traditional classroom-based methods for teaching scientific writing. Methods Forty-eight participants from medical, nursing and physiotherapy backgrounds from the US and Brazil were randomly assigned to two groups (n = 24 per group): an on-line writing workshop group (on-line group), in which participants used virtual communication, Google Docs and standard writing templates, and a standard writing guidance training (standard group), in which participants received standard instruction without the aid of virtual communication and writing templates. Two outcomes were evaluated: manuscript quality, assessed using the scores obtained on a six-subgroup quality scale (SSQS) as the primary outcome measure, and satisfaction, rated on a Likert scale. To control for observer variability, inter-observer reliability was assessed using Fleiss's kappa. A post-hoc analysis comparing rates of communication between mentors and participants was performed. Nonparametric tests were used to assess intervention efficacy. Results Excellent inter-observer reliability among three reviewers was found, with an Intraclass Correlation Coefficient (ICC) agreement = 0.931882 and ICC consistency = 0.932485. The on-line group had better overall manuscript quality (p = 0.0017; SSQS average score 75.3 ± 14.21, ranging from 37 to 94) compared to the standard group (47.27 ± 14.64, ranging from 20 to 72). Participant satisfaction was higher in the on-line group (4.3 ± 0.73) compared to the standard group (3.09 ± 1.11) (p = 0.001). The standard group also had fewer communication events compared to the on-line group (0.91 ± 0.81 vs. 2.05 ± 1.23; p = 0.0219). Conclusion Our protocol

  20. Online cross-validation-based ensemble learning.

    Science.gov (United States)

    Benkeser, David; Ju, Cheng; Lendle, Sam; van der Laan, Mark

    2018-01-30

    Online estimators update a current estimate with a new incoming batch of data without having to revisit past data, thereby providing streaming estimates that are scalable to big data. We develop flexible, ensemble-based online estimators of an infinite-dimensional target parameter, such as a regression function, in the setting where data are generated sequentially by a common conditional data distribution given summary measures of the past. This setting encompasses a wide range of time-series models and, as a special case, models for independent and identically distributed data. Our estimator considers a large library of candidate online estimators and uses online cross-validation to identify the algorithm with the best performance. We show that by basing estimates on the cross-validation-selected algorithm, we are asymptotically guaranteed to perform as well as the true, unknown best-performing algorithm. We provide extensions of this approach including online estimation of the optimal ensemble of candidate online estimators. We illustrate excellent performance of our methods using simulations and a real data example where we make streaming predictions of infectious disease incidence using data from a large database. Copyright © 2017 John Wiley & Sons, Ltd.
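
    The selection mechanism can be illustrated with a toy version of online (prequential) cross-validation: each candidate online learner is scored on every new batch before being trained on it, and the learner with the smallest cumulative loss is selected. The candidate library and the data-generating process below are illustrative assumptions; the paper's ensemble extension and theoretical guarantees are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

class OnlineSGDRegressor:
    """Tiny online linear learner updated by one gradient step per incoming batch."""
    def __init__(self, dim, lr):
        self.w = np.zeros(dim)
        self.lr = lr
    def predict(self, X):
        return X @ self.w
    def update(self, X, y):
        grad = 2 * X.T @ (X @ self.w - y) / len(y)
        self.w -= self.lr * grad

# candidate library: the same learner with different learning rates (hypothetical choices)
library = {name: OnlineSGDRegressor(dim=5, lr=lr)
           for name, lr in [("slow", 0.01), ("medium", 0.05), ("fast", 0.2)]}
cum_loss = {name: 0.0 for name in library}

true_w = rng.normal(size=5)
for t in range(200):                       # data arrive sequentially in small batches
    X = rng.normal(size=(10, 5))
    y = X @ true_w + 0.1 * rng.normal(size=10)
    for name, model in library.items():
        # online cross-validation: score on the new batch *before* training on it
        cum_loss[name] += np.mean((model.predict(X) - y) ** 2)
        model.update(X, y)

best = min(cum_loss, key=cum_loss.get)
print("selected learner:", best, {k: round(v, 2) for k, v in cum_loss.items()})
```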

  1. Online variational Bayesian filtering-based mobile target tracking in wireless sensor networks.

    Science.gov (United States)

    Zhou, Bingpeng; Chen, Qingchun; Li, Tiffany Jing; Xiao, Pei

    2014-11-11

    The received signal strength (RSS)-based online tracking for a mobile node in wireless sensor networks (WSNs) is investigated in this paper. Firstly, a multi-layer dynamic Bayesian network (MDBN) is introduced to characterize the target mobility with either directional or undirected movement. In particular, it is proposed to employ the Wishart distribution to approximate the time-varying RSS measurement precision's randomness due to the target movement. It is shown that the proposed MDBN offers a more general analysis model via incorporating the underlying statistical information of both the target movement and observations, which can be utilized to improve the online tracking capability by exploiting the Bayesian statistics. Secondly, based on the MDBN model, a mean-field variational Bayesian filtering (VBF) algorithm is developed to realize the online tracking of a mobile target in the presence of nonlinear observations and time-varying RSS precision, wherein the traditional Bayesian filtering scheme cannot be directly employed. Thirdly, a joint optimization between the real-time velocity and its prior expectation is proposed to enable online velocity tracking in the proposed online tracking scheme. Finally, the associated Bayesian Cramer-Rao Lower Bound (BCRLB) analysis and numerical simulations are conducted. Our analysis unveils that, by exploiting the potential state information via the general MDBN model, the proposed VBF algorithm provides a promising solution to the online tracking of a mobile node in WSNs. In addition, it is shown that the final tracking accuracy linearly scales with its expectation when the RSS measurement precision is time-varying.

  2. An improved multileaving algorithm for online ranker evaluation

    DEFF Research Database (Denmark)

    Brost, Brian; Cox, Ingemar Johansson; Seldin, Yevgeny

    2016-01-01

    Online ranker evaluation is a key challenge in information retrieval. An important task in the online evaluation of rankers is using implicit user feedback for inferring preferences between rankers. Interleaving methods have been found to be efficient and sensitive, i.e. they can quickly detect even...

  3. Optimization of Aero Engine Acceleration Control in Combat State Based on Genetic Algorithms

    Science.gov (United States)

    Li, Jie; Fan, Ding; Sreeram, Victor

    2012-03-01

    In order to fully exploit the potential of the aero engine and improve acceleration performance in the combat state, an on-line optimized controller based on genetic algorithms is designed for an aero engine. To test the validity of the presented control method, detailed joint simulation tests of the designed controller and the aero engine model are performed over the whole flight envelope. Simulation test results show that the presented control algorithm has the characteristics of rapid convergence and high efficiency and can fully exploit the acceleration performance potential of the aero engine. Compared with the former controller, the designed on-line optimized controller (DOOC) can improve the security of the acceleration process and greatly enhance the aero engine thrust over the whole range of the flight envelope; the thrust increases by an average of 8.1% in the randomly selected working states. A plane which adopts the DOOC can acquire a better fighting advantage in the combat state.

  4. How Effective Is Algorithm-Guided Treatment for Depressed Inpatients? Results from the Randomized Controlled Multicenter German Algorithm Project 3 Trial.

    Science.gov (United States)

    Adli, Mazda; Wiethoff, Katja; Baghai, Thomas C; Fisher, Robert; Seemüller, Florian; Laakmann, Gregor; Brieger, Peter; Cordes, Joachim; Malevani, Jaroslav; Laux, Gerd; Hauth, Iris; Möller, Hans-Jürgen; Kronmüller, Klaus-Thomas; Smolka, Michael N; Schlattmann, Peter; Berger, Maximilian; Ricken, Roland; Stamm, Thomas J; Heinz, Andreas; Bauer, Michael

    2017-09-01

    Treatment algorithms are considered key to improving outcomes by enhancing the quality of care. This is the first randomized controlled study to evaluate the clinical effect of algorithm-guided treatment in inpatients with major depressive disorder. Inpatients, aged 18 to 70 years, with major depressive disorder from 10 German psychiatric departments were randomized to 5 different treatment arms (from 2000 to 2005), 3 of which were standardized stepwise drug treatment algorithms (ALGO). The fourth arm proposed medications and provided less specific recommendations based on a computerized documentation and expert system (CDES); the fifth arm received treatment as usual (TAU). ALGO included 3 different second-step strategies: lithium augmentation (ALGO LA), antidepressant dose-escalation (ALGO DE), and switch to a different antidepressant (ALGO SW). Time to remission (21-item Hamilton Depression Rating Scale ≤9) was the primary outcome. Time to remission was significantly shorter for ALGO DE (n=91) compared with both TAU (n=84) (HR=1.67; P=.014) and CDES (n=79) (HR=1.59; P=.031), and for ALGO SW (n=89) compared with both TAU (HR=1.64; P=.018) and CDES (HR=1.56; P=.038). For both ALGO LA (n=86) and ALGO DE, fewer antidepressant medications were needed to achieve remission than for CDES or TAU. Overall, algorithm-guided treatment is associated with shorter times and fewer medication changes to achieve remission in depressed inpatients than treatment as usual or computerized medication choice guidance. © The Author 2017. Published by Oxford University Press on behalf of CINP.

  5. Online Adaboost-Based Parameterized Methods for Dynamic Distributed Network Intrusion Detection.

    Science.gov (United States)

    Hu, Weiming; Gao, Jun; Wang, Yanguo; Wu, Ou; Maybank, Stephen

    2014-01-01

    Current network intrusion detection systems lack adaptability to the frequently changing network environments. Furthermore, intrusion detection in the new distributed architectures is now a major requirement. In this paper, we propose two online Adaboost-based intrusion detection algorithms. In the first algorithm, a traditional online Adaboost process is used where decision stumps are used as weak classifiers. In the second algorithm, an improved online Adaboost process is proposed, and online Gaussian mixture models (GMMs) are used as weak classifiers. We further propose a distributed intrusion detection framework, in which a local parameterized detection model is constructed in each node using the online Adaboost algorithm. A global detection model is constructed in each node by combining the local parametric models using a small number of samples in the node. This combination is achieved using an algorithm based on particle swarm optimization (PSO) and support vector machines. The global model in each node is used to detect intrusions. Experimental results show that the improved online Adaboost process with GMMs obtains a higher detection rate and a lower false alarm rate than the traditional online Adaboost process that uses decision stumps. Both the algorithms outperform existing intrusion detection algorithms. It is also shown that our PSO- and SVM-based algorithm effectively combines the local detection models into the global model in each node; the global model in a node can handle the intrusion types that are found in other nodes, without sharing the samples of these intrusion types.
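
    The flavour of online boosting can be conveyed with an Oza-style sketch: each incoming example is replayed k ~ Poisson(lambda) times to each weak learner, and lambda is re-weighted according to whether that learner got the example right. The simple running-mean stumps and the synthetic stream below are stand-ins; the paper's parameterized models, GMM weak learners and PSO/SVM combination stage are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(5)

class OnlineStump:
    """Weak learner on one fixed feature; threshold = midpoint of the running class means."""
    def __init__(self, feature):
        self.f = feature
        self.mean = {0: 0.0, 1: 0.0}
        self.n = {0: 0, 1: 0}
    def update(self, x, y):
        self.n[y] += 1
        self.mean[y] += (x[self.f] - self.mean[y]) / self.n[y]
    def predict(self, x):
        if self.n[0] == 0 or self.n[1] == 0:
            return 0
        thr = 0.5 * (self.mean[0] + self.mean[1])
        hi_class = 1 if self.mean[1] > self.mean[0] else 0
        return hi_class if x[self.f] > thr else 1 - hi_class

class OnlineAdaBoost:
    """Oza-style online boosting: replay each example k ~ Poisson(lam) times per weak learner
    and re-weight lam up/down depending on the learner's running error."""
    def __init__(self, n_features, n_learners=10):
        self.learners = [OnlineStump(f % n_features) for f in range(n_learners)]
        self.sc = np.full(n_learners, 1e-8)   # weighted count of correct predictions
        self.sw = np.full(n_learners, 1e-8)   # weighted count of mistakes
    def update(self, x, y):
        lam = 1.0
        for i, h in enumerate(self.learners):
            for _ in range(rng.poisson(lam)):
                h.update(x, y)
            if h.predict(x) == y:
                self.sc[i] += lam
                lam *= (self.sc[i] + self.sw[i]) / (2 * self.sc[i])
            else:
                self.sw[i] += lam
                lam *= (self.sc[i] + self.sw[i]) / (2 * self.sw[i])
    def predict(self, x):
        eps = self.sw / (self.sc + self.sw)
        alpha = np.log((1 - eps) / np.maximum(eps, 1e-8))
        votes = {0: 0.0, 1: 0.0}
        for a, h in zip(alpha, self.learners):
            votes[h.predict(x)] += max(a, 0.0)
        return max(votes, key=votes.get)

# toy stream standing in for network connection records (label 1 = "intrusion")
booster, correct = OnlineAdaBoost(n_features=4), 0
for t in range(3000):
    y = int(rng.integers(0, 2))
    x = rng.normal(size=4) + 1.5 * y          # class-dependent shift on every feature
    if t >= 500:
        correct += booster.predict(x) == y    # prequential: predict before updating
    booster.update(x, y)
print("streaming accuracy after warm-up:", round(correct / 2500, 3))
```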

  6. A randomized controlled trial of an online, modular, active learning training program for behavioral activation for depression.

    Science.gov (United States)

    Puspitasari, Ajeng J; Kanter, Jonathan W; Busch, Andrew M; Leonard, Rachel; Dunsiger, Shira; Cahill, Shawn; Martell, Christopher; Koerner, Kelly

    2017-08-01

    This randomized-controlled trial assessed the efficacy of a trainer-led, active-learning, modular, online behavioral activation (BA) training program compared with a self-paced online BA training with the same modular content. Seventy-seven graduate students (M = 30.3 years, SD = 6.09; 76.6% female) in mental health training programs were randomly assigned to receive either the trainer-led or self-paced BA training. Both trainings consisted of 4 weekly sessions covering 4 core BA strategies. Primary outcomes were changes in BA skills as measured by an objective role-play assessment and self-reported use of BA strategies. Assessments were conducted at pre-, post-, and 6-weeks after training. A series of longitudinal mixed effect models assessed changes in BA skills and a longitudinal model implemented with generalized estimating equations assessed BA use over time. Significantly greater increases in total BA skills were found in the trainer-led training condition. The trainer-led training condition also showed greater increases in all core BA skills either at posttraining, follow-up, or both. Reported use of BA strategies with actual clients increased significantly from pre- to posttraining and maintained at follow-up in both training conditions. This trial adds to the literature on the efficacy of online training as a method to disseminate BA. Online training with an active learning, modular approach may be a promising and accessible implementation strategy. Additional strategies may need to be paired with the online BA training to assure the long-term implementation and sustainability of BA in clinical practice. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  7. FPGA helix tracking algorithm for PANDA

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Yutie; Galuska, Martin; Gessler, Thomas; Kuehn, Wolfgang; Lange, Jens Soeren; Muenchow, David [II. Physikalisches Institut, University of Giessen (Germany); Ye, Hua [Institute of High Energy Physics, CAS (China); Collaboration: PANDA-Collaboration

    2016-07-01

    The PANDA detector is a general-purpose detector for physics with high-luminosity cooled antiproton beams, planned to operate at the FAIR facility in Darmstadt, Germany. The central detector includes a silicon Micro Vertex Detector (MVD) and a Straw Tube Tracker (STT). Without any hardware trigger, large amounts of raw data stream into the data acquisition system. The data reduction task is performed in the online system by reconstruction algorithms programmed on FPGAs (Field Programmable Gate Arrays) as a first level and on a farm of GPUs or PCs as a second level. One important part of the system is the online track reconstruction. In this presentation, an online tracking algorithm for helix track reconstruction in the solenoidal field is shown. The VHDL-based algorithm is tested with different types of events at different event rates. Furthermore, a study of T0 extraction from the tracking algorithm is performed. A concept of simultaneous tracking and T0 determination is presented.

  8. Online available capacity prediction and state of charge estimation based on advanced data-driven algorithms for lithium iron phosphate battery

    International Nuclear Information System (INIS)

    Deng, Zhongwei; Yang, Lin; Cai, Yishan; Deng, Hao; Sun, Liu

    2016-01-01

    The key technology of a battery management system is to estimate the battery states online, accurately and robustly. For lithium iron phosphate batteries, the relationship between state of charge and open circuit voltage has a plateau region which limits the estimation accuracy of voltage-based algorithms. The open circuit voltage hysteresis requires advanced online identification algorithms to cope with the strongly nonlinear battery model. The available capacity, as a crucial parameter, contributes to the state of charge and state of health estimation of the battery, but it is difficult to predict because it is comprehensively influenced by temperature, aging and current rates. To address the above problems, ampere-hour counting with current correction and the dual adaptive extended Kalman filter algorithm are combined to estimate model parameters and state of charge. This combination offers the advantages of a lower computational burden and greater robustness. Considering the influence of temperature and degradation, a data-driven algorithm, namely the least squares support vector machine, is implemented to predict the available capacity. The state estimation and capacity prediction methods are coupled to improve the estimation accuracy at different temperatures over the lifetime of the battery. The experimental results verify that the proposed methods have excellent state and available capacity estimation accuracy. - Highlights: • A dual adaptive extended Kalman filter is used to estimate parameters and states. • A correction term is introduced to consider the effect of current rates. • The least squares support vector machine is used to predict the available capacity. • The experimental results verify the proposed state and capacity prediction methods.

  9. OxMaR: open source free software for online minimization and randomization for clinical trials.

    Science.gov (United States)

    O'Callaghan, Christopher A

    2014-01-01

    Minimization is a valuable method for allocating participants between the control and experimental arms of clinical studies. The use of minimization reduces differences that might arise by chance between the study arms in the distribution of patient characteristics such as gender, ethnicity and age. However, unlike randomization, minimization requires real time assessment of each new participant with respect to the preceding distribution of relevant participant characteristics within the different arms of the study. For multi-site studies, this necessitates centralized computational analysis that is shared between all study locations. Unfortunately, there is no suitable freely available open source or free software that can be used for this purpose. OxMaR was developed to enable researchers in any location to use minimization for patient allocation and to access the minimization algorithm using any device that can connect to the internet such as a desktop computer, tablet or mobile phone. The software is complete in itself and requires no special packages or libraries to be installed. It is simple to set up and run over the internet using online facilities which are very low cost or even free to the user. Importantly, it provides real time information on allocation to the study lead or administrator and generates real time distributed backups with each allocation. OxMaR can readily be modified and customised and can also be used for standard randomization. It has been extensively tested and has been used successfully in a low budget multi-centre study. Hitherto, the logistical difficulties involved in minimization have precluded its use in many small studies and this software should allow more widespread use of minimization which should lead to studies with better matched control and experimental arms. OxMaR should be particularly valuable in low resource settings.
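
    The minimization logic itself (independently of OxMaR's web implementation) can be sketched as a Pocock-Simon-style rule: for each new participant, compute the marginal imbalance that would result from placing them in each arm and allocate, with a biased coin, to the arm that minimizes it. The factors, the range-based imbalance measure and the coin probability below are illustrative choices, not OxMaR's exact settings.

```python
import random

random.seed(0)
ARMS = ["control", "experimental"]
FACTORS = ["sex", "age_band", "site"]

# counts[arm][factor][level] = number of participants already allocated with that level
counts = {arm: {f: {} for f in FACTORS} for arm in ARMS}

def minimization_assign(participant, p_best=0.8):
    """Assign to the arm minimizing total marginal imbalance across factors; a biased coin
    (probability p_best of taking the best arm) keeps some randomness in the allocation."""
    imbalance = {}
    for arm in ARMS:
        total = 0
        for f in FACTORS:
            level = participant[f]
            hypothetical = [counts[a][f].get(level, 0) + (1 if a == arm else 0) for a in ARMS]
            total += max(hypothetical) - min(hypothetical)   # range as the imbalance measure
        imbalance[arm] = total
    best = min(ARMS, key=lambda a: imbalance[a])
    if imbalance[ARMS[0]] == imbalance[ARMS[1]]:
        chosen = random.choice(ARMS)                         # tie: plain randomization
    else:
        others = [a for a in ARMS if a != best]
        chosen = best if random.random() < p_best else random.choice(others)
    for f in FACTORS:
        level = participant[f]
        counts[chosen][f][level] = counts[chosen][f].get(level, 0) + 1
    return chosen

print(minimization_assign({"sex": "F", "age_band": "40-60", "site": "A"}))
print(minimization_assign({"sex": "F", "age_band": "40-60", "site": "B"}))
```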

  10. OxMaR: open source free software for online minimization and randomization for clinical trials.

    Directory of Open Access Journals (Sweden)

    Christopher A O'Callaghan

    Full Text Available Minimization is a valuable method for allocating participants between the control and experimental arms of clinical studies. The use of minimization reduces differences that might arise by chance between the study arms in the distribution of patient characteristics such as gender, ethnicity and age. However, unlike randomization, minimization requires real time assessment of each new participant with respect to the preceding distribution of relevant participant characteristics within the different arms of the study. For multi-site studies, this necessitates centralized computational analysis that is shared between all study locations. Unfortunately, there is no suitable freely available open source or free software that can be used for this purpose. OxMaR was developed to enable researchers in any location to use minimization for patient allocation and to access the minimization algorithm using any device that can connect to the internet such as a desktop computer, tablet or mobile phone. The software is complete in itself and requires no special packages or libraries to be installed. It is simple to set up and run over the internet using online facilities which are very low cost or even free to the user. Importantly, it provides real time information on allocation to the study lead or administrator and generates real time distributed backups with each allocation. OxMaR can readily be modified and customised and can also be used for standard randomization. It has been extensively tested and has been used successfully in a low budget multi-centre study. Hitherto, the logistical difficulties involved in minimization have precluded its use in many small studies and this software should allow more widespread use of minimization which should lead to studies with better matched control and experimental arms. OxMaR should be particularly valuable in low resource settings.

  11. Improving KPCA Online Extraction by Orthonormalization in the Feature Space.

    Science.gov (United States)

    Souza Filho, Joao B O; Diniz, Paulo S R

    2018-04-01

    Recently, some online kernel principal component analysis (KPCA) techniques based on the generalized Hebbian algorithm (GHA) were proposed for use in large data sets, defining kernel components using concise dictionaries automatically extracted from data. This brief proposes two new online KPCA extraction algorithms, exploiting orthogonalized versions of the GHA rule. In both the cases, the orthogonalization of kernel components is achieved by the inclusion of some low complexity additional steps to the kernel Hebbian algorithm, thus not substantially affecting the computational cost of the algorithm. Results show improved convergence speed and accuracy of components extracted by the proposed methods, as compared with the state-of-the-art online KPCA extraction algorithms.

  12. Online feature selection with streaming features.

    Science.gov (United States)

    Wu, Xindong; Yu, Kui; Ding, Wei; Wang, Hao; Zhu, Xingquan

    2013-05-01

    We propose a new online feature selection framework for applications with streaming features where the knowledge of the full feature space is unknown in advance. We define streaming features as features that flow in one by one over time whereas the number of training examples remains fixed. This is in contrast with traditional online learning methods that only deal with sequentially added observations, with little attention being paid to streaming features. The critical challenges for Online Streaming Feature Selection (OSFS) include 1) the continuous growth of feature volumes over time, 2) a large feature space, possibly of unknown or infinite size, and 3) the unavailability of the entire feature set before learning starts. In the paper, we present a novel Online Streaming Feature Selection method to select strongly relevant and nonredundant features on the fly. An efficient Fast-OSFS algorithm is proposed to improve feature selection performance. The proposed algorithms are evaluated extensively on high-dimensional datasets and also with a real-world case study on impact crater detection. Experimental results demonstrate that the algorithms achieve better compactness and higher prediction accuracy than existing streaming feature selection algorithms.

  13. Online Learning Algorithm for Time Series Forecasting Suitable for Low Cost Wireless Sensor Networks Nodes

    Directory of Open Access Journals (Sweden)

    Juan Pardo

    2015-04-01

    Full Text Available Time series forecasting is an important predictive methodology which can be applied to a wide range of problems. In particular, forecasting the indoor temperature permits an improved utilization of the HVAC (Heating, Ventilating and Air Conditioning) systems in a home and thus better energy efficiency. With this purpose, the paper describes how to implement an Artificial Neural Network (ANN) algorithm in a low-cost system-on-chip to develop an autonomous intelligent wireless sensor network. The present paper uses a Wireless Sensor Network (WSN) to monitor and forecast the indoor temperature in a smart home, based on low-resource, low-cost microcontroller technology such as the 8051 MCU. An on-line learning approach, based on the Back-Propagation (BP) algorithm for ANNs, has been developed for real-time time series learning. It performs the model training with every new data sample that arrives at the system, without saving enormous quantities of data to create a historical database as usual, i.e., without previous knowledge. To validate the approach, a simulation study using a Bayesian baseline model was carried out and compared against a database from a real application in order to assess performance and accuracy. The core of the paper is a new algorithm, based on the BP one, which has been described in detail, and the challenge was how to implement a computationally demanding algorithm in a simple architecture with very few hardware resources.
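
    The online back-propagation idea can be sketched with a tiny one-hidden-layer network that is updated once per incoming sample and keeps no history. The simulated temperature stream, window length, scaling and learning rate below are illustrative assumptions, and the real deployment targets an 8051-class microcontroller in fixed point rather than NumPy.

```python
import numpy as np

rng = np.random.default_rng(1)

class TinyOnlineANN:
    """One-hidden-layer network trained one sample at a time (online back-propagation)."""
    def __init__(self, n_in, n_hidden=4, lr=0.05):
        self.W1 = rng.normal(scale=0.5, size=(n_hidden, n_in))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(scale=0.5, size=n_hidden)
        self.b2 = 0.0
        self.lr = lr
    def forward(self, x):
        self.h = np.tanh(self.W1 @ x + self.b1)
        return self.W2 @ self.h + self.b2
    def update(self, x, target):
        err = self.forward(x) - target               # prediction error on the new sample
        grad_h = err * self.W2 * (1 - self.h ** 2)   # back-propagate through tanh
        self.W2 -= self.lr * err * self.h
        self.b2 -= self.lr * err
        self.W1 -= self.lr * np.outer(grad_h, x)
        self.b1 -= self.lr * grad_h
        return err ** 2

net = TinyOnlineANN(n_in=6)
window, sq_err = [], []
for t in range(2000):
    temp = 21 + 2 * np.sin(2 * np.pi * t / 144) + 0.1 * rng.normal()   # simulated daily cycle
    if len(window) == 6:
        # predict the new reading from the previous 6, then learn from it (one online step)
        sq_err.append(net.update(np.array(window) / 30.0, temp / 30.0))
        window.pop(0)
    window.append(temp)
print("scaled squared error, first 100 steps:", round(float(np.mean(sq_err[:100])), 5),
      "| last 100 steps:", round(float(np.mean(sq_err[-100:])), 5))
```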

  14. Online Learning Algorithm for Time Series Forecasting Suitable for Low Cost Wireless Sensor Networks Nodes

    Science.gov (United States)

    Pardo, Juan; Zamora-Martínez, Francisco; Botella-Rocamora, Paloma

    2015-01-01

    Time series forecasting is an important predictive methodology which can be applied to a wide range of problems. In particular, forecasting the indoor temperature permits an improved utilization of the HVAC (Heating, Ventilating and Air Conditioning) systems in a home and thus better energy efficiency. With this purpose, the paper describes how to implement an Artificial Neural Network (ANN) algorithm in a low-cost system-on-chip to develop an autonomous intelligent wireless sensor network. The present paper uses a Wireless Sensor Network (WSN) to monitor and forecast the indoor temperature in a smart home, based on low-resource, low-cost microcontroller technology such as the 8051 MCU. An on-line learning approach, based on the Back-Propagation (BP) algorithm for ANNs, has been developed for real-time time series learning. It performs the model training with every new data sample that arrives at the system, without saving enormous quantities of data to create a historical database as usual, i.e., without previous knowledge. To validate the approach, a simulation study using a Bayesian baseline model was carried out and compared against a database from a real application in order to assess performance and accuracy. The core of the paper is a new algorithm, based on the BP one, which has been described in detail, and the challenge was how to implement a computationally demanding algorithm in a simple architecture with very few hardware resources. PMID:25905698

  15. Online learning algorithm for time series forecasting suitable for low cost wireless sensor networks nodes.

    Science.gov (United States)

    Pardo, Juan; Zamora-Martínez, Francisco; Botella-Rocamora, Paloma

    2015-04-21

    Time series forecasting is an important predictive methodology which can be applied to a wide range of problems. In particular, forecasting the indoor temperature permits an improved utilization of the HVAC (Heating, Ventilating and Air Conditioning) systems in a home and thus better energy efficiency. With this purpose, the paper describes how to implement an Artificial Neural Network (ANN) algorithm in a low-cost system-on-chip to develop an autonomous intelligent wireless sensor network. The present paper uses a Wireless Sensor Network (WSN) to monitor and forecast the indoor temperature in a smart home, based on low-resource, low-cost microcontroller technology such as the 8051 MCU. An on-line learning approach, based on the Back-Propagation (BP) algorithm for ANNs, has been developed for real-time time series learning. It performs the model training with every new data sample that arrives at the system, without saving enormous quantities of data to create a historical database as usual, i.e., without previous knowledge. To validate the approach, a simulation study using a Bayesian baseline model was carried out and compared against a database from a real application in order to assess performance and accuracy. The core of the paper is a new algorithm, based on the BP one, which has been described in detail, and the challenge was how to implement a computationally demanding algorithm in a simple architecture with very few hardware resources.

  16. Security Analysis of Randomize-Hash-then-Sign Digital Signatures

    DEFF Research Database (Denmark)

    Gauravaram, Praveen; Knudsen, Lars Ramkilde

    2012-01-01

    At CRYPTO 2006, Halevi and Krawczyk proposed two randomized hash function modes and analyzed the security of digital signature algorithms based on these constructions. They showed that the security of signature schemes based on the two randomized hash function modes relies on properties similar...... functions, such as for the Davies-Meyer construction used in the popular hash functions such as MD5 designed by Rivest and the SHA family of hash functions designed by the National Security Agency (NSA), USA and published by NIST in the Federal Information Processing Standards (FIPS). We show an online...... 800-106. We discuss some important applications of our attacks and discuss their applicability on signature schemes based on hash functions with ‘built-in’ randomization. Finally, we compare our attacks on randomize-hash-then-sign schemes with the generic forgery attacks on the standard hash...

  17. A Randomized Educational Intervention Trial to Determine the Effect of Online Education on the Quality of Resident-Delivered Care.

    Science.gov (United States)

    Dolan, Brigid M; Yialamas, Maria A; McMahon, Graham T

    2015-09-01

    There is limited research on whether online formative self-assessment and learning can change the behavior of medical professionals. We sought to determine whether an adaptive longitudinal online curriculum in bone health would improve resident physicians' knowledge and change their behavior regarding prevention of fragility fractures in women. We used a randomized controlled trial design in which 50 internal medicine resident physicians at a large academic practice were randomized to either receive a standard curriculum in bone health care alone, or to receive it augmented with an adaptive, longitudinal, online formative self-assessment curriculum delivered via multiple-choice questions. Outcomes were assessed 10 months after the start of the intervention. Knowledge outcomes were measured by a multiple-choice question examination. Clinical outcomes were measured by chart review, including bone density screening rate, calculation of the fracture risk assessment tool (FRAX) score, and rate of appropriate bisphosphonate prescription. Compared to the control group, residents participating in the intervention had higher scores on the knowledge test at the end of the study. Bone density screening rates and appropriate use of bisphosphonates were significantly higher in the intervention group compared with the control group. FRAX score reporting did not differ between the groups. Residents participating in a novel adaptive online curriculum outperformed peers in knowledge of fragility fracture prevention and care practices to prevent fracture. Online adaptive education can change behavior to improve patient care.

  18. Sleep-Related Safety Behaviors and Dysfunctional Beliefs Mediate the Efficacy of Online CBT for Insomnia: A Randomized Controlled Trial.

    Science.gov (United States)

    Lancee, Jaap; Eisma, Maarten C; van Straten, Annemieke; Kamphuis, Jan H

    2015-01-01

    Several trials have demonstrated the efficacy of online cognitive behavioral therapy (CBT) for insomnia. However, few studies have examined putative mechanisms of change based on the cognitive model of insomnia. Identification of modifiable mechanisms by which the treatment works may guide efforts to further improve the efficacy of insomnia treatment. The current study therefore has two aims: (1) to replicate the finding that online CBT is effective for insomnia and (2) to test putative mechanism of change (i.e., safety behaviors and dysfunctional beliefs). Accordingly, we conducted a randomized controlled trial in which individuals with insomnia were randomized to either online CBT for insomnia (n = 36) or a waiting-list control group (n = 27). Baseline and posttest assessments included questionnaires assessing insomnia severity, safety behaviors, dysfunctional beliefs, anxiety and depression, and a sleep diary. Three- and six-month assessments were administered to the CBT group only. Results show moderate to large statistically significant effects of the online treatment compared to the waiting list on insomnia severity, sleep measures, sleep safety behaviors, and dysfunctional beliefs. Furthermore, dysfunctional beliefs and safety behaviors mediated the effects of treatment on insomnia severity and sleep efficiency. Together, these findings corroborate the efficacy of online CBT for insomnia, and suggest that these effects were produced by changing maladaptive beliefs, as well as safety behaviors. Treatment protocols for insomnia may specifically be enhanced by more focused attention on the comprehensive fading of sleep safety behaviors, for instance through behavioral experiments.

  19. TITRATION: A Randomized Study to Assess 2 Treatment Algorithms with New Insulin Glargine 300 units/mL.

    Science.gov (United States)

    Yale, Jean-François; Berard, Lori; Groleau, Mélanie; Javadi, Pasha; Stewart, John; Harris, Stewart B

    2017-10-01

    It was uncertain whether an algorithm that involves increasing insulin dosages by 1 unit/day may cause more hypoglycemia with the longer-acting insulin glargine 300 units/mL (GLA-300). The objective of this study was to compare safety and efficacy of 2 titration algorithms, INSIGHT and EDITION, for GLA-300 in people with uncontrolled type 2 diabetes mellitus, mainly in a primary care setting. This was a 12-week, open-label, randomized, multicentre pilot study. Participants were randomly assigned to 1 of 2 algorithms: they either increased their dosage by 1 unit/day (INSIGHT, n=108) or the dose was adjusted by the investigator at least once weekly, but no more often than every 3 days (EDITION, n=104). The target fasting self-monitored blood glucose was in the range of 4.4 to 5.6 mmol/L. The percentages of participants reaching the primary endpoint of fasting self-monitored blood glucose ≤5.6 mmol/L without nocturnal hypoglycemia were 19.4% (INSIGHT) and 18.3% (EDITION). At week 12, 26.9% (INSIGHT) and 28.8% (EDITION) of participants achieved a glycated hemoglobin value of ≤7%. No differences in the incidence of hypoglycemia of any category were noted between algorithms. Participants in both arms of the study were much more satisfied with their new treatment as assessed by the Diabetes Treatment Satisfaction Questionnaire. Most health-care professionals (86%) preferred the INSIGHT over the EDITION algorithm. The frequency of adverse events was similar between algorithms. A patient-driven titration algorithm of 1 unit/day with GLA-300 is effective and comparable to the previously tested EDITION algorithm and is preferred by health-care professionals. Copyright © 2017 Diabetes Canada. Published by Elsevier Inc. All rights reserved.

  20. The algorithm of random length sequences synthesis for frame synchronization of digital television systems

    Directory of Open Access Journals (Sweden)

    Аndriy V. Sadchenko

    2015-12-01

    Full Text Available Digital television systems need to ensure that all digital signal processing operations are performed simultaneously and consistently. Frame synchronization is dictated by the need to match the phases of the transmitter and receiver so that it is possible to identify the start of a frame. Long binary sequences with good aperiodic autocorrelation functions are often used as frame synchronization signals. Aim: This work is dedicated to the development of an algorithm for synthesizing synchronization sequences of arbitrary length. Materials and Methods: The paper provides a comparative analysis of the known sequences which can currently be used for synchronization, revealing their advantages and disadvantages. This work proposes an algorithm for the synthesis of binary synchronization sequences of arbitrary length with good autocorrelation properties, based on a noise generator with a uniform probability distribution. A semiconductor "white noise" generator is proposed as the initial material for the synthesis of binary sequences with the desired properties. Results: A statistical analysis of the initial "white noise" realizations and of the synthesized sequences for frame synchronization of digital television is conducted. A comparative analysis of the synthesized sequences with known ones was carried out; the results show the benefits of the obtained sequences compared with known ones. The performed simulations confirm the obtained results. Conclusions: Thus, a search algorithm for binary synchronization sequences with the desired autocorrelation properties is obtained. According to this algorithm, sequences of any desired length can be synthesized, without length limitations. The resulting sync sequences can be used for frame synchronization in modern digital communication systems, which will increase their efficiency and noise immunity.
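
    The general idea of drawing candidates from a uniform random source and keeping those with good aperiodic autocorrelation can be sketched as a rejection search. The sequence length, sidelobe bound and trial budget below are illustrative, and this is not the authors' exact synthesis procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

def aperiodic_acf(seq):
    """Aperiodic autocorrelation of a +/-1 sequence for all non-zero shifts."""
    n = len(seq)
    return np.array([np.dot(seq[:n - k], seq[k:]) for k in range(1, n)])

def synthesize(length, max_sidelobe, max_trials=100000):
    """Draw candidate +/-1 sequences from a uniform random source and return the first one
    whose peak aperiodic sidelobe does not exceed max_sidelobe (rejection-sampling search)."""
    for _ in range(max_trials):
        seq = rng.choice([-1, 1], size=length)
        if np.max(np.abs(aperiodic_acf(seq))) <= max_sidelobe:
            return seq
    return None

sync = synthesize(length=31, max_sidelobe=5)
if sync is not None:
    print("peak sidelobe:", int(np.max(np.abs(aperiodic_acf(sync)))))
    print("sequence:", "".join("1" if s > 0 else "0" for s in sync))
```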

  1. Systematic errors due to linear congruential random-number generators with the Swendsen-Wang algorithm: a warning.

    Science.gov (United States)

    Ossola, Giovanni; Sokal, Alan D

    2004-08-01

    We show that linear congruential pseudo-random-number generators can cause systematic errors in Monte Carlo simulations using the Swendsen-Wang algorithm, if the lattice size is a multiple of a very large power of 2 and one random number is used per bond. These systematic errors arise from correlations within a single bond-update half-sweep. The errors can be eliminated (or at least radically reduced) by updating the bonds in a random order or in an aperiodic manner. It also helps to use a generator of large modulus (e.g., 60 or more bits).

  2. A new randomized Kaczmarz based kernel canonical correlation analysis algorithm with applications to information retrieval.

    Science.gov (United States)

    Cai, Jia; Tang, Yi

    2018-02-01

    Canonical correlation analysis (CCA) is a powerful statistical tool for detecting linear relationships between two sets of multivariate variables. Its kernel generalization, kernel CCA, describes nonlinear relationships between the two variable sets. Although kernel CCA can achieve dimensionality reduction for high-dimensional feature selection problems, it is also prone to over-fitting. In this paper, we consider a new kernel CCA algorithm based on the randomized Kaczmarz method. The main contributions of the paper are: (1) a new kernel CCA algorithm is developed; (2) theoretical convergence of the proposed algorithm is addressed by means of the scaled condition number; (3) a lower bound on the minimum number of iterations is presented. We test on both a synthetic dataset and several real-world datasets in cross-language document retrieval and content-based image retrieval to demonstrate the effectiveness of the proposed algorithm. Numerical results show the performance and efficiency of the new algorithm, which is competitive with several state-of-the-art kernel CCA methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
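
    The kernel CCA formulation itself is not reproduced in this record, but the randomized Kaczmarz building block is easy to illustrate. A minimal sketch for a consistent linear system Ax = b, with rows sampled proportionally to their squared norms, might look like this (the problem sizes are arbitrary):

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=2000, rng=None):
    """Solve a consistent linear system Ax = b by random row projections."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    x = np.zeros(n)
    row_norms_sq = np.einsum("ij,ij->i", A, A)
    probs = row_norms_sq / row_norms_sq.sum()            # sample rows ~ ||a_i||^2
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        a_i = A[i]
        x += (b[i] - a_i @ x) / row_norms_sq[i] * a_i    # project onto the i-th hyperplane
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 20))
    x_true = rng.standard_normal(20)
    x_hat = randomized_kaczmarz(A, A @ x_true, iters=5000, rng=2)
    print("reconstruction error:", np.linalg.norm(x_hat - x_true))
```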

  3. Microstructure Reconstruction of Sheet Molding Composite Using a Random Chips Packing Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Tianyu; Xu, Hongyi; Chen, Wei

    2017-04-06

    Fiber-reinforced polymer composites are strong candidates for structural materials to replace steel and light alloys in lightweight vehicle design because of their low density and relatively high strength. In the integrated computational materials engineering (ICME) development of carbon fiber composites, microstructure reconstruction algorithms are needed to generate a material microstructure representative volume element (RVE) based on the material processing information. The reconstructed RVE enables material property prediction by finite element analysis (FEA). This paper presents an algorithm to reconstruct the microstructure of a chopped carbon fiber/epoxy laminate material system produced by compression molding, commonly known as sheet molding compound (SMC). The algorithm takes results from the material's manufacturing process as inputs, such as the fiber orientation tensor, the chopped fiber sheet geometry, and the fiber volume fraction. The chopped fiber sheets are treated as deformable rectangular chips, and a random packing algorithm is developed to pack these chips into a square plate. The RVE is built in a layer-by-layer fashion until the desired number of laminae is reached, and a fine-tuning process is then applied to finalize the reconstruction. Compared to previous methods, this new approach can model bent fibers by allowing a limited amount of overlap between rectangular chips. Furthermore, the method does not require SMC microstructure images as inputs, for which image-based characterization techniques are not yet mature. Case studies show that the statistics of the reconstructed microstructures generated by the algorithm match the target input parameters from processing well.

  4. Online transfer learning with extreme learning machine

    Science.gov (United States)

    Yin, Haibo; Yang, Yun-an

    2017-05-01

    In this paper, we propose a new transfer learning algorithm for online training. The proposed algorithm, called Online Transfer Extreme Learning Machine (OTELM), is based on the Online Sequential Extreme Learning Machine (OSELM) and introduces the Semi-Supervised Extreme Learning Machine (SSELM) to transfer knowledge from the source to the target domain. With manifold regularization, SSELM picks out instances from the source domain that are less relevant to those in the target domain to initialize the online training, so as to improve the classification performance. Experimental results demonstrate that the proposed OTELM can effectively use instances in the source domain to enhance learning performance.
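
    OTELM's transfer step is not detailed in the record, but the OSELM core it builds on is a recursive least-squares update of the output weights of a single-hidden-layer network with fixed random input weights. The sketch below is a generic OSELM, not the authors' implementation; the sigmoid activation, hidden-layer size and chunk sizes are assumptions, and the initial block must contain at least as many samples as hidden neurons.

```python
import numpy as np

class OSELM:
    """Minimal Online Sequential Extreme Learning Machine (regression targets)."""

    def __init__(self, n_inputs, n_hidden, n_outputs, rng=None):
        rng = np.random.default_rng(rng)
        self.W = rng.standard_normal((n_inputs, n_hidden))   # fixed random input weights
        self.b = rng.standard_normal(n_hidden)
        self.beta = np.zeros((n_hidden, n_outputs))
        self.P = None

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))  # sigmoid features

    def init_fit(self, X0, T0):
        H0 = self._hidden(X0)                  # assumes len(X0) >= n_hidden
        self.P = np.linalg.inv(H0.T @ H0)
        self.beta = self.P @ H0.T @ T0

    def partial_fit(self, X, T):
        H = self._hidden(X)
        K = np.linalg.inv(np.eye(len(X)) + H @ self.P @ H.T)
        self.P = self.P - self.P @ H.T @ K @ H @ self.P      # RLS covariance update
        self.beta = self.beta + self.P @ H.T @ (T - H @ self.beta)

    def predict(self, X):
        return self._hidden(X) @ self.beta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((500, 4))
    y = (np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)).reshape(-1, 1)
    model = OSELM(n_inputs=4, n_hidden=40, n_outputs=1, rng=1)
    model.init_fit(X[:100], y[:100])                          # initial block
    for start in range(100, 500, 50):                         # streaming chunks
        model.partial_fit(X[start:start + 50], y[start:start + 50])
    print("train MSE:", float(np.mean((model.predict(X) - y) ** 2)))
```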

  5. A wavelet-based ECG delineation algorithm for 32-bit integer online processing.

    Science.gov (United States)

    Di Marco, Luigi Y; Chiari, Lorenzo

    2011-04-03

    Since the first well-known electrocardiogram (ECG) delineator based on the Wavelet Transform (WT) presented by Li et al. in 1995, a significant research effort has been devoted to the exploitation of this promising method. Its ability to reliably delineate the major waveform components (mono- or bi-phasic P wave, QRS, and mono- or bi-phasic T wave) would make it a suitable candidate for efficient online processing of ambulatory ECG signals. Unfortunately, previous implementations of this method adopt non-linear operators such as the root mean square (RMS) or floating-point algebra, which are computationally demanding. This paper presents a WT-based approach to online QRS detection and P-QRS-T wave delineation of a single-lead ECG signal that uses only 32-bit integer, linear algebra. The QRS detector performance was validated on the MIT-BIH Arrhythmia Database (sensitivity Se = 99.77%, positive predictive value P+ = 99.86%, on 109010 annotated beats) and on the European ST-T Database (Se = 99.81%, P+ = 99.56%, on 788050 annotated beats). The ECG delineator was validated on the QT Database, showing a mean error between manual and automatic annotation below 1.5 samples for all fiducial points: P-onset, P-peak, P-offset, QRS-onset, QRS-offset, T-peak, T-offset, and a mean standard deviation comparable to other established methods. The proposed algorithm exhibits reliable QRS detection as well as accurate ECG delineation, in spite of a simple structure built on integer linear algebra.

  6. On grey levels in random CAPTCHA generation

    Science.gov (United States)

    Newton, Fraser; Kouritzin, Michael A.

    2011-06-01

    A CAPTCHA is an automatically generated test designed to distinguish between humans and computer programs; specifically, they are designed to be easy for humans but difficult for computer programs to pass in order to prevent the abuse of resources by automated bots. They are commonly seen guarding webmail registration forms, online auction sites, and preventing brute force attacks on passwords. In the following, we address the question: How does adding a grey level to random CAPTCHA generation affect the utility of the CAPTCHA? We treat the problem of generating the random CAPTCHA as one of random field simulation: An initial state of background noise is evolved over time using Gibbs sampling and an efficient algorithm for generating correlated random variables. This approach has already been found to yield highly-readable yet difficult-to-crack CAPTCHAs. We detail how the requisite parameters for introducing grey levels are estimated and how we generate the random CAPTCHA. The resulting CAPTCHA will be evaluated in terms of human readability as well as its resistance to automated attacks in the forms of character segmentation and optical character recognition.

  7. Identifying online user reputation of user-object bipartite networks

    Science.gov (United States)

    Liu, Xiao-Lu; Liu, Jian-Guo; Yang, Kai; Guo, Qiang; Han, Jing-Ti

    2017-02-01

    Identifying online user reputation based on the rating information of the user-object bipartite networks is important for understanding online user collective behaviors. Based on the Bayesian analysis, we present a parameter-free algorithm for ranking online user reputation, where the user reputation is calculated based on the probability that their ratings are consistent with the main part of all user opinions. The experimental results show that the AUC values of the presented algorithm could reach 0.8929 and 0.8483 for the MovieLens and Netflix data sets, respectively, which is better than the results generated by the CR and IARR methods. Furthermore, the experimental results for different user groups indicate that the presented algorithm outperforms the iterative ranking methods in both ranking accuracy and computation complexity. Moreover, the results for the synthetic networks show that the computation complexity of the presented algorithm is a linear function of the network size, which suggests that the presented algorithm is very effective and efficient for the large scale dynamic online systems.

  8. A fast random walk algorithm for computing the pulsed-gradient spin-echo signal in multiscale porous media.

    Science.gov (United States)

    Grebenkov, Denis S

    2011-02-01

    A new method for computing the signal attenuation due to restricted diffusion in a linear magnetic field gradient is proposed. A fast random walk (FRW) algorithm for simulating random trajectories of diffusing spin-bearing particles is combined with gradient encoding. As random moves of a FRW are continuously adapted to local geometrical length scales, the method is efficient for simulating pulsed-gradient spin-echo experiments in hierarchical or multiscale porous media such as concrete, sandstones, sedimentary rocks and, potentially, brain or lungs. Copyright © 2010 Elsevier Inc. All rights reserved.

  9. Random walks in nanotube composites: Improved algorithms and the role of thermal boundary resistance

    International Nuclear Information System (INIS)

    Duong, Hai M.; Papavassiliou, Dimitrios V.; Lee, Lloyd L.; Mullen, Kieran J.

    2005-01-01

    Random walk simulations of thermal walkers are used to study the effect of interfacial resistance on heat flow in randomly dispersed carbon nanotube composites. The adopted algorithm effectively makes the thermal conductivity of the nanotubes themselves infinite. The probability that a walker colliding with a matrix-nanotube interface reflects back into the matrix phase or crosses into the carbon nanotube phase is determined by the thermal boundary (Kapitza) resistance. The use of 'cold' and 'hot' walkers produces a steady state temperature profile that allows accurate determination of the thermal conductivity. The effects of the carbon nanotube orientation, aspect ratio, volume fraction, and Kapitza resistance on the composite effective conductivity are quantified.
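
    A sketch of the interface rule described here — a walker that hits a matrix-nanotube boundary crosses with a probability set by the Kapitza resistance and is otherwise reflected — reduced to one dimension with a single interface; the crossing probability, step size and geometry are illustrative assumptions, not values from the study.

```python
import numpy as np

def walk_with_interface(steps, p_cross=0.1, interface=0.0, start=-5.0, dx=0.1, rng=None):
    """1D random walker that crosses a single interface with probability p_cross."""
    rng = np.random.default_rng(rng)
    x = start
    for _ in range(steps):
        proposal = x + dx * rng.choice([-1.0, 1.0])
        crosses = (x - interface) * (proposal - interface) < 0
        if crosses and rng.random() > p_cross:
            continue                      # reflected: the walker stays in its phase
        x = proposal
    return x

if __name__ == "__main__":
    ends = np.array([walk_with_interface(2000, rng=i) for i in range(200)])
    print("fraction of walkers ending past the interface:", float(np.mean(ends > 0.0)))
```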

  10. Supplementary Material for: Tukey g-and-h Random Fields

    KAUST Repository

    Xu, Ganggang

    2016-01-01

    We propose a new class of trans-Gaussian random fields named Tukey g-and-h (TGH) random fields to model non-Gaussian spatial data. The proposed TGH random fields have extremely flexible marginal distributions, possibly skewed and/or heavy-tailed, and, therefore, have a wide range of applications. The special formulation of the TGH random field enables an automatic search for the most suitable transformation for the dataset of interest while estimating model parameters. Asymptotic properties of the maximum likelihood estimator and the probabilistic properties of the TGH random fields are investigated. An efficient estimation procedure, based on maximum approximated likelihood, is proposed and an extreme spatial outlier detection algorithm is formulated. Kriging and probabilistic prediction with TGH random fields are developed along with prediction confidence intervals. The predictive performance of TGH random fields is demonstrated through extensive simulation studies and an application to a dataset of total precipitation in the south east of the United States. Supplementary materials for this article are available online.
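
    The marginal transformation behind a TGH field is compact enough to sketch: apply the Tukey g-and-h transform τ_{g,h}(z) = g^{-1}(e^{gz} − 1) e^{hz²/2} to standard Gaussian draws, where g controls skewness and h controls tail heaviness. The parameter values below are illustrative, and the spatial correlation structure (the random-field part) is omitted.

```python
import numpy as np

def tukey_gh(z, g=0.5, h=0.1):
    """Tukey g-and-h transform of standard-normal draws (g: skewness, h: tails)."""
    z = np.asarray(z, dtype=float)
    if g == 0.0:
        skew = z                                  # limit of (exp(g z) - 1) / g as g -> 0
    else:
        skew = (np.exp(g * z) - 1.0) / g
    return skew * np.exp(0.5 * h * z**2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    z = rng.standard_normal(100_000)
    x = tukey_gh(z, g=0.8, h=0.2)                 # skewed, heavy-tailed marginal
    s = (x - x.mean()) / x.std()
    print("empirical skewness proxy:", float(np.mean(s**3)))
```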

  11. Nonparametric Estimation of Distributions in Random Effects Models

    KAUST Repository

    Hart, Jeffrey D.

    2011-01-01

    We propose using minimum distance to obtain nonparametric estimates of the distributions of components in random effects models. A main setting considered is equivalent to having a large number of small datasets whose locations, and perhaps scales, vary randomly, but which otherwise have a common distribution. Interest focuses on estimating the distribution that is common to all datasets, knowledge of which is crucial in multiple testing problems where a location/scale invariant test is applied to every small dataset. A detailed algorithm for computing minimum distance estimates is proposed, and the usefulness of our methodology is illustrated by a simulation study and an analysis of microarray data. Supplemental materials for the article, including R-code and a dataset, are available online. © 2011 American Statistical Association.

  12. Tag-Driven Online Novel Recommendation with Collaborative Item Modeling

    Directory of Open Access Journals (Sweden)

    Fenghuan Li

    2018-04-01

    Online novel recommendation suggests attractive novels according to the preferences and characteristics of users or novels, and is increasingly touted as an indispensable service of many online stores and websites. The interests of most users remain stable over a certain period. However, the initial recommendation list produced by collaborative filtering (CF) spans broad categories, so it is very likely to contain many inappropriately recommended novels. Meanwhile, most algorithms assume that users can provide an explicit preference rating, an assumption that does not always hold, especially in online novel reading. To solve these issues, a tag-driven algorithm with collaborative item modeling (TDCIM) is proposed for online novel recommendation. Online novel reading differs from traditional book marketing and lacks preference ratings; in addition, collaborative filtering frequently suffers from the Matthew effect, leading to neglected personalized recommendations and serious long-tail problems. Therefore, item-based CF is improved by latent preference ratings with a punishment mechanism based on novel popularity, and a tag-driven algorithm is constructed by means of collaborative item modeling and tag extension. Experimental results show that online novel recommendation is greatly improved by the proposed tag-driven algorithm with collaborative item modeling.

  13. Almagest, a new trackless ring finding algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Lamanna, G., E-mail: gianluca.lamanna@cern.ch

    2014-12-01

    A fast ring finding algorithm is crucial to allow the use of a RICH detector in on-line trigger selection. Existing algorithms are either too slow (with respect to the incoming data rate) or require information from a tracking system. Digital image techniques suited to limited computing power (for example, the Hough transform) are not perfectly robust with respect to noise. We present a novel technique based on Ptolemy's theorem for multi-ring pattern recognition. Starting from purely geometrical considerations, this algorithm (known as "Almagest") allows fast and trackless ring reconstruction, with spatial resolution comparable to other offline techniques. Almagest is particularly suitable for parallel implementation on multi-core machines. Preliminary tests on GPUs (multi-core graphics processors) show that, thanks to an execution time below 10 μs per event, this algorithm could be employed for on-line selection in trigger systems. The use case of the GPU-based NA62 RICH trigger will be discussed. - Highlights: • A new algorithm for fast multiple-ring searching in RICH detectors is presented. • The Almagest algorithm exploits the computing power of graphics processors (GPUs). • A preliminary implementation for on-line triggering in the NA62 experiment shows encouraging results.
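
    The record names Ptolemy's theorem as the geometric ingredient but not the full trigger logic. A sketch of the basic test the theorem provides — for four concyclic points, the largest of the three pairwise distance products equals the sum of the other two — applied to randomly sampled quadruplets of hits; the tolerance and sampling counts are illustrative assumptions.

```python
import numpy as np

def ptolemy_residual(pts):
    """Residual of Ptolemy's identity for four 2D points (zero when concyclic)."""
    A, B, C, D = np.asarray(pts, dtype=float)
    d = lambda p, q: np.hypot(*(p - q))
    products = sorted([d(A, B) * d(C, D), d(A, C) * d(B, D), d(A, D) * d(B, C)])
    # for concyclic points the largest product equals the sum of the other two
    return products[2] - (products[0] + products[1])

def looks_like_ring(hits, trials=50, tol=1e-2, rng=None):
    """Crude test: fraction of random hit quadruplets satisfying Ptolemy's identity."""
    rng = np.random.default_rng(rng)
    hits = np.asarray(hits, dtype=float)
    votes = 0
    for _ in range(trials):
        idx = rng.choice(len(hits), size=4, replace=False)
        if abs(ptolemy_residual(hits[idx])) < tol:
            votes += 1
    return votes / trials

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    theta = rng.uniform(0, 2 * np.pi, 30)
    ring = np.c_[np.cos(theta), np.sin(theta)] + 0.001 * rng.standard_normal((30, 2))
    print("fraction of quadruplets passing:", looks_like_ring(ring, rng=4))
```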

  14. Species-specific audio detection: a comparison of three template-based detection algorithms using random forests

    Directory of Open Access Journals (Sweden)

    Carlos J. Corrada Bravo

    2017-04-01

    We developed a web-based, cloud-hosted system that allows users to archive, listen to, visualize, and annotate recordings. The system also provides tools to convert these annotations into datasets that can be used to train a computer to detect the presence or absence of a species. The algorithm used by the system was selected after comparing the accuracy and efficiency of three variants of template-based detection. The algorithm computes a similarity vector by comparing a template of a species call with time increments across the spectrogram. Statistical features are extracted from this vector and used as input for a Random Forest classifier that predicts the presence or absence of the species in the recording. The fastest algorithm variant had the highest average accuracy and specificity; therefore, it was implemented in the ARBIMON web-based system.
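
    The pipeline is described only at a high level, so the following is a hedged sketch rather than the ARBIMON implementation: a normalized correlation of a call template against each time offset of a spectrogram, a few summary statistics of the resulting similarity vector, and a scikit-learn RandomForestClassifier trained on presence/absence labels. The feature set, template handling and synthetic demo data are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def similarity_vector(spectrogram, template):
    """Correlate a call template against each time offset (shared frequency axis assumed)."""
    f, t = template.shape
    tpl = (template - template.mean()) / (template.std() + 1e-9)
    sims = np.empty(spectrogram.shape[1] - t + 1)
    for i in range(len(sims)):
        win = spectrogram[:f, i:i + t]
        win = (win - win.mean()) / (win.std() + 1e-9)
        sims[i] = np.mean(win * tpl)
    return sims

def features(sims):
    """Summary statistics of the similarity vector used as classifier input."""
    return [sims.max(), sims.mean(), sims.std(), np.percentile(sims, 90)]

def train_detector(spectrograms, labels, template):
    X = [features(similarity_vector(s, template)) for s in spectrograms]
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X, labels)
    return clf

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    template = rng.random((16, 8))
    positives = [np.hstack([rng.random((16, 20)), template + 0.05 * rng.random((16, 8)),
                            rng.random((16, 20))]) for _ in range(20)]
    negatives = [rng.random((16, 48)) for _ in range(20)]
    clf = train_detector(positives + negatives, [1] * 20 + [0] * 20, template)
    print(clf.predict([features(similarity_vector(negatives[0], template))]))
```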

  15. Advice Complexity of the Online Search Problem

    DEFF Research Database (Denmark)

    Clemente, Jhoirene; Hromkovič, Juraj; Komm, Dennis

    2016-01-01

    The online search problem is a fundamental problem in finance. The numerous direct applications include searching for optimal prices for commodity trading and trading foreign currencies. In this paper, we analyze the advice complexity of this problem. In particular, we are interested in identifying the minimum amount of information needed in order to achieve a certain competitive ratio. We design an algorithm that reads $b$ bits of advice and achieves a competitive ratio of (M/m)^{1/(2^b+1)} where M and m are the maximum and minimum price in the input. We also give a matching lower bound. Furthermore, we compare the power of advice and randomization for this problem.
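
    A simplified reconstruction of the kind of advice-based strategy behind such a bound, assuming M and m are known to the algorithm: the advice encodes which of the 2^b geometric bands of [m, M] contains the best price, and the player accepts the first price reaching that band's lower end. This simplified version guarantees a ratio of (M/m)^{1/2^b}; the paper's algorithm sharpens the exponent to 1/(2^b+1), so treat the code only as an illustration of how advice is used.

```python
import math

def advice_bits(prices, m, M, b):
    """Offline oracle: index of the geometric band [m*rho^i, m*rho^(i+1)] holding the best price."""
    rho = (M / m) ** (1.0 / 2**b)
    best = max(prices)
    return min(int(math.log(best / m, rho)), 2**b - 1)    # the b-bit advice

def online_search_with_advice(prices, m, M, b, band):
    """Accept the first price reaching the advised band's lower threshold."""
    rho = (M / m) ** (1.0 / 2**b)
    threshold = m * rho**band
    for p in prices:
        if p >= threshold:
            return p
    return prices[-1]                                     # fallback; unreachable with correct advice

if __name__ == "__main__":
    prices = [3.0, 7.5, 5.2, 9.8, 4.1]
    m, M, b = 1.0, 16.0, 2
    band = advice_bits(prices, m, M, b)
    got = online_search_with_advice(prices, m, M, b, band)
    print("accepted", got, "best", max(prices), "ratio bound", (M / m) ** (1.0 / 2**b))
```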

  16. FPGA helix tracking algorithm for PANDA

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Yutie; Galuska, Martin; Gessler, Thomas; Kuehn, Wolfgang; Lange, Jens Soeren; Muenchow, David; Spruck, Bjoern [II. Physikalisches Institut, Giessen University (Germany); Ye, Hua [Institute of High Energy Physics, Beijing (China); Collaboration: PANDA-Collaboration

    2015-07-01

    The PANDA detector is a general-purpose detector for physics with high-luminosity cooled antiproton beams, planned to operate at the FAIR facility in Darmstadt, Germany. The central detector includes a silicon Micro Vertex Detector (MVD) and a Straw Tube Tracker (STT). Without any hardware trigger, large amounts of raw data stream into the data acquisition system. The data reduction task is performed in the online system by reconstruction algorithms programmed on FPGAs (Field Programmable Gate Arrays) as a first level and on a farm of GPUs or PCs as a second level. One important part of the system is the online track reconstruction. In this presentation, an online algorithm for helix track reconstruction in the solenoidal field is shown. The tracking algorithm is composed of two parts: a road finding module followed by an iterative helix parameter calculation module. A performance study using C++ and the status of the VHDL implementation are presented.

  17. Seamless Merging of Hypertext and Algorithm Animation

    Science.gov (United States)

    Karavirta, Ville

    2009-01-01

    Online learning material that students use by themselves is one of the typical usages of algorithm animation (AA). Thus, the integration of algorithm animations into hypertext is seen as an important topic today to promote the usage of algorithm animation in teaching. This article presents an algorithm animation viewer implemented purely using…

  18. A randomized trial found online questionnaires supplemented by postal reminders generated a cost-effective and generalizable sample but don't forget the reminders.

    Science.gov (United States)

    Loban, Amanda; Mandefield, Laura; Hind, Daniel; Bradburn, Mike

    2017-12-01

    The objective of this study was to compare the response rates, data completeness, and representativeness of survey data produced by online and postal surveys. A randomized trial nested within a cohort study in Yorkshire, United Kingdom. Participants were randomized to receive either an electronic (online) survey questionnaire with paper reminder (N = 2,982) or a paper questionnaire with electronic reminder (N = 2,855). Response rates were similar for electronic and postal contacts (50.9% vs. 49.7%, difference = 1.2%, 95% confidence interval: -1.3% to 3.8%). The characteristics of those responding in the two groups were similar. Participants nevertheless demonstrated an overwhelming preference for postal questionnaires, with the majority responding by post in both groups. Online survey questionnaire systems need to be supplemented with a postal reminder to achieve acceptable uptake, but doing so provides a similar response rate and case mix when compared to postal questionnaires alone. For large surveys, online survey systems may be cost saving. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Cooperative mobile agents search using beehive partitioned structure and Tabu Random search algorithm

    Science.gov (United States)

    Ramazani, Saba; Jackson, Delvin L.; Selmic, Rastko R.

    2013-05-01

    In search and surveillance operations, deploying a team of mobile agents provides a robust solution that has multiple advantages over using a single agent in efficiency and minimizing exploration time. This paper addresses the challenge of identifying a target in a given environment when using a team of mobile agents by proposing a novel method of mapping and movement of agent teams in a cooperative manner. The approach consists of two parts. First, the region is partitioned into a hexagonal beehive structure in order to provide equidistant movements in every direction and to allow for more natural and flexible environment mapping. Additionally, in search environments that are partitioned into hexagons, mobile agents have an efficient travel path while performing searches due to this partitioning approach. Second, we use a team of mobile agents that move in a cooperative manner and utilize the Tabu Random algorithm to search for the target. Due to the ever-increasing use of robotics and Unmanned Aerial Vehicle (UAV) platforms, the field of cooperative multi-agent search has developed many applications recently that would benefit from the use of the approach presented in this work, including: search and rescue operations, surveillance, data collection, and border patrol. In this paper, the increased efficiency of the Tabu Random Search algorithm method in combination with hexagonal partitioning is simulated, analyzed, and advantages of this approach are presented and discussed.

  20. Real-time Raman spectroscopy for in vivo, online gastric cancer diagnosis during clinical endoscopic examination

    Science.gov (United States)

    Duraipandian, Shiyamala; Sylvest Bergholt, Mads; Zheng, Wei; Yu Ho, Khek; Teh, Ming; Guan Yeoh, Khay; Bok Yan So, Jimmy; Shabbir, Asim; Huang, Zhiwei

    2012-08-01

    Optical spectroscopic techniques including reflectance, fluorescence and Raman spectroscopy have shown promising potential for in vivo precancer and cancer diagnostics in a variety of organs. However, data analysis has mostly been limited to post-processing and off-line algorithm development. In this work, we develop a fully automated on-line Raman spectral diagnostics framework integrated with a multimodal image-guided Raman technique for real-time in vivo cancer detection at endoscopy. A total of 2748 in vivo gastric tissue spectra (2465 normal and 283 cancer) were acquired from 305 patients recruited to construct a spectral database for diagnostic algorithm development. The novel diagnostic scheme implements on-line preprocessing, outlier detection based on principal component analysis statistics (i.e., Hotelling's T2 and Q-residuals) for tissue Raman spectra verification, as well as organ-specific probabilistic diagnostics using different diagnostic algorithms. Free-running optical diagnosis and processing based on the randomly resampled training database (80% for learning and 20% for testing) provided a diagnostic accuracy of 85.6% [95% confidence interval (CI): 82.9% to 88.2%] [sensitivity of 80.5% (95% CI: 71.4% to 89.6%) and specificity of 86.2% (95% CI: 83.6% to 88.7%)] for the detection of gastric cancer. The partial least squares-discriminant analysis (PLS-DA) algorithms were further applied prospectively to 10 gastric patients at gastroscopy, achieving a predictive accuracy of 80.0% (60/75) [sensitivity of 90.0% (27/30) and specificity of 73.3% (33/45)] for in vivo diagnosis of gastric cancer. The receiver operating characteristic curves further confirmed the efficacy of Raman endoscopy together with PLS-DA algorithms for in vivo prospective diagnosis of gastric cancer. This work successfully moves the biomedical Raman spectroscopic technique into real-time, on-line clinical cancer diagnosis, especially in routine endoscopic diagnostic applications.
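
    The outlier-detection step mentioned here (Hotelling's T² and Q-residuals on a PCA model of the training spectra) is straightforward to sketch with scikit-learn; the component count and synthetic data below are illustrative assumptions, not the authors' settings, and no decision thresholds are derived.

```python
import numpy as np
from sklearn.decomposition import PCA

def t2_and_q(X_train, X_new, n_components=5):
    """Hotelling's T^2 and Q (squared reconstruction residual) for new spectra."""
    pca = PCA(n_components=n_components).fit(X_train)
    scores = pca.transform(X_new)
    t2 = np.sum(scores**2 / pca.explained_variance_, axis=1)   # Mahalanobis distance in score space
    recon = pca.inverse_transform(scores)
    q = np.sum((X_new - recon) ** 2, axis=1)                   # residual off the PCA subspace
    return t2, q

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train = rng.standard_normal((300, 50))
    new = np.vstack([rng.standard_normal((5, 50)), 10 + rng.standard_normal((1, 50))])
    t2, q = t2_and_q(train, new)
    print(np.round(t2, 1), np.round(q, 1))                     # the last spectrum should stand out
```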

  1. RANdom SAmple Consensus (RANSAC) algorithm for material-informatics: application to photovoltaic solar cells.

    Science.gov (United States)

    Kaspi, Omer; Yosipof, Abraham; Senderowitz, Hanoch

    2017-06-06

    An important aspect of chemoinformatics and material-informatics is the use of machine learning algorithms to build Quantitative Structure Activity Relationship (QSAR) models. The RANdom SAmple Consensus (RANSAC) algorithm is a predictive modeling tool widely used in the image processing field for cleaning datasets of noise. RANSAC could be used as a "one stop shop" algorithm for developing and validating QSAR models, performing outlier removal, descriptor selection, model development and predictions for test set samples using an applicability domain. For "future" predictions (i.e., for samples not included in the original test set) RANSAC provides a statistical estimate of the probability of obtaining reliable predictions, i.e., predictions within a pre-defined number of standard deviations from the true values. In this work we describe the first application of RANSAC in material informatics, focusing on the analysis of solar cells. We demonstrate that for three datasets representing different metal oxide (MO) based solar cell libraries, RANSAC-derived models select descriptors previously shown to correlate with key photovoltaic properties and lead to good predictive statistics for these properties. These models were subsequently used to predict the properties of virtual solar cell libraries, highlighting interesting dependencies of PV properties on MO compositions.
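
    As a minimal illustration of the RANSAC idea (not the paper's solar-cell workflow), scikit-learn's RANSACRegressor can fit a linear model while automatically flagging gross outliers; the synthetic data and residual threshold below are placeholders.

```python
import numpy as np
from sklearn.linear_model import RANSACRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = 2.5 * X.ravel() + 1.0 + 0.2 * rng.standard_normal(200)
y[:20] += 15 * rng.standard_normal(20)               # inject gross outliers

# default base estimator is ordinary least squares; the final model is re-fit on inliers only
ransac = RANSACRegressor(residual_threshold=1.0, random_state=0).fit(X, y)
print("slope, intercept:", ransac.estimator_.coef_[0], ransac.estimator_.intercept_)
print("fraction kept as inliers:", ransac.inlier_mask_.mean())
```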

  2. Seismic noise attenuation using an online subspace tracking algorithm

    NARCIS (Netherlands)

    Zhou, Yatong; Li, Shuhua; Zhang, D.; Chen, Yangkang

    2018-01-01

    We propose a new low-rank based noise attenuation method using an efficient algorithm for tracking subspaces from highly corrupted seismic observations. The subspace tracking algorithm requires only basic linear algebraic manipulations. The algorithm is derived by analysing incremental gradient

  3. Improving staff response to seizures on the epilepsy monitoring unit with online EEG seizure detection algorithms.

    Science.gov (United States)

    Rommens, Nicole; Geertsema, Evelien; Jansen Holleboom, Lisanne; Cox, Fieke; Visser, Gerhard

    2018-05-11

    User safety and the quality of diagnostics on the epilepsy monitoring unit (EMU) depend on reaction to seizures. Online seizure detection might improve this. While good sensitivity and specificity are reported, the added value above staff response is unclear. We ascertained the added value of two electroencephalograph (EEG) seizure detection algorithms in terms of additional detected seizures or faster detection time. EEG-video seizure recordings of people admitted to an EMU over one year were included, with a maximum of two seizures per subject. All recordings were retrospectively analyzed using Encevis EpiScan and BESA Epilepsy. Detection sensitivity and latency of the algorithms were compared to staff responses. False positive rates were estimated on 30 uninterrupted recordings (roughly 24 h per subject) of consecutive subjects admitted to the EMU. The EEG-video recordings used included 188 seizures. The response rate of staff was 67%, of Encevis 67%, and of BESA Epilepsy 65%. Of the 62 seizures missed by staff, 66% were recognized by Encevis and 39% by BESA Epilepsy. The median latency was 31 s (staff), 10 s (Encevis), and 14 s (BESA Epilepsy). After correcting for walking time from the observation room to the subject, both algorithms detected faster than staff in 65% of detected seizures. The full recordings included 617 h of EEG. Encevis had a median false positive rate of 4.9 per 24 h and BESA Epilepsy of 2.1 per 24 h. EEG-video seizure detection algorithms may improve reaction to seizures by improving the total number of seizures detected and the speed of detection. The false positive rate is feasible for use in a clinical situation. Implementation of these algorithms might result in faster diagnostic testing and better observation during seizures. Copyright © 2018. Published by Elsevier Inc.

  4. Pseudo-deterministic Algorithms

    OpenAIRE

    Goldwasser , Shafi

    2012-01-01

    In this talk we describe a new type of probabilistic algorithm which we call Bellagio Algorithms: a randomized algorithm which is guaranteed to run in expected polynomial time, and to produce a correct and unique solution with high probability. These algorithms are pseudo-deterministic: they can not be distinguished from deterministic algorithms in polynomial time by a probabilistic polynomial time observer with black box access to the algorithm. We show a necessary an...

  5. Using Conditional Random Fields to Extract Contexts and Answers of Questions from Online Forums

    DEFF Research Database (Denmark)

    Ding, Shilin; Cong, Gao; Lin, Chin-Yew

    2008-01-01

    Online forum discussions often contain vast amounts of questions that are the focuses of discussions. Extracting contexts and answers together with the questions will yield not only a coherent forum summary but also a valuable QA knowledge base. In this paper, we propose a general framework based on Conditional Random Fields (CRFs) to detect the contexts and answers of questions from forum threads. We improve the basic framework by Skip-chain CRFs and 2D CRFs to better accommodate the features of forums for better performance. Experimental results show that our techniques are very promising.

  6. Multi-dueling bandits and their application to online ranker evaluation

    DEFF Research Database (Denmark)

    Brost, Brian; Seldin, Yevgeny; Cox, Ingemar Johansson

    2016-01-01

    New ranking algorithms are continually being developed and refined, necessitating the development of efficient methods for evaluating these rankers. Online ranker evaluation focuses on the challenge of efficiently determining, from implicit user feedback, which ranker out of a finite set of rankers is the best. Online ranker evaluation can be modeled by dueling bandits, a mathematical model for online learning under limited feedback from pairwise comparisons. Comparisons of pairs of rankers are performed by interleaving their result sets and examining which documents users click on. The dueling bandits ... experimental results show that the algorithm yields orders of magnitude improvement in performance compared to state-of-the-art dueling bandit algorithms.

  7. Online Censoring for Large-Scale Regressions with Application to Streaming Big Data.

    Science.gov (United States)

    Berberidis, Dimitris; Kekatos, Vassilis; Giannakis, Georgios B

    2016-08-01

    On par with data-intensive applications, the sheer size of modern linear regression problems creates an ever-growing demand for efficient solvers. Fortunately, a significant percentage of the data accrued can be omitted while maintaining a certain quality of statistical inference with an affordable computational budget. This work introduces means of identifying and omitting less informative observations in an online and data-adaptive fashion. Given streaming data, the related maximum-likelihood estimator is sequentially found using first- and second-order stochastic approximation algorithms. These schemes are well suited when data are inherently censored or when the aim is to save communication overhead in decentralized learning setups. In a different operational scenario, the task of joint censoring and estimation is put forth to solve large-scale linear regressions in a centralized setup. Novel online algorithms are developed enjoying simple closed-form updates and provable (non)asymptotic convergence guarantees. To attain desired censoring patterns and levels of dimensionality reduction, thresholding rules are investigated too. Numerical tests on real and synthetic datasets corroborate the efficacy of the proposed data-adaptive methods compared to data-agnostic random projection-based alternatives.
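
    A hedged sketch of the censoring idea for streaming least squares: a simple LMS-style recursion whose update is skipped whenever the residual magnitude falls below a censoring threshold, so only informative observations cost computation. The step size, threshold and data are illustrative, and the paper's estimators are maximum-likelihood-based stochastic approximations rather than this plain LMS.

```python
import numpy as np

def censored_lms(stream, dim, step=0.01, tau=0.5):
    """Streaming linear regression that skips (censors) low-residual observations."""
    theta = np.zeros(dim)
    used = 0
    for x, y in stream:
        r = y - x @ theta
        if abs(r) <= tau:              # uninformative observation: censor, no update
            continue
        theta += step * r * x          # LMS update on the retained observation
        used += 1
    return theta, used

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = np.array([1.0, -2.0, 0.5])
    data = [(x, x @ w + 0.1 * rng.standard_normal())
            for x in rng.standard_normal((5000, 3))]
    theta, used = censored_lms(iter(data), dim=3)
    print("estimate:", np.round(theta, 2), "updates used:", used, "of", len(data))
```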

  8. The 4A Metric Algorithm: A Unique E-Learning Engineering Solution Designed via Neuroscience to Counter Cheating and Reduce Its Recidivism by Measuring Student Growth through Systemic Sequential Online Learning

    Science.gov (United States)

    Osler, James Edward

    2016-01-01

    This paper provides a novel instructional methodology that is a unique E-Learning engineered "4A Metric Algorithm" designed to conceptually address the four main challenges faced by 21st century students, who are tempted to cheat in a myriad of higher education settings (face to face, hybrid, and online). The algorithmic online…

  9. Fractional Fourier domain optical image hiding using phase retrieval algorithm based on iterative nonlinear double random phase encoding.

    Science.gov (United States)

    Wang, Xiaogang; Chen, Wen; Chen, Xudong

    2014-09-22

    We present a novel image hiding method based on phase retrieval algorithm under the framework of nonlinear double random phase encoding in fractional Fourier domain. Two phase-only masks (POMs) are efficiently determined by using the phase retrieval algorithm, in which two cascaded phase-truncated fractional Fourier transforms (FrFTs) are involved. No undesired information disclosure, post-processing of the POMs or digital inverse computation appears in our proposed method. In order to achieve the reduction in key transmission, a modified image hiding method based on the modified phase retrieval algorithm and logistic map is further proposed in this paper, in which the fractional orders and the parameters with respect to the logistic map are regarded as encryption keys. Numerical results have demonstrated the feasibility and effectiveness of the proposed algorithms.

  10. Recruitment of Participants and Delivery of Online Mental Health Resources for Depressed Individuals Using Tumblr: Pilot Randomized Control Trial.

    Science.gov (United States)

    Kelleher, Erin; Moreno, Megan; Wilt, Megan Pumper

    2018-04-12

    Adolescents and young adults frequently post depression symptom references on social media; previous studies show positive associations between depression posts and self-reported depression symptoms. Depression is common among young people and this population often experiences many barriers to mental health care. Thus, social media may be a new resource to identify, recruit, and intervene with young people at risk for depression. The purpose of this pilot study was to test a social media intervention on Tumblr. We used social media to identify and recruit participants and to deliver the intervention of online depression resources. This randomized pilot intervention identified Tumblr users aged 15-23 who posted about depression using the search term "#depress". Eligible participants were recruited via Tumblr messages; consented participants completed depression surveys and were then randomized to an intervention of online mental health resources delivered via a Tumblr message, while control participants did not receive resources. Postintervention online surveys assessed resource access and usefulness, and control groups were asked whether they would have liked to receive resources. Analyses included t tests. A total of 25 participants met eligibility criteria. The mean age of the participants was 17.5 (SD 1.9) and 65% were female, with an average score on the Patient Health Questionnaire-9 of 17.5 (SD 5.9). Among the 11 intervention participants, 36% (4/11) reported accessing intervention resources and 64% (7/11) felt the intervention was acceptable. Among the 14 control participants, only 29% (4/14) reported that receiving resources online would be acceptable (P=.02). Participants suggested anonymity and ease of use as important characteristics in an online depression resource. The intervention was appropriately targeted to young people at risk for depression, and recruitment via Tumblr was feasible. Most participants in the intervention group felt the social media…

  11. A Randomized Controlled Trial of the Effects of Online Pain Management Education on Primary Care Providers.

    Science.gov (United States)

    Trudeau, Kimberlee J; Hildebrand, Cristina; Garg, Priyanka; Chiauzzi, Emil; Zacharoff, Kevin L

    2017-04-01

    To improve pain management practices, we developed an online interactive continuing education (CE) program for primary care providers (PCPs). This program follows the flow of clinical decision-making through simulated cases at critical pain treatment points along the pain treatment continuum. A randomized controlled trial was conducted to test the efficacy of this program. Participants were randomized to either the experimental condition or the control condition (online, text-based CE program). A total of 238 primary care providers were recruited through hospitals, professional newsletters, and pain conferences. Participants in both conditions reported significantly improved scores on knowledge (KNOW-PAIN 50), attitudes (CAOS), and pain practice behaviors (PPBS) scales over the four-month study. The experimental condition showed significantly greater change over time on the tamper-resistant formulations (TRFs) of opioids and dosing CAOS subscale compared with the control condition. Post hoc comparisons suggested that participants in the experimental condition were less likely to endorse use of opioid TRFs over time compared with the control condition. Exploratory analyses for potential moderators indicated a significant three-way interaction with time, condition, and discipline (i.e., physician vs other) for the impediments and concerns attitudes subscale and the early refill behaviors subscale. Post hoc comparisons indicated that physicians in the experimental condition exhibited the greatest change in attitudes and the nonphysicians exhibited the greatest change in reported behaviors in response to requests for early refills. Findings suggest online CE programs may positively impact PCPs' knowledge, attitudes, and pain practice behaviors but provide minimal evidence for the value of including interactivity. © 2016 American Academy of Pain Medicine. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  12. VES/TEM 1D joint inversion by using Controlled Random Search (CRS) algorithm

    Science.gov (United States)

    Bortolozo, Cassiano Antonio; Porsani, Jorge Luís; Santos, Fernando Acácio Monteiro dos; Almeida, Emerson Rodrigo

    2015-01-01

    Electrical (DC) and Transient Electromagnetic (TEM) soundings are used in a great number of environmental, hydrological, and mining exploration studies. Usually, data interpretation is accomplished by individual 1D models, often resulting in ambiguous models. This can be explained by the different ways the two methodologies sample the medium beneath the surface. Vertical Electrical Sounding (VES) is good at marking resistive structures, while Transient Electromagnetic sounding (TEM) is very sensitive to conductive structures. Another difference is that VES is better at detecting shallow structures, while TEM soundings can reach deeper layers. A Matlab program for 1D joint inversion of VES and TEM soundings was developed, aiming to exploit the best of both methods. The program uses the CRS - Controlled Random Search - algorithm for both single and 1D joint inversions. Inversion programs usually use Marquardt-type algorithms, but for electrical and electromagnetic methods these algorithms may find a local minimum or fail to converge. Initially, the algorithm was tested with synthetic data, and then it was used to invert experimental data from two places in the Paraná sedimentary basin (the Bebedouro and Pirassununga cities), both located in São Paulo State, Brazil. The geoelectric model obtained from 1D joint inversion of VES and TEM data is similar to the real geological conditions, and ambiguities were minimized. Results with synthetic and real data show that 1D VES/TEM joint inversion better recovers simulated models and shows great potential in geological studies, especially hydrogeological studies.
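
    Price's Controlled Random Search, which the inversion program relies on, is compact enough to sketch in a CRS2-like form: keep a population of random points within bounds, repeatedly reflect a randomly chosen point through the centroid of other randomly chosen points, and replace the current worst point whenever the trial improves on it. The population size, iteration count and Rosenbrock test function are illustrative assumptions, not the authors' inversion setup.

```python
import numpy as np

def crs_minimize(f, bounds, pop_size=50, iters=5000, rng=None):
    """Price's Controlled Random Search (CRS2-style) global minimizer."""
    rng = np.random.default_rng(rng)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(lo)
    pop = lo + rng.uniform(size=(pop_size, dim)) * (hi - lo)
    vals = np.array([f(p) for p in pop])
    for _ in range(iters):
        idx = rng.choice(pop_size, size=dim + 1, replace=False)
        centroid = pop[idx[:-1]].mean(axis=0)
        trial = 2.0 * centroid - pop[idx[-1]]            # reflect through the centroid
        if np.any(trial < lo) or np.any(trial > hi):
            continue                                     # discard out-of-bounds trials
        worst = np.argmax(vals)
        fv = f(trial)
        if fv < vals[worst]:                             # replace the current worst point
            pop[worst], vals[worst] = trial, fv
    best = np.argmin(vals)
    return pop[best], vals[best]

if __name__ == "__main__":
    rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
    x, v = crs_minimize(rosen, bounds=[(-2, 2), (-1, 3)], rng=0)
    print("best point:", np.round(x, 3), "value:", v)
```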

  13. Instance-specific algorithm configuration

    CERN Document Server

    Malitsky, Yuri

    2014-01-01

    This book presents a modular and expandable technique in the rapidly emerging research area of automatic configuration and selection of the best algorithm for the instance at hand. The author presents the basic model behind ISAC and then details a number of modifications and practical applications. In particular, he addresses automated feature generation, offline algorithm configuration for portfolio generation, algorithm selection, adaptive solvers, online tuning, and parallelization.    The author's related thesis was honorably mentioned (runner-up) for the ACP Dissertation Award in 2014,

  14. Design and implementation of adaptive inverse control algorithm for a micro-hand control system

    Directory of Open Access Journals (Sweden)

    Wan-Cheng Wang

    2014-01-01

    The Letter proposes an online-tuned adaptive inverse position control algorithm for a micro-hand. First, the configuration of the micro-hand is discussed. Next, a kinematic analysis of the micro-hand is carried out, and the relationship between the rotor position of the micro permanent-magnet synchronous motor and the tip of the micro-finger is derived. After that, an online-tuned adaptive inverse control algorithm, which includes an adaptive inverse model and an adaptive inverse controller, is designed. The online-tuned adaptive inverse control algorithm performs better than a proportional-integral control algorithm. In addition, to avoid damaging the object during the grasping process, an online force control algorithm is proposed as well. An embedded micro-computer, cRIO-9024, is used to realise the whole position control algorithm and the force control algorithm in software. As a result, the hardware circuit is very simple. Experimental results show that the proposed system provides fast transient responses, good load disturbance responses, good tracking responses and satisfactory grasping responses.

  15. Project connect online: randomized trial of an internet-based program to chronicle the cancer experience and facilitate communication.

    Science.gov (United States)

    Stanton, Annette L; Thompson, Elizabeth H; Crespi, Catherine M; Link, John S; Waisman, James R

    2013-09-20

    Evidence suggests that expressing emotions related to cancer and receiving interpersonal support can promote psychological and physical health in women diagnosed with breast cancer. However, adaptive expression of feelings and communication with one's social network can pose challenges for patients with cancer. We report on a randomized controlled trial of an intervention, Project Connect Online, for patients with breast cancer to create personal Web sites to chronicle their experience and communicate with their social network. Women (N = 88) diagnosed with breast cancer (any stage, any interval since diagnosis) were randomly assigned to participate in a 3-hour workshop for hands-on creation of personal Web sites with a follow-up call to facilitate Web site use, or to a waiting-list control. Assessed before randomization and 6 months after the intervention, dependent variables included depressive symptoms, positive and negative mood, cancer-related intrusive thoughts, and perceived cancer-related benefits in life appreciation and strengthened relationships. Relative to control participants, women randomly assigned to Project Connect Online evidenced significant benefit 6 months later on depressive symptoms, positive mood, and life appreciation, but not negative mood, perceived strengthened relationships, or intrusive thoughts. Treatment status moderated the intervention effects, such that women currently undergoing medical treatment for cancer benefitted significantly more from the intervention on depressive symptoms and positive mood than did women not receiving treatment. Findings suggest the promise of an intervention to facilitate the ability of women diagnosed with breast cancer to chronicle their experience and communicate with their social network via the Internet.

  16. Hydraulic Pump Fault Diagnosis Control Research Based on PARD-BP Algorithm

    Directory of Open Access Journals (Sweden)

    LV Dongmei

    2014-12-01

    Combining the working principle and failure mechanism of the RZU2000HM hydraulic press with its collected fault cases, the oil-pressure behaviour and the fault phenomena of the hydraulic power unit, a swash-plate axial piston pump whose faults directly affect the dynamic performance of the oil pressure and flow, were studied with some emphasis. In order to make the hydraulic power unit work reliably, the PARD-BP (Pruning Algorithm based on Random Degree) neural network fault algorithm was introduced, with the swash-plate axial piston pump's vibration fault sample data used as input and the fault mode matrix used as target output, so that the PARD-BP algorithm could be trained. Finally, the vibration results were verified by a vibration modal test, which showed that the biggest upward peaks of the vacuum pump in the X-, Y- and Z-directions fell by 30.49 %, 21.13 % and 18.73 %, respectively, verifying that the PARD-BP algorithm can be used for online fault detection and diagnosis of the hydraulic pump.

  17. New Algorithm of Automatic Complex Password Generator Employing Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Sura Jasim Mohammed

    2018-01-01

    Due to the increase in information sharing, internet popularization, e-commerce transactions, and data transfer, security and authenticity have become important and necessary subjects. In this paper an automated scheme is proposed to generate a strong and complex password. It is based on entering initial data such as text (meaningful and simple information or not), encoding it, and then employing a genetic algorithm, using its crossover and mutation operations, to generate data different from the entered data. The generated password is non-guessable and can be used in many different applications and internet services such as social networks, secured systems, distributed systems, and online services. The proposed password generator achieves diffusion, randomness, and confusion, which are necessary, required and targeted properties of the resulting password. In addition, the length of the generated password differs from the length of the initial data, and any simple change or modification in the initial data produces a clearly different generated password. The proposed work was implemented in the Visual Basic programming language.
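
    The paper's exact encoding is not given in this record, so the following is only a sketch of the general recipe: derive an initial population from the seed text, then apply crossover and mutation over a printable alphabet, keeping candidates under a simple fitness that rewards character-class diversity and dissimilarity from the seed. The alphabet, fitness function and parameters are assumptions, and the original work was implemented in Visual Basic rather than Python.

```python
import random
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def fitness(chars, seed):
    """Reward character-class diversity and dissimilarity from the seed text."""
    classes = sum(any(c in s for c in chars)
                  for s in (string.ascii_lowercase, string.ascii_uppercase,
                            string.digits, string.punctuation))
    differs = sum(a != b for a, b in zip(chars, seed))
    return classes * len(chars) + differs

def crossover(a, b):
    """Single-point crossover of two equal-length character lists."""
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(chars, rate=0.2):
    """Replace each position with a random printable symbol with probability `rate`."""
    return [random.choice(ALPHABET) if random.random() < rate else c for c in chars]

def generate_password(seed_text, length=16, pop_size=30, generations=200):
    seed = list((seed_text * (length // max(len(seed_text), 1) + 1))[:length])
    population = [mutate(seed, rate=0.9) for _ in range(pop_size)]
    for _ in range(generations):
        a, b = random.sample(population, 2)
        child = mutate(crossover(a, b))
        worst = min(range(pop_size), key=lambda i: fitness(population[i], seed))
        if fitness(child, seed) > fitness(population[worst], seed):
            population[worst] = child             # replace the weakest member
    return "".join(max(population, key=lambda c: fitness(c, seed)))

if __name__ == "__main__":
    print(generate_password("correct horse battery staple"))
```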

  18. Application of random number generators in genetic algorithms to improve rainfall-runoff modelling

    Czech Academy of Sciences Publication Activity Database

    Chlumecký, M.; Buchtele, Josef; Richta, K.

    2017-01-01

    Roč. 553, October (2017), s. 350-355 ISSN 0022-1694 Institutional support: RVO:67985874 Keywords : genetic algorithm * optimisation * rainfall-runoff modeling * random generator Subject RIV: DA - Hydrology ; Limnology OBOR OECD: Hydrology Impact factor: 3.483, year: 2016 https://ac.els-cdn.com/S0022169417305516/1-s2.0-S0022169417305516-main.pdf?_tid=fa1bad8a-bd6a-11e7-8567-00000aab0f27&acdnat=1509365462_a1335d3d997e9eab19e23b1eee977705

  19. On-line monitoring of extraction process of Flos Lonicerae Japonicae using near infrared spectroscopy combined with synergy interval PLS and genetic algorithm

    Science.gov (United States)

    Yang, Yue; Wang, Lei; Wu, Yongjiang; Liu, Xuesong; Bi, Yuan; Xiao, Wei; Chen, Yong

    2017-07-01

    There is a growing need for effective on-line process monitoring during the manufacture of traditional Chinese medicine (TCM) to ensure quality consistency. In this study, the potential of near infrared (NIR) spectroscopy to monitor the extraction process of Flos Lonicerae Japonicae was investigated. A new algorithm, synergy interval PLS with genetic algorithm (Si-GA-PLS), was proposed for modeling. Four different PLS models, namely Full-PLS, Si-PLS, GA-PLS, and Si-GA-PLS, were established, and their performances in predicting two quality parameters (total acid and soluble solid contents) were compared. The Si-GA-PLS model gave the best results because it combines the strengths of Si-PLS and GA. For Si-GA-PLS, the determination coefficient (Rp2) and root-mean-square error of prediction (RMSEP) were 0.9561 and 147.6544 μg/ml for total acid, and 0.9062 and 0.1078% for soluble solid contents, respectively. The overall results demonstrate that NIR spectroscopy combined with Si-GA-PLS calibration is a reliable and non-destructive alternative method for on-line monitoring of the extraction process of TCM at the production scale.

  20. Identifying and Analyzing Novel Epilepsy-Related Genes Using Random Walk with Restart Algorithm

    Directory of Open Access Journals (Sweden)

    Wei Guo

    2017-01-01

    As a pathological condition, epilepsy is caused by abnormal neuronal discharge in the brain, which temporarily disrupts cerebral function. Epilepsy is a chronic disease that occurs at all ages and can seriously affect patients' personal lives; thus, it is essential to develop effective medicines or instruments to treat the disease. Identifying epilepsy-related genes is essential to understanding and treating the disease, because the proteins encoded by these genes are candidate drug targets. In this study, a pioneering computational workflow was proposed to predict novel epilepsy-related genes using the random walk with restart (RWR) algorithm. As reported in the literature, the RWR algorithm often produces a number of false-positive genes, so in this study a permutation test and functional association tests were implemented to filter the genes identified by RWR, greatly reducing the number of suspected genes and resulting in only thirty-three novel epilepsy genes. Finally, these novel genes were analyzed against recently published literature. Our findings indicate that all of the novel genes are closely related to epilepsy. It is believed that the proposed workflow can also be applied to identify genes related to other diseases and deepen our understanding of their mechanisms.
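
    The core random walk with restart iteration, p ← (1 − r)Wp + r p₀ with a column-normalized adjacency matrix W and the seed genes as the restart distribution p₀, is easy to sketch; the toy network and restart probability below are illustrative, and the permutation and functional-association filters described above are omitted.

```python
import numpy as np

def random_walk_with_restart(adj, seeds, restart=0.7, tol=1e-10, max_iter=1000):
    """Steady-state visiting probabilities p = (1-r) W p + r p0 on a graph."""
    adj = np.asarray(adj, dtype=float)
    W = adj / adj.sum(axis=0, keepdims=True)          # column-normalized transition matrix
    p0 = np.zeros(adj.shape[0])
    p0[list(seeds)] = 1.0 / len(seeds)                # restart to the seed genes
    p = p0.copy()
    for _ in range(max_iter):
        p_next = (1.0 - restart) * W @ p + restart * p0
        if np.abs(p_next - p).sum() < tol:
            break
        p = p_next
    return p

if __name__ == "__main__":
    # toy 6-node network; nodes 0 and 1 play the role of known disease genes
    adj = np.array([[0, 1, 1, 0, 0, 0],
                    [1, 0, 1, 1, 0, 0],
                    [1, 1, 0, 0, 1, 0],
                    [0, 1, 0, 0, 1, 1],
                    [0, 0, 1, 1, 0, 1],
                    [0, 0, 0, 1, 1, 0]])
    scores = random_walk_with_restart(adj, seeds=[0, 1])
    print(np.round(scores, 3))                        # candidate ranking for nodes 2-5
```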

  1. A Randomized Crossover Design to Assess Learning Impact and Student Preference for Active and Passive Online Learning Modules.

    Science.gov (United States)

    Prunuske, Amy J; Henn, Lisa; Brearley, Ann M; Prunuske, Jacob

    Medical education increasingly involves online learning experiences to facilitate the standardization of curriculum across time and space. In class, delivering material by lecture is less effective at promoting student learning than engaging students in an active learning experience, and it is unclear whether this difference also exists online. We sought to evaluate medical student preferences for online lecture or online active learning formats and the impact of format on short- and long-term learning gains. Students participated online in either lecture or constructivist learning activities in a first-year neurologic sciences course at a US medical school. In 2012, students selected which format to complete, and in 2013, students were randomly assigned in a crossover fashion to the modules. In the first iteration, students strongly preferred the lecture modules and valued being told "what they need to know" rather than figuring it out independently. In the crossover iteration, learning gains and knowledge retention were found to be equivalent regardless of format, and students uniformly demonstrated a strong preference for the lecture format, which also on average took less time to complete. When given a choice for online modules, students prefer passive lecture rather than completing constructivist activities, and in the time-limited environment of medical school, this choice results in similar performance on multiple-choice examinations with less time invested. Instructors need to look more carefully at whether assessments and learning strategies are helping students to obtain self-directed learning skills and to consider strategies to help students learn to value active learning in an online environment.

  2. Sequential Change-Point Detection via Online Convex Optimization

    Directory of Open Access Journals (Sweden)

    Yang Cao

    2018-02-01

    Sequential change-point detection when the distribution parameters are unknown is a fundamental problem in statistics and machine learning. When the post-change parameters are unknown, we consider a set of detection procedures based on sequential likelihood ratios with non-anticipating estimators constructed using online convex optimization algorithms such as online mirror descent, which provides a more versatile approach to tackling complex situations where recursive maximum likelihood estimators cannot be found. When the underlying distributions belong to an exponential family and the estimators satisfy the logarithmic regret property, we show that this approach is nearly second-order asymptotically optimal. This means that the upper bound for the false alarm rate of the algorithm (measured by the average run length) meets the lower bound asymptotically up to a log-log factor as the threshold tends to infinity. Our proof is achieved by making a connection between sequential change-point detection and online convex optimization and leveraging the logarithmic regret bound property of the online mirror descent algorithm. Numerical and real-data examples validate our theory.
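
    The general recipe — a likelihood-ratio statistic whose unknown post-change parameter is replaced by a non-anticipating online estimate — can be sketched for a Gaussian mean shift with unit variance. The estimator below is a plain running mean rather than an online mirror descent update, and the threshold, window and data are illustrative, so treat this as an illustration of the structure only.

```python
import numpy as np

def detect_change(x, threshold=12.0, window=100):
    """Sequential detection of a positive Gaussian mean shift (known unit variance).

    For each candidate change time k, the post-change mean used at time i is the
    running mean of x[k:i], never including the current sample, so the statistic
    is a legitimate (non-anticipating) sequential likelihood ratio.
    Returns the first alarm time, or None if no alarm is raised.
    """
    x = np.asarray(x, dtype=float)
    for t in range(len(x)):
        best = 0.0
        for k in range(max(0, t - window), t + 1):
            s, n, mu_hat, llr = 0.0, 0, 0.0, 0.0
            for i in range(k, t + 1):
                llr += mu_hat * x[i] - 0.5 * mu_hat**2   # log LR of N(mu_hat,1) vs N(0,1)
                s += x[i]
                n += 1
                mu_hat = s / n                            # estimate uses samples up to i only
            best = max(best, llr)
        if best > threshold:
            return t
    return None

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = np.r_[rng.standard_normal(200), 1.0 + rng.standard_normal(80)]
    print("change introduced at 200, alarm raised at:", detect_change(data))
```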

  3. Mean curvature and texture constrained composite weighted random walk algorithm for optic disc segmentation towards glaucoma screening.

    Science.gov (United States)

    Panda, Rashmi; Puhan, N B; Panda, Ganapati

    2018-02-01

    Accurate optic disc (OD) segmentation is an important step in obtaining cup-to-disc ratio-based glaucoma screening using fundus imaging. It is a challenging task because of the subtle OD boundary, blood vessel occlusion and intensity inhomogeneity. In this Letter, the authors propose an improved version of the random walk algorithm for OD segmentation to tackle such challenges. The algorithm incorporates the mean curvature and Gabor texture energy features to define the new composite weight function to compute the edge weights. Unlike the deformable model-based OD segmentation techniques, the proposed algorithm remains unaffected by curve initialisation and local energy minima problem. The effectiveness of the proposed method is verified with DRIVE, DIARETDB1, DRISHTI-GS and MESSIDOR database images using the performance measures such as mean absolute distance, overlapping ratio, dice coefficient, sensitivity, specificity and precision. The obtained OD segmentation results and quantitative performance measures show robustness and superiority of the proposed algorithm in handling the complex challenges in OD segmentation.

  4. Fitting Social Network Models Using Varying Truncation Stochastic Approximation MCMC Algorithm

    KAUST Repository

    Jin, Ick Hoon

    2013-10-01

    The exponential random graph model (ERGM) plays a major role in social network analysis. However, parameter estimation for the ERGM is a hard problem due to the intractability of its normalizing constant and the model degeneracy. The existing algorithms, such as Monte Carlo maximum likelihood estimation (MCMLE) and stochastic approximation, often fail for this problem in the presence of model degeneracy. In this article, we introduce the varying truncation stochastic approximation Markov chain Monte Carlo (SAMCMC) algorithm to tackle this problem. The varying truncation mechanism enables the algorithm to choose an appropriate starting point and an appropriate gain factor sequence, and thus to produce a reasonable parameter estimate for the ERGM even in the presence of model degeneracy. The numerical results indicate that the varying truncation SAMCMC algorithm can significantly outperform the MCMLE and stochastic approximation algorithms: for degenerate ERGMs, MCMLE and stochastic approximation often fail to produce any reasonable parameter estimates, while SAMCMC can; for nondegenerate ERGMs, SAMCMC can work as well as or better than MCMLE and stochastic approximation. The data and source codes used for this article are available online as supplementary materials. © 2013 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.

  5. Evolutionary online behaviour learning and adaptation in real robots.

    Science.gov (United States)

    Silva, Fernando; Correia, Luís; Christensen, Anders Lyhne

    2017-07-01

    Online evolution of behavioural control on real robots is an open-ended approach to autonomous learning and adaptation: robots have the potential to automatically learn new tasks and to adapt to changes in environmental conditions, or to failures in sensors and/or actuators. However, studies have so far almost exclusively been carried out in simulation because evolution in real hardware has required several days or weeks to produce capable robots. In this article, we successfully evolve neural network-based controllers in real robotic hardware to solve two single-robot tasks and one collective robotics task. Controllers are evolved either from random solutions or from solutions pre-evolved in simulation. In all cases, capable solutions are found in a timely manner (1 h or less). Results show that more accurate simulations may lead to higher-performing controllers, and that completing the optimization process in real robots is meaningful, even if solutions found in simulation differ from solutions in reality. We furthermore demonstrate for the first time the adaptive capabilities of online evolution in real robotic hardware, including robots able to overcome faults injected in the motors of multiple units simultaneously, and to modify their behaviour in response to changes in the task requirements. We conclude by assessing the contribution of each algorithmic component on the performance of the underlying evolutionary algorithm.

  6. Ontology-based topic clustering for online discussion data

    Science.gov (United States)

    Wang, Yongheng; Cao, Kening; Zhang, Xiaoming

    2013-03-01

    With the rapid development of online communities, mining and extracting quality knowledge from online discussions becomes very important for the industrial and marketing sector, as well as for e-commerce applications and government. Most of the existing techniques model a discussion as a social network of users represented by a user-based graph without considering the content of the discussion. In this paper we propose a new multilayered model to analyse online discussions. The user-based and message-based representations are combined in this model. A novel clustering method based on frequent concept sets is used to cluster the original online discussion network into a topic space. Domain ontology is used to improve the clustering accuracy. Parallel methods are also used to make the algorithms scalable to very large data sets. Our experimental study shows that the model and algorithms are effective when analyzing large scale online discussion data.

  7. A Novel Compressed Sensing Method for Magnetic Resonance Imaging: Exponential Wavelet Iterative Shrinkage-Thresholding Algorithm with Random Shift

    Directory of Open Access Journals (Sweden)

    Yudong Zhang

    2016-01-01

    Full Text Available Aim. It can help improve the hospital throughput to accelerate magnetic resonance imaging (MRI) scanning. Patients will benefit from less waiting time. Task. In the last decade, various rapid MRI techniques on the basis of compressed sensing (CS) were proposed. However, both computation time and reconstruction quality of traditional CS-MRI did not meet the requirement of clinical use. Method. In this study, a novel method was proposed with the name of exponential wavelet iterative shrinkage-thresholding algorithm with random shift (abbreviated as EWISTARS). It is composed of three successful components: (i) exponential wavelet transform, (ii) iterative shrinkage-thresholding algorithm, and (iii) random shift. Results. Experimental results validated that, compared to state-of-the-art approaches, EWISTARS obtained the least mean absolute error, the least mean-squared error, and the highest peak signal-to-noise ratio. Conclusion. EWISTARS is superior to state-of-the-art approaches.

  8. A Novel Compressed Sensing Method for Magnetic Resonance Imaging: Exponential Wavelet Iterative Shrinkage-Thresholding Algorithm with Random Shift

    Science.gov (United States)

    Zhang, Yudong; Yang, Jiquan; Yang, Jianfei; Liu, Aijun; Sun, Ping

    2016-01-01

    Aim. It can help improve the hospital throughput to accelerate magnetic resonance imaging (MRI) scanning. Patients will benefit from less waiting time. Task. In the last decade, various rapid MRI techniques on the basis of compressed sensing (CS) were proposed. However, both computation time and reconstruction quality of traditional CS-MRI did not meet the requirement of clinical use. Method. In this study, a novel method was proposed with the name of exponential wavelet iterative shrinkage-thresholding algorithm with random shift (abbreviated as EWISTARS). It is composed of three successful components: (i) exponential wavelet transform, (ii) iterative shrinkage-thresholding algorithm, and (iii) random shift. Results. Experimental results validated that, compared to state-of-the-art approaches, EWISTARS obtained the least mean absolute error, the least mean-squared error, and the highest peak signal-to-noise ratio. Conclusion. EWISTARS is superior to state-of-the-art approaches. PMID:27066068

  9. Online evolution of robot behaviour

    OpenAIRE

    Silva, Fernando Goulart da

    2012-01-01

    Master's thesis in Informatics Engineering (Interaction and Knowledge), presented to the Universidade de Lisboa through the Faculdade de Ciências, 2012. In this dissertation, we propose and evaluate two novel approaches to the online synthesis of neural controllers for autonomous robots. The first approach is odNEAT, an online, distributed, and decentralized version of NeuroEvolution of Augmenting Topologies (NEAT). odNEAT is an algorithm for online evolution in groups of embodied agents such a...

  10. Super-Encryption Implementation Using Monoalphabetic Algorithm and XOR Algorithm for Data Security

    Science.gov (United States)

    Rachmawati, Dian; Andri Budiman, Mohammad; Aulia, Indra

    2018-03-01

    The exchange of data that occurs offline and online is very vulnerable to the threat of data theft. In general, cryptography is the science and art of maintaining data secrecy. Encryption is a cryptographic operation in which data is transformed into ciphertext, which is unreadable and meaningless so that it cannot be read or understood by other parties. In super-encryption, two or more encryption algorithms are combined to make the result more secure. In this work, the Monoalphabetic algorithm and the XOR algorithm are combined to form a super-encryption. The Monoalphabetic algorithm works by changing a particular letter into a new letter based on an existing keyword, while the XOR algorithm works by applying the logical XOR operation. Since the Monoalphabetic algorithm is a classical cryptographic algorithm and the XOR algorithm is a modern cryptographic algorithm, this scheme is expected to be both easy to implement and more secure. The combination of the two algorithms is capable of securing the data and restoring it to its original form (plaintext), so data integrity is still ensured.
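
    As a concrete illustration of the scheme described above, the sketch below chains a monoalphabetic substitution pass with an XOR pass. The keyed alphabet, the XOR key and the way the two stages are composed are illustrative assumptions, not the authors' exact construction.

      import string

      ALPHABET = string.ascii_uppercase

      def mono_encrypt(plaintext, key_alphabet):
          """Monoalphabetic substitution: each letter maps to the letter at the
          same position in the keyed alphabet."""
          return plaintext.upper().translate(str.maketrans(ALPHABET, key_alphabet))

      def mono_decrypt(ciphertext, key_alphabet):
          return ciphertext.translate(str.maketrans(key_alphabet, ALPHABET))

      def xor_bytes(data, key):
          """XOR each byte with the repeating key (XOR is its own inverse)."""
          return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

      # Illustrative keys (not from the paper)
      SUB_KEY = "QWERTYUIOPASDFGHJKLZXCVBNM"   # a permutation of A..Z
      XOR_KEY = b"\x5a\xa3\x17"

      def super_encrypt(plaintext):
          stage1 = mono_encrypt(plaintext, SUB_KEY)           # classical layer
          return xor_bytes(stage1.encode("ascii"), XOR_KEY)   # modern layer

      def super_decrypt(ciphertext):
          stage1 = xor_bytes(ciphertext, XOR_KEY).decode("ascii")
          return mono_decrypt(stage1, SUB_KEY)

      c = super_encrypt("ATTACKATDAWN")
      assert super_decrypt(c) == "ATTACKATDAWN"   # plaintext is fully recoverable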

  11. An online spaced-education game among clinicians improves their patients' time to blood pressure control: a randomized controlled trial.

    Science.gov (United States)

    Kerfoot, B Price; Turchin, Alexander; Breydo, Eugene; Gagnon, David; Conlin, Paul R

    2014-05-01

    Many patients with high blood pressure (BP) do not have antihypertensive medications appropriately intensified at clinician visits. We investigated whether an online spaced-education (SE) game among primary care clinicians can decrease time to BP target among their hypertensive patients. A 2-arm randomized trial was conducted over 52 weeks among primary care clinicians at 8 hospitals. Educational content consisted of 32 validated multiple-choice questions with explanations on hypertension management. Providers were randomized into 2 groups: SE clinicians were enrolled in the game, whereas control clinicians received identical educational content in an online posting. SE game clinicians were e-mailed 1 question every 3 days. Adaptive game mechanics resent questions in 12 or 24 days if answered incorrectly or correctly, respectively. Clinicians retired questions by answering each correctly twice consecutively. Posting of relative performance among peers fostered competition. The primary outcome measure was time to BP target. The SE game was completed by 87% of clinicians (48/55), whereas 84% of control clinicians (47/56) read the online posting. In multivariable analysis of 17 866 hypertensive periods among 14 336 patients, the hazard ratio for time to BP target in the SE game cohort was 1.043 (95% confidence interval, 1.007-1.081; P=0.018). The number of hypertensive episodes needed to treat to normalize one additional patient's BP was 67.8. The number of clinicians needed to teach to achieve this was 0.43. An online SE game among clinicians generated a modest but significant reduction in the time to BP target among their hypertensive patients. http://www.clinicaltrials.gov. Unique identifier: NCT00904007. © 2014 American Heart Association, Inc.

  12. Online Hashing for Scalable Remote Sensing Image Retrieval

    Directory of Open Access Journals (Sweden)

    Peng Li

    2018-05-01

    Full Text Available Recently, hashing-based large-scale remote sensing (RS) image retrieval has attracted much attention. Many new hashing algorithms have been developed and successfully applied to fast RS image retrieval tasks. However, there exists an important problem rarely addressed in the research literature of RS image hashing. The RS images are practically produced in a streaming manner in many real-world applications, which means the data distribution keeps changing over time. Most existing RS image hashing methods are batch-based models whose hash functions are learned once and for all and kept fixed all the time. Therefore, the pre-trained hash functions might not fit the ever-growing new RS images. Moreover, the batch-based models have to load all the training images into memory for model learning, which consumes many computing and memory resources. To address the above deficiencies, we propose a new online hashing method, which learns and adapts its hashing functions with respect to newly incoming RS images by means of a novel online partial random learning scheme. Our hash model is updated in a sequential mode such that the representative power of the learned binary codes for RS images is improved accordingly. Moreover, benefiting from the online learning strategy, our proposed hashing approach is quite suitable for scalable real-world remote sensing image retrieval. Extensive experiments on two large-scale RS image databases under the online setting demonstrated the efficacy and effectiveness of the proposed method.

  13. On-line experimental validation of a model-based diagnostic algorithm dedicated to a solid oxide fuel cell system

    Science.gov (United States)

    Polverino, Pierpaolo; Esposito, Angelo; Pianese, Cesare; Ludwig, Bastian; Iwanschitz, Boris; Mai, Andreas

    2016-02-01

    In the current energetic scenario, Solid Oxide Fuel Cells (SOFCs) exhibit appealing features which make them suitable for environmental-friendly power production, especially for stationary applications. An example is represented by micro-combined heat and power (μ-CHP) generation units based on SOFC stacks, which are able to produce electric and thermal power with high efficiency and low pollutant and greenhouse gases emissions. However, the main limitations to their diffusion into the mass market consist in high maintenance and production costs and short lifetime. To improve these aspects, the current research activity focuses on the development of robust and generalizable diagnostic techniques, aimed at detecting and isolating faults within the entire system (i.e. SOFC stack and balance of plant). Coupled with appropriate recovery strategies, diagnosis can prevent undesired system shutdowns during faulty conditions, with consequent lifetime increase and maintenance costs reduction. This paper deals with the on-line experimental validation of a model-based diagnostic algorithm applied to a pre-commercial SOFC system. The proposed algorithm exploits a Fault Signature Matrix based on a Fault Tree Analysis and improved through fault simulations. The algorithm is characterized on the considered system and it is validated by means of experimental induction of faulty states in controlled conditions.

  14. Remote-online case-based learning: A comparison of remote-online and face-to-face, case-based learning - a randomized controlled trial.

    Science.gov (United States)

    Nicklen, Peter; Keating, Jenny L; Paynter, Sophie; Storr, Michael; Maloney, Stephen

    2016-01-01

    Case-based learning (CBL) is an educational approach where students work in small, collaborative groups to solve problems. Computer assisted learning (CAL) is the implementation of computer technology in education. The purpose of this study was to compare the effects of a remote-online CBL (RO-CBL) with traditional face-to-face CBL on the learning outcomes of undergraduate physiotherapy students. Participants were randomized to either the control (face-to-face CBL) or to the CAL intervention (RO-CBL). The entire 3rd year physiotherapy cohort (n = 41) at Monash University, Victoria, Australia, were invited to participate in the randomized controlled trial. Outcomes included a postintervention multiple-choice test evaluating the knowledge gained from the CBL, a self-assessment of learning based on examinable learning objectives and student satisfaction with the CBL. In addition, a focus group was conducted investigating perceptions and responses to the online format. Thirty-eight students (control n = 19, intervention n = 19) participated in two CBL sessions and completed the outcome assessments. CBL median scores for the postintervention multiple-choice test were comparable (Wilcoxon rank sum P = 0.61) (median/10 [range]: intervention group 9 [8-10], control group 10 [7-10]). Of the 15 examinable learning objectives, eight were significantly in favor of the control group, suggesting a greater perceived depth of learning. Eighty-four percent of students (16/19) disagreed with the statement "I enjoyed the method of CBL delivery." Key themes identified from the focus group included risks associated with the implementation of, challenges of communicating in, and flexibility offered by, web-based programs. RO-CBL appears to provide students with a comparable learning experience to traditional CBL. Procedural and infrastructure factors need to be addressed in future studies to counter student dissatisfaction and decreased perceived depth of learning.

  15. Advances in randomized parallel computing

    CERN Document Server

    Rajasekaran, Sanguthevar

    1999-01-01

    The technique of randomization has been employed to solve numerous problems of computing both sequentially and in parallel. Examples of randomized algorithms that are asymptotically better than their deterministic counterparts in solving various fundamental problems abound. Randomized algorithms have the advantages of simplicity and better performance both in theory and often in practice. This book is a collection of articles written by renowned experts in the area of randomized parallel computing. A brief introduction to randomized algorithms: in the analysis of algorithms, at least three different measures of performance can be used: the best case, the worst case, and the average case. Often, the average case run time of an algorithm is much smaller than the worst case. For instance, the worst case run time of Hoare's quicksort is O(n^2), whereas its average case run time is only O(n log n). The average case analysis is conducted with an assumption on the input space. The assumption made to arrive at t...

  16. The HTM Spatial Pooler—A Neocortical Algorithm for Online Sparse Distributed Coding

    Directory of Open Access Journals (Sweden)

    Yuwei Cui

    2017-11-01

    Full Text Available Hierarchical temporal memory (HTM) provides a theoretical framework that models several key computational principles of the neocortex. In this paper, we analyze an important component of HTM, the HTM spatial pooler (SP). The SP models how neurons learn feedforward connections and form efficient representations of the input. It converts arbitrary binary input patterns into sparse distributed representations (SDRs) using a combination of competitive Hebbian learning rules and homeostatic excitability control. We describe a number of key properties of the SP, including fast adaptation to changing input statistics, improved noise robustness through learning, efficient use of cells, and robustness to cell death. In order to quantify these properties we develop a set of metrics that can be directly computed from the SP outputs. We show how the properties are met using these metrics and targeted artificial simulations. We then demonstrate the value of the SP in a complete end-to-end real-world HTM system. We discuss the relationship with neuroscience and previous studies of sparse coding. The HTM spatial pooler represents a neurally inspired algorithm for learning sparse representations from noisy data streams in an online fashion.

  17. The HTM Spatial Pooler-A Neocortical Algorithm for Online Sparse Distributed Coding.

    Science.gov (United States)

    Cui, Yuwei; Ahmad, Subutai; Hawkins, Jeff

    2017-01-01

    Hierarchical temporal memory (HTM) provides a theoretical framework that models several key computational principles of the neocortex. In this paper, we analyze an important component of HTM, the HTM spatial pooler (SP). The SP models how neurons learn feedforward connections and form efficient representations of the input. It converts arbitrary binary input patterns into sparse distributed representations (SDRs) using a combination of competitive Hebbian learning rules and homeostatic excitability control. We describe a number of key properties of the SP, including fast adaptation to changing input statistics, improved noise robustness through learning, efficient use of cells, and robustness to cell death. In order to quantify these properties we develop a set of metrics that can be directly computed from the SP outputs. We show how the properties are met using these metrics and targeted artificial simulations. We then demonstrate the value of the SP in a complete end-to-end real-world HTM system. We discuss the relationship with neuroscience and previous studies of sparse coding. The HTM spatial pooler represents a neurally inspired algorithm for learning sparse representations from noisy data streams in an online fashion.
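
    A heavily simplified sketch of the two ingredients named in the abstract above: competitive selection of a sparse set of winning columns (k-winners-take-all) followed by Hebbian-style permanence updates, with a connection threshold on the permanences. Dimensions, thresholds and learning increments are placeholder assumptions, not the parameters of the actual HTM spatial pooler.

      import numpy as np

      rng = np.random.default_rng(42)

      N_INPUTS, N_COLUMNS, ACTIVE_COLUMNS = 100, 50, 5   # placeholder sizes
      CONNECTED, P_INC, P_DEC = 0.5, 0.05, 0.02          # placeholder learning parameters

      # Each column keeps a permanence value toward every input bit;
      # a synapse counts as "connected" when its permanence exceeds CONNECTED.
      permanences = rng.uniform(0.3, 0.7, size=(N_COLUMNS, N_INPUTS))

      def spatial_pooler_step(input_bits, learn=True):
          """One step: overlap scores -> k-winners-take-all -> Hebbian update.
          Returns the sparse distributed representation as a binary vector."""
          connected = (permanences >= CONNECTED).astype(int)
          overlaps = connected @ input_bits                 # feedforward overlap per column
          winners = np.argsort(overlaps)[-ACTIVE_COLUMNS:]  # competitive selection
          sdr = np.zeros(N_COLUMNS, dtype=int)
          sdr[winners] = 1
          if learn:
              # Hebbian rule: winning columns reinforce synapses to active inputs
              # and weaken synapses to inactive inputs.
              for c in winners:
                  permanences[c] += np.where(input_bits > 0, P_INC, -P_DEC)
              np.clip(permanences, 0.0, 1.0, out=permanences)
          return sdr

      pattern = (rng.random(N_INPUTS) < 0.2).astype(int)
      print(spatial_pooler_step(pattern).sum())   # always ACTIVE_COLUMNS active bits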

  18. FPGA helix tracking algorithm for PANDA

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Yutie; Galuska, Martin; Gessler, Thomas; Hu, Jifeng; Kuehn, Wolfgang; Lange, Jens Soeren; Muenchow, David; Spruck, Bjoern [II. Physikalisches, Giessen University (Germany); Ye, Hua [II. Physikalisches, Giessen University (Germany); Institute of High Energy Physics, Beijing (China); Collaboration: PANDA-Collaboration

    2014-07-01

    The PANDA detector is a general-purpose detector for physics with high luminosity cooled antiproton beams, planned to operate at the FAIR facility in Darmstadt, Germany. The central detector includes a silicon Micro Vertex Detector (MVD) and a Straw Tube Tracker (STT). Without any hardware trigger, large amounts of raw data are streaming into the data acquisition system. The data reduction task is performed in the online system by reconstruction algorithms programmed in VHDL (Very High Speed Integrated Circuit Hardware Description Language) on FPGAs (Field Programmable Gate Arrays) as the first level and on a farm of GPUs or PCs as the second level. One important part of the system is the online track reconstruction. In this presentation, an online track finding algorithm for helix track reconstruction in the solenoidal field is shown. A performance study using C++ and the status of the VHDL implementation are presented.

  19. R and D study on on-line criticality surveillance system (V)

    International Nuclear Information System (INIS)

    Yamada, Sumasu

    2001-02-01

    In view of the necessity and importance of criticality surveillance systems for ensuring the safety of nuclear fuel manufacturing and reprocessing plants, 5-year basic studies and 4-year R and D studies on an on-line criticality surveillance system were carried out since 1991. This report is a summary of these series of studies. Noticing that the signal from a neutron detector is random in principle, these series of studies aimed to accumulate knowledge for developing an inexpensive criticality surveillance system with quick response based on the Auto-Regressive Moving Average (ARMA) model identification algorithm. During the five-year basic studies on a criticality surveillance system since 1991, we obtained knowledge required for developing a criticality surveillance system based on the ARMA model identification algorithm through 1) studies on recursive ARMA model identification algorithms most appropriate for estimating subcriticality from time series data under a steady state condition, 2) studies on pre-processing of the signal from neutron detectors, 3) developing a new recursive ARMA model identification algorithm with small time delay to estimate time-dependent subcriticality, 4) proposing a basic concept for the elements required for an on-line criticality surveillance system, and 5) numerical analysis of data from the DCA experiments. During the next four-year R and D studies on a criticality surveillance system since 1996, we 1) proposed modules required for an on-line criticality surveillance system, 2) revealed the effectiveness of an adaptive digital filter (ADF) algorithm as an important redundancy to the recursive ARMA model identification algorithm to be used in the signal processing module through numerical analysis of real data, 3) proposed a module of the Feynman-α method over the γ ray signal and a fast signal processing module for the γ ray signal, and 4) developed a line-noise removal filter (Notch filter) and revealed its effectiveness for the DCA data corrupted with power line noise.

  20. Real-time Raman spectroscopy for in vivo, online gastric cancer diagnosis during clinical endoscopic examination.

    Science.gov (United States)

    Duraipandian, Shiyamala; Sylvest Bergholt, Mads; Zheng, Wei; Yu Ho, Khek; Teh, Ming; Guan Yeoh, Khay; Bok Yan So, Jimmy; Shabbir, Asim; Huang, Zhiwei

    2012-08-01

    Optical spectroscopic techniques including reflectance, fluorescence and Raman spectroscopy have shown promising potential for in vivo precancer and cancer diagnostics in a variety of organs. However, data-analysis has mostly been limited to post-processing and off-line algorithm development. In this work, we develop a fully automated on-line Raman spectral diagnostics framework integrated with a multimodal image-guided Raman technique for real-time in vivo cancer detection at endoscopy. A total of 2748 in vivo gastric tissue spectra (2465 normal and 283 cancer) were acquired from 305 patients recruited to construct a spectral database for diagnostic algorithms development. The novel diagnostic scheme developed implements on-line preprocessing, outlier detection based on principal component analysis statistics (i.e., Hotelling's T2 and Q-residuals) for tissue Raman spectra verification as well as for organ specific probabilistic diagnostics using different diagnostic algorithms. Free-running optical diagnosis and processing time of < 0.5 s can be achieved, which is critical to realizing real-time in vivo tissue diagnostics during clinical endoscopic examination. The optimized partial least squares-discriminant analysis (PLS-DA) models based on the randomly resampled training database (80% for learning and 20% for testing) provide the diagnostic accuracy of 85.6% [95% confidence interval (CI): 82.9% to 88.2%] [sensitivity of 80.5% (95% CI: 71.4% to 89.6%) and specificity of 86.2% (95% CI: 83.6% to 88.7%)] for the detection of gastric cancer. The PLS-DA algorithms are further applied prospectively on 10 gastric patients at gastroscopy, achieving the predictive accuracy of 80.0% (60/75) [sensitivity of 90.0% (27/30) and specificity of 73.3% (33/45)] for in vivo diagnosis of gastric cancer. The receiver operating characteristics curves further confirmed the efficacy of Raman endoscopy together with PLS-DA algorithms for in vivo prospective diagnosis of gastric cancer

  1. Supplementary Material for: Tukey g-and-h Random Fields

    KAUST Repository

    Xu, Ganggang; Genton, Marc G.

    2016-01-01

    We propose a new class of trans-Gaussian random fields named Tukey g-and-h (TGH) random fields to model non-Gaussian spatial data. The proposed TGH random fields have extremely flexible marginal distributions, possibly skewed and/or heavy-tailed, and, therefore, have a wide range of applications. The special formulation of the TGH random field enables an automatic search for the most suitable transformation for the dataset of interest while estimating model parameters. Asymptotic properties of the maximum likelihood estimator and the probabilistic properties of the TGH random fields are investigated. An efficient estimation procedure, based on maximum approximated likelihood, is proposed, and an extreme spatial outlier detection algorithm is formulated. Kriging and probabilistic prediction with TGH random fields are developed along with prediction confidence intervals. The predictive performance of TGH random fields is demonstrated through extensive simulation studies and an application to a dataset of total precipitation in the southeast of the United States. Supplementary materials for this article are available online.
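
    The abstract does not reproduce the transformation itself. For reference, the usual Tukey g-and-h transform of a standard normal value z is xi + omega * ((exp(g*z) - 1)/g) * exp(h*z^2/2), with the (exp(g*z) - 1)/g factor replaced by z when g = 0; g controls skewness and h >= 0 controls tail heaviness. The sketch below applies it marginally to an independent Gaussian sample (location, scale and shape values are illustrative, and no spatial correlation is modelled here).

      import numpy as np

      def tukey_gh(z, g=0.5, h=0.1, xi=0.0, omega=1.0):
          """Marginal Tukey g-and-h transform of standard normal values z."""
          if abs(g) < 1e-12:
              core = z
          else:
              core = (np.exp(g * z) - 1.0) / g
          return xi + omega * core * np.exp(h * z ** 2 / 2.0)

      rng = np.random.default_rng(1)
      z = rng.standard_normal(10_000)
      x = tukey_gh(z, g=0.5, h=0.1)
      # The transformed sample is right-skewed with a heavier right tail
      print(round(float(np.mean(x)), 3), round(float(np.max(x)), 1))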

  2. MODIS 250m burned area mapping based on an algorithm using change point detection and Markov random fields.

    Science.gov (United States)

    Mota, Bernardo; Pereira, Jose; Campagnolo, Manuel; Killick, Rebeca

    2013-04-01

    Area burned in tropical savannas of Brazil was mapped using MODIS-AQUA daily 250m resolution imagery by adapting one of the European Space Agency fire_CCI project burned area algorithms, based on change point detection and Markov random fields. The study area covers 1.44 Mkm2, and the analysis was performed with data from 2005. The daily 1000 m image quality layer was used for cloud and cloud shadow screening. The algorithm addresses each pixel as a time series and detects changes in the statistical properties of NIR reflectance values, to identify potential burning dates. The first step of the algorithm is robust filtering, to exclude outlier observations, followed by application of the Pruned Exact Linear Time (PELT) change point detection technique. Near-infrared (NIR) spectral reflectance changes between time segments, and post-change NIR reflectance values, are combined into a fire likelihood score. Change points corresponding to an increase in reflectance are dismissed as potential burn events, as are those occurring outside of a pre-defined fire season. In the last step of the algorithm, monthly burned area probability maps and detection date maps are converted to dichotomous (burned/unburned) maps using Markov random fields, which take into account both spatial and temporal relations in the potential burned area maps. A preliminary assessment of our results is performed by comparison with data from the MODIS 1km active fires and the 500m burned area products, taking into account differences in spatial resolution between the two sensors.
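
    The full pipeline (robust filtering, PELT change-point detection and Markov-random-field regularization) is well beyond a short snippet; as a toy illustration of the time-series step only, the sketch below finds the single break that best splits one pixel's NIR series into two constant segments and flags it as a candidate burn date when reflectance drops. This is a simplified single-break least-squares search rather than PELT, and the thresholds and data are assumptions.

      import numpy as np

      def single_break(nir):
          """Least-squares search for one change point in a 1-D NIR series.
          Returns (index, pre_mean, post_mean) of the best split."""
          n = len(nir)
          best_t, best_cost = None, np.inf
          for t in range(2, n - 2):
              pre, post = nir[:t], nir[t:]
              cost = ((pre - pre.mean()) ** 2).sum() + ((post - post.mean()) ** 2).sum()
              if cost < best_cost:
                  best_t, best_cost = t, cost
          return best_t, nir[:best_t].mean(), nir[best_t:].mean()

      def candidate_burn_date(nir, min_drop=0.05):
          """Flag the split as a potential burn if NIR drops by at least min_drop."""
          t, pre, post = single_break(np.asarray(nir, dtype=float))
          return t if (pre - post) >= min_drop else None

      series = [0.30, 0.31, 0.29, 0.30, 0.18, 0.17, 0.19, 0.18]
      print(candidate_burn_date(series))   # -> 4 (the index where reflectance drops)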

  3. A Team-Based Online Game Improves Blood Glucose Control in Veterans With Type 2 Diabetes: A Randomized Controlled Trial.

    Science.gov (United States)

    Kerfoot, B Price; Gagnon, David R; McMahon, Graham T; Orlander, Jay D; Kurgansky, Katherine E; Conlin, Paul R

    2017-09-01

    Rigorous evidence is lacking whether online games can improve patients' longer-term health outcomes. We investigated whether an online team-based game delivering diabetes self-management education (DSME) to patients via e-mail or mobile application (app) can generate longer-term improvements in hemoglobin A1c (HbA1c). Patients (n = 456) on oral diabetes medications with HbA1c ≥58 mmol/mol were randomly assigned between a DSME game (with a civics booklet) and a civics game (with a DSME booklet). The 6-month games sent two questions twice weekly via e-mail or mobile app. Participants accrued points based on performance, with scores posted on leaderboards. Winning teams and individuals received modest financial rewards. Our primary outcome measure was HbA1c change over 12 months. DSME game patients had significantly greater HbA1c reductions over 12 months than civics game patients (-8 mmol/mol [95% CI -10 to -7] and -5 mmol/mol [95% CI -7 to -3], respectively; P = 0.048). HbA1c reductions were greater among patients with baseline HbA1c >75 mmol/mol: -16 mmol/mol [95% CI -21 to -12] and -9 mmol/mol [95% CI -14 to -5] for DSME and civics game patients, respectively; P = 0.031. Patients with diabetes who were randomized to an online game delivering DSME demonstrated sustained and meaningful HbA1c improvements. Among patients with poorly controlled diabetes, the DSME game reduced HbA1c by a magnitude comparable to starting a new diabetes medication. Online games may be a scalable approach to improve outcomes among geographically dispersed patients with diabetes and other chronic diseases. © 2017 by the American Diabetes Association.

  4. A study protocol of a three-group randomized feasibility trial of an online yoga intervention for mothers after stillbirth (The Mindful Health Study).

    Science.gov (United States)

    Huberty, Jennifer; Matthews, Jeni; Leiferman, Jenn; Cacciatore, Joanne; Gold, Katherine J

    2018-01-01

    In the USA, stillbirth (in utero fetal death ≥20 weeks gestation) is a major public health issue. Women who experience stillbirth, compared to women with live birth, have a nearly sevenfold increased risk of a positive screen for post-traumatic stress disorder (PTSD) and a fourfold increased risk of depressive symptoms. Because the majority of women who have experienced the death of their baby become pregnant within 12-18 months, and because of the lack of intervention studies conducted within this population, novel approaches targeting physical and mental health, specific to the needs of this population, are critical. Evidence suggests that yoga is efficacious, safe, acceptable, and cost-effective for improving mental health in a variety of populations, including pregnant and postpartum women. To date, there are no known studies examining online-streaming yoga as a strategy to help mothers cope with PTSD symptoms after stillbirth. The present study is a two-phase randomized controlled trial. Phase 1 will involve (1) an iterative design process to develop the online yoga prescription for phase 2 and (2) qualitative interviews to identify cultural barriers to recruitment in non-Caucasian women (i.e., predominately Hispanic and/or African American) who have experienced stillbirth (N = 5). Phase 2 is a three-group randomized feasibility trial with assessments at baseline, and at 12 and 20 weeks post-intervention. Ninety women who have experienced a stillbirth within 6 weeks to 24 months will be randomized into one of the following three arms for 12 weeks: (1) intervention low dose (LD) = 60 min/week online-streaming yoga (n = 30), (2) intervention moderate dose (MD) = 150 min/week online-streaming yoga (n = 30), or (3) stretch and tone control (STC) group = 60 min/week of stretching/toning exercises (n = 30). This study will explore the feasibility and acceptability of a 12-week, home-based, online-streamed yoga intervention, with varying doses

  5. Algorithm for counting large directed loops

    Energy Technology Data Exchange (ETDEWEB)

    Bianconi, Ginestra [Abdus Salam International Center for Theoretical Physics, Strada Costiera 11, 34014 Trieste (Italy); Gulbahce, Natali [Theoretical Division and Center for Nonlinear Studies, Los Alamos National Laboratory, NM 87545 (United States)

    2008-06-06

    We derive a Belief-Propagation algorithm for counting large loops in a directed network. We evaluate the distribution of the number of small loops in a directed random network with given degree sequence. We apply the algorithm to a few characteristic directed networks of various network sizes and loop structures and compare the algorithm with exhaustive counting results when possible. The algorithm is adequate in estimating loop counts for large directed networks and can be used to compare the loop structure of directed networks and their randomized counterparts.

  6. Does self-selection affect samples' representativeness in online surveys? An investigation in online video game research.

    Science.gov (United States)

    Khazaal, Yasser; van Singer, Mathias; Chatton, Anne; Achab, Sophia; Zullino, Daniele; Rothen, Stephane; Khan, Riaz; Billieux, Joel; Thorens, Gabriel

    2014-07-07

    The number of medical studies performed through online surveys has increased dramatically in recent years. Despite their numerous advantages (eg, sample size, facilitated access to individuals presenting stigmatizing issues), selection bias may exist in online surveys. However, evidence on the representativeness of self-selected samples in online studies is patchy. Our objective was to explore the representativeness of a self-selected sample of online gamers using online players' virtual characters (avatars). All avatars belonged to individuals playing World of Warcraft (WoW), currently the most widely used online game. Avatars' characteristics were defined using various games' scores, reported on the WoW's official website, and two self-selected samples from previous studies were compared with a randomly selected sample of avatars. We used scores linked to 1240 avatars (762 from the self-selected samples and 478 from the random sample). The two self-selected samples of avatars had higher scores on most of the assessed variables (except for guild membership and exploration). Furthermore, some guilds were overrepresented in the self-selected samples. Our results suggest that more proficient players or players more involved in the game may be more likely to participate in online surveys. Caution is needed in the interpretation of studies based on online surveys that used a self-selection recruitment procedure. Epidemiological evidence on the reduced representativeness of sample of online surveys is warranted.

  7. MULTI-OBJECTIVE ONLINE OPTIMIZATION OF BEAM LIFETIME AT APS

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Yipeng

    2017-06-25

    In this paper, online optimization of beam lifetime at the APS (Advanced Photon Source) storage ring is presented. A general genetic algorithm (GA) is developed and employed for some online optimizations in the APS storage ring. Sextupole magnets in 40 sectors of the APS storage ring are employed as variables for the online nonlinear beam dynamics optimization. The algorithm employs several optimization objectives and is designed to run with top-up mode or beam current decay mode. Up to 50% improvement of beam lifetime is demonstrated, without affecting the transverse beam sizes and other relevant parameters. In some cases, the top-up injection efficiency is also improved.

  8. Relationship between clustering and algorithmic phase transitions in the random k-XORSAT model and its NP-complete extensions

    International Nuclear Information System (INIS)

    Altarelli, F; Monasson, R; Zamponi, F

    2008-01-01

    We study the performance of stochastic heuristic search algorithms on Uniquely Extendible Constraint Satisfaction Problems with random inputs. We show that, for any heuristic preserving the Poissonian nature of the underlying instance, the (heuristic-dependent) largest ratio α_a of constraints per variable for which a search algorithm is likely to find solutions is smaller than the critical ratio α_d above which solutions are clustered and highly correlated. In addition, we show that the clustering ratio can be reached, when the number k of variables per constraint goes to infinity, by the so-called Generalized Unit Clause heuristic.

  9. Tuning of an optimal fuzzy PID controller with stochastic algorithms for networked control systems with random time delay.

    Science.gov (United States)

    Pan, Indranil; Das, Saptarshi; Gupta, Amitava

    2011-01-01

    An optimal PID and an optimal fuzzy PID have been tuned by minimizing the Integral of Time multiplied Absolute Error (ITAE) and the squared controller output for a networked control system (NCS). The tuning is attempted for a higher-order, time-delay system using two stochastic algorithms, viz. the Genetic Algorithm (GA) and two variants of Particle Swarm Optimization (PSO), and the closed-loop performances are compared. The paper shows that random variation in network delay can be handled more efficiently with fuzzy logic based PID controllers than with conventional PID controllers. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
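
    The abstract names the cost being minimized (ITAE plus squared controller output) but not its numerical form; a minimal sketch of such an objective, evaluated on logged closed-loop signals, is shown below. The relative weighting of the two terms and the example signals are assumptions, not the paper's settings.

      import numpy as np

      def itae_cost(t, error, u, weight=1e-3):
          """Objective of the form ITAE + weight * integral of u^2, approximated by
          rectangle-rule sums over logged samples:
              ITAE ~ sum_k t_k * |e_k| * dt,   control effort ~ sum_k u_k^2 * dt.
          A tuner (GA / PSO) would call this once per candidate set of PID gains,
          after simulating or running the closed loop to obtain error and u."""
          dt = np.diff(t, prepend=t[0])
          itae = np.sum(t * np.abs(error) * dt)
          effort = np.sum(np.square(u) * dt)
          return itae + weight * effort

      # Illustrative signals: a decaying step-response error and its control signal
      t = np.linspace(0.0, 10.0, 501)
      error = np.exp(-0.8 * t) * np.cos(2.0 * t)
      u = 1.0 - error
      print(round(float(itae_cost(t, error, u)), 3))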

  10. Distributed Constrained Stochastic Subgradient Algorithms Based on Random Projection and Asynchronous Broadcast over Networks

    Directory of Open Access Journals (Sweden)

    Junlong Zhu

    2017-01-01

    Full Text Available We consider a distributed constrained optimization problem over a time-varying network, where each agent only knows its own cost functions and its constraint set. However, the local constraint set may not be known in advance, or may consist of a huge number of components, in some applications. To deal with such cases, we propose a distributed stochastic subgradient algorithm over time-varying networks, where each agent's estimate is projected onto its constraint set using a random projection technique, and information exchange between agents is implemented through an asynchronous broadcast communication protocol. We show that our proposed algorithm is convergent with probability 1 by choosing a suitable learning rate. For a constant learning rate, we obtain an error bound, defined as the expected distance between the agents' estimates and the optimal solution. We also establish an asymptotic upper bound on the gap between the global objective function value at the average of the estimates and the optimal value.
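
    A minimal single-agent view of the projection step described above: a stochastic subgradient move followed by projection onto one randomly drawn constraint (here halfspaces a_i^T x <= b_i) rather than onto the full, possibly huge, intersection. The network/broadcast part is omitted, and the problem data, objective and step-size rule are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(3)

      # Illustrative constraint set: intersection of halfspaces a_i^T x <= b_i
      A = rng.normal(size=(50, 5))
      b = np.abs(rng.normal(size=50)) + 1.0

      def project_halfspace(x, a, bi):
          """Euclidean projection of x onto {y : a^T y <= bi}."""
          viol = a @ x - bi
          return x if viol <= 0 else x - (viol / (a @ a)) * a

      def rp_subgradient_step(x, grad, step):
          """One iteration: subgradient move, then projection onto a single
          randomly selected constraint (the random projection technique)."""
          y = x - step * grad
          i = rng.integers(len(b))
          return project_halfspace(y, A[i], b[i])

      # Toy objective: minimize ||x - c||^2 over the intersection
      c = np.full(5, 3.0)
      x = np.zeros(5)
      for k in range(1, 2001):
          x = rp_subgradient_step(x, 2.0 * (x - c), step=1.0 / k)
      print(np.round(x, 3), float(np.max(A @ x - b)))   # estimate and worst violation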

  11. Efficacy and causal mechanism of an online social media intervention to increase physical activity: Results of a randomized controlled trial.

    Science.gov (United States)

    Zhang, Jingwen; Brackbill, Devon; Yang, Sijia; Centola, Damon

    2015-01-01

    To identify what features of social media - promotional messaging or peer networks - can increase physical activity. A 13-week social media-based exercise program was conducted at a large Northeastern university in Philadelphia, PA. In a randomized controlled trial, 217 graduate students from the University were randomized to three conditions: a control condition with a basic online program for enrolling in weekly exercise classes led by instructors of the University for 13 weeks, a media condition that supplemented the basic program with weekly online promotional media messages that encourage physical activity, and a social condition that replaced the media content with an online network of four to six anonymous peers composed of other participants of the program, in which each participant was able to see their peers' progress in enrolling in classes. The primary outcome was the number of enrollments in exercise classes, and the secondary outcomes were self-reported physical activities. Data were collected in 2014. Participants enrolled in 5.5 classes on average. Compared with enrollment in the control condition (mean = 4.5), promotional messages moderately increased enrollment (mean = 5.7, p = 0.08), while anonymous social networks significantly increased enrollment (mean = 6.3, p = 0.02). By the end of the program, participants in the social condition reported exercising moderately for an additional 1.6 days each week compared with the baseline, which was significantly more than an additional 0.8 days in the control condition. Social influence from anonymous online peers was more successful than promotional messages for improving physical activity. ClinicalTrials.gov: NCT02267369.

  12. On-line thermal margin estimation of a PWR core using a neural network approach

    International Nuclear Information System (INIS)

    Park, Soon Ok; Kim, Hyun Koon; Lee, Seung Hynk; Chang, Soon Heung

    1992-01-01

    A new approach for on-line thermal margin monitoring of a PWR core is proposed in this paper, where a neural network model is introduced to predict the DNBR values at given reactor operating conditions. The neural network is trained with the back-propagation algorithm on optimized random training data and is tested to investigate its generalization performance for the steady-state operating region as well as for transient situations where DNB is of primary concern. The test results show that a high level of accuracy in predicting the DNBR can be achieved by the neural network model compared to the detailed code results. An insight gained from this study is that the neural network model for estimating DNB performance can be a viable tool for on-line thermal margin monitoring of a nuclear power plant

  13. Random linear codes in steganography

    Directory of Open Access Journals (Sweden)

    Kamil Kaczyński

    2016-12-01

    Full Text Available Syndrome coding using linear codes is a technique that allows improvement of a steganographic algorithm's parameters. The use of random linear codes gives great flexibility in choosing the parameters of the linear code. In parallel, it offers easy generation of the parity-check matrix. In this paper, a modification of the LSB algorithm is presented. A random linear code [8, 2] was used as the basis for the modification. The proposed algorithm was implemented, and its parameters were evaluated in practice on the test images. Keywords: steganography, random linear codes, RLC, LSB
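
    A small self-contained sketch of syndrome (matrix) embedding with a linear code, in the spirit of the modification described above: message bits are hidden as the syndrome H·x of the cover LSBs, so the receiver only needs H. The [8, 2] block size follows the abstract, but the systematic parity-check matrix (chosen here to guarantee full rank) and the brute-force coset search are illustrative simplifications, not the authors' construction.

      import itertools
      import numpy as np

      rng = np.random.default_rng(7)
      N, K = 8, 2                                  # an [8, 2] linear code, as in the abstract
      P = rng.integers(0, 2, size=(N - K, K))
      H = np.concatenate([np.eye(N - K, dtype=int), P], axis=1)   # full-rank parity-check matrix

      def embed(cover_lsbs, message_bits):
          """Change as few cover LSBs as possible so that H @ stego = message (mod 2)."""
          target = (message_bits - H @ cover_lsbs) % 2     # required syndrome of the flips
          best = None
          for flips in itertools.product((0, 1), repeat=N):  # brute-force coset search
              e = np.array(flips)
              if np.array_equal(H @ e % 2, target) and (best is None or e.sum() < best.sum()):
                  best = e
          return (cover_lsbs + best) % 2

      def extract(stego_lsbs):
          """The receiver recovers the message as the syndrome of the stego LSBs."""
          return H @ stego_lsbs % 2

      cover = rng.integers(0, 2, size=N)           # LSBs of 8 cover pixels
      msg = rng.integers(0, 2, size=N - K)         # 6 message bits per block
      stego = embed(cover, msg)
      assert np.array_equal(extract(stego), msg)
      print("LSBs changed:", int(np.sum(stego != cover)))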

  14. Robust online Hamiltonian learning

    International Nuclear Information System (INIS)

    Granade, Christopher E; Ferrie, Christopher; Wiebe, Nathan; Cory, D G

    2012-01-01

    In this work we combine two distinct machine learning methodologies, sequential Monte Carlo and Bayesian experimental design, and apply them to the problem of inferring the dynamical parameters of a quantum system. We design the algorithm with practicality in mind by including parameters that control trade-offs between the requirements on computational and experimental resources. The algorithm can be implemented online (during experimental data collection), avoiding the need for storage and post-processing. Most importantly, our algorithm is capable of learning Hamiltonian parameters even when the parameters change from experiment-to-experiment, and also when additional noise processes are present and unknown. The algorithm also numerically estimates the Cramer–Rao lower bound, certifying its own performance. (paper)

  15. GPU-based online track reconstruction for PANDA and application to the analysis of D→Kππ

    Energy Technology Data Exchange (ETDEWEB)

    Herten, Andreas

    2015-07-02

    The PANDA experiment is a new hadron physics experiment which is being built for the FAIR facility in Darmstadt, Germany. PANDA will employ a novel scheme of data acquisition: the experiment will reconstruct the full stream of events in realtime to make trigger decisions based on the event topology. An important part of this online event reconstruction is online track reconstruction. Online track reconstruction algorithms need to reconstruct particle trajectories in nearly realtime. This work uses Graphics Processing Units as high-throughput devices to benchmark different online track reconstruction algorithms. The reconstruction of D± → K∓π±π± is studied extensively, and one online track reconstruction algorithm is applied.

  16. An Artificial Bee Colony Algorithm for the Job Shop Scheduling Problem with Random Processing Times

    Directory of Open Access Journals (Sweden)

    Rui Zhang

    2011-09-01

    Full Text Available Due to the influence of unpredictable random events, the processing time of each operation should be treated as random variables if we aim at a robust production schedule. However, compared with the extensive research on the deterministic model, the stochastic job shop scheduling problem (SJSSP) has not received sufficient attention. In this paper, we propose an artificial bee colony (ABC) algorithm for SJSSP with the objective of minimizing the maximum lateness (which is an index of service quality). First, we propose a performance estimate for preliminary screening of the candidate solutions. Then, the K-armed bandit model is utilized for reducing the computational burden in the exact evaluation (through Monte Carlo simulation) process. Finally, the computational results on different-scale test problems validate the effectiveness and efficiency of the proposed approach.

  17. A quick survey of text categorization algorithms

    Directory of Open Access Journals (Sweden)

    Dan MUNTEANU

    2007-12-01

    Full Text Available This paper contains an overview of basic formulations and approaches to text classification. This paper surveys the algorithms used in text categorization: handcrafted rules, decision trees, decision rules, on-line learning, linear classifiers, Rocchio’s algorithm, k Nearest Neighbor (kNN), Support Vector Machines (SVM).
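
    A pocket-sized example of one of the surveyed families (kNN over a vector-space text representation) using scikit-learn; the toy corpus, labels and neighbor count below are invented purely for illustration and are not from the survey.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.pipeline import make_pipeline

      # Tiny invented corpus: text categorization as TF-IDF vectors + kNN voting
      train_docs = [
          "stock markets rally as shares rise",
          "central bank raises interest rates",
          "team wins the championship final",
          "injured striker misses the match",
      ]
      train_labels = ["finance", "finance", "sport", "sport"]

      model = make_pipeline(TfidfVectorizer(), KNeighborsClassifier(n_neighbors=3))
      model.fit(train_docs, train_labels)
      print(model.predict(["bank shares fall after rate decision"]))   # expected: finance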

  18. A HYBRID HEURISTIC ALGORITHM FOR SOLVING THE RESOURCE CONSTRAINED PROJECT SCHEDULING PROBLEM (RCPSP)

    Directory of Open Access Journals (Sweden)

    Juan Carlos Rivera

    Full Text Available The Resource Constrained Project Scheduling Problem (RCPSP) is a problem of great interest for the scientific community because it belongs to the class of NP-Hard problems and no methods are known that can solve it accurately in polynomial processing times. For this reason heuristic methods are used to solve it in an efficient way though there is no guarantee that an optimal solution can be obtained. This research presents a hybrid heuristic search algorithm to solve the RCPSP efficiently, combining elements of the heuristic Greedy Randomized Adaptive Search Procedure (GRASP), Scatter Search and Justification. The efficiency obtained is measured taking into account the presence of the new elements added to the GRASP algorithm taken as base: Justification and Scatter Search. The algorithms are evaluated using three data bases of instances of the problem: 480 instances of 30 activities, 480 of 60, and 600 of 120 activities respectively, taken from the library PSPLIB available online. The solutions obtained by the developed algorithm for the instances of 30, 60 and 120 are compared with results obtained by other researchers at international level, where a prominent place is obtained, according to Chen (2011).

  19. Prostate cancer prediction using the random forest algorithm that takes into account transrectal ultrasound findings, age, and serum levels of prostate-specific antigen

    Directory of Open Access Journals (Sweden)

    Li-Hong Xiao

    2017-01-01

    Full Text Available The aim of this study is to evaluate the ability of the random forest algorithm that combines data on transrectal ultrasound findings, age, and serum levels of prostate-specific antigen to predict prostate carcinoma. Clinico-demographic data were analyzed for 941 patients with prostate diseases treated at our hospital, including age, serum prostate-specific antigen levels, transrectal ultrasound findings, and pathology diagnosis based on ultrasound-guided needle biopsy of the prostate. These data were compared between patients with and without prostate cancer using the Chi-square test, and then entered into the random forest model to predict diagnosis. Patients with and without prostate cancer differed significantly in age and serum prostate-specific antigen levels (P < 0.001), as well as in all transrectal ultrasound characteristics (P < 0.05) except uneven echo (P = 0.609). The random forest model based on age, prostate-specific antigen and ultrasound predicted prostate cancer with an accuracy of 83.10%, sensitivity of 65.64%, and specificity of 93.83%. Positive predictive value was 86.72%, and negative predictive value was 81.64%. By integrating age, prostate-specific antigen levels and transrectal ultrasound findings, the random forest algorithm shows better diagnostic performance for prostate cancer than either diagnostic indicator on its own. This algorithm may help improve diagnosis of the disease by identifying patients at high risk for biopsy.

  20. Prostate cancer prediction using the random forest algorithm that takes into account transrectal ultrasound findings, age, and serum levels of prostate-specific antigen.

    Science.gov (United States)

    Xiao, Li-Hong; Chen, Pei-Ran; Gou, Zhong-Ping; Li, Yong-Zhong; Li, Mei; Xiang, Liang-Cheng; Feng, Ping

    2017-01-01

    The aim of this study is to evaluate the ability of the random forest algorithm that combines data on transrectal ultrasound findings, age, and serum levels of prostate-specific antigen to predict prostate carcinoma. Clinico-demographic data were analyzed for 941 patients with prostate diseases treated at our hospital, including age, serum prostate-specific antigen levels, transrectal ultrasound findings, and pathology diagnosis based on ultrasound-guided needle biopsy of the prostate. These data were compared between patients with and without prostate cancer using the Chi-square test, and then entered into the random forest model to predict diagnosis. Patients with and without prostate cancer differed significantly in age and serum prostate-specific antigen levels (P < 0.001), as well as in all transrectal ultrasound characteristics (P < 0.05) except uneven echo (P = 0.609). The random forest model based on age, prostate-specific antigen and ultrasound predicted prostate cancer with an accuracy of 83.10%, sensitivity of 65.64%, and specificity of 93.83%. Positive predictive value was 86.72%, and negative predictive value was 81.64%. By integrating age, prostate-specific antigen levels and transrectal ultrasound findings, the random forest algorithm shows better diagnostic performance for prostate cancer than either diagnostic indicator on its own. This algorithm may help improve diagnosis of the disease by identifying patients at high risk for biopsy.
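
    A schematic of the modelling step described above, using scikit-learn's random forest. The feature names mirror the abstract (age, PSA, ultrasound findings), but the synthetic data, label rule, forest size and train/test split are placeholders, not the study's data or settings.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import accuracy_score, recall_score

      rng = np.random.default_rng(0)
      n = 941                                        # same cohort size as the abstract; data synthetic
      age = rng.normal(68, 9, n)
      psa = rng.lognormal(1.8, 0.9, n)
      ultrasound = rng.integers(0, 2, size=(n, 5))   # 5 binary TRUS findings (placeholder)
      X = np.column_stack([age, psa, ultrasound])
      # Synthetic label loosely tied to age and PSA, purely to make the example run
      y = ((0.03 * (age - 65) + 0.15 * np.log(psa) + rng.normal(0, 0.5, n)) > 0.3).astype(int)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
      pred = clf.predict(X_te)
      print("accuracy", round(accuracy_score(y_te, pred), 3),
            "sensitivity", round(recall_score(y_te, pred), 3))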

  1. A nuclear reload optimization approach using a real coded genetic algorithm with random keys

    International Nuclear Information System (INIS)

    Lima, Alan M.M. de; Schirru, Roberto; Medeiros, Jose A.C.C.

    2009-01-01

    The fuel reload of a Pressurized Water Reactor is made whenever the burn-up of the fuel assemblies in the core of the reactor reaches a certain value such that it is no longer possible to maintain a critical reactor producing energy at nominal power. The fuel reload optimization problem consists of determining the positioning of the fuel assemblies within the core of the reactor in an optimized way so as to minimize the ratio of fuel assembly cost to maximum burn-up, while also satisfying symmetry and safety restrictions. The difficulty of the fuel reload optimization problem grows exponentially with the number of fuel assemblies in the core of the reactor. For decades the fuel reload optimization problem was solved manually by experts who used their knowledge and experience to build configurations of the reactor core and tested them to verify whether the safety restrictions of the plant were satisfied. To reduce this burden, several optimization techniques have been used, including the binary-coded genetic algorithm. In this work we show the use of a real-valued coded approach of the genetic algorithm, with different recombination methods, together with a transformation mechanism called random keys, to transform the real values of the genes of each chromosome into a combination of discrete fuel assemblies for evaluation of the reload optimization. Four different recombination methods were tested: discrete recombination, intermediate recombination, linear recombination and extended linear recombination. For each of the 4 recombination methods, 10 different tests using different seeds for the random number generator were conducted, totaling 40 tests. The results of the application of the genetic algorithm with the real-number formulation are shown for the nuclear reload problem of the Angra 1 PWR plant. Since the best results in the literature for this problem were found by the parallel PSO, we use it for comparison.
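
    The core of the random-keys trick described above is that a chromosome of real numbers is turned into a discrete assignment by sorting, so crossover and mutation can stay purely real-valued. A minimal sketch follows; the fuel-assembly labels and core positions are made up for illustration.

      import numpy as np

      def decode_random_keys(keys, items):
          """Random-keys decoding: sorting the real-valued genes yields a
          permutation, i.e. which item goes to position 0, 1, 2, ..."""
          order = np.argsort(keys)            # rank of each gene gives the permutation
          return [items[i] for i in order]

      # Illustrative: assign 6 fuel assemblies (labels invented) to 6 core positions
      assemblies = ["FA-01", "FA-02", "FA-03", "FA-04", "FA-05", "FA-06"]
      chromosome = np.array([0.91, 0.12, 0.55, 0.03, 0.77, 0.40])   # real-coded genes
      loading_pattern = decode_random_keys(chromosome, assemblies)
      print(loading_pattern)   # position 0 gets FA-04 (the smallest key), and so on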

  2. Information filtering in sparse online systems: recommendation via semi-local diffusion.

    Science.gov (United States)

    Zeng, Wei; Zeng, An; Shang, Ming-Sheng; Zhang, Yi-Cheng

    2013-01-01

    With the rapid growth of the Internet and the overwhelming amount of information and choices that people are confronted with, recommender systems have been developed to effectively support users' decision-making process in online systems. However, many recommendation algorithms suffer from the data sparsity problem, i.e. the user-object bipartite networks are so sparse that algorithms cannot accurately recommend objects for users. This data sparsity problem makes many well-known recommendation algorithms perform poorly. To solve the problem, we propose a recommendation algorithm based on the semi-local diffusion process on the user-object bipartite network. The simulation results on two sparse datasets, Amazon and Bookcross, show that our method significantly outperforms the state-of-the-art methods especially for those small-degree users. Two personalized semi-local diffusion methods are proposed which further improve the recommendation accuracy. Finally, our work indicates that sparse online systems are essentially different from dense online systems, so it is necessary to reexamine former algorithms and conclusions based on dense data in sparse systems.
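
    A compact sketch of the classic two-step (object–user–object) mass-diffusion process that the semi-local variant extends; carrying the walk further with down-weighted longer paths is the "semi-local" part and is not reproduced here. The tiny rating matrix is invented.

      import numpy as np

      # Rows = users, columns = objects; 1 means the user has collected the object
      A = np.array([[1, 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1]], dtype=float)

      def mass_diffusion_scores(A, user):
          """Two-step resource spreading on the user-object bipartite graph:
          the target user's objects send resource to the users holding them,
          then back to all objects; already-collected objects are masked out."""
          k_obj = A.sum(axis=0)          # object degrees
          k_user = A.sum(axis=1)         # user degrees
          resource = A[user].copy()                                        # start on the user's objects
          to_users = A @ (resource / np.where(k_obj > 0, k_obj, 1))        # objects -> users
          back_to_objects = (to_users / np.where(k_user > 0, k_user, 1)) @ A  # users -> objects
          back_to_objects[A[user] > 0] = 0       # do not recommend what is already owned
          return back_to_objects

      print(np.round(mass_diffusion_scores(A, user=0), 3))   # scores for the unseen objects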

  3. Iterative algorithm of discrete Fourier transform for processing randomly sampled NMR data sets

    International Nuclear Information System (INIS)

    Stanek, Jan; Kozminski, Wiktor

    2010-01-01

    Spectra obtained by application of multidimensional Fourier Transformation (MFT) to sparsely sampled nD NMR signals are usually corrupted due to missing data. In the present paper this phenomenon is investigated on simulations and experiments. An effective iterative algorithm for artifact suppression for sparse on-grid NMR data sets is discussed in detail. It includes automated peak recognition based on statistical methods. The results enable one to study NMR spectra with a high dynamic range of peak intensities while preserving the benefits of random sampling, namely the superior resolution in indirectly measured dimensions. Experimental examples include 3D 15N- and 13C-edited NOESY-HSQC spectra of human ubiquitin.

  4. Benchmarking online dispatch algorithms for Emergency Medical Services

    NARCIS (Netherlands)

    Jagtenberg, C.J.; Berg, P.L.; van der Mei, R.D.

    2016-01-01

    Providers of Emergency Medical Services (EMS) face the online ambulance dispatch problem, in which they decide which ambulance to send to an incoming incident. Their objective is to minimize the fraction of arrivals later than a target time. Today, the gap between existing solutions and the optimum

  5. Competitive Analysis of the Online Inventory Problem

    DEFF Research Database (Denmark)

    Larsen, Kim Skak; Wøhlk, Sanne

    2010-01-01

    We consider a real-time version of the inventory problem with deterministic demand in which decisions as to when to replenish and how much to buy must be made in an online fashion without knowledge of future prices. We suggest online algorithms for each of four models for the problem and use...

  6. Online evolution for multi-action adversarial games

    DEFF Research Database (Denmark)

    Justesen, Niels Orsleff; Mahlmann, Tobias; Togelius, Julian

    2016-01-01

    the combination of atomic actions that make up a single move, with a state evaluation function used for fitness. We implement Online Evolution for the turn-based multi-action game Hero Academy and compare it with a standard Monte Carlo Tree Search implementation as well as two types of greedy algorithms. Online...

  7. An online randomized controlled trial, with or without problem-solving treatment, for long-term cancer survivors after hematopoietic cell transplantation.

    Science.gov (United States)

    Syrjala, Karen L; Yi, Jean C; Artherholt, Samantha B; Romano, Joan M; Crouch, Marie-Laure; Fiscalini, Allison S; Hegel, Mark T; Flowers, Mary E D; Martin, Paul J; Leisenring, Wendy M

    2018-05-05

    This randomized controlled trial examines the efficacy of INSPIRE, an INternet-based Survivorship Program with Information and REsources, with or without problem-solving treatment (PST) telehealth calls, for survivors after hematopoietic cell transplantation (HCT). All adult survivors who met eligibility criteria were approached for consent. Participants completed patient-reported outcomes at baseline and 6 months. Those with baseline impaired scores on one or more of the outcomes were randomized to INSPIRE, INSPIRE + PST, or control with delayed INSPIRE access. Outcomes included Cancer and Treatment Distress, Symptom Checklist-90-R Depression, and Fatigue Symptom Inventory. Planned analyses compared arms for mean change in aggregated impaired outcomes and for proportion of participants improved on each outcome. Of 1306 eligible HCT recipients, 755 (58%) participated, and 344 (45%) had one or more impaired scores at baseline. We found no reduction in aggregated outcomes for either intervention (P > 0.3). In analyses of individual outcomes, participants randomized to INSPIRE + PST were more likely to improve in distress than controls (45 vs. 20%, RR 2.3, CI 1.0, 5.1); those randomized to INSPIRE alone were marginally more likely to improve in distress (40 vs. 20%, RR 2.0, CI 0.9, 4.5). The INSPIRE online intervention demonstrated a marginal benefit for distress that improved with the addition of telehealth PST, particularly for those who viewed the website or were age 40 or older. Online and telehealth programs such as INSPIRE offer opportunities to enhance HCT survivorship outcomes, particularly for mood, though methods would benefit from strategies to improve efficacy.

  8. Online Sequential Projection Vector Machine with Adaptive Data Mean Update.

    Science.gov (United States)

    Chen, Lin; Jia, Ji-Ting; Zhang, Qiong; Deng, Wan-Yu; Wei, Wei

    2016-01-01

    We propose a simple online learning algorithm especially suited for high-dimensional data. The algorithm is referred to as online sequential projection vector machine (OSPVM), which derives from the projection vector machine and can learn from data in one-by-one or chunk-by-chunk mode. In OSPVM, data centering, dimension reduction, and neural network training are integrated seamlessly. In particular, the model parameters including (1) the projection vectors for dimension reduction, (2) the input weights, biases, and output weights, and (3) the number of hidden nodes can be updated simultaneously. Moreover, only one parameter, the number of hidden nodes, needs to be determined manually, which makes the method easy to use in real applications. Performance comparison was made on various high-dimensional classification problems for OSPVM against other fast online algorithms including the budgeted stochastic gradient descent (BSGD) approach, adaptive multihyperplane machine (AMM), primal estimated subgradient solver (Pegasos), online sequential extreme learning machine (OSELM), and SVD + OSELM (feature selection based on SVD is performed before OSELM). The results obtained demonstrate the superior generalization performance and efficiency of OSPVM.
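
    The adaptive data-mean update used for online centering can be sketched as follows (Python/NumPy; the variable names are ours and the sketch covers only the running-mean part, not the full OSPVM update): when a new chunk arrives, the stored mean is combined with the chunk mean using the sample counts, rather than recomputed from all past data.

        import numpy as np

        def update_mean(old_mean, old_count, new_chunk):
            """Combine a stored running mean with a newly arrived chunk of samples."""
            new_chunk = np.atleast_2d(new_chunk)
            n_new = new_chunk.shape[0]
            total = old_count + n_new
            new_mean = (old_count * old_mean + new_chunk.sum(axis=0)) / total
            return new_mean, total

        mean, count = np.zeros(3), 0
        for chunk in (np.random.randn(10, 3), np.random.randn(25, 3)):
            mean, count = update_mean(mean, count, chunk)
            centered = chunk - mean          # centering with the up-to-date mean
        print(count, mean.shape)             # 35 (3,)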

  9. Online Sequential Projection Vector Machine with Adaptive Data Mean Update

    Directory of Open Access Journals (Sweden)

    Lin Chen

    2016-01-01

    Full Text Available We propose a simple online learning algorithm especially suited for high-dimensional data. The algorithm is referred to as online sequential projection vector machine (OSPVM), which derives from the projection vector machine and can learn from data in one-by-one or chunk-by-chunk mode. In OSPVM, data centering, dimension reduction, and neural network training are integrated seamlessly. In particular, the model parameters including (1) the projection vectors for dimension reduction, (2) the input weights, biases, and output weights, and (3) the number of hidden nodes can be updated simultaneously. Moreover, only one parameter, the number of hidden nodes, needs to be determined manually, which makes the method easy to use in real applications. Performance comparison was made on various high-dimensional classification problems for OSPVM against other fast online algorithms including the budgeted stochastic gradient descent (BSGD) approach, adaptive multihyperplane machine (AMM), primal estimated subgradient solver (Pegasos), online sequential extreme learning machine (OSELM), and SVD + OSELM (feature selection based on SVD is performed before OSELM). The results obtained demonstrate the superior generalization performance and efficiency of OSPVM.

  10. Towards an Ethical Framework for Publishing Twitter Data in Social Research: Taking into Account Users' Views, Online Context and Algorithmic Estimation.

    Science.gov (United States)

    Williams, Matthew L; Burnap, Pete; Sloan, Luke

    2017-12-01

    New and emerging forms of data, including posts harvested from social media sites such as Twitter, have become part of the sociologist's data diet. In particular, some researchers see an advantage in the perceived 'public' nature of Twitter posts, representing them in publications without seeking informed consent. While such practice may not be at odds with Twitter's terms of service, we argue there is a need to interpret these through the lens of social science research methods that imply a more reflexive ethical approach than provided in 'legal' accounts of the permissible use of these data in research publications. To challenge some existing practice in Twitter-based research, this article brings to the fore: (1) views of Twitter users through analysis of online survey data; (2) the effect of context collapse and online disinhibition on the behaviours of users; and (3) the publication of identifiable sensitive classifications derived from algorithms.

  11. Does Self-Selection Affect Samples’ Representativeness in Online Surveys? An Investigation in Online Video Game Research

    Science.gov (United States)

    van Singer, Mathias; Chatton, Anne; Achab, Sophia; Zullino, Daniele; Rothen, Stephane; Khan, Riaz; Billieux, Joel; Thorens, Gabriel

    2014-01-01

    Background The number of medical studies performed through online surveys has increased dramatically in recent years. Despite their numerous advantages (eg, sample size, facilitated access to individuals presenting stigmatizing issues), selection bias may exist in online surveys. However, evidence on the representativeness of self-selected samples in online studies is patchy. Objective Our objective was to explore the representativeness of a self-selected sample of online gamers using online players’ virtual characters (avatars). Methods All avatars belonged to individuals playing World of Warcraft (WoW), currently the most widely used online game. Avatars’ characteristics were defined using various games’ scores, reported on the WoW’s official website, and two self-selected samples from previous studies were compared with a randomly selected sample of avatars. Results We used scores linked to 1240 avatars (762 from the self-selected samples and 478 from the random sample). The two self-selected samples of avatars had higher scores on most of the assessed variables (except for guild membership and exploration). Furthermore, some guilds were overrepresented in the self-selected samples. Conclusions Our results suggest that more proficient players or players more involved in the game may be more likely to participate in online surveys. Caution is needed in the interpretation of studies based on online surveys that used a self-selection recruitment procedure. Epidemiological evidence on the reduced representativeness of sample of online surveys is warranted. PMID:25001007

  12. Stability and chaos of LMSER PCA learning algorithm

    International Nuclear Information System (INIS)

    Lv Jiancheng; Y, Zhang

    2007-01-01

    LMSER PCA algorithm is a principal components analysis algorithm. It is used to extract principal components on-line from input data. The algorithm has both stability and chaotic dynamic behavior under some conditions. This paper studies the local stability of the LMSER PCA algorithm via a corresponding deterministic discrete time system. Conditions for local stability are derived. The paper also explores the chaotic behavior of this algorithm. It shows that the LMSER PCA algorithm can produce chaos. Waveform plots, Lyapunov exponents and bifurcation diagrams are presented to illustrate the existence of chaotic behavior of this algorithm

  13. An Unsupervised Online Spike-Sorting Framework.

    Science.gov (United States)

    Knieling, Simeon; Sridharan, Kousik S; Belardinelli, Paolo; Naros, Georgios; Weiss, Daniel; Mormann, Florian; Gharabaghi, Alireza

    2016-08-01

    Extracellular neuronal microelectrode recordings can include action potentials from multiple neurons. To separate spikes from different neurons, they can be sorted according to their shape, a procedure referred to as spike-sorting. Several algorithms have been reported to solve this task. However, when clustering outcomes are unsatisfactory, most of them are difficult to adjust to achieve the desired results. We present an online spike-sorting framework that uses feature normalization and weighting to maximize the distinctiveness between different spike shapes. Furthermore, multiple criteria are applied to either facilitate or prevent cluster fusion, thereby enabling experimenters to fine-tune the sorting process. We compare our method to established unsupervised offline (Wave_Clus (WC)) and online (OSort (OS)) algorithms by examining their performance in sorting various test datasets using two different scoring systems (AMI and the Adamos metric). Furthermore, we evaluate sorting capabilities on intra-operative recordings using established quality metrics. Compared to WC and OS, our algorithm achieved comparable or higher scores on average and produced more convincing sorting results for intra-operative datasets. Thus, the presented framework is suitable for both online and offline analysis and could substantially improve the quality of microelectrode-based data evaluation for research and clinical application.
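
    The feature normalization and weighting step can be sketched as follows (Python/NumPy; the z-score scaling and the example weights are a generic illustration, not the exact criteria of the framework): features describing each spike are scaled to comparable ranges and multiplied by weights before distances between spike shapes are computed for clustering.

        import numpy as np

        def weighted_features(features, weights):
            """Z-score each feature column, then apply per-feature weights."""
            mu = features.mean(axis=0)
            sigma = features.std(axis=0) + 1e-12
            return (features - mu) / sigma * weights

        rng = np.random.default_rng(1)
        spikes = rng.normal(size=(500, 4))            # e.g. PCA scores / peak amplitude per spike
        weights = np.array([1.0, 1.0, 0.5, 0.25])     # emphasize the most discriminative features
        X = weighted_features(spikes, weights)
        d = np.linalg.norm(X[0] - X[1])               # distance used for cluster assignment
        print(round(float(d), 3))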

  14. A method for online verification of adapted fields using an independent dose monitor

    International Nuclear Information System (INIS)

    Chang Jina; Norrlinger, Bernhard D.; Heaton, Robert K.; Jaffray, David A.; Cho, Young-Bin; Islam, Mohammad K.; Mahon, Robert

    2013-01-01

    Purpose: Clinical implementation of online adaptive radiotherapy requires generation of modified fields and a method of dosimetric verification in a short time. We present a method of treatment field modification to account for patient setup error, and an online method of verification using an independent monitoring system. Methods: The fields are modified by translating each multileaf collimator (MLC) defined aperture in the direction of the patient setup error, and magnifying to account for distance variation to the marked isocentre. A modified version of a previously reported online beam monitoring system, the integral quality monitoring (IQM) system, was investigated for validation of adapted fields. The system consists of a large area ion-chamber with a spatial gradient in electrode separation to provide a spatially sensitive signal for each beam segment, mounted below the MLC, and a calculation algorithm to predict the signal. IMRT plans of ten prostate patients have been modified in response to six randomly chosen setup errors in three orthogonal directions. Results: A total of approximately 49 beams for the modified fields were verified by the IQM system, of which 97% of the measured IQM signals agree with the predicted value to within 2%. Conclusions: The modified IQM system was found to be suitable for online verification of adapted treatment fields.
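
    A much-simplified geometric sketch of the field-adaptation step (Python; one-dimensional leaf edges, an assumed 1000 mm source-to-axis distance, and a naive divergence correction, not the authors' implementation): each MLC-defined edge is shifted by the in-plane setup error and then scaled to account for a change in the distance to the marked isocentre.

        def adapt_leaf_positions(leaf_edges_mm, shift_mm, sad_mm=1000.0, dz_mm=0.0):
            """Translate MLC leaf edges by the setup error and magnify for a change dz
            in source-to-isocentre distance (positive dz = isocentre moved away from source)."""
            scale = (sad_mm + dz_mm) / sad_mm           # simple divergence magnification
            return [(edge + shift_mm) * scale for edge in leaf_edges_mm]

        # Example: a 60 mm wide aperture shifted 3 mm laterally, isocentre 10 mm farther away
        print(adapt_leaf_positions([-30.0, 30.0], shift_mm=3.0, dz_mm=10.0))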

  15. Orientation estimation algorithm applied to high-spin projectiles

    International Nuclear Information System (INIS)

    Long, D F; Lin, J; Zhang, X M; Li, J

    2014-01-01

    High-spin projectiles are low cost military weapons. Accurate orientation information is critical to the performance of the high-spin projectiles control system. However, orientation estimators have not been well translated from flight vehicles since they are too expensive, lack launch robustness, do not fit within the allotted space, or are too application specific. This paper presents an orientation estimation algorithm specific for these projectiles. The orientation estimator uses an integrated filter to combine feedback from a three-axis magnetometer, two single-axis gyros and a GPS receiver. As a new feature of this algorithm, the magnetometer feedback estimates roll angular rate of projectile. The algorithm also incorporates online sensor error parameter estimation performed simultaneously with the projectile attitude estimation. The second part of the paper deals with the verification of the proposed orientation algorithm through numerical simulation and experimental tests. Simulations and experiments demonstrate that the orientation estimator can effectively estimate the attitude of high-spin projectiles. Moreover, online sensor calibration significantly enhances the estimation performance of the algorithm. (paper)

  16. Orientation estimation algorithm applied to high-spin projectiles

    Science.gov (United States)

    Long, D. F.; Lin, J.; Zhang, X. M.; Li, J.

    2014-06-01

    High-spin projectiles are low cost military weapons. Accurate orientation information is critical to the performance of the high-spin projectiles control system. However, orientation estimators have not been well translated from flight vehicles since they are too expensive, lack launch robustness, do not fit within the allotted space, or are too application specific. This paper presents an orientation estimation algorithm specific for these projectiles. The orientation estimator uses an integrated filter to combine feedback from a three-axis magnetometer, two single-axis gyros and a GPS receiver. As a new feature of this algorithm, the magnetometer feedback estimates roll angular rate of projectile. The algorithm also incorporates online sensor error parameter estimation performed simultaneously with the projectile attitude estimation. The second part of the paper deals with the verification of the proposed orientation algorithm through numerical simulation and experimental tests. Simulations and experiments demonstrate that the orientation estimator can effectively estimate the attitude of high-spin projectiles. Moreover, online sensor calibration significantly enhances the estimation performance of the algorithm.
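
    As a simplified illustration of fusing a gyro with a magnetometer-derived angle, the following complementary-filter sketch tracks a roll angle (Python/NumPy; the paper uses an integrated filter with GPS aiding and online sensor calibration, which is not reproduced here): the drifting but smooth integrated gyro rate is blended with the noisy but drift-free magnetometer angle.

        import numpy as np

        def complementary_filter(angle, gyro_rate, mag_angle, dt, alpha=0.98):
            """Blend an integrated gyro rate (smooth, drifting) with a magnetometer-derived
            angle (noisy, drift-free); alpha controls how much the gyro is trusted."""
            predicted = angle + gyro_rate * dt
            return alpha * predicted + (1.0 - alpha) * mag_angle

        angle, dt = 0.0, 0.001
        for k in range(1000):                             # 1 s of 1 kHz data
            gyro = 2.0 * np.pi * 10                       # 10 rev/s spin (rad/s), no bias in this toy
            mag = 2.0 * np.pi * 10 * (k + 1) * dt + np.random.normal(0, 0.05)
            angle = complementary_filter(angle, gyro, mag, dt)
        print(round(angle, 2), round(2.0 * np.pi * 10, 2))   # estimate vs. true roll angle after 1 s (rad)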

  17. Online Evolution for Multi-Action Adversarial Games

    OpenAIRE

    Justesen, Niels; Mahlmann, Tobias; Togelius, Julian

    2016-01-01

    We present Online Evolution, a novel method for playing turn-based multi-action adversarial games. Such games, which include most strategy games, have extremely high branching factors due to each turn having multiple actions. In Online Evolution, an evolutionary algorithm is used to evolve the combination of atomic actions that make up a single move, with a state evaluation function used for fitness. We implement Online Evolution for the turn-based multi-action game Hero Academy and compare i...
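
    A minimal sketch of the idea of evolving one turn's action combination (Python; the atomic actions and the evaluation function are stand-ins, not Hero Academy or the authors' code): a small population of fixed-length action sequences is selected, recombined and mutated, and the best sequence under a heuristic state evaluation is played this turn.

        import random

        ACTIONS = ["move", "attack", "heal", "cast", "swap"]   # stand-in atomic actions
        ACTIONS_PER_TURN = 3

        def evaluate(seq):
            """Stand-in state evaluation: pretend some action orders score better than others."""
            return sum((i + 1) * ACTIONS.index(a) for i, a in enumerate(seq)) + random.random()

        def online_evolution(pop_size=20, generations=50, mutation_rate=0.3):
            pop = [[random.choice(ACTIONS) for _ in range(ACTIONS_PER_TURN)] for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=evaluate, reverse=True)
                survivors = pop[: pop_size // 2]
                children = []
                while len(survivors) + len(children) < pop_size:
                    a, b = random.sample(survivors, 2)
                    cut = random.randrange(1, ACTIONS_PER_TURN)      # one-point crossover
                    child = a[:cut] + b[cut:]
                    if random.random() < mutation_rate:
                        child[random.randrange(ACTIONS_PER_TURN)] = random.choice(ACTIONS)
                    children.append(child)
                pop = survivors + children
            return max(pop, key=evaluate)

        print(online_evolution())    # the action combination played this turn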

  18. Efficacy and causal mechanism of an online social media intervention to increase physical activity: Results of a randomized controlled trial

    Directory of Open Access Journals (Sweden)

    Jingwen Zhang

    2015-01-01

    Full Text Available Objective: To identify what features of social media – promotional messaging or peer networks – can increase physical activity. Method: A 13-week social media-based exercise program was conducted at a large Northeastern university in Philadelphia, PA. In a randomized controlled trial, 217 graduate students from the University were randomized to three conditions: a control condition with a basic online program for enrolling in weekly exercise classes led by instructors of the University for 13 weeks, a media condition that supplemented the basic program with weekly online promotional media messages that encourage physical activity, and a social condition that replaced the media content with an online network of four to six anonymous peers composed of other participants of the program, in which each participant was able to see their peers' progress in enrolling in classes. The primary outcome was the number of enrollments in exercise classes, and the secondary outcomes were self-reported physical activities. Data were collected in 2014. Results: Participants enrolled in 5.5 classes on average. Compared with enrollment in the control condition (mean = 4.5), promotional messages moderately increased enrollment (mean = 5.7, p = 0.08), while anonymous social networks significantly increased enrollment (mean = 6.3, p = 0.02). By the end of the program, participants in the social condition reported exercising moderately for an additional 1.6 days each week compared with the baseline, which was significantly more than an additional 0.8 days in the control condition. Conclusion: Social influence from anonymous online peers was more successful than promotional messages for improving physical activity. Clinical Trial Registration: ClinicalTrials.gov: NCT02267369.

  19. Online Personalization of Hearing Instruments

    Directory of Open Access Journals (Sweden)

    Bert de Vries

    2008-09-01

    Full Text Available Online personalization of hearing instruments refers to learning preferred tuning parameter values from user feedback through a control wheel (or remote control), during normal operation of the hearing aid. We perform hearing aid parameter steering by applying a linear map from acoustic features to tuning parameters. We formulate personalization of the steering parameters as the maximization of an expected utility function. A sparse Bayesian approach is then investigated for its suitability to find efficient feature representations. The feasibility of our approach is demonstrated in an application to online personalization of a noise reduction algorithm. A patient trial indicates that the acoustic features chosen for learning noise control are meaningful, that environmental steering of noise reduction makes sense, and that our personalization algorithm learns proper values for tuning parameters.
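
    A minimal sketch of the linear steering idea (Python/NumPy; the features, learning rate and update rule are illustrative assumptions, not the paper's expected-utility machinery): tuning parameters are a linear function of acoustic features, and the map is nudged toward the user's correction whenever the control wheel is used.

        import numpy as np

        class LinearSteering:
            def __init__(self, n_features, n_params, lr=0.05):
                self.W = np.zeros((n_params, n_features))   # steering map: params = W @ features
                self.lr = lr

            def propose(self, features):
                return self.W @ features

            def learn(self, features, user_adjustment):
                """The user turned the wheel by `user_adjustment`; move the map so the same
                acoustic context next time yields a setting closer to the preference."""
                self.W += self.lr * np.outer(user_adjustment, features)

        steer = LinearSteering(n_features=3, n_params=1)
        features = np.array([0.8, 0.1, 0.3])        # e.g. noise level, speech presence, input level
        for _ in range(100):
            setting = steer.propose(features)
            correction = np.array([4.0]) - setting  # user keeps nudging toward +4 dB in this context
            steer.learn(features, correction)
        print(steer.propose(features).round(2))     # converges toward the preferred setting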

  20. Evaluation of an Online Campaign for Promoting Help-Seeking Attitudes for Depression Using a Facebook Advertisement: An Online Randomized Controlled Experiment.

    Science.gov (United States)

    Hui, Alison; Wong, Paul Wai-Ching; Fu, King-Wa

    2015-01-01

    A depression-awareness campaign delivered through the Internet has been recommended as a public health approach that would enhance mental health literacy and encourage help-seeking attitudes. However, the outcomes of such a campaign remain understudied. The main aim of this study was to evaluate the effectiveness of an online depression awareness campaign, which was informed by the theory of planned behavior, to encourage help-seeking attitudes for depression and to enhance mental health literacy in Hong Kong. The second aim was to examine click-through behaviors by varying the affective facial expressions of people in the Facebook advertisements. Potential participants were recruited through Facebook advertisements, using either a happy or sad face illustration. Volunteer participants registered for the study by clicking on the advertisement and were invited to leave their personal email addresses to receive educational content about depression. The participants were randomly assigned into two groups (campaign or control), and over a four consecutive week period, received either the campaign material or official information developed by the Hospital Authority in Hong Kong. Pretests and posttests were conducted before and after the campaign to measure the differences in help-seeking attitudes and mental health literacy among the campaign and control groups. Of the 199 participants that registered and completed the pretest, 116 (55 campaign and 62 control) completed the campaign and the posttest. At the posttest, we found no significant changes in help-seeking attitudes between the campaign and control groups, but the campaign group participants demonstrated a statistically significant improvement in mental health literacy (P=.031) and a higher willingness to access additional information. The happy face advertisement attracted more click-throughs by users into the website than did the sad face advertisement (P=.03). The present study provides evidence that an online campaign can

  1. SU-F-BRD-09: A Random Walk Model Algorithm for Proton Dose Calculation

    International Nuclear Information System (INIS)

    Yao, W; Farr, J

    2015-01-01

    Purpose: To develop a random walk model algorithm for calculating proton dose with balanced computation burden and accuracy. Methods: Random walk (RW) model is sometimes referred to as a density Monte Carlo (MC) simulation. In MC proton dose calculation, the use of Gaussian angular distribution of protons due to multiple Coulomb scatter (MCS) is convenient, but in RW the use of Gaussian angular distribution requires an extremely large computation and memory. Thus, our RW model adopts spatial distribution from the angular one to accelerate the computation and to decrease the memory usage. From the physics and comparison with the MC simulations, we have determined and analytically expressed those critical variables affecting the dose accuracy in our RW model. Results: Besides those variables such as MCS, stopping power, energy spectrum after energy absorption etc., which have been extensively discussed in literature, the following variables were found to be critical in our RW model: (1) inverse squared law that can significantly reduce the computation burden and memory, (2) non-Gaussian spatial distribution after MCS, and (3) the mean direction of scatters at each voxel. In comparison to MC results, taken as reference, for a water phantom irradiated by mono-energetic proton beams from 75 MeV to 221.28 MeV, the gamma test pass rate was 100% for the 2%/2mm/10% criterion. For a highly heterogeneous phantom consisting of water embedded by a 10 cm cortical bone and a 10 cm lung in the Bragg peak region of the proton beam, the gamma test pass rate was greater than 98% for the 3%/3mm/10% criterion. Conclusion: We have determined key variables in our RW model for proton dose calculation. Compared with commercial pencil beam algorithms, our RW model much improves the dose accuracy in heterogeneous regions, and is about 10 times faster than MC simulations

  2. Metadiscourse Markers of Online Texts: English and Persian Online Headlines Use of Metadiscourse Markers

    Science.gov (United States)

    Yazdani, Akram; Salehi, Hadi

    2016-01-01

    The aim of the present study was to illuminate the differences between Persian and English in online headlines in terms of applying metadiscourse markers in the first two months of the year 2015. To fulfill this purpose, 100 Persian and English online headlines (each 50 headlines) were chosen randomly from English and Persian newscasts such as…

  3. Computationally efficient model predictive control algorithms a neural network approach

    CERN Document Server

    Ławryńczuk, Maciej

    2014-01-01

    This book thoroughly discusses computationally efficient (suboptimal) Model Predictive Control (MPC) techniques based on neural models. The subjects treated include: a few types of suboptimal MPC algorithms in which a linear approximation of the model or of the predicted trajectory is successively calculated on-line and used for prediction; implementation details of the MPC algorithms for feedforward perceptron neural models, neural Hammerstein models, neural Wiener models and state-space neural models; the MPC algorithms based on neural multi-models (inspired by the idea of predictive control); the MPC algorithms with neural approximation with no on-line linearization; the MPC algorithms with guaranteed stability and robustness; and cooperation between the MPC algorithms and set-point optimization. Thanks to linearization (or neural approximation), the presented suboptimal algorithms do not require d...

  4. General Algorithm (High level)

    Indian Academy of Sciences (India)

    General Algorithm (High level). Iteratively. Use Tightness Property to remove points of P1,..,Pi. Use random sampling to get a Random Sample (of enough points) from the next largest cluster, Pi+1. Use the Random Sampling Procedure to approximate ci+1 using the ...

  5. A pilot randomized controlled trial of on-line interventions to improve sleep quality in adults after mild or moderate traumatic brain injury.

    Science.gov (United States)

    Theadom, Alice; Barker-Collo, Suzanne; Jones, Kelly; Dudley, Margaret; Vincent, Norah; Feigin, Valery

    2018-05-01

    To explore feasibility and potential efficacy of on-line interventions for sleep quality following a traumatic brain injury (TBI). A two parallel-group, randomized controlled pilot study. Community-based. In all, 24 participants (mean age: 35.9 ± 11.8 years) who reported experiencing sleep difficulties between 3 and 36 months after a mild or moderate TBI. Participants were randomized to receive either a cognitive behaviour therapy or an education intervention on-line. Both interventions were self-completed for 20-30 minutes per week over a six-week period. The Pittsburgh Sleep Quality Index assessed self-reported sleep quality with actigraphy used as an objective measure of sleep quality. The CNS Vital Signs on-line neuropsychological test assessed cognitive functioning and the Rivermead Post-concussion Symptoms and Quality of Life after Brain Injury questionnaires were completed pre and post intervention. Both programmes demonstrated feasibility for use post TBI, with 83.3% of participants completing the interventions. The cognitive behaviour therapy group experienced significant reductions ( F = 5.47, p = 0.04) in sleep disturbance (mean individual change = -4.00) in comparison to controls post intervention (mean individual change = -1.50) with a moderate effect size of 1.17. There were no significant group differences on objective sleep quality, cognitive functioning, post-concussion symptoms or quality of life. On-line programmes designed to improve sleep are feasible for use for adults following mild-to-moderate TBI. Based on the effect size identified in this pilot study, 128 people (64 per group) would be needed to determine clinical effectiveness.

  6. Detection of boiling by Piety's on-line PSD-pattern recognition algorithm applied to neutron noise signals in the SAPHIR reactor

    International Nuclear Information System (INIS)

    Spiekerman, G.

    1988-09-01

    A partial blockage of the cooling channels of a fuel element in a swimming pool reactor could lead to vapour generation and to burn-out. To detect such anomalies, a pattern recognition algorithm based on power spectral density (PSD) proposed by Piety was further developed and implemented on a PDP 11/23 for on-line applications. This algorithm identifies anomalies by measuring the PSD of the process signal and comparing it with a standard baseline previously formed. Up to 8 decision discriminants help to recognize spectral changes due to anomalies. In our application, to detect boiling as quickly as possible with sufficient sensitivity, Piety's algorithm was modified using overlapped Fast-Fourier-Transform processing and the averaging of the PSDs over a large sample of preceding instantaneous PSDs. This processing allows high sensitivity in detecting weak disturbances without reducing response time. The algorithm was tested with simulation-of-boiling experiments in which nitrogen was injected into a cooling channel of a mock-up of a fuel element. Void fractions higher than 30% in the channel can be detected. In the case of boiling, it is believed that this limit is lower because collapsing bubbles could give rise to stronger fluctuations. The algorithm was also tested with a boiling experiment in which the reactor coolant flow was actually reduced. The results showed that the discriminant D5 of Piety's algorithm, based on neutron noise obtained from the existing neutron chambers of the reactor control system, could sensitively recognize boiling. The detection time amounts to 7-30 s depending on the strength of the disturbances. Other events, which arise during a normal reactor run, like scrams, removal of isotope elements without scramming or control rod movements, and which could lead to false alarms, can be distinguished from boiling. 49 refs., 104 figs., 5 tabs
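
    A minimal sketch of the PSD-versus-baseline idea (Python/SciPy; the discriminant shown is a generic peak ratio, not Piety's D5): averaged spectra of the incoming signal are compared bin by bin with a previously recorded baseline, and an alarm is raised when the deviation exceeds a threshold.

        import numpy as np
        from scipy.signal import welch

        def psd_discriminant(signal, baseline_psd, fs=100.0, nperseg=256):
            """Largest ratio of current to baseline spectral power over all frequency bins."""
            f, pxx = welch(signal, fs=fs, nperseg=nperseg)
            return float(np.max(pxx / baseline_psd))

        rng = np.random.default_rng(0)
        fs, n = 100.0, 20000
        quiet = rng.normal(size=n)                                   # baseline neutron noise
        f, baseline_psd = welch(quiet, fs=fs, nperseg=256)

        normal = rng.normal(size=n)
        boiling = rng.normal(size=n) + 0.5 * np.sin(2 * np.pi * 12.0 * np.arange(n) / fs)

        print(psd_discriminant(normal, baseline_psd))    # near 1 for an unchanged signal
        print(psd_discriminant(boiling, baseline_psd))   # much larger once a narrow-band component appears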

  7. On-line transient stability assessment of large-scale power systems by using ball vector machines

    International Nuclear Information System (INIS)

    Mohammadi, M.; Gharehpetian, G.B.

    2010-01-01

    In this paper, the ball vector machine (BVM) has been used for on-line transient stability assessment of large-scale power systems. To classify the system transient security status, a BVM has been trained for all contingencies. The proposed BVM-based security assessment algorithm requires very little training time and space in comparison with artificial neural networks (ANN), support vector machines (SVM) and other machine learning based algorithms. In addition, the proposed algorithm has fewer support vectors (SV) and is therefore faster than existing algorithms for on-line applications. One of the main points in applying a machine learning method is feature selection. In this paper, a new Decision Tree (DT) based feature selection technique is presented. The proposed BVM-based algorithm has been applied to the New England 39-bus power system. The simulation results show the effectiveness and stability of the proposed method for the on-line transient stability assessment procedure of large-scale power systems. The proposed feature selection algorithm has been compared with different feature selection algorithms. The simulation results demonstrate the effectiveness of the proposed feature selection algorithm.

  8. Algorithmic learning in a random world

    CERN Document Server

    Vovk, Vladimir; Shafer, Glenn

    2005-01-01

    A new scientific monograph developing significant new algorithmic foundations in machine learning theory. Researchers and postgraduates in CS, statistics, and A.I. will find the book an authoritative and formal presentation of some of the most promising theoretical developments in machine learning.

  9. Balancing exploration and exploitation in learning to rank online

    NARCIS (Netherlands)

    Hofmann, K.; Whiteson, S.; de Rijke, M.

    2011-01-01

    As retrieval systems become more complex, learning to rank approaches are being developed to automatically tune their parameters. Using online learning to rank approaches, retrieval systems can learn directly from implicit feedback, while they are running. In such an online setting, algorithms need

  10. Prediction of glycosylation sites using random forests

    Directory of Open Access Journals (Sweden)

    Hirst Jonathan D

    2008-11-01

    Full Text Available Abstract Background Post translational modifications (PTMs) occur in the vast majority of proteins and are essential for function. Prediction of the sequence location of PTMs enhances the functional characterisation of proteins. Glycosylation is one type of PTM, and is implicated in protein folding, transport and function. Results We use the random forest algorithm and pairwise patterns to predict glycosylation sites. We identify pairwise patterns surrounding glycosylation sites and use an odds ratio to weight their propensity of association with modified residues. Our prediction program, GPP (glycosylation prediction program), predicts glycosylation sites with an accuracy of 90.8% for Ser sites, 92.0% for Thr sites and 92.8% for Asn sites. This is significantly better than current glycosylation predictors. We use the trepan algorithm to extract a set of comprehensible rules from GPP, which provide biological insight into all three major glycosylation types. Conclusion We have created an accurate predictor of glycosylation sites and used this to extract comprehensible rules about the glycosylation process. GPP is available online at http://comp.chem.nottingham.ac.uk/glyco/.
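
    As a generic illustration of the approach (Python/scikit-learn; the toy numeric features and labels below are invented and are not the GPP pairwise-pattern features): candidate sites are described by features of their sequence neighbourhood and a random forest separates modified from unmodified residues.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        # Toy data: 1000 candidate sites, 20 numeric features describing the sequence window
        X = rng.normal(size=(1000, 20))
        y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X_tr, y_tr)
        print(round(clf.score(X_te, y_te), 3))       # held-out accuracy on the toy data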

  11. Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs

    Directory of Open Access Journals (Sweden)

    Haijian Chen

    2015-01-01

    Full Text Available In recent years, Massive Open Online Courses (MOOCs) have become very popular among college students and have a powerful impact on academic institutions. In the MOOCs environment, knowledge discovery and knowledge sharing are very important, and they are currently often achieved by ontology techniques. In building ontology, automatic extraction technology is crucial. Because general text mining algorithms do not have an obvious effect on online courses, we designed an automatic extraction of course knowledge points (AECKP) algorithm for online courses. It includes document classification, Chinese word segmentation, and POS tagging for each document. The Vector Space Model (VSM) is used to calculate similarity, and the weight is designed to optimize the TF-IDF algorithm output values; the terms with the highest scores are selected as knowledge points. Course documents of “C programming language” were selected for the experiment in this study. The results show that the proposed approach can achieve a satisfactory accuracy rate and recall rate.
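
    A minimal sketch of the TF-IDF scoring step (Python/scikit-learn; the example documents and the simple score-combination rule are invented for illustration): candidate terms are weighted by TF-IDF and the highest-scoring terms are kept as knowledge-point candidates.

        from sklearn.feature_extraction.text import TfidfVectorizer

        docs = [
            "pointers and arrays in the C programming language",
            "for loops, while loops and control flow",
            "functions, pointers and recursion",
        ]
        vec = TfidfVectorizer(stop_words="english")
        tfidf = vec.fit_transform(docs)                    # documents x terms matrix

        # Score each term by its maximum TF-IDF weight over the corpus and keep the top ones
        scores = tfidf.max(axis=0).toarray().ravel()
        terms = vec.get_feature_names_out()
        top = sorted(zip(terms, scores), key=lambda p: -p[1])[:5]
        print([t for t, _ in top])                         # candidate knowledge points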

  12. Implementation of Naive Bayes Classifier Algorithm to Evaluation in Utilizing Online Hotel Tax Reporting Application

    Directory of Open Access Journals (Sweden)

    R. Dimas Adityo

    2017-10-01

    Full Text Available Hotel tax reporting in the Pasuruan region is currently carried out online (Web-based), with the aim that the reporting system can receive financial statements from hotel taxpayers effectively and efficiently. Pasuruan, a small but rapidly growing town in East Java, implemented the online tax filing system as a role model starting in 2015 with 6 hotels, covering several hotel classes ranging from budget up to three stars. After the system had been running for 18 months (2015-2016), the existing data were used to analyze the level of compliance of taxpayers reporting hotel incomes. In this research, a system was designed and built to evaluate the compliance of each taxpayer (WP) in the second year (2016) and classify it into the categories (1) very obedient taxpayer (ST), (2) fairly obedient taxpayer (CT), and (3) less obedient taxpayer (KT). The input data are processed using the Naive Bayes Classifier (NBC) data mining algorithm to form the probability table that serves as the basis for classifying taxpayer compliance levels. Based on the measurements, the test results show an accuracy of 50%, i.e. 3 taxpayers are classified as very obedient (ST) in paying taxes. From this classification, the study can provide recommendations to guide taxpayers in reporting revenues correctly.
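
    A minimal sketch of the classification step (Python/scikit-learn; the features and the tiny training set are invented for illustration, and a Gaussian model stands in for the paper's probability-table formulation): taxpayers are assigned to the ST/CT/KT compliance categories by a Naive Bayes classifier.

        import numpy as np
        from sklearn.naive_bayes import GaussianNB

        # Toy features per taxpayer: [months reported on time, average reported revenue (millions)]
        X = np.array([[12, 90], [11, 85], [12, 70], [6, 40], [7, 55], [2, 10], [3, 20], [1, 5]])
        y = np.array(["ST", "ST", "ST", "CT", "CT", "KT", "KT", "KT"])   # compliance categories

        model = GaussianNB().fit(X, y)
        print(model.predict([[10, 80], [4, 15]]))    # e.g. ['ST' 'KT']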

  13. A solution to energy and environmental problems of electric power system using hybrid harmony search-random search optimization algorithm

    Directory of Open Access Journals (Sweden)

    Vikram Kumar Kamboj

    2016-04-01

    Full Text Available In recent years, global warming and carbon dioxide (CO2) emission reduction have become important issues in India. As CO2 emission levels continue to rise with the increasing volume of Indian national energy consumption, it is crucial for the Indian government to impose effective policies to promote CO2 emission reduction. The challenge of supplying the nation with high-quality and reliable electrical energy at a reasonable cost has pushed government policy toward a deregulated and restructured environment. This research paper aims to present an effective solution for the energy and environmental problems of electric power using an efficient and powerful hybrid optimization algorithm: the hybrid harmony search-random search algorithm. The proposed algorithm is tested on the standard IEEE-14 bus, -30 bus and -56 bus systems. The effectiveness of the proposed hybrid algorithm is compared with other well-known evolutionary, heuristic and meta-heuristic search algorithms. For multi-objective unit commitment, it is found that there is a conflicting relationship between cost and emission: if performance on the cost criterion is improved, performance on emission is seen to deteriorate.

  14. Deterministic algorithms for multi-criteria Max-TSP

    NARCIS (Netherlands)

    Manthey, Bodo

    2012-01-01

    We present deterministic approximation algorithms for the multi-criteria maximum traveling salesman problem (Max-TSP). Our algorithms are faster and simpler than the existing randomized algorithms. We devise algorithms for the symmetric and asymmetric multi-criteria Max-TSP that achieve ratios of

  15. Application of random number generators in genetic algorithms to improve rainfall-runoff modelling

    Science.gov (United States)

    Chlumecký, Martin; Buchtele, Josef; Richta, Karel

    2017-10-01

    The efficient calibration of rainfall-runoff models is a difficult issue, even for experienced hydrologists. Therefore, fast and high-quality model calibration is a valuable improvement. This paper describes a novel methodology and software for the optimisation of a rainfall-runoff modelling using a genetic algorithm (GA) with a newly prepared concept of a random number generator (HRNG), which is the core of the optimisation. The GA estimates model parameters using evolutionary principles, which requires a quality number generator. The new HRNG generates random numbers based on hydrological information and it provides better numbers compared to pure software generators. The GA enhances the model calibration very well and the goal is to optimise the calibration of the model with a minimum of user interaction. This article focuses on improving the internal structure of the GA, which is shielded from the user. The results that we obtained indicate that the HRNG provides a stable trend in the output quality of the model, despite various configurations of the GA. In contrast to previous research, the HRNG speeds up the calibration of the model and offers an improvement of rainfall-runoff modelling.

  16. A Compression Algorithm in Wireless Sensor Networks of Bearing Monitoring

    International Nuclear Information System (INIS)

    Zheng Bin; Meng Qingfeng; Wang Nan; Li Zhi

    2011-01-01

    The energy consumption of wireless sensor networks (WSNs) is always an important problem in the application of wireless sensor networks. This paper proposes a data compression algorithm to reduce the amount of data and the energy consumption during the data transmission process in the online WSNs-based bearing monitoring system. The proposed compression algorithm is based on lifting wavelets, zerotree coding and Huffman coding. Among them, the 5/3 lifting wavelet is used to divide data into different frequency bands to extract signal characteristics. Zerotree coding is applied to calculate the dynamic thresholds to retain the attribute data. The attribute data are then encoded by Huffman coding to further enhance the compression ratio. In order to validate the algorithm, simulation is carried out using Matlab. The result of the simulation shows that the proposed algorithm is very suitable for the compression of bearing monitoring data. The algorithm has been successfully used in an online WSNs-based bearing monitoring system, in which the TI DSP TMS320F2812 is used to realize the algorithm.
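
    One decomposition level of the integer 5/3 lifting wavelet can be sketched as follows (Python; a textbook LeGall 5/3 formulation with simple boundary handling, not the paper's DSP implementation): a predict step yields the high-pass details and an update step yields the low-pass approximation.

        def lifting_53_forward(x):
            """One decomposition level of the integer LeGall 5/3 wavelet (even-length input)."""
            even, odd = x[0::2], x[1::2]
            n = len(odd)
            # Predict: detail d[i] = odd[i] - floor((even[i] + even[i+1]) / 2)
            d = [odd[i] - ((even[i] + even[min(i + 1, len(even) - 1)]) >> 1) for i in range(n)]
            # Update: approximation s[i] = even[i] + floor((d[i-1] + d[i] + 2) / 4)
            s = [even[i] + ((d[max(i - 1, 0)] + d[i] + 2) >> 2) for i in range(len(even))]
            return s, d

        signal = [10, 12, 14, 13, 11, 9, 8, 8]
        approx, detail = lifting_53_forward(signal)
        print(approx, detail)    # low-pass trend and small high-pass details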

  17. Translating a child care based intervention for online delivery: development and randomized pilot study of Go NAPSACC

    Directory of Open Access Journals (Sweden)

    Dianne S. Ward

    2017-11-01

    Full Text Available Abstract Background As part of childhood obesity prevention initiatives, Early Care and Education (ECE) programs are being asked to implement evidence-based strategies that promote healthier eating and physical activity habits in children. Translation of evidence-based interventions into real world ECE settings often encounters barriers, including time constraints, lack of easy-to-use tools, and inflexible intervention content. This study describes translation of an evidence-based program (NAPSACC) into an online format (Go NAPSACC) and a randomized pilot study evaluating its impact on centers’ nutrition environments. Methods Go NAPSACC retained core elements and implementation strategies from the original program, but translated tools into an online, self-directed format using extensive input from the ECE community. For the pilot, local technical assistance (TA) agencies facilitated recruitment of 33 centers, which were randomized to immediate (intervention, n = 18) or delayed (control, n = 15) access groups. Center directors were oriented on Go NAPSACC tools by their local TA providers (after being trained by researchers), after which they implemented Go NAPSACC independently with minimal TA support. The Environment and Policy Assessment and Observation instrument (self-report), collected prior to and following the 4-month intervention period, was used to assess impact on centers’ nutrition environments. Process data were also collected from a sample of directors and all TA providers to evaluate program usability and implementation. Results Demographic characteristics of intervention and control centers were similar. Two centers did not complete follow-up measures, leaving 17 intervention and 14 control centers in the analytic sample. Between baseline and follow-up, intervention centers improved overall nutrition scores (Cohen’s d effect size = 0.73, p = 0.15), as well as scores for foods (effect size = 0.74, p = 0

  18. Algorithms for the ROD DSP of the ATLAS Hadronic Tile Calorimeter

    International Nuclear Information System (INIS)

    Salvachua, B; Abdallah, J; Castelo, J; Castillo, V; Cuenca, C; Ferrer, A; Fullana, E; Gonzalez, V; Higon, E; Munar, A; Poveda, J; Ruiz-Martinez, A; Sanchis, E; Solans, C; Soret, J; Torres, J; Valero, A; Valls, J A

    2007-01-01

    In this paper we present the performance of two algorithms currently running in the Tile Calorimeter Read-Out Driver boards for the commissioning of ATLAS. The first algorithm presented is the so-called Optimal Filtering. It reconstructs the energy deposited in the Tile Calorimeter and the arrival time of the data. The second algorithm is the MTag, which tags low transverse momentum muons that may escape the ATLAS muon spectrometer first level trigger. Comparisons between online (inside the Read-Out Drivers) and offline implementations are done, with an agreement around 99% for the reconstruction of the amplitude using the Optimal Filtering algorithm and a coincidence of 93% between the offline and online tagged muons for the MTag algorithm. The processing time is measured for both algorithms running together, with a resulting time of 59.2 μs which, although above the 10 μs of the first level trigger, fulfills the requirements of the commissioning trigger ( ∼ 1 Hz). We expect further optimizations of the algorithms which will reduce their processing time below 10 μs
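
    A minimal sketch of the amplitude-reconstruction idea behind Optimal Filtering (Python/NumPy; reduced here to an unweighted least-squares fit of the samples to an assumed pulse shape plus pedestal, ignoring the noise autocorrelation that the real weights account for): the energy is obtained as a fixed linear combination of the digitized samples, which is what makes the method fast enough to run online.

        import numpy as np

        # Assumed normalized pulse shape at the 7 sampling times (peak = 1)
        g = np.array([0.0, 0.12, 0.56, 1.0, 0.72, 0.31, 0.08])

        # Linear model: sample_i = A * g_i + pedestal; least squares gives A as a
        # fixed linear combination of the samples (the "weights" applied online).
        G = np.column_stack([g, np.ones_like(g)])
        coeffs = np.linalg.pinv(G)            # rows: weights for amplitude and pedestal
        amp_weights = coeffs[0]

        samples = 50.0 + 230.0 * g + np.random.normal(0, 1.5, size=7)   # pedestal 50, amplitude 230
        amplitude = amp_weights @ samples
        print(round(float(amplitude), 1))     # close to 230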

  19. Online Tools for Uncovering Data Quality (DQ) Issues in Satellite-Based Global Precipitation Products

    Science.gov (United States)

    Liu, Zhong; Heo, Gil

    2015-01-01

    Data quality (DQ) has many attributes or facets (e.g., errors, biases, systematic differences, uncertainties, benchmarks, false trends, false alarm ratio). DQ sources can be complicated (measurements, environmental conditions, surface types, algorithms, etc.) and difficult to identify, especially for multi-sensor and multi-satellite products with bias correction (TMPA, IMERG, etc.). Open questions include how to obtain DQ information quickly and easily, especially quantified information for a region of interest, beyond existing parameters (random error), the literature and do-it-yourself analyses, and how to apply that knowledge in research and applications. Here, we focus on online systems for integration of products and parameters, visualization and analysis, as well as investigation and extraction of DQ information.

  20. Magnet sorting algorithms

    International Nuclear Information System (INIS)

    Dinev, D.

    1996-01-01

    Several new algorithms for sorting of dipole and/or quadrupole magnets in synchrotrons and storage rings are described. The algorithms make use of a combinatorial approach to the problem and belong to the class of random search algorithms. They use an appropriate metrization of the state space. The phase-space distortion (smear) is used as a goal function. Computational experiments for the case of the JINR-Dubna superconducting heavy ion synchrotron NUCLOTRON have shown a significant reduction of the phase-space distortion after the magnet sorting. (orig.)

  1. Adaptive control of nonlinear system using online error minimum neural networks.

    Science.gov (United States)

    Jia, Chao; Li, Xiaoli; Wang, Kang; Ding, Dawei

    2016-11-01

    In this paper, a new learning algorithm named OEM-ELM (Online Error Minimized ELM) is proposed, based on the ELM (Extreme Learning Machine) neural network algorithm and the growing of its main structure. The core ideas of the OEM-ELM algorithm are online learning, evaluation of network performance, and increasing the number of hidden nodes. It combines the advantages of OS-ELM and EM-ELM, which improves identification capability and avoids network redundancy. An adaptive controller based on the proposed OEM-ELM algorithm is set up, which has a stronger adaptive capability to changes in the environment. The adaptive control of the chemical process Continuous Stirred Tank Reactor (CSTR) is also given as an application. The simulation results show that, with respect to the traditional ELM algorithm, the proposed algorithm can avoid network redundancy and greatly improve control performance. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
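
    A minimal sketch of the underlying ELM building block (Python/NumPy; this is the basic batch ELM with a fixed number of hidden nodes, not the online error-minimized variant): the hidden layer is random and only the output weights are fitted by least squares.

        import numpy as np

        def elm_train(X, y, n_hidden=20, rng=np.random.default_rng(0)):
            """Basic ELM: random input weights/biases, sigmoid hidden layer,
            output weights from a least-squares fit."""
            W = rng.normal(size=(X.shape[1], n_hidden))
            b = rng.normal(size=n_hidden)
            H = 1.0 / (1.0 + np.exp(-(X @ W + b)))          # hidden-layer activations
            beta, *_ = np.linalg.lstsq(H, y, rcond=None)    # output weights
            return W, b, beta

        def elm_predict(X, W, b, beta):
            H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
            return H @ beta

        X = np.linspace(-3, 3, 200).reshape(-1, 1)
        y = np.sin(X).ravel()
        W, b, beta = elm_train(X, y, n_hidden=25)
        print(round(float(np.mean((elm_predict(X, W, b, beta) - y) ** 2)), 4))   # small training MSE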

  2. Evaluation of a transdiagnostic psychodynamic online intervention to support return to work: A randomized controlled trial.

    Directory of Open Access Journals (Sweden)

    Rüdiger Zwerenz

    Full Text Available Given their flexibility, online interventions may be useful as an outpatient treatment option to support vocational reintegration after inpatient rehabilitation. To that purpose we devised a transdiagnostic psychodynamic online intervention to facilitate return to work, focusing on interpersonal conflicts at the workplace often responsible for work-related stress. In a randomized controlled trial, we included employed patients from cardiologic, psychosomatic and orthopedic rehabilitation with work-related stress or need for support at intake to inpatient rehabilitation, after they had given written consent to take part in the study. Following discharge, maladaptive interpersonal interactions at the workplace were identified via weekly blogs and processed by written therapeutic comments over 12 weeks in the intervention group (IG). The control group (CG) received an augmented treatment-as-usual condition. The main outcome, subjective prognosis of gainful employment (SPE), and secondary outcomes (psychological complaints) were assessed by means of online questionnaires before, at the end of aftercare (3 months) and at follow-up (12 months). We used ITT analyses controlling for baseline scores and medical group. N = 319 patients were enrolled into the IG and N = 345 into the CG. 77% of the IG logged in to the webpage (CG 74%) and 65% of the IG wrote blogs. Compared to the CG, the IG reported a significantly more positive SPE at follow-up. Measures of depression, anxiety and psychosocial stressors decreased from baseline to follow-up, whereas the corresponding scores increased in the CG. Correspondingly, somatization and psychological quality of life improved in the IG. Psychodynamic online aftercare was effective in enhancing the subjective prognosis of future employment and improved psychological complaints across a variety of chronic physical and psychological conditions, albeit with small effect sizes.

  3. A randomized trial of teen online problem solving: efficacy in improving caregiver outcomes after brain injury.

    Science.gov (United States)

    Wade, Shari L; Walz, Nicolay C; Carey, JoAnne; McMullen, Kendra M; Cass, Jennifer; Mark, Erin; Yeates, Keith Owen

    2012-11-01

    To examine the results of a randomized clinical trial (RCT) of Teen Online Problem Solving (TOPS), an online problem solving therapy model, in increasing problem-solving skills and decreasing depressive symptoms and global distress for caregivers of adolescents with traumatic brain injury (TBI). Families of adolescents aged 11-18 who sustained a moderate to severe TBI between 3 and 19 months earlier were recruited from hospital trauma registries. Participants were assigned to receive a web-based, problem-solving intervention (TOPS, n = 20), or access to online resources pertaining to TBI (Internet Resource Comparison; IRC; n = 21). Parent report of problem solving skills, depressive symptoms, global distress, utilization, and satisfaction were assessed pre- and posttreatment. Groups were compared on follow-up scores after controlling for pretreatment levels. Family income was examined as a potential moderator of treatment efficacy. Improvement in problem solving was examined as a mediator of reductions in depression and distress. Forty-one participants provided consent and completed baseline assessments, with follow-up assessments completed on 35 participants (16 TOPS and 19 IRC). Parents in both groups reported a high level of satisfaction with both interventions. Improvements in problem solving skills and depression were moderated by family income, with caregivers of lower income in TOPS reporting greater improvements. Increases in problem solving partially mediated reductions in global distress. Findings suggest that TOPS may be effective in improving problem solving skills and reducing depressive symptoms for certain subsets of caregivers in families of adolescents with TBI.

  4. GeneYenta: a phenotype-based rare disease case matching tool based on online dating algorithms for the acceleration of exome interpretation.

    Science.gov (United States)

    Gottlieb, Michael M; Arenillas, David J; Maithripala, Savanie; Maurer, Zachary D; Tarailo Graovac, Maja; Armstrong, Linlea; Patel, Millan; van Karnebeek, Clara; Wasserman, Wyeth W

    2015-04-01

    Advances in next-generation sequencing (NGS) technologies have helped reveal causal variants for genetic diseases. In order to establish causality, it is often necessary to compare genomes of unrelated individuals with similar disease phenotypes to identify common disrupted genes. When working with cases of rare genetic disorders, finding similar individuals can be extremely difficult. We introduce a web tool, GeneYenta, which facilitates the matchmaking process, allowing clinicians to coordinate detailed comparisons for phenotypically similar cases. Importantly, the system is focused on phenotype annotation, with explicit limitations on highly confidential data that create barriers to participation. The procedure for matching of patient phenotypes, inspired by online dating services, uses an ontology-based semantic case matching algorithm with attribute weighting. We evaluate the capacity of the system using a curated reference data set and 19 clinician entered cases comparing four matching algorithms. We find that the inclusion of clinician weights can augment phenotype matching. © 2015 WILEY PERIODICALS, INC.

  5. Classification of Phishing Email Using Random Forest Machine Learning Technique

    Directory of Open Access Journals (Sweden)

    Andronicus A. Akinyelu

    2014-01-01

    Full Text Available Phishing is one of the major challenges faced by the world of e-commerce today. Due to phishing attacks, billions of dollars have been lost by many companies and individuals. In 2012, an online report put the loss due to phishing attacks at about $1.5 billion. This global impact of phishing attacks will continue to increase and thus requires more efficient phishing detection techniques to curb the menace. This paper investigates and reports the use of the random forest machine learning algorithm in the classification of phishing attacks, with the major objective of developing an improved phishing email classifier with better prediction accuracy and fewer features. From a dataset consisting of 2000 phishing and ham emails, a set of prominent phishing email features (identified from the literature) were extracted and used by the machine learning algorithm, with a resulting classification accuracy of 99.7% and low false negative (FN) and false positive (FP) rates.

  6. New distributed fusion filtering algorithm based on covariances over sensor networks with random packet dropouts

    Science.gov (United States)

    Caballero-Águila, R.; Hermoso-Carazo, A.; Linares-Pérez, J.

    2017-07-01

    This paper studies the distributed fusion estimation problem from multisensor measured outputs perturbed by correlated noises and uncertainties modelled by random parameter matrices. Each sensor transmits its outputs to a local processor over a packet-erasure channel and, consequently, random losses may occur during transmission. Different white sequences of Bernoulli variables are introduced to model the transmission losses. For the estimation, each lost output is replaced by its estimator based on the information received previously, and only the covariances of the processes involved are used, without requiring the signal evolution model. First, a recursive algorithm for the local least-squares filters is derived by using an innovation approach. Then, the cross-correlation matrices between any two local filters is obtained. Finally, the distributed fusion filter weighted by matrices is obtained from the local filters by applying the least-squares criterion. The performance of the estimators and the influence of both sensor uncertainties and transmission losses on the estimation accuracy are analysed in a numerical example.

  7. Composite Differential Search Algorithm

    Directory of Open Access Journals (Sweden)

    Bo Liu

    2014-01-01

    Full Text Available Differential search algorithm (DS) is a relatively new evolutionary algorithm inspired by the Brownian-like random-walk movement which is used by an organism to migrate. It has been verified to be more effective than ABC, JDE, JADE, SADE, EPSDE, GSA, PSO2011, and CMA-ES. In this paper, we propose four improved solution search algorithms, namely “DS/rand/1,” “DS/rand/2,” “DS/current to rand/1,” and “DS/current to rand/2,” to search the new space and enhance the convergence rate for the global optimization problem. In order to verify the performance of the different solution search methods, 23 benchmark functions are employed. Experimental results indicate that the proposed algorithms perform better than, or at least comparably to, the original algorithm when considering the quality of the solution obtained. However, these schemes still cannot achieve the best solution for all functions. In order to further enhance the convergence rate and the diversity of the algorithm, a composite differential search algorithm (CDS) is proposed in this paper. This new algorithm combines three of the newly proposed search schemes, “DS/rand/1,” “DS/rand/2,” and “DS/current to rand/1,” with three control parameters, using a random method to generate the offspring. Experimental results show that CDS has a faster convergence rate and better search ability based on the 23 benchmark functions.

  8. Use of online recruitment strategies in a randomized trial of cancer survivors.

    Science.gov (United States)

    Juraschek, Stephen P; Plante, Timothy B; Charleston, Jeanne; Miller, Edgar R; Yeh, Hsin-Chieh; Appel, Lawrence J; Jerome, Gerald J; Gayles, Debra; Durkin, Nowella; White, Karen; Dalcin, Arlene; Hermosilla, Manuel

    2018-04-01

    Despite widespread Internet adoption, online advertising remains an underutilized tool to recruit participants into clinical trials. Whether online advertising is a cost-effective method to enroll participants compared to other traditional forms of recruitment is not known. Recruitment for the Survivorship Promotion In Reducing IGF-1 Trial, a community-based study of cancer survivors, was conducted from June 2015 through December 2016 via in-person community fairs, advertisements in periodicals, and direct postal mailings. In addition, "Right Column" banner ads were purchased from Facebook to direct participants to the Survivorship Promotion In Reducing IGF-1 Trial website. Response rates, costs of traditional and online advertisements, and demographic data were determined and compared across different online and traditional recruitment strategies. Micro-trials optimizing features of online advertisements were also explored. Of the 406 respondents to our overall outreach efforts, 6% (24 of 406) were referred from online advertising. Facebook advertisements were shown over 3 million times (impressions) to 124,476 people, which resulted in 4401 clicks on our advertisement. Of these, 24 people ultimately contacted study staff, 6 underwent prescreening, and 4 enrolled in the study. The cost of online advertising per enrollee was $794 when targeting a general population versus $1426 when accounting for strategies that specifically targeted African Americans or men. By contrast, community fairs, direct mail, or periodicals cost $917, $799, or $436 per enrollee, respectively. Utilization of micro-trials to assess online ads identified subtleties (e.g. use of an advertisement title) that substantially impacted viewer interest in our trial. Online advertisements effectively directed a relevant population to our website, which resulted in new enrollees in the Survivorship Promotion In Reducing IGF-1 Trial at a cost comparable to traditional methods. Costs were

  9. SU-C-207B-05: Tissue Segmentation of Computed Tomography Images Using a Random Forest Algorithm: A Feasibility Study

    International Nuclear Information System (INIS)

    Polan, D; Brady, S; Kaufman, R

    2016-01-01

    Purpose: Develop an automated Random Forest algorithm for tissue segmentation of CT examinations. Methods: Seven materials were classified for segmentation: background, lung/internal gas, fat, muscle, solid organ parenchyma, blood/contrast, and bone, using Matlab and the Trainable Weka Segmentation (TWS) plugin of FIJI. The following classifier feature filters of TWS were investigated: minimum, maximum, mean, and variance, each evaluated over a pixel radius of 2^n (n = 0–4). Noise-reduction and edge-preserving filters (Gaussian, bilateral, Kuwahara, and anisotropic diffusion) were also evaluated. The algorithm used 200 trees with 2 features per node. A training data set was established using an anonymized patient’s (male, 20 yr, 72 kg) chest-abdomen-pelvis CT examination. To establish segmentation ground truth, the training data were manually segmented using Eclipse planning software, and an intra-observer reproducibility test was conducted. Six additional patient data sets were segmented based on classifier data generated from the training data. Accuracy of segmentation was determined by calculating the Dice similarity coefficient (DSC) between manually and auto-segmented images. Results: The optimized auto-segmentation algorithm resulted in 16 features, calculated using maximum, mean, variance, and Gaussian blur filters with kernel radii of 1, 2, and 4 pixels, in addition to the original CT number and a Kuwahara filter (linear kernel of 19 pixels). Ground truth had a DSC of 0.94 (range: 0.90–0.99) for adult and 0.92 (range: 0.85–0.99) for pediatric data sets across all seven segmentation classes. The automated algorithm produced segmentation with an average DSC of 0.85 ± 0.04 (range: 0.81–1.00) for the adult patients, and 0.86 ± 0.03 (range: 0.80–0.99) for the pediatric patients. Conclusion: The TWS Random Forest auto-segmentation algorithm was optimized for the CT environment, and able to segment seven material classes over a range of body habitus and CT
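
    A minimal pixel-classification sketch of the same idea, written in Python with scipy and scikit-learn rather than the Matlab/FIJI TWS toolchain used in the study; the synthetic two-material "slice", the filter radii and the Dice computation are illustrative only.

        import numpy as np
        from scipy import ndimage as ndi
        from sklearn.ensemble import RandomForestClassifier

        def feature_stack(img, radii=(1, 2, 4)):
            """Per-pixel features: raw value plus mean, variance and Gaussian blur at several radii."""
            feats = [img]
            for r in radii:
                size = 2 * r + 1
                mean = ndi.uniform_filter(img, size=size)
                var = ndi.uniform_filter(img ** 2, size=size) - mean ** 2
                feats += [mean, var, ndi.gaussian_filter(img, sigma=r)]
            return np.stack(feats, axis=-1).reshape(-1, 1 + 3 * len(radii))

        # Toy "CT slice": two materials with different mean intensity plus noise.
        rng = np.random.default_rng(0)
        truth = np.zeros((64, 64), dtype=int)
        truth[:, 32:] = 1
        img = truth * 300.0 - 100.0 + rng.normal(0, 40, truth.shape)

        X, y = feature_stack(img), truth.ravel()
        clf = RandomForestClassifier(n_estimators=200, max_features=2, random_state=0).fit(X, y)
        pred = clf.predict(X).reshape(truth.shape)

        # Dice similarity coefficient for the "material 1" class
        inter = np.logical_and(pred == 1, truth == 1).sum()
        dice = 2 * inter / ((pred == 1).sum() + (truth == 1).sum())
        print("DSC:", round(dice, 3))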

  10. SU-C-207B-05: Tissue Segmentation of Computed Tomography Images Using a Random Forest Algorithm: A Feasibility Study

    Energy Technology Data Exchange (ETDEWEB)

    Polan, D [University of Michigan, Ann Arbor, MI (United States); Brady, S; Kaufman, R [St. Jude Children’s Research Hospital, Memphis, TN (United States)

    2016-06-15

    Purpose: Develop an automated Random Forest algorithm for tissue segmentation of CT examinations. Methods: Seven materials were classified for segmentation: background, lung/internal gas, fat, muscle, solid organ parenchyma, blood/contrast, and bone, using Matlab and the Trainable Weka Segmentation (TWS) plugin of FIJI. The following classifier feature filters of TWS were investigated: minimum, maximum, mean, and variance, each evaluated over a pixel radius of 2^n (n = 0–4). Noise-reduction and edge-preserving filters (Gaussian, bilateral, Kuwahara, and anisotropic diffusion) were also evaluated. The algorithm used 200 trees with 2 features per node. A training data set was established using an anonymized patient’s (male, 20 yr, 72 kg) chest-abdomen-pelvis CT examination. To establish segmentation ground truth, the training data were manually segmented using Eclipse planning software, and an intra-observer reproducibility test was conducted. Six additional patient data sets were segmented based on classifier data generated from the training data. Accuracy of segmentation was determined by calculating the Dice similarity coefficient (DSC) between manually and auto-segmented images. Results: The optimized auto-segmentation algorithm resulted in 16 features, calculated using maximum, mean, variance, and Gaussian blur filters with kernel radii of 1, 2, and 4 pixels, in addition to the original CT number and a Kuwahara filter (linear kernel of 19 pixels). Ground truth had a DSC of 0.94 (range: 0.90–0.99) for adult and 0.92 (range: 0.85–0.99) for pediatric data sets across all seven segmentation classes. The automated algorithm produced segmentation with an average DSC of 0.85 ± 0.04 (range: 0.81–1.00) for the adult patients, and 0.86 ± 0.03 (range: 0.80–0.99) for the pediatric patients. Conclusion: The TWS Random Forest auto-segmentation algorithm was optimized for the CT environment, and able to segment seven material classes over a range of body habitus and CT

  11. Recursive prediction error methods for online estimation in nonlinear state-space models

    Directory of Open Access Journals (Sweden)

    Dag Ljungquist

    1994-04-01

    Full Text Available Several recursive algorithms for online, combined state and parameter estimation in nonlinear state-space models are discussed in this paper. Well-known algorithms such as the extended Kalman filter and alternative formulations of the recursive prediction error method are included, as well as a new method based on a line-search strategy. A comparison of the algorithms illustrates that they are very similar although the differences can be important for the online tracking capabilities and robustness. Simulation experiments on a simple nonlinear process show that the performance under certain conditions can be improved by including a line-search strategy.

  12. Online luminosity measurement at BES III

    International Nuclear Information System (INIS)

    Song Wenbo; Fu Chengdong; Mo Xiaohu; He Kanglin; Zhu Kejun; Li Fei; Zhao Shujun

    2010-01-01

    As a crucial parameter of both the accelerator and the detector, the realization of an online luminosity measurement is of great importance. Several methods of luminosity measurement are recapitulated, and the emphasis is laid on the algorithm using e+e- and γγ final states. Taking into account the status at the beginning of the joint commissioning of the detector and accelerator, the information from the end-cap electromagnetic calorimeter is used to select good events. With the help of the online event filter, the luminosity is calculated and online monitoring of the hadronic cross section is realized. The preliminary results indicate that the online luminosity measurement is stable and that its role in machine tuning and in monitoring the overall running status is indispensable. (authors)

  13. Secure image encryption algorithm design using a novel chaos based S-Box

    International Nuclear Information System (INIS)

    Çavuşoğlu, Ünal; Kaçar, Sezgin; Pehlivan, Ihsan; Zengin, Ahmet

    2017-01-01

    Highlights: • A new chaotic system is developed for creating the S-Box and the image encryption algorithm. • A chaos-based random number generator is designed with the help of the new chaotic system, and NIST tests are run on the generated random numbers to verify randomness. • A new S-Box design algorithm is developed to create the chaos-based S-Box to be utilized in the encryption algorithm, and performance tests are made. • The newly developed S-Box-based image encryption algorithm is introduced and an image encryption application is carried out. • To show the quality and strength of the encryption process, security analyses are performed and compared with the AES and other chaos-based algorithms. - Abstract: In this study, an encryption algorithm that uses a chaos-based S-Box is developed for secure and fast image encryption. First of all, a new chaotic system is developed for creating the S-Box and the image encryption algorithm. A chaos-based random number generator is designed with the help of the new chaotic system. Then, NIST tests are run on the generated random numbers to verify randomness. A new S-Box design algorithm is developed to create the chaos-based S-Box to be utilized in the encryption algorithm, and performance tests are made. As the next step, the newly developed S-Box-based image encryption algorithm is introduced in detail. Finally, an image encryption application is carried out. To show the quality and strength of the encryption process, security analyses are performed. The proposed algorithm is compared with the AES and other chaos-based algorithms. According to the test results, the proposed image encryption algorithm is secure and fast for image encryption applications.
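
    A minimal sketch of the general S-Box construction idea: iterate a chaotic map, quantize its trajectory to bytes, and keep the first occurrence of each byte value to obtain a permutation of 0–255. The logistic map below is only a stand-in for the new chaotic system developed in the paper, and no NIST testing or encryption step is shown.

        def chaos_sbox(x0=0.613, r=4.0, burn_in=1000, max_iter=200_000):
            """Build an 8-bit S-Box from a chaotic trajectory (illustrative only)."""
            x, sbox, seen = x0, [], set()
            for i in range(max_iter):
                x = r * x * (1.0 - x)              # logistic map iteration (stand-in chaotic system)
                if i < burn_in:
                    continue                       # discard the transient
                b = min(int(x * 256), 255)         # quantize the state to one byte
                if b not in seen:
                    seen.add(b)
                    sbox.append(b)
                if len(sbox) == 256:
                    break
            return sbox

        sbox = chaos_sbox()
        print(len(sbox), sbox[:8])                 # expect 256 distinct byte values, i.e. a full permutation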

  14. The Copenhagen Triage Algorithm

    DEFF Research Database (Denmark)

    Hasselbalch, Rasmus Bo; Plesner, Louis Lind; Pries-Heje, Mia

    2016-01-01

    is non-inferior to an existing triage model in a prospective randomized trial. METHODS: The Copenhagen Triage Algorithm (CTA) study is a prospective two-center, cluster-randomized, cross-over, non-inferiority trial comparing CTA to the Danish Emergency Process Triage (DEPT). We include patients ≥16 years...

  15. Adaptive calibration method with on-line growing complexity

    Directory of Open Access Journals (Sweden)

    Šika Z.

    2011-12-01

    Full Text Available This paper describes a modified variant of a kinematic calibration algorithm. First, a brief review of the calibration algorithm and its simple modification is given. Because the described calibration modification uses some ideas from the Lolimot algorithm, that algorithm is also described and explained. The main topic of this paper is the synthesis of a Lolimot-based calibration that leads to an adaptive algorithm with on-line growing complexity. The paper contains a comparison of results on simple examples and a discussion. A note about future research topics is also included.

  16. Online Solution of Two-Player Zero-Sum Games for Continuous-Time Nonlinear Systems With Completely Unknown Dynamics.

    Science.gov (United States)

    Fu, Yue; Chai, Tianyou

    2016-12-01

    Regarding two-player zero-sum games of continuous-time nonlinear systems with completely unknown dynamics, this paper presents an online adaptive algorithm for learning the Nash equilibrium solution, i.e., the optimal policy pair. First, for known systems, the simultaneous policy updating algorithm (SPUA) is reviewed. A new analytical method to prove the convergence is presented. Then, based on the SPUA, without using a priori knowledge of any system dynamics, an online algorithm is proposed to simultaneously learn in real time either the minimal nonnegative solution of the Hamilton-Jacobi-Isaacs (HJI) equation or the generalized algebraic Riccati equation for linear systems as a special case, along with the optimal policy pair. The approximate solution to the HJI equation and the admissible policy pair is reexpressed by the approximation theorem. The unknown constants or weights of each are identified simultaneously by resorting to the recursive least square method. The convergence of the online algorithm to the optimal solutions is provided. A practical online algorithm is also developed. Simulation results illustrate the effectiveness of the proposed method.
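
    The weight identification step relies on standard recursive least squares. A minimal, generic RLS sketch is given below; the regressor and target are synthetic placeholders rather than the HJI value-function approximation of the paper.

        import numpy as np

        def rls_update(w, P, phi, target, lam=1.0):
            """One RLS step: update weights w and covariance P given regressor phi and a scalar target."""
            phi = phi.reshape(-1, 1)
            k = P @ phi / (lam + phi.T @ P @ phi)        # gain vector
            err = target - (phi.T @ w).item()            # prediction error
            w = w + k * err
            P = (P - k @ phi.T @ P) / lam
            return w, P

        rng = np.random.default_rng(0)
        true_w = np.array([[1.5], [-0.7], [0.3]])
        w, P = np.zeros((3, 1)), 1e3 * np.eye(3)

        for _ in range(500):
            phi = rng.normal(size=3)                     # regressor (e.g. basis-function values)
            target = (phi @ true_w).item() + rng.normal(scale=0.01)
            w, P = rls_update(w, P, phi, target)

        print(np.round(w.ravel(), 3))                    # converges near the true weights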

  17. Online interventions for problem gamblers with and without co-occurring mental health symptoms: Protocol for a randomized controlled trial

    Directory of Open Access Journals (Sweden)

    John A. Cunningham

    2016-07-01

    Full Text Available Abstract Background Comorbidity between problem gambling and depression or anxiety is common. Further, the treatment needs of people with co-occurring gambling and mental health symptoms may be different from those of problem gamblers who do not have a co-occurring mental health concern. The current randomized controlled trial (RCT) will evaluate whether there is a benefit to providing access to mental health Internet interventions (G + MH intervention) in addition to an Internet intervention for problem gambling (G-only intervention) in participants with gambling problems who do or do not have co-occurring mental health symptoms. Methods Potential participants will be screened using an online survey to identify participants meeting criteria for problem gambling. As part of the baseline screening process, measures of current depression and anxiety will be assessed. Eligible participants agreeing to take part in the study (N = 280) will be randomized to one of two versions of an online intervention for gamblers – an intervention that just targets gambling issues (G-only) versus a website that contains interventions for depression and anxiety in addition to an intervention for gamblers (G + MH). It is predicted that problem gamblers who do not have co-occurring mental health symptoms will display no significant difference between intervention conditions at a six-month follow-up. However, for those with co-occurring mental health symptoms, it is predicted that participants receiving access to the G + MH website will display significantly reduced gambling outcomes at six-month follow-up as compared to those provided with the G-only website. Discussion The trial will produce information on the best means of providing online help to gamblers with and without co-occurring mental health symptoms. Trial registration ClinicalTrials.gov NCT02800096; Registration date: June 14, 2016.

  18. Distribution Bottlenecks in Classification Algorithms

    NARCIS (Netherlands)

    Zwartjes, G.J.; Havinga, Paul J.M.; Smit, Gerardus Johannes Maria; Hurink, Johann L.

    2012-01-01

    The abundance of data available on Wireless Sensor Networks makes online processing necessary. In industrial applications, for example, the correct operation of equipment can be the point of interest while raw sampled data is of minor importance. Classification algorithms can be used to make state

  19. A randomized trial of teen online problem solving for improving executive function deficits following pediatric traumatic brain injury.

    Science.gov (United States)

    Wade, Shari L; Walz, Nicolay C; Carey, JoAnne; Williams, Kendra M; Cass, Jennifer; Herren, Luke; Mark, Erin; Yeates, Keith Owen

    2010-01-01

    To examine the efficacy of teen online problem solving (TOPS) in improving executive function (EF) deficits following traumatic brain injury (TBI) in adolescence. Families of adolescents (aged 11-18 years) with moderate to severe TBI were recruited from the trauma registry of 2 tertiary-care children's hospitals and then randomly assigned to receive TOPS (n = 20), a cognitive-behavioral, skill-building intervention, or access to online resources regarding TBI (Internet resource comparison; n = 21). Parent and teen reports of EF were assessed at baseline and a posttreatment follow-up (mean = 7.88 months later). Improvements in self-reported EF skills were moderated by TBI severity, with teens with severe TBI in the TOPS treatment reporting significantly greater improvements than did those with severe TBI in the Internet resource comparison. The treatment groups did not differ on parent ratings of EF at the follow up. Findings suggest that TOPS may be effective in improving EF skills among teens with severe TBI.

  20. Online cluster-finding algorithms for the PANDA electromagnetic calorimeter

    NARCIS (Netherlands)

    Tiemens, Marcel

    2017-01-01

    The PANDA experiment has been set up to study rare processes such as the formation of exotic particles. To be able to process the large volumes of data, the subsystems preprocess the data. One example is the algorithm for online cluster finding in the data of the

  1. Deciding the On-line Chromatic Number of a Graph with Pre-coloring is PSPACE-complete

    DEFF Research Database (Denmark)

    Kudahl, Christian

    2015-01-01

    In an on-line coloring, the vertices of a graph are revealed one by one. An algorithm assigns a color to each vertex after it is revealed. When a vertex is revealed, it is also revealed which of the previous vertices it is adjacent to. The on-line chromatic number of a graph, G, is the smallest number of colors an algorithm will need when on-line-coloring G. The algorithm may know G, but not the order in which the vertices are revealed. The problem of determining if the on-line chromatic number of a graph is less than or equal to k, given a pre-coloring, is shown to be PSPACE-complete.

  2. A Randomized Experiment Testing the Efficacy of a Scheduling Nudge in a Massive Open Online Course (MOOC

    Directory of Open Access Journals (Sweden)

    Rachel Baker

    2016-10-01

    Full Text Available An increasing number of students are taking classes offered online through open-access platforms; however, the vast majority of students who start these classes do not finish. The incongruence of student intentions and subsequent engagement suggests that self-control is a major contributor to this stark lack of persistence. This study presents the results of a large-scale field experiment (N = 18,043 that examines the effects of a self-directed scheduling nudge designed to promote student persistence in a massive open online course. We find that random assignment to treatment had no effects on near-term engagement and weakly significant negative effects on longer-term course engagement, persistence, and performance. Interestingly, these negative effects are highly concentrated in two groups of students: those who registered close to the first day of class and those with .edu e-mail addresses. We consider several explanations for these findings and conclude that theoretically motivated interventions may interact with the diverse motivations of individual students in possibly unintended ways.

  3. Interactive animation of fault-tolerant parallel algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Apgar, S.W.

    1992-02-01

    Animation of algorithms makes understanding them intuitively easier. This paper describes the software tool Raft (Robust Animator of Fault Tolerant Algorithms). The Raft system allows the user to animate a number of parallel algorithms which achieve fault tolerant execution. In particular, we use it to illustrate the key Write-All problem. It has an extensive user-interface which allows a choice of the number of processors, the number of elements in the Write-All array, and the adversary to control the processor failures. The novelty of the system is that the interface allows the user to create new on-line adversaries as the algorithm executes.

  4. Primary Repair of Moderate Severity Rhegmatogenous Retinal Detachment: A Critical Decision-Making Algorithm.

    Science.gov (United States)

    Velez-Montoya, Raul; Jacobo-Oceguera, Paola; Flores-Preciado, Javier; Dalma-Weiszhausz, Jose; Guerrero-Naranjo, Jose; Salcedo-Villanueva, Guillermo; Garcia-Aguirre, Gerardo; Fromow-Guerra, Jans; Morales-Canton, Virgilio

    2016-01-01

    We reviewed all the available data regarding the current management of non-complex rhegmatogenous retinal detachment and propose a new decision-making algorithm intended to improve the single-surgery success rate for mid-severity rhegmatogenous retinal detachment. An online review of the Pubmed database was performed. We searched for all available manuscripts about the anatomical and functional outcomes after the surgical management, by either scleral buckle or primary pars plana vitrectomy, of retinal detachment. The search was limited to articles published from January 1995 to December 2015. All articles obtained from the search were carefully screened and their references were manually reviewed for additional relevant data. Our search specifically focused on preoperative clinical data that were associated with the surgical outcomes. After categorizing the available data according to their level of evidence, with randomized controlled clinical trials as the highest possible level of evidence, followed by retrospective studies, and retrospective case series as the lowest level of evidence, we proceeded to design a logical decision-making algorithm, enhanced by our experience as retinal surgeons. A total of 7 randomized controlled clinical trials, 19 retrospective studies, and 9 case series were considered. Additional articles were also included in order to support the observations further. Rhegmatogenous retinal detachment is a potentially blinding disorder. Its surgical management seems to depend more on the surgeon's preference than on solid scientific data or a good clinical history and examination. The algorithms proposed herein strive to offer a more rational approach to improve both anatomical and functional outcomes after the first surgery.

  5. Adaptive sensor fusion using genetic algorithms

    International Nuclear Information System (INIS)

    Fitzgerald, D.S.; Adams, D.G.

    1994-01-01

    Past attempts at sensor fusion have used some form of Boolean logic to combine the sensor information. As an alternative, an adaptive "fuzzy" sensor fusion technique is described in this paper. This technique exploits the robust capabilities of fuzzy logic in the decision process as well as the optimization features of the genetic algorithm. This paper presents a brief background on fuzzy logic and genetic algorithms and how they are used in an online implementation of adaptive sensor fusion.

  6. Online Cable Tester and Rerouter

    Science.gov (United States)

    Lewis, Mark; Medelius, Pedro

    2012-01-01

    Hardware and algorithms have been developed to transfer electrical power and data connectivity safely, efficiently, and automatically from an identified damaged/defective wire in a cable to an alternate wire path. The combination of online cable testing capabilities, along with intelligent signal rerouting algorithms, allows the user to overcome the inherent difficulty of maintaining system integrity and configuration control, while autonomously rerouting signals and functions without introducing new failure modes. The incorporation of this capability will increase the reliability of systems by ensuring system availability during operations.

  7. K-Means Algorithm Performance Analysis With Determining The Value Of Starting Centroid With Random And KD-Tree Method

    Science.gov (United States)

    Sirait, Kamson; Tulus; Budhiarti Nababan, Erna

    2017-12-01

    Clustering methods with high accuracy and time efficiency are necessary for the filtering process. One method that has been widely known and applied in clustering is K-Means clustering. In its application, the choice of the initial cluster centroids greatly affects the results of the K-Means algorithm. This research discusses the results of K-Means clustering with the starting centroids determined by a random method and by a KD-Tree method. On a data set of 1000 student academic records used to classify students at risk of dropping out, random initial centroids give an SSE of 952972 for the quality variable and 232.48 for the GPA variable, whereas KD-Tree initial centroids give an SSE of 504302 for the quality variable and 214.37 for the GPA variable. The smaller SSE values indicate that K-Means clustering with KD-Tree initial centroid selection has better accuracy than K-Means clustering with random initial centroid selection.
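
    A minimal sketch of the comparison idea: run K-Means with purely random starting centroids versus a smarter seeding and compare the resulting SSE (scikit-learn's inertia_). Here k-means++ is only a stand-in for the KD-Tree-based seeding of the paper, and the blobs are synthetic rather than the 1000 student records.

        from sklearn.cluster import KMeans
        from sklearn.datasets import make_blobs

        X, _ = make_blobs(n_samples=1000, centers=4, cluster_std=1.2, random_state=0)

        for init in ("random", "k-means++"):
            km = KMeans(n_clusters=4, init=init, n_init=1, random_state=0).fit(X)
            print(f"{init:10s}  SSE = {km.inertia_:.1f}")   # smaller SSE -> tighter clusters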

  8. On-line Adaptive Radiation Treatment of Prostate Cancer

    National Research Council Canada - National Science Library

    Zhang, Tiezhi

    2008-01-01

    .... The specific aims of this project are to develop the key technical components for online adaptive treatment, which include parallel deformable image registration algorithm, parallel dose calculation...

  9. Model-based online optimisation. Pt. 1: active learning; Modellbasierte Online-Optimierung moderner Verbrennungsmotoren. T. 1: Aktives Lernen

    Energy Technology Data Exchange (ETDEWEB)

    Poland, J.; Knoedler, K.; Zell, A. [Tuebingen Univ. (Germany). Lehrstuhl fuer Rechnerarchitektur; Fleischhauer, T.; Mitterer, A.; Ullmann, S. [BMW Group (Germany)

    2003-05-01

    This two-part article presents the model-based optimisation algorithm "mbminimize", which was developed in a joint project of the University of Tuebingen and the BMW Group for the purpose of optimising internal combustion engines online on the engine test bed. The first part concentrates on the basic algorithmic design, as well as on modelling, experimental design and active learning. The second part will discuss strategies for dealing with limits such as knocking. (orig.)

  10. Challenge Online Time Series Clustering For Demand Response A Theory to Break the ‘Curse of Dimensionality'

    Energy Technology Data Exchange (ETDEWEB)

    Pal, Ranjan [Univ. of Southern California, Los Angeles, CA (United States); Chelmis, Charalampos [Univ. of Southern California, Los Angeles, CA (United States); Aman, Saima [Univ. of Southern California, Los Angeles, CA (United States); Frincu, Marc [Univ. of Southern California, Los Angeles, CA (United States); Prasanna, Viktor [Univ. of Southern California, Los Angeles, CA (United States)

    2015-07-15

    The advent of smart meters and advanced communication infrastructures catalyzes numerous smart grid applications such as dynamic demand response, and paves the way to solve challenging research problems in sustainable energy consumption. The space of possible solutions is restricted primarily by the huge amount of generated data, which requires considerable computational resources and efficient algorithms. To overcome this Big Data challenge, data clustering techniques have been proposed. Current approaches, however, do not scale in the face of the "increasing dimensionality" problem, where a cluster point is represented by an entire customer consumption time series. To overcome this we first rethink the way cluster points are created and designed, and then design an efficient online clustering technique for demand response (DR) in order to analyze high-volume, high-dimensional energy consumption time series data at scale, and on the fly. Our online algorithm is randomized in nature, and provides optimal performance guarantees in a computationally efficient manner. Unlike prior work we (i) study the consumption properties of the whole population simultaneously rather than developing individual models for each customer separately, claiming this to be a 'killer' approach that breaks the "curse of dimensionality" in online time series clustering, and (ii) provide tight performance guarantees in theory to validate our approach. Our insights are driven by the field of sociology, where collective behavior often emerges as the result of individual patterns and lifestyles.
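
    To illustrate the general setting (not the report's randomized algorithm or its guarantees), the following sketch updates cluster centroids one consumption profile at a time, so the full time-series matrix never has to be held in memory; the daily load profiles are synthetic.

        import numpy as np

        def online_kmeans(stream, k, dim, rng):
            """Sequential k-means: assign each arriving profile to its nearest centroid and nudge that centroid."""
            centroids = rng.normal(size=(k, dim))
            counts = np.zeros(k)
            for x in stream:                              # one customer's daily load profile at a time
                j = np.argmin(np.linalg.norm(centroids - x, axis=1))
                counts[j] += 1
                centroids[j] += (x - centroids[j]) / counts[j]   # running-mean update of the winner
            return centroids

        rng = np.random.default_rng(0)
        # Synthetic 24-point daily load profiles drawn around three behavioural patterns.
        patterns = rng.normal(size=(3, 24))
        stream = (patterns[rng.integers(3)] + 0.1 * rng.normal(size=24) for _ in range(5000))
        print(online_kmeans(stream, k=3, dim=24, rng=rng).shape)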

  11. Online wave estimation using vessel motion measurements

    DEFF Research Database (Denmark)

    H. Brodtkorb, Astrid; Nielsen, Ulrik D.; J. Sørensen, Asgeir

    2018-01-01

    In this paper, a computationally efficient online sea state estimation algorithm is proposed for estimation of the on-site sea state. The algorithm finds the wave spectrum estimate from motion measurements in heave, roll and pitch by iteratively solving a set of linear equations. The main vessel parameters and motion transfer functions are required as input. Apart from this the method is signal-based, with no assumptions on the wave spectrum shape, and as a result it is computationally efficient. The algorithm is implemented in a dynamic positioning (DP) control system, and tested through simulations...

  12. Online hyperspectral imaging system for evaluating quality of agricultural products

    Science.gov (United States)

    Mo, Changyeun; Kim, Giyoung; Lim, Jongguk

    2017-06-01

    The consumption of fresh-cut agricultural produce in Korea has been growing. The browning of fresh-cut vegetables that occurs during storage, and foreign substances such as worms and slugs, are some of the main causes of consumers' concerns with respect to safety and hygiene. The purpose of this study is to develop an online system for evaluating the quality of agricultural products using hyperspectral imaging technology. An online evaluation system with a single visible–near-infrared hyperspectral camera covering the range of 400 nm to 1000 nm was designed to assess the quality of both surfaces of agricultural products such as fresh-cut lettuce. Algorithms to detect surface browning were developed for this system. The optimal wavebands for discriminating between browning and sound lettuce, as well as between browning lettuce and the conveyor belt, were investigated using correlation analysis and the one-way analysis of variance method. The imaging algorithms to discriminate the browning lettuces were developed using the optimal wavebands. The ratio image (RI) algorithm of the 533 nm and 697 nm images (RI 533/697) for the abaxial lettuce surface, and the ratio image algorithm (RI 533/697) and subtraction image (SI) algorithm (SI 538-697) for the adaxial lettuce surface, had the highest classification accuracies. The classification accuracy for browning and sound lettuce was 100.0% and above 96.0%, respectively, for both surfaces. The overall results show that the online hyperspectral imaging system could potentially be used to assess the quality of agricultural products.
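
    A minimal sketch of the band-ratio idea behind such detection algorithms: divide the image at 533 nm by the image at 697 nm and threshold the ratio to flag browning pixels. The arrays and the threshold below are synthetic placeholders, not calibrated hyperspectral data from the system.

        import numpy as np

        rng = np.random.default_rng(0)
        shape = (100, 100)
        band_533 = rng.uniform(0.2, 0.4, shape)
        band_697 = rng.uniform(0.3, 0.5, shape)
        band_533[40:60, 40:60] += 0.3                    # make one patch look "browning" in the ratio

        ratio = band_533 / band_697                      # RI 533/697
        browning_mask = ratio > 1.0                      # illustrative threshold
        print("flagged pixels:", int(browning_mask.sum()))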

  13. Golden Sine Algorithm: A Novel Math-Inspired Algorithm

    Directory of Open Access Journals (Sweden)

    TANYILDIZI, E.

    2017-05-01

    Full Text Available In this study, the Golden Sine Algorithm (Gold-SA) is presented as a new metaheuristic method for solving optimization problems. Gold-SA has been developed as a new population-based search algorithm. This math-based algorithm is inspired by the sine, a trigonometric function. In the algorithm, as many random individuals as there are search agents are created, with a uniform distribution in each dimension. The Gold-SA operator searches for a better solution in each iteration by trying to bring the current position closer to the target value. The solution space is narrowed by the golden section, so that only the areas that are expected to give good results are scanned instead of the whole solution space. In the tests performed, Gold-SA gives better results than other population-based methods. In addition, Gold-SA has fewer algorithm-dependent parameters and operators than other metaheuristic methods and provides faster convergence.

  14. Online Manifold Regularization by Dual Ascending Procedure

    OpenAIRE

    Sun, Boliang; Li, Guohui; Jia, Li; Zhang, Hui

    2013-01-01

    We propose a novel online manifold regularization framework based on the notion of duality in constrained optimization. The Fenchel conjugate of hinge functions is the key to transferring manifold regularization from the offline to the online setting. Our algorithms are derived by gradient ascent in the dual function. For practical purposes, we propose two buffering strategies and two sparse approximations to reduce the computational complexity. Detailed experiments verify the utility of our approache...

  15. OnlineMin: A Fast Strongly Competitive Randomized Paging Algorithm

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Moruz, Gabriel; Negoescu, Andrei

    2012-01-01

    approach that both has optimal competitiveness and selects victim pages in subquadratic time. In fact, if k pages fit in internal memory the best previous solution required O(k^2) time per request and O(k) space, whereas our approach also takes O(k) space, but only O(log k) time in the worst case per page...

  16. Generalized-ensemble molecular dynamics and Monte Carlo algorithms beyond the limit of the multicanonical algorithm

    International Nuclear Information System (INIS)

    Okumura, Hisashi

    2010-01-01

    I review two new generalized-ensemble algorithms for molecular dynamics and Monte Carlo simulations of biomolecules, that is, the multibaric–multithermal algorithm and the partial multicanonical algorithm. In the multibaric–multithermal algorithm, two-dimensional random walks not only in the potential-energy space but also in the volume space are realized. One can discuss the temperature dependence and pressure dependence of biomolecules with this algorithm. The partial multicanonical simulation samples a wide range of only an important part of potential energy, so that one can concentrate the effort to determine a multicanonical weight factor only on the important energy terms. This algorithm has higher sampling efficiency than the multicanonical and canonical algorithms. (review)

  17. Theoretic derivation of directed acyclic subgraph algorithm and comparisons with message passing algorithm

    Science.gov (United States)

    Ha, Jeongmok; Jeong, Hong

    2016-07-01

    This study investigates the directed acyclic subgraph (DAS) algorithm, which is used to solve discrete labeling problems much more rapidly than other Markov-random-field-based inference methods but at a competitive accuracy. However, the mechanism by which the DAS algorithm simultaneously achieves competitive accuracy and fast execution speed, has not been elucidated by a theoretical derivation. We analyze the DAS algorithm by comparing it with a message passing algorithm. Graphical models, inference methods, and energy-minimization frameworks are compared between DAS and message passing algorithms. Moreover, the performances of DAS and other message passing methods [sum-product belief propagation (BP), max-product BP, and tree-reweighted message passing] are experimentally compared.

  18. Random broadcast on random geometric graphs

    Energy Technology Data Exchange (ETDEWEB)

    Bradonjic, Milan [Los Alamos National Laboratory; Elsasser, Robert [UNIV OF PADERBORN; Friedrich, Tobias [ICSI/BERKELEY; Sauerwald, Tomas [ICSI/BERKELEY

    2009-01-01

    In this work, we consider the random broadcast time on random geometric graphs (RGGs). The classic random broadcast model, also known as the push algorithm, is defined as: starting with one informed node, in each succeeding round every informed node chooses one of its neighbors uniformly at random and informs it. We consider the random broadcast time on RGGs, when with high probability: (i) the RGG is connected, (ii) there exists a giant component in the RGG. We show that the random broadcast time is bounded by O(√n + diam(component)), where diam(component) is the diameter of the entire graph, or of the giant component, for regimes (i) or (ii), respectively. In other words, for both regimes, we derive the broadcast time to be Θ(diam(G)), which is asymptotically optimal.
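
    A minimal simulation sketch of the push model on a random geometric graph, using networkx; the graph size and radius are illustrative, and the simulation only illustrates the process, not the asymptotic bounds above.

        import random
        import networkx as nx

        def push_broadcast_time(G, source, rng):
            """Rounds until every node is informed under the push model."""
            informed = {source}
            rounds = 0
            while len(informed) < G.number_of_nodes():
                newly = set()
                for u in informed:
                    nbrs = list(G.neighbors(u))
                    if nbrs:
                        newly.add(rng.choice(nbrs))      # each informed node pushes to one random neighbour
                informed |= newly
                rounds += 1
            return rounds

        rng = random.Random(0)
        G = nx.random_geometric_graph(500, radius=0.12, seed=0)
        G = G.subgraph(max(nx.connected_components(G), key=len)).copy()   # keep the giant component
        src = next(iter(G.nodes))
        print("nodes:", G.number_of_nodes(), "broadcast rounds:", push_broadcast_time(G, src, rng))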

  19. Toward Adversarial Online Learning and the Science of Deceptive Machines

    Science.gov (United States)

    2015-11-14

    games (Blum and Monsour 2007). We will define a notion of regret for secure online learning with respect to a static learner. The value of an action a... anomaly detection methods. The availability of large amounts of data requires the processing of data in a "streaming" fashion with online algorithms. Yet

  20. Genetic algorithms with memory- and elitism-based immigrants in dynamic environments.

    Science.gov (United States)

    Yang, Shengxiang

    2008-01-01

    In recent years the genetic algorithm community has shown a growing interest in studying dynamic optimization problems. Several approaches have been devised. The random immigrants and memory schemes are two major ones. The random immigrants scheme addresses dynamic environments by maintaining the population diversity while the memory scheme aims to adapt genetic algorithms quickly to new environments by reusing historical information. This paper investigates a hybrid memory and random immigrants scheme, called memory-based immigrants, and a hybrid elitism and random immigrants scheme, called elitism-based immigrants, for genetic algorithms in dynamic environments. In these schemes, the best individual from memory or the elite from the previous generation is retrieved as the base to create immigrants into the population by mutation. This way, not only can diversity be maintained but it is done more efficiently to adapt genetic algorithms to the current environment. Based on a series of systematically constructed dynamic problems, experiments are carried out to compare genetic algorithms with the memory-based and elitism-based immigrants schemes against genetic algorithms with traditional memory and random immigrants schemes and a hybrid memory and multi-population scheme. The sensitivity analysis regarding some key parameters is also carried out. Experimental results show that the memory-based and elitism-based immigrants schemes efficiently improve the performance of genetic algorithms in dynamic environments.
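
    A minimal sketch of the elitism-based immigrants idea described above: each generation, the elite of the previous generation is mutated to create a small batch of immigrants that replace the worst individuals. The bit-matching problem, rates and sizes below are illustrative, and the usual selection and crossover steps of the full genetic algorithm are omitted.

        import numpy as np

        rng = np.random.default_rng(0)
        POP, LEN, RATIO, P_MUT = 60, 30, 0.2, 0.05

        def fitness(pop, target):
            return (pop == target).sum(axis=1)           # bit-matching stands in for the dynamic problem

        target = rng.integers(0, 2, LEN)                 # the "environment"
        pop = rng.integers(0, 2, (POP, LEN))

        for gen in range(50):
            fit = fitness(pop, target)
            elite = pop[np.argmax(fit)].copy()
            # Create immigrants by mutating the elite individual.
            n_imm = int(RATIO * POP)
            immigrants = np.tile(elite, (n_imm, 1))
            flips = rng.random(immigrants.shape) < P_MUT
            immigrants[flips] ^= 1
            # Replace the worst individuals with the elite-based immigrants.
            worst = np.argsort(fit)[:n_imm]
            pop[worst] = immigrants
            if gen == 25:
                target = rng.integers(0, 2, LEN)         # simulate an environment change
        print("best fitness:", fitness(pop, target).max(), "of", LEN)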

  1. A Joint Land Cover Mapping and Image Registration Algorithm Based on a Markov Random Field Model

    Directory of Open Access Journals (Sweden)

    Apisit Eiumnoh

    2013-10-01

    Full Text Available Traditionally, image registration of multi-modal and multi-temporal images is performed satisfactorily before land cover mapping. However, since multi-modal and multi-temporal images are likely to be obtained from different satellite platforms and/or acquired at different times, perfect alignment is very difficult to achieve. As a result, a proper land cover mapping algorithm must be able to correct registration errors as well as perform an accurate classification. In this paper, we propose a joint classification and registration technique based on a Markov random field (MRF model to simultaneously align two or more images and obtain a land cover map (LCM of the scene. The expectation maximization (EM algorithm is employed to solve the joint image classification and registration problem by iteratively estimating the map parameters and approximate posterior probabilities. Then, the maximum a posteriori (MAP criterion is used to produce an optimum land cover map. We conducted experiments on a set of four simulated images and one pair of remotely sensed images to investigate the effectiveness and robustness of the proposed algorithm. Our results show that, with proper selection of a critical MRF parameter, the resulting LCMs derived from an unregistered image pair can achieve an accuracy that is as high as when images are perfectly aligned. Furthermore, the registration error can be greatly reduced.

  2. Aproximační a online algoritmy

    OpenAIRE

    Tichý, Tomáš

    2008-01-01

    This thesis presents results of our research in the area of optimization problems with incomplete information; our research is focused on online scheduling problems. It is based on worst-case analysis of the studied problems and algorithms; thus we use the methods of competitive analysis. Although there are many "real-world" industrial and theoretical applications of online scheduling problems, there are still many open problems with very simple descripti...

  3. Virtual Vector Machine for Bayesian Online Classification

    OpenAIRE

    Minka, Thomas P.; Xiang, Rongjing; Yuan; Qi

    2012-01-01

    In a typical online learning scenario, a learner is required to process a large data stream using a small memory buffer. Such a requirement is usually in conflict with a learner's primary pursuit of prediction accuracy. To address this dilemma, we introduce a novel Bayesian online classification algorithm, called the Virtual Vector Machine. The virtual vector machine allows one to smoothly trade off prediction accuracy against memory size. The virtual vector machine summarizes the information con...

  4. Stochastic Online Learning in Dynamic Networks under Unknown Models

    Science.gov (United States)

    2016-08-02

    This research aims to develop fundamental theories and practical algorithms for... The key is to develop online learning strategies at each individual node. Specifically, through local information exchange with its neighbors, each... infinitely repeated game with incomplete information and developed a dynamic pricing strategy referred to as Competitive and Cooperative Demand Learning...

  5. Random number generators and the Metropolis algorithm: application to various problems in physics and mechanics as an introduction to computational physics

    International Nuclear Information System (INIS)

    Calvayrac, Florent

    2005-01-01

    We present known and new applications of pseudo-random numbers and of the Metropolis algorithm to phenomena of physical and mechanical interest, such as the search for isomers of simple clusters with interactive visualization, or vehicle motion planning. This progression towards more complicated problems was used with first-year graduate students, who wrote most of the programs presented here. We argue that the use of pseudo-random numbers in simulation and extremum-search programs when teaching numerical methods in physics allows one to obtain quick programs and physically meaningful, demonstrative results without resorting to advanced numerical analysis methods.
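
    A minimal sketch of the Metropolis algorithm of the kind such a course might start from: sampling a one-dimensional Boltzmann-like distribution exp(-E(x)/T) with a random-walk proposal. The energy function and temperature are illustrative choices, not tied to the specific applications in the paper.

        import math
        import random

        def metropolis(energy, x0=0.0, temperature=1.0, step=0.5, n_samples=100_000, seed=0):
            rng = random.Random(seed)
            x, samples = x0, []
            for _ in range(n_samples):
                x_new = x + rng.uniform(-step, step)               # random-walk proposal
                dE = energy(x_new) - energy(x)
                if dE <= 0 or rng.random() < math.exp(-dE / temperature):
                    x = x_new                                      # accept the move
                samples.append(x)                                  # otherwise keep the old state
            return samples

        samples = metropolis(lambda x: 0.5 * x * x)                # harmonic "energy": target is a Gaussian
        mean = sum(samples) / len(samples)
        var = sum((s - mean) ** 2 for s in samples) / len(samples)
        print(round(mean, 2), round(var, 2))                       # close to 0 and 1 for T = 1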

  6. Online Learning of Commission Avoidant Portfolio Ensembles

    OpenAIRE

    Uziel, Guy; El-Yaniv, Ran

    2016-01-01

    We present a novel online ensemble learning strategy for portfolio selection. The new strategy controls and exploits any set of commission-oblivious portfolio selection algorithms. The strategy handles transaction costs using a novel commission avoidance mechanism. We prove a logarithmic regret bound for our strategy with respect to optimal mixtures of the base algorithms. Numerical examples validate the viability of our method and show significant improvement over the state-of-the-art.

  7. Effects of video-based, online education on behavioral and knowledge outcomes in sunscreen use: a randomized controlled trial.

    Science.gov (United States)

    Armstrong, April W; Idriss, Nayla Z; Kim, Randie H

    2011-05-01

    To compare online video and pamphlet education at improving patient comprehension of and adherence to sunscreen use, and to assess patient satisfaction with the two educational approaches. In a randomized controlled trial, 94 participants received either online, video-based education or pamphlet-based education that described the importance and proper use of sunscreen. Sun protective knowledge and sunscreen application behaviors were assessed at baseline and 12 weeks after the group-specific intervention. Participants in both groups had similar levels of baseline sunscreen knowledge. Post-study analysis revealed significantly greater improvement in the knowledge scores of video group members compared to the pamphlet group (p=0.003). More importantly, video group participants reported significantly greater sunscreen adherence and found the education vehicle more useful and appealing than did the pamphlet group. Online video appears to be a more effective educational tool for teaching sun protective knowledge and encouraging sunscreen use than written materials. More effective patient educational methods to encourage sun protection activities, such as regular sunscreen use, have the potential to increase awareness and foster positive, preventative health behaviors against skin cancers. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  8. PROPOSAL OF ALGORITHM FOR ROUTE OPTIMIZATION

    OpenAIRE

    Robert Ramon de Carvalho Sousa; Abimael de Jesus Barros Costa; Eliezé Bulhões de Carvalho; Adriano de Carvalho Paranaíba; Daylyne Maerla Gomes Lima Sandoval

    2016-01-01

    This article uses the "Six Sigma" methodology to develop an algorithm for routing problems that is able to obtain more efficient results than Clarke and Wright's (CW) algorithm (1964) in situations where product delivery demands increase randomly and the service level cannot be increased. In some situations, the proposed algorithm obtained more efficient results than the CW algorithm. The key factor was a reduction in the number of mistakes (on...
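
    For context, a minimal sketch of the classical Clarke and Wright (1964) savings computation against which the proposed algorithm is compared: the saving from serving customers i and j on one route is s(i, j) = d(0, i) + d(0, j) - d(i, j), and candidate merges are considered in decreasing order of saving. The coordinates are illustrative, and capacity checks and the full route-merging step are omitted.

        import math

        depot = (0.0, 0.0)
        customers = {1: (2, 3), 2: (5, 1), 3: (4, 6), 4: (1, 5)}

        def dist(a, b):
            return math.hypot(a[0] - b[0], a[1] - b[1])

        savings = []
        ids = sorted(customers)
        for idx, i in enumerate(ids):
            for j in ids[idx + 1:]:
                s = dist(depot, customers[i]) + dist(depot, customers[j]) - dist(customers[i], customers[j])
                savings.append((s, i, j))

        for s, i, j in sorted(savings, reverse=True):    # merge candidates, best saving first
            print(f"pair ({i}, {j}): saving = {s:.2f}")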

  9. Cost-effectiveness of i-Sleep, a guided online CBT intervention, for patients with insomnia in general practice: protocol of a pragmatic randomized controlled trial.

    Science.gov (United States)

    van der Zweerde, Tanja; Lancee, Jaap; Slottje, Pauline; Bosmans, Judith; Van Someren, Eus; Reynolds, Charles; Cuijpers, Pim; van Straten, Annemieke

    2016-04-02

    Insomnia is a highly prevalent disorder causing clinically significant distress and impairment. Furthermore, insomnia is associated with high societal and individual costs. Although cognitive behavioural treatment for insomnia (CBT-I) is the preferred treatment, it is not used often. Offering CBT-I in an online format may increase access. Many studies have shown that online CBT for insomnia is effective. However, these studies have all been performed in general population samples recruited through media. This protocol article presents the design of a study aimed at establishing feasibility, effectiveness and cost-effectiveness of a guided online intervention (i-Sleep) for patients suffering from insomnia that seek help from their general practitioner as compared to care-as-usual. In a pragmatic randomized controlled trial, adult patients with insomnia disorder recruited through general practices are randomized to a 5-session guided online treatment, which is called "i-Sleep", or to care-as-usual. Patients in the care-as-usual condition will be offered i-Sleep 6 months after inclusion. An ancillary clinician, known as the psychological well-being practitioner who works in the GP practice (PWP; in Dutch: POH-GGZ), will offer online support after every session. Our aim is to recruit one hundred and sixty patients. Questionnaires, a sleep diary and wrist actigraphy will be administered at baseline, post intervention (at 8 weeks), and at 6 months and 12 months follow-up. Effectiveness will be established using insomnia severity as the main outcome. Cost-effectiveness and cost-utility (using costs per quality adjusted life year (QALY) as outcome) will be conducted from a societal perspective. Secondary measures are: sleep diary, daytime consequences, fatigue, work and social adjustment, anxiety, alcohol use, depression and quality of life. The results of this trial will help establish whether online CBT-I is (cost-) effective and feasible in general practice as compared

  10. How random are random numbers generated using photons?

    International Nuclear Information System (INIS)

    Solis, Aldo; Angulo Martínez, Alí M; Ramírez Alarcón, Roberto; Cruz Ramírez, Hector; U’Ren, Alfred B; Hirsch, Jorge G

    2015-01-01

    Randomness is fundamental in quantum theory, with many philosophical and practical implications. In this paper we discuss the concept of algorithmic randomness, which provides a quantitative method to assess the Borel normality of a given sequence of numbers, a necessary condition for it to be considered random. We use Borel normality as a tool to investigate the randomness of ten sequences of bits generated from the differences between detection times of photon pairs generated by spontaneous parametric downconversion. These sequences are shown to fulfil the randomness criteria without difficulties. As deviations from Borel normality for photon-generated random number sequences have been reported in previous work, a strategy to understand these diverging findings is outlined. (paper)

  11. Preemptive Online Scheduling: Optimal Algorithms for All Speeds

    Czech Academy of Sciences Publication Activity Database

    Ebenlendr, Tomáš; Jawor, W.; Sgall, Jiří

    2009-01-01

    Roč. 53, č. 4 (2009), s. 504-522 ISSN 0178-4617 R&D Projects: GA MŠk(CZ) 1M0545; GA ČR GA201/05/0124; GA AV ČR IAA1019401 Institutional research plan: CEZ:AV0Z10190503 Keywords: online algorithms * scheduling Subject RIV: IN - Informatics, Computer Science Impact factor: 0.917, year: 2009

  12. Online learning in repeated auctions

    OpenAIRE

    Weed, Jonathan; Perchet, Vianney; Rigollet, Philippe

    2015-01-01

    Motivated by online advertising auctions, we consider repeated Vickrey auctions where goods of unknown value are sold sequentially and bidders only learn (potentially noisy) information about a good's value once it is purchased. We adopt an online learning approach with bandit feedback to model this problem and derive bidding strategies for two models: stochastic and adversarial. In the stochastic model, the observed values of the goods are random variables centered around the true value of t...

  13. Automated seismic detection of landslides at regional scales: a Random Forest based detection algorithm

    Science.gov (United States)

    Hibert, C.; Michéa, D.; Provost, F.; Malet, J. P.; Geertsema, M.

    2017-12-01

    Detection of landslide occurrences and measurement of their dynamic properties during run-out is a high research priority but a logistical and technical challenge. Seismology has started to help in several important ways. Taking advantage of the densification of global, regional and local networks of broadband seismic stations, recent advances now permit the seismic detection and location of landslides in near-real-time. This seismic detection could greatly increase the spatio-temporal resolution at which we study landslide triggering, which is critical to better understand the influence of external forcings such as rainfall and earthquakes. However, automatically detecting the seismic signals generated by landslides still represents a challenge, especially for events with small mass. The low signal-to-noise ratio classically observed for landslide-generated seismic signals and the difficulty of discriminating these signals from those generated by regional earthquakes or by anthropogenic and natural noise are some of the obstacles that have to be circumvented. We present a new method for automatically constructing instrumental landslide catalogues from continuous seismic data. We developed a robust and versatile solution, which can be implemented in any context where seismic detection of landslides or other mass movements is relevant. The method is based on spectral detection of the seismic signals and identification of the sources with a Random Forest machine learning algorithm. The spectral detection allows detecting signals with a low signal-to-noise ratio, while the Random Forest algorithm achieves a high rate of positive identification of the seismic signals generated by landslides and other seismic sources. The processing chain is implemented in a high-performance computing centre, which permits rapid exploration of years of continuous seismic data. We present here the preliminary results of the application of this processing chain for years

  14. Information needs of generalists and specialists using online best-practice algorithms to answer clinical questions.

    Science.gov (United States)

    Cook, David A; Sorensen, Kristi J; Linderbaum, Jane A; Pencille, Laurie J; Rhodes, Deborah J

    2017-07-01

    To better understand clinician information needs and learning opportunities by exploring the use of best-practice algorithms across different training levels and specialties. We developed interactive online algorithms (care process models [CPMs]) that integrate current guidelines, recent evidence, and local expertise to represent cross-disciplinary best practices for managing clinical problems. We reviewed CPM usage logs from January 2014 to June 2015 and compared usage across specialty and provider type. During the study period, 4009 clinicians (2014 physicians in practice, 1117 resident physicians, and 878 nurse practitioners/physician assistants [NP/PAs]) viewed 140 CPMs a total of 81 764 times. Usage varied from 1 to 809 views per person, and from 9 to 4615 views per CPM. Residents and NP/PAs viewed CPMs more often than practicing physicians. Among 2742 users with known specialties, generalists (N = 1397) used CPMs more often (mean 31.8, median 7 views) than specialists (N = 1345; mean 6.8, median 2; P < .0001). The topics used by specialists largely aligned with topics within their specialties. The top 20% of available CPMs (28/140) collectively accounted for 61% of uses. In all, 2106 clinicians (52%) returned to the same CPM more than once (average 7.8 views per topic; median 4, maximum 195). Generalists revisited topics more often than specialists (mean 8.8 vs 5.1 views per topic; P < .0001). CPM usage varied widely across topics, specialties, and individual clinicians. Frequently viewed and recurrently viewed topics might warrant special attention. Specialists usually view topics within their specialty and may have unique information needs. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  15. From analytical solutions of solute transport equations to multidimensional time-domain random walk (TDRW) algorithms

    Science.gov (United States)

    Bodin, Jacques

    2015-03-01

    In this study, new multi-dimensional time-domain random walk (TDRW) algorithms are derived from approximate one-dimensional (1-D), two-dimensional (2-D), and three-dimensional (3-D) analytical solutions of the advection-dispersion equation and from exact 1-D, 2-D, and 3-D analytical solutions of the pure-diffusion equation. These algorithms enable the calculation of both the time required for a particle to travel a specified distance in a homogeneous medium and the mass recovery at the observation point, which may be incomplete due to 2-D or 3-D transverse dispersion or diffusion. The method is extended to heterogeneous media, represented as a piecewise collection of homogeneous media. The particle motion is then decomposed along a series of intermediate checkpoints located on the medium interface boundaries. The accuracy of the multi-dimensional TDRW method is verified against (i) exact analytical solutions of solute transport in homogeneous media and (ii) finite-difference simulations in a synthetic 2-D heterogeneous medium of simple geometry. The results demonstrate that the method is ideally suited to purely diffusive transport and to advection-dispersion transport problems dominated by advection. Conversely, the method is not recommended for highly dispersive transport problems because the accuracy of the advection-dispersion TDRW algorithms degrades rapidly for a low Péclet number, consistent with the accuracy limit of the approximate analytical solutions. The proposed approach provides a unified methodology for deriving multi-dimensional time-domain particle equations and may be applicable to other mathematical transport models, provided that appropriate analytical solutions are available.

  16. Emergence of an optimal search strategy from a simple random walk.

    Science.gov (United States)

    Sakiyama, Tomoko; Gunji, Yukio-Pegio

    2013-09-06

    In reports addressing animal foraging strategies, it has been stated that Lévy-like algorithms represent an optimal search strategy in an unknown environment, because of their super-diffusion properties and power-law-distributed step lengths. Here, starting with a simple random walk algorithm, which offers the agent a randomly determined direction at each time step with a fixed move length, we investigated how flexible exploration is achieved if an agent alters its randomly determined next step forward and the rule that controls its random movement based on its own directional moving experiences. We showed that our algorithm led to an effective food-searching performance compared with a simple random walk algorithm and exhibited super-diffusion properties, despite the uniform step lengths. Moreover, our algorithm exhibited a power-law distribution independent of uniform step lengths.
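
    A minimal version of the fixed-step-length baseline walker the authors start from; the adaptive rule that produces super-diffusion is omitted, and the function name and step count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def simple_random_walk(n_steps, step_length=1.0):
    """Baseline searcher: at every step pick a uniformly random heading and
    move a fixed distance. Returns the (n_steps + 1, 2) array of positions."""
    headings = rng.uniform(0.0, 2.0 * np.pi, size=n_steps)
    steps = step_length * np.column_stack((np.cos(headings), np.sin(headings)))
    return np.vstack((np.zeros(2), np.cumsum(steps, axis=0)))

path = simple_random_walk(1000)
# Displacement grows diffusively for this baseline walk; the adaptive rule
# described in the abstract is what produces the super-diffusive behaviour.
msd = np.mean(np.sum(path**2, axis=1))
print(msd)
```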

  17. Performance-scalable volumetric data classification for online industrial inspection

    Science.gov (United States)

    Abraham, Aby J.; Sadki, Mustapha; Lea, R. M.

    2002-03-01

    Non-intrusive inspection and non-destructive testing of manufactured objects with complex internal structures typically requires the enhancement, analysis and visualization of high-resolution volumetric data. Given the increasing availability of fast 3D scanning technology (e.g. cone-beam CT), enabling on-line detection and accurate discrimination of components or sub-structures, the inherent complexity of classification algorithms inevitably leads to throughput bottlenecks. Indeed, whereas typical inspection throughput requirements range from 1 to 1000 volumes per hour, depending on density and resolution, current computational capability is one to two orders-of-magnitude less. Accordingly, speeding up classification algorithms requires both reduction of algorithm complexity and acceleration of computer performance. A shape-based classification algorithm, offering algorithm complexity reduction, by using ellipses as generic descriptors of solids-of-revolution, and supporting performance-scalability, by exploiting the inherent parallelism of volumetric data, is presented. A two-stage variant of the classical Hough transform is used for ellipse detection and correlation of the detected ellipses facilitates position-, scale- and orientation-invariant component classification. Performance-scalability is achieved cost-effectively by accelerating a PC host with one or more COTS (Commercial-Off-The-Shelf) PCI multiprocessor cards. Experimental results are reported to demonstrate the feasibility and cost-effectiveness of the data-parallel classification algorithm for on-line industrial inspection applications.
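
    A single-slice illustration of Hough-based ellipse detection using scikit-image (not the two-stage, parallelized variant described above); the synthetic image and all thresholds are illustrative choices.

```python
import numpy as np
from skimage.draw import ellipse_perimeter
from skimage.feature import canny
from skimage.transform import hough_ellipse

# Synthetic slice containing one elliptical cross-section.
image = np.zeros((120, 120), dtype=np.uint8)
rr, cc = ellipse_perimeter(60, 60, 25, 40, orientation=0.3)
image[rr, cc] = 255

edges = canny(image, sigma=2.0)
# Accumulate candidate ellipses and keep the highest-scoring one, if any.
result = hough_ellipse(edges, accuracy=20, threshold=30, min_size=20, max_size=60)
if len(result):
    result.sort(order='accumulator')
    best = result[-1]
    print('centre:', (float(best['yc']), float(best['xc'])),
          'semi-axes:', (float(best['a']), float(best['b'])),
          'orientation:', float(best['orientation']))
```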

  18. Comparison of Online Versus Classroom Delivery of an Immunization Elective Course

    Science.gov (United States)

    Pitterle, Michael E.; Hayney, Mary S.

    2014-01-01

    Objective. To compare performance and preferences of students who were randomly allocated to classroom or online sections of an elective course on immunization. Methods. Students were randomly assigned to either the classroom or online section. All course activities (lectures, quizzes, case discussions, vaccine administration, and final examination) were the same for both sections, except for the delivery of lecture material. Assessment. Students were surveyed on their preferences at the beginning and end of the semester. At the end of the semester, the majority of students in the classroom group preferred classroom or blended delivery, while the majority of students in the online group preferred blended or online delivery. Grades were compared at the end of the semester; there was no significant difference for any of the grades in the course between the 2 sections. Conclusion. There was no difference in student performance between the classroom and online sections, suggesting that online delivery is an effective way to teach students about immunization. PMID:24954936

  19. Internet-based implementation of non-pharmacological interventions of the "people getting a grip on arthritis" educational program: an international online knowledge translation randomized controlled trial design protocol.

    Science.gov (United States)

    Brosseau, Lucie; Wells, George; Brooks-Lineker, Sydney; Bennell, Kim; Sherrington, Cathie; Briggs, Andrew; Sturnieks, Daina; King, Judy; Thomas, Roanne; Egan, Mary; Loew, Laurianne; De Angelis, Gino; Casimiro, Lynn; Toupin April, Karine; Cavallo, Sabrina; Bell, Mary; Ahmed, Rukhsana; Coyle, Doug; Poitras, Stéphane; Smith, Christine; Pugh, Arlanna; Rahman, Prinon

    2015-02-03

    Rheumatoid arthritis (RA) affects 2.1% of the Australian population (1.5% males; 2.6% females), with the highest prevalence from ages 55 to over 75 years (4.4-6.1%). In Canada, RA affects approximately 0.9% of adults, and within 30 years that is expected to increase to 1.3%. With an aging population and a greater number of individuals with modifiable risk factors for chronic diseases, such as arthritis, there is an urgent need for co-care management of arthritic conditions. The increasing trend and present shifts in the health services and policy sectors suggest that digital information delivery is becoming more prominent. Therefore, it is necessary to further investigate the use of online resources for RA information delivery. The objective is to examine the effect of implementing an online program provided to patients with RA, the People Getting a Grip on Arthritis for RA (PGrip-RA) program, using information communication technologies (ie, Facebook and emails) in combination with arthritis health care professional support and electronic educational pamphlets. We believe this can serve as a useful and economical method of knowledge translation (KT). This KT randomized controlled trial will use a prospective randomized open-label blinded-endpoint design to compare four different intervention approaches of the PGrip-RA program to a control group receiving general electronic educational pamphlets self-management in RA via email. Depending on group allocation, links to the Arthritis Society PGrip-RA material will be provided either through Facebook or by email. One group will receive feedback online from trained health care professionals. The intervention period is 6 weeks. Participants will have access to the Internet-based material after the completion of the baseline questionnaires until the final follow-up questionnaire at 6 months. We will invite 396 patients from Canadian and Australian Arthritis Consumers' Associations to participate using online recruitment

  20. Convergence of a random walk method for the Burgers equation

    International Nuclear Information System (INIS)

    Roberts, S.

    1985-10-01

    In this paper we consider a random walk algorithm for the solution of Burgers' equation. The algorithm uses the method of fractional steps. The non-linear advection term of the equation is solved by advecting ''fluid'' particles in a velocity field induced by the particles. The diffusion term of the equation is approximated by adding an appropriate random perturbation to the positions of the particles. Though the algorithm is inefficient as a method for solving Burgers' equation, it does model a similar method, the random vortex method, which has been used extensively to solve the incompressible Navier-Stokes equations. The purpose of this paper is to demonstrate the strong convergence of our random walk method and so provide a model for the proof of convergence for more complex random walk algorithms; for instance, the random vortex method without boundaries
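
    A deliberately crude illustration of the fractional-step idea only: each particle simply keeps its initially sampled velocity (a stand-in for the particle-induced velocity field) and receives Gaussian kicks of variance 2*nu*dt for the diffusion term. This is not the convergent construction analyzed in the paper, and all parameters are made up.

```python
import numpy as np

rng = np.random.default_rng(2)

def burgers_random_walk(x, u, nu, dt, n_steps):
    """Fractional-step particle scheme for u_t + u u_x = nu u_xx (illustrative).

    Advection sub-step: move each particle with the velocity sample it carries.
    Diffusion sub-step: add a Gaussian kick of variance 2 * nu * dt.
    """
    x = x.copy()
    for _ in range(n_steps):
        x += u * dt                                            # advection sub-step
        x += rng.normal(0.0, np.sqrt(2.0 * nu * dt), x.size)   # diffusion sub-step
    return x

# Particles sampling a smooth initial profile u0(x) = -sin(pi x) on [-1, 1].
x0 = np.linspace(-1.0, 1.0, 2000)
u0 = -np.sin(np.pi * x0)
xT = burgers_random_walk(x0, u0, nu=0.05, dt=1e-3, n_steps=500)
# A histogram of particle positions indicates how the initially smooth profile
# steepens under the nonlinear advection term while viscosity smears it out.
hist, edges = np.histogram(xT, bins=50, range=(-1.5, 1.5), density=True)
print(hist.max())
```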

  1. Particle identification algorithms for the PANDA Endcap Disc DIRC

    Science.gov (United States)

    Schmidt, M.; Ali, A.; Belias, A.; Dzhygadlo, R.; Gerhardt, A.; Götzen, K.; Kalicy, G.; Krebs, M.; Lehmann, D.; Nerling, F.; Patsyuk, M.; Peters, K.; Schepers, G.; Schmitt, L.; Schwarz, C.; Schwiening, J.; Traxler, M.; Böhm, M.; Eyrich, W.; Lehmann, A.; Pfaffinger, M.; Uhlig, F.; Düren, M.; Etzelmüller, E.; Föhl, K.; Hayrapetyan, A.; Kreutzfeld, K.; Merle, O.; Rieke, J.; Wasem, T.; Achenbach, P.; Cardinali, M.; Hoek, M.; Lauth, W.; Schlimme, S.; Sfienti, C.; Thiel, M.

    2017-12-01

    The Endcap Disc DIRC has been developed to provide excellent particle identification for the future PANDA experiment by separating pions and kaons up to a momentum of 4 GeV/c with a separation power of 3 standard deviations in the polar angle region from 5° to 22°. This goal will be achieved using dedicated particle identification algorithms based on likelihood methods and will be applied in an offline analysis and online event filtering. This paper evaluates the resulting PID performance using Monte-Carlo simulations to study basic single track PID as well as the analysis of complex physics channels. The online reconstruction algorithm has been tested with a Virtex-4 FPGA card and optimized regarding the resulting constraints.

  2. Ranking online quality and reputation via the user activity

    Science.gov (United States)

    Liu, Xiao-Lu; Guo, Qiang; Hou, Lei; Cheng, Can; Liu, Jian-Guo

    2015-10-01

    How to design an accurate algorithm for ranking the object quality and user reputation is of importance for online rating systems. In this paper we present an improved iterative algorithm for online ranking object quality and user reputation in terms of the user degree (IRUA), where the user's reputation is measured by his/her rating vector, the corresponding objects' quality vector and the user degree. The experimental results for the empirical networks show that the AUC values of the IRUA algorithm can reach 0.9065 and 0.8705 in Movielens and Netflix data sets, respectively, which is better than the results generated by the traditional iterative ranking methods. Meanwhile, the results for the synthetic networks indicate that user degree should be considered in real rating systems due to users' rating behaviors. Moreover, we find that enhancing or reducing the influences of the large-degree users could produce more accurate reputation ranking lists.
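
    A generic iterative quality/reputation scheme in the spirit described above, with an ad hoc degree damping; this is not the IRUA update rule itself, and the rating matrix is a toy example.

```python
import numpy as np

def iterative_ranking(R, n_iter=50, eps=1e-8):
    """Generic iterative quality/reputation estimation on a rating matrix.

    R is a (users x objects) array with np.nan for missing ratings.
    Quality   : reputation-weighted average of the ratings an object received.
    Reputation: inverse of a user's mean squared deviation from the current
                quality estimates, damped by the user's degree (ratings given).
    """
    mask = ~np.isnan(R)
    degree = mask.sum(axis=1)
    reputation = np.ones(R.shape[0])
    quality = np.nanmean(R, axis=0)
    for _ in range(n_iter):
        W = mask * reputation[:, None]
        quality = (np.where(mask, R, 0.0) * W).sum(axis=0) / (W.sum(axis=0) + eps)
        err = np.where(mask, (R - quality[None, :])**2, 0.0).sum(axis=1) / (degree + eps)
        reputation = degree / (degree + 1.0) / (err + eps)   # degree-damped inverse error
    return quality, reputation

# Toy example: 4 users rate 3 objects (np.nan = not rated).
R = np.array([[5.0, 4.0, np.nan],
              [5.0, np.nan, 2.0],
              [1.0, 4.0, 2.0],
              [5.0, 4.0, 2.0]])
q, r = iterative_ranking(R)
print(np.round(q, 2), np.round(r, 2))
```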

  3. Effects of preventive online mindfulness interventions on stress and mindfulness: A meta-analysis of randomized controlled trials

    Directory of Open Access Journals (Sweden)

    Wasantha P. Jayawardene, MD, PhD

    2017-03-01

    Empirical evidence suggested that mind-body interventions can be effectively delivered online. This study aimed to examine whether preventive online mindfulness interventions (POMI) for non-clinical populations improve short- and long-term outcomes for perceived stress (primary) and mindfulness (secondary). A systematic search of four electronic databases, manuscript reference lists, and journal content lists was conducted in 2016, using 21 search terms. Eight randomized controlled trials (RCTs) evaluating effects of POMI in non-clinical populations with adequately reported perceived-stress and mindfulness measures pre- and post-intervention were included. Random-effects models were utilized for all effect-size estimations, with meta-regression performed for mean age and percentage of females. Participants were volunteers (adults, predominantly female) from academic, workplace, or community settings. Most interventions utilized simplified Mindfulness-Based Stress Reduction protocols over 2–12 week periods. Post-intervention, a significant medium effect was found for perceived stress (g = 0.432) with moderate heterogeneity, and a significant but small effect for mindfulness (g = 0.275) with low heterogeneity; the highest effects were for middle-aged individuals. At follow-up, a significant large effect was found for perceived stress (g = 0.699) with low heterogeneity and a significant medium effect (g = 0.466) for mindfulness with high heterogeneity. No publication bias was found for perceived stress; publication bias found for mindfulness outcomes led to underestimation of effects, not overestimation. The number of eligible RCTs was low, with inadequate data reporting in some studies. POMI had substantial stress reduction effects and some mindfulness improvement effects. POMI can be a more convenient and cost-effective strategy, compared to traditional face-to-face interventions, especially in the context of busy, hard-to-reach, but digitally accessible populations.

  4. Medium-term effectiveness of online behavioral training in migraine self-management: A randomized trial controlled over 10 months.

    Science.gov (United States)

    Sorbi, M J; Kleiboer, A M; van Silfhout, H G; Vink, G; Passchier, J

    2015-06-01

    This randomized, controlled trial examined the medium-term effectiveness of online behavioral training in migraine self-management (oBT; N = 195) versus waitlist control (WLC; N = 173) on attack frequency, indicators of self-management (primary outcomes), headache top intensity, use of rescue medications, quality of life and disability (secondary outcomes). An online headache diary following the ICHD-II and questionnaires were completed at baseline (T0), post-training (T1) and six months later (T2). Missing data (T1: 24%; T2: 37%) were handled by multiple imputation. We established effect sizes (ES) and tested between-group differences over time with linear mixed modelling techniques based on the intention-to-treat principle. At T2, attack frequency had improved significantly in oBT (-23%, ES = 0.66) but also in WLC (-19%; ES = 0.52). However, self-efficacy, internal and external control in migraine management, and triptan use improved only in oBT. This indicates different processes in both groups and could signify (the start of) active self-management in oBT. Also, only oBT improved migraine-specific quality of life to a sizable extent. oBT produced self-management gains but could not account for improved attack frequency, because WLC improved as well. The perspective that BT effects develop gradually, and that online delivery will boost BT outreach, justifies further research. © International Headache Society 2014. Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  5. Efficient algorithms of multidimensional γ-ray spectra compression

    International Nuclear Information System (INIS)

    Morhac, M.; Matousek, V.

    2006-01-01

    Efficient algorithms to compress multidimensional γ-ray events are presented. Two alternative kinds of compression algorithms, based on the adaptive orthogonal and randomizing transforms, are proposed. In both algorithms, the symmetry of the γ-ray spectra is exploited to reduce the data volume.
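
    As a generic illustration of transform-based spectrum compression (a plain DCT with coefficient thresholding, not the adaptive orthogonal or randomizing transforms of the paper), applied to a synthetic symmetric 2-D spectrum; the keep fraction and spectrum are arbitrary.

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(3)

def compress_spectrum(spectrum, keep_fraction=0.05):
    """Keep only the largest-magnitude DCT coefficients of a 2-D spectrum."""
    coeffs = dctn(spectrum, norm='ortho')
    k = int(keep_fraction * coeffs.size)
    threshold = np.partition(np.abs(coeffs).ravel(), -k)[-k]
    return np.where(np.abs(coeffs) >= threshold, coeffs, 0.0)

def decompress_spectrum(compressed):
    return idctn(compressed, norm='ortho')

# Synthetic symmetric 2-D "coincidence spectrum": two Gaussian peaks plus noise.
x = np.arange(256)
X, Y = np.meshgrid(x, x)
peaks = (np.exp(-((X - 80)**2 + (Y - 80)**2) / 50.0)
         + np.exp(-((X - 180)**2 + (Y - 180)**2) / 80.0))
spectrum = peaks + 0.01 * rng.random((256, 256))
restored = decompress_spectrum(compress_spectrum(spectrum))
print('relative L2 error:', np.linalg.norm(restored - spectrum) / np.linalg.norm(spectrum))
```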

  6. Attitude of students towards online shopping of agricultural products ...

    African Journals Online (AJOL)

    The study examined the attitude of students towards online shopping in selected tertiary institutions in Ogun state. One hundred and thirty-five respondents were sampled using multistage and simple random sampling procedures. Variables measured included their attitude towards online shopping, the factors that affect ...

  7. Probabilistic Signal Recovery and Random Matrices

    Science.gov (United States)

    2016-12-08

    ... that classical methods for linear regression (such as Lasso) are applicable for non-linear data. This surprising finding has already found several ... We studied the complexity of convex sets. In numerical linear algebra, we analyzed the fastest known randomized approximation algorithm for ... and perfect matchings. In numerical linear algebra, we studied the fastest known randomized approximation algorithm for computing the permanents of ...

  8. Improving Student Retention in Online College Classes: Qualitative Insights from Faculty

    Science.gov (United States)

    Russo-Gleicher, Rosalie J.

    2014-01-01

    This article provides qualitative insights into addressing the issue of student retention in online classes in higher education. Semi-structured, in-depth interviews were conducted at random with 16 faculty who teach online courses at a large community college in the Northeast about how to improve online student retention. Qualitative analysis…

  9. [Plaque segmentation of intracoronary optical coherence tomography images based on K-means and improved random walk algorithm].

    Science.gov (United States)

    Wang, Guanglei; Wang, Pengyu; Han, Yechen; Liu, Xiuling; Li, Yan; Lu, Qian

    2017-06-01

    In recent years, optical coherence tomography (OCT) has developed into a popular coronary imaging technology at home and abroad. The segmentation of plaque regions in coronary OCT images has great significance for vulnerable plaque recognition and research. In this paper, a new algorithm based on K-means clustering and improved random walk is proposed, and semi-automated segmentation of calcified plaque, fibrotic plaque and lipid pool is achieved. The weight function of the random walk is improved: the distance between the edges of pixels in the image and the seed points is added to the definition of the weight function, which increases the weak edge weights and prevents over-segmentation. Based on the above methods, OCT images of 9 patients with coronary atherosclerosis were selected for plaque segmentation. By contrasting the doctor's manual segmentation results with this method, it was proved that this method had good robustness and accuracy. It is hoped that this method can be helpful for the clinical diagnosis of coronary heart disease.
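
    The paper's modified weight function is not reproduced here; the sketch below only shows the two-stage structure, using scikit-learn K-means to produce seeds and scikit-image's standard random_walker on a synthetic image. All parameters and the image are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from skimage.segmentation import random_walker

rng = np.random.default_rng(4)

# Synthetic grayscale "OCT-like" image: a brighter region embedded in darker tissue.
image = 0.3 + 0.05 * rng.standard_normal((128, 128))
image[40:80, 50:100] += 0.4

# Step 1: K-means on intensities gives a rough two-class labelling.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(image.reshape(-1, 1))
rough = km.labels_.reshape(image.shape)
bright = int(np.argmax(km.cluster_centers_.ravel()))

# Step 2: keep only confident pixels as seeds (0 = unlabeled for random_walker).
seeds = np.zeros_like(image, dtype=int)
seeds[(rough == bright) & (image > np.percentile(image, 90))] = 2   # plaque seed
seeds[(rough != bright) & (image < np.percentile(image, 40))] = 1   # background seed

# Step 3: the random walker propagates the seed labels with edge-aware weights.
labels = random_walker(image, seeds, beta=130, mode='bf')
print('segmented plaque pixels:', int((labels == 2).sum()))
```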

  10. Online games: a novel approach to explore how partial information influences human random searches

    Science.gov (United States)

    Martínez-García, Ricardo; Calabrese, Justin M.; López, Cristóbal

    2017-01-01

    Many natural processes rely on optimizing the success ratio of a search process. We use an experimental setup consisting of a simple online game in which players have to find a target hidden on a board, to investigate how the rounds are influenced by the detection of cues. We focus on the search duration and the statistics of the trajectories traced on the board. The experimental data are explained by a family of random-walk-based models and probabilistic analytical approximations. If no initial information is given to the players, the search is optimized for cues that cover an intermediate spatial scale. In addition, initial information about the extension of the cues results, in general, in faster searches. Finally, strategies used by informed players turn into non-stationary processes in which the length of each displacement evolves to show a well-defined characteristic scale that is not found in non-informed searches.

  11. Generating equilateral random polygons in confinement II

    International Nuclear Information System (INIS)

    Diao, Y; Ernst, C; Montemayor, A; Ziegler, U

    2012-01-01

    In this paper we continue an earlier study (Diao et al 2011 J. Phys. A: Math. Theor. 44 405202) on the generation algorithms of random equilateral polygons confined in a sphere. Here, the equilateral random polygons are rooted at the center of the confining sphere and the confining sphere behaves like an absorbing boundary. One way to generate such a random polygon is the accept/reject method in which an unconditioned equilateral random polygon rooted at origin is generated. The polygon is accepted if it is within the confining sphere, otherwise it is rejected and the process is repeated. The algorithm proposed in this paper offers an alternative to the accept/reject method, yielding a faster generation process when the confining sphere is small. In order to use this algorithm effectively, a large, reusable data set needs to be pre-computed only once. We derive the theoretical distribution of the given random polygon model and demonstrate, with strong numerical evidence, that our implementation of the algorithm follows this distribution. A run time analysis and a numerical error estimate are given at the end of the paper. (paper)

  12. Random sequential adsorption of cubes

    Science.gov (United States)

    Cieśla, Michał; Kubala, Piotr

    2018-01-01

    Random packings built of cubes are studied numerically using a random sequential adsorption algorithm. To compare the obtained results with previous reports, three different models of cube orientation sampling were used. Also, three different cube-cube intersection algorithms were tested to find the most efficient one. The study focuses on the mean saturated packing fraction as well as kinetics of packing growth. Microstructural properties of packings were analyzed using density autocorrelation function.
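
    A much-simplified sketch of random sequential adsorption: axis-aligned (non-rotated) cubes and a fixed consecutive-failure stopping rule stand in for the orientation-sampling models and saturation criteria studied in the paper; box size, edge length, and the failure budget are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)

def rsa_axis_aligned_cubes(box=1.0, edge=0.1, max_failures=20000):
    """Random sequential adsorption of axis-aligned cubes in a cubic box.

    Candidate centres are drawn uniformly; a candidate is accepted only if its
    cube overlaps no previously placed cube. Two equal axis-aligned cubes
    overlap iff their centre separation is below the edge length in every axis.
    """
    centres = []
    failures = 0
    while failures < max_failures:
        c = rng.uniform(edge / 2, box - edge / 2, size=3)
        if centres and np.all(np.abs(np.asarray(centres) - c) < edge, axis=1).any():
            failures += 1          # rejected: overlaps an existing cube
            continue
        centres.append(c)
        failures = 0
    packing_fraction = len(centres) * edge**3 / box**3
    return np.asarray(centres), packing_fraction

centres, phi = rsa_axis_aligned_cubes()
print(len(centres), 'cubes placed, packing fraction ~', round(phi, 3))
```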

  13. An adaptive inverse kinematics algorithm for robot manipulators

    Science.gov (United States)

    Colbaugh, R.; Glass, K.; Seraji, H.

    1990-01-01

    An adaptive algorithm for solving the inverse kinematics problem for robot manipulators is presented. The algorithm is derived using model reference adaptive control (MRAC) theory and is computationally efficient for online applications. The scheme requires no a priori knowledge of the kinematics of the robot if Cartesian end-effector sensing is available, and it requires knowledge of only the forward kinematics if joint position sensing is used. Computer simulation results are given for the redundant seven-DOF robotics research arm, demonstrating that the proposed algorithm yields accurate joint angle trajectories for a given end-effector position/orientation trajectory.

  14. Quantumness, Randomness and Computability

    International Nuclear Information System (INIS)

    Solis, Aldo; Hirsch, Jorge G

    2015-01-01

    Randomness plays a central role in the quantum mechanical description of our interactions. We review the relationship between the violation of Bell inequalities, non-signaling and randomness. We discuss the challenge in defining a random string, and show that algorithmic information theory provides a necessary condition for randomness using Borel normality. We close with a view on incomputability and its implications in physics. (paper)

  15. Hardware architecture for projective model calculation and false match refining using random sample consensus algorithm

    Science.gov (United States)

    Azimi, Ehsan; Behrad, Alireza; Ghaznavi-Ghoushchi, Mohammad Bagher; Shanbehzadeh, Jamshid

    2016-11-01

    The projective model is an important mapping function for the calculation of global transformation between two images. However, its hardware implementation is challenging because of a large number of coefficients with different required precisions for fixed point representation. A VLSI hardware architecture is proposed for the calculation of a global projective model between input and reference images and refining false matches using random sample consensus (RANSAC) algorithm. To make the hardware implementation feasible, it is proved that the calculation of the projective model can be divided into four submodels comprising two translations, an affine model and a simpler projective mapping. This approach makes the hardware implementation feasible and considerably reduces the required number of bits for fixed point representation of model coefficients and intermediate variables. The proposed hardware architecture for the calculation of a global projective model using the RANSAC algorithm was implemented using Verilog hardware description language and the functionality of the design was validated through several experiments. The proposed architecture was synthesized by using an application-specific integrated circuit digital design flow utilizing 180-nm CMOS technology as well as a Virtex-6 field programmable gate array. Experimental results confirm the efficiency of the proposed hardware architecture in comparison with software implementation.
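
    For contrast with the hardware architecture, a software reference using OpenCV's RANSAC-based homography (projective model) estimation; the ORB matching, the reprojection threshold, and the image paths are illustrative choices, not taken from the paper.

```python
import cv2
import numpy as np

def estimate_projective_model(img_ref, img_in, ransac_thresh=3.0):
    """Estimate a 3x3 projective (homography) model with RANSAC outlier rejection."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img_ref, None)
    k2, d2 = orb.detectAndCompute(img_in, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC repeatedly fits a homography to random 4-point samples and keeps
    # the model with the largest consensus set; `mask` flags the inlier matches.
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, ransac_thresh)
    return H, mask

# Usage (paths are placeholders):
# ref = cv2.imread('reference.png', cv2.IMREAD_GRAYSCALE)
# img = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)
# H, inliers = estimate_projective_model(ref, img)
```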

  16. Parallel Algorithms for Switching Edges in Heterogeneous Graphs.

    Science.gov (United States)

    Bhuiyan, Hasanuzzaman; Khan, Maleq; Chen, Jiangzhuo; Marathe, Madhav

    2017-06-01

    An edge switch is an operation on a graph (or network) where two edges are selected randomly and one endpoint of each is swapped with the other. Edge switch operations have important applications in graph theory and network analysis, such as in generating random networks with a given degree sequence, modeling and analyzing dynamic networks, and in studying various dynamic phenomena over a network. The recent growth of real-world networks motivates the need for efficient parallel algorithms. The dependencies among successive edge switch operations and the requirement to keep the graph simple (i.e., no self-loops or parallel edges) as the edges are switched lead to significant challenges in designing a parallel algorithm. Addressing these challenges requires complex synchronization and communication among the processors, leading to difficulties in achieving a good speedup by parallelization. In this paper, we present distributed memory parallel algorithms for switching edges in massive networks. These algorithms provide good speedup and scale well to a large number of processors. A harmonic mean speedup of 73.25 is achieved on eight different networks with 1024 processors. One of the steps in our edge switch algorithms requires the computation of multinomial random variables in parallel. This paper presents the first non-trivial parallel algorithm for the problem, achieving a speedup of 925 using 1024 processors.
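
    A serial sketch of the edge-switch operation itself (the paper's contribution is its parallelization, which is not reproduced here); the graph and the number of switches are arbitrary. networkx also ships nx.double_edge_swap for the same serial operation.

```python
import random
import networkx as nx

def edge_switch(G, n_switches=1000, rng=random.Random(6)):
    """Perform degree-preserving edge switches on a simple graph in place.

    Pick two edges (a, b) and (c, d) at random and rewire them to (a, d) and
    (c, b), rejecting any switch that would touch fewer than four distinct
    vertices or would create a parallel edge.
    """
    performed = 0
    while performed < n_switches:
        (a, b), (c, d) = rng.sample(list(G.edges()), 2)
        if len({a, b, c, d}) < 4:                   # shared endpoints: skip
            continue
        if G.has_edge(a, d) or G.has_edge(c, b):    # would create a parallel edge
            continue
        G.remove_edge(a, b); G.remove_edge(c, d)
        G.add_edge(a, d); G.add_edge(c, b)
        performed += 1
    return G

G = nx.barabasi_albert_graph(500, 3, seed=1)
degrees_before = sorted(d for _, d in G.degree())
edge_switch(G, 2000)
assert degrees_before == sorted(d for _, d in G.degree())   # degree sequence preserved
```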

  17. The effect of heterogeneous dynamics of online users on information filtering

    International Nuclear Information System (INIS)

    Chen, Bo-Lun; Zeng, An; Chen, Ling

    2015-01-01

    The rapid expansion of the Internet requires effective information filtering techniques to extract the most essential and relevant information for online users. Many recommendation algorithms have been proposed to predict the future items that a given user might be interested in. However, there is an important issue that has always been ignored so far in related works, namely the heterogeneous dynamics of online users. The interest of active users changes more often than that of less active users, which calls for different update frequencies of their recommendation lists. In this paper, we develop a framework to study the effect of heterogeneous dynamics of users on the recommendation performance. We find that the personalized application of recommendation algorithms results in remarkable improvement in the recommendation accuracy and diversity. Our findings may help online retailers make better use of the existing recommendation methods. - Highlights: • We study the effect of heterogeneous dynamics of users on recommendation. • Due to the user heterogeneity, their amount of links in the probe set is different. • The personalized algorithm implementation improves the recommendation performance. • Our results suggest different update frequencies for users' recommendation lists.

  18. The effect of heterogeneous dynamics of online users on information filtering

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Bo-Lun [Department of Computer Science, Yangzhou University of China, Yangzhou 225127 (China); Department of Computer Science, Nanjing University of Aeronautics and Astronautics of China, Nanjing 210016 (China); Department of Physics, University of Fribourg, Chemin du Musee 3, CH-1700 Fribourg (Switzerland); Zeng, An, E-mail: anzeng@bnu.edu.cn [School of Systems Science, Beijing Normal University, Beijing 100875 (China); Chen, Ling [Department of Computer Science, Yangzhou University of China, Yangzhou 225127 (China); Department of Computer Science, Nanjing University of Aeronautics and Astronautics of China, Nanjing 210016 (China)

    2015-11-06

    The rapid expansion of the Internet requires effective information filtering techniques to extract the most essential and relevant information for online users. Many recommendation algorithms have been proposed to predict the future items that a given user might be interested in. However, there is an important issue that has always been ignored so far in related works, namely the heterogeneous dynamics of online users. The interest of active users changes more often than that of less active users, which calls for different update frequencies of their recommendation lists. In this paper, we develop a framework to study the effect of heterogeneous dynamics of users on the recommendation performance. We find that the personalized application of recommendation algorithms results in remarkable improvement in the recommendation accuracy and diversity. Our findings may help online retailers make better use of the existing recommendation methods. - Highlights: • We study the effect of heterogeneous dynamics of users on recommendation. • Due to the user heterogeneity, their amount of links in the probe set is different. • The personalized algorithm implementation improves the recommendation performance. • Our results suggest different update frequencies for users' recommendation lists.

  19. Unsupervised online classifier in sleep scoring for sleep deprivation studies.

    Science.gov (United States)

    Libourel, Paul-Antoine; Corneyllie, Alexandra; Luppi, Pierre-Hervé; Chouvet, Guy; Gervasoni, Damien

    2015-05-01

    This study was designed to evaluate an unsupervised adaptive algorithm for real-time detection of sleep and wake states in rodents. We designed a Bayesian classifier that automatically extracts electroencephalogram (EEG) and electromyogram (EMG) features and categorizes non-overlapping 5-s epochs into one of the three major sleep and wake states without any human supervision. This sleep-scoring algorithm is coupled online with a new device to perform selective paradoxical sleep deprivation (PSD). Controlled laboratory settings for chronic polygraphic sleep recordings and selective PSD. Ten adult Sprague-Dawley rats instrumented for chronic polysomnographic recordings. The performance of the algorithm is evaluated by comparison with the score obtained by a human expert reader. Online detection of PS is then validated with a PSD protocol with duration of 72 hours. Our algorithm gave a high concordance with human scoring with an average κ coefficient > 70%. Notably, the specificity to detect PS reached 92%. Selective PSD using real-time detection of PS strongly reduced PS amounts, leaving only brief PS bouts necessary for the detection of PS in EEG and EMG signals (4.7 ± 0.7% over 72 h, versus 8.9 ± 0.5% in baseline), and was followed by a significant PS rebound (23.3 ± 3.3% over 150 minutes). Our fully unsupervised data-driven algorithm overcomes some limitations of the other automated methods such as the selection of representative descriptors or threshold settings. When used online and coupled with our sleep deprivation device, it represents a better option for selective PSD than other methods like the tedious gentle handling or the platform method. © 2015 Associated Professional Sleep Societies, LLC.
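
    The paper's classifier is an adaptive Bayesian one; the sketch below is only a loose stand-in that clusters 5-second epochs into three states with an unsupervised Gaussian mixture, to illustrate the feature-then-cluster structure. The feature choices (EEG delta-band power, EMG power), the synthetic signals, and all parameters are assumptions.

```python
import numpy as np
from scipy.signal import welch
from sklearn.mixture import GaussianMixture

def epoch_features(eeg, emg, fs, epoch_s=5):
    """Per-epoch features: EEG delta-band (0.5-4 Hz) relative power and log EMG power."""
    n = int(epoch_s * fs)
    feats = []
    for start in range(0, min(len(eeg), len(emg)) - n + 1, n):
        f, p_eeg = welch(eeg[start:start + n], fs=fs, nperseg=min(n, 256))
        delta = p_eeg[(f >= 0.5) & (f < 4.0)].sum() / (p_eeg.sum() + 1e-12)
        emg_power = np.log(np.mean(emg[start:start + n] ** 2) + 1e-12)
        feats.append((delta, emg_power))
    return np.asarray(feats)

# Synthetic signals stand in for real recordings (fs in Hz).
rng = np.random.default_rng(7)
fs = 250
eeg = rng.standard_normal(fs * 600)
emg = rng.standard_normal(fs * 600)
X = epoch_features(eeg, emg, fs)

# Three clusters intended to correspond to wake / slow-wave sleep / paradoxical
# sleep; with purely random synthetic data the labels are of course meaningless.
gmm = GaussianMixture(n_components=3, covariance_type='full', random_state=0).fit(X)
states = gmm.predict(X)
print(np.bincount(states))
```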

  20. Online and In-Person Nutrition Education Improves Breakfast Knowledge, Attitudes, and Behaviors: A Randomized Trial of Participants in the Special Supplemental Nutrition Program for Women, Infants, and Children.

    Science.gov (United States)

    Au, Lauren E; Whaley, Shannon; Rosen, Nila J; Meza, Martha; Ritchie, Lorrene D

    2016-03-01

    Although in-person education is expected to remain central to the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) service delivery, effective online nutrition education has the potential for increased exposure to quality education and a positive influence on nutrition behaviors in WIC participants. Education focused on promoting healthy breakfast behaviors is an important topic for WIC participants because breakfast eating compared with breakfast skipping has been associated with a higher-quality diet and decreased risk for obesity. To examine the influences of online and in-person group nutrition education on changes in knowledge, attitudes, and behaviors related to breakfast eating. Randomized-controlled trial comparing the effectiveness of online and in-person nutrition education between March and September 2014. Five hundred ninety WIC participants from two Los Angeles, CA, WIC clinics were randomly assigned to receive in-person group education (n=359) or online education (n=231). Education focused on ways to reduce breakfast skipping and promoted healthy options at breakfast for parents and their 1- to 5-year-old children participating in WIC. Questionnaires assessing breakfast-related knowledge, attitudes, and behaviors were administered before and after education, and at a 2- to 4-month follow-up. Changes within and between in-person and online groups were compared using t tests and χ(2) tests. Analysis of covariance and generalized estimating equations were used to assess differences in change between groups. Changes in knowledge between pretest and follow-up at 2 to 4 months were similar between groups. Both groups reported reductions in barriers to eating breakfast due to time constraints, not having enough foods at home, and difficulty with preparation. Increases in the frequency of eating breakfast were greater for both the parent (P=0.0007) and child (P=0.01) in the online group compared with the in-person group during

  1. Quantum Image Encryption Algorithm Based on Image Correlation Decomposition

    Science.gov (United States)

    Hua, Tianxiang; Chen, Jiamin; Pei, Dongju; Zhang, Wenquan; Zhou, Nanrun

    2015-02-01

    A novel quantum gray-level image encryption and decryption algorithm based on image correlation decomposition is proposed. The correlation among image pixels is established by utilizing the superposition and measurement principle of quantum states. And a whole quantum image is divided into a series of sub-images. These sub-images are stored into a complete binary tree array constructed previously and then randomly performed by one of the operations of quantum random-phase gate, quantum revolving gate and Hadamard transform. The encrypted image can be obtained by superimposing the resulting sub-images with the superposition principle of quantum states. For the encryption algorithm, the keys are the parameters of random phase gate, rotation angle, binary sequence and orthonormal basis states. The security and the computational complexity of the proposed algorithm are analyzed. The proposed encryption algorithm can resist brute force attack due to its very large key space and has lower computational complexity than its classical counterparts.

  2. Online measurement of LHC beam parameters with the ATLAS High Level Trigger

    International Nuclear Information System (INIS)

    Strauss, E

    2012-01-01

    We present an online measurement of the LHC beamspot parameters in ATLAS using the High Level Trigger (HLT). When a significant change is detected in the measured beamspot, it is distributed to the HLT. There, trigger algorithms like b-tagging which calculate impact parameters or decay lengths benefit from a precise, up-to-date set of beamspot parameters. Additionally, online feedback is sent to the LHC operators in real time. The measurement is performed by an algorithm running on the Level 2 trigger farm, leveraging the high rate of usable events. Dedicated algorithms perform a full scan of the silicon detector to reconstruct event vertices from registered tracks. The distribution of these vertices is aggregated across the farm and their shape is extracted through fits every 60 seconds to determine the beamspot position, size, and tilt. The reconstructed beamspot values are corrected for detector resolution effects, measured in situ using the separation of vertices whose tracks have been split into two collections. Furthermore, measurements for individual bunch crossings have allowed for studies of single-bunch distributions as well as the behavior of bunch trains. This talk will cover the constraints imposed by the online environment and describe how these measurements are accomplished with the given resources. The algorithm tasks must be completed within the time constraints of the Level 2 trigger, with limited CPU and bandwidth allocations. This places an emphasis on efficient algorithm design and the minimization of data requests.

  3. PROPOSAL OF ALGORITHM FOR ROUTE OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    Robert Ramon de Carvalho Sousa

    2016-06-01

    This article uses the "Six Sigma" methodology to elaborate an algorithm for routing problems that is able to obtain more efficient results than those from Clarke and Wright's (CW) algorithm (1964) in situations of random increases in product delivery demand, when the service level cannot be increased. In some situations, the proposed algorithm obtained more efficient results than the CW algorithm. The key factor was a reduction in the number of mistakes (one-way routes) and in the level of result variation.
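
    For reference, a compact sketch of the classical Clarke and Wright (1964) savings heuristic that the article benchmarks against (simplified: routes are merged only at joinable ends, without attempting route reversal); the coordinates, demands, and capacity in the example are made up, and the Six Sigma-derived algorithm itself is not reproduced here.

```python
import math

def clarke_wright(coords, demands, capacity):
    """Classical Clarke-Wright savings heuristic (parallel version, simplified).

    coords[0] is the depot; coords[1:] are customers with demands[1:].
    Returns a list of routes, each a list of customer indices.
    """
    n = len(coords)
    d = [[math.dist(coords[i], coords[j]) for j in range(n)] for i in range(n)]
    routes = {i: [i] for i in range(1, n)}          # one route per customer
    route_of = {i: i for i in range(1, n)}
    load = {i: demands[i] for i in range(1, n)}
    savings = sorted(((d[0][i] + d[0][j] - d[i][j], i, j)
                      for i in range(1, n) for j in range(i + 1, n)), reverse=True)
    for s, i, j in savings:
        ri, rj = route_of[i], route_of[j]
        if ri == rj or load[ri] + load[rj] > capacity:
            continue
        # Merge only if i and j sit at joinable ends of their routes.
        if routes[ri][-1] == i and routes[rj][0] == j:
            merged = routes[ri] + routes[rj]
        elif routes[rj][-1] == j and routes[ri][0] == i:
            merged = routes[rj] + routes[ri]
        else:
            continue
        routes[ri] = merged
        load[ri] += load.pop(rj)
        for k in routes.pop(rj):
            route_of[k] = ri
    return list(routes.values())

depot_and_customers = [(0, 0), (2, 3), (5, 1), (6, 4), (1, 6), (4, 6)]
print(clarke_wright(depot_and_customers, [0, 3, 4, 2, 5, 3], capacity=10))
```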

  4. Improvement of the Gravitational Search Algorithm by means of Low-Discrepancy Sobol Quasi Random-Number Sequence Based Initialization

    Directory of Open Access Journals (Sweden)

    ALTINOZ, O. T.

    2014-08-01

    Nature-inspired optimization algorithms approach the optimum by updating the position of each member of the population. At the beginning of the algorithm, the particles of the population are spread across the search space; this initial distribution provides the starting points of the search process. The position of each particle is then altered, starting from this initial position, until the optimum is found subject to pre-determined conditions such as a maximum number of iterations or a target error value for the fitness function. The initial positions of the population therefore have a direct effect on both the accuracy of the optimum and the computational cost. If any member of the population starts close enough to the optimum, reaching the exact solution becomes easier; conversely, individuals grouped far away from the optimum may waste effort. In this study, a low-discrepancy quasi-random number sequence is used to place the population at the initialization phase, distributing it across the search space in a more uniform manner. The technique is applied to the Gravitational Search Algorithm and compared via its performance on benchmark functions.
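
    A sketch of the initialization idea only (the Gravitational Search Algorithm update itself is omitted), using SciPy's quasi-Monte Carlo module; the population size, bounds, and the discrepancy comparison are illustrative.

```python
import numpy as np
from scipy.stats import qmc

def sobol_population(n_agents, lower, upper, seed=0):
    """Initialize an optimization population from a scrambled Sobol sequence."""
    sampler = qmc.Sobol(d=len(lower), scramble=True, seed=seed)
    unit = sampler.random(n_agents)          # low-discrepancy points in [0, 1)^d
    return qmc.scale(unit, lower, upper)     # map to the search box

def random_population(n_agents, lower, upper, seed=0):
    rng = np.random.default_rng(seed)
    return rng.uniform(lower, upper, size=(n_agents, len(lower)))

lower, upper = [-5.0] * 10, [5.0] * 10
sobol_pop = sobol_population(64, lower, upper)    # 64 is a power of 2 (Sobol-friendly)
rand_pop = random_population(64, lower, upper)

# The centred L2 discrepancy quantifies uniformity; lower means more even coverage.
print(qmc.discrepancy((sobol_pop - np.array(lower)) / 10.0),
      qmc.discrepancy((rand_pop - np.array(lower)) / 10.0))
```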

  5. Bio-inspired online variable recruitment control of fluidic artificial muscles

    Science.gov (United States)

    Jenkins, Tyler E.; Chapman, Edward M.; Bryant, Matthew

    2016-12-01

    This paper details the creation of a hybrid variable recruitment control scheme for fluidic artificial muscle (FAM) actuators with an emphasis on maximizing system efficiency and switching control performance. Variable recruitment is the process of altering a system’s active number of actuators, allowing operation in distinct force regimes. Previously, FAM variable recruitment was only quantified with offline, manual valve switching; this study addresses the creation and characterization of novel, on-line FAM switching control algorithms. The bio-inspired algorithms are implemented in conjunction with a PID and model-based controller, and applied to a simulated plant model. Variable recruitment transition effects and chatter rejection are explored via a sensitivity analysis, allowing a system designer to weigh tradeoffs in actuator modeling, algorithm choice, and necessary hardware. Variable recruitment is further developed through simulation of a robotic arm tracking a variety of spline position inputs, requiring several levels of actuator recruitment. Switching controller performance is quantified and compared with baseline systems lacking variable recruitment. The work extends current variable recruitment knowledge by creating novel online variable recruitment control schemes, and exploring how online actuator recruitment affects system efficiency and control performance. Key topics associated with implementing a variable recruitment scheme, including the effects of modeling inaccuracies, hardware considerations, and switching transition concerns are also addressed.

  6. A Netnographic Study of Entrepreneurial Traits: Evaluating classic typologies using the crowdsourcing algorithm of an online community

    Directory of Open Access Journals (Sweden)

    Marcos Cerqueira Lima

    2014-09-01

    This paper evaluates how the advice of experienced entrepreneurs to young start-up creators in an online community reflects entrepreneurship traits commonly found in conceptual typologies. The overall goal is to contrast and evaluate existing models based on evidence from an online community. This should facilitate future studies to improve current typologies by ranking entrepreneurial traits according to perceived relevance. In order to achieve these objectives, we conducted a "netnographic study" (i.e., the qualitative analysis of web-based content) of 96 answers to the question "What is the best advice for a young, first-time startup CEO?" on Quora.com. Relying on Quora's ranking algorithm (based on crowdsourcing of votes and community prestige), we focused on the top 50% of answers (which we shall call the "above Quora 50" category) considered the most relevant by its 2000+ followers and 120,000+ viewers. We used NVivo as a Qualitative Data Analysis Software to code all the entries into the literature categories. These codes were later retrieved using matrix queries to compare the incidence of traits and the perceived relevance of answers. We found that among the 50% highest-ranking answers on Quora, the following traits are perceived as the most important for young entrepreneurs to develop: management style, attitude in interpersonal relations, vision, self-concept, leadership style, marketing, market and customer knowledge, innovation, technical knowledge and skills, attitude to growth, ability to adapt, purpose and relations system. These results could lead to improving existing typologies and creating new models capable of better identifying people with the highest potential to succeed in new venture creation.

  7. Towards online iris and periocular recognition under relaxed imaging constraints.

    Science.gov (United States)

    Tan, Chun-Wei; Kumar, Ajay

    2013-10-01

    Online iris recognition using distantly acquired images in a less constrained imaging environment requires the development of an efficient iris segmentation approach and a recognition strategy that can exploit multiple features available for potential identification. This paper presents an effective solution toward addressing such a problem. The developed iris segmentation approach exploits a random walker algorithm to efficiently estimate coarsely segmented iris images. These coarsely segmented iris images are postprocessed using a sequence of operations that can effectively improve the segmentation accuracy. The robustness of the proposed iris segmentation approach is ascertained by providing comparison with other state-of-the-art algorithms using publicly available UBIRIS.v2, FRGC, and CASIA.v4-distance databases. Our experimental results achieve improvement of 9.5%, 4.3%, and 25.7% in the average segmentation accuracy, respectively, for the UBIRIS.v2, FRGC, and CASIA.v4-distance databases, as compared with most competing approaches. We also exploit the simultaneously extracted periocular features to achieve significant performance improvement. The joint segmentation and combination strategy suggests promising results and achieves average improvement of 132.3%, 7.45%, and 17.5% in the recognition performance, respectively, from the UBIRIS.v2, FRGC, and CASIA.v4-distance databases, as compared with the related competing approaches.

  8. Internet-Based Implementation of Non-Pharmacological Interventions of the "People Getting a Grip on Arthritis" Educational Program: An International Online Knowledge Translation Randomized Controlled Trial Design Protocol

    Science.gov (United States)

    Thomas, Roanne; De Angelis, Gino

    2015-01-01

    Background Rheumatoid arthritis (RA) affects 2.1% of the Australian population (1.5% males; 2.6% females), with the highest prevalence from ages 55 to over 75 years (4.4-6.1%). In Canada, RA affects approximately 0.9% of adults, and within 30 years that is expected to increase to 1.3%. With an aging population and a greater number of individuals with modifiable risk factors for chronic diseases, such as arthritis, there is an urgent need for co-care management of arthritic conditions. The increasing trend and present shifts in the health services and policy sectors suggest that digital information delivery is becoming more prominent. Therefore, it is necessary to further investigate the use of online resources for RA information delivery. Objective The objective is to examine the effect of implementing an online program provided to patients with RA, the People Getting a Grip on Arthritis for RA (PGrip-RA) program, using information communication technologies (ie, Facebook and emails) in combination with arthritis health care professional support and electronic educational pamphlets. We believe this can serve as a useful and economical method of knowledge translation (KT). Methods This KT randomized controlled trial will use a prospective randomized open-label blinded-endpoint design to compare four different intervention approaches of the PGrip-RA program to a control group receiving general electronic educational pamphlets self-management in RA via email. Depending on group allocation, links to the Arthritis Society PGrip-RA material will be provided either through Facebook or by email. One group will receive feedback online from trained health care professionals. The intervention period is 6 weeks. Participants will have access to the Internet-based material after the completion of the baseline questionnaires until the final follow-up questionnaire at 6 months. We will invite 396 patients from Canadian and Australian Arthritis Consumers’ Associations to

  9. Subcubic Control Flow Analysis Algorithms

    DEFF Research Database (Denmark)

    Midtgaard, Jan; Van Horn, David

    We give the first direct subcubic algorithm for performing control flow analysis of higher-order functional programs. Despite the long-held belief that inclusion-based flow analysis could not surpass the "cubic bottleneck," we apply known set compression techniques to obtain an algorithm that runs in time O(n^3/log n) on a unit-cost random-access memory model machine. Moreover, we refine the initial flow analysis into two more precise analyses incorporating notions of reachability. We give subcubic algorithms for these more precise analyses and relate them to an existing analysis from ...

  10. Moving beyond Bricks and Mortar: Changing the Conversation on Online Education

    Science.gov (United States)

    Miller, Teresa; Ribble, Michael

    2010-01-01

    Online learning has changed education in many ways. This change was not mandated, but instead filled a need expressed by students. A problem with this shift toward online teaching is that it has happened randomly and irregularly within K-12 systems. Demands from students for online learning at both K-12 and higher education levels have not always…

  11. A Lightweight Surface Reconstruction Method for Online 3D Scanning Point Cloud Data Oriented toward 3D Printing

    Directory of Open Access Journals (Sweden)

    Buyun Sheng

    2018-01-01

    Existing surface reconstruction algorithms typically generate large amounts of mesh data. Consequently, many of these algorithms cannot meet the efficiency requirements of real-time data transmission in a web environment. This paper proposes a lightweight surface reconstruction method for online 3D scanned point cloud data oriented toward 3D printing. The proposed online lightweight surface reconstruction algorithm is composed of a point cloud update algorithm (PCU), a rapid iterative closest point algorithm (RICP), and an improved Poisson surface reconstruction algorithm (IPSR). The generated lightweight point cloud data are pretreated using an updating and rapid registration method. The Poisson surface reconstruction is also preceded by a pretreatment that recomputes the point cloud normal vectors based on a least squares method, and the postprocessing of the PDE patch generation is based on biharmonic-like fourth-order PDEs, which effectively reduces the amount of reconstructed mesh data and improves the efficiency of the algorithm. This method was verified using an online personalized customization system that was developed with WebGL and oriented toward 3D printing. The experimental results indicate that this method can generate a lightweight 3D scanning mesh rapidly and efficiently in a web environment.
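
    The PCU, RICP, and IPSR algorithms themselves are not reproduced here; the sketch below only strings together generic counterparts (ICP registration, normal estimation, and Poisson reconstruction) with the Open3D library (API assumed roughly as of version 0.13+) to show where such components sit in a scan-merge-reconstruct pipeline. Voxel size, depth, and the example data are arbitrary.

```python
import numpy as np
import open3d as o3d

def register_and_reconstruct(source_xyz, target_xyz, voxel=0.01):
    """Align an incoming scan to the accumulated cloud (ICP), merge and
    downsample, then run Poisson surface reconstruction on the result."""
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(source_xyz))
    tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(target_xyz))
    icp = o3d.pipelines.registration.registration_icp(
        src, tgt, max_correspondence_distance=5 * voxel,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    src.transform(icp.transformation)

    merged = (src + tgt).voxel_down_sample(voxel)        # lightweight merged cloud
    merged.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=4 * voxel, max_nn=30))
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(merged, depth=8)
    return mesh

# Usage with two partially overlapping synthetic scans of a unit sphere surface:
# pts = np.random.default_rng(8).normal(size=(5000, 3))
# pts /= np.linalg.norm(pts, axis=1, keepdims=True)
# mesh = register_and_reconstruct(pts + [0.02, 0.0, 0.0], pts)
```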

  12. Selective epidemic vaccination under the performant routing algorithms

    Science.gov (United States)

    Bamaarouf, O.; Alweimine, A. Ould Baba; Rachadi, A.; EZ-Zahraouy, H.

    2018-04-01

    Despite the extensive research on traffic dynamics and epidemic spreading, the effect of routing algorithm strategies on traffic-driven epidemic spreading has not received adequate attention. It is well known that more performant routing algorithm strategies are used to overcome the congestion problem. However, our main result shows, unexpectedly, that these algorithms favor virus spreading more than the shortest-path-based algorithm does. In this work, we studied virus spreading in a complex network using the efficient-path and global dynamic routing algorithms, as compared to the shortest-path strategy. Some previous studies have tried to modify the routing rules to limit virus spreading, but at the expense of reducing the traffic transport efficiency. This work proposes a solution to overcome this drawback by using a selective vaccination procedure instead of the random vaccination often used in the literature. We found that selective vaccination succeeded in eradicating the virus better than a purely random intervention for the performant routing algorithm strategies.
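
    The traffic-routing coupling studied in the paper is not modeled here; the sketch below only contrasts selective (hub-targeted) and random vaccination structurally on a scale-free graph with networkx, which is the spirit of the vaccination comparison. Graph size and vaccination budget are arbitrary.

```python
import random
import networkx as nx

def vaccinate(G, nodes):
    """Vaccination modeled as node removal; return the resulting giant component size."""
    H = G.copy()
    H.remove_nodes_from(nodes)
    return len(max(nx.connected_components(H), key=len)) if H.number_of_nodes() else 0

G = nx.barabasi_albert_graph(2000, 3, seed=2)
k = 100                                                   # vaccination budget

by_degree = [n for n, _ in sorted(G.degree, key=lambda x: x[1], reverse=True)[:k]]
at_random = random.Random(3).sample(list(G.nodes()), k)

# Selective vaccination of hubs fragments the network (and hence the paths the
# traffic-driven epidemic can use) far more than vaccinating the same number
# of randomly chosen nodes.
print('giant component after selective vaccination:', vaccinate(G, by_degree))
print('giant component after random vaccination:   ', vaccinate(G, at_random))
```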

  13. Resistance to erythropoiesis stimulating agents in patients treated with online hemodiafiltration and ultrapure low-flux hemodialysis: results from a randomized controlled trial (CONTRAST).

    Directory of Open Access Journals (Sweden)

    Neelke C van der Weerd

    Resistance to erythropoiesis stimulating agents (ESA) is common in patients undergoing chronic hemodialysis (HD) treatment. ESA responsiveness might be improved by enhanced clearance of uremic toxins of middle molecular weight, as can be obtained by hemodiafiltration (HDF). In this analysis of the randomized controlled CONvective TRAnsport STudy (CONTRAST; NCT00205556), the effect of online HDF on ESA resistance and iron parameters was studied. This was a pre-specified secondary endpoint of the main trial. A 12-month analysis of 714 patients randomized to either treatment with online post-dilution HDF or continuation of low-flux HD was performed. Both groups were treated with ultrapure dialysis fluids. ESA resistance, measured every three months, was expressed as the ESA index (weight-adjusted weekly ESA dose in daily defined doses [DDD]/hematocrit). The mean ESA index during 12 months was not different between patients treated with HDF or HD (mean difference HDF versus HD over time 0.029 DDD/kg/Hct/week [-0.024 to 0.081]; P = 0.29). Mean transferrin saturation ratio and ferritin levels during the study tended to be lower in patients treated with HDF (-2.52% [-4.72 to -0.31]; P = 0.02, and -49 ng/mL [-103 to 4]; P = 0.06, respectively), although there was a trend for those patients to receive slightly more iron supplementation (7.1 mg/week [-0.4 to 14.5]; P = 0.06). In conclusion, compared to low-flux HD with ultrapure dialysis fluid, treatment with online HDF did not result in a decrease in ESA resistance. ClinicalTrials.gov NCT00205556.

  14. Subjective randomness as statistical inference.

    Science.gov (United States)

    Griffiths, Thomas L; Daniels, Dylan; Austerweil, Joseph L; Tenenbaum, Joshua B

    2018-06-01

    Some events seem more random than others. For example, when tossing a coin, a sequence of eight heads in a row does not seem very random. Where do these intuitions about randomness come from? We argue that subjective randomness can be understood as the result of a statistical inference assessing the evidence that an event provides for having been produced by a random generating process. We show how this account provides a link to previous work relating randomness to algorithmic complexity, in which random events are those that cannot be described by short computer programs. Algorithmic complexity is both incomputable and too general to capture the regularities that people can recognize, but viewing randomness as statistical inference provides two paths to addressing these problems: considering regularities generated by simpler computing machines, and restricting the set of probability distributions that characterize regularity. Building on previous work exploring these different routes to a more restricted notion of randomness, we define strong quantitative models of human randomness judgments that apply not just to binary sequences - which have been the focus of much of the previous work on subjective randomness - but also to binary matrices and spatial clustering. Copyright © 2018 Elsevier Inc. All rights reserved.

  15. Sequential bayes estimation algorithm with cubic splines on uniform meshes

    International Nuclear Information System (INIS)

    Hossfeld, F.; Mika, K.; Plesser-Walk, E.

    1975-11-01

    After outlining the principles of some recent developments in parameter estimation, a sequential numerical algorithm for generalized curve-fitting applications is presented, combining results from statistical estimation concepts and spline analysis. Due to its recursive nature, the algorithm can be used most efficiently in online experimentation. Using computer-simulated and experimental data, the efficiency and the flexibility of this sequential estimation procedure is extensively demonstrated. (orig.) [de

  16. Efficient sampling algorithms for Monte Carlo based treatment planning

    International Nuclear Information System (INIS)

    DeMarco, J.J.; Solberg, T.D.; Chetty, I.; Smathers, J.B.

    1998-01-01

    Efficient sampling algorithms are necessary for producing a fast Monte Carlo based treatment planning code. This study evaluates several aspects of a photon-based tracking scheme and the effect of optimal sampling algorithms on the efficiency of the code. Four areas were tested: pseudo-random number generation, generalized sampling of a discrete distribution, sampling from the exponential distribution, and delta scattering as applied to photon transport through a heterogeneous simulation geometry. Generalized sampling of a discrete distribution using the cutpoint method can produce speedup gains of one order of magnitude versus conventional sequential sampling. Photon transport modifications based upon the delta scattering method were implemented and compared with a conventional boundary and collision checking algorithm. The delta scattering algorithm is faster by a factor of six versus the conventional algorithm for a boundary size of 5 mm within a heterogeneous geometry. A comparison of portable pseudo-random number algorithms and exponential sampling techniques is also discussed
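
    A sketch of one of the techniques evaluated above: the cutpoint method for sampling a discrete distribution precomputes pointers into the cumulative distribution so each draw needs only a short local search instead of a scan from the start. The table size m and the target distribution here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)

def build_cutpoints(probs, m):
    """Precompute the cutpoint table: cut[k] = first index i with CDF(i) > k/m."""
    cdf = np.cumsum(probs)
    cdf[-1] = 1.0                                    # guard against round-off
    return cdf, np.searchsorted(cdf, np.arange(m) / m, side='right')

def sample_cutpoint(cdf, cut, m, n_samples):
    """Draw samples by starting the CDF search at the precomputed cutpoint."""
    out = np.empty(n_samples, dtype=int)
    for s, u in enumerate(rng.random(n_samples)):
        i = cut[int(u * m)]          # jump close to the right CDF bin
        while cdf[i] < u:            # short sequential search from there
            i += 1
        out[s] = i
    return out

probs = rng.random(500)
probs /= probs.sum()
cdf, cut = build_cutpoints(probs, m=500)
samples = sample_cutpoint(cdf, cut, m=500, n_samples=10000)
# Empirical frequencies should approximate the target distribution.
print(np.abs(np.bincount(samples, minlength=500) / 10000 - probs).max())
```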

  17. Robust Floor Determination Algorithm for Indoor Wireless Localization Systems under Reference Node Failure

    Directory of Open Access Journals (Sweden)

    Kriangkrai Maneerat

    2016-01-01

    One of the challenging problems for indoor wireless multifloor positioning systems is the presence of reference node (RN) failures, which cause the values of received signal strength (RSS) to be missed during the online positioning phase of the location fingerprinting technique. This leads to performance degradation in terms of floor accuracy, which in turn affects other localization procedures. This paper presents a robust floor determination algorithm called Robust Mean of Sum-RSS (RMoS), which can accurately determine the floor on which mobile objects are located and can work under either the fault-free scenario or RN-failure scenarios. The proposed fault-tolerant floor algorithm is based on the mean of the summation of the strongest RSSs obtained from the IEEE 802.15.4 Wireless Sensor Networks (WSNs) during the online phase. The performance of the proposed algorithm is compared with those of different floor determination algorithms in the literature. The experimental results show that the proposed robust floor determination algorithm outperformed the other floor algorithms and achieved the highest percentage of floor determination accuracy in all scenarios tested. Specifically, the proposed algorithm can achieve greater than 95% correct floor determination under the scenario in which 40% of RNs failed.
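
    A toy version of the idea behind RMoS, assuming per-floor lists of RSS readings are already available; the readings, the number of strongest values averaged, and the function names are illustrative, not taken from the paper.

```python
import numpy as np

def determine_floor(rss_by_floor, k_strongest=3):
    """Pick the floor whose k strongest received RSS values have the largest mean.

    rss_by_floor maps floor id -> list of RSS readings (dBm) from that floor's
    reference nodes; readings from failed nodes are simply absent.
    """
    scores = {}
    for floor, readings in rss_by_floor.items():
        if not readings:                 # every reference node on this floor failed
            scores[floor] = -np.inf
            continue
        strongest = sorted(readings, reverse=True)[:k_strongest]
        scores[floor] = float(np.mean(strongest))
    return max(scores, key=scores.get), scores

# Illustrative readings (dBm); floor 2 is closest to the mobile node.
rss = {1: [-78, -81, -90, -85], 2: [-55, -60, -62], 3: [-70, -88]}
floor, scores = determine_floor(rss)
print(floor, scores)
```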

  18. Online software trigger at PANDA/FAIR

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Donghee; Kliemt, Ralf; Nerling, Frank [Helmholtz-Institut Mainz (Germany); Denig, Achim [Institut fuer Kernphysik, Universitaet Mainz (Germany); Goetzen, Klaus; Peters, Klaus [GSI Helmholtzzentrum fuer Schwerionenforschung GmbH (Germany); Collaboration: PANDA-Collaboration

    2014-07-01

    The PANDA experiment at FAIR will employ a novel trigger-less read-out system. Since a conventional hardware trigger concept is not suitable for PANDA, a high level online event filter will be applied to perform fast event selection based on physics properties of the reconstructed events. A trigger-less data stream implies an event selection with track reconstruction and pattern recognition to be performed online, and thus analysing data under real time conditions at event rates of up to 40 MHz. The projected data rate reduction of about three orders of magnitude requires an effective background rejection, while retaining interesting signal events. Real time event selection in the environment of hadronic reactions is rather challenging and relies on sophisticated algorithms for the software trigger. The implementation and the performance of physics trigger algorithms presently studied with realistic Monte Carlo simulations are discussed. The impact of parameters such as momentum or mass resolution, PID probability, vertex reconstruction and a multivariate analysis using the TMVA package for event filtering is presented.

  19. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimizations. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
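
    The sketch below illustrates trajectory (Polyak-Ruppert) averaging on a toy Robbins-Monro recursion; it shows only the averaging idea, not the SAMCMC algorithm of the paper, and the target and step-size schedule are illustrative assumptions.

    ```python
    import numpy as np

    # Robbins-Monro recursion for the root of h(theta) = E[theta - X], X ~ N(2, 1),
    # i.e. the target is theta* = 2.  The trajectory average is tracked alongside
    # the raw iterate and is typically much less noisy.
    rng = np.random.default_rng(0)
    theta, theta_bar = 0.0, 0.0
    for n in range(1, 20001):
        x = rng.normal(2.0, 1.0)
        gain = 1.0 / n ** 0.7                  # slowly decaying step size
        theta -= gain * (theta - x)            # noisy stochastic approximation step
        theta_bar += (theta - theta_bar) / n   # running trajectory average
    print(theta, theta_bar)                    # theta_bar is usually closer to 2.0
    ```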

  20. EFFECTIVENESS OF MULTIMEDIA IN LEARNING & TEACHING DATA STRUCTURES ONLINE

    Directory of Open Access Journals (Sweden)

    Sahalu JUNAIDU

    2008-10-01

    Full Text Available Online electronic education is now being widely accepted as a major viable component of higher education. This is fuelled by the emergence of worldwide information and computer communications technologies. However, online education is not being adopted in science and engineering subjects as widely as in other fields because of the idiosyncrasies of some science and engineering-based courses. For online engineering education to be broadly accepted and utilized, the quality of online courses must, amongst other things, be comparable to or better than that of traditional face-to-face classroom education. This paper explores and reports on the importance of creating multimedia-rich course content and the important role that animations can play in creating a successful online learning experience. Results of our study on an online data structures course over five years of offerings show that students consistently perform much better on questions requiring application of material taught in carefully animated algorithms. These results should carry over to other educational environments.

  1. The on-line asymmetric traveling salesman problem

    NARCIS (Netherlands)

    Ausiello, G.; Bonifaci, V.; Laura, L.

    2008-01-01

    We consider two on-line versions of the asymmetric traveling salesman problem with triangle inequality. For the homing version, in which the salesman is required to return to the city it started from, we give a -competitive algorithm and prove that this is best possible. For the nomadic

  2. GBA manager: an online tool for querying low-complexity regions in proteins.

    Science.gov (United States)

    Bandyopadhyay, Nirmalya; Kahveci, Tamer

    2010-01-01

    Abstract We developed GBA Manager, an online software tool that facilitates the Graph-Based Algorithm (GBA) we proposed in our earlier work. GBA identifies the low-complexity regions (LCR) of protein sequences. GBA exploits a similarity matrix, such as BLOSUM62, to compute the complexity of the subsequences of the input protein sequence. It uses a graph-based algorithm to accurately compute the regions that have low complexities. GBA Manager is a user-friendly web service that enables online querying of protein sequences using GBA. In addition to the querying capabilities of the existing GBA algorithm, GBA Manager computes the p-values of the LCRs identified. The p-value gives an estimate of the probability that the region appears by chance. GBA Manager presents the output in three different understandable formats. GBA Manager is freely accessible at http://bioinformatics.cise.ufl.edu/GBA/GBA.htm.

  3. PSO-RBF Neural Network PID Control Algorithm of Electric Gas Pressure Regulator

    Directory of Open Access Journals (Sweden)

    Yuanchang Zhong

    2014-01-01

    Full Text Available The current electric gas pressure regulator often adopts a conventional PID control algorithm to drive its core part (the micromotor). In order to further improve tracking performance and shorten response time, this paper presents an improved intelligent PID control algorithm for the electric gas pressure regulator. The algorithm uses an improved RBF neural network based on the PSO algorithm to adjust the PID parameters online. Theoretical analysis and simulation results show that the algorithm shortens the step response time and improves tracking performance.
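
    A minimal sketch of a PID loop whose gains can be replaced between steps is given below; in the paper the new gains would come from the PSO-trained RBF network, which is not reproduced here, so the update_gains method and the toy first-order plant are stand-ins.

    ```python
    class OnlinePID:
        """PID controller whose gains may be adjusted between steps."""

        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = 0.0

        def update_gains(self, kp, ki, kd):
            # In the paper, a PSO-tuned RBF network would supply these values online.
            self.kp, self.ki, self.kd = kp, ki, kd

        def step(self, setpoint, measurement, dt):
            error = setpoint - measurement
            self.integral += error * dt
            derivative = (error - self.prev_error) / dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative


    # Toy first-order plant dy/dt = -y + u driven by the controller.
    pid = OnlinePID(kp=2.0, ki=0.5, kd=0.1)
    y, dt = 0.0, 0.01
    for _ in range(1000):
        u = pid.step(setpoint=1.0, measurement=y, dt=dt)
        y += dt * (-y + u)
    print(round(y, 3))   # settles near 1.0
    ```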

  4. The effectiveness of online cognitive behavioral treatment in routine clinical practice

    NARCIS (Netherlands)

    Ruwaard, Jeroen; Lange, Alfred; Schrieken, Bart; Dolan, Conor V; Emmelkamp, Paul

    2012-01-01

    CONTEXT: Randomized controlled trials have identified online cognitive behavioral therapy as an efficacious intervention in the management of common mental health disorders. OBJECTIVE: To assess the effectiveness of online CBT for different mental disorders in routine clinical practice. DESIGN: An

  5. The effectiveness of online cognitive behavioral treatment in routine clinical practice

    NARCIS (Netherlands)

    Ruwaard, J.; Lange, A.; Schrieken, B.; Dolan, C.V.; Emmelkamp, P.

    2012-01-01

    Context Randomized controlled trials have identified online cognitive behavioral therapy as an efficacious intervention in the management of common mental health disorders. Objective To assess the effectiveness of online CBT for different mental disorders in routine clinical practice. Design An

  6. A Smartphone Indoor Localization Algorithm Based on WLAN Location Fingerprinting with Feature Extraction and Clustering.

    Science.gov (United States)

    Luo, Junhai; Fu, Liang

    2017-06-09

    With the development of communication technology, the demand for location-based services is growing rapidly. This paper presents an algorithm for indoor localization based on Received Signal Strength (RSS), which is collected from Access Points (APs). The proposed localization algorithm contains an offline information acquisition phase and an online positioning phase. Firstly, the AP selection algorithm is reviewed and improved based on the stability of signals to remove useless APs; secondly, Kernel Principal Component Analysis (KPCA) is analyzed and used to remove the data redundancy and maintain useful characteristics for nonlinear feature extraction; thirdly, the Affinity Propagation Clustering (APC) algorithm utilizes RSS values to classify data samples and narrow the positioning range. In the online positioning phase, the classified data will be matched with the testing data to determine the position area, and the Maximum Likelihood (ML) estimate will be employed for precise positioning. Eventually, the proposed algorithm is implemented in a real-world environment for performance evaluation. Experimental results demonstrate that the proposed algorithm improves accuracy and reduces computational complexity.

  7. A Smartphone Indoor Localization Algorithm Based on WLAN Location Fingerprinting with Feature Extraction and Clustering

    Directory of Open Access Journals (Sweden)

    Junhai Luo

    2017-06-01

    Full Text Available With the development of communication technology, the demand for location-based services is growing rapidly. This paper presents an algorithm for indoor localization based on Received Signal Strength (RSS), which is collected from Access Points (APs). The proposed localization algorithm contains an offline information acquisition phase and an online positioning phase. Firstly, the AP selection algorithm is reviewed and improved based on the stability of signals to remove useless APs; secondly, Kernel Principal Component Analysis (KPCA) is analyzed and used to remove the data redundancy and maintain useful characteristics for nonlinear feature extraction; thirdly, the Affinity Propagation Clustering (APC) algorithm utilizes RSS values to classify data samples and narrow the positioning range. In the online positioning phase, the classified data will be matched with the testing data to determine the position area, and the Maximum Likelihood (ML) estimate will be employed for precise positioning. Eventually, the proposed algorithm is implemented in a real-world environment for performance evaluation. Experimental results demonstrate that the proposed algorithm improves accuracy and reduces computational complexity.
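
    The sketch below illustrates the two-phase fingerprinting idea (offline radio map, online matching); the KPCA, APC and ML steps of the paper are collapsed into a plain nearest-neighbour match, and the reference points and RSS values are invented for illustration.

    ```python
    import numpy as np

    # Offline phase: build a radio map of mean RSS vectors per reference point.
    # Rows are repeated scans, columns are APs; coordinates are illustrative.
    radio_map = {
        (0.0, 0.0): np.array([[-40, -70, -80], [-42, -68, -79]]),
        (5.0, 0.0): np.array([[-60, -50, -75], [-58, -52, -77]]),
        (0.0, 5.0): np.array([[-72, -78, -45], [-70, -80, -47]]),
    }
    fingerprints = {pos: scans.mean(axis=0) for pos, scans in radio_map.items()}

    def locate(rss_online):
        """Online phase: return the reference point with the closest fingerprint."""
        return min(fingerprints,
                   key=lambda pos: np.linalg.norm(fingerprints[pos] - rss_online))

    print(locate(np.array([-59, -51, -76])))   # -> (5.0, 0.0)
    ```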

  8. Online prediction and control in nonlinear stochastic systems

    DEFF Research Database (Denmark)

    Nielsen, Torben Skov

    2002-01-01

    The present thesis consists of a summary report and ten research papers. The subject of the thesis is on-line prediction and control of non-linear and non-stationary systems based on stochastic modelling. The thesis consists of three parts, where the first part deals with on-line estimation in linear models ... speed and the relationship between (primarily) wind speed and wind power (the power curve). In paper G the model parameters are estimated using an RLS algorithm and any systematic time-variation of the model parameters is disregarded. Two different parameterizations of the power curve are considered ... are estimated using the algorithm proposed in paper C. The power curve and the diurnal variation of wind speed are estimated separately using the local polynomial regression procedure described in paper A. In paper J the parameters of the prediction model are assumed to be smooth functions of wind direction (and ...

  9. The effectiveness and feasibility of an online educational program for improving evidence-based practice literacy: an exploratory randomized study of US chiropractors.

    Science.gov (United States)

    Schneider, Michael; Evans, Roni; Haas, Mitchell; Leach, Matthew; Delagran, Louise; Hawk, Cheryl; Long, Cynthia; Cramer, Gregory D; Walters, Oakland; Vihstadt, Corrie; Terhorst, Lauren

    2016-01-01

    Online education programs are becoming a popular means to disseminate knowledge about evidence-based practice (EBP) among healthcare practitioners. This mode of delivery also offers a viable and potentially sustainable solution for teaching consistent EBP content to learners over time and across multiple geographical locations. We conducted a study with 3 main aims: 1) develop an online distance-learning program about the principles of evidence-based practice (EBP) for chiropractic providers; 2) test the effectiveness of the online program on the attitudes, skills, and use of EBP in a sample of chiropractors; and 3) determine the feasibility of expanding the program for broader-scale implementation. This study was conducted from January 2013 to September 2014. This was an exploratory randomized trial in which 293 chiropractors were allocated to either an online EBP education intervention or a waitlist control. The online EBP program consisted of 3 courses and 4 booster lessons, and was developed using educational resources created in previous EBP educational programs at 4 chiropractic institutions. Participants were surveyed using a validated EBP instrument (EBASE) with 3 rescaled (0 to 100) subscores: Attitudes, Skills, and Use of EBP. Multiple regression was used to compare groups, adjusting for personal and practice characteristics. Satisfaction and compliance with the program was evaluated to assess feasibility. The Training Group showed modest improvement compared to the Waitlist Group in attitudes (Δ =6.2, p < .001) and skills (Δ =10.0, p < .001) subscores, but not the use subscore (Δ = -2.3, p = .470). The majority of participants agreed that the educational program was 'relevant to their profession' (84 %) and 'was worthwhile' (82 %). Overall, engagement in the online program was less than optimal, with 48 % of the Training Group, and 42 % of the Waitlist Group completing all 3 of the program courses. Online EBP training leads to

  10. Online MOS Capacitor Characterization in LabVIEW Environment

    Directory of Open Access Journals (Sweden)

    Chinmay K Maiti

    2009-08-01

    Full Text Available We present an automated evaluation procedure to characterize MOS capacitors involving high-k gate dielectrics. Suitability of LabVIEW environment for online web-based semiconductor device characterization is demonstrated. Developed algorithms have been successfully applied to automate the MOS capacitor measurements for Capacitance-Voltage, Conductance-Voltage and Current-Voltage characteristics. Implementation of the algorithm for use as a remote internet-based characterization tool where the client and server communicate with each other via web services is also shown.

  11. A distributed scheduling algorithm for heterogeneous real-time systems

    Science.gov (United States)

    Zeineldine, Osman; El-Toweissy, Mohamed; Mukkamala, Ravi

    1991-01-01

    Much of the previous work on load balancing and scheduling in distributed environments was concerned with homogeneous systems and homogeneous loads. Several of the results indicated that random policies are as effective as other more complex load allocation policies. The effects of heterogeneity on scheduling algorithms for hard real-time systems are examined. A distributed scheduler designed specifically to handle heterogeneities in both nodes and node traffic is proposed. The performance of the algorithm is measured in terms of the percentage of jobs discarded. While a random task allocation is very sensitive to heterogeneities, the algorithm is shown to be robust to such non-uniformities in system components and load.

  12. On-line supercapacitor dynamic models for energy conversion and management

    International Nuclear Information System (INIS)

    Wu, C.H.; Hung, Y.H.; Hong, C.W.

    2012-01-01

    Highlights: ► On-line supercapacitor dynamic models are derived from time and frequency domains. ► Equivalent circuits with an ANN identifier are derived for nonlinear effects. ► Nonlinear effects include environmental temperature and operating voltage. ► Supercapacitor models can achieve both system fidelity and computation efficiency. - Abstract: This paper develops on-line nonlinear dynamic models of electrochemical supercapacitors for energy conversion and management. Based on the theory of electrochemical impedance spectroscopy, extensive alternating current impedance tests have been conducted to investigate the frequency-domain dynamics of these supercapacitors. A Nyquist diagram is plotted to help establish an equivalent electric circuit, which is regarded as the first-phase linear model. Two performance-influencing factors, environmental temperature and operating voltage, are considered as nonlinear effects. The nonlinear relationships among parameters of the capacitances and resistances in the first-phase model are established by a multi-layer artificial neural network. The neural parameters are trained using a back-propagation algorithm by feeding the experimental data bank. Combining the first-phase model and the on-line neural “parameter identifier”, the algorithm produces an on-line nonlinear dynamic model. Simulation results have proved that this proposed model is able to achieve both system fidelity and computational efficiency.
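
    As a sketch of the "parameter identifier" idea, the code below trains a small neural network to map temperature and operating voltage to one equivalent-circuit parameter; the training data are synthetic stand-ins for the impedance-test measurements, and the network size is an arbitrary choice.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Synthetic stand-in for measured data: capacitance C(T, V) of the first-phase
    # equivalent circuit over temperature (deg C) and terminal voltage (V).
    rng = np.random.default_rng(0)
    T = rng.uniform(-10, 50, 500)
    V = rng.uniform(0.5, 2.7, 500)
    C = 100.0 + 0.4 * T + 15.0 * V + rng.normal(0, 0.5, 500)   # assumed smooth dependence

    X = np.column_stack([T, V])
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(16, 16),
                                       max_iter=5000, random_state=0))
    model.fit(X, C)

    # Online use: feed the current operating point to obtain the circuit parameter.
    print(model.predict([[25.0, 2.0]]))   # capacitance estimate at 25 degC, 2.0 V
    ```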

  13. Online scheduling of jobs with fixed start times on related machines

    Czech Academy of Sciences Publication Activity Database

    Epstein, L.; Jeż, Łukasz; Sgall, J.; van Stee, R.

    2016-01-01

    Roč. 74, č. 1 (2016), s. 156-176 ISSN 0178-4617 R&D Projects: GA AV ČR IAA100190902; GA ČR GBP202/12/G061 Institutional support: RVO:67985840 Keywords : online scheduling * online algorithms * related machines Subject RIV: BA - General Mathematics Impact factor: 0.735, year: 2016 http://link.springer.com/article/10.1007%2Fs00453-014-9940-2

  14. A not quite random walk: Experimenting with the ethnomethods of the algorithm

    Directory of Open Access Journals (Sweden)

    Malte Ziewitz

    2017-11-01

    Full Text Available Algorithms have become a widespread trope for making sense of social life. Science, finance, journalism, warfare, and policing—there is hardly anything these days that has not been specified as “algorithmic.” Yet, although the trope has brought together a variety of audiences, it is not quite clear what kind of work it does. Often portrayed as powerful yet inscrutable entities, algorithms maintain an air of mystery that makes them both interesting and difficult to understand. This article takes on this problem and examines the role of algorithms not as techno-scientific objects to be known, but as a figure that is used for making sense of observations. Following in the footsteps of Harold Garfinkel’s tutorial cases, I shall illustrate the implications of this view through an experiment with algorithmic navigation. Challenging participants to go on a walk, guided not by maps or GPS but by an algorithm developed on the spot, I highlight a number of dynamics typical of reasoning with running code, including the ongoing respecification of rules and observations, the stickiness of the procedure, and the selective invocation of the algorithm as an intelligible object. The materials thus provide an opportunity to rethink key issues at the intersection of the social sciences and the computational, including popular concerns with transparency, accountability, and ethics.

  15. A Partial Backlogging Inventory Model for Deteriorating Item under Fuzzy Inflation and Discounting over Random Planning Horizon: A Fuzzy Genetic Algorithm Approach

    Directory of Open Access Journals (Sweden)

    Dipak Kumar Jana

    2013-01-01

    Full Text Available An inventory model for a deteriorating item is considered in a random planning horizon under inflation and time value of money. The model is described in two different environments: random and fuzzy random. The proposed model allows stock-dependent consumption rate and shortages with partial backlogging. In the fuzzy stochastic model, possibility chance constraints are used for defuzzification of imprecise expected total profit. Finally, genetic algorithm (GA) and fuzzy simulation-based genetic algorithm (FSGA) are used to make decisions for the above inventory models. The models are illustrated with some numerical data. Sensitivity analysis on the expected profit function is also presented. Scope and Purpose. The traditional inventory model considers the ideal case in which depletion of inventory is caused by a constant demand rate. However, to keep sales higher, the inventory level would need to remain high. Of course, this would also result in higher holding or procurement cost. Also, in many real situations, during a longer shortage period some of the customers may refuse the management. For instance, for fashionable commodities and high-tech products with short product life cycle, the willingness for a customer to wait for backlogging is diminishing with the length of the waiting time. Most of the classical inventory models did not take into account the effects of inflation and time value of money. But in the past, the economic situation of most of the countries has changed to such an extent due to large-scale inflation and consequent sharp decline in the purchasing power of money. So, it has not been possible to ignore the effects of inflation and time value of money any more. The purpose of this paper is to maximize the expected profit in the random planning horizon.

  16. Randomized Algorithms for Scalable Machine Learning

    OpenAIRE

    Kleiner, Ariel Jacob

    2012-01-01

    Many existing procedures in machine learning and statistics are computationally intractable in the setting of large-scale data. As a result, the advent of rapidly increasing dataset sizes, which should be a boon yielding improved statistical performance, instead severely blunts the usefulness of a variety of existing inferential methods. In this work, we use randomness to ameliorate this lack of scalability by reducing complex, computationally difficult inferential problems to larger sets o...

  17. A Random-Dot Kinematogram for Web-Based Vision Research

    Directory of Open Access Journals (Sweden)

    Sivananda Rajananda

    2018-01-01

    Full Text Available Web-based experiments using visual stimuli have become increasingly common in recent years, but many frequently-used stimuli in vision research have yet to be developed for online platforms. Here, we introduce the first open access random-dot kinematogram (RDK) for use in web browsers. This fully customizable RDK offers options to implement several different types of noise (random position, random walk, random direction) and parameters to control aperture shape, coherence level, the number of dots, and other features. We include links to commented JavaScript code for easy implementation in web-based experiments, as well as an example of how this stimulus can be integrated as a plugin with a JavaScript library for online studies (jsPsych).

  18. Biased random key genetic algorithm with insertion and gender selection for capacitated vehicle routing problem with time windows

    Science.gov (United States)

    Rochman, Auliya Noor; Prasetyo, Hari; Nugroho, Munajat Tri

    2017-06-01

    The Vehicle Routing Problem (VRP) often occurs when manufacturers need to distribute their products to customers/outlets. The distribution process is typically restricted by the capacity of the vehicle and the working hours at the distributor. This type of VRP is also known as the Capacitated Vehicle Routing Problem with Time Windows (CVRPTW). A Biased Random Key Genetic Algorithm (BRKGA) was designed and coded in MATLAB to solve a CVRPTW case of soft drink distribution. The standard BRKGA was then modified by applying chromosome insertion into the initial population and defining chromosome gender for parents undergoing the crossover operation. The performance of the established algorithms was then compared to a heuristic procedure for solving a soft drink distribution problem. Some findings are revealed: (1) the total distribution cost of BRKGA with insertion (BRKGA-I) results in a cost saving of 39% compared to the total cost of the heuristic method; (2) BRKGA with gender selection (BRKGA-GS) could further improve the performance of the heuristic method. However, the BRKGA-GS tends to yield worse results compared to those obtained from the standard BRKGA.
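
    The decoding step at the heart of any BRKGA for the CVRPTW can be sketched as below: sorted random keys give the visiting order, and capacity violations open new routes. The demands, capacity and keys are illustrative, and time-window checks are omitted.

    ```python
    import numpy as np

    def decode(keys, demands, capacity):
        """Decode a random-key chromosome into capacitated routes.

        Customers are visited in the order given by sorting the keys; a new route
        is opened whenever the vehicle capacity would be exceeded.
        """
        order = np.argsort(keys)             # random keys -> customer permutation
        routes, current, load = [], [], 0.0
        for customer in order:
            if load + demands[customer] > capacity and current:
                routes.append(current)
                current, load = [], 0.0
            current.append(int(customer))
            load += demands[customer]
        if current:
            routes.append(current)
        return routes

    rng = np.random.default_rng(3)
    demands = rng.integers(1, 10, size=8)
    keys = rng.random(8)                     # one chromosome of the BRKGA population
    print(decode(keys, demands, capacity=15))
    ```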

  19. Fractal Landscape Algorithms for Environmental Simulations

    Science.gov (United States)

    Mao, H.; Moran, S.

    2014-12-01

    Natural science and geographical research are now able to take advantage of environmental simulations that more accurately test experimental hypotheses, resulting in deeper understanding. Experiments affected by the natural environment can benefit from 3D landscape simulations capable of simulating a variety of terrains and environmental phenomena. Such simulations can employ random terrain generation algorithms that dynamically simulate environments to test specific models against a variety of factors. Through the use of noise functions such as Perlin noise, Simplex noise, and the diamond-square algorithm, computers can generate simulations that model a variety of landscapes and ecosystems. This study shows how these algorithms work together to create realistic landscapes. By seeding values into the diamond-square algorithm, one can control the shape of the landscape. Perlin noise and Simplex noise are also used to simulate moisture and temperature. The smooth gradient created by coherent noise allows more realistic landscapes to be simulated. Terrain generation algorithms can be used in environmental studies and physics simulations. Potential studies that would benefit from simulations include the geophysical impact of flash floods or drought on a particular region and regional impacts on low-lying areas due to global warming and rising sea levels. Furthermore, terrain generation algorithms also serve as aesthetic tools to display landscapes (Google Earth) and to simulate planetary landscapes. Hence, they can be used as tools to assist science education. Algorithms used to generate these natural phenomena provide scientists with a different approach to analyzing our world. The random algorithms used in terrain generation not only contribute to generating the terrains themselves but are also capable of simulating weather patterns.
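
    A one-dimensional midpoint-displacement sketch of the fractal idea behind the diamond-square algorithm is shown below; the 2D diamond and square passes follow the same recipe of averaging neighbours and adding noise whose amplitude shrinks at each level. The roughness value and number of levels are illustrative.

    ```python
    import numpy as np

    def midpoint_displacement(levels, roughness=0.6, seed=0):
        """1D fractal terrain: repeatedly insert noisy midpoints between samples."""
        rng = np.random.default_rng(seed)
        heights = np.array([0.0, 0.0])             # two endpoints of the profile
        amplitude = 1.0
        for _ in range(levels):
            mids = (heights[:-1] + heights[1:]) / 2.0
            mids += rng.normal(0.0, amplitude, size=mids.size)
            # interleave existing points and new midpoints
            out = np.empty(heights.size + mids.size)
            out[0::2], out[1::2] = heights, mids
            heights = out
            amplitude *= roughness                  # noise shrinks at finer scales
        return heights

    profile = midpoint_displacement(levels=8)
    print(profile.size, profile.min(), profile.max())   # 257 samples of fractal terrain
    ```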

  20. Predicting Online Purchasing Behavior

    OpenAIRE

    W.R BUCKINX; D. VAN DEN POEL

    2003-01-01

    This empirical study investigates the contribution of different types of predictors to the purchasing behaviour at an online store. We use logit modelling to predict whether or not a purchase is made during the next visit to the website using both forward and backward variable-selection techniques, as well as Furnival and Wilson’s global score search algorithm to find the best subset of predictors. We contribute to the literature by using variables from four different categories in predicting...

  1. OKVAR-Boost: a novel boosting algorithm to infer nonlinear dynamics and interactions in gene regulatory networks.

    Science.gov (United States)

    Lim, Néhémy; Senbabaoglu, Yasin; Michailidis, George; d'Alché-Buc, Florence

    2013-06-01

    Reverse engineering of gene regulatory networks remains a central challenge in computational systems biology, despite recent advances facilitated by benchmark in silico challenges that have aided in calibrating their performance. A number of approaches using either perturbation (knock-out) or wild-type time-series data have appeared in the literature addressing this problem, with the latter using linear temporal models. Nonlinear dynamical models are particularly appropriate for this inference task, given the generation mechanism of the time-series data. In this study, we introduce a novel nonlinear autoregressive model based on operator-valued kernels that simultaneously learns the model parameters, as well as the network structure. A flexible boosting algorithm (OKVAR-Boost) that shares features from L2-boosting and randomization-based algorithms is developed to perform the tasks of parameter learning and network inference for the proposed model. Specifically, at each boosting iteration, a regularized Operator-valued Kernel-based Vector AutoRegressive model (OKVAR) is trained on a random subnetwork. The final model consists of an ensemble of such models. The empirical estimation of the ensemble model's Jacobian matrix provides an estimation of the network structure. The performance of the proposed algorithm is first evaluated on a number of benchmark datasets from the DREAM3 challenge and then on real datasets related to the In vivo Reverse-Engineering and Modeling Assessment (IRMA) and T-cell networks. The high-quality results obtained strongly indicate that it outperforms existing approaches. The OKVAR-Boost Matlab code is available as the archive: http://amis-group.fr/sourcecode-okvar-boost/OKVARBoost-v1.0.zip. Supplementary data are available at Bioinformatics online.

  2. The eCALM Trial-eTherapy for cancer appLying mindfulness: online mindfulness-based cancer recovery program for underserved individuals living with cancer in Alberta: protocol development for a randomized wait-list controlled clinical trial

    Directory of Open Access Journals (Sweden)

    Zernicke Kristin A

    2013-02-01

    Full Text Available Abstract Background Elevated stress can exacerbate cancer symptom severity, and after completion of primary cancer treatments, many individuals continue to have significant distress. Mindfulness-Based Cancer Recovery (MBCR) is an 8-week group psychosocial intervention consisting of training in mindfulness meditation and yoga designed to mitigate stress, pain, and chronic illness. Efficacy research shows face-to-face (F2F) MBCR programs have positive benefits for cancer patients; however, barriers exist that impede participation in F2F groups. While online MBCR groups are available to the public, none have been evaluated. Primary objective: determine whether underserved patients are willing to participate in and complete an online MBCR program. Secondary objectives: determine whether online MBCR will mirror previous efficacy findings from F2F MBCR groups on patient-reported outcomes. Method/design The study includes cancer patients in Alberta, exhibiting moderate distress, who do not have access to F2F MBCR. Participants will be randomized to either online MBCR or a waitlist for the next available group. An anticipated sample size of 64 participants will complete measures online pre- and post-treatment or waiting period. Feasibility will be tracked through monitoring numbers eligible and participating through each stage of the protocol. Discussion 47 participants have completed or are completing the intervention. Data suggest it is possible to conduct a randomized waitlist controlled trial of online MBCR to reach underserved cancer survivors. Trial registration Clinical Trials.gov Identifier: NCT01476891

  3. Identifying online user reputation in terms of user preference

    Science.gov (United States)

    Dai, Lu; Guo, Qiang; Liu, Xiao-Lu; Liu, Jian-Guo; Zhang, Yi-Cheng

    2018-03-01

    Identifying online user reputation is significant for online social systems. In this paper, taking into account the preference physics of online user collective behaviors, we present an improved group-based rating method for ranking online user reputation based on the user preference (PGR). All the ratings given by each specific user are mapped to the same rating criteria. By grouping users according to their mapped ratings, the online user reputation is calculated based on the corresponding group sizes. Results for MovieLens and Netflix data sets show that the AUC values of the PGR method can reach 0.9842 (0.9493) and 0.9995 (0.9987) for malicious (random) spammers, respectively, outperforming the results generated by the traditional group-based method, which indicates that the online preference plays an important role for measuring user reputation.
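
    A heavily simplified sketch of the group-based idea is given below: users giving the same (already mapped) rating to an item form a group, and users who consistently fall into large groups earn higher reputation. The rating data and the exact scoring rule are illustrative, not the PGR formula of the paper.

    ```python
    from collections import defaultdict

    # ratings[user][item] = rating on a common criterion (assumed already mapped)
    ratings = {
        "u1": {"i1": 5, "i2": 4, "i3": 5},
        "u2": {"i1": 5, "i2": 4, "i3": 5},
        "u3": {"i1": 5, "i2": 4, "i3": 4},
        "spammer": {"i1": 1, "i2": 1, "i3": 1},
    }

    # Group users per (item, rating) and size each group.
    group_size = defaultdict(int)
    for user, items in ratings.items():
        for item, r in items.items():
            group_size[(item, r)] += 1

    # A user's reputation is the average relative size of the groups they belong to.
    n_users = len(ratings)
    reputation = {
        user: sum(group_size[(item, r)] for item, r in items.items()) / (len(items) * n_users)
        for user, items in ratings.items()
    }
    print(reputation)   # the spammer, always alone in its groups, scores lowest
    ```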

  4. Online data quality monitoring system at BES Ⅲ

    International Nuclear Information System (INIS)

    Sun Xiaodong; Hu Jifeng; Zhao Haisheng; Ji Xiaobin; Wang Yifang; Liu Beijiang; Zheng Yangheng

    2012-01-01

    The online Data Quality Monitoring (DQM) tool plays an important role in the data recording process of HEP experiments. The BES Ⅲ DQM collects data from the online data flow, reconstructs them with offline reconstruction software and automatically analyzes the reconstructed data with user-defined algorithms. The DQM software is a scalable distributed system. The monitored results are gathered and displayed in various formats, which provides the shifter with current run information that can be used to identify problems quickly. This paper gives an overview of the DQM system at BES Ⅲ. (authors)

  5. A Novel Generic Ball Recognition Algorithm Based on Omnidirectional Vision for Soccer Robots

    Directory of Open Access Journals (Sweden)

    Hui Zhang

    2013-11-01

    Full Text Available It is significant for the final goal of RoboCup to realize the recognition of generic balls for soccer robots. In this paper, a novel generic ball recognition algorithm based on omnidirectional vision is proposed by combining the modified Haar-like features and AdaBoost learning algorithm. The algorithm is divided into offline training and online recognition. During the phase of offline training, numerous sub-images are acquired from various panoramic images, including generic balls, and then the modified Haar-like features are extracted from them and used as the input of the AdaBoost learning algorithm to obtain a classifier. During the phase of online recognition, and according to the imaging characteristics of our omnidirectional vision system, rectangular windows are defined to search for the generic ball along the rotary and radial directions in the panoramic image, and the learned classifier is used to judge whether a ball is included in the window. After the ball has been recognized globally, ball tracking is realized by integrating a ball velocity estimation algorithm to reduce the computational cost. The experimental results show that good performance can be achieved using our algorithm, and that the generic ball can be recognized and tracked effectively.

  6. Online Artifact Removal for Brain-Computer Interfaces Using Support Vector Machines and Blind Source Separation

    Directory of Open Access Journals (Sweden)

    Sebastian Halder

    2007-01-01

    ... that are designed for online usage. In order to select a suitable BSS/ICA method, three ICA algorithms (JADE, Infomax, and FastICA) and one BSS algorithm (AMUSE) are evaluated to determine their ability to isolate electromyographic (EMG) and electrooculographic (EOG) artifacts into individual components. An implementation of the selected BSS/ICA method with SVMs trained to classify EMG and EOG artifacts, which enables the usage of the method as a filter in measurements with online feedback, is described. This filter is evaluated on three BCI datasets as a proof-of-concept of the method.
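
    The BSS/ICA filtering step can be sketched as below: decompose multichannel signals into independent components, zero the components flagged as artifacts, and reconstruct. Here a simple kurtosis threshold stands in for the trained SVM classifiers, and the signals are synthetic.

    ```python
    import numpy as np
    from scipy.stats import kurtosis
    from sklearn.decomposition import FastICA

    # Synthetic 4-channel "EEG": sinusoidal brain activity plus a spiky artifact source.
    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 2000)
    brain = np.sin(2 * np.pi * 10 * t)
    artifact = (rng.random(t.size) < 0.01) * 50.0           # sparse, high-amplitude spikes
    mixing = rng.normal(size=(4, 2))
    X = np.column_stack([brain, artifact]) @ mixing.T + 0.1 * rng.normal(size=(t.size, 4))

    ica = FastICA(n_components=4, random_state=0)
    S = ica.fit_transform(X)                                 # independent components

    # Stand-in for the SVM artifact classifier: flag very "peaky" components.
    is_artifact = kurtosis(S, axis=0) > 10.0
    S[:, is_artifact] = 0.0
    X_clean = ica.inverse_transform(S)                       # artifact-reduced channels
    print(is_artifact)
    ```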

  7. On-line compression of symmetrical multidimensional γ-ray spectra using adaptive orthogonal transforms

    International Nuclear Information System (INIS)

    Morhac, M.; Matousek, V.

    2008-01-01

    An efficient algorithm to compress multidimensional symmetrical γ-ray events is presented. The reduction of data volume can be achieved due to both the symmetry of the γ-ray spectra and the compression capabilities of the employed adaptive orthogonal transform. Illustrative examples argue in favor of the proposed compression algorithm. The algorithm was implemented for on-line compression of events. Acquired compressed data can later be processed in an interactive way.

  8. Online Speed Scaling Based on Active Job Count to Minimize Flow Plus Energy

    DEFF Research Database (Denmark)

    Lam, Tak-Wah; Lee, Lap Kei; To, Isaac K. K.

    2013-01-01

    This paper is concerned with online scheduling algorithms that aim at minimizing the total flow time plus energy usage. The results are divided into two parts. First, we consider the well-studied “simple” speed scaling model and show how to analyze a speed scaling algorithm (called AJC) that chan...

  9. The online data filters for Explorer and Nautilus

    International Nuclear Information System (INIS)

    D'Antonio, S

    2002-01-01

    The basic problem for gravitational wave detectors is to detect very small signals in the presence of noise which is often not Gaussian and not stationary. We cope with this problem by applying data filters, matched to short bursts, based on power spectra obtained online. We describe the new procedure adopted by the Rome group, where the signal-to-noise ratios obtained with various algorithms are compared and the best one is selected. The selected algorithm depends on the noise characteristics at that particular time

  10. On-line monitoring the extract process of Fu-fang Shuanghua oral solution using near infrared spectroscopy and different PLS algorithms

    Science.gov (United States)

    Kang, Qian; Ru, Qingguo; Liu, Yan; Xu, Lingyan; Liu, Jia; Wang, Yifei; Zhang, Yewen; Li, Hui; Zhang, Qing; Wu, Qing

    2016-01-01

    An on-line near infrared (NIR) spectroscopy monitoring method with an appropriate multivariate calibration method was developed for the extraction process of Fu-fang Shuanghua oral solution (FSOS). On-line NIR spectra were collected through two fiber optic probes, which were designed to transmit NIR radiation by a 2 mm flange. Partial least squares (PLS), interval PLS (iPLS) and synergy interval PLS (siPLS) algorithms were used comparatively for building the calibration regression models. During the extraction process, the feasibility of NIR spectroscopy for determining the concentrations of chlorogenic acid (CA), total phenolic acids (TPC), total flavonoids (TFC) and soluble solids (SSC) was evaluated. High performance liquid chromatography (HPLC), the ultraviolet spectrophotometric method (UV) and loss-on-drying methods were employed as reference methods. Experimental results showed that the performance of the siPLS model is the best compared with PLS and iPLS. The calibration models for CA, TPC, TFC and SSC had high determination coefficients (R2) (0.9948, 0.9992, 0.9950 and 0.9832) and low root mean square errors of cross validation (RMSECV) (0.0113, 0.0341, 0.1787 and 1.2158), which indicate a good correlation between reference values and NIR-predicted values. The overall results show that the on-line detection method could be feasible in real applications and would be of great value for monitoring the mixed decoction process of FSOS and other Chinese patent medicines.
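
    A minimal sketch of the multivariate calibration step is shown below: PLS regression from spectra to a constituent concentration, with a cross-validated RMSE in the spirit of RMSECV. The spectra are synthetic, and the interval and synergy-interval variants would simply restrict which wavelength bands enter the model.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    # Synthetic stand-in for NIR spectra (200 samples x 100 wavelengths) whose
    # absorbance depends on the analyte concentration plus baseline noise.
    rng = np.random.default_rng(0)
    conc = rng.uniform(0.1, 2.0, 200)                     # e.g. chlorogenic acid content
    pure_spectrum = np.exp(-0.5 * ((np.arange(100) - 40) / 8.0) ** 2)
    X = np.outer(conc, pure_spectrum) + 0.02 * rng.normal(size=(200, 100))

    pls = PLSRegression(n_components=3)
    pred = cross_val_predict(pls, X, conc, cv=10).ravel()
    rmsecv = np.sqrt(np.mean((pred - conc) ** 2))
    r2 = np.corrcoef(pred, conc)[0, 1] ** 2
    print(round(rmsecv, 4), round(r2, 4))                 # low RMSECV, R2 close to 1
    ```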

  11. Hidden Markov Model Application to Transfer The Trader Online Forex Brokers

    Directory of Open Access Journals (Sweden)

    Farida Suharleni

    2012-05-01

    Full Text Available The Hidden Markov Model is an elaboration of the Markov chain that applies to cases in which the states cannot be observed directly. In this research, a Hidden Markov Model is used to describe traders' transitions between online forex brokers. In a Hidden Markov Model, the observed state is the observable part and the hidden state is the hidden part; the model therefore allows systems with interrelated observed and hidden states to be modeled. Here the observed states of a trader's transition are category 1, category 2, category 3, category 4 and category 5, conditioned on each online forex broker, whereas the hidden states are the online forex brokers Marketiva, Masterforex, Instaforex, FBS and Others. The first step in applying the Hidden Markov Model in this research is constructing the model by forming the transition probability matrix (A) over the online forex brokers. The next step is forming the observation probability matrix (B) from the conditional probabilities of the five categories given each online forex broker, and determining the initial state probabilities (π) of the online forex brokers. The last step is using the Viterbi algorithm to find the hidden state sequence, that is, the most probable sequence of online forex brokers given the model and the observed states (the five categories). The Hidden Markov Model is applied by implementing the Viterbi algorithm in a program written in Delphi 7.0, with observed states based on simulation data. Example: for T = 5 observations with observed state sequence O = (2,4,3,5,1), the most probable hidden state sequence given O is found as follows: where X1 = FBS, X2 = Masterforex, X3 = Marketiva, X4 = Others, and X5 = Instaforex.
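
    A standard Viterbi decoder for recovering the most probable hidden broker sequence from an observed category sequence can be sketched as follows; the probability matrices below are illustrative placeholders, not the study's estimates.

    ```python
    import numpy as np

    def viterbi(obs, pi, A, B):
        """Most probable hidden-state path for an observation sequence (log domain)."""
        n_states, T = A.shape[0], len(obs)
        logd = np.log(pi) + np.log(B[:, obs[0]])
        back = np.zeros((T, n_states), dtype=int)
        for t in range(1, T):
            trans = logd[:, None] + np.log(A)           # score of moving i -> j
            back[t] = trans.argmax(axis=0)
            logd = trans.max(axis=0) + np.log(B[:, obs[t]])
        path = [int(logd.argmax())]
        for t in range(T - 1, 0, -1):
            path.append(int(back[t][path[-1]]))
        return path[::-1]

    # 5 hidden brokers, 5 observed rating categories (placeholder probabilities).
    pi = np.full(5, 0.2)
    A = np.full((5, 5), 0.2)
    B = np.full((5, 5), 0.05)
    np.fill_diagonal(B, 0.8)                             # category i most likely under broker i
    print(viterbi([1, 3, 2, 4, 0], pi, A, B))            # -> [1, 3, 2, 4, 0]
    ```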

  12. Data-driven process decomposition and robust online distributed modelling for large-scale processes

    Science.gov (United States)

    Shu, Zhang; Lijuan, Li; Lijuan, Yao; Shipin, Yang; Tao, Zou

    2018-02-01

    With the increasing attention on networked control, system decomposition and distributed models are of significant importance in the implementation of model-based control strategies. In this paper, a data-driven system decomposition and online distributed subsystem modelling algorithm is proposed for large-scale chemical processes. The key controlled variables are first partitioned into several clusters by the affinity propagation clustering algorithm. Each cluster can be regarded as a subsystem. Then the inputs of each subsystem are selected by offline canonical correlation analysis between all process variables and its controlled variables. Process decomposition is then realised after the screening of input and output variables. Once the system decomposition is finished, online subsystem modelling can be carried out by recursively renewing the samples block-wise. The proposed algorithm was applied to the Tennessee Eastman process and its validity was verified.
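
    The decomposition step can be sketched as below: cluster the key controlled variables with affinity propagation so that each cluster becomes a candidate subsystem. The variable trajectories and the use of correlation as the similarity measure are assumptions for illustration.

    ```python
    import numpy as np
    from sklearn.cluster import AffinityPropagation

    # Synthetic trajectories of 6 controlled variables (rows) over 300 samples:
    # variables 0-2 move together, variables 3-5 move together.
    rng = np.random.default_rng(0)
    base1, base2 = rng.normal(size=300), rng.normal(size=300)
    Y = np.vstack([base1 + 0.1 * rng.normal(size=300) for _ in range(3)] +
                  [base2 + 0.1 * rng.normal(size=300) for _ in range(3)])

    # Use the correlation between variables as the (precomputed) similarity matrix.
    similarity = np.corrcoef(Y)
    labels = AffinityPropagation(affinity="precomputed", random_state=0).fit_predict(similarity)
    print(labels)    # two clusters, i.e. two candidate subsystems
    ```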

  13. Good control practices underlined by an on-line fuzzy control database

    Directory of Open Access Journals (Sweden)

    Alonso, M. V.

    1994-04-01

    Full Text Available In the olive oil trade, control systems that automate extraction processes, cutting production costs and increasing processing capacity without losing quality, are always desirable. The database structure of an on-line fuzzy control of centrifugation systems and the algorithms used to attain the best control conditions are analysed. Good control practices are suggested to obtain virgin olive oil of prime quality.

  14. Online Particle Detection by Neural Networks Based on Topologic Calorimetry Information

    CERN Document Server

    Ciodaro, T; The ATLAS collaboration; Damazio, D; de Seixas, JM

    2011-01-01

    This paper presents the latest results from the Ringer algorithm, which is based on artificial neural networks for electron identification at the online filtering system of the ATLAS particle detector, in the context of the LHC experiment at CERN. The algorithm performs topological feature extraction over the ATLAS calorimetry information (energy measurements). Later, the extracted information is presented to a neural network classifier. Studies showed that the Ringer algorithm achieves high detection efficiency, while keeping the false alarm rate low. Optimizations, guided by detailed analysis, reduced the algorithm execution time by 59%. Also, the payload necessary to store the Ringer algorithm information represents less than 6.2 percent of the filtering system total.

  15. Evaluating progressive-rendering algorithms in appearance design tasks.

    Science.gov (United States)

    Jiawei Ou; Karlik, Ondrej; Křivánek, Jaroslav; Pellacini, Fabio

    2013-01-01

    Progressive rendering is becoming a popular alternative to precomputational approaches to appearance design. However, progressive algorithms create images exhibiting visual artifacts at early stages. A user study investigated these artifacts' effects on user performance in appearance design tasks. Novice and expert subjects performed lighting and material editing tasks with four algorithms: random path tracing, quasirandom path tracing, progressive photon mapping, and virtual-point-light rendering. Both the novices and experts strongly preferred path tracing to progressive photon mapping and virtual-point-light rendering. None of the participants preferred random path tracing to quasirandom path tracing or vice versa; the same situation held between progressive photon mapping and virtual-point-light rendering. The user workflow didn’t differ significantly with the four algorithms. The Web Extras include a video showing how four progressive-rendering algorithms converged (at http://youtu.be/ck-Gevl1e9s), the source code used, and other supplementary materials.

  16. Teaching in Cyberspace: Online versus Traditional Instruction Using a Waiting-List Experimental Design

    Science.gov (United States)

    Poirier, Christopher R.; Feldman, Robert S.

    2004-01-01

    To test the effectiveness of an online introductory psychology course, we randomly assigned students to a large, traditional course or to an online course from a population of students who indicated that either course type was acceptable using a "waiting list" experimental design. Students in the online course performed better on exams and equally…

  17. A homology sound-based algorithm for speech signal interference

    Science.gov (United States)

    Jiang, Yi-jiao; Chen, Hou-jin; Li, Ju-peng; Zhang, Zhan-song

    2015-12-01

    Aiming at secure analog speech communication, a homology sound-based algorithm for speech signal interference is proposed in this paper. We first split the speech signal into phonetic fragments by a short-term energy method and establish an interference noise cache library with the phonetic fragments. Then we implement the homology sound interference by mixing the randomly selected interferential fragments and the original speech in real time. The computer simulation results indicated that the interference produced by this algorithm has the advantages of real-time operation, randomness, and high correlation with the original signal, compared with traditional noise interference methods such as white noise interference. After further study, the proposed algorithm may be readily used in secure speech communication.

  18. Support or competition? How online social networks increase physical activity: A randomized controlled trial

    Directory of Open Access Journals (Sweden)

    Jingwen Zhang, PhD

    2016-12-01

    Full Text Available To identify what features of online social networks can increase physical activity, we conducted a 4-arm randomized controlled trial in 2014 in Philadelphia, PA. Students (n = 790, mean age = 25.2) at a university were randomly assigned to one of four conditions composed of either supportive or competitive relationships and either individual or team incentives for attending exercise classes. The social comparison condition placed participants into 6-person competitive networks with individual incentives. The social support condition placed participants into 6-person teams with team incentives. The combined condition with both supportive and competitive relationships placed participants into 6-person teams, where participants could compare their team's performance to 5 other teams' performances. The control condition only allowed participants to attend classes with individual incentives. Rewards were based on the total number of classes attended by an individual, or the average number of classes attended by the members of a team. The outcome was the number of classes that participants attended. Data were analyzed using multilevel models in 2014. The mean attendance numbers per week were 35.7, 38.5, 20.3, and 16.8 in the social comparison, the combined, the control, and the social support conditions. Attendance numbers were 90% higher in the social comparison and the combined conditions (mean = 1.9, SE = 0.2) in contrast to the two conditions without comparison (mean = 1.0, SE = 0.2) (p = 0.003). Social comparison was more effective for increasing physical activity than social support and its effects did not depend on individual or team incentives.

  19. Automatic online spike sorting with singular value decomposition and fuzzy C-mean clustering

    Directory of Open Access Journals (Sweden)

    Oliynyk Andriy

    2012-08-01

    Full Text Available Abstract Background Understanding how neurons contribute to perception, motor functions and cognition requires the reliable detection of spiking activity of individual neurons during a number of different experimental conditions. An important problem in computational neuroscience is thus to develop algorithms to automatically detect and sort the spiking activity of individual neurons from extracellular recordings. While many algorithms for spike sorting exist, the problem of accurate and fast online sorting still remains a challenging issue. Results Here we present a novel software tool, called FSPS (Fuzzy SPike Sorting), which is designed to optimize: (i) fast and accurate detection, (ii) offline sorting and (iii) online classification of neuronal spikes with very limited or null human intervention. The method is based on a combination of Singular Value Decomposition for fast and highly accurate pre-processing of spike shapes, unsupervised Fuzzy C-mean, high-resolution alignment of extracted spike waveforms, optimal selection of the number of features to retain, automatic identification of the number of clusters, and quantitative quality assessment of resulting clusters independent of their size. After being trained on a short testing data stream, the method can reliably perform supervised online classification and monitoring of single neuron activity. The generalized procedure has been implemented in our FSPS spike sorting software (available free for non-commercial academic applications at the address: http://www.spikesorting.com) using LabVIEW (National Instruments, USA). We evaluated the performance of our algorithm both on benchmark simulated datasets with different levels of background noise and on real extracellular recordings from premotor cortex of Macaque monkeys. The results of these tests showed an excellent accuracy in discriminating low-amplitude and overlapping spikes under strong background noise. The performance of our method is

  20. Automatic online spike sorting with singular value decomposition and fuzzy C-mean clustering.

    Science.gov (United States)

    Oliynyk, Andriy; Bonifazzi, Claudio; Montani, Fernando; Fadiga, Luciano

    2012-08-08

    Understanding how neurons contribute to perception, motor functions and cognition requires the reliable detection of spiking activity of individual neurons during a number of different experimental conditions. An important problem in computational neuroscience is thus to develop algorithms to automatically detect and sort the spiking activity of individual neurons from extracellular recordings. While many algorithms for spike sorting exist, the problem of accurate and fast online sorting still remains a challenging issue. Here we present a novel software tool, called FSPS (Fuzzy SPike Sorting), which is designed to optimize: (i) fast and accurate detection, (ii) offline sorting and (iii) online classification of neuronal spikes with very limited or null human intervention. The method is based on a combination of Singular Value Decomposition for fast and highly accurate pre-processing of spike shapes, unsupervised Fuzzy C-mean, high-resolution alignment of extracted spike waveforms, optimal selection of the number of features to retain, automatic identification of the number of clusters, and quantitative quality assessment of resulting clusters independent of their size. After being trained on a short testing data stream, the method can reliably perform supervised online classification and monitoring of single neuron activity. The generalized procedure has been implemented in our FSPS spike sorting software (available free for non-commercial academic applications at the address: http://www.spikesorting.com) using LabVIEW (National Instruments, USA). We evaluated the performance of our algorithm both on benchmark simulated datasets with different levels of background noise and on real extracellular recordings from premotor cortex of Macaque monkeys. The results of these tests showed an excellent accuracy in discriminating low-amplitude and overlapping spikes under strong background noise. The performance of our method is competitive with respect to other robust spike
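
    The two core steps named above can be sketched as follows: project spike waveforms onto their leading SVD components and cluster the projections with a small fuzzy C-means loop written out explicitly. The waveforms are synthetic and the cluster count is fixed at two for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 48)
    templates = [np.exp(-((t - 0.3) / 0.05) ** 2), -np.exp(-((t - 0.5) / 0.08) ** 2)]
    spikes = np.vstack([tpl + 0.1 * rng.normal(size=t.size)
                        for tpl in templates for _ in range(100)])

    # SVD-based feature extraction: keep the two leading right singular vectors.
    centered = spikes - spikes.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    features = centered @ Vt[:2].T

    def fuzzy_cmeans(X, c=2, m=2.0, n_iter=100, seed=0):
        """Minimal fuzzy C-means: returns the membership matrix U (n_samples x c)."""
        rng = np.random.default_rng(seed)
        U = rng.random((len(X), c))
        U /= U.sum(axis=1, keepdims=True)
        for _ in range(n_iter):
            W = U ** m
            centers = (W.T @ X) / W.sum(axis=0)[:, None]
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
            U = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
        return U

    labels = fuzzy_cmeans(features).argmax(axis=1)
    print(np.bincount(labels))        # two clusters of roughly 100 spikes each
    ```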

  1. Enhanced sampling algorithms.

    Science.gov (United States)

    Mitsutake, Ayori; Mori, Yoshiharu; Okamoto, Yuko

    2013-01-01

    In biomolecular systems (especially all-atom models) with many degrees of freedom such as proteins and nucleic acids, there exists an astronomically large number of local-minimum-energy states. Conventional simulations in the canonical ensemble are of little use, because they tend to get trapped in these local-minimum-energy states. Enhanced conformational sampling techniques are thus in great demand. A simulation in a generalized ensemble performs a random walk in potential energy space and can overcome this difficulty. From only one simulation run, one can obtain canonical-ensemble averages of physical quantities as functions of temperature by the single-histogram and/or multiple-histogram reweighting techniques. In this article we review uses of the generalized-ensemble algorithms in biomolecular systems. Three well-known methods, namely, the multicanonical algorithm, simulated tempering, and the replica-exchange method, are described first. Both Monte Carlo and molecular dynamics versions of the algorithms are given. We then present various extensions of these three generalized-ensemble algorithms. The effectiveness of the methods is tested with short peptide and protein systems.
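
    A tiny sketch of the replica-exchange idea is given below: two replicas at different temperatures evolve by Metropolis moves and periodically attempt a swap with the standard exchange acceptance probability. The double-well energy, step sizes and temperatures are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    energy = lambda x: (x ** 2 - 1.0) ** 2            # toy double-well potential

    def metropolis_step(x, beta):
        x_new = x + rng.normal(0, 0.3)
        d_e = energy(x_new) - energy(x)
        if d_e <= 0 or rng.random() < np.exp(-beta * d_e):
            return x_new
        return x

    betas = [5.0, 1.0]                # inverse temperatures of the two replicas
    x = [1.0, 1.0]
    swaps = 0
    for step in range(20000):
        x = [metropolis_step(xi, b) for xi, b in zip(x, betas)]
        if step % 10 == 0:            # attempt an exchange every 10 sweeps
            delta = (betas[0] - betas[1]) * (energy(x[0]) - energy(x[1]))
            if rng.random() < np.exp(min(0.0, delta)):
                x[0], x[1] = x[1], x[0]
                swaps += 1
    print(swaps, x)                   # the cold replica escapes its well via exchanges
    ```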

  2. Improved Collaborative Filtering Algorithm using Topic Model

    Directory of Open Access Journals (Sweden)

    Liu Na

    2016-01-01

    Full Text Available Collaborative filtering algorithms make use of interaction ratings between users and items to generate recommendations. Similarity among users or items is mostly calculated based on ratings, without considering explicit properties of the users or items involved. In this paper, we propose a collaborative filtering algorithm using a topic model. We describe the user-item matrix as a document-word matrix; users are represented as random mixtures over items, and each item is characterized by a distribution over users. The experiments showed that the proposed algorithm achieved better performance compared with other state-of-the-art algorithms on the MovieLens data sets.

  3. Simulation of random walks in field theory

    International Nuclear Information System (INIS)

    Rensburg, E.J.J. van

    1988-01-01

    The numerical simulation of random walks is considered using the Monte Carlo method previously proposed. The algorithm is tested and then generalised to generate Edwards random walks. The renormalised masses of the Edwards model are calculated and the results are compared with those obtained from a simple perturbation theory calculation for small values of the bare coupling constant. The efficiency of this algorithm is discussed and compared with an alternative approach. (author)

  4. Automatic gender determination from 3D digital maxillary tooth plaster models based on the random forest algorithm and discrete cosine transform.

    Science.gov (United States)

    Akkoç, Betül; Arslan, Ahmet; Kök, Hatice

    2017-05-01

    One of the first stages in the identification of an individual is gender determination. Through gender determination, the search spectrum can be reduced. In disasters such as accidents or fires, which can render identification somewhat difficult, durable teeth are an important source for identification. This study proposes a smart system that can automatically determine gender using 3D digital maxillary tooth plaster models. The study group was composed of 40 Turkish individuals (20 female, 20 male) between the ages of 21 and 24. Using the iterative closest point (ICP) algorithm, tooth models were aligned, and after the segmentation process, models were transformed into depth images. The local discrete cosine transform (DCT) was used in the process of feature extraction, and the random forest (RF) algorithm was used for the process of classification. Classification was performed using 30 different seeds for random generator values and 10-fold cross-validation. A value of 85.166% was obtained for average classification accuracy (CA) and a value of 91.75% for the area under the ROC curve (AUC). A multi-disciplinary study is performed here that includes computer sciences, medicine and dentistry. A smart system is proposed for the determination of gender from 3D digital models of maxillary tooth plaster models. This study has the capacity to extend the field of gender determination from teeth. Copyright © 2017 Elsevier B.V. All rights reserved.
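
    The feature/classifier pairing described above can be sketched as follows: a low-frequency block of the 2D DCT of each depth image feeds a random forest evaluated by cross-validation. The depth images are synthetic stand-ins, and the block size and forest settings are arbitrary choices.

    ```python
    import numpy as np
    from scipy.fft import dctn
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    def make_depth_image(label):
        """Synthetic 32x32 'depth image'; class 1 gets a slightly wider arch."""
        x, y = np.meshgrid(np.linspace(-1, 1, 32), np.linspace(-1, 1, 32))
        width = 0.5 + 0.1 * label
        return np.exp(-(x / width) ** 2 - y ** 2) + 0.05 * rng.normal(size=(32, 32))

    labels = np.array([0, 1] * 20)
    images = [make_depth_image(lab) for lab in labels]

    # Feature extraction: keep an 8x8 block of low-frequency DCT coefficients.
    features = np.array([dctn(img, norm="ortho")[:8, :8].ravel() for img in images])

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(clf, features, labels, cv=10)
    print(scores.mean())      # cross-validated classification accuracy
    ```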

  5. Lower Bounds and Semi On-line Multiprocessor Scheduling

    Directory of Open Access Journals (Sweden)

    T.C. Edwin Cheng

    2003-10-01

    Full Text Available We are given a set of identical machines and a sequence of jobs for which the sum of the job weights is known in advance. The jobs have to be assigned on-line to one of the machines, and the objective is to minimize the makespan. We present an algorithm with performance ratio 1.6 and a lower bound of 1.5. This improves recent results by Azar and Regev, who published an algorithm with performance ratio 1.625 for the less general problem in which the optimal makespan is known in advance.
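
    The sketch below is not the paper's 1.6-competitive rule; it only illustrates the setting: jobs arrive one by one, each must be assigned immediately, and the known total weight yields the usual lower bound max(total/m, largest job) on the optimal makespan. The greedy least-loaded rule is used purely as a placeholder policy.

    ```python
    import heapq

    def semi_online_greedy(jobs, m):
        """Assign each arriving job to the least-loaded machine (placeholder policy).
        The known sum of job weights gives a lower bound on the optimal makespan."""
        opt_lower_bound = max(sum(jobs) / m, max(jobs))
        loads = [(0.0, i) for i in range(m)]
        heapq.heapify(loads)
        assignment = []
        for job in jobs:
            load, machine = heapq.heappop(loads)
            assignment.append(machine)
            heapq.heappush(loads, (load + job, machine))
        makespan = max(load for load, _ in loads)
        return assignment, makespan, opt_lower_bound

    print(semi_online_greedy([3, 1, 4, 1, 5, 9, 2, 6], m=3))   # hypothetical job weights
    ```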

  6. Adaptive on-line calibration for around-view monitoring system using between-camera homography estimation

    Science.gov (United States)

    Lim, Sungsoo; Lee, Seohyung; Kim, Jun-geon; Lee, Daeho

    2018-01-01

    The around-view monitoring (AVM) system is one of the major applications of advanced driver assistance systems and intelligent transportation systems. We propose an on-line calibration method that can compensate for misalignments in AVM systems. Most AVM systems use fisheye undistortion, inverse perspective transformation, and geometrical registration methods. To perform these procedures, the parameters for each process must be known; the procedure by which the parameters are estimated is referred to as the initial calibration. However, using only the initial calibration data, we cannot compensate for misalignments caused by changes in a car's equilibrium. Even small changes, such as tire pressure levels, passenger weight, or road conditions, can affect a car's equilibrium. Therefore, an additional technique is necessary to compensate for this misalignment, specifically an on-line calibration method. On-line calibration recalculates the homographies, which can correct any degree of misalignment using the characteristic features of ordinary parking lanes. To extract features from the parking lanes, the method uses corner detection and a pattern-matching algorithm. From the extracted features, homographies are estimated using random sample consensus and parameter estimation. Finally, the misaligned epipolar geometries are compensated via the estimated homographies. Thus, the proposed method can render the image planes parallel to the ground. This method does not require any designated patterns and can be used whenever cars are placed in a parking lot. The experimental results show the robustness and efficiency of the method.
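
    As a small sketch of the RANSAC homography estimation step mentioned above, the snippet below uses OpenCV's findHomography on a handful of hypothetical matched corner points; the point coordinates and the reprojection threshold are made-up values, and the surrounding feature detection and pattern matching are omitted.

    ```python
    import numpy as np
    import cv2

    # Hypothetical matched corner features from parking-lane markings in two views.
    src_pts = np.float32([[100, 200], [400, 210], [120, 600], [420, 590],
                          [250, 300], [260, 500]]).reshape(-1, 1, 2)
    dst_pts = np.float32([[110, 190], [410, 205], [115, 610], [430, 600],
                          [255, 295], [258, 505]]).reshape(-1, 1, 2)

    # RANSAC rejects mismatched features before the homography is refined.
    H, inlier_mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 3.0)
    print(H)
    print(inlier_mask.ravel())
    ```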

  7. Communication Pathways in the Light Water Reactor Sustainability Online Monitoring Project

    International Nuclear Information System (INIS)

    Lybeck, Nancy J.; Tawfik, Magdy S.; Pham, Binh T.; Agarwal, Vivek; Coble, Jamie

    2011-01-01

    Implementation of online monitoring and prognostics in existing U.S. nuclear power plants (NPPs) will involve coordinating the efforts of national laboratories, utilities, universities, and private companies. Large amounts of operational data, including failure data, are necessary for the development and calibration of diagnostic and prognostic algorithms. The ability to use data from all available resources will provide the most expeditious avenue to implementation of online monitoring in existing NPPs; however, operational plant data are often considered proprietary. Secure methods for transferring and storing data are discussed, along with a potential technology for implementation of online monitoring.

  8. The power of reordering for online minimum makespan scheduling

    OpenAIRE

    Englert, Matthias; Özmen, Deniz; Westermann, Matthias

    2014-01-01

    In the classic minimum makespan scheduling problem, we are given an input sequence of jobs with processing times. A scheduling algorithm has to assign the jobs to m parallel machines. The objective is to minimize the makespan, which is the time it takes until all jobs are processed. In this paper, we consider online scheduling algorithms without preemption. However, we do not require that each arriving job has to be assigned immediately to one of the machines. A reordering buffer with limited...
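
    The abstract above is truncated, so the sketch below only illustrates the general idea of a reordering buffer: incoming jobs are held in a small buffer and released to machines in a chosen order instead of being assigned immediately. The release rule shown (largest buffered job to the least-loaded machine) is a toy heuristic, not the algorithm analyzed in the paper.

    ```python
    def schedule_with_buffer(jobs, m, buffer_size):
        """Toy reordering-buffer heuristic: hold up to buffer_size jobs and always
        release the largest one to the least-loaded machine."""
        loads = [0.0] * m
        buffer = []

        def release_one():
            job = max(buffer)
            buffer.remove(job)
            target = min(range(m), key=loads.__getitem__)
            loads[target] += job

        for job in jobs:
            buffer.append(job)
            if len(buffer) > buffer_size:
                release_one()
        while buffer:
            release_one()
        return max(loads)          # makespan

    print(schedule_with_buffer([2, 9, 3, 7, 1, 8, 4], m=2, buffer_size=3))
    ```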

  9. 3D Multisource Full‐Waveform Inversion using Dynamic Random Phase Encoding

    KAUST Repository

    Boonyasiriwat, Chaiwoot

    2010-10-17

    We have developed a multisource full‐waveform inversion algorithm using a dynamic phase encoding strategy with dual‐randomization—both the position and polarity of simultaneous sources are randomized and changed every iteration. The dynamic dual‐randomization is used to promote the destructive interference of crosstalk noise resulting from blending a large number of common shot gathers into a supergather. We compare our multisource algorithm with various algorithms in a numerical experiment using the 3D SEG/EAGE overthrust model and show that our algorithm provides a higher‐quality velocity tomogram than the other methods that use only monorandomization. This suggests that increasing the degree of randomness in phase encoding should improve the quality of the inversion result.
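
    A rough sketch of the blending step is shown below: common-shot gathers are summed into a supergather with a random polarity per source and an encoding that is re-drawn every iteration so that crosstalk tends to interfere destructively. Random time shifts are used here as a simple stand-in for the randomized source positions of the paper, and all data are synthetic.

    ```python
    import numpy as np

    def blend_supergather(shot_gathers, rng, max_shift=50):
        """Blend shot gathers into one supergather with random polarity and a random
        time shift per source (a stand-in for randomized source positions)."""
        n_shots, n_receivers, n_t = shot_gathers.shape
        supergather = np.zeros((n_receivers, n_t + max_shift))
        for gather in shot_gathers:
            polarity = rng.choice([-1.0, 1.0])
            shift = rng.integers(0, max_shift + 1)
            supergather[:, shift:shift + n_t] += polarity * gather
        return supergather

    rng = np.random.default_rng(4)
    shots = rng.normal(size=(16, 32, 500))          # 16 synthetic common-shot gathers
    for iteration in range(3):                      # re-draw the encoding every iteration
        supergather = blend_supergather(shots, rng)
    print(supergather.shape)
    ```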

  10. Social image of students who shop and don't shop online.

    Science.gov (United States)

    Lammers, H Bruce; Curren, Mary T; Cours, Deborah; Lammers, Marilyn L

    2003-06-01

    A descriptive survey of a stratified random sample of 326 undergraduates from a large, diverse university in Los Angeles was conducted to assess whether resistance to online shopping might be, in part, related to negative social perceptions of those who shop online. Indirect questioning showed that students perceived online student shoppers as more lazy and less likely to fear for the safety and security of others but also as more trustworthy, attractive, successful, and smart. Differences in social perceptions were not related to these students' own online spending.

  11. Augmenting Outpatient Alcohol Treatment as Usual With Online Alcohol Avoidance Training: Protocol for a Double-Blind Randomized Controlled Trial.

    Science.gov (United States)

    Bratti-van der Werf, Marleen Kj; Laurens, Melissa C; Postel, Marloes G; Pieterse, Marcel E; Ben Allouch, Somaya; Wiers, Reinout W; Bohlmeijer, Ernst T; Salemink, Elske

    2018-03-01

    Recent theoretical models emphasize the role of impulsive processes in alcohol addiction, which can be retrained with computerized Cognitive Bias Modification (CBM) training. In this study, the focus is on action tendencies that are activated relatively automatically. The aim of the study is to examine the effectiveness of online CBM Alcohol Avoidance Training using an adapted Approach-Avoidance Task as a supplement to treatment as usual (TAU) in an outpatient treatment setting. The effectiveness of 8 online sessions of CBM Alcohol Avoidance Training added to TAU is tested in a double-blind, randomized controlled trial with pre- and postassessments, plus follow-up assessments after 3 and 6 months. Participants are adult patients (age 18 years or over) currently following Web-based or face-to-face TAU to reduce or stop drinking. These patients are randomly assigned to a CBM Alcohol Avoidance or a placebo training. The primary outcome measure is a reduction in alcohol consumption. We hypothesize that TAU + CBM will result in up to a 13-percentage point incremental effect in the number of patients reaching the safe drinking guidelines compared to TAU + placebo CBM. Secondary outcome measures include an improvement in health status and a decrease in depression, anxiety, stress, and possible mediation by the change in approach bias. Finally, patients' adherence, acceptability, and credibility will be examined. The trial was funded in 2014 and is currently in the active participant recruitment phase (since May 2015). Enrolment will be completed in 2019. First results are expected to be submitted for publication in 2020. The main purpose of this study is to increase our knowledge about the added value of online Alcohol Avoidance Training as a supplement to TAU in an outpatient treatment setting. If the added effectiveness of the training is proven, the next step could be to incorporate the intervention into current treatment. Netherlands Trial Register NTR5087; http

  12. 5th Computer Science On-line Conference

    CERN Document Server

    Senkerik, Roman; Oplatkova, Zuzana; Silhavy, Petr; Prokopova, Zdenka

    2016-01-01

    This volume is based on the research papers presented in the 5th Computer Science On-line Conference. The volume Artificial Intelligence Perspectives in Intelligent Systems presents modern trends and methods to real-world problems, and in particular, exploratory research that describes novel approaches in the field of artificial intelligence. New algorithms in a variety of fields are also presented. The Computer Science On-line Conference (CSOC 2016) is intended to provide an international forum for discussions on the latest research results in all areas related to Computer Science. The addressed topics are the theoretical aspects and applications of Computer Science, Artificial Intelligences, Cybernetics, Automation Control Theory and Software Engineering.

  13. Online Measurement of LHC Beam Parameters with the ATLAS High Level Trigger

    CERN Document Server

    Strauss, E; The ATLAS collaboration

    2011-01-01

    We present an online measurement of the LHC beam parameters in ATLAS using the High Level Trigger (HLT). When a significant change is detected in the measured beamspot, it is distributed to the HLT. There, trigger algorithms like b-tagging which calculate impact parameters or decay lengths benefit from a precise, up-to-date set of beamspot parameters. Additionally, online feedback is sent to the LHC operators in real time. The measurement is performed by an algorithm running on the Level 2 trigger farm, leveraging the high rate of usable events. Dedicated algorithms perform a full scan of the silicon detector to reconstruct event vertices from registered tracks. The distribution of these vertices is aggregated across the farm and their shape is extracted through fits every 60 seconds to determine the beamspot position, size, and tilt. The reconstructed beam values are corrected for detector resolution effects, measured in situ using the separation of vertices whose tracks have been split into two collections....
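
    The snippet below sketches the final extraction step described above: aggregate reconstructed vertex positions and estimate the beamspot centroid, transverse sizes, and x-z / y-z tilts. The simple moment and line-fit estimates, the synthetic vertex sample, and the absence of the detector-resolution correction are all simplifications for illustration.

    ```python
    import numpy as np

    def beamspot_from_vertices(vx, vy, vz):
        """Estimate beamspot centroid, sizes and tilts from vertex positions (mm)."""
        centroid = np.array([vx.mean(), vy.mean(), vz.mean()])
        sizes = np.array([vx.std(), vy.std(), vz.std()])
        tilt_xz = np.polyfit(vz, vx, 1)[0]        # slope dx/dz
        tilt_yz = np.polyfit(vz, vy, 1)[0]        # slope dy/dz
        return centroid, sizes, tilt_xz, tilt_yz

    rng = np.random.default_rng(5)
    vz = rng.normal(0.0, 50.0, 20_000)                        # synthetic vertex sample
    vx = 0.5 + 2e-4 * vz + rng.normal(0.0, 0.015, vz.size)    # tilted in the x-z plane
    vy = -0.3 + rng.normal(0.0, 0.015, vz.size)
    print(beamspot_from_vertices(vx, vy, vz))
    ```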

  15. Online maintenance policy for a deteriorating system with random change of mode

    Energy Technology Data Exchange (ETDEWEB)

    Saassouh, B. [Laboratoire de Modelisation et Surete des Systemes, Institut Charles Delaunay-FRE CNRS 2848, Universite de Technologie de Troyes, 12, rue Marie Curie-BP 2060, 10010 Troyes Cedex (France); Dieulle, L. [Laboratoire de Modelisation et Surete des Systemes, Institut Charles Delaunay-FRE CNRS 2848, Universite de Technologie de Troyes, 12, rue Marie Curie-BP 2060, 10010 Troyes Cedex (France); Grall, A. [Laboratoire de Modelisation et Surete des Systemes, Institut Charles Delaunay-FRE CNRS 2848, Universite de Technologie de Troyes, 12, rue Marie Curie-BP 2060, 10010 Troyes Cedex (France)]. E-mail: antoine.grall@utt.fr

    2007-12-15

    Most maintenance policies proposed in the literature for gradually deteriorating systems consider a stationary deterioration process. This paper is an attempt to take into account stochastically deteriorating systems that are subject to a sudden change in their degradation process. A technical device subject to gradual degradation is considered. It is assumed that the level of degradation can be summarized by a single scalar variable. An online maintenance decision rule is proposed, which makes it possible to take into account in real time the available online information on the operating mode of the system as well as its actual deterioration level. We show the efficiency of online decision rules for maintenance with respect to traditional maintenance policies based on a static alarm threshold. Numerical simulations are given to assess and optimize the performance of the maintained system in terms of its asymptotic unavailability. The results are compared to those obtained with classical control-limit maintenance policies.
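
    The toy simulation below captures the flavour of the setting: a gamma-process degradation level, a random switch to a faster deterioration mode, and an online rule whose preventive-maintenance threshold depends on the currently observed mode. All rates, thresholds, and the decision rule itself are made-up illustrations, not the policy optimized in the paper.

    ```python
    import numpy as np

    def run_online_policy(rng, horizon=200.0, dt=1.0, failure_level=30.0):
        """Simulate one history and report which action ends it (toy numbers)."""
        rates = {0: 0.3, 1: 1.0}             # gamma shape increment per unit time, by mode
        thresholds = {0: 24.0, 1: 18.0}      # preventive alarm level, adapted to the mode
        switch_time = rng.exponential(80.0)  # random time of the mode change
        level, mode, t = 0.0, 0, 0.0
        while t < horizon:
            if t >= switch_time:
                mode = 1
            level += rng.gamma(shape=rates[mode] * dt, scale=1.0)
            if level >= failure_level:
                return "corrective", round(t, 1)
            if level >= thresholds[mode]:    # online rule: threshold follows the mode
                return "preventive", round(t, 1)
            t += dt
        return "no action", horizon

    rng = np.random.default_rng(6)
    print([run_online_policy(rng) for _ in range(5)])
    ```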

  16. Automatic motor task selection via a bandit algorithm for a brain-controlled button

    Science.gov (United States)

    Fruitet, Joan; Carpentier, Alexandra; Munos, Rémi; Clerc, Maureen

    2013-02-01

    Objective. Brain-computer interfaces (BCIs) based on sensorimotor rhythms use a variety of motor tasks, such as imagining moving the right or left hand, the feet or the tongue. Finding the tasks that yield the best performance for each specific user is a time-consuming preliminary phase of a BCI experiment. This study presents a new adaptive procedure to automatically select, online, the most promising motor task for an asynchronous brain-controlled button. Approach. For this purpose we develop an adaptive algorithm, UCB-classif, based on stochastic bandit theory, and design an EEG experiment to test our method. We compare (offline) the adaptive algorithm to a naïve selection strategy which uses uniformly distributed samples from each task. We also run the adaptive algorithm online to fully validate the approach. Main results. By not wasting time on inefficient tasks, and focusing on the most promising ones, this algorithm results in a faster task selection and a more efficient use of the BCI training session. More precisely, the offline analysis reveals that the use of this algorithm can reduce the time needed to select the most appropriate task by almost half without loss in precision, or alternatively, allow us to investigate twice the number of tasks within a similar time span. Online tests confirm that the method leads to an optimal task selection. Significance. This study is the first to optimize the task selection phase by an adaptive procedure. By increasing the number of tasks that can be tested in a given time span, the proposed method could contribute to reducing ‘BCI illiteracy’.
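
    To make the bandit idea concrete, here is a standard UCB1 sketch in which each arm is a candidate motor task and the reward is whether a simulated trial is classified correctly. The success probabilities and trial budget are invented numbers, and this is the generic UCB1 rule rather than the UCB-classif variant from the paper.

    ```python
    import numpy as np

    def ucb1_task_selection(success_probs, n_trials, rng):
        """Allocate trials to candidate tasks with the UCB1 rule; reward is 1 when
        the simulated trial is classified correctly."""
        k = len(success_probs)
        counts = np.zeros(k)
        sums = np.zeros(k)
        for t in range(1, n_trials + 1):
            if t <= k:
                arm = t - 1                                       # play each task once
            else:
                ucb = sums / counts + np.sqrt(2.0 * np.log(t) / counts)
                arm = int(np.argmax(ucb))
            reward = float(rng.random() < success_probs[arm])
            counts[arm] += 1
            sums[arm] += reward
        return counts                                             # trials spent per task

    rng = np.random.default_rng(7)
    print(ucb1_task_selection([0.55, 0.70, 0.60, 0.85], n_trials=200, rng=rng))
    ```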

  17. A Model Predictive Algorithm for Active Control of Nonlinear Noise Processes

    Directory of Open Access Journals (Sweden)

    Qi-Zhi Zhang

    2005-01-01

    Full Text Available In this paper, an improved nonlinear Active Noise Control (ANC) system is achieved by introducing an appropriate secondary source. For the ANC system to be successfully implemented, the nonlinearity of the primary path and the time delay of the secondary path must be overcome. A nonlinear Model Predictive Control (MPC) strategy is introduced to deal with the time delay in the secondary path and the nonlinearity in the primary path of the ANC system. An overall online modeling technique is utilized for online secondary-path and primary-path estimation. The secondary path is estimated using an adaptive FIR filter, and the primary path is estimated using a Neural Network (NN). The two models are connected in parallel with the two paths. In this system, the mutual disturbances between the operation of the nonlinear ANC controller and the modeling of the secondary path can be greatly reduced. The coefficients of the adaptive FIR filter and the weight vector of the NN are adjusted online. Computer simulations are carried out to compare the proposed nonlinear MPC method with the nonlinear Filtered-x Least Mean Square (FXLMS) algorithm. The results show that the convergence speed of the proposed nonlinear MPC algorithm is faster than that of the nonlinear FXLMS algorithm. To test the robustness of the proposed nonlinear ANC system, sudden changes in the secondary path and primary path are considered. The results indicate that the proposed nonlinear ANC system can rapidly track sudden changes in its acoustic paths and keep the adaptive algorithm stable when the system is time-varying.
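
    For reference, the baseline algorithm the paper compares against, filtered-x LMS, can be sketched as below. The FIR secondary path, its estimate, the step size, and the synthetic signals are assumptions; the paper's MPC controller and neural-network primary-path model are not reproduced here.

    ```python
    import numpy as np

    def fxlms(x, d, s_true, s_hat, n_taps=16, mu=0.005):
        """Filtered-x LMS. s_true is the physical secondary path, s_hat its estimate."""
        w = np.zeros(n_taps)                 # adaptive control filter
        xbuf = np.zeros(n_taps)              # reference history for the control filter
        ybuf = np.zeros(len(s_true))         # anti-noise history through the secondary path
        sbuf = np.zeros(len(s_hat))          # reference history for the path estimate
        fxbuf = np.zeros(n_taps)             # filtered-reference history for the update
        e = np.zeros(len(x))
        for n in range(len(x)):
            xbuf = np.r_[x[n], xbuf[:-1]]
            y = w @ xbuf                                 # anti-noise sample
            ybuf = np.r_[y, ybuf[:-1]]
            e[n] = d[n] + s_true @ ybuf                  # residual at the error microphone
            sbuf = np.r_[x[n], sbuf[:-1]]
            fxbuf = np.r_[s_hat @ sbuf, fxbuf[:-1]]      # reference filtered through s_hat
            w -= mu * e[n] * fxbuf                       # LMS weight update
        return e

    rng = np.random.default_rng(8)
    x = rng.normal(size=4000)                                  # reference noise
    d = np.convolve(x, [0, 0, 0.8, 0.4, 0.2])[:4000]           # delayed primary path
    s = np.array([0.0, 0.6, 0.3])                              # secondary path
    e = fxlms(x, d, s, s)
    print(np.abs(d[-500:]).mean(), np.abs(e[-500:]).mean())    # residual should shrink
    ```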

  18. Evidence-based algorithm for heparin dosing before cardiopulmonary bypass. Part 1: Development of the algorithm.

    Science.gov (United States)

    McKinney, Mark C; Riley, Jeffrey B

    2007-12-01

    The incidence of heparin resistance during adult cardiac surgery with cardiopulmonary bypass has been reported at 15%-20%. The consistent use of a clinical decision-making algorithm may increase the consistency of patient care and likely reduce the total required heparin dose and other problems associated with heparin dosing. After a directed survey of practicing perfusionists regarding the treatment of heparin resistance, and a literature search for high-level evidence regarding the diagnosis and treatment of heparin resistance, an evidence-based decision-making algorithm was constructed. The face validity of the algorithm's decisive steps and logic was confirmed by a second survey of practicing perfusionists. The algorithm begins with a review of the patient history to identify predictors of heparin resistance. The definition of heparin resistance contained in the algorithm is an activated clotting time below target despite a 450 IU/kg heparin loading dose. Based on the literature, the treatment for heparin resistance used in the algorithm is antithrombin III supplementation. The algorithm appears to be valid and is supported by high-level evidence and clinician opinion. The next step is a randomized clinical trial to test the clinical procedure guideline algorithm against current standard clinical practice.

  19. Automatic classification of endogenous seismic sources within a landslide body using random forest algorithm

    Science.gov (United States)

    Provost, Floriane; Hibert, Clément; Malet, Jean-Philippe; Stumpf, André; Doubre, Cécile

    2016-04-01

    Different studies have shown the presence of microseismic activity in soft-rock landslides. The seismic signals exhibit significantly different features in the time and frequency domains, which allows their classification and interpretation. Most of the classes can be associated with different mechanisms of deformation occurring within the landslide body and at its surface (e.g. rockfall, slide-quake, fissure opening, fluid circulation). However, some signals remain not fully understood, and some classes contain too few examples to allow interpretation. To move toward a more complete interpretation of the links between the dynamics of soft-rock landslides and the physical processes controlling their behaviour, a complete catalog of the endogenous seismicity is needed. We propose a multi-class detection method based on the random forest algorithm to automatically classify the sources of seismic signals. Random forest is a supervised machine learning technique based on the computation of a large number of decision trees. The multiple decision trees are constructed from training sets including each of the target classes. In the case of seismic signals, the attributes may encompass spectral features as well as waveform characteristics, multi-station observations and other relevant information. The random forest classifier is used because it provides state-of-the-art performance when compared with other machine learning techniques (e.g. SVM, neural networks) and requires no fine tuning. Furthermore, it is relatively fast, robust, easy to parallelize, and inherently suitable for multi-class problems. In this work, we present the first results of the classification method applied to the seismicity recorded at the Super-Sauze landslide between 2013 and 2015. We selected a dozen seismic signal features that precisely characterize the spectral content (e.g. central frequency, spectrum width, energy in several frequency bands, spectrogram shape, spectrum local and global maxima
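
    A minimal sketch of such a pipeline is given below: a few spectral attributes of the kind listed above are computed per trace and fed to scikit-learn's random forest. The feature set, sampling rate, frequency bands, and synthetic traces are placeholder assumptions, not the catalogue or attribute list used in the study.

    ```python
    import numpy as np
    from scipy import signal
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    def spectral_features(trace, fs=250.0):
        """Central frequency, spectral width and coarse band energies (illustrative set)."""
        f, psd = signal.welch(trace, fs=fs, nperseg=256)
        psd_norm = psd / psd.sum()
        fc = (f * psd_norm).sum()                        # spectral centroid
        bw = np.sqrt(((f - fc) ** 2 * psd_norm).sum())   # spectral width
        bands = [psd[(f >= lo) & (f < hi)].sum() for lo, hi in [(0, 10), (10, 30), (30, 60)]]
        return [fc, bw, *bands]

    # Synthetic labelled catalogue: 300 traces, 3 source classes.
    rng = np.random.default_rng(9)
    traces = rng.normal(size=(300, 2048))
    labels = rng.integers(0, 3, size=300)
    X = np.array([spectral_features(tr) for tr in traces])
    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    print(cross_val_score(clf, X, labels, cv=5).mean())   # about chance on random data
    ```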

  20. An Online Solution of LiDAR Scan Matching Aided Inertial Navigation System for Indoor Mobile Mapping

    Directory of Open Access Journals (Sweden)

    Xiaoji Niu

    2017-01-01

    Full Text Available Multisensor (LiDAR/IMU/camera) integrated Simultaneous Localization and Mapping (SLAM) technology for navigation and mobile mapping in GNSS-denied environments, such as indoor areas, dense forests, or urban canyons, is becoming a promising solution. An online (real-time) version of such a system can greatly extend its applications, especially for indoor mobile mapping. However, the real-time response of multiple sensors is a big challenge for an online SLAM system, due to the different sampling frequencies and processing times of the different algorithms. In this paper, an online Extended Kalman Filter (EKF) algorithm integrating LiDAR scan matching and IMU mechanization for an Unmanned Ground Vehicle (UGV) indoor navigation system is introduced. Since LiDAR scan matching is considerably more time consuming than IMU mechanization, the real-time synchronization issue is solved via a one-step error-state-transition method in the EKF. Stationary and dynamic field tests were performed using a UGV platform along a typical corridor of an office building. Compared to the traditional sequential postprocessed EKF algorithm, the proposed method significantly mitigates the time delay of the navigation outputs while preserving positioning accuracy, so it can be used as an online navigation solution for indoor mobile mapping.
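
    As a much-simplified illustration of the loosely coupled integration, the 1-D sketch below propagates a position/velocity state with noisy IMU-style inputs at a high rate and corrects it with a lower-rate scan-matching position fix through a standard EKF update. The rates, noise levels, and state model are invented, and the paper's one-step error-state-transition handling of the LiDAR processing delay is not reproduced.

    ```python
    import numpy as np

    dt = 0.01
    F = np.array([[1.0, dt], [0.0, 1.0]])       # state = [position, velocity]
    B = np.array([0.5 * dt ** 2, dt])           # acceleration input
    Q = np.diag([1e-6, 1e-4])                   # assumed process noise
    H = np.array([[1.0, 0.0]])                  # scan matching observes position
    R = np.array([[0.05 ** 2]])                 # assumed scan-matching noise

    rng = np.random.default_rng(10)
    x_true = np.zeros(2)
    x = np.zeros(2)                             # IMU-mechanized estimate
    P = np.eye(2) * 0.01

    for k in range(1, 1001):                    # 10 s at 100 Hz
        accel = 0.2 * np.sin(0.01 * k)
        x_true = F @ x_true + B * accel
        x = F @ x + B * (accel + rng.normal(0.0, 0.02))    # predict with noisy IMU
        P = F @ P @ F.T + Q
        if k % 20 == 0:                                    # scan-match fix at 5 Hz
            z = np.array([x_true[0] + rng.normal(0.0, 0.05)])
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (z - H @ x)
            P = (np.eye(2) - K @ H) @ P

    print(abs(x[0] - x_true[0]))                # position error after the run
    ```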