WorldWideScience

Sample records for lempel-ziv incremental parsing

  1. On the Approximation Ratio of Lempel-Ziv Parsing

    DEFF Research Database (Denmark)

    Gagie, Travis; Navarro, Gonzalo; Prezza, Nicola

    2018-01-01

…a plausible lower bound is b, the least number of phrases of a general bidirectional parse of a text, where phrases can be copied from anywhere else in the text. Since computing b is NP-complete, a popular gold standard is z, the number of phrases in the Lempel-Ziv parse of the text, where phrases can be copied only from the left. While z can be computed in linear time, almost nothing has been known for decades about its approximation ratio with respect to b…
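As a concrete illustration of the quantity z, here is a minimal quadratic-time sketch of the left-copying Lempel-Ziv parse. This is not the paper's concern (the paper studies approximation ratios, not algorithms); practical parsers use suffix structures to reach linear time, and the function name and example string are illustrative:

```python
def lz77_parse(text: str) -> list[str]:
    """Naive LZ parse: each phrase is the longest prefix of the remaining
    suffix that also starts at some earlier position (copies only from the
    left, overlaps allowed), extended by one fresh character."""
    phrases, i, n = [], 0, len(text)
    while i < n:
        longest = 0
        for j in range(i):  # try every earlier starting position
            l = 0
            while i + l < n and text[j + l] == text[i + l]:
                l += 1
            longest = max(longest, l)
        end = min(i + longest + 1, n)  # copied part plus one literal
        phrases.append(text[i:end])
        i = end
    return phrases

z = len(lz77_parse("abababbbbab"))  # z = number of phrases (here 5)
```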

  2. Lempel-Ziv Compression in a Sliding Window

    DEFF Research Database (Denmark)

    Bille, Philip; Cording, Patrick Hagge; Fischer, Johannes

    2017-01-01

We present new algorithms for the sliding window Lempel-Ziv (LZ77) problem and the approximate rightmost LZ77 parsing problem. Our main result is a new and surprisingly simple algorithm that computes the sliding window LZ77 parse in O(w) space and either O(n) expected time or O(n log log w + z log… For this result, we combine a simple modification and augmentation of the suffix tree with periodicity properties of sliding windows. We also apply this new technique to obtain an algorithm for the approximate rightmost LZ77 problem that uses O(n(log z + log log n)) time and O(n) space and produces a (1 + ϵ)…

  3. Lempel-Ziv complexity analysis of one dimensional cellular automata.

    Science.gov (United States)

    Estevez-Rams, E; Lora-Serrano, R; Nunes, C A J; Aragón-Fernández, B

    2015-12-01

The Lempel-Ziv complexity measure has been used to estimate the entropy density of a string. It is defined as the number of factors in a production factorization of the string. In this contribution, we show that its use can be extended, by using the normalized information distance, to study the spatiotemporal evolution of random initial configurations under cellular automata rules. In particular, the transfer of information between consecutive configurations in time is studied, as well as the sensitivity to perturbed initial conditions. The behavior of cellular automata rules can be grouped into different classes, but no single grouping captures the whole nature of the rules involved. The analysis carried out is particularly appropriate for studying the computational processing capabilities of cellular automata rules.
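A minimal sketch of the factor count, and of a normalized distance built from it, may make the definition concrete. The paper's exact normalization is not reproduced here; the distance below follows the common normalized-compression-distance form, so treat it as an illustrative assumption:

```python
def lz76_factors(s: str) -> int:
    """Number of factors in the Lempel-Ziv production factorization: scan
    left to right, ending a factor as soon as the current chunk no longer
    occurs in the prefix that precedes its last character."""
    i, count, n = 0, 0, len(s)
    while i < n:
        l = 1
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        count += 1
        i += l
    return count

def lz_distance(x: str, y: str) -> float:
    """NCD-style distance from factor counts (illustrative normalization)."""
    cx, cy, cxy = lz76_factors(x), lz76_factors(y), lz76_factors(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)
```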

  4. Picture data compression coder using subband/transform coding with a Lempel-Ziv-based coder

    Science.gov (United States)

    Glover, Daniel R. (Inventor)

    1995-01-01

    Digital data coders/decoders are used extensively in video transmission. A digitally encoded video signal is separated into subbands. Separating the video into subbands allows transmission at low data rates. Once the data is separated into these subbands it can be coded and then decoded by statistical coders such as the Lempel-Ziv based coder.

  5. Correlation between detrended fluctuation analysis and the Lempel-Ziv complexity in nonlinear time series analysis

    International Nuclear Information System (INIS)

    Tang You-Fu; Liu Shu-Lin; Jiang Rui-Hong; Liu Ying-Hui

    2013-01-01

We study the correlation between detrended fluctuation analysis (DFA) and the Lempel-Ziv complexity (LZC) in nonlinear time series analysis in this paper. Typical dynamic systems, including a logistic map and a Duffing model, are investigated. Moreover, the influence of Gaussian random noise on both the DFA and the LZC is analyzed. The results show a high correlation between the DFA and the LZC, which can quantify the non-stationarity and the nonlinearity of the time series, respectively. With the enhancement of the random component, the exponent α and the normalized complexity index C show increasing trends. In addition, C is found to be more sensitive to fluctuations in the nonlinear time series than α. Finally, the correlation between the DFA and the LZC is applied to the extraction of vibration signals for a reciprocating compressor gas valve, and an effective fault diagnosis result is obtained.
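For reference, the exponent α mentioned above can be computed with the standard first-order DFA procedure sketched below. This is a generic textbook version, not the authors' code; the scale choices are arbitrary and the input series should be much longer than the largest scale:

```python
import numpy as np

def dfa_alpha(x, scales=(4, 8, 16, 32, 64)):
    """First-order DFA: alpha is the slope of log F(s) versus log s, where
    F(s) is the RMS fluctuation of the integrated, window-detrended profile."""
    y = np.cumsum(np.asarray(x, dtype=float) - np.mean(x))  # integrated profile
    fluctuations = []
    for s in scales:
        windows = len(y) // s
        sq = []
        for w in range(windows):
            seg = y[w * s:(w + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear trend
            sq.append(np.mean((seg - trend) ** 2))
        fluctuations.append(np.sqrt(np.mean(sq)))
    alpha, _ = np.polyfit(np.log(scales), np.log(fluctuations), 1)
    return alpha
```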

  6. Time-space trade-offs for lempel-ziv compressed indexing

    DEFF Research Database (Denmark)

    Bille, Philip; Ettienne, Mikko Berggren; Gørtz, Inge Li

    2017-01-01

Given a string S, the compressed indexing problem is to preprocess S into a compressed representation that supports fast substring queries. The goal is to use little space relative to the compressed size of S while supporting fast queries. We present a compressed index based on the Lempel-Ziv 1977 compression scheme. Let n and z denote the size of the input string and of the compressed LZ77 string, respectively. We obtain the following time-space trade-offs. Given a pattern string P of length m, we can solve the problem in (i) O(m + occ lg lg n) time using O(z lg(n/z) lg lg z) space, or (ii) O(m(1 + … …best space bound, but has a leading term in the query time of O(m(1 + lg^ϵ z / lg(n/z))). However, for any polynomial compression ratio, i.e., z = O(n^(1-δ)) for constant δ > 0, this becomes O(m). Our index also supports extraction of any substring of length ℓ in O(ℓ + lg(n/z)) time. Technically, our…

  7. Watermark Compression in Medical Image Watermarking Using Lempel-Ziv-Welch (LZW) Lossless Compression Technique.

    Science.gov (United States)

    Badshah, Gran; Liew, Siau-Chuin; Zain, Jasni Mohd; Ali, Mushtaq

    2016-04-01

In teleradiology, image contents may be altered due to noisy communication channels and hacker manipulation. Medical image data are very sensitive and cannot tolerate any illegal change; analysis based on illegally altered images could result in a wrong medical decision. Digital watermarking techniques can be used to authenticate images and to detect, as well as recover, illegal changes made to teleradiology images. Watermarking medical images with heavy-payload watermarks causes perceptual degradation of the image, which directly affects medical diagnosis. To maintain the perceptual and diagnostic quality of the image during watermarking, the watermark should be losslessly compressed. This paper focuses on watermarking ultrasound medical images with Lempel-Ziv-Welch (LZW) lossless-compressed watermarks. Lossless compression reduces the watermark payload without data loss. In this research work, the watermark is the combination of a defined region of interest (ROI) and an image watermarking secret key. The performance of the LZW compression technique was compared with other conventional compression methods based on compression ratio; LZW was found to perform better and was used for lossless watermark compression in ultrasound medical image watermarking. Tabulated results show the reduction in watermark bits, and image watermarking with effective tamper detection and lossless recovery.
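The LZW scheme the paper relies on is the textbook dictionary coder. A minimal sketch of the compression side follows (illustrative only, operating on raw bytes; the paper applies it to the ROI-plus-key watermark bitstream):

```python
def lzw_compress(data: bytes) -> list[int]:
    """Textbook LZW: grow a dictionary of previously seen strings and emit
    the code of the longest known prefix at each step. The process is
    lossless, so the original watermark can be recovered exactly."""
    table = {bytes([i]): i for i in range(256)}  # single-byte seed entries
    w, out = b"", []
    for b in data:
        wb = w + bytes([b])
        if wb in table:
            w = wb                      # keep extending the current match
        else:
            out.append(table[w])        # emit code for the longest match
            table[wb] = len(table)      # register the new string
            w = bytes([b])
    if w:
        out.append(table[w])
    return out

codes = lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT")  # 24 bytes -> fewer codes
```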

  8. Nonlinear complexity of random visibility graph and Lempel-Ziv on multitype range-intensity interacting financial dynamics

    Science.gov (United States)

    Zhang, Yali; Wang, Jun

    2017-09-01

In an attempt to investigate the nonlinear complex evolution of financial dynamics, a new financial price model, the multitype range-intensity contact (MRIC) financial model, is developed based on the multitype range-intensity interacting contact system, in which the interaction and transmission of different types of investment attitudes in a stock market are simulated by virus spreading. Two new random visibility graph (VG) based analyses and the Lempel-Ziv complexity (LZC) are applied to study the complex behaviors of return time series and the corresponding randomly sorted series. The VG method comes from complex network theory, and the LZC is a non-parametric measure of complexity reflecting the rate of new-pattern generation in a series. In this work, real stock market indices are studied comparatively against simulation data from the proposed model. The numerical empirical study shows similar complexity behaviors between the model and the real markets, confirming that the financial model is reasonable to some extent.

  9. A short note on the paper of Liu et al. (2012). A relative Lempel-Ziv complexity: Application to comparing biological sequences. Chemical Physics Letters, volume 530, 19 March 2012, pages 107-112

    Science.gov (United States)

    Arit, Turkan; Keskin, Burak; Firuzan, Esin; Cavas, Cagin Kandemir; Liu, Liwei; Cavas, Levent

    2018-04-01

The report entitled "L. Liu, D. Li, F. Bai, A relative Lempel-Ziv complexity: Application to comparing biological sequences, Chem. Phys. Lett. 530 (2012) 107-112" describes the powerful construction of phylogenetic trees based on the Lempel-Ziv algorithm. However, the method explained in that paper does not give promising results on a data set for the invasive Caulerpa taxifolia in the Mediterranean Sea. In this short note, the phylogenetic trees are obtained by the method proposed in the aforementioned paper.

  10. Incremental Learning of Context Free Grammars by Parsing-Based Rule Generation and Rule Set Search

    Science.gov (United States)

    Nakamura, Katsuhiko; Hoshina, Akemi

This paper discusses recent improvements and extensions in the Synapse system for inductive inference of context-free grammars (CFGs) from sample strings. Synapse uses incremental learning, rule generation based on bottom-up parsing, and search over rule sets. The form of production rules in the previous system is extended from Revised Chomsky Normal Form A→βγ to Extended Chomsky Normal Form, which also includes A→B, where each of β and γ is either a terminal or a nonterminal symbol. From the result of bottom-up parsing, a rule generation mechanism synthesizes the minimum production rules required for parsing positive samples. Instead of the inductive CYK algorithm of the previous version of Synapse, the improved version uses a novel rule generation method, called "bridging," which bridges the missing part of the derivation tree for a positive string. The improved version also employs a novel search strategy, called serial search, in addition to the minimum rule set search. Synthesis of grammars by the serial search is faster than by the minimum set search in most cases; on the other hand, the size of the generated CFGs is generally larger than with the minimum set search, and the system can find no appropriate grammar for some CFLs by the serial search. The paper shows experimental results of incremental learning of several fundamental CFGs and compares the methods of rule generation and the search strategies.
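For readers unfamiliar with the bottom-up parsing that Synapse builds on, here is a plain CYK recognizer for a CNF grammar. This shows recognition only; the system's inductive variant additionally synthesizes missing rules, which is not sketched here, and the toy grammar is hypothetical:

```python
def cyk(tokens, rules, start="S"):
    """CYK recognition for a grammar in Chomsky Normal Form. `rules` maps a
    nonterminal to bodies that are 1-tuples (a terminal) or 2-tuples of
    nonterminals. chart[l-1][i] holds nonterminals deriving tokens[i:i+l]."""
    n = len(tokens)
    chart = [[set() for _ in range(n)] for _ in range(n)]
    for i, tok in enumerate(tokens):
        for lhs, bodies in rules.items():
            if (tok,) in bodies:
                chart[0][i].add(lhs)
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            for split in range(1, length):
                for lhs, bodies in rules.items():
                    for body in bodies:
                        if (len(body) == 2
                                and body[0] in chart[split - 1][i]
                                and body[1] in chart[length - split - 1][i + split]):
                            chart[length - 1][i].add(lhs)
    return start in chart[n - 1][0]

# Hypothetical toy grammar: S -> A B, A -> 'a', B -> 'b'
rules = {"S": [("A", "B")], "A": [("a",)], "B": [("b",)]}
assert cyk(["a", "b"], rules)
```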

  11. Dependency Parsing

    CERN Document Server

    Kubler, Sandra; Nivre, Joakim

    2009-01-01

Dependency-based methods for syntactic parsing have become increasingly popular in natural language processing in recent years. This book gives a thorough introduction to the methods that are most widely used today. After an introduction to dependency grammar and dependency parsing, followed by a formal characterization of the dependency parsing problem, the book surveys the three major classes of parsing models that are in current use: transition-based, graph-based, and grammar-based models. It continues with a chapter on evaluation and one on the comparison of different methods, and it closes…

  12. Faster, Practical GLL Parsing

    NARCIS (Netherlands)

    A. Afroozeh (Ali); A. Izmaylova (Anastasia)

    2015-01-01

Generalized LL (GLL) parsing is an extension of recursive-descent (RD) parsing that supports all context-free grammars in cubic time and space. GLL parsers have the direct relationship with the grammar that RD parsers have, and therefore, compared to GLR, are easier to understand, debug…

  13. Unifying LL and LR parsing

    NARCIS (Netherlands)

    W.H.L.M. Pijls (Wim)

    1993-01-01

In parsing theory, LL parsing and LR parsing are regarded as two distinct methods. In this paper the relation between these methods is clarified. As shown in the literature on parsing theory, for every context-free grammar a so-called non-deterministic LR(0) automaton can be constructed.

  14. On Parsing CHILDES

    OpenAIRE

    Laakso, Aarre

    2005-01-01

Research on child language acquisition would benefit from the availability of a large body of syntactically parsed utterances between parents and children. We consider the problem of generating such a "treebank" from the CHILDES corpus, which currently contains primarily orthographically transcribed speech tagged for lexical category.

  15. Faster scannerless GLR parsing

    NARCIS (Netherlands)

    Economopoulos, G.R.; Klint, P.; Vinju, J.J.; Moor, de O.; Schwartzbach, M.I.

    2009-01-01

Analysis and renovation of large software portfolios requires syntax analysis of multiple, usually embedded, languages, and this is beyond the capabilities of many standard parsing techniques. The traditional separation between lexer and parser falls short due to the limitations of tokenization based…

  16. Faster Scannerless GLR parsing

    NARCIS (Netherlands)

    J.J. Vinju (Jurgen); G.R. Economopoulos (Giorgos Robert); P. Klint (Paul)

    2008-01-01

Analysis and renovation of large software portfolios requires syntax analysis of multiple, usually embedded, languages, and this is beyond the capabilities of many standard parsing techniques. The traditional separation between lexer and parser falls short due to the limitations of…

  17. Faster scannerless GLR parsing

    NARCIS (Netherlands)

    G.R. Economopoulos (Giorgos Robert); P. Klint (Paul); J.J. Vinju (Jurgen); O. de Moor; M.I. Schwartzbach

    2009-01-01

Analysis and renovation of large software portfolios requires syntax analysis of multiple, usually embedded, languages, and this is beyond the capabilities of many standard parsing techniques. The traditional separation between lexer and parser falls short due to the limitations of…

  18. Memory-Based Shallow Parsing

    NARCIS (Netherlands)

    Tjong Kim Sang, E.F.

    2002-01-01

We present memory-based learning approaches to shallow parsing and apply these to five tasks: base noun phrase identification, arbitrary base phrase recognition, clause detection, noun phrase parsing and full parsing. We use feature selection techniques and system combination methods for improving…

  19. Parse Journal #2: Introduction

    OpenAIRE

    Bowman, Jason; Malik, Suhail; Phillips, Andrea

    2015-01-01

    As a periodical concerned with the critical potential of artistic research, this edition of the PARSE journal mobilises the multiple perspectives of artists, thinkers, critics and curators on the problematics, discontents and possibilities of private capital as an unregulated yet assumptive producer of art’s value, including its integration with state-funding. We have put emphasis on how this conditioning of art’s production, circulation, reception and sale can be put to task. In particular, ...

  20. Memory-Based Shallow Parsing

    OpenAIRE

    Sang, Erik F. Tjong Kim

    2002-01-01

    We present memory-based learning approaches to shallow parsing and apply these to five tasks: base noun phrase identification, arbitrary base phrase recognition, clause detection, noun phrase parsing and full parsing. We use feature selection techniques and system combination methods for improving the performance of the memory-based learner. Our approach is evaluated on standard data sets and the results are compared with that of other systems. This reveals that our approach works well for ba...

  1. Parsing Heterogeneous Striatal Activity

    Directory of Open Access Journals (Sweden)

    Kae Nakamura

    2017-05-01

The striatum is an input channel of the basal ganglia and is well known to be involved in reward-based decision making and learning. At the macroscopic level, the striatum has been postulated to contain parallel functional modules, each of which includes neurons that perform similar computations to support selection of appropriate actions for different task contexts. At the single-neuron level, however, recent studies in monkeys and rodents have revealed heterogeneity in neuronal activity even within restricted modules of the striatum. Looking for generality in the complex striatal activity patterns, here we briefly survey several types of striatal activity, focusing on their usefulness for mediating behaviors. In particular, we focus on two types of behavioral tasks: reward-based tasks that use salient sensory cues and manipulate outcomes associated with the cues, and perceptual decision tasks that manipulate the quality of noisy sensory cues and associate all correct decisions with the same outcome. Guided by previous insights on the modular organization and general selection-related functions of the basal ganglia, we relate striatal activity patterns on these tasks to two types of computations: implementation of selection and evaluation. We suggest that parsing with the selection/evaluation categories encourages a focus on the functional commonalities revealed by studies with different animal models and behavioral tasks, instead of on aspects of striatal activity that may be specific to a particular task setting. We then highlight several questions in the selection-evaluation framework for future exploration.

  2. Video Scene Parsing with Predictive Feature Learning

    OpenAIRE

    Jin, Xiaojie; Li, Xin; Xiao, Huaxin; Shen, Xiaohui; Lin, Zhe; Yang, Jimei; Chen, Yunpeng; Dong, Jian; Liu, Luoqi; Jie, Zequn; Feng, Jiashi; Yan, Shuicheng

    2016-01-01

In this work, we address the challenging video scene parsing problem by developing effective representation learning methods given limited parsing annotations. In particular, we contribute two novel methods that constitute a unified parsing framework. (1) Predictive feature learning from nearly unlimited unlabeled video data. Different from existing methods that learn features from single-frame parsing, we learn spatiotemporal discriminative features by enforcing a parsing network to…

  3. Perform wordcount Map-Reduce Job in Single Node Apache Hadoop cluster and compress data using Lempel-Ziv-Oberhumer (LZO) algorithm

    OpenAIRE

    Mirajkar, Nandan; Bhujbal, Sandeep; Deshmukh, Aaradhana

    2013-01-01

Applications like Yahoo, Facebook, and Twitter have huge data that must be stored and retrieved on client access. This huge data storage requires huge databases, leading to an increase in physical storage, and becomes complex to analyze as required for business growth. The storage requirement can be reduced, and distributed processing of huge data can be done, using Apache Hadoop, which uses the MapReduce algorithm and combines repeating data so that the entire data set is stored in reduced form. The paper …
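As a concrete picture of the word-count job the title refers to, here is a minimal Hadoop Streaming mapper/reducer pair. This is a sketch in Python; the paper's setup uses the standard Java MapReduce word count, and the script name is illustrative:

```python
#!/usr/bin/env python3
"""Word count for Hadoop Streaming: run as `wordcount.py map` for the mapper
and `wordcount.py reduce` for the reducer (script name is illustrative)."""
import sys

def mapper():
    # Streaming feeds input lines on stdin; emit (word, 1) pairs.
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

def reducer():
    # Hadoop sorts map output by key, so equal words arrive consecutively.
    current, count = None, 0
    for line in sys.stdin:
        word, n = line.rsplit("\t", 1)
        if word != current:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = word, 0
        count += int(n)
    if current is not None:
        print(f"{current}\t{count}")

if __name__ == "__main__":
    mapper() if sys.argv[1] == "map" else reducer()
```

Compressing intermediate output with LZO would then be requested through job properties such as mapred.compress.map.output=true and mapred.map.output.compression.codec=com.hadoop.compression.lzo.LzoCodec; these names come from the common hadoop-lzo package and older Hadoop property conventions, so treat them as assumptions for a given Hadoop version.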

  4. Dependency Parsing with Transformed Feature

    Directory of Open Access Journals (Sweden)

    Fuxiang Wu

    2017-01-01

Dependency parsing is an important subtask of natural language processing. In this paper, we propose an embedding feature transformation method for graph-based parsing, transform-based parsing, which directly utilizes the inner similarity of the features to extract information from all feature strings, including the un-indexed strings, and to alleviate the feature sparseness problem. The model transforms the extracted features into transformed features by applying a feature weight matrix, which consists of similarities between the feature strings. Since the matrix is usually rank-deficient because of similar feature strings, it could influence the strength of constraints. However, it is proven that the duplicate transformed features do not degrade the optimization algorithm, the margin infused relaxed algorithm. Moreover, this problem can be alleviated by reducing the number of nearest transformed features of a feature. In addition, to further improve parsing accuracy, a fusion parser is introduced to integrate transformed and original features. Our experiments verify that both the transform-based and the fusion parser improve parsing accuracy compared to the corresponding feature-based parser.

  5. A Tandem Semantic Interpreter for Incremental Parse Selection

    Science.gov (United States)

    1990-09-28

…syntactic role to semantic role. An example from Fillmore [10] is the sentence "I copied that letter," which can be uttered when pointing either to… person. We want the word fiddle to have the sort predicate violin as its lexical interpretation, however, not thing. Thus, for "Ross went for his fiddle"… to receive an interpretation, a sort hierarchy is needed to establish that all violins are things. A well-structured sort hierarchy allows newly added…

  6. Contextual Semantic Parsing using Crowdsourced Spatial Descriptions

    OpenAIRE

    Dukes, Kais

    2014-01-01

    We describe a contextual parser for the Robot Commands Treebank, a new crowdsourced resource. In contrast to previous semantic parsers that select the most-probable parse, we consider the different problem of parsing using additional situational context to disambiguate between different readings of a sentence. We show that multiple semantic analyses can be searched using dynamic programming via interaction with a spatial planner, to guide the parsing process. We are able to parse sentences in...

  7. Bit-coded regular expression parsing

    DEFF Research Database (Denmark)

    Nielsen, Lasse; Henglein, Fritz

    2011-01-01

…the DFA-based parsing algorithm due to Dubé and Feeley to emit the bits of the bit representation without explicitly materializing the parse tree itself. We furthermore show that Frisch and Cardelli's greedy regular expression parsing algorithm can be straightforwardly modified to produce bit codings…
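The bit-coding idea itself is easy to sketch: a parse is represented purely by the choice bits (which branch of each alternation, how many star iterations), so the tree never needs to be materialized. The toy matcher below illustrates only the encoding; unlike the paper's algorithms it is a naive recursive matcher that does not backtrack across concatenations:

```python
# Regular expressions as nested tuples:
#   ("chr", c), ("cat", e1, e2), ("alt", e1, e2), ("star", e)
def bitcode(e, s):
    """Greedily match a prefix of s against e, returning (bits, rest) or
    None. Bits: '0'/'1' selects an alternation branch; for star, '0' means
    one more iteration and '1' means stop."""
    kind = e[0]
    if kind == "chr":
        return ("", s[1:]) if s[:1] == e[1] else None
    if kind == "cat":
        r1 = bitcode(e[1], s)
        if r1 is None:
            return None
        r2 = bitcode(e[2], r1[1])
        return (r1[0] + r2[0], r2[1]) if r2 else None
    if kind == "alt":
        r = bitcode(e[1], s)
        if r is not None:
            return ("0" + r[0], r[1])
        r = bitcode(e[2], s)
        return ("1" + r[0], r[1]) if r is not None else None
    if kind == "star":
        r = bitcode(e[1], s)
        if r is not None and r[1] != s:      # must consume input to loop
            more = bitcode(e, r[1])
            return ("0" + r[0] + more[0], more[1])
        return ("1", s)

bits, rest = bitcode(("star", ("chr", "a")), "aab")
assert bits == "001" and rest == "b"        # two iterations, then stop
```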

  8. Parsing Universal Dependencies without training

    DEFF Research Database (Denmark)

    Martínez Alonso, Héctor; Agic, Zeljko; Plank, Barbara

    2017-01-01

We present UDP, the first training-free parser for Universal Dependencies (UD). Our algorithm is based on PageRank and a small set of specific dependency head rules. UDP features two-step decoding to guarantee that function words are attached as leaf nodes. The parser requires no training, and it is competitive with a delexicalized transfer system. UDP offers a linguistically sound unsupervised alternative to cross-lingual parsing for UD. The parser has very few parameters and is distinctly robust to domain change across languages.

  9. Two-pass greedy regular expression parsing

    DEFF Research Database (Denmark)

    Grathwohl, Niels Bjørn Bugge; Henglein, Fritz; Nielsen, Lasse

    2013-01-01

We present new algorithms for producing greedy parses for regular expressions (REs) in a semi-streaming fashion. Our lean-log algorithm executes in time O(mn) for REs of size m and input strings of size n and outputs a compact bit-coded parse tree representation. It improves on previous algorithms by operating in only 2 passes, using only O(m) words of random-access memory (independent of n), and requiring only kn bits of sequentially written and read log storage, where k… and not requiring it to be stored at all. Previous RE parsing algorithms do not scale linearly with input size, or require substantially more log storage and employ 3 passes where the first consists of reversing the input, or do not or are not known to produce a greedy parse. The performance of our unoptimized C…

  10. Probabilistic lexical generalization for French dependency parsing

    OpenAIRE

    Henestroza Anguiano , Enrique; Candito , Marie

    2012-01-01

This paper investigates the impact on French dependency parsing of lexical generalization methods beyond lemmatization and morphological analysis. A distributional thesaurus is created from a large text corpus and used for distributional clustering and WordNet automatic sense ranking. The standard approach for lexical generalization in parsing is to map a word to a single generalized class, either replacing the word with the class or adding a new feature for the class…

  11. Context-free parsing with connectionist networks

    Science.gov (United States)

    Fanty, M. A.

    1986-08-01

This paper presents a simple algorithm that converts any context-free grammar into a connectionist network which parses strings (of arbitrary but fixed maximum length) in the language defined by that grammar. The network is fast, O(n), and deterministic. It consists of binary units that compute a simple function of their input. When the grammar is put in Chomsky normal form, O(n³) units are needed to parse inputs of length up to n.

  12. Application development with Parse using iOS SDK

    CERN Document Server

    Birani, Bhanu

    2013-01-01

A practical guide featuring step-by-step instructions showing you how to use Parse iOS and handle your data in the cloud. If you are a developer who wants to build applications instantly, using Parse iOS as a back end, this book is ideal for you. It will help you understand Parse, featuring examples to help you get familiar with the concepts of Parse iOS.

  13. A Semantic Constraint on Syntactic Parsing.

    Science.gov (United States)

    Crain, Stephen; Coker, Pamela L.

This research examines how semantic information influences syntactic parsing decisions during sentence processing. In the first experiment, subjects were presented with lexical strings having syntactically identical surface structures but two possible underlying structures: "The children taught by the Berlitz method," and "The…

  14. Sequence distance via parsing complexity: Heartbeat signals

    International Nuclear Information System (INIS)

    Degli Esposti, M.; Farinelli, C.; Menconi, G.

    2009-01-01

    We compare and discuss the use of different symbolic codings of electrocardiogram (ECG) signals in order to distinguish healthy patients from hospitalized ones. To this aim, we recall a parsing-based similarity distance and compare the performances of several methods of classification of data.

  15. Perceiving Event Dynamics and Parsing Hollywood Films

    Science.gov (United States)

    Cutting, James E.; Brunick, Kaitlin L.; Candan, Ayse

    2012-01-01

    We selected 24 Hollywood movies released from 1940 through 2010 to serve as a film corpus. Eight viewers, three per film, parsed them into events, which are best termed subscenes. While watching a film a second time, viewers scrolled through frames and recorded the frame number where each event began. Viewers agreed about 90% of the time. We then…

  16. Parsing polarization squeezing into Fock layers

    DEFF Research Database (Denmark)

    Mueller, Christian R.; Madsen, Lars Skovgaard; Klimov, Andrei B.

    2016-01-01

…photon number do the methods coincide; when the photon number is indefinite, we parse the state in Fock layers, finding that substantially higher squeezing can be observed in some of the single layers. By capitalizing on the properties of the Husimi Q function, we map this notion onto the Poincaré space, providing a full account of the measured squeezing.

  17. Image portion identification methods, image parsing methods, image parsing systems, and articles of manufacture

    Science.gov (United States)

    Lassahn, Gordon D.; Lancaster, Gregory D.; Apel, William A.; Thompson, Vicki S.

    2013-01-08

    Image portion identification methods, image parsing methods, image parsing systems, and articles of manufacture are described. According to one embodiment, an image portion identification method includes accessing data regarding an image depicting a plurality of biological substrates corresponding to at least one biological sample and indicating presence of at least one biological indicator within the biological sample and, using processing circuitry, automatically identifying a portion of the image depicting one of the biological substrates but not others of the biological substrates.

  18. A structural SVM approach for reference parsing.

    Science.gov (United States)

    Zhang, Xiaoli; Zou, Jie; Le, Daniel X; Thoma, George R

    2011-06-09

Automated extraction of bibliographic data, such as article titles, author names, abstracts, and references, is essential to the affordable creation of large citation databases. References, typically appearing at the end of journal articles, can also provide valuable information for extracting other bibliographic data. Therefore, parsing individual references to extract author, title, journal, year, etc., is sometimes a necessary preprocessing step in building citation-indexing systems. The regular structure of references enables us to treat reference parsing as a sequence learning problem and to study the structural Support Vector Machine (structural SVM), a newly developed structured learning algorithm, on parsing references. In this study, we implemented structural SVM and used two types of contextual features to compare structural SVM with conventional SVM. Both methods achieve above 98% token classification accuracy and above 95% overall chunk-level accuracy for reference parsing. We also compared SVM and structural SVM to the Conditional Random Field (CRF). The experimental results show that structural SVM and CRF achieve similar accuracies at the token and chunk levels. When only basic observation features are used for each token, structural SVM achieves higher performance than SVM since it utilizes the contextual label features. However, when the contextual observation features from neighboring tokens are combined, SVM performance improves greatly and is close to that of structural SVM after adding the second-order contextual observation features. The comparison of these two methods with CRF using the same set of binary features shows that both structural SVM and CRF perform better than SVM, indicating their stronger sequence learning ability in reference parsing.

  19. Telugu dependency parsing using different statistical parsers

    Directory of Open Access Journals (Sweden)

    B. Venkata Seshu Kumari

    2017-01-01

In this paper we explore different statistical dependency parsers for parsing Telugu. We consider five popular dependency parsers, namely MaltParser, MSTParser, TurboParser, ZPar and the Easy-First Parser. We experiment with different parser and feature settings and show the impact of the different settings. We also provide a detailed analysis of the performance of all the parsers on major dependency labels. We report our results on the test data of the Telugu dependency treebank provided in the ICON 2010 tools contest on Indian language dependency parsing. We obtain state-of-the-art performance of 91.8% in unlabeled attachment score and 70.0% in labeled attachment score. To the best of our knowledge, ours is the only work that has explored all five popular dependency parsers and compared their performance under different feature settings for Telugu.

  20. Fast Parsing using Pruning and Grammar Specialization

    OpenAIRE

    Rayner, Manny; Carter, David

    1996-01-01

    We show how a general grammar may be automatically adapted for fast parsing of utterances from a specific domain by means of constituent pruning and grammar specialization based on explanation-based learning. These methods together give an order of magnitude increase in speed, and the coverage loss entailed by grammar specialization is reduced to approximately half that reported in previous work. Experiments described here suggest that the loss of coverage has been reduced to the point where ...

  1. Error Parsing: An alternative method of implementing social judgment theory

    OpenAIRE

    Crystal C. Hall; Daniel M. Oppenheimer

    2015-01-01

    We present a novel method of judgment analysis called Error Parsing, based upon an alternative method of implementing Social Judgment Theory (SJT). SJT and Error Parsing both posit the same three components of error in human judgment: error due to noise, error due to cue weighting, and error due to inconsistency. In that sense, the broad theory and framework are the same. However, SJT and Error Parsing were developed to answer different questions, and thus use different m...

  2. Dual decomposition for parsing with non-projective head automata

    OpenAIRE

    Koo, Terry; Rush, Alexander Matthew; Collins, Michael; Jaakkola, Tommi S.; Sontag, David Alexander

    2010-01-01

    This paper introduces algorithms for non-projective parsing based on dual decomposition. We focus on parsing algorithms for non-projective head automata, a generalization of head-automata models to non-projective structures. The dual decomposition algorithms are simple and efficient, relying on standard dynamic programming and minimum spanning tree algorithms. They provably solve an LP relaxation of the non-projective parsing problem. Empirically the LP relaxation is very often tight: for man...

  3. Two models of minimalist, incremental syntactic analysis.

    Science.gov (United States)

    Stabler, Edward P

    2013-07-01

    Minimalist grammars (MGs) and multiple context-free grammars (MCFGs) are weakly equivalent in the sense that they define the same languages, a large mildly context-sensitive class that properly includes context-free languages. But in addition, for each MG, there is an MCFG which is strongly equivalent in the sense that it defines the same language with isomorphic derivations. However, the structure-building rules of MGs but not MCFGs are defined in a way that generalizes across categories. Consequently, MGs can be exponentially more succinct than their MCFG equivalents, and this difference shows in parsing models too. An incremental, top-down beam parser for MGs is defined here, sound and complete for all MGs, and hence also capable of parsing all MCFG languages. But since the parser represents its grammar transparently, the relative succinctness of MGs is again evident. Although the determinants of MG structure are narrowly and discretely defined, probabilistic influences from a much broader domain can influence even the earliest analytic steps, allowing frequency and context effects to come early and from almost anywhere, as expected in incremental models. Copyright © 2013 Cognitive Science Society, Inc.

  4. Design of methodology for incremental compiler construction

    Directory of Open Access Journals (Sweden)

    Pavel Haluza

    2011-01-01

The paper deals with possibilities for incremental compiler construction. It presents compiler construction possibilities both for languages with a fixed set of lexical units and for languages with a variable set of lexical units. The methodology design for incremental compiler construction is based on known algorithms for standard compiler construction and is derived for both groups of languages. The group of languages with a fixed set of lexical units comprises languages in which each lexical unit has a constant meaning, e.g., common programming languages. For this group the paper addresses the problem of incremental semantic analysis, which is based on incremental parsing. In the group of languages with a variable set of lexical units (e.g., the professional typographic system TeX), it is possible to change the meaning of each character of the input file arbitrarily at any time during processing. The change takes effect immediately, and its validity is either limited in some way or extends to the end of the input. For this group the paper addresses the case in which macros are used to temporarily change the category of arbitrary characters.

  5. From LL-regular to LL(1) grammars: Transformations, covers and parsing

    NARCIS (Netherlands)

    Nijholt, Antinus

    1982-01-01

    In this paper it is shown that it is possible to transform any LL-regular grammar G into an LL(1) grammar G' in such a way that parsing G' is as good as parsing G. That is, a parse of a sentence of grammar G can be obtained with a simple string homomorphism from the parse of a corresponding sentence

  6. Chinese Unknown Word Recognition for PCFG-LA Parsing

    Directory of Open Access Journals (Sweden)

    Qiuping Huang

    2014-01-01

This paper investigates the recognition of unknown words in Chinese parsing. Two methods are proposed to handle this problem. One is the modification of a character-based model: we model the emission probability of an unknown word using the first and last characters in the word, aiming to reduce the POS tag ambiguity of unknown words and improve parsing performance. In addition, a novel method using graph-based semisupervised learning (SSL) is proposed to improve the syntactic parsing of unknown words. Its goal is to discover additional lexical knowledge from a large amount of unlabeled data to help the syntactic parsing. The method mainly propagates lexical emission probabilities to unknown words by building similarity graphs over the words of labeled and unlabeled data. The derived distributions are incorporated into the parsing process. The proposed methods are effective in dealing with unknown words and improve parsing. Empirical results on the Penn Chinese Treebank and the TCT Treebank reveal their effectiveness.
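The character-based emission idea can be sketched quite simply: score an unknown word's tag by how often known words with the same first and last characters carry that tag. The factorization and smoothing below are illustrative assumptions, not the paper's exact model:

```python
from collections import defaultdict

def train_char_emission(tagged_words):
    """Build an emission scorer for unknown words from (word, tag) pairs,
    backing off to the word's first and last characters."""
    first, last, tag_count = defaultdict(float), defaultdict(float), defaultdict(float)
    for word, tag in tagged_words:
        first[tag, word[0]] += 1
        last[tag, word[-1]] += 1
        tag_count[tag] += 1

    def emit(word, tag, smooth=0.5):
        # Product of smoothed relative frequencies for the two characters.
        f = (first[tag, word[0]] + smooth) / (tag_count[tag] + smooth)
        l = (last[tag, word[-1]] + smooth) / (tag_count[tag] + smooth)
        return f * l

    return emit

emit = train_char_emission([("北京", "NR"), ("上海", "NR"), ("研究", "NN")])
score = emit("南京", "NR")  # unseen word sharing a character with known NR words
```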

  7. On Collocations and Their Interaction with Parsing and Translation

    Directory of Open Access Journals (Sweden)

    Violeta Seretan

    2013-10-01

We address the problem of automatically processing collocations—a subclass of multi-word expressions characterized by a high degree of morphosyntactic flexibility—in the context of two major applications, namely, syntactic parsing and machine translation. We show that parsing and collocation identification are processes that are interrelated and that benefit from each other, inasmuch as syntactic information is crucial for acquiring collocations from corpora and, vice versa, collocational information can be used to improve parsing performance. Similarly, we focus on the interrelation between collocations and machine translation, highlighting the use of translation information for multilingual collocation identification, as well as the use of collocational knowledge for improving translation. We give a panorama of the existing relevant work, and we parallel the literature surveys with our own experiments involving a symbolic parser and a rule-based translation system. The results show a significant improvement over approaches in which the corresponding tasks are decoupled.

  8. Recursive Neural Networks Based on PSO for Image Parsing

    Directory of Open Access Journals (Sweden)

    Guo-Rong Cai

    2013-01-01

This paper presents an image parsing algorithm based on Particle Swarm Optimization (PSO) and Recursive Neural Networks (RNNs). State-of-the-art methods such as the traditional RNN-based parsing strategy use L-BFGS over the complete data for learning the parameters. However, this can cause problems due to the non-differentiable objective function. In order to solve this problem, the PSO algorithm is employed to tune the weights of the RNN so as to minimize the objective. Experimental results obtained on the Stanford background dataset show that our PSO-based training algorithm outperforms traditional RNN, Pixel CRF, region-based energy, simultaneous MRF, and superpixel MRF.

  9. Parsing with subdomain instance weighting from raw corpora

    NARCIS (Netherlands)

    Plank, B.; Sima'an, K.

    2008-01-01

    The treebanks that are used for training statistical parsers consist of hand-parsed sentences from a single source/domain like newspaper text. However, newspaper text concerns different subdomains of language use (e.g. finance, sports, politics, music), which implies that the statistics gathered by

  10. Parsing with Subdomain Instance Weighting from Raw Corpora

    NARCIS (Netherlands)

    Plank, Barbara; Sima'an, Khalil

    2008-01-01

    The treebanks that are used for training statistical parsers consist of hand-parsed sentences from a single source/domain like newspaper text. However, newspaper text concerns different subdomains of language use (e.g. finance, sports, politics, music), which implies that the statistics gathered by

  11. A simple DOP model for constituency parsing of Italian sentences

    NARCIS (Netherlands)

    Sangati, F.

    2009-01-01

    We present a simplified Data-Oriented Parsing (DOP) formalism for learning the constituency structure of Italian sentences. In our approach we try to simplify the original DOP methodology by constraining the number and type of fragments we extract from the training corpus. We provide some examples

  12. Finding EL+ justifications using the Earley parsing algorithm

    CSIR Research Space (South Africa)

    Nortje, R

    2009-12-01

…into a reachability-preserving context-free grammar (CFG). The well-known Earley algorithm for parsing strings, given some CFG, is then applied to the problem of extracting minimal reachability-based axiom sets for subsumption entailments. The author has…
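For reference, the Earley algorithm this record builds on can be sketched as a textbook recognizer. The version below does recognition only, with a hypothetical toy grammar; the record's use for EL+ axiom extraction layers additional machinery on top of this core:

```python
def earley(tokens, grammar, start="S"):
    """Textbook Earley recognizer. `grammar` maps each nonterminal to a list
    of bodies (tuples of symbols); symbols without rules are terminals.
    An item is (lhs, body, dot, origin). No empty productions assumed."""
    chart = [set() for _ in range(len(tokens) + 1)]
    chart[0] = {(start, body, 0, 0) for body in grammar[start]}
    for i in range(len(tokens) + 1):
        changed = True
        while changed:
            changed = False
            for lhs, body, dot, origin in list(chart[i]):
                if dot < len(body):
                    sym = body[dot]
                    if sym in grammar:                          # predict
                        for b in grammar[sym]:
                            if (sym, b, 0, i) not in chart[i]:
                                chart[i].add((sym, b, 0, i)); changed = True
                    elif i < len(tokens) and tokens[i] == sym:  # scan
                        chart[i + 1].add((lhs, body, dot + 1, origin))
                else:                                           # complete
                    for l2, b2, d2, o2 in list(chart[origin]):
                        if d2 < len(b2) and b2[d2] == lhs:
                            if (l2, b2, d2 + 1, o2) not in chart[i]:
                                chart[i].add((l2, b2, d2 + 1, o2)); changed = True
    return any(lhs == start and dot == len(body) and origin == 0
               for lhs, body, dot, origin in chart[len(tokens)])

grammar = {"S": [("a", "S"), ("a",)]}  # hypothetical: one or more 'a's
assert earley(["a", "a"], grammar)
```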

  13. Advanced programming concepts in a course on grammars and parsing

    NARCIS (Netherlands)

    Jeuring, J.T.; Swierstra, S.D.

    1999-01-01

    One of the important goals of the Computer Science curriculum at Utrecht University is to familiarize students with abstract programming concepts such as, for example, partial evaluation and deforestation. A course on grammars and parsing offers excellent possibilities for exemplifying and

  14. Time-Driven Effects on Parsing during Reading

    Science.gov (United States)

    Roll, Mikael; Lindgren, Magnus; Alter, Kai; Horne, Merle

    2012-01-01

    The phonological trace of perceived words starts fading away in short-term memory after a few seconds. Spoken utterances are usually 2-3 s long, possibly to allow the listener to parse the words into coherent prosodic phrases while they still have a clear representation. Results from this brain potential study suggest that even during silent…

  15. YakYak: Parsing with Logical Side Constraints

    DEFF Research Database (Denmark)

    Hansen, Niels Damgaard; Klarlund, Nils; Schwartzbach, Michael Ignatieff

    2000-01-01

…Yak, which extends Yacc with first-order logic for specifying constraints that are regular tree languages. Concise formulas about the parse tree replace explicit programming, and they are turned into canonical attribute grammars through tree automata calculations. YakYak is implemented as a preprocessor…

  16. Fuzzy Context- Free Languages. Part 2: Recognition and Parsing Algorithms

    NARCIS (Netherlands)

    Asveld, P.R.J.

    2000-01-01

In a companion paper (Part 1) we used fuzzy context-free grammars in order to model grammatical errors resulting in erroneous inputs for robust recognizing and parsing algorithms for fuzzy context-free languages. In particular, this approach enables us to distinguish between small errors…

  17. Planning Through Incrementalism

    Science.gov (United States)

    Lasserre, Ph.

    1974-01-01

An incremental model of decision-making is discussed and compared with the Comprehensive Rational Approach. A model of reconciliation between the two approaches is proposed, and examples are given in the fields of economic development and educational planning.

  18. Deep Incremental Boosting

    OpenAIRE

    Mosca, Alan; Magoulas, George D

    2017-01-01

This paper introduces Deep Incremental Boosting, a new technique derived from AdaBoost, specifically adapted to work with deep learning methods, that reduces the required training time and improves generalisation. We draw inspiration from transfer-of-learning approaches to reduce the start-up time of training each incremental ensemble member. We show a set of experiments that outlines some preliminary results on some common deep learning datasets and discuss the potential improvements Deep In…

  19. Creating Parsing Lexicons from Semantic Lexicons Automatically and Its Applications

    National Research Council Canada - National Science Library

    Ayan, Necip F; Dorr, Bonnie

    2002-01-01

…We also present the effects of using such a lexicon on parser performance. The advantage of automating the process is that the same technique can be applied directly to the lexicons we have for other languages, for example, Arabic, Chinese, and Spanish. The results indicate that our method will help us generate parsing lexicons that can be used by a broad-coverage parser that runs on different languages.

  20. Extending TF1: Argument parsing, function composition, and vectorization

    CERN Document Server

    Tsang Mang Kin, Arthur Leonard

    2017-01-01

In this project, we extend the functionality of the TF1 function class in ROOT. We add argument parsing, making it possible to freely pass variables and parameters into predefined and user-defined functions. We also introduce a syntax for using certain compositions of functions, namely normalized sums and convolutions, directly in TF1. Finally, we introduce some simple vectorization functionality in TF1 and demonstrate the potential to speed up parallelizable computations.

  1. Locating and parsing bibliographic references in HTML medical articles.

    Science.gov (United States)

    Zou, Jie; Le, Daniel; Thoma, George R

    2010-06-01

    The set of references that typically appear toward the end of journal articles is sometimes, though not always, a field in bibliographic (citation) databases. But even if references do not constitute such a field, they can be useful as a preprocessing step in the automated extraction of other bibliographic data from articles, as well as in computer-assisted indexing of articles. Automation in data extraction and indexing to minimize human labor is key to the affordable creation and maintenance of large bibliographic databases. Extracting the components of references, such as author names, article title, journal name, publication date and other entities, is therefore a valuable and sometimes necessary task. This paper describes a two-step process using statistical machine learning algorithms, to first locate the references in HTML medical articles and then to parse them. Reference locating identifies the reference section in an article and then decomposes it into individual references. We formulate this step as a two-class classification problem based on text and geometric features. An evaluation conducted on 500 articles drawn from 100 medical journals achieves near-perfect precision and recall rates for locating references. Reference parsing identifies the components of each reference. For this second step, we implement and compare two algorithms. One relies on sequence statistics and trains a Conditional Random Field. The other focuses on local feature statistics and trains a Support Vector Machine to classify each individual word, followed by a search algorithm that systematically corrects low confidence labels if the label sequence violates a set of predefined rules. The overall performance of these two reference-parsing algorithms is about the same: above 99% accuracy at the word level, and over 97% accuracy at the chunk level.

  2. KEGGParser: parsing and editing KEGG pathway maps in Matlab.

    Science.gov (United States)

    Arakelyan, Arsen; Nersisyan, Lilit

    2013-02-15

The KEGG pathway database is a collection of manually drawn pathway maps accompanied by KGML-format files intended for use in automatic analysis. KGML files, however, do not contain the information required for complete reproduction of all the events indicated in the static image of a pathway map. Several parsers and editors of KEGG pathways exist for processing KGML files. We introduce KEGGParser, a MATLAB-based tool for KEGG pathway parsing, semiautomatic fixing, editing, visualization and analysis in the MATLAB environment. It also works with Scilab. The source code is available at http://www.mathworks.com/matlabcentral/fileexchange/37561.

  3. Quantum independent increment processes

    CERN Document Server

    Franz, Uwe

    2005-01-01

This volume is the first of two volumes containing the revised and completed notes of lectures given at the school "Quantum Independent Increment Processes: Structure and Applications to Physics". This school was held at the Alfried-Krupp-Wissenschaftskolleg in Greifswald during the period March 9-22, 2003, and supported by the Volkswagen Foundation. The school gave an introduction to current research on quantum independent increment processes aimed at graduate students and non-specialists working in classical and quantum probability, operator algebras, and mathematical physics. The present first volume contains the following lectures: "Lévy Processes in Euclidean Spaces and Groups" by David Applebaum, "Locally Compact Quantum Groups" by Johan Kustermans, "Quantum Stochastic Analysis" by J. Martin Lindsay, and "Dilations, Cocycles and Product Systems" by B.V. Rajarama Bhat.

  4. Quantum independent increment processes

    CERN Document Server

    Franz, Uwe

    2006-01-01

This is the second of two volumes containing the revised and completed notes of lectures given at the school "Quantum Independent Increment Processes: Structure and Applications to Physics". This school was held at the Alfried-Krupp-Wissenschaftskolleg in Greifswald in March 2003, and supported by the Volkswagen Foundation. The school gave an introduction to current research on quantum independent increment processes aimed at graduate students and non-specialists working in classical and quantum probability, operator algebras, and mathematical physics. The present second volume contains the following lectures: "Random Walks on Finite Quantum Groups" by Uwe Franz and Rolf Gohm, "Quantum Markov Processes and Applications in Physics" by Burkhard Kümmerer, "Classical and Free Infinite Divisibility and Lévy Processes" by Ole E. Barndorff-Nielsen and Steen Thorbjørnsen, and "Lévy Processes on Quantum Groups and Dual Groups" by Uwe Franz.

  5. Efficient incremental relaying

    KAUST Repository

    Fareed, Muhammad Mehboob

    2013-07-01

We propose a novel relaying scheme that improves the spectral efficiency of cooperative diversity systems by utilizing limited feedback from the destination. Our scheme capitalizes on the fact that relaying is only required when the direct transmission suffers deep fading. We calculate the packet error rate for the proposed efficient incremental relaying scheme with both amplify-and-forward and decode-and-forward relaying. Numerical results are also presented to verify their analytical counterparts. © 2013 IEEE.

  6. Analysis of Azari Language based on Parsing using Link Gram

    Directory of Open Access Journals (Sweden)

    Maryam Arabzadeh

    2014-09-01

There are different classes of theories for the natural language syntactic parsing problem and for creating the related grammars. This paper presents a syntactic grammar developed in the link grammar formalism for Turkish, which is an agglutinative language. In the link grammar formalism, the words of a sentence are linked with each other depending on their syntactic roles. Turkish has complex derivational and inflectional morphology, and derivational and inflectional morphemes play important syntactic roles in sentences. In order to develop a link grammar for Turkish, the lexical parts in the morphological representations of Turkish words are removed, and the links are created depending on the part-of-speech tags and inflectional morphemes in words. Furthermore, a derived word is separated at the derivational boundaries in order to treat each derivational morpheme as a special distinct word and allow it to be linked with the rest of the sentence. The derivational morphemes of a word are also linked with each other with special links to indicate that they are parts of the same word. The adapted link grammar formalism for Turkish provides flexibility for linkage construction, and similar methods can be used for other languages with complex morphology. Finally, using the Delphi programming language, the link grammar for the Azeri language was developed and implemented, and the grammar was then evaluated and tested on 250 randomly selected sentences. For 84.31% of the sentences, the result set of the parser contains the correct parse.

  7. Automated vocabulary discovery for geo-parsing online epidemic intelligence.

    Science.gov (United States)

    Keller, Mikaela; Freifeld, Clark C; Brownstein, John S

    2009-11-24

    Automated surveillance of the Internet provides a timely and sensitive method for alerting on global emerging infectious disease threats. HealthMap is part of a new generation of online systems designed to monitor and visualize, on a real-time basis, disease outbreak alerts as reported by online news media and public health sources. HealthMap is of specific interest for national and international public health organizations and international travelers. A particular task that makes such a surveillance useful is the automated discovery of the geographic references contained in the retrieved outbreak alerts. This task is sometimes referred to as "geo-parsing". A typical approach to geo-parsing would demand an expensive training corpus of alerts manually tagged by a human. Given that human readers perform this kind of task by using both their lexical and contextual knowledge, we developed an approach which relies on a relatively small expert-built gazetteer, thus limiting the need of human input, but focuses on learning the context in which geographic references appear. We show in a set of experiments, that this approach exhibits a substantial capacity to discover geographic locations outside of its initial lexicon. The results of this analysis provide a framework for future automated global surveillance efforts that reduce manual input and improve timeliness of reporting.

  8. Automated vocabulary discovery for geo-parsing online epidemic intelligence

    Directory of Open Access Journals (Sweden)

    Freifeld Clark C

    2009-11-01

Background: Automated surveillance of the Internet provides a timely and sensitive method for alerting on global emerging infectious disease threats. HealthMap is part of a new generation of online systems designed to monitor and visualize, on a real-time basis, disease outbreak alerts as reported by online news media and public health sources. HealthMap is of specific interest for national and international public health organizations and international travelers. A particular task that makes such surveillance useful is the automated discovery of the geographic references contained in the retrieved outbreak alerts. This task is sometimes referred to as "geo-parsing". A typical approach to geo-parsing would demand an expensive training corpus of alerts manually tagged by a human. Results: Given that human readers perform this kind of task by using both their lexical and contextual knowledge, we developed an approach which relies on a relatively small expert-built gazetteer, thus limiting the need for human input, but focuses on learning the context in which geographic references appear. We show in a set of experiments that this approach exhibits a substantial capacity to discover geographic locations outside of its initial lexicon. Conclusion: The results of this analysis provide a framework for future automated global surveillance efforts that reduce manual input and improve timeliness of reporting.

  9. Parsing a cognitive task: a characterization of the mind's bottleneck.

    Directory of Open Access Journals (Sweden)

    Mariano Sigman

    2005-02-01

Parsing a mental operation into components, characterizing the parallel or serial nature of this flow, and understanding what each process ultimately contributes to response time are fundamental questions in cognitive neuroscience. Here we show how a simple theoretical model leads to an extended set of predictions concerning the distribution of response time and its alteration by simultaneous performance of another task. The model provides a synthesis of psychological refractory period and random-walk models of response time. It merely assumes that a task consists of three consecutive stages (perception, decision based on noisy integration of evidence, and response) and that the perceptual and motor stages can operate simultaneously with stages of another task, while the central decision process constitutes a bottleneck. We designed a number-comparison task that provided a thorough test of the model by allowing independent variations in number notation, numerical distance, response complexity, and temporal asynchrony relative to an interfering probe task of tone discrimination. The results revealed a parsing of the comparison task in which each variable affects only one stage. Numerical distance affects the integration process, which is the only step that cannot proceed in parallel and has a major contribution to response time variability. The other stages, mapping the numeral to an internal quantity and executing the motor response, can be carried out in parallel with another task. Changing the duration of these processes has no significant effect on the variance.
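The model's structure lends itself to a compact simulation: fixed-duration perceptual and motor stages around a noisy-accumulation decision stage, with a second task's decision queuing behind the first. All durations, drift rates and the threshold below are hypothetical placeholders, chosen only to illustrate the bottleneck account, not fitted values from the paper:

```python
import random

def decision_time(drift, threshold=1.0, noise=1.0, dt=0.001):
    """First-passage time of a noisy evidence accumulator (central stage)."""
    x, t = 0.0, 0.0
    while x < threshold:
        x += drift * dt + noise * random.gauss(0.0, dt ** 0.5)
        t += dt
    return t

def dual_task_rts(soa, perc=0.20, motor=0.15, drift1=4.0, drift2=3.0):
    """Response times for two tasks separated by a stimulus onset asynchrony.
    Perception and motor stages run in parallel across tasks; the two
    decision stages share a single serial bottleneck."""
    d1_end = perc + decision_time(drift1)
    rt1 = d1_end + motor
    d2_start = max(soa + perc, d1_end)   # task 2's decision may have to wait
    rt2 = d2_start + decision_time(drift2) + motor - soa
    return rt1, rt2

# At short SOA, RT2 is inflated by the bottleneck (the PRP effect).
print(dual_task_rts(soa=0.1), dual_task_rts(soa=1.0))
```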

  10. Relative clauses as a benchmark for Minimalist parsing

    Directory of Open Access Journals (Sweden)

    Thomas Graf

    2017-07-01

    Full Text Available Minimalist grammars have been used recently in a series of papers to explain well-known contrasts in human sentence processing in terms of subtle structural differences. These proposals combine a top-down parser with complexity metrics that relate parsing difficulty to memory usage. So far, though, there has been no large-scale exploration of the space of viable metrics. Building on this earlier work, we compare the ability of 1600 metrics to derive several processing effects observed with relative clauses, many of which have been proven difficult to unify. We show that among those 1600 candidates, a few metrics (and only a few) can provide a unified account of all these contrasts. This is a welcome result for two reasons: First, it provides a novel account of extensively studied psycholinguistic data. Second, it significantly limits the number of viable metrics that may be applied to other phenomena, thus reducing theoretical indeterminacy.

  11. Motion based parsing for video from observational psychology

    Science.gov (United States)

    Kokaram, Anil; Doyle, Erika; Lennon, Daire; Joyeux, Laurent; Fuller, Ray

    2006-01-01

    In psychology it is common to conduct studies involving the observation of humans undertaking some task. The sessions are typically recorded on video and used for subjective visual analysis. The subjective analysis is tedious and time-consuming, not only because much useless video material is recorded but also because subjective measures of human behaviour are not necessarily repeatable. This paper presents tools using content-based video analysis that allow automated parsing of video from one such study involving dyslexia. The tools rely on implicit measures of human motion that can be generalised to other applications in the domain of human observation. Results comparing quantitative assessment of human motion with subjective assessment are also presented, illustrating that the system is a useful scientific tool.

  12. Machine learning to parse breast pathology reports in Chinese.

    Science.gov (United States)

    Tang, Rong; Ouyang, Lizhi; Li, Clara; He, Yue; Griffin, Molly; Taghian, Alphonse; Smith, Barbara; Yala, Adam; Barzilay, Regina; Hughes, Kevin

    2018-01-29

    Large structured databases of pathology findings are valuable in deriving new clinical insights. However, they are labor intensive to create and generally require manual annotation. There has been some work in the bioinformatics community to support automating this work via machine learning in English. Our contribution is to provide an automated approach to construct such structured databases in Chinese, and to set the stage for extraction from other languages. We collected 2104 de-identified Chinese benign and malignant breast pathology reports from Hunan Cancer Hospital. Physicians with native Chinese proficiency reviewed the reports and annotated a variety of binary and numerical pathologic entities. After excluding 78 cases with a bilateral lesion in the same report, 1216 cases were used as a training set for the algorithm, which was then refined by 405 development cases. The natural language processing algorithm was tested using the remaining 405 cases to evaluate the machine learning outcome. The model was used to extract 13 binary entities and 8 numerical entities. When compared to physicians with native Chinese proficiency, the model showed a per-entity accuracy from 91 to 100% for all common diagnoses on the test set. The overall accuracy of binary entities was 98% and of numerical entities was 95%. In a per-report evaluation for binary entities with more than 100 training cases, 85% of all the testing reports were completely correct and 11% had an error in 1 out of 22 entities. We have demonstrated that Chinese breast pathology reports can be automatically parsed into structured data using standard machine learning approaches. The results of our study demonstrate that techniques effective in parsing English reports can be scaled to other languages.

  13. Using machine learning to parse breast pathology reports.

    Science.gov (United States)

    Yala, Adam; Barzilay, Regina; Salama, Laura; Griffin, Molly; Sollender, Grace; Bardia, Aditya; Lehman, Constance; Buckley, Julliette M; Coopey, Suzanne B; Polubriaginof, Fernanda; Garber, Judy E; Smith, Barbara L; Gadd, Michele A; Specht, Michelle C; Gudewicz, Thomas M; Guidi, Anthony J; Taghian, Alphonse; Hughes, Kevin S

    2017-01-01

    Extracting information from electronic medical records is a time-consuming and expensive process when done manually. Rule-based and machine learning techniques are two approaches to solving this problem. In this study, we trained a machine learning model on pathology reports to extract pertinent tumor characteristics, which enabled us to create a large database of attribute-searchable pathology reports. This database can be used to identify cohorts of patients with characteristics of interest. We collected a total of 91,505 breast pathology reports from three Partners hospitals: Massachusetts General Hospital, Brigham and Women's Hospital, and Newton-Wellesley Hospital, covering the period from 1978 to 2016. We trained our system with annotations from two datasets, consisting of 6295 and 10,841 manually annotated reports. The system extracts 20 separate categories of information, including atypia types and various tumor characteristics such as receptors. We also report a learning curve analysis to show how much annotation our model needs to perform reasonably. The model accuracy was tested on 500 reports that did not overlap with the training set. The model achieved accuracy of 90% for correctly parsing all carcinoma and atypia categories for a given patient. The average accuracy for individual categories was 97%. Using this classifier, we created a database of 91,505 parsed pathology reports. Our learning curve analysis shows that the model can achieve reasonable results even when trained on a few annotations. We developed a user-friendly interface to the database that allows physicians to easily identify patients with target characteristics and export the matching cohort. This model has the potential to reduce the effort required for analyzing large amounts of data from medical records, and to minimize the cost and time required to glean scientific insight from these data.

  14. Legislative Bargaining and Incremental Budgeting

    OpenAIRE

    Dhammika Dharmapala

    2002-01-01

    The notion of 'incrementalism', formulated by Aaron Wildavsky in the 1960s, has been extremely influential in the public budgeting literature. In essence, it entails the claim that legislators engaged in budgetary policymaking accept past allocations, and decide only on the allocation of increments to revenue. Wildavsky explained incrementalism with reference to the cognitive limitations of lawmakers and their desire to reduce conflict. This paper uses a legislative bargaining framework to u...

  15. Chomsky-Schützenberger parsing for weighted multiple context-free languages

    Directory of Open Access Journals (Sweden)

    Tobias Denkinger

    2017-07-01

    Full Text Available We prove a Chomsky-Schützenberger representation theorem for multiple context-free languages weighted over complete commutative strong bimonoids. Using this representation we devise a parsing algorithm for a restricted form of those devices.

  16. Grammar-Based Specification and Parsing of Binary File Formats

    Directory of Open Access Journals (Sweden)

    William Underwood

    2012-03-01

    Full Text Available The capability to validate and view or play binary file formats, as well as to convert binary file formats to standard or current file formats, is critically important to the preservation of digital data and records. This paper describes the extension of context-free grammars from strings to binary files. Binary files are arrays of data types, such as long and short integers, floating-point numbers and pointers, as well as characters. The concept of an attribute grammar is extended to these context-free array grammars. This attribute grammar has been used to define a number of chunk-based and directory-based binary file formats. A parser generator has been used with some of these grammars to generate syntax checkers (recognizers) for validating binary file formats. Among the potential benefits of an attribute grammar-based approach to specification and parsing of binary file formats is that attribute grammars not only support format validation, but support generation of error messages during validation of format, validation of semantic constraints, attribute value extraction (characterization), generation of viewers or players for file formats, and conversion to current or standard file formats. The significance of these results is that with these extensions to core computer science concepts, traditional parser/compiler technologies can potentially be used as a part of a general, cost effective curation strategy for binary file formats.
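
    The recognizer side of this idea is easy to sketch for a minimal chunk-based format (hypothetical layout: a 4-byte ASCII tag, a little-endian uint32 length, then the payload; a grammar-generated parser would add attribute and semantic-constraint checks on top):

        import struct
        from io import BytesIO

        def parse_chunks(stream):
            """Validate and collect chunks from a minimal RIFF-like chunk stream."""
            chunks = []
            while True:
                header = stream.read(8)
                if not header:
                    return chunks                      # clean end of file
                if len(header) < 8:
                    raise ValueError("truncated chunk header")
                tag, length = struct.unpack("<4sI", header)
                payload = stream.read(length)
                if len(payload) < length:
                    raise ValueError(f"chunk {tag!r}: payload shorter than declared")
                chunks.append((tag.decode("ascii"), payload))

        data = (b"HEAD" + struct.pack("<I", 4) + b"\x01\x00\x00\x00"
                + b"DATA" + struct.pack("<I", 3) + b"abc")
        for tag, payload in parse_chunks(BytesIO(data)):
            print(tag, payload)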

  17. Parsing (malicious) pleasures: Schadenfreude and gloating at others’ adversity

    Directory of Open Access Journals (Sweden)

    Colin Wayne Leach

    2015-02-01

    Full Text Available We offer the first empirical comparison of the pleasure in seeing (i.e., schadenfreude) and in causing (i.e., gloating) others’ adversity. In Study 1, we asked participants to recall and report on an (individual or group) episode of pleasure that conformed to our formal definition of schadenfreude, gloating, pride, or joy, without reference to an emotion word. Schadenfreude and gloating were distinct in the situational features of the episode, participants’ appraisals of it, and their expressions of pleasure (e.g., smiling, boasting). In Study 2, we had participants imagine being in an (individual or group) emotion episode designed to fit our conceptualization of schadenfreude or gloating. Individual and group versions of the emotions did not differ much in either study. However, the two pleasures differed greatly in their situational features, appraisals, experience, and expression. This parsing of the particular pleasures of schadenfreude and gloating brings nuance to the study of (malicious) pleasure, which tends to be less finely conceptualized and examined than displeasure despite its importance to social relations.

  18. Toward the Soundness of Sense Structure Definitions in Thesaurus-Dictionaries. Parsing Problems and Solutions

    Directory of Open Access Journals (Sweden)

    Neculai Curteanu

    2012-10-01

    Full Text Available In this paper we point out some difficult problems of thesaurus-dictionary entry parsing, relying on the parsing technology of SCD (Segmentation-Cohesion-Dependency) configurations, successfully applied to six of the largest thesauri: Romanian (2), French, German (2), and Russian. Challenging problems: (a) intricate and/or recursive structures of the lexicographic segments met in the entries of certain thesauri; (b) cyclicity (recursive calls) of some sense marker classes on marker sequences; (c) establishing the hypergraph-driven dependencies between all the atomic and non-atomic sense definitions. The classical approach to solving these parsing problems is hard mainly because of the depth-first search of sense definitions and markers, the substantial complexity of entries, and the dynamic sense-tree construction embodied within these parsers. SCD-based parsing solutions: (a) the SCD parsing method is a procedural tool, completely formal-grammar-free, handling the recursive structure of the lexicographic segments by procedural non-recursive calls performed on the SCD parsing configurations of the entry structure. (b) For dealing with cyclicity (recursive calls) between secondary sense markers and the sense enumeration markers, we proposed the Enumeration Closing Condition, sometimes coupled with New_Paragraphs typographic markers transformed into numeral sense enumeration. (c) These problems, their lexicographic modeling, and the parsing solutions are addressed both to dictionary parser programmers, to experience the SCD-based parsing method, and to lexicographers and thesauri designers, for tailoring balanced lexical-semantic granularities and sounder sense-tree definitions of dictionary entries.

  19. FEM Simulation of Incremental Shear

    International Nuclear Information System (INIS)

    Rosochowski, Andrzej; Olejnik, Lech

    2007-01-01

    A popular way of producing ultrafine grained metals on a laboratory scale is severe plastic deformation. This paper introduces a new severe plastic deformation process of incremental shear. A finite element method simulation is carried out for various tool geometries and process kinematics. It has been established that for the successful realisation of the process the inner radius of the channel as well as the feeding increment should be approximately 30% of the billet thickness. The angle at which the reciprocating die works the material can be 30°. When compared to equal channel angular pressing, incremental shear shows basic similarities in the mode of material flow and a few technological advantages which make it an attractive alternative to the known severe plastic deformation processes. The most promising characteristic of incremental shear is the possibility of processing very long billets in a continuous way, which makes the process more industrially relevant.

  20. FDTD Stability: Critical Time Increment

    OpenAIRE

    Z. Skvor; L. Pauk

    2003-01-01

    A new approach suitable for determination of the maximal stable time increment for the Finite-Difference Time-Domain (FDTD) algorithm in common curvilinear coordinates, for general mesh shapes and certain types of boundaries is presented. The maximal time increment corresponds to a characteristic value of a Helmholtz equation that is solved by a finite-difference (FD) method. If this method uses exactly the same discretization as the given FDTD method (same mesh, boundary conditions, order of ...

  1. Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.

    Science.gov (United States)

    Kwak, Nojun

    2016-05-20

    Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally, because an ever-increasing kernel matrix must be treated as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed based on the observation that the centerization step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can directly be used in any incremental methods to implement a kernel version of the incremental methods. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are utilized for problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.
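
    The coordinate computation behind this can be sketched compactly. Assuming the uncentered construction described in the abstract (training coordinates from the eigendecomposition of the kernel matrix; a new sample's coordinates from a pseudo-inverse solve that leaves old coordinates unchanged), a minimal numpy sketch, not the paper's code:

        import numpy as np

        def rbf(a, b, gamma=0.5):
            return np.exp(-gamma * np.sum((a - b) ** 2))

        def npt_coordinates(X_train, gamma=0.5):
            """Training-sample coordinates in kernel feature space (uncentered).

            With K = U diag(L) U^T, the columns of diag(sqrt(L)) U^T reproduce K
            as inner products, so they serve as explicit coordinates.
            """
            n = len(X_train)
            K = np.array([[rbf(X_train[i], X_train[j], gamma) for j in range(n)]
                          for i in range(n)])
            L, U = np.linalg.eigh(K)
            L = np.clip(L, 0.0, None)        # guard against tiny negative eigenvalues
            return np.diag(np.sqrt(L)) @ U.T, U, L

        def incremental_coordinate(x_new, X_train, U, L, gamma=0.5):
            """Coordinate of a new sample without touching the old coordinates.

            Solves X^T y = k_new, i.e. y = diag(L)^(-1/2) U^T k_new (pseudo-inverse).
            """
            k = np.array([rbf(x, x_new, gamma) for x in X_train])
            inv_sqrt = np.where(L > 1e-12, 1.0 / np.sqrt(np.maximum(L, 1e-12)), 0.0)
            return np.diag(inv_sqrt) @ U.T @ k

        rng = np.random.default_rng(0)
        X = rng.normal(size=(5, 2))
        coords, U, L = npt_coordinates(X)
        K = np.array([[rbf(a, b) for b in X] for a in X])
        print(np.allclose(coords.T @ coords, K, atol=1e-8))  # True: coordinates reproduce K
        print(incremental_coordinate(rng.normal(size=2), X, U, L).shape)  # (5,)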

  2. Deep PDF parsing to extract features for detecting embedded malware.

    Energy Technology Data Exchange (ETDEWEB)

    Munson, Miles Arthur; Cross, Jesse S. (Missouri University of Science and Technology, Rolla, MO)

    2011-09-01

    The number of PDF files with embedded malicious code has risen significantly in the past few years. This is due to the portability of the file format, the ways Adobe Reader recovers from corrupt PDF files, the addition of many multimedia and scripting extensions to the file format, and many format properties the malware author may use to disguise the presence of malware. Current research focuses on executable, MS Office, and HTML formats. In this paper, several features and properties of PDF files are identified. Features are extracted using an instrumented open source PDF viewer. The feature descriptions of benign and malicious PDFs can be used to construct a machine learning model for detecting possible malware in future PDF files. The detection rate of PDF malware by current antivirus software is very low. A PDF file is easy to edit and manipulate because it is a text format, providing a low barrier to malware authors. Analyzing PDF files for malware is nonetheless difficult because of (a) the complexity of the formatting language, (b) the parsing idiosyncrasies in Adobe Reader, and (c) undocumented correction techniques employed in Adobe Reader. In May 2011, Esparza demonstrated that PDF malware could be hidden from 42 of 43 antivirus packages by combining multiple obfuscation techniques [4]. One reason current antivirus software fails is the ease of varying byte sequences in PDF malware, thereby rendering conventional signature-based virus detection useless. The compression and encryption functions produce sequences of bytes that are each functions of multiple input bytes. As a result, padding the malware payload with some whitespace before compression/encryption can change many of the bytes in the final payload. In this study we analyzed a corpus of 2591 benign and 87 malicious PDF files. While this corpus is admittedly small, it allowed us to test a system for collecting indicators of embedded PDF malware. We will call these indicators features throughout
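
    A static keyword count over the raw bytes illustrates the simplest kind of feature such a system might collect (a sketch only; the study's features come from an instrumented open-source PDF viewer, and the name list here is illustrative):

        import re
        from collections import OrderedDict

        # Name tokens often associated with active content in PDFs.
        SUSPICIOUS = [b"/JavaScript", b"/JS", b"/OpenAction", b"/Launch",
                      b"/EmbeddedFile", b"/AA", b"/ObjStm"]

        def pdf_features(raw):
            """Count suspicious name tokens and objects in raw PDF bytes."""
            feats = OrderedDict()
            for name in SUSPICIOUS:
                # \b keeps /JS from also matching inside /JavaScript.
                feats[name.decode()] = len(re.findall(re.escape(name) + rb"\b", raw))
            feats["obj_count"] = len(re.findall(rb"\bobj\b", raw))
            return feats

        sample = b"%PDF-1.4 1 0 obj << /OpenAction 2 0 R /JS (app.alert(1)) >> endobj"
        print(pdf_features(sample))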

  3. Incremental Visualizer for Visible Objects

    DEFF Research Database (Denmark)

    Bukauskas, Linas; Bøhlen, Michael Hanspeter

    This paper discusses the integration of database back-end and visualizer front-end into one tightly coupled system. The main aim we achieve is to reduce the data pipeline from database to visualization by using incremental data extraction of visible objects in fly-through scenarios. We also argue that passing only relevant data from the database will substantially reduce the overall load of the visualization system. We propose the system Incremental Visualizer for Visible Objects (IVVO), which considers visible objects and enables incremental visualization along the observer movement path. IVVO is the novel solution which allows data to be visualized and loaded on the fly from the database and which regards visibilities of objects. We run a set of experiments to show that IVVO is feasible in terms of I/O operations and CPU load. We consider the example of data which uses...

  4. Incremental Trust in Grid Computing

    DEFF Research Database (Denmark)

    Brinkløv, Michael Hvalsøe; Sharp, Robin

    2007-01-01

    This paper describes a comparative simulation study of some incremental trust and reputation algorithms for handling behavioural trust in large distributed systems. Two types of reputation algorithm (based on discrete and Bayesian evaluation of ratings) and two ways of combining direct trust and ...... of Grid computing systems....

  5. Convergent systems vs. incremental stability

    NARCIS (Netherlands)

    Rüffer, B.S.; Wouw, van de N.; Mueller, M.

    2013-01-01

    Two similar stability notions are considered; one is the long established notion of convergent systems, the other is the younger notion of incremental stability. Both notions require that any two solutions of a system converge to each other. Yet these stability concepts are different, in the sense
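
    For orientation, standard formulations of the two notions can be written as follows (a sketch; the precise definitions and technical conditions compared in the paper may differ):

        % Incremental stability: there is a class-KL function \beta such that any
        % two solutions of \dot{x} = f(x,u) approach each other:
        \exists \beta \in \mathcal{KL}\ \forall \xi_1, \xi_2, u, t \ge 0:\quad
        \lVert \phi(t,\xi_1,u) - \phi(t,\xi_2,u) \rVert
          \le \beta\bigl(\lVert \xi_1 - \xi_2 \rVert,\ t\bigr).
        % Convergence: for every bounded input u there exists a unique solution
        % \bar{x}_u(t) bounded on the whole time axis, and it is globally
        % asymptotically stable (all other solutions converge to it).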

  6. Generative re-ranking model for dependency parsing of Italian sentences

    NARCIS (Netherlands)

    Sangati, F.

    2009-01-01

    We present a general framework for dependency parsing of Italian sentences based on a combination of discriminative and generative models. We use a state-of-the-art discriminative model to obtain a k-best list of candidate structures for the test sentences, and use the generative model to compute

  7. Towards a Robuster Interpretive Parsing: learning from overt forms in Optimality Theory

    NARCIS (Netherlands)

    Biró, T.

    2013-01-01

    The input data to grammar learning algorithms often consist of overt forms that do not contain full structural descriptions. This lack of information may contribute to the failure of learning. Past work on Optimality Theory introduced Robust Interpretive Parsing (RIP) as a partial solution to this

  8. Single-View 3D Scene Reconstruction and Parsing by Attribute Grammar.

    Science.gov (United States)

    Liu, Xiaobai; Zhao, Yibiao; Zhu, Song-Chun

    2018-03-01

    In this paper, we present an attribute grammar for solving two coupled tasks: i) parsing a 2D image into semantic regions; and ii) recovering the 3D scene structures of all regions. The proposed grammar consists of a set of production rules, each describing a kind of spatial relation between planar surfaces in 3D scenes. These production rules are used to decompose an input image into a hierarchical parse graph representation where each graph node indicates a planar surface or a composite surface. Different from other stochastic image grammars, the proposed grammar augments each graph node with a set of attribute variables to depict scene-level global geometry, e.g., camera focal length, or local geometry, e.g., surface normal, contact lines between surfaces. These geometric attributes impose constraints between a node and its offspring in the parse graph. Under a probabilistic framework, we develop a Markov Chain Monte Carlo method to construct a parse graph that optimizes the 2D image recognition and 3D scene reconstruction purposes simultaneously. We evaluated our method on both public benchmarks and newly collected datasets. Experiments demonstrate that the proposed method is capable of achieving state-of-the-art scene reconstruction of a single image.

  9. Neural Semantic Parsing by Character-based Translation: Experiments with Abstract Meaning Representations

    NARCIS (Netherlands)

    van Noord, Rik; Bos, Johannes

    2017-01-01

    We evaluate the character-level translation method for neural semantic parsing on a large corpus of sentences annotated with Abstract Meaning Representations (AMRs). Using a sequence-to-sequence model, and some trivial preprocessing and postprocessing of AMRs, we obtain a baseline accuracy of 53.1

  10. ParseCNV integrative copy number variation association software with quality tracking.

    Science.gov (United States)

    Glessner, Joseph T; Li, Jin; Hakonarson, Hakon

    2013-03-01

    A number of copy number variation (CNV) calling algorithms exist; however, comprehensive software tools for CNV association studies are lacking. We describe ParseCNV, unique software that takes CNV calls and creates probe-based statistics for CNV occurrence in both case-control designs and in family-based studies, addressing both de novo and inheritance events, which are then summarized based on CNV regions (CNVRs). CNVRs are defined in a dynamic manner to allow for complex CNV overlap while maintaining a precise association region. Using this approach, we avoid the failure-to-converge and non-monotonic curve-fitting weaknesses of programs such as CNVtools and CNVassoc, and although Plink is easy to use, it only provides combined CNV-state probe-based statistics, not state-specific CNVRs. Existing CNV association methods do not provide any quality tracking information to filter confident associations, a key issue which is fully addressed by ParseCNV. In addition, uncertainty in CNV calls underlying CNV associations is evaluated to verify significant results, including CNV overlap profiles, genomic context, number of probes supporting the CNV and single-probe intensities. When optimal quality control parameters are followed using ParseCNV, 90% of CNVs validate by polymerase chain reaction, an often problematic stage because of inadequate significant association review. ParseCNV is freely available at http://parsecnv.sourceforge.net.
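
    A probe-based case/control statistic of the kind ParseCNV aggregates can be illustrated with a simple contingency test (a sketch with hypothetical counts; ParseCNV itself adds quality tracking, inheritance handling, and CNVR merging on top):

        from scipy.stats import fisher_exact

        def probe_association(case_carriers, n_cases, control_carriers, n_controls):
            """Association of CNV occurrence at one probe in a case-control design."""
            table = [[case_carriers, n_cases - case_carriers],
                     [control_carriers, n_controls - control_carriers]]
            odds_ratio, p_value = fisher_exact(table)
            return odds_ratio, p_value

        # Hypothetical probe: a deletion carried by 14/500 cases vs 2/500 controls.
        odds_ratio, p_value = probe_association(14, 500, 2, 500)
        print(f"OR = {odds_ratio:.1f}, p = {p_value:.2g}")
        # Adjacent probes with significant, consistent statistics would then be
        # summarized into a CNV region (CNVR) for reporting.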

  11. Introduction to special issue on machine learning approaches to shallow parsing

    NARCIS (Netherlands)

    Hammerton, J; Osborne, M; Armstrong, S; Daelemans, W

    2002-01-01

    This article introduces the problem of partial or shallow parsing (assigning partial syntactic structure to sentences) and explains why it is an important natural language processing (NLP) task. The complexity of the task makes Machine Learning an attractive option in comparison to the handcrafting

  12. Tackling Error Propagation through Reinforcement Learning: A Case of Greedy Dependency Parsing

    NARCIS (Netherlands)

    Le, M.N.; Fokkens, A.S.

    Error propagation is a common problem in NLP. Reinforcement learning explores erroneous states during training and can therefore be more robust when mistakes are made early in a process. In this paper, we apply reinforcement learning to greedy dependency parsing which is known to suffer from error

  13. Fuzzy context-free languages - Part 2: Recognition and parsing algorithms

    NARCIS (Netherlands)

    Asveld, P.R.J.

    2005-01-01

    In a companion paper [P.R.J. Asveld, Fuzzy context-free languages---Part 1: Generalized fuzzy context-free grammars, Theoret. Comp. Sci. (2005)] we used fuzzy context-free grammars in order to model grammatical errors resulting in erroneous inputs for robust recognizing and parsing algorithms for

  14. Integrating high dimensional bi-directional parsing models for gene mention tagging.

    Science.gov (United States)

    Hsu, Chun-Nan; Chang, Yu-Ming; Kuo, Cheng-Ju; Lin, Yu-Shi; Huang, Han-Shen; Chung, I-Fang

    2008-07-01

    Tagging gene and gene product mentions in scientific text is an important initial step of literature mining. In this article, we describe in detail our gene mention tagger, which participated in the BioCreative 2 challenge, and analyze what contributes to its good performance. Our tagger is based on the conditional random fields model (CRF), the most prevalent method for the gene mention tagging task in BioCreative 2. Our tagger is interesting because it accomplished the highest F-scores among CRF-based methods and second overall. Moreover, we obtained our results by mostly applying open source packages, making it easy to duplicate our results. We first describe in detail how we developed our CRF-based tagger. We designed a very high dimensional feature set that includes most of the information that may be relevant. We trained bi-directional CRF models with the same set of features, one applying forward parsing and the other backward, and integrated the two models based on the output scores and dictionary filtering. One of the most prominent factors that contributes to the good performance of our tagger is the integration of an additional backward parsing model. However, from the definition of CRF, it appears that a CRF model is symmetric and bi-directional parsing models will produce the same results. We show that due to different feature settings, a CRF model can be asymmetric and the feature setting for our tagger in BioCreative 2 not only produces different results but also gives backward parsing models a slight but constant advantage over the forward parsing model. To fully explore the potential of integrating bi-directional parsing models, we applied different asymmetric feature settings to generate many bi-directional parsing models and integrated them based on the output scores. Experimental results show that this integrated model can achieve an even higher F-score solely based on the training corpus for gene mention tagging. Data sets, programs and an on-line service of our gene
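
    The integration step can be sketched abstractly (hypothetical data structures; the actual system combines full CRF output scores and applies a curated dictionary filter):

        def integrate(forward, backward, threshold=1.0,
                      blacklist=frozenset({"the", "protein"})):
            """Merge gene-mention candidates from forward- and backward-parsing taggers.

            `forward` and `backward` map a candidate mention (start, end, text) to a
            confidence score; agreement between the two directional models boosts
            the combined score, and a small dictionary filter removes known false
            positives.
            """
            combined = dict(forward)
            for mention, score in backward.items():
                combined[mention] = combined.get(mention, 0.0) + score
            return sorted(m for m, s in combined.items()
                          if s >= threshold and m[2].lower() not in blacklist)

        fwd = {(0, 4, "BRCA1"): 0.9, (10, 13, "the"): 0.6}
        bwd = {(0, 4, "BRCA1"): 0.8, (20, 25, "TP53"): 0.7}
        print(integrate(fwd, bwd))  # [(0, 4, 'BRCA1')]: only the agreed span survives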

  15. Unmanned Maritime Systems Incremental Acquisition Approach

    Science.gov (United States)

    2016-12-01

    MBA professional report by Thomas Driscoll, Lieutenant. Approved for public release; distribution is unlimited. The purpose of this MBA report is to explore and understand the issues

  16. Incremental deformation: A literature review

    Directory of Open Access Journals (Sweden)

    Nasulea Daniel

    2017-01-01

    Full Text Available Nowadays customer requirements are permanently changing, and in line with them the tendency in modern industry is to implement flexible manufacturing processes. In recent decades metal forming has gained the attention of researchers and considerable changes have occurred. Because conventional metal forming processes are expensive and time-consuming to design and prepare when only a small number of parts is needed, manufacturers and researchers have become interested in flexible processes. One of the most investigated flexible processes in metal forming is incremental sheet forming (ISF). ISF is an advanced flexible manufacturing process which allows complex 3D products to be manufactured without expensive dedicated tools. In most cases an ISF process needs only the following: a simple tool, a fixing device for the sheet metal blank and a universal CNC machine. Using this process, axisymmetric parts can be manufactured, usually on a CNC lathe, but also complex asymmetrical parts using CNC milling machines, robots or dedicated equipment. This paper aims to present the current status of incremental sheet forming technologies in terms of process parameters and their influences, wall thickness distribution, springback effect, formability, surface quality and the current main research directions.

  17. Pippi — Painless parsing, post-processing and plotting of posterior and likelihood samples

    Science.gov (United States)

    Scott, Pat

    2012-11-01

    Interpreting samples from likelihood or posterior probability density functions is rarely as straightforward as it seems it should be. Producing publication-quality graphics of these distributions is often similarly painful. In this short note I describe pippi, a simple, publicly available package for parsing and post-processing such samples, as well as generating high-quality PDF graphics of the results. Pippi is easily and extensively configurable and customisable, both in its options for parsing and post-processing samples, and in the visual aspects of the figures it produces. I illustrate some of these using an existing supersymmetric global fit, performed in the context of a gamma-ray search for dark matter. Pippi can be downloaded and followed at http://github.com/patscott/pippi.

  18. Fetching and Parsing Data from the Web with OpenRefine

    Directory of Open Access Journals (Sweden)

    Evan Peter Williamson

    2017-08-01

    Full Text Available OpenRefine is a powerful tool for exploring, cleaning, and transforming data. An earlier Programming Historian lesson, "Cleaning Data with OpenRefine", introduced the basic functionality of Refine to efficiently discover and correct inconsistency in a data set. Building on those essential data wrangling skills, this lesson focuses on Refine's ability to fetch URLs and parse web content. Examples introduce some of the advanced features to transform and enhance a data set, including: fetching URLs using Refine; constructing URL queries to retrieve information from a simple web API; parsing HTML and JSON responses to extract relevant data; using array functions to manipulate string values; and using Jython to extend Refine's functionality. It will be helpful to have basic familiarity with OpenRefine, HTML, and programming concepts such as variables and loops to complete this lesson.

  19. Tackling Error Propagation through Reinforcement Learning: A Case of Greedy Dependency Parsing

    OpenAIRE

    Le, Minh; Fokkens, Antske

    2017-01-01

    Error propagation is a common problem in NLP. Reinforcement learning explores erroneous states during training and can therefore be more robust when mistakes are made early in a process. In this paper, we apply reinforcement learning to greedy dependency parsing which is known to suffer from error propagation. Reinforcement learning improves accuracy of both labeled and unlabeled dependencies of the Stanford Neural Dependency Parser, a high performance greedy parser, while maintaining its eff...

  20. Cross-Lingual Dependency Parsing with Late Decoding for Truly Low-Resource Languages

    OpenAIRE

    Schlichtkrull, Michael Sejr; Søgaard, Anders

    2017-01-01

    In cross-lingual dependency annotation projection, information is often lost during transfer because of early decoding. We present an end-to-end graph-based neural network dependency parser that can be trained to reproduce matrices of edge scores, which can be directly projected across word alignments. We show that our approach to cross-lingual dependency parsing is not only simpler, but also achieves an absolute improvement of 2.25% averaged across 10 languages compared to the previous state...

  1. Is human sentence parsing serial or parallel? Evidence from event-related brain potentials.

    Science.gov (United States)

    Hopf, Jens-Max; Bader, Markus; Meng, Michael; Bayer, Josef

    2003-01-01

    In this ERP study we investigate the processes that occur in syntactically ambiguous German sentences at the point of disambiguation. Whereas most psycholinguistic theories agree on the view that processing difficulties arise when parsing preferences are disconfirmed (so-called garden-path effects), important differences exist with respect to theoretical assumptions about the parser's recovery from a misparse. A key distinction can be made between parsers that compute all alternative syntactic structures in parallel (parallel parsers) and parsers that compute only a single preferred analysis (serial parsers). To distinguish empirically between parallel and serial parsing models, we compare ERP responses to garden-path sentences with ERP responses to truly ungrammatical sentences. Garden-path sentences contain a temporary and ultimately curable ungrammaticality, whereas truly ungrammatical sentences remain so permanently--a difference which gives rise to different predictions in the two classes of parsing architectures. At the disambiguating word, ERPs in both sentence types show negative shifts of similar onset latency, amplitude, and scalp distribution in an initial time window between 300 and 500 ms. In a following time window (500-700 ms), the negative shift to garden-path sentences disappears at right central parietal sites, while it continues in permanently ungrammatical sentences. These data are taken as evidence for a strictly serial parser. The absence of a difference in the early time window indicates that temporary and permanent ungrammaticalities trigger the same kind of parsing responses. Later differences can be related to successful reanalysis in garden-path but not in ungrammatical sentences. Copyright 2003 Elsevier Science B.V.

  2. Learning for Semantic Parsing with Kernels under Various Forms of Supervision

    Science.gov (United States)

    2007-08-01

    ...natural language sentences to their formal executable meaning representations. This is a challenging problem and is critical for developing computing... sentences are semantically tractable. This indicates that Geoquery is a more challenging domain for semantic parsing than ATIS. In the past, there have been a...

  3. FDTD Stability: Critical Time Increment

    Directory of Open Access Journals (Sweden)

    Z. Skvor

    2003-06-01

    Full Text Available A new approach suitable for determination of the maximal stable time increment for the Finite-Difference Time-Domain (FDTD) algorithm in common curvilinear coordinates, for general mesh shapes and certain types of boundaries is presented. The maximal time increment corresponds to a characteristic value of a Helmholtz equation that is solved by a finite-difference (FD) method. If this method uses exactly the same discretization as the given FDTD method (same mesh, boundary conditions, order of precision etc.), the maximal stable time increment is obtained from the highest characteristic value. The FD system is solved by an iterative method, which uses only slightly altered original FDTD formulae. The Courant condition yields a stable time increment, but in certain cases the maximum increment is slightly greater [2].
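
    On a uniform Cartesian mesh, the Courant condition mentioned above has the familiar closed form below (a standard result; the paper's point is that on curvilinear meshes the true limit instead follows from the highest characteristic value of the FD Helmholtz problem and can slightly exceed this bound):

        from math import sqrt

        C0 = 299_792_458.0  # speed of light in vacuum, m/s

        def courant_dt(dx, dy, dz, c=C0):
            """Maximal stable FDTD time step on a uniform Cartesian mesh.

            dt <= 1 / (c * sqrt(1/dx^2 + 1/dy^2 + 1/dz^2))
            """
            return 1.0 / (c * sqrt(dx**-2 + dy**-2 + dz**-2))

        print(f"{courant_dt(1e-3, 1e-3, 1e-3) * 1e12:.3f} ps")  # 1 mm cubic cells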

  4. Incremental Observer Relative Data Extraction

    DEFF Research Database (Denmark)

    Bukauskas, Linas; Bøhlen, Michael Hanspeter

    2004-01-01

    The visual exploration of large databases calls for a tight coupling of database and visualization systems. Current visualization systems typically fetch all the data and organize it in a scene tree that is then used to render the visible data. For immersive data explorations in a Cave or a Panorama, where an observer is in data space, this approach is far from optimal. A more scalable approach is to make the database system observer-aware and to restrict the communication between the database and visualization systems to the relevant data. In this paper VR-tree, an extension of the R-tree, is used to index visibility ranges of objects. We introduce a new operator for incremental Observer Relative data Extraction (iORDE). We propose the Volatile Access STructure (VAST), a lightweight main memory structure that is created on the fly and is maintained during visual data explorations. VAST...

  5. "gnparser": a powerful parser for scientific names based on Parsing Expression Grammar.

    Science.gov (United States)

    Mozzherin, Dmitry Y; Myltsev, Alexander A; Patterson, David J

    2017-05-26

    Scientific names in biology act as universal links. They allow us to cross-reference information about organisms globally. However, variations in spelling of scientific names greatly diminish their ability to interconnect data. Such variations may include abbreviations, annotations, misspellings, etc. Authorship is a part of a scientific name and may also differ significantly. To match all possible variations of a name we need to divide them into their elements and classify each element according to its role. We refer to this as 'parsing' the name. Parsing categorizes a name's elements into those that are stable and those that are prone to change. Names are matched first by combining them according to their stable elements. Matches are then refined by examining their varying elements. This two-stage process dramatically improves the number and quality of matches. It is especially useful for the automatic data exchange within the context of "Big Data" in biology. We introduce the Global Names Parser (gnparser). It is a tool written in the Scala language (a language for the Java Virtual Machine) to parse scientific names. It is based on a Parsing Expression Grammar. The parser can be applied to scientific names of any complexity. It assigns a semantic meaning (such as genus name, species epithet, rank, year of publication, authorship, annotations, etc.) to all elements of a name. It is able to work with nested structures as in the names of hybrids. gnparser performs with ≈99% accuracy and processes 30 million name-strings/hour per CPU thread. The gnparser library is compatible with Scala, Java, R, Jython, and JRuby. The parser can be used as a command line application, as a socket server, a web-app or as a RESTful HTTP-service. It is released under an Open Source MIT license. Global Names Parser (gnparser) is a fast, high precision tool for biodiversity informaticians and biologists working with large numbers of scientific names. It can replace expensive and error
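
    The flavor of such parsing is easy to convey with a toy grammar (a regular-expression sketch for plain binomials with optional authorship and year; gnparser's actual PEG covers hybrids, ranks, annotations, and far more):

        import re

        NAME = re.compile(r"""
            (?P<genus>[A-Z][a-z]+)\s+
            (?P<epithet>[a-z]+)
            (?:\s+(?P<author>[A-Z][A-Za-z.\s]+?))?
            (?:,?\s+(?P<year>\d{4}))?$
        """, re.VERBOSE)

        def parse_name(s):
            """Assign roles (genus, epithet, author, year) to a name's elements."""
            m = NAME.match(s.strip())
            return {k: v for k, v in m.groupdict().items() if v} if m else None

        for s in ["Homo sapiens Linnaeus, 1758", "Parus major", "not a name"]:
            print(parse_name(s))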

  6. On excursion increments in heartbeat dynamics

    International Nuclear Information System (INIS)

    Guzmán-Vargas, L.; Reyes-Ramírez, I.; Hernández-Pérez, R.

    2013-01-01

    We study correlation properties of excursion increments of heartbeat time series from healthy subjects and heart failure patients. We construct the excursion time based on the original heartbeat time series, representing the time employed by the walker to return to the local mean value. Next, detrended fluctuation analysis and the fractal dimension method are applied to the magnitude and sign of the increments between successive excursion times for the two groups. Our results show that for the magnitude series of excursion increments both groups display long-range correlations with similar correlation exponents, indicating that large (small) increments (decrements) are more likely to be followed by large (small) increments (decrements). For the sign sequences and for both groups, we find that increments are short-range anti-correlated, which is noticeable under heart failure conditions.
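
    A minimal reading of this construction can be sketched as follows (assumptions: mean-crossing times of the mean-centered series stand in for the walker's returns, and a random-walk surrogate stands in for RR-interval data):

        import numpy as np

        def excursion_times(series):
            """Durations between successive crossings of the series mean."""
            centered = np.asarray(series, dtype=float) - np.mean(series)
            signs = np.sign(centered)
            crossings = np.where(np.diff(signs) != 0)[0]
            return np.diff(crossings)

        rng = np.random.default_rng(42)
        rr = np.cumsum(rng.normal(size=4000)) * 0.01 + 0.8   # surrogate RR series
        tau = excursion_times(rr)
        increments = np.diff(tau)
        magnitude, sign = np.abs(increments), np.sign(increments)
        # DFA or fractal-dimension analysis would then be applied to `magnitude`
        # and `sign` separately, as in the study.
        print(len(tau), magnitude[:5], sign[:5])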

  7. Increment memory module for spectrometric data recording

    International Nuclear Information System (INIS)

    Zhuchkov, A.A.; Myagkikh, A.I.

    1988-01-01

    An incremental memory unit designed to record differential energy spectra of nuclear radiation is described. Using ROM as the incrementing device has made it possible to reduce the number of elements and to simplify information readout from the unit. The memory unit is organized as 2048 channels of 12 bits. The device connects directly to the bus of microprocessor systems similar to the KR580. The maximum incrementation time is 3 μs. It is possible to use this unit in multichannel counting mode.

  8. Small Diameter Bomb Increment II (SDB II)

    Science.gov (United States)

    2015-12-01

    Selected Acquisition Report (SAR), RCS: DD-A&T(Q&A)823-439, Small Diameter Bomb Increment II (SDB II), as of the FY 2017 President's Budget. DoD Component: Air Force; joint participant: Department of the Navy. Mission and Description: Small Diameter Bomb Increment II (SDB II) is a joint interest United States Air Force (USAF) and Department of the Navy

  9. Incremental Query Rewriting with Resolution

    Science.gov (United States)

    Riazanov, Alexandre; Aragão, Marcelo A. T.

    We address the problem of semantic querying of relational databases (RDB) modulo knowledge bases using very expressive knowledge representation formalisms, such as full first-order logic or its various fragments. We propose to use a resolution-based first-order logic (FOL) reasoner for computing schematic answers to deductive queries, with the subsequent translation of these schematic answers to SQL queries which are evaluated using a conventional relational DBMS. We call our method incremental query rewriting, because an original semantic query is rewritten into a (potentially infinite) series of SQL queries. In this chapter, we outline the main idea of our technique: using abstractions of databases and constrained clauses for deriving schematic answers, and we provide completeness and soundness proofs to justify the applicability of this technique to the case of resolution for FOL without equality. The proposed method can be directly used with regular RDBs, including legacy databases. Moreover, we propose it as a potential basis for an efficient Web-scale semantic search technology.

  10. Parsing and Quantification of Raw Orbitrap Mass Spectrometer Data Using RawQuant.

    Science.gov (United States)

    Kovalchik, Kevin A; Moggridge, Sophie; Chen, David D Y; Morin, Gregg B; Hughes, Christopher S

    2018-06-01

    Effective analysis of protein samples by mass spectrometry (MS) requires careful selection and optimization of a range of experimental parameters. As the output from the primary detection device, the "raw" MS data file can be used to gauge the success of a given sample analysis. However, the closed-source nature of the standard raw MS file can complicate effective parsing of the data contained within. To ease and increase the range of analyses possible, the RawQuant tool was developed to enable parsing of raw MS files derived from Thermo Orbitrap instruments to yield meta and scan data in an openly readable text format. RawQuant can be commanded to export user-friendly files containing MS1, MS2, and MS3 metadata as well as matrices of quantification values based on isobaric tagging approaches. In this study, the utility of RawQuant is demonstrated in several scenarios: (1) reanalysis of shotgun proteomics data for the identification of the human proteome, (2) reanalysis of experiments utilizing isobaric tagging for whole-proteome quantification, and (3) analysis of a novel bacterial proteome and synthetic peptide mixture for assessing quantification accuracy when using isobaric tags. Together, these analyses successfully demonstrate RawQuant for the efficient parsing and quantification of data from raw Thermo Orbitrap MS files acquired in a range of common proteomics experiments. In addition, the individual analyses using RawQuant highlight parametric considerations in the different experimental sets and suggest targetable areas to improve depth of coverage in identification-focused studies and quantification accuracy when using isobaric tags.

  11. A hierarchical methodology for urban facade parsing from TLS point clouds

    Science.gov (United States)

    Li, Zhuqiang; Zhang, Liqiang; Mathiopoulos, P. Takis; Liu, Fangyu; Zhang, Liang; Li, Shuaipeng; Liu, Hao

    2017-01-01

    The effective and automated parsing of building facades from terrestrial laser scanning (TLS) point clouds of urban environments is an important research topic in the GIS and remote sensing fields. It is also challenging because of the complexity and great variety of the available 3D building facade layouts, as well as the noise and missing data in the input TLS point clouds. In this paper, we introduce a novel methodology for the accurate and computationally efficient parsing of urban building facades from TLS point clouds. The main novelty of the proposed methodology is that it is a systematic and hierarchical approach that considers, in an adaptive way, the semantic and underlying structures of the urban facades for segmentation and subsequent accurate modeling. Firstly, the available input point cloud is decomposed into depth planes based on a data-driven method; such layer decomposition enables similarity detection in each depth plane layer. Secondly, the labeling of the facade elements is performed using the SVM classifier in combination with our proposed BieS-ScSPM algorithm. The labeling outcome is then augmented with weak architectural knowledge. Thirdly, least-squares fitted normalized gray accumulative curves are applied to detect regular structures, and a binarization dilation extraction algorithm is used to partition facade elements. A dynamic line-by-line division is further applied to extract the boundaries of the elements. The 3D geometrical facade models are then reconstructed by optimizing facade elements across depth plane layers. We have evaluated the performance of the proposed method using several TLS facade datasets. Qualitative and quantitative performance comparisons with several other state-of-the-art methods dealing with the same facade parsing problem have demonstrated its superiority in performance and its effectiveness in improving segmentation accuracy.

  12. PyParse: a semiautomated system for scoring spoken recall data.

    Science.gov (United States)

    Solway, Alec; Geller, Aaron S; Sederberg, Per B; Kahana, Michael J

    2010-02-01

    Studies of human memory often generate data on the sequence and timing of recalled items, but scoring such data using conventional methods is difficult or impossible. We describe a Python-based semiautomated system that greatly simplifies this task. This software, called PyParse, can easily be used in conjunction with many common experiment authoring systems. Scored data is output in a simple ASCII format and can be accessed with the programming language of choice, allowing for the identification of features such as correct responses, prior-list intrusions, extra-list intrusions, and repetitions.

  13. Mobile Backend as a Service: the pros and cons of parse

    OpenAIRE

    Nguyen, Phu

    2016-01-01

    Using a pre-built backend for an application is an affordable and swift approach to prototyping new application ideas. Mobile Backend as a Service (MBaaS) is the term for pre-built backend systems that developers can use. However, it is advisable to understand the pros and the cons of an MBaaS before deciding to use it. The aim of the thesis was to determine the advantages and disadvantages of using Parse, a provider of mobile backend as a service, in application development. Parse’s defin...

  14. (In)variability in the Samoan syntax/prosody interface and consequences for syntactic parsing

    Directory of Open Access Journals (Sweden)

    Kristine M. Yu

    2017-10-01

    Full Text Available While it has long been clear that prosody should be part of the grammar influencing the action of the syntactic parser, how to bring prosody into computational models of syntactic parsing has remained unclear. The challenge is that prosodic information in the speech signal is the result of the interaction of a multitude of conditioning factors. From this output, how can we factor out the contribution of syntax to conditioning prosodic events? And if we are able to do that factorization and define a production model from the syntactic grammar to a prosodified utterance, how can we then define a comprehension model based on that production model? In this case study of the Samoan morphosyntax-prosody interface, we show how to factor out the influence of syntax on prosody in empirical work and confirm there is invariable morphosyntactic conditioning of high edge tones. Then, we show how this invariability can be precisely characterized and used by a parsing model that factors the various influences of morphosyntax on tonal events. We expect that models of these kinds can be extended to more comprehensive perspectives on Samoan and to languages where the syntax/prosody coupling is more complex.

  15. FastaValidator: an open-source Java library to parse and validate FASTA formatted sequences.

    Science.gov (United States)

    Waldmann, Jost; Gerken, Jan; Hankeln, Wolfgang; Schweer, Timmy; Glöckner, Frank Oliver

    2014-06-14

    Advances in sequencing technologies challenge the efficient importing and validation of FASTA formatted sequence data, which is still a prerequisite for most bioinformatic tools and pipelines. Comparative analysis of commonly used Bio*-frameworks (BioPerl, BioJava and Biopython) shows that their scalability and accuracy are hampered. FastaValidator represents a platform-independent, standardized, light-weight software library written in the Java programming language. It targets computer scientists and bioinformaticians writing software which needs to parse large amounts of sequence data quickly and accurately. For end users, FastaValidator includes an interactive out-of-the-box validation of FASTA formatted files, as well as a non-interactive mode designed for high-throughput validation in software pipelines. The accuracy and performance of the FastaValidator library qualify it for large data sets, such as those commonly produced by massively parallel (NGS) technologies. It offers scientists a fast, accurate and standardized method for parsing and validating FASTA formatted sequence data.
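
    A minimal well-formedness check of the kind such a validator performs might look as follows (a Python sketch for illustration; FastaValidator itself is a Java library and also enforces alphabet-specific rules):

        def validate_fasta(lines):
            """Check basic FASTA structure: a leading '>' header, at least one
            sequence line per header, and no '>' inside sequence data."""
            errors, header, have_seq = [], None, True
            for n, line in enumerate(lines, 1):
                line = line.rstrip("\n")
                if line.startswith(">"):
                    if header is not None and not have_seq:
                        errors.append(f"line {n}: header {header!r} has no sequence")
                    header, have_seq = line, False
                elif not line:
                    continue                      # blank lines are tolerated
                elif header is None:
                    errors.append(f"line {n}: sequence data before first header")
                elif ">" in line:
                    errors.append(f"line {n}: '>' inside sequence data")
                else:
                    have_seq = True
            if header is not None and not have_seq:
                errors.append(f"header {header!r} has no sequence")
            return errors

        sample = [">seq1", "ACGTACGT", ">seq2", "", ">seq3", "ACGT"]
        print(validate_fasta(sample) or "valid")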

  16. Parsing partial molar volumes of small molecules: a molecular dynamics study.

    Science.gov (United States)

    Patel, Nisha; Dubins, David N; Pomès, Régis; Chalikian, Tigran V

    2011-04-28

    We used molecular dynamics (MD) simulations in conjunction with the Kirkwood-Buff theory to compute the partial molar volumes for a number of small solutes of various chemical natures. We repeated our computations using modified pair potentials, first, in the absence of the Coulombic term and, second, in the absence of the Coulombic and the attractive Lennard-Jones terms. Comparison of our results with experimental data and the volumetric results of Monte Carlo simulation with hard sphere potentials and scaled particle theory-based computations led us to conclude that, for small solutes, the partial molar volume computed with the Lennard-Jones potential in the absence of the Coulombic term nearly coincides with the cavity volume. On the other hand, MD simulations carried out with the pair interaction potentials containing only the repulsive Lennard-Jones term produce unrealistically large partial molar volumes of solutes that are close to their excluded volumes. Our simulation results are in good agreement with the reported schemes for parsing partial molar volume data on small solutes. In particular, our determined interaction volumes and the thickness of the thermal volume for individual compounds are in good agreement with empirical estimates. This work is the first computational study that supports and lends credence to the practical algorithms of parsing partial molar volume data that are currently in use for molecular interpretations of volumetric data.

  17. Process of 3D wireless decentralized sensor deployment using parsing crossover scheme

    Directory of Open Access Journals (Sweden)

    Albert H.R. Ko

    2015-07-01

    Full Text Available A Wireless Sensor Network (WSN) usually consists of numerous wireless devices deployed in a region of interest, each able to collect and process environmental information and communicate with neighboring devices. It can thus be regarded as a Multi-Agent System for territorial security, where individual agents cooperate with each other to avoid duplication of effort and to exploit other agents' capacities. The problem of sensor deployment becomes non-trivial when we consider environmental factors, such as terrain elevations. Because all sensors are homogeneous, the chromosomes that encode sensor positions are interchangeable, and conventional crossover schemes such as uniform crossover would cause some redundancy as well as over-concentration in certain specific geographical areas. We propose a Parsing Crossover Scheme that intends to reduce redundancy and ease geographical concentration patterns in an effort to facilitate the search. The proposed parsing crossover method demonstrates better performance than that of uniform crossover under different terrain irregularities.

  18. Attribute And-Or Grammar for Joint Parsing of Human Pose, Parts and Attributes.

    Science.gov (United States)

    Park, Seyoung; Nie, Xiaohan; Zhu, Song-Chun

    2017-07-25

    This paper presents an attribute and-or grammar (A-AOG) model for jointly inferring human body pose and human attributes in a parse graph with attributes augmented to nodes in the hierarchical representation. In contrast to other popular methods in the current literature that train separate classifiers for poses and individual attributes, our method explicitly represents the decomposition and articulation of body parts, and accounts for the correlations between poses and attributes. The A-AOG model is an amalgamation of three traditional grammar formulations: (i) phrase structure grammar representing the hierarchical decomposition of the human body from whole to parts; (ii) dependency grammar modeling the geometric articulation by a kinematic graph of the body pose; and (iii) attribute grammar accounting for the compatibility relations between different parts in the hierarchy so that their appearances follow a consistent style. The parse graph outputs human detection, pose estimation, and attribute prediction simultaneously, which are intuitive and interpretable. We conduct experiments on two tasks on two datasets, and experimental results demonstrate the advantage of joint modeling in comparison with computing poses and attributes independently. Furthermore, our model obtains better performance over existing methods for both pose estimation and attribute prediction tasks.

  19. Efficient Incremental Checkpointing of Java Programs

    DEFF Research Database (Denmark)

    Lawall, Julia Laetitia; Muller, Gilles

    2000-01-01

    This paper investigates the optimization of language-level checkpointing of Java programs. First, we describe how to systematically associate incremental checkpoints with Java classes. While being safe, the genericness of this solution induces substantial execution overhead. Second, to solve...

  20. Sleep Disrupts High-Level Speech Parsing Despite Significant Basic Auditory Processing.

    Science.gov (United States)

    Makov, Shiri; Sharon, Omer; Ding, Nai; Ben-Shachar, Michal; Nir, Yuval; Zion Golumbic, Elana

    2017-08-09

    The extent to which the sleeping brain processes sensory information remains unclear. This is particularly true for continuous and complex stimuli such as speech, in which information is organized into hierarchically embedded structures. Recently, novel metrics for assessing the neural representation of continuous speech have been developed using noninvasive brain recordings that have thus far only been tested during wakefulness. Here we investigated, for the first time, the sleeping brain's capacity to process continuous speech at different hierarchical levels using a newly developed Concurrent Hierarchical Tracking (CHT) approach that allows monitoring the neural representation and processing-depth of continuous speech online. Speech sequences were compiled with syllables, words, phrases, and sentences occurring at fixed time intervals such that different linguistic levels correspond to distinct frequencies. This enabled us to distinguish their neural signatures in brain activity. We compared the neural tracking of intelligible versus unintelligible (scrambled and foreign) speech across states of wakefulness and sleep using high-density EEG in humans. We found that neural tracking of stimulus acoustics was comparable across wakefulness and sleep and similar across all conditions regardless of speech intelligibility. In contrast, neural tracking of higher-order linguistic constructs (words, phrases, and sentences) was only observed for intelligible speech during wakefulness and could not be detected at all during nonrapid eye movement or rapid eye movement sleep. These results suggest that, whereas low-level auditory processing is relatively preserved during sleep, higher-level hierarchical linguistic parsing is severely disrupted, thereby revealing the capacity and limits of language processing during sleep. SIGNIFICANCE STATEMENT Despite the persistence of some sensory processing during sleep, it is unclear whether high-level cognitive processes such as speech

  1. Nursing care to people with hypertension based on Parse's theory

    Directory of Open Access Journals (Sweden)

    Fabíola Vládia Freire da Silva

    2013-03-01

    Full Text Available This study proposes nursing care, based on Parse's principles, for people with hypertension seen in the Family Health Strategy. It is a descriptive, qualitative study, carried out from March to May 2011 with fourteen nurses in the municipality of Itapajé, Ceará. Semi-structured interviews were used to collect the information, and the subjects' discourse was used for the analysis. Three categories emerged, based on Parse's principles: multidimensionality of meanings - the nurse leads to the reporting of meanings; synchronization of rhythms - the nurse helps to identify harmony and disharmony; mobilization of transcendence - the nurse guides the plan of change. The discourses came close to what Parse theorized when the nurses cited seeking humanized care, with family participation, appreciation of autonomy, and the use of health education with individual guidance. Implementing nursing care based on Parse's theory for people with hypertension was found to be feasible.

  2. GFFview: A Web Server for Parsing and Visualizing Annotation Information of Eukaryotic Genome.

    Science.gov (United States)

    Deng, Feilong; Chen, Shi-Yi; Wu, Zhou-Lin; Hu, Yongsong; Jia, Xianbo; Lai, Song-Jia

    2017-10-01

    Owing to the wide application of RNA sequencing (RNA-seq) technology, more and more eukaryotic genomes have been extensively annotated, including gene structure, alternative splicing, and noncoding loci. Genome annotation information is prevalently stored as plain text in the General Feature Format (GFF), which can be hundreds or thousands of megabytes in size. Manipulating GFF files is therefore a challenge for biologists without bioinformatics skills. In this study, we provide a web server (GFFview) for parsing the annotation information of eukaryotic genomes and generating statistical descriptions of six indices for visualization. GFFview is very useful for investigating the quality and differences of de novo assembled transcriptomes in RNA-seq studies.
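
    GFF itself is a simple nine-column, tab-separated text format, so the kind of parsing GFFview performs can be sketched in a few lines of Python (field names follow the GFF3 specification; this is a minimal sketch, not GFFview's internal code):

      import gzip
      from collections import Counter

      def parse_gff(path):
          """Yield one dict per feature line of a (possibly gzipped) GFF file."""
          opener = gzip.open if path.endswith(".gz") else open
          with opener(path, "rt") as handle:
              for line in handle:
                  if line.startswith("#") or not line.strip():
                      continue  # skip comments, directives and blank lines
                  seqid, source, ftype, start, end, score, strand, phase, attrs = \
                      line.rstrip("\n").split("\t")
                  attributes = dict(f.split("=", 1) for f in attrs.split(";") if "=" in f)
                  yield {"seqid": seqid, "type": ftype, "start": int(start),
                         "end": int(end), "strand": strand, "attributes": attributes}

      # e.g. one plausible summary index: feature counts per type
      # counts = Counter(f["type"] for f in parse_gff("genome.gff3"))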

  3. “Less of the Heroine than the Woman”: Parsing Gender in the British Novel

    Directory of Open Access Journals (Sweden)

    Susan Carlile

    2017-06-01

    Full Text Available This essay offers two methods that will help students resist the temptation to judge eighteenth-century novels by twenty-first-century standards. These methods prompt students to parse the question of whether female protagonists in novels—in this case, Daniel Defoe’s Roxana (1724), Samuel Johnson’s Rasselas (1759), and Charlotte Lennox’s Sophia (1762)—are portrayed as perfect models or as complex humans. The first method asks them to engage with definitions of the term “heroine,” and the second method uses word clouds to extend their thinking about the complexity of embodying a mid-eighteenth-century female identity.

  4. A Python package for parsing, validating, mapping and formatting sequence variants using HGVS nomenclature.

    Science.gov (United States)

    Hart, Reece K; Rico, Rudolph; Hare, Emily; Garcia, John; Westbrook, Jody; Fusaro, Vincent A

    2015-01-15

    Biological sequence variants are commonly represented in scientific literature, clinical reports and databases of variation using the mutation nomenclature guidelines endorsed by the Human Genome Variation Society (HGVS). Despite the widespread use of the standard, no freely available, comprehensive programming library has existed. Here we report an open-source and easy-to-use Python library that facilitates the parsing, manipulation, formatting and validation of variants according to the HGVS specification. The current implementation focuses on the subset of the HGVS recommendations that precisely describe sequence-level variation relevant to the application of high-throughput sequencing to clinical diagnostics. The package is released under the Apache 2.0 open-source license. Source code, documentation and issue tracking are available at http://bitbucket.org/hgvs/hgvs/. Python packages are available at PyPI (https://pypi.python.org/pypi/hgvs). Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
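
    A short usage sketch of the library's documented parser entry point (the accession and variant are illustrative, and the validation and mapping steps the package also offers are omitted):

      import hgvs.parser

      hp = hgvs.parser.Parser()
      var = hp.parse_hgvs_variant("NM_000059.3:c.7007G>A")  # parse one HGVS string

      print(var.ac)            # reference sequence accession: NM_000059.3
      print(var.type)          # coordinate type: 'c' (coding DNA)
      print(str(var.posedit))  # position and edit: 7007G>A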

  5. BAIK – PROGRAMMING LANGUAGE BASED ON INDONESIAN LEXICAL PARSING FOR MULTITIER WEB DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    Haris Hasanudin

    2012-05-01

    Full Text Available Business software development with global teams is increasing rapidly, and the programming language used as a development tool plays an important role in global web development. A truly user-friendly programming language should be written in the local language of programmers whose native language is not English. This paper presents our design of the BAIK (Bahasa Anak Indonesia untuk Komputer) scripting language, whose syntax is modeled on Bahasa Indonesia, for multitier web development. We propose the implementation of an Indonesian parsing engine and a binary search tree structure for the memory allocation of variables, and compose language features that support basic object-oriented programming, the Common Gateway Interface, HTML style manipulation, and database connections. Our goal is to build a real programming language with a simple structural design for web development using Indonesian lexical words.
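
    A binary search tree keyed on variable names is a generic way to realise the variable store described above; a minimal Python sketch of such a symbol table (not BAIK's actual implementation):

      class Node:
          def __init__(self, name, value):
              self.name, self.value = name, value
              self.left = self.right = None

      class SymbolTable:
          """Variables kept in a binary search tree keyed on the variable name."""
          def __init__(self):
              self.root = None

          def set(self, name, value):
              def insert(node):
                  if node is None:
                      return Node(name, value)
                  if name == node.name:
                      node.value = value          # reassignment reuses the slot
                  elif name < node.name:
                      node.left = insert(node.left)
                  else:
                      node.right = insert(node.right)
                  return node
              self.root = insert(self.root)

          def get(self, name):
              node = self.root
              while node is not None:
                  if name == node.name:
                      return node.value
                  node = node.left if name < node.name else node.right
              raise NameError(name)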

  6. Growth increments in teeth of Diictodon (Therapsida

    Directory of Open Access Journals (Sweden)

    J. Francis Thackeray

    1991-09-01

    Full Text Available Growth increments circa 0.02 mm in width have been observed in sectioned tusks of Diictodon from the Late Permian lower Beaufort succession of the South African Karoo, dated between about 260 and 245 million years ago. Mean growth increments show a decline from relatively high values in the Tropidostoma/Endothiodon Assemblage Zone, to lower values in the Aulacephalodon/Cistecephalus zone, declining still further in the Dicynodon lacerticeps/Whaitsia zone at the end of the Permian. These changes coincide with gradual changes in carbon isotope ratios measured from Diictodon tooth apatite. It is suggested that the decline in growth increments is related to environmental changes associated with a decline in primary production which contributed to the decline in abundance and ultimate extinction of Diictodon.

  7. Theory of Single Point Incremental Forming

    DEFF Research Database (Denmark)

    Martins, P.A.F.; Bay, Niels; Skjødt, Martin

    2008-01-01

    This paper presents a closed-form theoretical analysis modelling the fundamentals of single point incremental forming and explaining the experimental and numerical results available in the literature for the past couple of years. The model is based on membrane analysis with bi-directional in-plane contact friction and is focused on the extreme modes of deformation that are likely to be found in single point incremental forming processes. The overall investigation is supported by experimental work performed by the authors and data retrieved from the literature.

  8. The analysis of the possibles through Parse's theory on a person with Alzheimer

    Directory of Open Access Journals (Sweden)

    Virtudes Rodero-Sánchez

    2006-11-01

    Full Text Available When a person comes to the health care system with a health problem, they will often be asked to change some of their habits and lifestyle. This demand usually takes the form of a compromise-pact established between the person and the professional. We have observed that, despite the professional's effort to frame this pact in realistic objectives, frustration too often follows, especially in chronic illness settings. Parse's theory offers us a different way to approach change. In Parse's theory, The Human Becoming, the possibles are the expression of power, understood as a unique way of transformation, consisting in advancing with the hopes, desires and projects of the person. We propose, first, an analysis of the elements of what Parse calls her third principle, co-transcendence with the possibles; secondly, the analysis of the possibles from the basis of this reference framework through a narrative; and finally, nursing practice.

  9. Automatic incrementalization of Prolog based static analyses

    DEFF Research Database (Denmark)

    Eichberg, Michael; Kahl, Matthias; Saha, Diptikalyan

    2007-01-01

    Modern development environments integrate various static analyses into the build process. Analyses that analyze the whole project whenever the project changes are impractical in this context. We present an approach to automatic incrementalization of analyses that are specified as tabled logic programs and evaluated using incremental tabled evaluation, a technique for efficiently updating memo tables in response to changes in facts and rules. The approach has been implemented and integrated into the Eclipse IDE. Our measurements show that this technique is effective for automatically...

  10. Incremental Integrity Checking: Limitations and Possibilities

    DEFF Research Database (Denmark)

    Christiansen, Henning; Martinenghi, Davide

    2005-01-01

    Integrity checking is an essential means for the preservation of the intended semantics of a deductive database. Incrementality is the only feasible approach to checking and can be obtained with respect to given update patterns by exploiting query optimization techniques. By reducing the problem to query containment, we show that no procedure exists that always returns the best incremental test (aka simplification of integrity constraints), according to any reasonable criterion measuring the checking effort. In spite of this theoretical limitation, we develop an effective procedure...

  11. History Matters: Incremental Ontology Reasoning Using Modules

    Science.gov (United States)

    Cuenca Grau, Bernardo; Halaschek-Wiener, Christian; Kazakov, Yevgeny

    The development of ontologies involves continuous but relatively small modifications. Existing ontology reasoners, however, do not take advantage of the similarities between different versions of an ontology. In this paper, we propose a technique for incremental reasoning—that is, reasoning that reuses information obtained from previous versions of an ontology—based on the notion of a module. Our technique does not depend on a particular reasoning calculus and thus can be used in combination with any reasoner. We have applied our results to incremental classification of OWL DL ontologies and found significant improvement over regular classification time on a set of real-world ontologies.

  12. Existing School Buildings: Incremental Seismic Retrofit Opportunities.

    Science.gov (United States)

    Federal Emergency Management Agency, Washington, DC.

    The intent of this document is to provide technical guidance to school district facility managers for linking specific incremental seismic retrofit opportunities to specific maintenance and capital improvement projects. The linkages are based on logical affinities, such as technical fit, location of the work within the building, cost saving…

  13. Automatic incrementalization of Prolog based static analyses

    DEFF Research Database (Denmark)

    Eichberg, Michael; Kahl, Matthias; Saha, Diptikalyan

    2007-01-01

    Modern development environments integrate various static analyses into the build process. Analyses that analyze the whole project whenever the project changes are impractical in this context. We present an approach to automatic incrementalization of analyses that are specified as tabled logic prog...

  14. The Cognitive Underpinnings of Incremental Rehearsal

    Science.gov (United States)

    Varma, Sashank; Schleisman, Katrina B.

    2014-01-01

    Incremental rehearsal (IR) is a flashcard technique that has been developed and evaluated by school psychologists. We discuss potential learning and memory effects from cognitive psychology that may explain the observed superiority of IR over other flashcard techniques. First, we propose that IR is a form of "spaced practice" that…

  15. Guide to LIBXSIF, a Library for Parsing the Extended Standard Input Format of Accelerator Beamlines (LCC-0060)

    International Nuclear Information System (INIS)

    Tenenbaum, P

    2003-01-01

    We describe LIBXSIF, a standalone library for parsing the Extended Standard Input Format of accelerator beamlines. Included in the description are: documentation of user commands; full description of permitted accelerator elements and their attributes; the construction of beamline lists; the mechanics of adding LIBXSIF to an existing program; and "under the hood" details for users who wish to modify the library or are merely morbidly curious.

  16. A Pipelining Implementation for Parsing X-ray Diffraction Source Data and Removing the Background Noise

    International Nuclear Information System (INIS)

    Bauer, Michael A; Biem, Alain; McIntyre, Stewart; Xie Yuzhen

    2010-01-01

    Synchrotrons can be used to generate X-rays in order to probe materials at the atomic level. One approach is to use X-ray diffraction (XRD) to do this. The data from an XRD experiment consist of a sequence of digital image files; a single scan can comprise hundreds or even thousands of digital images. Existing analysis software processes these images individually and sequentially, and is usually used after the experiment is completed. The results from an XRD detector can be thought of as a sequence of images generated during the scan by the X-ray beam. If these images could be analyzed in near real-time, the results could be sent to the researcher running the experiment and used to improve the overall experimental process and results. In this paper, we report on a stream processing application to remove background from XRD images using a pipelining implementation. We describe our implementation techniques of using IBM InfoSphere Streams for parsing XRD source data and removing the background. We present experimental results showing the super-linear speedup attained over a purely sequential version of the algorithm on a quad-core machine. These results demonstrate the potential of making good use of multi-cores for high-performance stream processing of XRD images.
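
    The original system is built on IBM InfoSphere Streams; the following Python sketch (file layout and background model are assumptions) only illustrates the pipelining idea of streaming frames through parallel background-removal workers while preserving scan order:

      import glob
      from multiprocessing import Pool
      import numpy as np

      def remove_background(path):
          """Load one frame and subtract a crude per-column background estimate."""
          frame = np.load(path)                  # stand-in for a detector image file
          background = np.median(frame, axis=0)
          return path, np.clip(frame - background, 0, None)

      if __name__ == "__main__":
          frames = sorted(glob.glob("scan/frame_*.npy"))  # hypothetical file layout
          with Pool() as pool:
              # imap keeps scan order while workers run concurrently, so cleaned
              # frames can be forwarded to the researcher in near real-time
              for path, cleaned in pool.imap(remove_background, frames):
                  print(path, float(cleaned.mean()))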

  17. Parsing pyrogenic polycyclic aromatic hydrocarbons: forensic chemistry, receptor models, and source control policy.

    Science.gov (United States)

    O'Reilly, Kirk T; Pietari, Jaana; Boehm, Paul D

    2014-04-01

    A realistic understanding of contaminant sources is required to set appropriate control policy. Forensic chemical methods can be powerful tools in source characterization and identification, but they require a multiple-lines-of-evidence approach. Atmospheric receptor models, such as the US Environmental Protection Agency (USEPA)'s chemical mass balance (CMB), are increasingly being used to evaluate sources of pyrogenic polycyclic aromatic hydrocarbons (PAHs) in sediments. This paper describes the assumptions underlying receptor models and discusses challenges in complying with these assumptions in practice. Given the variability within, and the similarity among, pyrogenic PAH source types, model outputs are sensitive to specific inputs, and parsing among some source types may not be possible. Although still useful for identifying potential sources, the technical specialist applying these methods must describe both the results and their inherent uncertainties in a way that is understandable to nontechnical policy makers. The authors present an example case study concerning an investigation of a class of parking-lot sealers as a significant source of PAHs in urban sediment. Principal component analysis is used to evaluate published CMB model inputs and outputs. Targeted analyses of 2 areas where bans have been implemented are included. The results do not support the claim that parking-lot sealers are a significant source of PAHs in urban sediments. © 2013 SETAC.

  18. Parsing the Dictionary of Modern Literary Russian Language with the Method of SCD Configurations. The Lexicographic Modeling

    Directory of Open Access Journals (Sweden)

    Neculai Curteanu

    2012-05-01

    Full Text Available This paper extends the experience of parsing five other, sensibly different, very large Romanian, French, and German dictionaries to DMLRL (Dictionary of Modern Literary Russian Language) [18], using the optimal and portable parsing method of SCD (Segmentation-Cohesion-Dependency) configurations [7], [11], [15]. The purpose of the present paper is to elaborate the lexicographic modeling of DMLRL, which necessarily precedes the sense-tree parsing of dictionary entries. The following three SCD configurations are described: the first separates the lexicographic segments in a DMLRL entry; the second concentrates on the SCD marker classes and their hypergraph hierarchy for DMLRL primary and secondary senses; and the third hands down the same modeling process to the atomic sense definitions and their examples-to-definitions. The dependency hypergraph of the third SCD configuration, interconnected with that of the second, is specified completely at the atomic sense level for the first time, exceeding the SCD configuration modeling for the other five dictionaries [15], [14]. Numerous examples from DMLRL and comparison to the DLR-DAR Romanian thesaurus-dictionary support the proposed DMLRL lexicographic modeling.

  19. Statistics of wind direction and its increments

    International Nuclear Information System (INIS)

    Doorn, Eric van; Dhruva, Brindesh; Sreenivasan, Katepalli R.; Cassella, Victor

    2000-01-01

    We study some elementary statistics of wind direction fluctuations in the atmosphere for a wide range of time scales (10⁻⁴ sec to 1 h), and in both vertical and horizontal planes. In the plane parallel to the ground surface, the direction time series consists of two parts: a constant drift due to large weather systems moving with the mean wind speed, and fluctuations about this drift. The statistics of the direction fluctuations show a rough similarity to Brownian motion but depend, in detail, on the wind speed. This dependence manifests itself quite clearly in the statistics of wind-direction increments over various intervals of time. These increments are intermittent during periods of low wind speeds but Gaussian-like during periods of high wind speeds. (c) 2000 American Institute of Physics

  20. Evolving effective incremental SAT solvers with GP

    OpenAIRE

    Bader, Mohamed; Poli, R.

    2008-01-01

    Hyper-heuristics can simply be defined as heuristics for choosing other heuristics: a way of combining existing heuristics to generate new ones. Here, a hyper-heuristic framework is used for evolving effective incremental (Inc*) solvers for SAT. We test the evolved heuristics (IncHH) against other known local search heuristics on a variety of benchmark SAT problems.

  1. Teraflop-scale Incremental Machine Learning

    OpenAIRE

    Özkural, Eray

    2011-01-01

    We propose a long-term memory design for artificial general intelligence based on Solomonoff's incremental machine learning methods. We use R5RS Scheme and its standard library with a few omissions as the reference machine. We introduce a Levin Search variant based on Stochastic Context Free Grammar together with four synergistic update algorithms that use the same grammar as a guiding probability distribution of programs. The update algorithms include adjusting production probabilities, re-u...

  2. Shakedown analysis by finite element incremental procedures

    International Nuclear Information System (INIS)

    Borkowski, A.; Kleiber, M.

    1979-01-01

    It is a common occurrence in many practical problems that external loads are variable and the exact time-dependent history of loading is unknown. Instead, the load is characterized by a given loading domain: a convex polyhedron in the n-dimensional space of load parameters. The problem is then to check whether or not a structure shakes down, i.e. responds elastically after a few elasto-plastic cycles, under a variable loading as defined above. Such a check can be performed by an incremental procedure. One should reproduce incrementally a simple cyclic process which consists of proportional load paths that connect the origin of the load space with the corners of the loading domain. It was proved that if a structure shakes down to such a loading history then it is able to adapt itself to an arbitrary load path contained in the loading domain. The main advantage of such an approach is the possibility to use existing incremental finite-element computer codes. (orig.)

  3. Simulation and comparison of perturb and observe and incremental ...

    Indian Academy of Sciences (India)

    Perturb and Observe (P & O) algorithm and Incremental Conductance algorithm. Keywords: solar array; insolation; MPPT; modelling; P & O; incremental conductance.
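
    For reference, the incremental conductance method rests on the fact that dP/dV = I + V·dI/dV vanishes at the maximum power point, so comparing dI/dV with −I/V tells the controller which way to move the operating voltage. A minimal sketch of that decision rule (step size and variable names are ours):

      def incremental_conductance_step(v, i, v_prev, i_prev, step=0.1):
          """One IncCond update: return the new reference voltage."""
          dv, di = v - v_prev, i - i_prev
          if dv == 0:                      # voltage unchanged: use current change alone
              if di == 0:
                  return v                 # at the MPP, hold
              return v + step if di > 0 else v - step
          if di / dv == -i / v:            # dI/dV == -I/V, i.e. dP/dV == 0: at the MPP
              return v
          # dI/dV > -I/V means we are left of the MPP, so move right (and vice versa)
          return v + step if di / dv > -i / v else v - step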

  4. 48 CFR 3432.771 - Provision for incremental funding.

    Science.gov (United States)

    2010-10-01

    48 CFR 3432.771, Federal Acquisition Regulations System, Department of Education: Provision for incremental funding. ... Incremental Funding, in a solicitation if a cost-reimbursement contract using incremental funding is...

  5. Enabling Incremental Query Re-Optimization.

    Science.gov (United States)

    Liu, Mengmeng; Ives, Zachary G; Loo, Boon Thau

    2016-01-01

    As declarative query processing techniques expand to the Web, data streams, network routers, and cloud platforms, there is an increasing need to re-plan execution in the presence of unanticipated performance changes. New runtime information may affect which query plan we prefer to run. Adaptive techniques require innovation both in terms of the algorithms used to estimate costs, and in terms of the search algorithm that finds the best plan. We investigate how to build a cost-based optimizer that recomputes the optimal plan incrementally given new cost information, much as a stream engine constantly updates its outputs given new data. Our implementation especially shows benefits for stream processing workloads. It lays the foundations upon which a variety of novel adaptive optimization algorithms can be built. We start by leveraging the recently proposed approach of formulating query plan enumeration as a set of recursive datalog queries; we develop a variety of novel optimization approaches to ensure effective pruning in both static and incremental cases. We further show that the lessons learned in the declarative implementation can be equally applied to more traditional optimizer implementations.
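
    The paper's Datalog-based enumerator is not reproduced here; the toy Python sketch below (cost model and names are ours) only illustrates the setting: a dynamic program over join subsets yields the optimal plan under current estimates, and new runtime information forces re-planning, which a naive optimizer does from scratch and an incremental one does by updating only the affected entries:

      from itertools import combinations
      from math import prod

      def best_plans(card):
          """card: {table: row estimate}; return {subset: (cost, plan)} via DP."""
          plans = {frozenset([t]): (0, t) for t in card}  # scans are free here
          for size in range(2, len(card) + 1):
              for subset in map(frozenset, combinations(card, size)):
                  rows = prod(card[t] for t in subset)    # size of the join result
                  best = None
                  for t in subset:                        # t = last table joined in
                      cost = plans[subset - {t}][0] + rows
                      if best is None or cost < best[0]:
                          best = (cost, (plans[subset - {t}][1], t))
                  plans[subset] = best
          return plans

      card = {"R": 100, "S": 10, "T": 1000}
      print(best_plans(card)[frozenset(card)])   # plan under current estimates
      card["T"] = 5                              # runtime feedback: new cardinality
      print(best_plans(card)[frozenset(card)])   # naive full re-plan from scratch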

  6. Incremental learning for automated knowledge capture

    Energy Technology Data Exchange (ETDEWEB)

    Benz, Zachary O. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Basilico, Justin Derrick [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Davis, Warren Leon [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dixon, Kevin R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jones, Brian S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Martin, Nathaniel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wendt, Jeremy Daniel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2013-12-01

    People responding to high-consequence national-security situations need tools to help them make the right decision quickly. The dynamic, time-critical, and ever-changing nature of these situations, especially those involving an adversary, requires models of decision support that can dynamically react as a situation unfolds and changes. Automated knowledge capture is a key part of creating individualized models of decision making in many situations because it has been demonstrated as a very robust way to populate computational models of cognition. However, existing automated knowledge capture techniques only populate a knowledge model with data prior to its use, after which the knowledge model is static and unchanging. In contrast, humans, including our national-security adversaries, continually learn, adapt, and create new knowledge as they make decisions and witness their effect. This artificial dichotomy between creation and use exists because the majority of automated knowledge capture techniques are based on traditional batch machine-learning and statistical algorithms. These algorithms are primarily designed to optimize the accuracy of their predictions and only secondarily, if at all, concerned with issues such as speed, memory use, or ability to be incrementally updated. Thus, when new data arrives, batch algorithms used for automated knowledge capture currently require significant recomputation, frequently from scratch, which makes them ill suited for use in dynamic, time-critical, high-consequence decision making environments. In this work we seek to explore and expand upon the capabilities of dynamic, incremental models that can adapt to an ever-changing feature space.

  7. Incremental Nonnegative Matrix Factorization for Face Recognition

    Directory of Open Access Journals (Sweden)

    Wen-Sheng Chen

    2008-01-01

    Full Text Available Nonnegative matrix factorization (NMF) is a promising approach for local feature extraction in face recognition tasks. However, there are two major drawbacks in almost all existing NMF-based methods. One shortcoming is that the computational cost is expensive for large matrix decomposition. The other is that repetitive learning must be conducted whenever the training samples or classes are updated. To overcome these two limitations, this paper proposes a novel incremental nonnegative matrix factorization (INMF) for face representation and recognition. The proposed INMF approach is based on a novel constraint criterion and our previous block strategy. It thus has some good properties, such as low computational complexity and a sparse coefficient matrix. Also, the coefficient column vectors between different classes are orthogonal. In particular, it can be applied to incremental learning. Two face databases, namely the FERET and CMU PIE face databases, are selected for evaluation. Compared with PCA and some state-of-the-art NMF-based methods, our INMF approach gives the best performance.
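
    The INMF algorithm itself is not reproduced in this record; as context, here is a minimal numpy sketch of the standard multiplicative-update NMF that such methods extend, factorising a nonnegative matrix V ≈ W @ H. Incremental variants avoid re-running this whole loop whenever new training samples (columns of V) arrive:

      import numpy as np

      def nmf(V, rank, iters=200, eps=1e-9):
          rng = np.random.default_rng(0)
          n, m = V.shape
          W = rng.random((n, rank))
          H = rng.random((rank, m))
          for _ in range(iters):
              H *= (W.T @ V) / (W.T @ W @ H + eps)   # multiplicative update for H
              W *= (V @ H.T) / (W @ H @ H.T + eps)   # multiplicative update for W
          return W, H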

  8. [Incremental cost effectiveness of multifocal cataract surgery].

    Science.gov (United States)

    Pagel, N; Dick, H B; Krummenauer, F

    2007-02-01

    Supplementation of cataract patients with multifocal intraocular lenses involves an additional financial investment when compared to the corresponding monofocal supplementation, which usually is not funded by German health care insurers. In the context of recent resource allocation discussions, however, the cost effectiveness of multifocal cataract surgery could become an important rationale. Therefore an evidence-based estimation of its cost effectiveness was carried out. Three independent meta-analyses were implemented to estimate the gain in uncorrected near visual acuity and best corrected visual acuity (vision lines) as well as the predictability (fraction of patients without need for reading aids) of multifocal supplementation. Study reports published between 1995 and 2004 (English or German language) were screened for appropriate key words. Meta effects in visual gain and predictability were estimated by means and standard deviations of the reported effect measures. Cost data were estimated by German DRG rates and individual lens costs; the cost effectiveness of multifocal cataract surgery was then computed in terms of its marginal cost effectiveness ratio (MCER) for each clinical benefit endpoint; the incremental costs of multifocal versus monofocal cataract surgery were further estimated by means of their respective incremental cost effectiveness ratio (ICER). An independent meta-analysis estimated the complication profiles to be expected after monofocal and multifocal cataract surgery in order to evaluate expectable complication-associated additional costs of both procedures; the marginal and incremental cost effectiveness estimates were adjusted accordingly. A sensitivity analysis comprised cost variations of +/- 10 % and utility variations alongside the meta effect estimate's 95 % confidence intervals. Total direct costs from the health care insurer's perspective were estimated 3363 euro, associated with a visual meta benefit in best corrected visual

  9. Thermo-msf-parser: an open source Java library to parse and visualize Thermo Proteome Discoverer msf files.

    Science.gov (United States)

    Colaert, Niklaas; Barsnes, Harald; Vaudel, Marc; Helsens, Kenny; Timmerman, Evy; Sickmann, Albert; Gevaert, Kris; Martens, Lennart

    2011-08-05

    The Thermo Proteome Discoverer program integrates both peptide identification and quantification into a single workflow for peptide-centric proteomics. Furthermore, its close integration with Thermo mass spectrometers has made it increasingly popular in the field. Here, we present a Java library to parse the msf files that constitute the output of Proteome Discoverer. The parser is also implemented as a graphical user interface allowing convenient access to the information found in the msf files, and in Rover, a program to analyze and validate quantitative proteomics information. All code, binaries, and documentation are freely available at http://thermo-msf-parser.googlecode.com.

  10. ONLINE ARABIC-INDONESIAN DICTIONARY WITH SYLLABLE DECOMPOSITION USING A PARSING METHOD

    Directory of Open Access Journals (Sweden)

    Anny Yuniarti

    2004-01-01

    Full Text Available The need of Indonesian Muslims for facilities that support learning Arabic is still not optimally met. The Arabic dictionaries available on the market are difficult to use because knowledge of Arabic grammar is limited among most users. In this study, software was developed that translates Arabic words using a parsing method, so that it can also handle words that have been inflected away from their base form. Because an Arabic word has a large number of derived forms, and to keep the dictionary efficient, not all derived forms are stored in the database; a procedure is therefore needed to recognize word patterns and recover the base form of a word. The software is implemented as a web application, so it is easy to access and requires no installation of particular software or operating systems. Development was preceded by process design and interface design, which were then implemented as ready-to-use software. The finished software has been tested against the requirement specifications.

  11. Pseudocode Interpreter (Pseudocode Integrated Development Environment with Lexical Analyzer and Syntax Analyzer using Recursive Descent Parsing Algorithm

    Directory of Open Access Journals (Sweden)

    Christian Lester D. Gimeno

    2017-11-01

    Full Text Available This research study focused on the development of software that helps students design, write, validate and run their pseudocode in a semi Integrated Development Environment (IDE) instead of manually writing it on a piece of paper. Specifically, the study aimed to develop a lexical analyzer (lexer), a syntax analyzer (parser) using a recursive descent parsing algorithm, and an interpreter. The lexical analyzer reads the pseudocode source as a sequence of symbols or characters and groups them into lexemes. The lexer matches the lexemes against patterns for valid tokens and passes them to the syntax analyzer, which builds meaningful commands using a recursive descent parsing algorithm in the form of an abstract syntax tree. The generation of the abstract syntax tree is based on a grammar specified by the researcher in Extended Backus-Naur Form. The interpreter takes the generated abstract syntax tree and evaluates it to produce the pseudocode output. The software was evaluated using white-box testing by several ICT professionals and black-box testing by several computer science students, based on the International Organization for Standardization (ISO) 9126 software quality standards. The overall results of the evaluation, both white-box and black-box, were described as "excellent in terms of functionality, reliability, usability, efficiency, maintainability and portability".
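
    A compact illustration of the lexer-to-recursive-descent-parser chain the study describes, here reduced to arithmetic expressions (the grammar and all names are ours, not the study's pseudocode language); each grammar rule becomes one mutually recursive method:

      #   expr   := term (('+'|'-') term)*
      #   term   := factor (('*'|'/') factor)*
      #   factor := NUMBER | '(' expr ')'
      import re

      def lex(src):
          return re.findall(r"\d+\.?\d*|[()+\-*/]", src) + ["<eof>"]

      class Parser:
          def __init__(self, tokens):
              self.tokens, self.pos = tokens, 0

          def peek(self):
              return self.tokens[self.pos]

          def eat(self, tok=None):
              cur = self.tokens[self.pos]
              if tok and cur != tok:
                  raise SyntaxError(f"expected {tok}, got {cur}")
              self.pos += 1
              return cur

          def expr(self):
              value = self.term()
              while self.peek() in ("+", "-"):
                  value = value + self.term() if self.eat() == "+" else value - self.term()
              return value

          def term(self):
              value = self.factor()
              while self.peek() in ("*", "/"):
                  value = value * self.factor() if self.eat() == "*" else value / self.factor()
              return value

          def factor(self):
              if self.peek() == "(":
                  self.eat("(")
                  value = self.expr()
                  self.eat(")")
                  return value
              return float(self.eat())

      print(Parser(lex("2 * (3 + 4)")).expr())  # 14.0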

  12. The effect of recognizability on figure-ground processing: does it affect parsing or only figure selection?

    Science.gov (United States)

    Navon, David

    2011-03-01

    Though figure-ground assignment has been shown to be probably affected by recognizability, it appears sensible that object recognition must follow at least the earlier process of figure-ground segregation. To examine whether or not rudimentary object recognition could, counterintuitively, start even before the completion of the stage of parsing in which figure-ground segregation is done, participants were asked to respond, in a go/no-go fashion, whenever any out of 16 alternative connected patterns (which constituted familiar stimuli in the upright orientation) appeared. The white figure of the to-be-attended stimulus (target or foil) could be segregated from the white ambient ground only by means of a frame surrounding it. Such a frame was absent until the onset of target display. Then, to manipulate organizational quality, the greyness of the frame was either gradually increased from zero (in Experiment 1) or changed abruptly to a stationary level whose greyness was varied between trials (in Experiments 2 and 3). Stimulus recognizability was manipulated by orientation angle. In all three experiments the effect of recognizability was found to be considerably larger when organizational quality was minimal due to an extremely faint frame. This result is argued to be incompatible with any version of a serial thesis suggesting that processing aimed at object recognition starts only with a good enough level of organizational quality. The experiments rather provide some support for the claim, termed here the "early interaction hypothesis", positing interaction between early recognition processing and preassignment parsing processes.

  13. Incremental fold tests of remagnetized carbonate rocks

    Science.gov (United States)

    Van Der Voo, R.; van der Pluijm, B.

    2017-12-01

    Many unmetamorphosed carbonates all over the world are demonstrably remagnetized, with the age of the secondary magnetizations typically close to that of the nearest orogeny in space and time. This observation did not become compelling until the mid-1980's, when the incremental fold test revealed the Appalachian carbonates to carry a syn-deformational remanence of likely Permian age (Scotese et al., 1982, Phys. Earth Planet. Int., v. 30, p. 385-395; Cederquist et al., 2006, Tectonophysics v. 422, p. 41-54). Since that time scores of Appalachian and Rocky Mountain carbonate rocks have added results to the growing database of paleopoles representing remagnetizations. Late Paleozoic remagnetizations form a cloud of results surrounding the reference poles of the Laurentian APWP. Remagnetizations in other locales and with inferred ages coeval with regional orogenies (e.g., Taconic, Sevier/Laramide, Variscan, Indosinian) are also ubiquitous. To be able to transform this cornucopia into valuable anchor-points on the APWP would be highly desirable. This may indeed become feasible, as will be explained next. Recent studies of faulted and folded carbonate-shale sequences have shown that this deformation enhances the illitization of smectite (Haines & van der Pluijm, 2008, Jour. Struct. Geol., v. 30, p. 525-538; Fitz-Diaz et al., 2014, International Geol. Review, v. 56, p. 734-755). 39Ar-40Ar dating of the authigenic illite (neutralizing any detrital illite contribution by taking the intercept of a mixing line) yields, therefore, the age of the deformation. We know that this date is also the age of the syndeformational remanence; thus we have the age of the corresponding paleopole. Results so far are obtained for the Canadian and U.S. Rocky Mountains and for the Spanish Cantabrian carbonates (Tohver et al., 2008, Earth Planet. Sci. Lett., v. 274, p. 524-530) and make good sense in accord with geological knowledge. Incremental fold tests are the tools used for this

  14. Incremental passivity and output regulation for switched nonlinear systems

    Science.gov (United States)

    Pang, Hongbo; Zhao, Jun

    2017-10-01

    This paper studies incremental passivity and global output regulation for switched nonlinear systems, whose subsystems are not required to be incrementally passive. A concept of incremental passivity for switched systems is put forward. First, a switched system is rendered incrementally passive by the design of a state-dependent switching law. Second, feedback incremental passification is achieved by the design of a state-dependent switching law and a set of state feedback controllers. Finally, we show that once incremental passivity for switched nonlinear systems is assured, the output regulation problem is solved by the design of global nonlinear regulator controllers comprising two components, the steady-state control and the linear output feedback stabilising controllers, even though the problem is not solvable for any of the subsystems alone. Two examples are presented to illustrate the effectiveness of the proposed approach.
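
    For reference, the classical (non-switched) notion being extended here: a system is incrementally passive if, for any two input-state-output trajectories (u1, x1, y1) and (u2, x2, y2), there is a nonnegative storage function S(x1, x2) whose derivative along the trajectories satisfies

      dS/dt <= (u1 - u2)^T (y1 - y2),

    so that differences between inputs bound differences between outputs in an energy-like sense; the switching laws in the paper are designed to secure such a property even when individual subsystems lack it.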

  15. Performance Evaluation of Incremental K-means Clustering Algorithm

    OpenAIRE

    Chakraborty, Sanjay; Nagwani, N. K.

    2014-01-01

    The incremental K-means clustering algorithm has already been proposed and analysed in [Chakraborty and Nagwani, 2011]. It is a very innovative approach, applicable in periodically incremental environments that deal with bulk updates. In this paper a performance evaluation of the incremental K-means clustering algorithm is carried out using an air pollution database. The paper also describes a comparison of the performance of existing K-means clustering and i...
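
    As a generic sketch of the incremental idea (not necessarily the exact algorithm of the cited paper), newly arriving points can update running cluster means instead of triggering a full re-clustering:

      import numpy as np

      class IncrementalKMeans:
          def __init__(self, centroids):
              self.centroids = np.asarray(centroids, dtype=float)
              self.counts = np.ones(len(self.centroids))  # one point per seed assumed

          def update(self, x):
              """Absorb one new observation into its nearest cluster."""
              x = np.asarray(x, dtype=float)
              k = int(np.argmin(((self.centroids - x) ** 2).sum(axis=1)))
              self.counts[k] += 1
              # running-mean update: c += (x - c) / n
              self.centroids[k] += (x - self.centroids[k]) / self.counts[k]
              return k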

  16. Towards a multiconfigurational method of increments

    Science.gov (United States)

    Fertitta, E.; Koch, D.; Paulus, B.; Barcza, G.; Legeza, Ö.

    2018-06-01

    The method of increments (MoI) allows one to successfully calculate cohesive energies of bulk materials with high accuracy, but it encounters difficulties when calculating dissociation curves. The reason is that its standard formalism is based on a single Hartree-Fock (HF) configuration whose orbitals are localised and used for the many-body expansion. In situations where HF does not allow a size-consistent description of the dissociation, the MoI cannot be guaranteed to yield proper results either. Herein, we address the problem by employing a size-consistent multiconfigurational reference for the MoI formalism. This leads to a matrix equation where a coupling derived by the reference itself is employed. In principle, such an approach allows one to evaluate approximate values for the ground as well as excited states energies. While the latter are accurate close to the avoided crossing only, the ground state results are very promising for the whole dissociation curve, as shown by the comparison with density matrix renormalisation group benchmarks. We tested this two-state constant-coupling MoI on beryllium rings of different sizes and studied the error introduced by the constant coupling.

  17. Natural Gas pipelines: economics of incremental capacity

    International Nuclear Information System (INIS)

    Kimber, M.

    2000-01-01

    A number of gas transmission pipeline systems in Australia exhibit capacity constraints, and yet there is little evidence of creative or innovative processes from either the service providers or the regulators which might provide a market-based response to these constraints. There is no provision in the Code in its current form to allow it to accommodate these processes. This aspect is one of many that require review to make the Code work. It is unlikely that the current members of the National Gas Pipeline Advisory Committee (NGPAC) or its advisers have sufficient understanding of the analysis of risk and the consequential commercial drivers to implement the necessary changes. As a result, the Code will increasingly lose touch with the commercial realities of the energy market and will continue to inhibit investment in new and expanded infrastructure where market risk is present. The recent report prepared for the Business Council of Australia indicates a need to re-vitalise the energy reform process. It is important for the Australian energy industry to provide leadership and advice to governments to continue the process of reform, and, in particular, to amend the Code to make it more relevant. These amendments must include a mechanism by which price signals can be generated to provide timely and effective information for existing service providers or new entrants to install incremental pipeline capacity

  18. Evolution of cooperation driven by incremental learning

    Science.gov (United States)

    Li, Pei; Duan, Haibin

    2015-02-01

    It has been shown that the details of microscopic rules in structured populations can have a crucial impact on the ultimate outcome in evolutionary games. So alternative formulations of strategies and their revision processes exploring how strategies are actually adopted and spread within the interaction network need to be studied. In the present work, we formulate the strategy update rule as an incremental learning process, wherein knowledge is refreshed according to one's own experience learned from the past (self-learning) and that gained from social interaction (social-learning). More precisely, we propose a continuous version of strategy update rules, by introducing the willingness to cooperate W, to better capture the flexibility of decision making behavior. Importantly, the newly gained knowledge including self-learning and social learning is weighted by the parameter ω, establishing a strategy update rule involving innovative element. Moreover, we quantify the macroscopic features of the emerging patterns to inspect the underlying mechanisms of the evolutionary process using six cluster characteristics. In order to further support our results, we examine the time evolution course for these characteristics. Our results might provide insights for understanding cooperative behaviors and have several important implications for understanding how individuals adjust their strategies under real-life conditions.

  19. Power variation for Gaussian processes with stationary increments

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Corcuera, J.M.; Podolskij, Mark

    2009-01-01

    We develop the asymptotic theory for the realised power variation of the processes X=•G, where G is a Gaussian process with stationary increments. More specifically, under some mild assumptions on the variance function of the increments of G and certain regularity conditions on the path... a chaos representation.

  20. Parsing in a Dynamical System: An Attractor-Based Account of the Interaction of Lexical and Structural Constraints in Sentence Processing.

    Science.gov (United States)

    Tabor, Whitney; And Others

    1997-01-01

    Proposes a dynamical systems approach to parsing in which syntactic hypotheses are associated with attractors in a metric space. The experiments discussed documented various contingent frequency effects that cut across traditional linguistic grains, each of which was predicted by the dynamical systems model. (47 references) (Author/CK)

  1. One Step at a Time: SBM as an Incremental Process.

    Science.gov (United States)

    Conrad, Mark

    1995-01-01

    Discusses incremental SBM budgeting and answers questions regarding resource equity, bookkeeping requirements, accountability, decision-making processes, and purchasing. Approaching site-based management as an incremental process recognizes that every school system engages in some level of site-based decisions. Implementation can be gradual and…

  2. Defense Agencies Initiative Increment 2 (DAI Inc 2)

    Science.gov (United States)

    2016-03-01

    2016 Major Automated Information System Annual Report for the Defense Agencies Initiative Increment 2 (DAI Inc 2). In an ADM dated September 23, 2013, the MDA established Increment 2 as a MAIS program to include the budget formulation and grants financial management modules.

  3. Incrementality in naming and reading complex numerals: Evidence from eyetracking

    NARCIS (Netherlands)

    Korvorst, M.H.W.; Roelofs, A.P.A.; Levelt, W.J.M.

    2006-01-01

    Individuals speak incrementally when they interleave planning and articulation. Eyetracking, along with the measurement of speech onset latencies, can be used to gain more insight into the degree of incrementality adopted by speakers. In the current article, two eyetracking experiments are reported

  4. Lifetime costs of lung transplantation : Estimation of incremental costs

    NARCIS (Netherlands)

    VanEnckevort, PJ; Koopmanschap, MA; Tenvergert, EM; VanderBij, W; Rutten, FFH

    1997-01-01

    Despite an expanding number of centres which provide lung transplantation, information about the incremental costs of lung transplantation is scarce. From 1991 until 1995, a technology assessment was performed in The Netherlands which provided information about the incremental costs of lung transplantation.

  5. Finance for incremental housing: current status and prospects for expansion

    NARCIS (Netherlands)

    Ferguson, B.; Smets, P.G.S.M.

    2010-01-01

    Appropriate finance can greatly increase the speed and lower the cost of incremental housing - the process used by much of the low/moderate-income majority of most developing countries to acquire shelter. Informal finance continues to dominate the funding of incremental housing. However, new sources

  6. Validation of the periodicity of growth increment deposition in ...

    African Journals Online (AJOL)

    Validation of the periodicity of growth increment deposition in otoliths from the larval and early juvenile stages of two cyprinids from the Orange–Vaal river ... Linear regression models were fitted to the known age post-fertilisation and the age estimated using increment counts to test the correspondence between the two for ...

  7. 76 FR 73475 - Immigration Benefits Business Transformation, Increment I; Correction

    Science.gov (United States)

    2011-11-29

    ... Benefits Business Transformation, Increment I, 76 FR 53764 (Aug. 29, 2011). The final rule removed form... [CIS No. 2481-09; Docket No. USCIS-2009-0022] RIN 1615-AB83 Immigration Benefits Business Transformation, Increment I; Correction AGENCY: U.S. Citizenship and Immigration Services, DHS. ACTION: Final...

  8. 76 FR 53763 - Immigration Benefits Business Transformation, Increment I

    Science.gov (United States)

    2011-08-29

    ..., 100, et al. Immigration Benefits Business Transformation, Increment I; Final Rule. Federal... Benefits Business Transformation, Increment I AGENCY: U.S. Citizenship and Immigration Services, DHS... USCIS is engaged in an enterprise-wide transformation effort to implement new business processes and to...

  9. The Time Course of Incremental Word Processing during Chinese Reading

    Science.gov (United States)

    Zhou, Junyi; Ma, Guojie; Li, Xingshan; Taft, Marcus

    2018-01-01

    In the current study, we report two eye movement experiments investigating how Chinese readers process incremental words during reading. These are words where some of the component characters constitute another word (an embedded word). In two experiments, eye movements were monitored while the participants read sentences with incremental words…

  10. Creating Helical Tool Paths for Single Point Incremental Forming

    DEFF Research Database (Denmark)

    Skjødt, Martin; Hancock, Michael H.; Bay, Niels

    2007-01-01

    Single point incremental forming (SPIF) is a relatively new sheet forming process. A sheet is clamped in a rig and formed incrementally using a rotating single point tool in the form of a rod with a spherical end. The process is often performed on a CNC milling machine and the tool movement...
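
    The tool movement referred to can be pictured with a short sketch (all parameters are illustrative): a helical path replaces the usual discrete contours plus step-downs with a continuous descent, here for a circular pocket of constant radius:

      import math

      def helical_path(radius, depth, pitch, points_per_rev=90):
          """Yield (x, y, z) tool positions on a continuous downward helix."""
          n = int(depth / pitch * points_per_rev)
          for k in range(n + 1):
              theta = 2 * math.pi * k / points_per_rev
              z = -pitch * theta / (2 * math.pi)   # continuous step-down
              yield (radius * math.cos(theta), radius * math.sin(theta), z)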

  11. On conditional scalar increment and joint velocity-scalar increment statistics

    International Nuclear Information System (INIS)

    Zhang Hengbin; Wang Danhong; Tong Chenning

    2004-01-01

    Conditional velocity and scalar increment statistics are usually studied in the context of Kolmogorov's refined similarity hypotheses and are considered universal (quasi-Gaussian) for inertial-range separations. In such analyses the locally averaged energy and scalar dissipation rates are used as conditioning variables. Recent studies have shown that certain local turbulence structures can be captured when the local scalar variance (φ²)_r and the local kinetic energy k_r are used as the conditioning variables. We study the conditional increments using these conditioning variables, which also provide the local turbulence scales. Experimental data obtained in the fully developed region of an axisymmetric turbulent jet are used to compute the statistics. The conditional scalar increment probability density function (PDF) conditional on (φ²)_r is found to be close to Gaussian for (φ²)_r small compared with its mean and is sub-Gaussian and bimodal for large (φ²)_r, and therefore is not universal. We find that the different shapes of the conditional PDFs are related to the instantaneous degree of non-equilibrium (production larger than dissipation) of the local scalar. There is further evidence of this from the conditional PDF conditional on both (φ²)_r and χ_r, which is largely a function of (φ²)_r/χ_r, a measure of the degree of non-equilibrium. The velocity-scalar increment joint PDF is close to joint Gaussian and quad-modal for equilibrium and non-equilibrium local velocity and scalar, respectively. The latter shape is associated with a combination of the ramp-cliff and plane strain structures. Kolmogorov's refined similarity hypotheses also predict a dependence of the conditional PDF on the degree of non-equilibrium. Therefore, the quasi-Gaussian (joint) PDF, previously observed in the context of Kolmogorov's refined similarity hypotheses, is only one of the conditional PDF shapes of inertial range turbulence. The present study suggests that

  12. Efficiency of Oral Incremental Rehearsal versus Written Incremental Rehearsal on Students' Rate, Retention, and Generalization of Spelling Words

    Science.gov (United States)

    Garcia, Dru; Joseph, Laurice M.; Alber-Morgan, Sheila; Konrad, Moira

    2014-01-01

    The purpose of this study was to examine the efficiency of an oral incremental rehearsal versus a written incremental rehearsal procedure on a sample of primary grade children's weekly spelling performance. Participants included five second graders and one first grader who were in need of help with their spelling according to their teachers. An…

  13. Evaluation of Efficient XML Interchange (EXI) for Large Datasets and as an Alternative to Binary JSON Encodings

    Science.gov (United States)

    2015-03-01

    1977 Algorithm; LZMA - Lempel-Ziv Markov-chain Algorithm; MB - Megabyte; NCW - Network-Centric Warfare; NoSQL - Not only Structured Query Language; NOW - Network... Language (NoSQL) database (MongoDB Documentation Project, 2014). Its stated design goals are to be lightweight or compact, quickly traversed by

  14. LZ-Compressed String Dictionaries

    OpenAIRE

    Arz, Julian; Fischer, Johannes

    2013-01-01

    We show how to compress string dictionaries using the Lempel-Ziv (LZ78) data compression algorithm. Our approach is validated experimentally on dictionaries of up to 1.5 GB of uncompressed text. We achieve compression ratios often outperforming the existing alternatives, especially on dictionaries containing many repeated substrings. Our query times remain competitive.
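
    For readers unfamiliar with the parsing step this approach builds on, the following minimal Python sketch illustrates the classic LZ78 factorization (an illustration of the general algorithm, not the authors' compressed-dictionary implementation):

        def lz78_parse(text):
            """Greedy LZ78 parse: emit (phrase_index, next_char) pairs."""
            dictionary = {"": 0}   # phrase -> index; 0 is the empty phrase
            phrases = []
            current = ""
            for ch in text:
                if current + ch in dictionary:
                    current += ch  # keep extending the longest known phrase
                else:
                    phrases.append((dictionary[current], ch))
                    dictionary[current + ch] = len(dictionary)
                    current = ""
            if current:            # flush a trailing, already-known match
                phrases.append((dictionary[current[:-1]], current[-1]))
            return phrases

        # 'abababa' parses into 4 phrases: a, b, ab, aba
        print(lz78_parse("abababa"))  # [(0, 'a'), (0, 'b'), (1, 'b'), (3, 'a')]

    Each emitted pair references an earlier phrase plus one fresh character, which is exactly the structure a dictionary of repeated substrings can exploit.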

  15. Incremental Tensor Principal Component Analysis for Handwritten Digit Recognition

    Directory of Open Access Journals (Sweden)

    Chang Liu

    2014-01-01

    Full Text Available To overcome the shortcomings of traditional dimensionality reduction algorithms, incremental tensor principal component analysis (ITPCA) based on an updated-SVD technique is proposed in this paper. This paper proves the relationship between PCA, 2DPCA, MPCA, and the graph embedding framework theoretically and derives the incremental learning procedure for adding single samples and multiple samples in detail. The experiments on handwritten digit recognition have demonstrated that ITPCA achieves better recognition performance than vector-based principal component analysis (PCA), incremental principal component analysis (IPCA), and multilinear principal component analysis (MPCA) algorithms. At the same time, ITPCA also has lower time and space complexity.
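
    The vector-based incremental PCA baseline the abstract compares against can be sketched with scikit-learn's IncrementalPCA (this is the IPCA baseline, not the tensor-based ITPCA itself; the data below is a random stand-in for digit images):

        import numpy as np
        from sklearn.decomposition import IncrementalPCA

        rng = np.random.default_rng(0)
        digits = rng.normal(size=(1000, 64))      # stand-in for 8x8 digit images

        ipca = IncrementalPCA(n_components=16)
        for batch in np.array_split(digits, 10):  # samples arrive in chunks
            ipca.partial_fit(batch)               # update basis incrementally

        codes = ipca.transform(digits)            # 64-D images -> 16-D features
        print(codes.shape)                        # (1000, 16)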

  16. Parallel Algorithm for Incremental Betweenness Centrality on Large Graphs

    KAUST Repository

    Jamour, Fuad Tarek; Skiadopoulos, Spiros; Kalnis, Panos

    2017-01-01

    Existing algorithms either require excessive memory (i.e., quadratic in the size of the input graph) or perform unnecessary computations, rendering them prohibitively slow. We propose iCentral, a novel incremental algorithm for computing betweenness centrality in evolving

  17. Incremental Frequent Subgraph Mining on Large Evolving Graphs

    KAUST Repository

    Abdelhamid, Ehab; Canim, Mustafa; Sadoghi, Mohammad; Bhatta, Bishwaranjan; Chang, Yuan-Chi; Kalnis, Panos

    2017-01-01

    Many modern applications, such as social networks, utilize large evolving graphs. Mining these graphs using existing techniques is infeasible due to the high computational cost. In this paper, we propose IncGM+, a fast incremental approach for the continuous frequent subgraph mining problem

  18. Increment and mortality in a virgin Douglas-fir forest.

    Science.gov (United States)

    Robert W. Steele; Norman P. Worthington

    1955-01-01

    Is there any basis to the forester's rule of thumb that virgin forests eventually reach an equilibrium where increment and mortality approximately balance? Are we wasting potential timber volume by failing to salvage mortality in old-growth stands?

  19. Mission Planning System Increment 5 (MPS Inc 5)

    Science.gov (United States)

    2016-03-01

    2016 Major Automated Information System Annual Report: Mission Planning System Increment 5 (MPS Inc 5), Defense Acquisition Management Information... President's Budget; RDT&E - Research, Development, Test, and Evaluation; SAE - Service Acquisition Executive; TBD - To Be Determined; TY - Then Year... Phone: 845-9625; Date Assigned: May 19, 2014. Program Information: Program Name: Mission Planning System Increment 5 (MPS Inc 5), DoD

  20. Shredder: GPU-Accelerated Incremental Storage and Computation

    OpenAIRE

    Bhatotia, Pramod; Rodrigues, Rodrigo; Verma, Akshat

    2012-01-01

    Redundancy elimination using data deduplication and incremental data processing has emerged as an important technique to minimize storage and computation requirements in data center computing. In this paper, we present the design, implementation and evaluation of Shredder, a high performance content-based chunking framework for supporting incremental storage and computation systems. Shredder exploits the massively parallel processing power of GPUs to overcome the CPU bottlenecks of content-ba...

  1. On the instability increments of a stationary pinch

    International Nuclear Information System (INIS)

    Bud'ko, A.B.

    1989-01-01

    The stability of a stationary pinch to helical modes is numerically studied. It is shown that in the case of a rather fast plasma pressure decrease toward the pinch boundary, for example for an isothermal diffusion pinch with a Gaussian density distribution, instabilities with m=0 modes are the fastest growing. Instability increments are calculated. A simple analytical expression for the maximum growth increment of the sausage instability for self-similar Gaussian profiles is obtained

  2. Biometrics Enabling Capability Increment 1 (BEC Inc 1)

    Science.gov (United States)

    2016-03-01

    ...modal biometrics submissions, to include iris, face, palm and finger prints from biometrics collection devices, which will support the Warfighter in... 2016 Major Automated Information System Annual Report: Biometrics Enabling Capability Increment 1 (BEC Inc 1), Defense Acquisition Management... Phone: 227-3119; Date Assigned: July 15, 2015. Program Information: Program Name: Biometrics Enabling Capability Increment 1 (BEC Inc 1), DoD

  3. The Dark Side of Malleability: Incremental Theory Promotes Immoral Behaviors

    OpenAIRE

    Huang, Niwen; Zuo, Shijiang; Wang, Fang; Cai, Pan; Wang, Fengxiang

    2017-01-01

    Implicit theories drastically affect an individual’s processing of social information, decision making, and action. The present research focuses on whether individuals who hold the implicit belief that people’s moral character is fixed (entity theorists) and individuals who hold the implicit belief that people’s moral character is malleable (incremental theorists) make different choices when facing a moral decision. Incremental theorists are less likely to make the fundamental attribution err...

  4. A Syntactic-Semantic Approach to Incremental Verification

    OpenAIRE

    Bianculli, Domenico; Filieri, Antonio; Ghezzi, Carlo; Mandrioli, Dino

    2013-01-01

    Software verification of evolving systems is challenging mainstream methodologies and tools. Formal verification techniques often conflict with the time constraints imposed by change management practices for evolving systems. Since changes in these systems are often local to restricted parts, an incremental verification approach could be beneficial. This paper introduces SiDECAR, a general framework for the definition of verification procedures, which are made incremental by the framework...

  5. Logistics Modernization Program Increment 2 (LMP Inc 2)

    Science.gov (United States)

    2016-03-01

    Sections 3 and 4 of the LMP Increment 2 Business Case, ADM), key functional requirements, Critical Design Review (CDR) Reports, and Economic... from the 2013 version of the LMP Increment 2 Economic Analysis and replace it with references to the Economic Analysis that will be completed... of (inbound/outbound) IDOCs into the system. LMP must be able to successfully process 95% of (inbound/outbound) IDOCs into the system. Will meet

  6. Organization Strategy and Structural Differences for Radical Versus Incremental Innovation

    OpenAIRE

    John E. Ettlie; William P. Bridges; Robert D. O'Keefe

    1984-01-01

    The purpose of this study was to test a model of the organizational innovation process that suggests that the strategy-structure causal sequence is differentiated by radical versus incremental innovation. That is, unique strategy and structure will be required for radical innovation, especially process adoption, while more traditional strategy and structure arrangements tend to support new product introduction and incremental process adoption. This differentiated theory is strongly supported ...

  7. Incremental Learning for Place Recognition in Dynamic Environments

    OpenAIRE

    Luo, Jie; Pronobis, Andrzej; Caputo, Barbara; Jensfelt, Patric

    2007-01-01

    Vision-based place recognition is a desirable feature for an autonomous mobile system. In order to work in realistic scenarios, visual recognition algorithms should be adaptive, i.e. should be able to learn from experience and adapt continuously to changes in the environment. This paper presents a discriminative incremental learning approach to place recognition. We use a recently introduced version of the incremental SVM, which ...

  8. MRI: Modular reasoning about interference in incremental programming

    OpenAIRE

    Oliveira, Bruno C. D. S; Schrijvers, Tom; Cook, William R

    2012-01-01

    Incremental Programming (IP) is a programming style in which new program components are defined as increments of other components. Examples of IP mechanisms include: Object-oriented programming (OOP) inheritance, aspect-oriented programming (AOP) advice and feature-oriented programming (FOP). A characteristic of IP mechanisms is that, while individual components can be independently defined, the composition of components makes those components become tightly coupled, sh...

  9. Incremental short daily home hemodialysis: a case series

    OpenAIRE

    Toth-Manikowski, Stephanie M.; Mullangi, Surekha; Hwang, Seungyoung; Shafi, Tariq

    2017-01-01

    Background Patients starting dialysis often have substantial residual kidney function. Incremental hemodialysis provides a hemodialysis prescription that supplements patients' residual kidney function while maintaining total (residual + dialysis) urea clearance (standard Kt/Vurea) targets. We describe our experience with incremental hemodialysis in patients using NxStage System One for home hemodialysis. Case presentation From 2011 to 2015, we initiated 5 incident hemodialysis patients on an ...

  10. Atmospheric response to Saharan dust deduced from ECMWF reanalysis increments

    Science.gov (United States)

    Kishcha, P.; Alpert, P.; Barkan, J.; Kirchner, I.; Machenhauer, B.

    2003-04-01

    This study focuses on the atmospheric temperature response to dust deduced from a new source of data - the European Reanalysis (ERA) increments. These increments are the systematic errors of global climate models, generated in the reanalysis procedure. The model errors result not only from the lack of desert dust but also from a complex combination of many kinds of model errors. Over the Sahara desert the dust radiative effect is believed to be a predominant model defect which should significantly affect the increments. This dust effect was examined by considering the correlation between the increments and remotely-sensed dust. Comparisons were made between April temporal variations of the ERA analysis increments and the variations of the Total Ozone Mapping Spectrometer aerosol index (AI) between 1979 and 1993. A distinctive structure was identified in the distribution of correlation, composed of three nested areas with high positive correlation (> 0.5), low correlation, and high negative correlation. Analysis based on European Centre for Medium-Range Weather Forecasts (ECMWF) data suggests that the PCA (NCA) corresponds mainly to anticyclonic (cyclonic) flow, negative (positive) vorticity, and downward (upward) airflow. These facts indicate an interaction between dust-forced heating/cooling and atmospheric circulation. The April correlation results are supported by the analysis of the vertical distribution of dust concentration, derived from the 24-hour dust prediction system at Tel Aviv University (website: http://earth.nasa.proj.ac.il/dust/current/). For other months the analysis is more complicated because of the substantial increase in humidity along with the northward progress of the ITCZ and its significant impact on the increments.

  11. Entity versus incremental theories predict older adults' memory performance.

    Science.gov (United States)

    Plaks, Jason E; Chasteen, Alison L

    2013-12-01

    The authors examined whether older adults' implicit theories regarding the modifiability of memory in particular (Studies 1 and 3) and abilities in general (Study 2) would predict memory performance. In Study 1, individual differences in older adults' endorsement of the "entity theory" (a belief that one's ability is fixed) or "incremental theory" (a belief that one's ability is malleable) of memory were measured using a version of the Implicit Theories Measure (Dweck, 1999). Memory performance was assessed with a free-recall task. Results indicated that the higher the endorsement of the incremental theory, the better the free recall. In Study 2, older and younger adults' theories were measured using a more general version of the Implicit Theories Measure that focused on the modifiability of abilities in general. Again, for older adults, the higher the incremental endorsement, the better the free recall. Moreover, as predicted, implicit theories did not predict younger adults' memory performance. In Study 3, participants read mock news articles reporting evidence in favor of either the entity or incremental theory. Those in the incremental condition outperformed those in the entity condition on reading span and free-recall tasks. These effects were mediated by pretask worry such that, for those in the entity condition, higher worry was associated with lower performance. Taken together, these studies suggest that variation in entity versus incremental endorsement represents a key predictor of older adults' memory performance. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  12. Incremental short daily home hemodialysis: a case series.

    Science.gov (United States)

    Toth-Manikowski, Stephanie M; Mullangi, Surekha; Hwang, Seungyoung; Shafi, Tariq

    2017-07-05

    Patients starting dialysis often have substantial residual kidney function. Incremental hemodialysis provides a hemodialysis prescription that supplements patients' residual kidney function while maintaining total (residual + dialysis) urea clearance (standard Kt/Vurea) targets. We describe our experience with incremental hemodialysis in patients using NxStage System One for home hemodialysis. From 2011 to 2015, we initiated 5 incident hemodialysis patients on an incremental home hemodialysis regimen. The biochemical parameters of all patients remained stable on the incremental hemodialysis regimen and they consistently achieved standard Kt/Vurea targets. Of the two patients with follow-up >6 months, residual kidney function was preserved for ≥2 years. Importantly, the patients were able to transition to home hemodialysis without automatically requiring 5 sessions per week at the outset and gradually increased the number of treatments and/or dialysate volume as the residual kidney function declined. An incremental home hemodialysis regimen can be safely prescribed and may improve acceptability of home hemodialysis. Reducing hemodialysis frequency by even one treatment per week can reduce the number of fistula or graft cannulations or catheter connections by >100 per year, an important consideration for patient well-being, access longevity, and access-related infections. The incremental hemodialysis approach, supported by national guidelines, can be considered for all home hemodialysis patients with residual kidney function.

  13. High-content image informatics of the structural nuclear protein NuMA parses trajectories for stem/progenitor cell lineages and oncogenic transformation

    Energy Technology Data Exchange (ETDEWEB)

    Vega, Sebastián L. [Department of Chemical and Biochemical Engineering, Rutgers University, Piscataway, NJ (United States); Liu, Er; Arvind, Varun [Department of Biomedical Engineering, Rutgers University, Piscataway, NJ (United States); Bushman, Jared [Department of Chemistry and Chemical Biology, New Jersey Center for Biomaterials, Piscataway, NJ (United States); School of Pharmacy, University of Wyoming, Laramie, WY (United States); Sung, Hak-Joon [Department of Chemistry and Chemical Biology, New Jersey Center for Biomaterials, Piscataway, NJ (United States); Department of Biomedical Engineering, Vanderbilt University, Nashville, TN (United States); Becker, Matthew L. [Department of Polymer Science and Engineering, University of Akron, Akron, OH (United States); Lelièvre, Sophie [Department of Basic Medical Sciences, Purdue University, West Lafayette, IN (United States); Kohn, Joachim [Department of Chemistry and Chemical Biology, New Jersey Center for Biomaterials, Piscataway, NJ (United States); Vidi, Pierre-Alexandre, E-mail: pvidi@wakehealth.edu [Department of Cancer Biology, Wake Forest School of Medicine, Winston-Salem, NC (United States); Moghe, Prabhas V., E-mail: moghe@rutgers.edu [Department of Chemical and Biochemical Engineering, Rutgers University, Piscataway, NJ (United States); Department of Biomedical Engineering, Rutgers University, Piscataway, NJ (United States)

    2017-02-01

    Stem and progenitor cells that exhibit significant regenerative potential and critical roles in cancer initiation and progression remain difficult to characterize. Cell fates are determined by reciprocal signaling between the cell microenvironment and the nucleus; hence parameters derived from nuclear remodeling are ideal candidates for stem/progenitor cell characterization. Here we applied high-content, single cell analysis of nuclear shape and organization to examine stem and progenitor cells destined to distinct differentiation endpoints, yet undistinguishable by conventional methods. Nuclear descriptors defined through image informatics classified mesenchymal stem cells poised to either adipogenic or osteogenic differentiation, and oligodendrocyte precursors isolated from different regions of the brain and destined to distinct astrocyte subtypes. Nuclear descriptors also revealed early changes in stem cells after chemical oncogenesis, allowing the identification of a class of cancer-mitigating biomaterials. To capture the metrology of nuclear changes, we developed a simple and quantitative “imaging-derived” parsing index, which reflects the dynamic evolution of the high-dimensional space of nuclear organizational features. A comparative analysis of parsing outcomes via either nuclear shape or textural metrics of the nuclear structural protein NuMA indicates the nuclear shape alone is a weak phenotypic predictor. In contrast, variations in the NuMA organization parsed emergent cell phenotypes and discerned emergent stages of stem cell transformation, supporting a prognosticating role for this protein in the outcomes of nuclear functions. - Highlights: • High-content analysis of nuclear shape and organization classify stem and progenitor cells poised for distinct lineages. • Early oncogenic changes in mesenchymal stem cells (MSCs) are also detected with nuclear descriptors. • A new class of cancer-mitigating biomaterials was identified based on image

  14. High-content image informatics of the structural nuclear protein NuMA parses trajectories for stem/progenitor cell lineages and oncogenic transformation

    International Nuclear Information System (INIS)

    Vega, Sebastián L.; Liu, Er; Arvind, Varun; Bushman, Jared; Sung, Hak-Joon; Becker, Matthew L.; Lelièvre, Sophie; Kohn, Joachim; Vidi, Pierre-Alexandre; Moghe, Prabhas V.

    2017-01-01

    Stem and progenitor cells that exhibit significant regenerative potential and critical roles in cancer initiation and progression remain difficult to characterize. Cell fates are determined by reciprocal signaling between the cell microenvironment and the nucleus; hence parameters derived from nuclear remodeling are ideal candidates for stem/progenitor cell characterization. Here we applied high-content, single cell analysis of nuclear shape and organization to examine stem and progenitor cells destined to distinct differentiation endpoints, yet undistinguishable by conventional methods. Nuclear descriptors defined through image informatics classified mesenchymal stem cells poised to either adipogenic or osteogenic differentiation, and oligodendrocyte precursors isolated from different regions of the brain and destined to distinct astrocyte subtypes. Nuclear descriptors also revealed early changes in stem cells after chemical oncogenesis, allowing the identification of a class of cancer-mitigating biomaterials. To capture the metrology of nuclear changes, we developed a simple and quantitative “imaging-derived” parsing index, which reflects the dynamic evolution of the high-dimensional space of nuclear organizational features. A comparative analysis of parsing outcomes via either nuclear shape or textural metrics of the nuclear structural protein NuMA indicates the nuclear shape alone is a weak phenotypic predictor. In contrast, variations in the NuMA organization parsed emergent cell phenotypes and discerned emergent stages of stem cell transformation, supporting a prognosticating role for this protein in the outcomes of nuclear functions. - Highlights: • High-content analysis of nuclear shape and organization classify stem and progenitor cells poised for distinct lineages. • Early oncogenic changes in mesenchymal stem cells (MSCs) are also detected with nuclear descriptors. • A new class of cancer-mitigating biomaterials was identified based on image

  15. Conceptual plural information is used to guide early parsing decisions: Evidence from garden-path sentences with reciprocal verbs.

    Science.gov (United States)

    Patson, Nikole D; Ferreira, Fernanda

    2009-05-01

    In three eyetracking studies, we investigated the role of conceptual plurality in initial parsing decisions in temporarily ambiguous sentences with reciprocal verbs (e.g., While the lovers kissed the baby played alone). We varied the subject of the first clause using three types of plural noun phrases: conjoined noun phrases (the bride and the groom), plural definite descriptions (the lovers), and numerically quantified noun phrases (the two lovers). We found no evidence for garden-path effects when the subject was conjoined (Ferreira & McClure, 1997), but traditional garden-path effects were found with the other plural noun phrases. In addition, we tested plural anaphors that had a plural antecedent present in the discourse. We found that when the antecedent was conjoined, garden-path effects were absent compared to cases in which the antecedent was a plural definite description. Our results indicate that the parser is sensitive to the conceptual representation of a plural constituent. In particular, it appears that a Complex Reference Object (Moxey et al., 2004) automatically activates a reciprocal reading of a reciprocal verb.

  16. SmilesDrawer: Parsing and Drawing SMILES-Encoded Molecular Structures Using Client-Side JavaScript.

    Science.gov (United States)

    Probst, Daniel; Reymond, Jean-Louis

    2018-01-22

    Here we present SmilesDrawer, a dependency-free JavaScript component capable of both parsing and drawing SMILES-encoded molecular structures client-side, developed to be easily integrated into web projects and to display organic molecules in large numbers and fast succession. SmilesDrawer can draw structurally and stereochemically complex structures such as maitotoxin and C60 without using templates, yet has an exceptionally small computational footprint and low memory usage without the requirement for loading images or any other form of client-server communication, making it easy to integrate even in secure (intranet, firewalled) or offline applications. These features allow the rendering of thousands of molecular structure drawings on a single web page within seconds on a wide range of hardware supporting modern browsers. The source code as well as the most recent build of SmilesDrawer is available on GitHub (http://doc.gdb.tools/smilesDrawer/). Both yarn and npm packages are also available.

  17. High-content image informatics of the structural nuclear protein NuMA parses trajectories for stem/progenitor cell lineages and oncogenic transformation.

    Science.gov (United States)

    Vega, Sebastián L; Liu, Er; Arvind, Varun; Bushman, Jared; Sung, Hak-Joon; Becker, Matthew L; Lelièvre, Sophie; Kohn, Joachim; Vidi, Pierre-Alexandre; Moghe, Prabhas V

    2017-02-01

    Stem and progenitor cells that exhibit significant regenerative potential and critical roles in cancer initiation and progression remain difficult to characterize. Cell fates are determined by reciprocal signaling between the cell microenvironment and the nucleus; hence parameters derived from nuclear remodeling are ideal candidates for stem/progenitor cell characterization. Here we applied high-content, single cell analysis of nuclear shape and organization to examine stem and progenitor cells destined to distinct differentiation endpoints, yet undistinguishable by conventional methods. Nuclear descriptors defined through image informatics classified mesenchymal stem cells poised to either adipogenic or osteogenic differentiation, and oligodendrocyte precursors isolated from different regions of the brain and destined to distinct astrocyte subtypes. Nuclear descriptors also revealed early changes in stem cells after chemical oncogenesis, allowing the identification of a class of cancer-mitigating biomaterials. To capture the metrology of nuclear changes, we developed a simple and quantitative "imaging-derived" parsing index, which reflects the dynamic evolution of the high-dimensional space of nuclear organizational features. A comparative analysis of parsing outcomes via either nuclear shape or textural metrics of the nuclear structural protein NuMA indicates the nuclear shape alone is a weak phenotypic predictor. In contrast, variations in the NuMA organization parsed emergent cell phenotypes and discerned emergent stages of stem cell transformation, supporting a prognosticating role for this protein in the outcomes of nuclear functions. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. The interaction of parsing rules and argument – Predicate constructions: implications for the structure of the Grammaticon in FunGramKB

    Directory of Open Access Journals (Sweden)

    María del Carmen Fumero Pérez

    2017-07-01

    Full Text Available The Functional Grammar Knowledge Base (FunGramKB) (Periñán-Pascual and Arcas-Túnez 2010) is a multipurpose lexico-conceptual knowledge base designed to be used in different Natural Language Processing (NLP) tasks. It is complemented with the ARTEMIS (Automatically Representing Text Meaning via an Interlingua-based System) application, a parsing device linguistically grounded on Role and Reference Grammar (RRG) that transduces natural language fragments into their corresponding grammatical and semantic structures. This paper unveils the different phases involved in its parsing routine, paying special attention to the treatment of argumental constructions. As an illustrative case, we will follow all the steps necessary to effectively parse a For-Benefactive structure within ARTEMIS. This methodology will reveal the necessity to distinguish between Kernel constructs and L1-constructions, since the latter involve a modification of the lexical template of the verb. Our definition of L1-constructions leads to the reorganization of the catalogue of FunGramKB L1-constructions, formerly based on Levin's (1993) alternations. Accordingly, a rearrangement of the internal configuration of the L1-Constructicon within the Grammaticon is proposed.

  19. Support vector machine incremental learning triggered by wrongly predicted samples

    Science.gov (United States)

    Tang, Ting-long; Guan, Qiu; Wu, Yi-rong

    2018-05-01

    According to the classic Karush-Kuhn-Tucker (KKT) theorem, at every step of incremental support vector machine (SVM) learning, a newly added sample that violates the KKT conditions will become a new support vector (SV) and migrate old samples between the SV set and the non-support vector (NSV) set, and at the same time the learning model should be updated based on the SVs. However, it is not exactly clear at this moment which of the old samples will change between SVs and NSVs. Additionally, the learning model may be unnecessarily updated, which will not greatly increase its accuracy but will decrease the training speed. Therefore, how to choose the new SVs from the old sets during the incremental stages and when to process incremental steps will greatly influence the accuracy and efficiency of incremental SVM learning. In this work, a new algorithm is proposed to select candidate SVs and use the wrongly predicted sample to trigger the incremental processing simultaneously. Experimental results show that the proposed algorithm can achieve good performance with high efficiency, high speed and good accuracy.
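
    The trigger idea can be sketched as follows (a toy Python illustration: for brevity it refits a scikit-learn SVC from scratch on each trigger, whereas the paper updates the model incrementally via the KKT conditions):

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(1)
        X = rng.normal(size=(50, 2))
        y = (X[:, 0] + X[:, 1] > 0).astype(int)
        model = SVC(kernel="linear").fit(X, y)

        for _ in range(200):                       # stream of new samples
            x_new = rng.normal(size=(1, 2))
            y_new = int(x_new[0, 0] + x_new[0, 1] > 0)
            if model.predict(x_new)[0] != y_new:   # wrong prediction triggers update
                X = np.vstack([X, x_new])
                y = np.append(y, y_new)
                model = SVC(kernel="linear").fit(X, y)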

  20. Three routes forward for biofuels: Incremental, leapfrog, and transitional

    International Nuclear Information System (INIS)

    Morrison, Geoff M.; Witcover, Julie; Parker, Nathan C.; Fulton, Lew

    2016-01-01

    This paper examines three technology routes for lowering the carbon intensity of biofuels: (1) a leapfrog route that focuses on major technological breakthroughs in lignocellulosic pathways at new, stand-alone biorefineries; (2) an incremental route in which improvements are made to existing U.S. corn ethanol and soybean biodiesel biorefineries; and (3) a transitional route in which biotechnology firms gain experience growing, handling, or chemically converting lignocellulosic biomass in a lower-risk fashion than leapfrog biorefineries by leveraging existing capital stock. We find the incremental route is likely to involve the largest production volumes and greenhouse gas benefits until at least the mid-2020s, but transitional and leapfrog biofuels together have far greater long-term potential. We estimate that the Renewable Fuel Standard, California's Low Carbon Fuel Standard, and federal tax credits provided an incentive of roughly $1.5–2.5 per gallon of leapfrog biofuel between 2012 and 2015, but that regulatory elements in these policies mostly incentivize lower-risk incremental investments. Adjustments in policy may be necessary to bring a greater focus on transitional technologies that provide targeted learning and cost reduction opportunities for leapfrog biofuels. - Highlights: • Three technological pathways are compared that lower carbon intensity of biofuels. • Incremental changes lead to faster greenhouse gas reductions. • Leapfrog changes lead to greatest long-term potential. • Two main biofuel policies (RFS and LCFS) are largely incremental in nature. • Transitional biofuels offer medium-risk, medium reward pathway.

  1. The Parsing Syllable Envelopes Test for Assessment of Amplitude Modulation Discrimination Skills in Children: Development, Normative Data, and Test-Retest Reliability Studies.

    Science.gov (United States)

    Cameron, Sharon; Chong-White, Nicky; Mealings, Kiri; Beechey, Tim; Dillon, Harvey; Young, Taegan

    2018-02-01

    Intensity peaks and valleys in the acoustic signal are salient cues to syllable structure, which is accepted to be a crucial early step in phonological processing. As such, the ability to detect low-rate (envelope) modulations in signal amplitude is essential to parse an incoming speech signal into smaller phonological units. The Parsing Syllable Envelopes (ParSE) test was developed to quantify the ability of children to recognize syllable boundaries using an amplitude modulation detection paradigm. The envelope of a 750-msec steady-state /a/ vowel is modulated into two or three pseudo-syllables using notches with modulation depths varying between 0% and 100% along an 11-step continuum. In an adaptive three-alternative forced-choice procedure, the participant identified whether one, two, or three pseudo-syllables were heard. Development of the ParSE stimuli and test protocols, and collection of normative and test-retest reliability data. Eleven adults (aged 23 yr 10 mo to 50 yr 9 mo, mean 32 yr 10 mo) and 134 typically developing, primary-school children (aged 6 yr 0 mo to 12 yr 4 mo, mean 9 yr 3 mo). There were 73 males and 72 females. Data were collected using a touchscreen computer. Psychometric functions (PFs) were automatically fit to individual data by the ParSE software. Performance was related to the modulation depth at which syllables can be detected with 88% accuracy (referred to as the upper boundary of the uncertainty region [UBUR]). A shallower PF slope reflected a greater level of uncertainty. Age effects were determined based on raw scores. z Scores were calculated to account for the effect of age on performance. Outliers, and individual data for which the confidence interval of the UBUR exceeded a maximum allowable value, were removed. Nonparametric tests were used as the data were skewed toward negative performance. Across participants, the performance criterion (UBUR) was met with a median modulation depth of 42%. The effect of age on the UBUR was

  2. Making context explicit for explanation and incremental knowledge acquisition

    Energy Technology Data Exchange (ETDEWEB)

    Brezillon, P. [Univ. Paris (France)]

    1996-12-31

    Intelligent systems may be improved by making context explicit in problem solving. This is a lesson drawn from a study of the reasons why a number of knowledge-based systems (KBSs) failed. We discuss the interest of making context explicit in explanation generation and incremental knowledge acquisition, two important aspects of intelligent systems that aim to cooperate with users. We show how context can be used to better explain and incrementally acquire knowledge. The advantages of using context in explanation and incremental knowledge acquisition are discussed through SEPIT, an expert system for supporting diagnosis and explanation through simulation of power plants. We point out how the limitations of such systems may be overcome by making context explicit.

  3. Martingales, nonstationary increments, and the efficient market hypothesis

    Science.gov (United States)

    McCauley, Joseph L.; Bassler, Kevin E.; Gunaratne, Gemunu H.

    2008-06-01

    We discuss the deep connection between nonstationary increments, martingales, and the efficient market hypothesis for stochastic processes x(t) with arbitrary diffusion coefficients D(x,t). We explain why a test for a martingale is generally a test for uncorrelated increments. We explain why martingales look Markovian at the level of both simple averages and 2-point correlations. But while a Markovian market has no memory to exploit and cannot be beaten systematically, a martingale admits memory that might be exploitable in higher order correlations. We also use the analysis of this paper to correct a misstatement of the ‘fair game’ condition in terms of serial correlations in Fama’s paper on the EMH. We emphasize that the use of the log increment as a variable in data analysis generates spurious fat tails and spurious Hurst exponents.
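
    In symbols (standard definitions, not notation quoted from the paper), the martingale and uncorrelated-increment properties at stake read

        \mathbb{E}\left[ x(t+T) \mid x(t), x(t-T), \ldots \right] = x(t),
        \qquad
        \mathbb{E}\left[ \big( x(t+T)-x(t) \big)\big( x(t)-x(t-T) \big) \right] = 0 ,

    where the second relation follows from the first by conditioning; this is why a test for a martingale is in practice a test for serially uncorrelated, though possibly nonstationary, increments.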

  4. Single point incremental forming: Formability of PC sheets

    Science.gov (United States)

    Formisano, A.; Boccarusso, L.; Carrino, L.; Lambiase, F.; Minutolo, F. Memola Capece

    2018-05-01

    Recent research on Single Point Incremental Forming of polymers has slightly covered the possibility of expanding the materials capability window of this flexible forming process beyond metals, by demonstrating the workability of thermoplastic polymers at room temperature. Given the different behaviour of polymers compared to metals, different aspects need to be deepened to better understand the behaviour of these materials when incrementally formed. Thus, the aim of the work is to investigate the formability of incrementally formed polycarbonate thin sheets. To this end, an experimental investigation at room temperature was conducted involving formability tests; varying wall angle cone and pyramid frusta were manufactured by processing polycarbonate sheets with different thicknesses and using tools with different diameters, in order to draw conclusions on the formability of polymer sheets through the evaluation of the forming angles and the observation of the failure mechanisms.

  5. Motion-Induced Blindness Using Increments and Decrements of Luminance

    Directory of Open Access Journals (Sweden)

    Stine Wm Wren

    2017-10-01

    Full Text Available Motion-induced blindness describes the disappearance of stationary elements of a scene when other, perhaps non-overlapping, elements of the scene are in motion. We measured the effects of increment (200.0 cd/m2) and decrement (15.0 cd/m2) targets and masks presented on a grey background (108.0 cd/m2), tapping into putative ON- and OFF-channels, on the rate of target disappearance psychophysically. We presented two-frame motion, which has coherent motion energy, and dynamic Glass patterns and dynamic anti-Glass patterns, which do not have coherent motion energy. Using the method of constant stimuli, participants viewed stimuli of varying durations (3.1 s, 4.6 s, 7.0 s, 11 s, or 16 s) in a given trial and then indicated whether or not the targets vanished during that trial. Psychometric function midpoints were used to define the absolute threshold mask duration for the disappearance of the target. 95% confidence intervals for threshold disappearance times were estimated using a bootstrap technique for each of the participants across two experiments. Decrement masks were more effective than increment masks with increment targets. Increment targets were easier to mask than decrement targets. Distinct mask pattern types had no effect, suggesting that perceived coherence contributes to the effectiveness of the mask. The ON/OFF dichotomy clearly carries its influence to the level of perceived motion coherence. Further, the asymmetry in the effects of increment and decrement masks on increment and decrement targets might lead one to speculate that they reflect the ‘importance’ of detecting decrements in the environment.

  6. The Dark Side of Malleability: Incremental Theory Promotes Immoral Behaviors.

    Science.gov (United States)

    Huang, Niwen; Zuo, Shijiang; Wang, Fang; Cai, Pan; Wang, Fengxiang

    2017-01-01

    Implicit theories drastically affect an individual's processing of social information, decision making, and action. The present research focuses on whether individuals who hold the implicit belief that people's moral character is fixed (entity theorists) and individuals who hold the implicit belief that people's moral character is malleable (incremental theorists) make different choices when facing a moral decision. Incremental theorists are less likely to make the fundamental attribution error (FAE), rarely make moral judgment based on traits and show more tolerance to immorality, relative to entity theorists, which might decrease the possibility of undermining the self-image when they engage in immoral behaviors, and thus we posit that incremental beliefs facilitate immorality. Four studies were conducted to explore the effect of these two types of implicit theories on immoral intention or practice. The association between implicit theories and immoral behavior was preliminarily examined from the observer perspective in Study 1, and the results showed that people tended to associate immoral behaviors (including everyday immoral intention and environmental destruction) with an incremental theorist rather than an entity theorist. Then, the relationship was further replicated from the actor perspective in Studies 2-4. In Study 2, implicit theories, which were measured, positively predicted the degree of discrimination against carriers of the hepatitis B virus. In Study 3, implicit theories were primed through reading articles, and the participants in the incremental condition showed more cheating than those in the entity condition. In Study 4, implicit theories were primed through a new manipulation, and the participants in the unstable condition (primed incremental theory) showed more discrimination than those in the other three conditions. Taken together, the results of our four studies were consistent with our hypotheses.

  7. The Dark Side of Malleability: Incremental Theory Promotes Immoral Behaviors

    Directory of Open Access Journals (Sweden)

    Niwen Huang

    2017-08-01

    Full Text Available Implicit theories drastically affect an individual’s processing of social information, decision making, and action. The present research focuses on whether individuals who hold the implicit belief that people’s moral character is fixed (entity theorists) and individuals who hold the implicit belief that people’s moral character is malleable (incremental theorists) make different choices when facing a moral decision. Incremental theorists are less likely to make the fundamental attribution error (FAE), rarely make moral judgment based on traits and show more tolerance to immorality, relative to entity theorists, which might decrease the possibility of undermining the self-image when they engage in immoral behaviors, and thus we posit that incremental beliefs facilitate immorality. Four studies were conducted to explore the effect of these two types of implicit theories on immoral intention or practice. The association between implicit theories and immoral behavior was preliminarily examined from the observer perspective in Study 1, and the results showed that people tended to associate immoral behaviors (including everyday immoral intention and environmental destruction) with an incremental theorist rather than an entity theorist. Then, the relationship was further replicated from the actor perspective in Studies 2–4. In Study 2, implicit theories, which were measured, positively predicted the degree of discrimination against carriers of the hepatitis B virus. In Study 3, implicit theories were primed through reading articles, and the participants in the incremental condition showed more cheating than those in the entity condition. In Study 4, implicit theories were primed through a new manipulation, and the participants in the unstable condition (primed incremental theory) showed more discrimination than those in the other three conditions. Taken together, the results of our four studies were consistent with our hypotheses.

  8. Average-case analysis of incremental topological ordering

    DEFF Research Database (Denmark)

    Ajwani, Deepak; Friedrich, Tobias

    2010-01-01

    Many applications like pointer analysis and incremental compilation require maintaining a topological ordering of the nodes of a directed acyclic graph (DAG) under dynamic updates. All known algorithms for this problem are either only analyzed for worst-case insertion sequences or only evaluated experimentally on random DAGs. We present the first average-case analysis of incremental topological ordering algorithms. We prove an expected runtime bound under insertion of the edges of a complete DAG in a random order for the algorithms of Alpern et al. (1990) [4], Katriel and Bodlaender (2006) [18], and Pearce...
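
    A minimal sketch of the kind of algorithm being analyzed (Pearce-Kelly style: on each edge insertion only the "affected region" of the order is repaired; invented function names, no claim to match the analyzed implementations):

        def make_dag(n):
            return {"adj": [[] for _ in range(n)],
                    "order": list(range(n)),      # current topological order
                    "pos": list(range(n))}        # node -> position in order

        def insert_edge(g, u, v):
            g["adj"][u].append(v)
            lb, ub = g["pos"][v], g["pos"][u]
            if lb >= ub:                          # order already consistent
                return
            seen, stack = {v}, [v]                # nodes reachable from v...
            while stack:                          # ...within positions <= ub
                for x in g["adj"][stack.pop()]:
                    if x not in seen and g["pos"][x] <= ub:
                        if x == u:
                            raise ValueError("insertion would create a cycle")
                        seen.add(x)
                        stack.append(x)
            window = g["order"][lb:ub + 1]        # repair only this slice
            g["order"][lb:ub + 1] = ([w for w in window if w not in seen]
                                     + [w for w in window if w in seen])
            for i in range(lb, ub + 1):
                g["pos"][g["order"][i]] = i

        g = make_dag(4)
        insert_edge(g, 3, 0)                      # forces 3 before 0
        print(g["order"])                         # [1, 2, 3, 0]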

  9. Apparatus for electrical-assisted incremental forming and process thereof

    Science.gov (United States)

    Roth, John; Cao, Jian

    2018-04-24

    A process and apparatus for forming a sheet metal component using an electric current passing through the component. The process can include providing an incremental forming machine, the machine having at least one arcuate tipped tool and at least electrode spaced a predetermined distance from the arcuate tipped tool. The machine is operable to perform a plurality of incremental deformations on the sheet metal component using the arcuate tipped tool. The machine is also operable to apply an electric direct current through the electrode into the sheet metal component at the predetermined distance from the arcuate tipped tool while the machine is forming the sheet metal component.

  10. Single-point incremental forming and formability-failure diagrams

    DEFF Research Database (Denmark)

    Silva, M.B.; Skjødt, Martin; Atkins, A.G.

    2008-01-01

    In a recent work [1], the authors constructed a closed-form analytical model that is capable of dealing with the fundamentals of single point incremental forming and explaining the experimental and numerical results published in the literature over the past couple of years. The model is based...... of deformation that are commonly found in general single point incremental forming processes; and (ii) to investigate the formability limits of SPIF in terms of ductile damage mechanics and the question of whether necking does, or does not, precede fracture. Experimentation by the authors together with data...

  11. Short-term load forecasting with increment regression tree

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Jingfei; Stenzel, Juergen [Darmstadt University of Technology, Darmstadt 64283 (Germany)]

    2006-06-15

    This paper presents a new regression tree method for short-term load forecasting. Both increment and non-increment trees are built according to the historical data to provide the data space partition and input variable selection. A support vector machine is employed on the samples of the regression tree nodes for further fine regression. Results of different tree nodes are integrated through a weighted average method to obtain the comprehensive forecasting result. The effectiveness of the proposed method is demonstrated through its application to an actual system. (author)

  12. Volatilities, Traded Volumes, and Price Increments in Derivative Securities

    Science.gov (United States)

    Kim, Kyungsik; Lim, Gyuchang; Kim, Soo Yong; Scalas, Enrico

    2007-03-01

    We apply the detrended fluctuation analysis (DFA) to the statistics of the Korean treasury bond (KTB) futures, from which the logarithmic increments, volatilities, and traded volumes are estimated over a specific time lag. In our case, the logarithmic increment of futures prices has no long-memory property, while the volatility and the traded volume exhibit long-memory properties. To determine whether the volatility clustering is due to an inherent higher-order correlation not detected by applying the DFA directly to the logarithmic increments of the KTB futures, it is important to shuffle the original tick data of futures prices and to generate a geometric Brownian random walk with the same mean and standard deviation. Comparison of the three tick data sets shows that the higher-order correlation inherent in the logarithmic increments produces the volatility clustering. In particular, the result of the DFA on volatilities and traded volumes may support the hypothesis of price changes.
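
    The DFA procedure itself is short enough to sketch (a minimal Python version with order-1 detrending and a fixed set of window sizes; the exponent alpha is the log-log slope of fluctuation size against window size):

        import numpy as np

        def dfa(x, windows=(8, 16, 32, 64, 128)):
            profile = np.cumsum(x - np.mean(x))       # integrated series
            fluct = []
            for w in windows:
                n = len(profile) // w
                segs = profile[:n * w].reshape(n, w)
                t = np.arange(w)
                ms = [np.mean((s - np.polyval(np.polyfit(t, s, 1), t)) ** 2)
                      for s in segs]                   # detrended variance
                fluct.append(np.sqrt(np.mean(ms)))
            alpha, _ = np.polyfit(np.log(windows), np.log(fluct), 1)
            return alpha

        # white noise has no memory: alpha comes out close to 0.5
        print(dfa(np.random.default_rng(2).normal(size=4096)))

    A long-memory series such as the volatilities described above would instead yield an alpha noticeably larger than 0.5.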

  13. Playing by the rules? Analysing incremental urban developments

    NARCIS (Netherlands)

    Karnenbeek, van Lilian; Janssen-Jansen, Leonie

    2018-01-01

    Current urban developments are often considered outdated and static, and the argument follows that they should become more adaptive. In this paper, we argue that existing urban developments are already adaptive and incremental. Given this flexibility in urban development, understanding changes in the

  14. Size, Stability and Incremental Budgeting Outcomes in Public Universities.

    Science.gov (United States)

    Schick, Allen G.; Hills, Frederick S.

    1982-01-01

    Examined the influence of relative size in the analysis of total dollar and workforce budgets, and changes in total dollar and workforce budgets when correlational/regression methods are used. Data suggested that size dominates the analysis of total budgets, and is not a factor when discretionary dollar increments are analyzed. (JAC)

  15. The National Institute of Education and Incremental Budgeting.

    Science.gov (United States)

    Hastings, Anne H.

    1979-01-01

    The National Institute of Education's (NIE) history demonstrates that the relevant criteria for characterizing budgeting as incremental are not the predictability and stability of appropriations but the conditions of complexity, limited information, multiple factors, and imperfect agreement on ends; NIE's appropriations were dominated by political…

  16. Generation of Referring Expressions: Assessing the Incremental Algorithm

    Science.gov (United States)

    van Deemter, Kees; Gatt, Albert; van der Sluis, Ielka; Power, Richard

    2012-01-01

    A substantial amount of recent work in natural language generation has focused on the generation of "one-shot" referring expressions whose only aim is to identify a target referent. Dale and Reiter's Incremental Algorithm (IA) is often thought to be the best algorithm for maximizing the similarity to referring expressions produced by people. We…
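
    The Incremental Algorithm itself is compact; a minimal Python sketch (with an invented toy domain) shows the core loop of keeping every preferred attribute that rules out at least one remaining distractor:

        def incremental_algorithm(target, distractors, preferred):
            description, remaining = {}, list(distractors)
            for attr in preferred:
                value = target[attr]
                if any(d.get(attr) != value for d in remaining):
                    description[attr] = value          # attribute discriminates
                    remaining = [d for d in remaining if d.get(attr) == value]
                if not remaining:                      # referent identified
                    break
            return description

        target = {"type": "dog", "colour": "black", "size": "small"}
        others = [{"type": "dog", "colour": "white", "size": "small"},
                  {"type": "cat", "colour": "black", "size": "small"}]
        print(incremental_algorithm(target, others, ["type", "colour", "size"]))
        # -> {'type': 'dog', 'colour': 'black'}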

  17. Object class hierarchy for an incremental hypertext editor

    Directory of Open Access Journals (Sweden)

    A. Colesnicov

    1995-02-01

    Full Text Available The object class hierarchy design is considered for a hypertext editor implementation. The following basic classes were selected: the editor's coordinate system, the memory manager, the text buffer executing basic editing operations, the inherited hypertext buffer, the edit window, and the multi-window shell. Special hypertext editing features, incremental hypertext creation support, and further generalizations are discussed.
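
    A skeletal Python rendering (invented names, following the class list in the abstract) makes the inheritance structure concrete:

        class TextBuffer:
            """Basic editing operations on plain text."""
            def __init__(self):
                self.lines = [""]
            def insert(self, row, col, s):
                line = self.lines[row]
                self.lines[row] = line[:col] + s + line[col:]

        class HypertextBuffer(TextBuffer):
            """Inherits editing; adds links for incremental hypertext creation."""
            def __init__(self):
                super().__init__()
                self.links = {}                    # (row, col) -> target node
            def add_link(self, row, col, target):
                self.links[(row, col)] = target

        class EditWindow:
            def __init__(self, buffer):
                self.buffer = buffer               # a window renders one buffer

        class MultiWindowShell:
            def __init__(self):
                self.windows = []
            def open(self, buffer):
                self.windows.append(EditWindow(buffer))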

  18. Bipower variation for Gaussian processes with stationary increments

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Corcuera, José Manuel; Podolskij, Mark

    2009-01-01

    Convergence in probability and central limit laws of bipower variation for Gaussian processes with stationary increments and for integrals with respect to such processes are derived. The main tools of the proofs are some recent powerful techniques of Wiener/Itô/Malliavin calculus for establishing...
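
    For orientation, the bipower variation statistic in question is built from products of consecutive absolute increments of the observed process X (our rendering in standard notation, not the authors' exact normalization):

        \mathrm{BV}_n \;=\; \sum_{i=2}^{n} \bigl| X_{i/n} - X_{(i-1)/n} \bigr| \, \bigl| X_{(i-1)/n} - X_{(i-2)/n} \bigr| .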

  19. Identifying the Academic Rising Stars via Pairwise Citation Increment Ranking

    KAUST Repository

    Zhang, Chuxu; Liu, Chuang; Yu, Lu; Zhang, Zi-Ke; Zhou, Tao

    2017-01-01

    ...successful academic careers. In this work, given a set of young researchers who have published their first first-author paper recently, we solve the problem of how to effectively predict the top k% of researchers who achieve the highest citation increment in Δt

  20. Revisiting the fundamentals of single point incremental forming by

    DEFF Research Database (Denmark)

    Silva, Beatriz; Skjødt, Martin; Martins, Paulo A.F.

    2008-01-01

    Knowledge of the physics behind the fracture of material at the transition between the inclined wall and the corner radius of the sheet is of great importance for understanding the fundamentals of single point incremental forming (SPIF). How the material fractures, what is the state of strain...

  1. Some theoretical aspects of capacity increment in gaseous diffusion

    Energy Technology Data Exchange (ETDEWEB)

    Coates, J. H.; Guais, J. C.; Lamorlette, G.

    1975-09-01

    Facing the sharply growing need for enrichment services, the implementation of new capacities must be included in an optimized scheme spread out over time. In this paper the alternative solutions are studied first for a single increment decision, and then within an optimum schedule. The limits of the analysis are discussed.

  2. Respiratory ammonia output and blood ammonia concentration during incremental exercise

    NARCIS (Netherlands)

    Ament, W; Huizenga; Kort, E; van der Mark, TW; Grevink, RG; Verkerke, GJ

    The aim of this study was to investigate whether the increase of ammonia concentration and lactate concentration in blood was accompanied by an increased expiration of ammonia during graded exercise. Eleven healthy subjects performed an incremental cycle ergometer test. Blood ammonia, blood lactate

  3. Incremental concept learning with few training examples and hierarchical classification

    NARCIS (Netherlands)

    Bouma, H.; Eendebak, P.T.; Schutte, K.; Azzopardi, G.; Burghouts, G.J.

    2015-01-01

    Object recognition and localization are important to automatically interpret video and allow better querying on its content. We propose a method for object localization that learns incrementally and addresses four key aspects. Firstly, we show that for certain applications, recognition is feasible

  4. Factors for Radical Creativity, Incremental Creativity, and Routine, Noncreative Performance

    Science.gov (United States)

    Madjar, Nora; Greenberg, Ellen; Chen, Zheng

    2011-01-01

    This study extends theory and research by differentiating between routine, noncreative performance and 2 distinct types of creativity: radical and incremental. We also use a sensemaking perspective to examine the interplay of social and personal factors that may influence a person's engagement in a certain level of creative action versus routine,…

  5. Variance-optimal hedging for processes with stationary independent increments

    DEFF Research Database (Denmark)

    Hubalek, Friedrich; Kallsen, J.; Krawczyk, L.

    We determine the variance-optimal hedge when the logarithm of the underlying price follows a process with stationary independent increments in discrete or continuous time. Although the general solution to this problem is known as backward recursion or backward stochastic differential equation, we...

  6. Incremental exercise test performance with and without a respiratory ...

    African Journals Online (AJOL)

    Incremental exercise test performance with and without a respiratory gas collection system. Industrial-type mask wear is thought to impair exercise performance through increased respiratory dead space, flow...

  7. 78 FR 22770 - Immigration Benefits Business Transformation, Increment I; Correction

    Science.gov (United States)

    2013-04-17

    ...-2009-0022] RIN 1615-AB83 Immigration Benefits Business Transformation, Increment I; Correction AGENCY...: Background On August 29, 2011, DHS issued a final rule titled, Immigration Benefits Business Transformation... business processes. In this notice, we are correcting three technical errors. DATES: The effective date of...

  8. Minimizing System Modification in an Incremental Design Approach

    DEFF Research Database (Denmark)

    Pop, Paul; Eles, Petru; Pop, Traian

    2001-01-01

    In this paper we present an approach to mapping and scheduling of distributed embedded systems for hard real-time applications, aiming at minimizing the system modification cost. We consider an incremental design process that starts from an already existing system running a set of applications. We...

  9. Incremental cryptography and security of public hash functions ...

    African Journals Online (AJOL)

    An investigation of incremental algorithms for cryptographic functions was initiated. The problem, for collision-free hashing, is to design a scheme for which there exists an efficient “update” algorithm: this algorithm is given the hash function H, the hash h = H(M) of message M and the “replacement request” (j, m), and outputs ...
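
    The flavor of such an update algorithm can be conveyed with a toy block-wise construction (an illustration only, not a secure or collision-free scheme): hash fixed-size blocks separately, combine the short per-block digests, and on a replacement request (j, m) re-hash only block j.

        import hashlib

        BLOCK = 16

        def block_hashes(message: bytes):
            blocks = [message[i:i + BLOCK] for i in range(0, len(message), BLOCK)]
            return [hashlib.sha256(b).digest() for b in blocks]

        def combine(hashes):
            return hashlib.sha256(b"".join(hashes)).hexdigest()

        msg = b"incremental hashing toy example, block-structured message!"
        hs = block_hashes(msg)
        h = combine(hs)

        j, m = 1, b"REPLACEMENT BLK!"              # replacement request (j, m)
        hs[j] = hashlib.sha256(m).digest()         # re-hash only block j
        h_updated = combine(hs)                    # combine touches digests only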

  10. Incremental principal component pursuit for video background modeling

    Science.gov (United States)

    Rodriquez-Valderrama, Paul A.; Wohlberg, Brendt

    2017-03-14

    An incremental Principal Component Pursuit (PCP) algorithm for video background modeling that is able to process one frame at a time while adapting to changes in the background, with a computational complexity that allows real-time processing, a low memory footprint, and robustness to translational and rotational jitter.

  11. Evidence combination for incremental decision-making processes

    NARCIS (Netherlands)

    Berrada, Ghita; van Keulen, Maurice; de Keijzer, Ander

    The establishment of a medical diagnosis is an incremental process highly fraught with uncertainty. At each step of this painstaking process, it may be beneficial to be able to quantify the uncertainty linked to the diagnosis and steadily update the uncertainty estimation using available sources of

  12. Geometry of finite deformations and time-incremental analysis

    Czech Academy of Sciences Publication Activity Database

    Fiala, Zdeněk

    2016-01-01

    Vol. 81, May (2016), pp. 230-244 ISSN 0020-7462 Institutional support: RVO:68378297 Keywords: solid mechanics * finite deformations * time-incremental analysis * Lagrangian system * evolution equation of Lie type Subject RIV: BE - Theoretical Physics Impact factor: 2.074, year: 2016 http://www.sciencedirect.com/science/article/pii/S0020746216000330

  13. Compressing DNA sequence databases with coil

    Directory of Open Access Journals (Sweden)

    Hendy Michael D

    2008-05-01

    Background: Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression – an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. Results: We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression – the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. Conclusion: coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work.
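
    For reference, the conventional baseline the Background contrasts coil against — standard Lempel-Ziv (gzip) compression of a flat file — is a few lines of Python (file names here are hypothetical):

        import gzip
        import shutil

        # Stream-compress a FASTA-style flat file without loading it into memory.
        with open("est_records.fasta", "rb") as src:
            with gzip.open("est_records.fasta.gz", "wb") as dst:
                shutil.copyfileobj(src, dst)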

  14. Incremental projection approach of regularization for inverse problems

    Energy Technology Data Exchange (ETDEWEB)

    Souopgui, Innocent, E-mail: innocent.souopgui@usm.edu [The University of Southern Mississippi, Department of Marine Science (United States); Ngodock, Hans E., E-mail: hans.ngodock@nrlssc.navy.mil [Naval Research Laboratory (United States); Vidard, Arthur, E-mail: arthur.vidard@imag.fr; Le Dimet, François-Xavier, E-mail: ledimet@imag.fr [Laboratoire Jean Kuntzmann (France)

    2016-10-15

    This paper presents an alternative approach to the regularized least squares solution of ill-posed inverse problems. Instead of solving a minimization problem with an objective function composed of a data term and a regularization term, the regularization information is used to define a projection onto a convex subspace of regularized candidate solutions. The objective function is modified to include the projection of each iterate in the place of the regularization. Numerical experiments based on the problem of motion estimation for geophysical fluid images show the improvement of the proposed method over regularization methods. For the presented test case, the incremental projection method uses 7 times less computation time than the regularization method to reach the same error target. Moreover, at convergence, the incremental projection is two orders of magnitude more accurate than the regularization method.
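
    An editor's minimal sketch of the idea — gradient steps on the data term with each iterate projected onto a convex candidate set — using a Euclidean ball as a stand-in for the paper's regularization subspace (all sizes and the radius are assumed):

        import numpy as np

        rng = np.random.default_rng(0)
        m, n = 80, 50
        A = rng.standard_normal((m, n))
        x_true = rng.standard_normal(n)
        b = A @ x_true + 0.01 * rng.standard_normal(m)

        tau = np.linalg.norm(x_true)            # radius of the candidate set (assumed known here)

        def project(x, tau):
            # Euclidean projection onto the ball {x : ||x|| <= tau}
            nrm = np.linalg.norm(x)
            return x if nrm <= tau else x * (tau / nrm)

        x = np.zeros(n)
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # safe step size for the data term
        for _ in range(200):
            grad = A.T @ (A @ x - b)            # gradient of the data term only
            x = project(x - step * grad, tau)   # regularization enters via the projection

        print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))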

  15. Will Incremental Hemodialysis Preserve Residual Function and Improve Patient Survival?

    Science.gov (United States)

    Davenport, Andrew

    2015-01-01

    The progressive loss of residual renal function in peritoneal dialysis patients is associated with increased mortality. It has been suggested that incremental dialysis may help preserve residual renal function and improve patient survival. Residual renal function depends upon both patient-related and dialysis-associated factors. Maintaining patients in an over-hydrated state may be associated with better preservation of residual renal function, but any benefit comes with a significant risk of cardiovascular consequences. Notably, only observational studies have reported an association between residual renal function and dialysis patient survival; causality has not been established. The tenuous connections between residual renal function and outcomes and between incremental hemodialysis and residual renal function should temper our enthusiasm for interventions in this area. PMID:25385441

  16. Power calculation of linear and angular incremental encoders

    Science.gov (United States)

    Prokofev, Aleksandr V.; Timofeev, Aleksandr N.; Mednikov, Sergey V.; Sycheva, Elena A.

    2016-04-01

    Automation technology is constantly expanding its role in improving the efficiency of manufacturing and testing processes in all branches of industry. More than ever before, the mechanical movements of linear slides, rotary tables, robot arms, actuators, etc. are numerically controlled. Linear and angular incremental photoelectric encoders measure mechanical motion and transmit the measured values back to the control unit. The capabilities of these systems are undergoing continual development in terms of their resolution, accuracy and reliability, their measuring ranges, and maximum speeds. This article discusses a method for the power calculation of linear and angular incremental photoelectric encoders, used to find the optimum parameters for their components, such as light emitters, photo-detectors, linear and angular scales, and other optical components. It analyzes methods and devices that permit high resolutions on the order of 0.001 mm or 0.001°, as well as large measuring lengths of over 100 mm. In linear and angular incremental photoelectric encoders, the optical beam, usually formed by a condenser lens, passes through the measuring unit and changes its value depending on the movement of a scanning head or measuring raster. The transmitted light beam is converted into an electrical signal by the photo-detector block for processing in the electrical block. The quantity needed for calculating the energy source is therefore the required value of the optical signal at the input of the photo-detector block, which can then be reliably recorded and processed in the electronic unit of linear and angular incremental optoelectronic encoders.

  17. Table incremental slow injection CE-CT in lung cancer

    International Nuclear Information System (INIS)

    Yoshida, Shoji; Maeda, Tomoho; Morita, Masaru

    1988-01-01

    The purpose of this study is to evaluate tumor enhancement in lung cancer under the table incremental study with slow injection of contrast media. Early serial 8-slice images were obtained during the slow injection (1.5 ml/sec) of contrast media. Following the early images, delayed images of the same 8 slices were taken 2 minutes later. Characteristic enhancement patterns of the primary cancer and metastatic mediastinal lymph nodes were recognized in this study. Enhancement of the primary lesion was classified into 4 patterns: irregular geographic, heterogeneous, homogeneous, and rim-enhanced. In mediastinal metastatic lymphadenopathy, three enhancement patterns were obtained: heterogeneous, homogeneous, and ring-enhanced. Some characteristic enhancement patterns corresponding to the histopathological findings of the lung cancer were obtained. Using this incremental slow-injection CE-CT, precise information about the relationship between lung cancer and adjacent mediastinal structures, and clear staining patterns of the tumor and mediastinal lymph nodes, were recognized. (author)

  18. Final Safety Analysis Report (FSAR) for Building 332, Increment III

    Energy Technology Data Exchange (ETDEWEB)

    Odell, B. N.; Toy, Jr., A. J.

    1977-08-31

    This Final Safety Analysis Report (FSAR) supplements the Preliminary Safety Analysis Report (PSAR), dated January 18, 1974, for Building 332, Increment III of the Plutonium Materials Engineering Facility located at the Lawrence Livermore Laboratory (LLL). The FSAR, in conjunction with the PSAR, shows that the completed increment provides facilities for safely conducting the operations as described. These documents satisfy the requirements of ERDA Manual Appendix 6101, Annex C, dated April 8, 1971. The format and content of this FSAR comply with the basic requirements of the letter of request from ERDA San to LLL, dated March 10, 1972. Included as appendices in support of the FSAR are the Building 332 Operational Safety Procedure and the LLL Disaster Control Plan.

  19. Decoupled Simulation Method For Incremental Sheet Metal Forming

    International Nuclear Information System (INIS)

    Sebastiani, G.; Brosius, A.; Tekkaya, A. E.; Homberg, W.; Kleiner, M.

    2007-01-01

    Within the scope of this article, a decoupling algorithm to reduce computing time in Finite Element Analyses of incremental forming processes will be investigated. Based on the given position of the small forming zone, the presented algorithm aims at separating a Finite Element Model into an elastic and an elasto-plastic deformation zone. Including the elastic response of the structure by means of model simplifications, the costly iteration in the elasto-plastic zone can be restricted to the small forming zone and to a few supporting elements in order to reduce computation time. Since the forming zone moves along the specimen, an update of both the forming zone with its elastic boundary and the supporting structure is needed after several increments. The presented paper discusses the algorithmic implementation of the approach and introduces several strategies to implement the denoted elastic boundary condition at the boundary of the plastic forming zone.

  20. Incremental exposure facilitates adaptation to sensory rearrangement. [vestibular stimulation patterns

    Science.gov (United States)

    Lackner, J. R.; Lobovits, D. N.

    1978-01-01

    Visual-target pointing experiments were performed on 24 adult volunteers in order to compare the relative effectiveness of incremental (stepwise) and single-step exposure conditions on adaptation to visual rearrangement. The differences between the preexposure and postexposure scores served as an index of the adaptation elicited during the exposure period. It is found that both single-step and stepwise exposure to visual rearrangement elicit compensatory changes in sensorimotor coordination. However, stepwise exposure, when compared to single-step exposure in terms of the average magnitude of visual displacement over the exposure period, clearly enhances the rate of adaptation. It seems possible that the enhancement of adaptation to unusual patterns of sensory stimulation produced by incremental exposure reflects a general principle of sensorimotor function.

  1. Thermomechanical simulations and experimental validation for high speed incremental forming

    Science.gov (United States)

    Ambrogio, Giuseppina; Gagliardi, Francesco; Filice, Luigino; Romero, Natalia

    2016-10-01

    Incremental sheet forming (ISF) consists in deforming only a small region of the workpiece through a punch driven by a NC machine. The drawback of this process is its slowness. In this study, a high-speed variant has been investigated from both numerical and experimental points of view. The aim has been the design of a FEM model able to reproduce the material behavior during the high-speed process by defining a thermomechanical model. An experimental campaign has been performed on a CNC lathe at high speed to test process feasibility. The first results have shown that the material presents the same performance as in conventional-speed ISF and, in some cases, better material behavior due to the temperature increment. An accurate numerical simulation has been performed to investigate the material behavior during the high-speed process, substantially confirming the experimental evidence.

  2. Automobile sheet metal part production with incremental sheet forming

    Directory of Open Access Journals (Sweden)

    İsmail DURGUN

    2016-02-01

    Nowadays, the effect of global warming is increasing drastically, leading to increased interest in energy efficiency and sustainable production methods. As a result of these adverse conditions, national and international project platforms, OEMs (Original Equipment Manufacturers) and SMEs (Small and Mid-size Manufacturers) perform many studies or improve existing methodologies within the scope of advanced manufacturing techniques. In this study, the advanced manufacturing and sustainable production method "Incremental Sheet Metal Forming (ISF)" was used for the sheet metal forming process. A vehicle fender was manufactured with or without a die by using different toolpath strategies and die sets. At the end of the study, the results were investigated under the influence of the method and parameters used. Keywords: Incremental sheet metal, metal forming

  3. On kinematical minimum principles for rates and increments in plasticity

    International Nuclear Information System (INIS)

    Zouain, N.

    1984-01-01

    The optimization approach for elastoplastic analysis is discussed, showing that some minimum principles related to numerical methods can be derived by means of duality and penalization procedures. Three minimum principles for velocity and plastic multiplier rate fields are presented in the framework of perfect plasticity. The first one is the classical Greenberg formulation. The second one, due to Capurso, is developed here with different motivation, and modified by penalization of constraints so as to arrive at a third principle for rates. The counterparts of these optimization formulations in terms of discrete increments of displacements and plastic multipliers are discussed. The third one of these minimum principles for finite increments is recognized to be closely related to Maier's formulation of holonomic plasticity. (Author) [pt

  4. Observers for Systems with Nonlinearities Satisfying an Incremental Quadratic Inequality

    Science.gov (United States)

    Acikmese, Ahmet Behcet; Corless, Martin

    2004-01-01

    We consider the problem of state estimation for nonlinear time-varying systems whose nonlinearities satisfy an incremental quadratic inequality. These observer results unify earlier results in the literature and extend them to some additional classes of nonlinearities. Observers are presented which guarantee that the state estimation error exponentially converges to zero. Observer design involves solving linear matrix inequalities for the observer gain matrices. Results are illustrated by application to a simple model of an underwater vehicle.

  5. Fault-tolerant incremental diagnosis with limited historical data

    OpenAIRE

    Gillblad, Daniel; Holst, Anders; Steinert, Rebecca

    2006-01-01

    In many diagnosis situations it is desirable to perform a classification in an iterative and interactive manner. All relevant information may not be available initially and must be acquired manually or at a cost. The matter is often complicated by very limited amounts of knowledge and examples when a new system to be diagnosed is initially brought into use. Here, we will describe how to create an incremental classification system based on a statistical model that is trained from empirical dat...

  6. Diagnosis of small hepatocellular carcinoma by incremental dynamic CT

    International Nuclear Information System (INIS)

    Uchida, Masafumi; Kumabe, Tsutomu; Edamitsu, Osamu

    1993-01-01

    Thirty cases of pathologically confirmed small hepatocellular carcinoma were examined by Incremental Dynamic CT (ICT). ICT scanned the whole liver with single-breath-hold technique; therefore, effective early contrast enhancement could be obtained for diagnosis. Among the 30 tumors, 26 were detected. The detection rate was 87%. A high detection rate was obtained in tumors more than 20 mm in diameter. Twenty-two of 26 tumors could be diagnosed correctly. ICT examination was useful for detection of small hepatocellular carcinoma. (author)

  7. A parallel ILP algorithm that incorporates incremental batch learning

    OpenAIRE

    Nuno Fonseca; Rui Camacho; Fernado Silva

    2003-01-01

    In this paper we tackle the problems of efficiency and scalability faced by Inductive Logic Programming (ILP) systems. We propose the use of parallelism to improve efficiency and the use of incremental batch learning to address the scalability problem. We describe a novel parallel algorithm that incorporates into ILP the method of incremental batch learning. The theoretical complexity of the algorithm indicates that a linear speedup can be achieved.

  8. Public Key Infrastructure Increment 2 (PKI Inc 2)

    Science.gov (United States)

    2016-03-01

    across the Global Information Grid (GIG) and at rest. Using authoritative data, obtained via face-to-face identity proofing, PKI creates a credential ... operating on a network by provision of assured PKI-based credentials for any device on that network. PKI Increment One made significant ... provide assured/secure validation of revocation of an electronic/digital credential. 2. DoD PKI shall support assured revocation status requests of

  9. The intermetallic ThRh5: microstructure and enthalpy increments

    International Nuclear Information System (INIS)

    Banerjee, Aparna; Joshi, A.R.; Kaity, Santu; Mishra, R.; Roy, S.B.

    2013-01-01

    Actinide intermetallics are one of the most interesting and important series of compounds. The thermochemistry of these compounds plays a significant role in understanding the nature of bonding in alloys and nuclear fuel performance. In the present paper we report the synthesis and characterization of the thorium-based intermetallic compound ThRh5(s) by SEM/EDX techniques. The mechanical properties and the enthalpy increment of the alloy as a function of temperature have been measured. (author)

  10. Systematic Luby Transform codes as incremental redundancy scheme

    CSIR Research Space (South Africa)

    Grobler, TL

    2011-09-01


  11. Efficient Incremental Garbage Collection for Workstation/Server Database Systems

    OpenAIRE

    Amsaleg , Laurent; Gruber , Olivier; Franklin , Michael

    1994-01-01

    Projet RODIN; We describe an efficient server-based algorithm for garbage collecting object-oriented databases in a workstation/server environment. The algorithm is incremental and runs concurrently with client transactions; however, it does not hold any locks on data and does not require callbacks to clients. It is fault tolerant, but performs very little logging. The algorithm has been designed to be integrated into existing OODB systems, and therefore it works with standard implementation ...

  12. Incremental Support Vector Machine Framework for Visual Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yuichi Motai

    2007-01-01

    Motivated by the emerging requirements of surveillance networks, we present in this paper an incremental multiclassification support vector machine (SVM) technique as a new framework for action classification based on real-time multivideo collected by homogeneous sites. The technique is based on an adaptation of least square SVM (LS-SVM) formulation but extends beyond the static image-based learning of current SVM methodologies. In applying the technique, an initial supervised offline learning phase is followed by a visual behavior data acquisition and an online learning phase during which the cluster head performs an ensemble of model aggregations based on the sensor nodes inputs. The cluster head then selectively switches on designated sensor nodes for future incremental learning. Combining sensor data offers an improvement over single camera sensing especially when the latter has an occluded view of the target object. The optimization involved alleviates the burdens of power consumption and communication bandwidth requirements. The resulting misclassification error rate, the iterative error reduction rate of the proposed incremental learning, and the decision fusion technique prove its validity when applied to visual sensor networks. Furthermore, the enabled online learning allows an adaptive domain knowledge insertion and offers the advantage of reducing both the model training time and the information storage requirements of the overall system which makes it even more attractive for distributed sensor networks communication.

  13. Health level seven interoperability strategy: big data, incrementally structured.

    Science.gov (United States)

    Dolin, R H; Rogers, B; Jaffe, C

    2015-01-01

    Describe how the HL7 Clinical Document Architecture (CDA), a foundational standard in US Meaningful Use, contributes to a "big data, incrementally structured" interoperability strategy, whereby data structured incrementally gets large amounts of data flowing faster. We present cases showing how this approach is leveraged for big data analysis. To support the assertion that semi-structured narrative in CDA format can be a useful adjunct in an overall big data analytic approach, we present two case studies. The first assesses an organization's ability to generate clinical quality reports using coded data alone vs. coded data supplemented by CDA narrative. The second leverages CDA to construct a network model for referral management, from which additional observations can be gleaned. The first case shows that coded data supplemented by CDA narrative resulted in significant variances in calculated performance scores. In the second case, we found that the constructed network model enables the identification of differences in patient characteristics among different referral work flows. The CDA approach goes after data indirectly, by focusing first on the flow of narrative, which is then incrementally structured. A quantitative assessment of whether this approach will lead to a greater flow of data and ultimately a greater flow of structured data vs. other approaches is planned as a future exercise. Along with growing adoption of CDA, we are now seeing the big data community explore the standard, particularly given its potential to supply analytic engines with volumes of data previously not possible.

  14. Parallel Algorithm for Incremental Betweenness Centrality on Large Graphs

    KAUST Repository

    Jamour, Fuad Tarek

    2017-10-17

    Betweenness centrality quantifies the importance of nodes in a graph in many applications, including network analysis, community detection and identification of influential users. Typically, graphs in such applications evolve over time. Thus, the computation of betweenness centrality should be performed incrementally. This is challenging because updating even a single edge may trigger the computation of all-pairs shortest paths in the entire graph. Existing approaches cannot scale to large graphs: they either require excessive memory (i.e., quadratic to the size of the input graph) or perform unnecessary computations rendering them prohibitively slow. We propose iCentral; a novel incremental algorithm for computing betweenness centrality in evolving graphs. We decompose the graph into biconnected components and prove that processing can be localized within the affected components. iCentral is the first algorithm to support incremental betweenness centrality computation within a graph component. This is done efficiently, in linear space; consequently, iCentral scales to large graphs. We demonstrate with real datasets that the serial implementation of iCentral is up to 3.7 times faster than existing serial methods. Our parallel implementation that scales to large graphs, is an order of magnitude faster than the state-of-the-art parallel algorithm, while using an order of magnitude less computational resources.
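
    The localization result is easy to probe with a toy experiment (editor's sketch, not iCentral itself): insert an edge, recompute betweenness exactly, and check that only nodes inside the affected biconnected component changed. The graph and the inserted edge below are arbitrary:

        import networkx as nx

        def affected_component(G, u, v):
            """Insert edge (u, v) and return the biconnected component that absorbs it.
            Per the localization idea above, betweenness can change only inside it."""
            G.add_edge(u, v)
            for comp in nx.biconnected_components(G):
                if u in comp and v in comp:
                    return comp

        G = nx.karate_club_graph()
        bc_before = nx.betweenness_centrality(G)
        comp = affected_component(G, 0, 26)       # hypothetical edge insertion
        bc_after = nx.betweenness_centrality(G)   # full recompute, used only as a check
        changed = {v for v in G if abs(bc_after[v] - bc_before[v]) > 1e-12}
        print(changed <= comp)                    # True: changes stay inside the component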

  15. Conservation of wildlife populations: factoring in incremental disturbance.

    Science.gov (United States)

    Stewart, Abbie; Komers, Petr E

    2017-06-01

    Progressive anthropogenic disturbance can alter ecosystem organization potentially causing shifts from one stable state to another. This potential for ecosystem shifts must be considered when establishing targets and objectives for conservation. We ask whether a predator-prey system response to incremental anthropogenic disturbance might shift along a disturbance gradient and, if it does, whether any disturbance thresholds are evident for this system. Development of linear corridors in forested areas increases wolf predation effectiveness, while high density of development provides a safe-haven for their prey. If wolves limit moose population growth, then wolves and moose should respond inversely to land cover disturbance. Using general linear model analysis, we test how the rate of change in moose (Alces alces) density and wolf (Canis lupus) harvest density are influenced by the rate of change in land cover and proportion of land cover disturbed within a 300,000 km² area in the boreal forest of Alberta, Canada. Using logistic regression, we test how the direction of change in moose density is influenced by measures of land cover change. In response to incremental land cover disturbance, moose declines occurred where 43% of land cover was disturbed and wolf density declined. Wolves and moose appeared to respond inversely to incremental disturbance with the balance between moose decline and wolf increase shifting at about 43% of land cover disturbed. Conservation decisions require quantification of disturbance rates and their relationships to predator-prey systems because ecosystem responses to anthropogenic disturbance shift across disturbance gradients.

  16. Context-dependent incremental timing cells in the primate hippocampus.

    Science.gov (United States)

    Sakon, John J; Naya, Yuji; Wirth, Sylvia; Suzuki, Wendy A

    2014-12-23

    We examined timing-related signals in primate hippocampal cells as animals performed an object-place (OP) associative learning task. We found hippocampal cells with firing rates that incrementally increased or decreased across the memory delay interval of the task, which we refer to as incremental timing cells (ITCs). Three distinct categories of ITCs were identified. Agnostic ITCs did not distinguish between different trial types. The remaining two categories of cells signaled time and trial context together: One category of cells tracked time depending on the behavioral action required for a correct response (i.e., early vs. late release), whereas the other category of cells tracked time only for those trials cued with a specific OP combination. The context-sensitive ITCs were observed more often during sessions where behavioral learning was observed and exhibited reduced incremental firing on incorrect trials. Thus, single primate hippocampal cells signal information about trial timing, which can be linked with trial type/context in a learning-dependent manner.

  17. Demonstration/Validation of Incremental Sampling at Two Diverse Military Ranges and Development of an Incremental Sampling Tool

    Science.gov (United States)

    2010-06-01

    Sampling (MIS)? • Technique of combining many increments of soil from a number of points within an exposure area • Developed by Enviro Stat (Trademarked) ... Demonstrating a reliable soil sampling strategy to accurately characterize contaminant concentrations in spatially extreme and heterogeneous ... into a set of decision (exposure) units • One or several discrete or small-scale composite soil samples collected to represent each decision unit

  18. Evaluation of incremental reactivity and its uncertainty in Southern California.

    Science.gov (United States)

    Martien, Philip T; Harley, Robert A; Milford, Jana B; Russell, Armistead G

    2003-04-15

    The incremental reactivity (IR) and relative incremental reactivity (RIR) of carbon monoxide and 30 individual volatile organic compounds (VOC) were estimated for the South Coast Air Basin using two photochemical air quality models: a 3-D, grid-based model and a vertically resolved trajectory model. Both models include an extended version of the SAPRC99 chemical mechanism. For the 3-D modeling, the decoupled direct method (DDM-3D) was used to assess reactivities. The trajectory model was applied to estimate uncertainties in reactivities due to uncertainties in chemical rate parameters, deposition parameters, and emission rates using Monte Carlo analysis with Latin hypercube sampling. For most VOC, RIRs were found to be consistent in rankings with those produced by Carter using a box model. However, 3-D simulations show that coastal regions, upwind of most of the emissions, have comparatively low IR but higher RIR than predicted by box models for C4-C5 alkenes and carbonyls that initiate the production of HOx radicals. Biogenic VOC emissions were found to have a lower RIR than predicted by box model estimates, because emissions of these VOC were mostly downwind of the areas of primary ozone production. Uncertainties in RIR of individual VOC were found to be dominated by uncertainties in the rate parameters of their primary oxidation reactions. The coefficient of variation (COV) of most RIR values ranged from 20% to 30%, whereas the COV of absolute incremental reactivity ranged from about 30% to 40%. In general, uncertainty and variability both decreased when relative rather than absolute reactivity metrics were used.
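
    For readers unfamiliar with the metrics: incremental reactivity (IR) is the change in peak ozone per unit change in emissions of a given VOC, and relative incremental reactivity (RIR) normalizes IR by the reactivity of the base mixture. A toy computation (editor's sketch; every number below is invented):

        # All values are invented for illustration only.
        base_o3 = 120.0                                      # base-case peak ozone, ppb
        dE = 1.0                                             # emission perturbation per VOC, kt/day
        perturbed_o3 = {"ethene": 121.8, "toluene": 120.9}   # peak ozone after adding dE of VOC i

        IR = {voc: (o3 - base_o3) / dE for voc, o3 in perturbed_o3.items()}  # ppb per kt/day
        base_mix_IR = 0.9     # reactivity of the whole base VOC mixture (invented)
        RIR = {voc: ir / base_mix_IR for voc, ir in IR.items()}              # dimensionless
        print(IR, RIR)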

  19. Product Quality Modelling Based on Incremental Support Vector Machine

    International Nuclear Information System (INIS)

    Wang, J; Zhang, W; Qin, B; Shi, W

    2012-01-01

    Incremental Support Vector Machine (ISVM) is a new learning method developed in recent years on the foundations of statistical learning theory. It is suitable for problems with sequentially arriving field data and has been widely used for product quality prediction and production process optimization. However, traditional ISVM learning does not consider the quality of the incremental data, which may contain noise and redundant data; this can affect the learning speed and accuracy to a great extent. In order to improve SVM training speed and accuracy, a modified incremental support vector machine (MISVM) is proposed in this paper. Firstly, the margin vectors are extracted according to the Karush-Kuhn-Tucker (KKT) condition; then the distance from the margin vectors to the final decision hyperplane is calculated to evaluate the importance of the margin vectors, and margin vectors are removed when their distance exceeds the specified value; finally, the original SVs and the remaining margin vectors are used to update the SVM. The proposed MISVM can not only eliminate unimportant samples such as noise samples, but also preserve the important ones. The MISVM has been tested on two public datasets and one field dataset of zinc coating weight in strip hot-dip galvanizing, and the results show that the proposed method can improve the prediction accuracy and the training speed effectively. Furthermore, it can provide the necessary decision support and analysis tools for automatic control of product quality, and can also be extended to other process industries, such as chemical and manufacturing processes.
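
    An editor's sketch of the pruning rule described above — keep only margin vectors close to the current hyperplane before the incremental update (the cutoff and the stand-in model below are assumed, not the paper's values):

        import numpy as np

        def select_margin_vectors(X, y, w, b, cutoff):
            """Keep samples whose distance to the hyperplane w.x + b = 0 is within
            `cutoff`; distant (likely redundant or noisy) candidates are dropped."""
            dist = np.abs(X @ w + b) / np.linalg.norm(w)
            keep = dist <= cutoff
            return X[keep], y[keep]

        rng = np.random.default_rng(0)
        X = rng.standard_normal((200, 5))
        y = np.sign(X[:, 0] + 0.1 * rng.standard_normal(200))
        w, b = np.array([1.0, 0.0, 0.0, 0.0, 0.0]), 0.0   # stand-in for the current SVM
        X_keep, y_keep = select_margin_vectors(X, y, w, b, cutoff=1.0)
        print(len(X_keep), "of", len(X), "samples retained for the incremental update")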

  20. Incremental Innovation and Competitive Pressure in the Presence of Discrete Innovation

    DEFF Research Database (Denmark)

    Ghosh, Arghya; Kato, Takao; Morita, Hodaka

    2017-01-01

    Technical progress consists of improvements made upon the existing technology (incremental innovation) and innovative activities aiming at entirely new technology (discrete innovation). Incremental innovation is often of limited relevance to the new technology invented by successful discrete...

  1. Complex Incremental Product Innovation in Established Service Firms: A Micro Institutional Perspective

    NARCIS (Netherlands)

    P.A.M. Vermeulen (Patrick); F.A.J. van den Bosch (Frans); H.W. Volberda (Henk)

    2006-01-01

    textabstractMany product innovation studies have described key determinants that should lead to successful incremental product innovation. Despite numerous studies suggesting how incremental product innovation should be successfully undertaken, many firms still struggle with this type of innovation.

  2. Complex Incremental Product Innovation in Established Service Firms: A Micro Institutional Perspective

    NARCIS (Netherlands)

    P.A.M. Vermeulen (Patrick); F.A.J. van den Bosch (Frans); H.W. Volberda (Henk)

    2007-01-01

    textabstractMany product innovation studies have described key determinants that should lead to successful incremental product innovation. Despite numerous studies suggesting how incremental product innovation should be successfully undertaken, many firms still struggle with this type of innovation.

  3. An Approach to Incremental Design of Distributed Embedded Systems

    DEFF Research Database (Denmark)

    Pop, Paul; Eles, Petru; Pop, Traian

    2001-01-01

    In this paper we present an approach to incremental design of distributed embedded systems for hard real-time applications. We start from an already existing system running a set of applications and the design problem is to implement new functionality on this system. Thus, we propose mapping...... strategies of functionality so that the already running functionality is not disturbed and there is a good chance that, later, new functionality can easily be mapped on the resulting system. The mapping and scheduling for hard real-time embedded systems are considered in the context of a realistic communication...

  4. From incremental to fundamental substitution in chemical alternatives assessment

    DEFF Research Database (Denmark)

    Fantke, Peter; Weber, Roland; Scheringer, Martin

    2015-01-01

    to similarity in chemical structures and, hence, similar hazard profiles between phase-out and substitute chemicals, leading to a rather incremental than fundamental substitution. A hampered phase-out process, the lack of implementing Green Chemistry principles in chemicals design, and lack of Sustainable...... an integrated approach of all stakeholders involved toward more fundamental and function-based substitution by greener and more sustainable alternatives. Our recommendations finally constitute a starting point for identifying further research needs and for improving current alternatives assessment practice....

  5. Automating the Incremental Evolution of Controllers for Physical Robots

    DEFF Research Database (Denmark)

    Faina, Andres; Jacobsen, Lars Toft; Risi, Sebastian

    2017-01-01

    the evolution of digital objects.…” The work presented here investigates how fully autonomous evolution of robot controllers can be realized in hardware, using an industrial robot and a marker-based computer vision system. In particular, this article presents an approach to automate the reconfiguration...... of the test environment and shows that it is possible, for the first time, to incrementally evolve a neural robot controller for different obstacle avoidance tasks with no human intervention. Importantly, the system offers a high level of robustness and precision that could potentially open up the range...

  6. Transferring the Incremental Capacity Analysis to Lithium-Sulfur Batteries

    DEFF Research Database (Denmark)

    Knap, Vaclav; Kalogiannis, Theodoros; Purkayastha, Rajlakshmi

    2017-01-01

    In order to investigate the battery degradation and to estimate their health, various techniques can be applied. One of them, which is widely used for Lithium-ion batteries, is the incremental capacity analysis (ICA). In this work, we apply the ICA to Lithium-Sulfur batteries, which differ in many...... aspects from Lithium-ion batteries and possess unique behavior. One of the challenges of applying the ICA to Lithium-Sulfur batteries is the representation of the IC curves, as their voltage profiles are often non-monotonic, resulting in more complex IC curves. The ICA is at first applied to charge...
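
    The core IC computation — differentiating capacity with respect to voltage — is compact (editor's sketch on a synthetic discharge curve; real Li-S profiles are noisier and non-monotonic, which is exactly the representation challenge mentioned above):

        import numpy as np

        # Invented discharge data: capacity Q (Ah) sampled against voltage V (V).
        V = np.linspace(2.45, 1.95, 200)
        Q = 3.0 / (1.0 + np.exp((V - 2.15) / 0.03))   # synthetic plateau-like curve

        dQdV = np.gradient(Q, V)                      # incremental capacity, dQ/dV
        peak_v = V[np.argmax(np.abs(dQdV))]           # IC peaks track plateau locations
        print(f"IC peak near {peak_v:.2f} V")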

  7. Failure mechanisms in single-point incremental forming of metals

    DEFF Research Database (Denmark)

    Silva, Maria B.; Nielsen, Peter Søe; Bay, Niels

    2011-01-01

    Recent years have seen the development of two different views on how failure develops in single-point incremental forming (SPIF). Today, researchers are split between those claiming that fracture is always preceded by necking and those considering that fracture occurs with suppression of necking. Each...... on formability limits and development of fracture. The unified view conciliates the aforementioned different explanations on the role of necking in fracture and is consistent with the experimental observations that have been reported in the past years. The work is performed on aluminium AA1050-H111 sheets...

  8. Single Point Incremental Forming using a Dummy Sheet

    DEFF Research Database (Denmark)

    Skjødt, Martin; Silva, Beatriz; Bay, Niels

    2007-01-01

    A new version of single point incremental forming (SPIF) is presented. This version includes a dummy sheet on top of the work piece, thus forming two sheets instead of one. The dummy sheet, which is in contact with the rotating tool pin, is discarded after forming. The new set-up influences....... The possible influence of friction between the two sheets is furthermore investigated. The results show that the use of a dummy sheet reduces wear of the work piece to almost zero, but also causes a decrease in formability. Bulging of the planar sides of the pyramid is reduced and surface roughness...

  9. Switch-mode High Voltage Drivers for Dielectric Electro Active Polymer (DEAP) Incremental Actuators

    DEFF Research Database (Denmark)

    Thummala, Prasanth

    voltage DC-DC converters for driving the DEAP based incremental actuators. The DEAP incremental actuator technology has the potential to be used in various industries, e.g., automotive, space and medicine. The DEAP incremental actuator consists of three electrically isolated and mechanically connected...

  10. 21 CFR 874.1070 - Short increment sensitivity index (SISI) adapter.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Short increment sensitivity index (SISI) adapter. 874.1070 Section 874.1070 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN... increment sensitivity index (SISI) adapter. (a) Identification. A short increment sensitivity index (SISI...

  11. Incremental Learning of Skill Collections based on Intrinsic Motivation

    Directory of Open Access Journals (Sweden)

    Jan Hendrik Metzen

    2013-07-01

    Life-long learning of reusable, versatile skills is a key prerequisite for embodied agents that act in a complex, dynamic environment and are faced with different tasks over their lifetime. We address the question of how an agent can learn useful skills efficiently during a developmental period, i.e., when no task is imposed on him and no external reward signal is provided. Learning of skills in a developmental period needs to be incremental and self-motivated. We propose a new incremental, task-independent skill discovery approach that is suited for continuous domains. Furthermore, the agent learns specific skills based on intrinsic motivation mechanisms that determine on which skills learning is focused at a given point in time. We evaluate the approach in a reinforcement learning setup in two continuous domains with complex dynamics. We show that an intrinsically motivated, skill learning agent outperforms an agent which learns task solutions from scratch. Furthermore, we compare different intrinsic motivation mechanisms and how efficiently they make use of the agent's developmental period.

  12. Optimal Output of Distributed Generation Based On Complex Power Increment

    Science.gov (United States)

    Wu, D.; Bao, H.

    2017-12-01

    In order to meet the growing demand for electricity and improve the cleanliness of power generation, new energy generation, represented by wind and photovoltaic power, has been widely adopted. New energy generation is connected to the distribution network in the form of distributed generation and is consumed by local loads. However, as the scale of distributed generation connected to the network increases, optimizing its power output becomes more and more important and needs further study. Classical optimization methods often use the extended sensitivity method to obtain the relationship between different generators, but ignoring the coupling parameters between nodes makes the results inaccurate; heuristic algorithms also have defects such as slow calculation speed and uncertain outcomes. This article proposes a method called complex power increment; the essence of this method is the analysis of the power grid under steady power flow. From this analysis we obtain a complex scaling function equation between the power supplies, whose coefficients are based on the impedance parameters of the network, so the relation of the variables to the coefficients is described more precisely. Thus, the method can accurately describe the power increment relationship, and can obtain the power optimization scheme more accurately and quickly than the extended sensitivity method and heuristic methods.

  13. Phase retrieval via incremental truncated amplitude flow algorithm

    Science.gov (United States)

    Zhang, Quanbing; Wang, Zhifa; Wang, Linjie; Cheng, Shichao

    2017-10-01

    This paper considers the phase retrieval problem of recovering the unknown signal from the given quadratic measurements. A phase retrieval algorithm based on Incremental Truncated Amplitude Flow (ITAF) which combines the ITWF algorithm and the TAF algorithm is proposed. The proposed ITAF algorithm enhances the initialization by performing both of the truncation methods used in ITWF and TAF respectively, and improves the performance in the gradient stage by applying the incremental method proposed in ITWF to the loop stage of TAF. Moreover, the original sampling vector and measurements are preprocessed before initialization according to the variance of the sensing matrix. Simulation experiments verified the feasibility and validity of the proposed ITAF algorithm. The experimental results show that it can obtain higher success rate and faster convergence speed compared with other algorithms. Especially, for the noiseless random Gaussian signals, ITAF can recover any real-valued signal accurately from the magnitude measurements whose number is about 2.5 times of the signal length, which is close to the theoretic limit (about 2 times of the signal length). And it usually converges to the optimal solution within 20 iterations which is much less than the state-of-the-art algorithms.
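
    An editor's sketch of the plain amplitude-flow gradient stage for magnitude-only measurements (no truncation, no incremental sampling, and a random initialization instead of the careful one a real solver needs):

        import numpy as np

        rng = np.random.default_rng(1)
        n = 64
        m = int(2.5 * n)                  # ~2.5x measurements, as cited above
        A = rng.standard_normal((m, n))
        x_true = rng.standard_normal(n)
        y = np.abs(A @ x_true)            # magnitude-only measurements

        x = rng.standard_normal(n)        # a real run would use a spectral initialization
        mu = 0.6 / m
        for _ in range(2000):
            Ax = A @ x
            # gradient of 0.5 * sum_i (|a_i.x| - y_i)^2
            x -= mu * (A.T @ (Ax - y * np.sign(Ax)))

        err = min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true)) / np.linalg.norm(x_true)
        print(f"relative error (up to sign): {err:.2e}")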

  14. Adaptive Incremental Genetic Algorithm for Task Scheduling in Cloud Environments

    Directory of Open Access Journals (Sweden)

    Kairong Duan

    2018-05-01

    Cloud computing is a new commercial model that enables customers to acquire large amounts of virtual resources on demand. Resources including hardware and software can be delivered as services and measured by specific usage of storage, processing, bandwidth, etc. In Cloud computing, task scheduling is a process of mapping cloud tasks to Virtual Machines (VMs). When binding the tasks to VMs, the scheduling strategy has an important influence on the efficiency of the datacenter and related energy consumption. Although many traditional scheduling algorithms have been applied in various platforms, they may not work efficiently due to the large number of user requests, the variety of computation resources and the complexity of the Cloud environment. In this paper, we tackle the task scheduling problem which aims to minimize makespan by Genetic Algorithm (GA). We propose an incremental GA which has adaptive probabilities of crossover and mutation. The mutation and crossover rates change according to generations and also vary between individuals. Large numbers of tasks are randomly generated to simulate various scales of the task scheduling problem in the Cloud environment. Based on the instance types of Amazon EC2, we implemented virtual machines with different computing capacity on CloudSim. We compared the performance of the adaptive incremental GA with that of Standard GA, Min-Min, Max-Min, Simulated Annealing and Artificial Bee Colony Algorithm in finding the optimal scheme. Experimental results show that the proposed algorithm can achieve feasible solutions which have acceptable makespan with less computation time.
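
    An editor's sketch of adaptive operator rates in the spirit described above (the exact formulas are assumed, not taken from the paper): rates vary per individual, disturbing poor solutions more, and anneal over generations:

        import numpy as np

        def adaptive_rates(fitness, generation, max_gen,
                           pc_range=(0.6, 0.9), pm_range=(0.01, 0.2)):
            """Per-individual crossover/mutation probabilities: fitter individuals
            are disturbed less, and both rates decay as generations progress.
            Assumes larger fitness is better."""
            f = np.asarray(fitness, dtype=float)
            rel = (f - f.min()) / (np.ptp(f) + 1e-12)   # 0 = worst, 1 = best
            decay = 1.0 - generation / max_gen          # anneal toward the end of the run
            pc = pc_range[0] + (pc_range[1] - pc_range[0]) * (1.0 - rel) * decay
            pm = pm_range[0] + (pm_range[1] - pm_range[0]) * (1.0 - rel) * decay
            return pc, pm

        pc, pm = adaptive_rates(fitness=[3.0, 5.0, 9.0], generation=10, max_gen=100)
        print(pc, pm)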

  15. Incremental support vector machines for fast reliable image recognition

    International Nuclear Information System (INIS)

    Makili, L.; Vega, J.; Dormido-Canto, S.

    2013-01-01

    Highlights: ► A conformal predictor using SVM as the underlying algorithm was implemented. ► It was applied to image recognition in the TJ–II's Thomson Scattering Diagnostic. ► To improve time efficiency an approach to incremental SVM training has been used. ► Accuracy is similar to the one reached when standard SVM is used. ► Computational time saving is significant for large training sets. -- Abstract: This paper addresses the reliable classification of images in a 5-class problem. To this end, an automatic recognition system, based on conformal predictors and using Support Vector Machines (SVM) as the underlying algorithm has been developed and applied to the recognition of images in the Thomson Scattering Diagnostic of the TJ–II fusion device. Using such conformal predictor based classifier is a computationally intensive task since it implies to train several SVM models to classify a single example and to perform this training from scratch takes a significant amount of time. In order to improve the classification time efficiency, an approach to the incremental training of SVM has been used as the underlying algorithm. Experimental results show that the overall performance of the new classifier is high, comparable to the one corresponding to the use of standard SVM as the underlying algorithm and there is a significant improvement in time efficiency
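
    For orientation, a bare-bones conformal classifier (editor's sketch; distance to the class mean stands in for the paper's SVM-based nonconformity score, and the incremental training is omitted):

        import numpy as np

        def p_value(calibration_scores, test_score):
            """Conformal p-value: fraction of calibration nonconformity scores
            at least as large as the test score (plus-one smoothing)."""
            return (np.sum(calibration_scores >= test_score) + 1) / (len(calibration_scores) + 1)

        # Invented 3-class data; nonconformity = distance to the class mean.
        rng = np.random.default_rng(0)
        train = {c: rng.normal(3.0 * c, 1.0, size=(40, 2)) for c in range(3)}
        means = {c: X.mean(axis=0) for c, X in train.items()}
        calib = {c: np.linalg.norm(X - means[c], axis=1) for c, X in train.items()}

        x = np.array([3.1, 2.8])
        pvals = {c: p_value(calib[c], np.linalg.norm(x - means[c])) for c in range(3)}
        print(pvals)   # predicted label set at significance eps: {c : pvals[c] > eps}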

  16. An Incremental Weighted Least Squares Approach to Surface Light Fields

    Science.gov (United States)

    Coombe, Greg; Lastra, Anselmo

    An Image-Based Rendering (IBR) approach to appearance modelling enables the capture of a wide variety of real physical surfaces with complex reflectance behaviour. The challenges with this approach are handling the large amount of data, rendering the data efficiently, and previewing the model as it is being constructed. In this paper, we introduce the Incremental Weighted Least Squares approach to the representation and rendering of spatially and directionally varying illumination. Each surface patch consists of a set of Weighted Least Squares (WLS) node centers, which are low-degree polynomial representations of the anisotropic exitant radiance. During rendering, the representations are combined in a non-linear fashion to generate a full reconstruction of the exitant radiance. The rendering algorithm is fast, efficient, and implemented entirely on the GPU. The construction algorithm is incremental, which means that images are processed as they arrive instead of in the traditional batch fashion. This human-in-the-loop process enables the user to preview the model as it is being constructed and to adapt to over-sampling and under-sampling of the surface appearance.
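
    An editor's sketch of the incremental part: one WLS node accumulates weighted normal equations sample by sample, so the polynomial fit can be re-solved as new images arrive without storing them (the basis and weights below are simplified stand-ins for the paper's model):

        import numpy as np

        class WLSNode:
            """One node center: accumulate weighted normal equations incrementally."""
            def __init__(self, dim):
                self.AtA = np.zeros((dim, dim))
                self.Atb = np.zeros(dim)

            def add_sample(self, phi, value, weight):
                # phi: basis vector (e.g., low-degree polynomial terms of the view direction)
                self.AtA += weight * np.outer(phi, phi)
                self.Atb += weight * phi * value

            def coefficients(self):
                # small ridge term keeps the solve stable before enough samples arrive
                return np.linalg.solve(self.AtA + 1e-9 * np.eye(len(self.Atb)), self.Atb)

        node = WLSNode(dim=3)
        for theta, radiance in [(0.1, 0.8), (0.5, 0.9), (0.9, 0.7)]:   # invented observations
            node.add_sample(np.array([1.0, theta, theta**2]), radiance, weight=1.0)
        print(node.coefficients())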

  17. STS-102 Expedition 2 Increment and Science Briefing

    Science.gov (United States)

    2001-01-01

    Merri Sanchez, Expedition 2 Increment Manager, John Uri, Increment Scientist, and Lybrease Woodard, Lead Payload Operations Director, give an overview of the upcoming activities and objectives of the Expedition 2's (E2's) mission in this prelaunch press conference. Ms. Sanchez describes the crew rotation of Expedition 1 to E2, the timeline E2 will follow during their stay on the International Space Station (ISS), and the various flights going to the ISS and what each will bring to ISS. Mr. Uri gives details on the on-board experiments that will take place on the ISS in the fields of microgravity research, commercial, earth, life, and space sciences (such as radiation characterization, H-reflex, colloids formation and interaction, protein crystal growth, plant growth, fermentation in microgravity, etc.). He also gives details on the scientific facilities to be used (laboratory racks and equipment such as the human torso facsimile or 'phantom torso'). Ms. Woodard gives an overview of Marshall Flight Center's role in the mission. Computerized simulations show the installation of the Space Station Remote Manipulator System (SSRMS) onto the ISS and the installation of the airlock using SSRMS. Live footage shows the interior of the ISS, including crew living quarters, the Progress Module, and the Destiny Laboratory. The three then answer questions from the press.

  18. Efficient incremental relaying for packet transmission over fading channels

    KAUST Repository

    Fareed, Muhammad Mehboob

    2014-07-01

    In this paper, we propose a novel relaying scheme for packet transmission over fading channels, which improves the spectral efficiency of cooperative diversity systems by utilizing limited feedback from the destination. Our scheme capitalizes on the fact that relaying is only required when direct transmission suffers deep fading. We calculate the packet error rate for the proposed efficient incremental relaying (EIR) scheme with both amplify-and-forward and decode-and-forward relaying. We compare the performance of the EIR scheme with the threshold-based incremental relaying (TIR) scheme. It is shown that the efficiency of the TIR scheme is better for lower values of the threshold. However, the efficiency of the TIR scheme for higher values of the threshold is outperformed by the EIR. In addition, three new threshold-based adaptive EIR schemes are devised to further improve the efficiency of the EIR scheme. We calculate the packet error rate and the efficiency of these new schemes to provide analytical insight. © 2014 IEEE.

  19. Incremental learning of concept drift in nonstationary environments.

    Science.gov (United States)

    Elwell, Ryan; Polikar, Robi

    2011-10-01

    We introduce an ensemble of classifiers-based approach for incremental learning of concept drift, characterized by nonstationary environments (NSEs), where the underlying data distributions change over time. The proposed algorithm, named Learn(++). NSE, learns from consecutive batches of data without making any assumptions on the nature or rate of drift; it can learn from such environments that experience constant or variable rate of drift, addition or deletion of concept classes, as well as cyclical drift. The algorithm learns incrementally, as other members of the Learn(++) family of algorithms, that is, without requiring access to previously seen data. Learn(++). NSE trains one new classifier for each batch of data it receives, and combines these classifiers using a dynamically weighted majority voting. The novelty of the approach is in determining the voting weights, based on each classifier's time-adjusted accuracy on current and past environments. This approach allows the algorithm to recognize, and act accordingly, to the changes in underlying data distributions, as well as to a possible reoccurrence of an earlier distribution. We evaluate the algorithm on several synthetic datasets designed to simulate a variety of nonstationary environments, as well as a real-world weather prediction dataset. Comparisons with several other approaches are also included. Results indicate that Learn(++). NSE can track the changing environments very closely, regardless of the type of concept drift. To allow future use, comparison and benchmarking by interested researchers, we also release our data used in this paper. © 2011 IEEE
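
    An editor's sketch of the dynamically weighted majority vote (the error-to-weight mapping is the standard log-odds form; the sigmoidal time-averaging of past errors that gives Learn(++).NSE its time adjustment is omitted):

        import numpy as np

        def nse_style_vote(classifiers, recent_errors, X):
            """Combine an ensemble with dynamically weighted majority voting.
            recent_errors[k] is classifier k's error on the current environment."""
            eps = np.clip(np.asarray(recent_errors, dtype=float), 1e-6, 0.499)
            weights = np.log((1.0 - eps) / eps)      # low error -> high weight
            preds = np.array([clf.predict(X) for clf in classifiers])   # shape (k, n)
            classes = np.unique(preds)
            scores = np.array([[weights[preds[:, i] == c].sum() for c in classes]
                               for i in range(preds.shape[1])])
            return classes[np.argmax(scores, axis=1)]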

  20. Incremental learning of skill collections based on intrinsic motivation

    Science.gov (United States)

    Metzen, Jan H.; Kirchner, Frank

    2013-01-01

    Life-long learning of reusable, versatile skills is a key prerequisite for embodied agents that act in a complex, dynamic environment and are faced with different tasks over their lifetime. We address the question of how an agent can learn useful skills efficiently during a developmental period, i.e., when no task is imposed on him and no external reward signal is provided. Learning of skills in a developmental period needs to be incremental and self-motivated. We propose a new incremental, task-independent skill discovery approach that is suited for continuous domains. Furthermore, the agent learns specific skills based on intrinsic motivation mechanisms that determine on which skills learning is focused at a given point in time. We evaluate the approach in a reinforcement learning setup in two continuous domains with complex dynamics. We show that an intrinsically motivated, skill learning agent outperforms an agent which learns task solutions from scratch. Furthermore, we compare different intrinsic motivation mechanisms and how efficiently they make use of the agent's developmental period. PMID:23898265

  1. Incremental support vector machines for fast reliable image recognition

    Energy Technology Data Exchange (ETDEWEB)

    Makili, L., E-mail: makili_le@yahoo.com [Instituto Superior Politécnico da Universidade Katyavala Bwila, Benguela (Angola); Vega, J. [Asociación EURATOM/CIEMAT para Fusión, Madrid (Spain); Dormido-Canto, S. [Dpto. Informática y Automática – UNED, Madrid (Spain)

    2013-10-15

    Highlights: ► A conformal predictor using SVM as the underlying algorithm was implemented. ► It was applied to image recognition in the TJ–II's Thomson Scattering Diagnostic. ► To improve time efficiency an approach to incremental SVM training has been used. ► Accuracy is similar to the one reached when standard SVM is used. ► Computational time saving is significant for large training sets. -- Abstract: This paper addresses the reliable classification of images in a 5-class problem. To this end, an automatic recognition system, based on conformal predictors and using Support Vector Machines (SVM) as the underlying algorithm has been developed and applied to the recognition of images in the Thomson Scattering Diagnostic of the TJ–II fusion device. Using such conformal predictor based classifier is a computationally intensive task since it implies to train several SVM models to classify a single example and to perform this training from scratch takes a significant amount of time. In order to improve the classification time efficiency, an approach to the incremental training of SVM has been used as the underlying algorithm. Experimental results show that the overall performance of the new classifier is high, comparable to the one corresponding to the use of standard SVM as the underlying algorithm and there is a significant improvement in time efficiency.

  2. Incremental first pass technique to measure left ventricular ejection fraction

    International Nuclear Information System (INIS)

    Kocak, R.; Gulliford, P.; Hoggard, C.; Critchley, M.

    1980-01-01

    An incremental first pass technique was devised to assess the acute effects of any drug on left ventricular ejection fraction (LVEF), with or without a physiological stress. In particular, the effects of the vasodilator isosorbide dinitrate on LVEF before and after exercise were studied in 11 patients who had suffered cardiac failure. This was achieved by recording the passage of sup(99m)Tc pertechnetate through the heart at each stage of the study using a gamma camera computer system. Consistent results across four consecutive first pass measurements without exercise or drug in normal subjects illustrated the reproducibility of the technique. There was no significant difference between LVEF values obtained at rest and exercise before or after oral isosorbide dinitrate, with the exception of one patient with gross mitral regurgitation. The advantages of the incremental first pass technique are that the patient need not be in sinus rhythm, the effects of physiological intervention may be studied, and tests may be repeated at various intervals during long-term follow-up of patients. A disadvantage of the method is the limitation in the number of sequential measurements which can be carried out, due to the amount of radioactivity injected. (U.K.)

  3. Identifying the Academic Rising Stars via Pairwise Citation Increment Ranking

    KAUST Repository

    Zhang, Chuxu

    2017-08-02

    Predicting fast-rising young researchers (the Academic Rising Stars) provides useful guidance to the research community, e.g., offering universities competitive candidates for young faculty hiring, as they are expected to have successful academic careers. In this work, given a set of young researchers who have recently published their first first-author paper, we solve the problem of how to effectively predict the top k% of researchers who achieve the highest citation increment in Δt years. We explore a series of factors that can drive an author to be fast-rising and design a novel pairwise citation increment ranking (PCIR) method that leverages those factors to predict the academic rising stars. Experimental results on the large ArnetMiner dataset with over 1.7 million authors demonstrate the effectiveness of PCIR. Specifically, it outperforms all given benchmark methods, with over 8% average improvement. Further analysis demonstrates that temporal features are the best indicators for rising star prediction, while venue features are less relevant.

  4. Incremental Volumetric Remapping Method: Analysis and Error Evaluation

    International Nuclear Information System (INIS)

    Baptista, A. J.; Oliveira, M. C.; Rodrigues, D. M.; Menezes, L. F.; Alves, J. L.

    2007-01-01

    In this paper the error associated with the remapping problem is analyzed. A range of numerical results that assess the performance of three different remapping strategies, applied to FE meshes typically used in sheet metal forming simulation, is evaluated. One of the selected strategies is the previously presented Incremental Volumetric Remapping method (IVR), which was implemented in the in-house code DD3TRIM. The IVR method rests on the premise that the state variables at all points associated with a Gauss volume of a given element are equal to the state variable quantities at the corresponding Gauss point. Hence, given a typical remapping procedure between a donor and a target mesh, the variables to be associated with a target Gauss volume (and point) are determined by a weighted average. The weight function is the percentage of the Gauss volume of each donor element that is located inside the target Gauss volume. The calculation of the intersecting volumes between the donor and target Gauss volumes is performed incrementally, for each target Gauss volume, by means of a discrete approach. The other two remapping strategies selected are based on the interpolation/extrapolation of variables using the finite element shape functions or moving least squares interpolants. The performance of the three remapping strategies is assessed with two tests. The first remapping test was taken from the literature. It consists in successively remapping a rotating symmetrical mesh, throughout N increments, over an angular span of 90 deg. The second remapping error evaluation test consists of remapping an irregular element shape target mesh from a given regular element shape donor mesh and proceeding with the inverse operation. In this second test the computational effort is also measured. The results showed that the error level associated with IVR can be very low and with a stable evolution along the number of remapping procedures when compared with the

  5. Rapid Prototyping by Single Point Incremental Forming of Sheet Metal

    DEFF Research Database (Denmark)

    Skjødt, Martin

    2008-01-01

    The process is incremental forming, since plastic deformation takes place in a small local zone underneath the forming tool, i.e. the sheet is formed as a summation of the movements of the local plastic zone. The process is slow and therefore only suited for prototypes or small batch production. On the other...... in the plastic zone. Using these it is demonstrated that the growth rate of accumulated damage in SPIF is small compared to conventional sheet forming processes. This, combined with an explanation of why necking is suppressed, leads to a new theory stating that SPIF is limited by fracture and not necking. The theory...... SPIF. A multi-stage strategy is presented which allows forming of a cup with vertical sides over about half of the depth. It is demonstrated that this results in strain paths which are far from straight, but strains are still limited by a straight fracture line in the principal strain space. The multi...

  6. Scalable Prediction of Energy Consumption using Incremental Time Series Clustering

    Energy Technology Data Exchange (ETDEWEB)

    Simmhan, Yogesh; Noor, Muhammad Usman

    2013-10-09

    Time series datasets are a canonical form of high-velocity Big Data, often generated by pervasive sensors such as those found in smart infrastructure. Performing predictive analytics on time series data can be computationally complex and requires approximation techniques. In this paper, we motivate this problem using a real application from the smart grid domain. We propose an incremental clustering technique, along with a novel affinity score for determining cluster similarity, which together help reduce the prediction error for cumulative time series within a cluster. We evaluate this technique, along with optimizations, using real datasets from smart meters totaling ~700,000 data points, and show the efficacy of our techniques in reducing the prediction error of time series data within polynomial time.
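    The core loop of such a technique can be sketched compactly. In the sketch below, plain cosine similarity stands in for the paper's own affinity score, and the threshold, running-mean centroids and single-pass assignment are illustrative assumptions.

      import numpy as np

      def affinity(ts, centroid):
          # Cosine similarity as a stand-in for the paper's affinity score.
          return float(np.dot(ts, centroid) /
                       (np.linalg.norm(ts) * np.linalg.norm(centroid)))

      def incremental_cluster(stream, threshold=0.95):
          """One-pass incremental clustering of fixed-length time series."""
          centroids, counts, assignment = [], [], []
          for ts in stream:
              ts = np.asarray(ts, dtype=float)
              scores = [affinity(ts, c) for c in centroids]
              if not scores or max(scores) < threshold:
                  centroids.append(ts.copy())          # seed a new cluster
                  counts.append(1)
                  assignment.append(len(centroids) - 1)
              else:
                  best = int(np.argmax(scores))
                  counts[best] += 1
                  centroids[best] += (ts - centroids[best]) / counts[best]  # running mean
                  assignment.append(best)
          return centroids, assignment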

  7. Incremental and developmental perspectives for general-purpose learning systems

    Directory of Open Access Journals (Sweden)

    Fernando Martínez-Plumed

    2017-02-01

    Full Text Available The stupefying success of Artificial Intelligence (AI) for specific problems, from recommender systems to self-driving cars, has not yet been matched by similar progress in general AI systems, coping with a variety of (different) problems. This dissertation deals with the long-standing problem of creating more general AI systems, through the analysis of their development and the evaluation of their cognitive abilities. It presents a declarative general-purpose learning system and a developmental and lifelong approach for knowledge acquisition, consolidation and forgetting. It also analyses the use of more ability-oriented evaluation techniques for AI evaluation and provides further insight into the concepts of development and incremental learning in AI systems.

  8. Improving process performance in Incremental Sheet Forming (ISF)

    International Nuclear Information System (INIS)

    Ambrogio, G.; Filice, L.; Manco, G. L.

    2011-01-01

    Incremental Sheet Forming (ISF) is a relatively new process in which a sheet clamped along its borders is progressively deformed by a hemispherical tool. The tool motion is CNC controlled and the path is designed using a CAD-CAM approach, with the aim of reproducing the final shape contour, as in surface milling. The absence of a dedicated setup and the related high flexibility are the main strengths of the process, and the reason why several researchers have focused their attention on ISF. On the other hand, the slowness of the process is its most relevant drawback, limiting wider industrial application. In this paper, a first attempt to overcome this limitation is presented, considering a significant speed increase with respect to the values currently used.

  9. Corrections of the NIST Statistical Test Suite for Randomness

    OpenAIRE

    Kim, Song-Ju; Umeno, Ken; Hasegawa, Akio

    2004-01-01

    It is well known that the NIST statistical test suite was used for the evaluation of the AES candidate algorithms. We have found that the test settings of the Discrete Fourier Transform test and the Lempel-Ziv test in this suite are wrong. We give four corrections of mistakes in the test settings. This suggests that a re-evaluation of the test results is needed.
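    For context, the Lempel-Ziv test (later dropped from the suite) counts the cumulatively distinct words produced by Lempel-Ziv incremental parsing of the bit string and compares the count with its expectation under randomness. A minimal sketch of the parsing step alone (the statistical comparison is omitted):

      def lempel_ziv_words(bits):
          """Number of phrases in the Lempel-Ziv incremental parse of a bit string."""
          words, current = set(), ""
          for b in bits:
              current += b
              if current not in words:     # a new phrase ends here
                  words.add(current)
                  current = ""
          return len(words) + (1 if current else 0)   # count a leftover partial phrase

      # "010110111" parses as 0 | 1 | 01 | 10 | 11 | 1  ->  6 words
      assert lempel_ziv_words("010110111") == 6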

  10. Studi Kompresi Data dengan Metode Arithmetic Coding

    OpenAIRE

    Santoso, Petrus

    2001-01-01

    Translated from Bahasa Indonesia: There are many data compression methods available today. Most of them can be grouped into one of two broad families, statistical based and dictionary based. An example of dictionary-based coding is Lempel Ziv Welch, while examples of statistical-based coding are Huffman Coding and Arithmetic Coding, the most recent algorithm. This paper reviews the principles of Arithmetic Coding as well as its advantages compared...
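    To make the dictionary-based side of that taxonomy concrete, here is a toy Lempel Ziv Welch encoder. It is illustrative only: real codecs start from a fixed byte alphabet and bound the code width, which this sketch does not.

      def lzw_encode(data):
          """Toy LZW encoder over the characters actually present in `data`."""
          table = {ch: i for i, ch in enumerate(sorted(set(data)))}
          out, w = [], ""
          for ch in data:
              if w + ch in table:
                  w += ch                      # extend the current match
              else:
                  out.append(table[w])         # emit code for the longest match
                  table[w + ch] = len(table)   # learn the new phrase
                  w = ch
          if w:
              out.append(table[w])
          return out, table

      codes, _ = lzw_encode("TOBEORNOTTOBEORTOBEORNOT")   # 24 characters -> 17 codes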

  11. A user-friendly tool for incremental haemodialysis prescription.

    Science.gov (United States)

    Casino, Francesco Gaetano; Basile, Carlo

    2018-01-05

    There has been heightened interest recently in incremental haemodialysis (IHD), the main advantage of which is likely a better preservation of the patients' residual kidney function. The implementation of IHD, however, is hindered by many factors, among them the mathematical complexity of its prescription. The aim of our study was to design a user-friendly tool for IHD prescription, consisting of only a few rows of a common spreadsheet. The keystone of our spreadsheet was the following fundamental concept: the dialysis dose to be prescribed in IHD depends only on the normalized urea clearance provided by the native kidneys (KRUn) of the patient for each frequency of treatment, according to the variable target model recently proposed by Casino and Basile (The variable target model: a paradigm shift in the incremental haemodialysis prescription. Nephrol Dial Transplant 2017; 32: 182-190). The first step was to put in sequence a series of equations in order to calculate, first, KRUn and, then, the key parameters to be prescribed for an adequate IHD; the second step was to compare KRUn values obtained with our spreadsheet with the KRUn values obtainable with the gold-standard Solute-solver (Daugirdas JT et al., Solute-solver: a web-based tool for modeling urea kinetics for a broad range of hemodialysis schedules in multiple patients. Am J Kidney Dis 2009; 54: 798-809) in a sample of 40 incident haemodialysis patients. Our spreadsheet provided excellent results. The differences with Solute-solver were clinically negligible. This was confirmed by the Bland-Altman plot built to analyse the agreement between KRUn values obtained with the two methods: the difference was 0.07 ± 0.05 mL/min/35 L. Our spreadsheet is a user-friendly tool able to provide clinically acceptable results in IHD prescription. Two immediate consequences could follow: (i) a larger dissemination of IHD might occur; and (ii) our spreadsheet could represent a useful tool for an ineludibly

  12. Complex Incremental Product Innovation in Established Service Firms: A Micro Institutional Perspective

    OpenAIRE

    Vermeulen, Patrick; Bosch, Frans; Volberda, Henk

    2007-01-01

    Many product innovation studies have described key determinants that should lead to successful incremental product innovation. Despite numerous studies suggesting how incremental product innovation should be successfully undertaken, many firms still struggle with this type of innovation. In this paper, we use an institutional perspective to investigate why established firms in the financial services industry struggle with their complex incremental product innovation efforts. We ar...

  13. Incremental cost of PACS in a medical intensive care unit

    Science.gov (United States)

    Langlotz, Curtis P.; Cleff, Bridget; Even-Shoshan, Orit; Bozzo, Mary T.; Redfern, Regina O.; Brikman, Inna; Seshadri, Sridhar B.; Horii, Steven C.; Kundel, Harold L.

    1995-05-01

    Our purpose is to determine the incremental costs (or savings) due to the introduction of picture archiving and communication systems (PACS) and computed radiology (CR) in a medical intensive care unit (MICU). Our economic analysis consists of three measurement methods. The first method is an assessment of the direct costs to the radiology department, implemented in a spreadsheet model. The second method consists of a series of brief observational studies to measure potential changes in personnel costs that might not be reflected in administrative claims. The third method (results not reported here) is a multivariate modeling technique which estimates the independent effect of PACS/CR on the cost of care (estimated from administrative claims data), while controlling for clinical case- mix variables. Our direct cost model shows no cost savings to the radiology department after the introduction of PACS in the medical intensive care unit. Savings in film supplies and film library personnel are offset by increases in capital equipment costs and PACS operation personnel. The results of observational studies to date demonstrate significant savings in clinician film-search time, but no significant change in technologist time or lost films. Our model suggests that direct radiology costs will increase after the limited introduction of PACS/CR in the MICU. Our observational studies show a small but significant effect on clinician film search time by the introduction of PACS/CR in the MICU, but no significant effect on other variables. The projected costs of a hospital-wide PACS are currently under study.

  14. An Incremental High-Utility Mining Algorithm with Transaction Insertion

    Science.gov (United States)

    Gan, Wensheng; Zhang, Binbin

    2015-01-01

    Association-rule mining is commonly used to discover useful and meaningful patterns from a very large database. It considers only the occurrence frequencies of items to reveal the relationships among itemsets. Traditional association-rule mining is, however, not suitable in real-world applications, since the items purchased by a customer carry other factors, such as profit or quantity. High-utility mining was designed to overcome this limitation of association-rule mining by considering both the quantity and profit measures. Most high-utility mining algorithms are designed to handle static databases. Few studies handle dynamic high-utility mining with transaction insertion, which otherwise requires database rescans and suffers the combinational explosion of the pattern-growth mechanism. In this paper, an efficient incremental algorithm with transaction insertion is designed to reduce computations without candidate generation, based on utility-list structures. The enumeration tree and the relationships between 2-itemsets are also adopted in the proposed algorithm to speed up the computations. Several experiments are conducted to show the performance of the proposed algorithm in terms of runtime, memory consumption, and number of generated patterns. PMID:25811038

  15. Incremental Frequent Subgraph Mining on Large Evolving Graphs

    KAUST Repository

    Abdelhamid, Ehab

    2017-08-22

    Frequent subgraph mining is a core graph operation used in many domains, such as graph data management and knowledge exploration, bioinformatics and security. Most existing techniques target static graphs. However, modern applications, such as social networks, utilize large evolving graphs. Mining these graphs using existing techniques is infeasible, due to the high computational cost. In this paper, we propose IncGM+, a fast incremental approach for continuous frequent subgraph mining problem on a single large evolving graph. We adapt the notion of “fringe” to the graph context, that is the set of subgraphs on the border between frequent and infrequent subgraphs. IncGM+ maintains fringe subgraphs and exploits them to prune the search space. To boost the efficiency, we propose an efficient index structure to maintain selected embeddings with minimal memory overhead. These embeddings are utilized to avoid redundant expensive subgraph isomorphism operations. Moreover, the proposed system supports batch updates. Using large real-world graphs, we experimentally verify that IncGM+ outperforms existing methods by up to three orders of magnitude, scales to much larger graphs and consumes less memory.

  16. A novel instrument for generating angular increments of 1 nanoradian

    Science.gov (United States)

    Alcock, Simon G.; Bugnar, Alex; Nistea, Ioana; Sawhney, Kawal; Scott, Stewart; Hillman, Michael; Grindrod, Jamie; Johnson, Iain

    2015-12-01

    Accurate generation of small angles is of vital importance for calibrating angle-based metrology instruments used in a broad spectrum of industries including mechatronics, nano-positioning, and optic fabrication. We present a novel, piezo-driven, flexure device capable of reliably generating micro- and nanoradian angles. Unlike many such instruments, Diamond Light Source's nano-angle generator (Diamond-NANGO) does not rely on two separate actuators or rotation stages to provide coarse and fine motion. Instead, a single Physik Instrumente NEXLINE "PiezoWalk" actuator provides millimetres of travel with nanometre resolution. A cartwheel flexure efficiently converts displacement from the linear actuator into rotary motion with minimal parasitic errors. Rotation of the flexure is directly measured via a Magnescale "Laserscale" angle encoder. Closed-loop operation of the PiezoWalk actuator, using high-speed feedback from the angle encoder, ensures that the Diamond-NANGO's output drifts by only ˜0.3 nrad rms over ˜30 min. We show that the Diamond-NANGO can reliably move with unprecedented 1 nrad (˜57 ndeg) angular increments over a range of >7000 μrad. An autocollimator, interferometer, and capacitive displacement sensor are used to independently confirm the Diamond-NANGO's performance by simultaneously measuring the rotation of a reflective cube.

  17. An incremental anomaly detection model for virtual machines

    Science.gov (United States)

    Zhang, Hancui; Chen, Shuyu; Liu, Jun; Zhou, Zhen; Wu, Tianshu

    2017-01-01

    The Self-Organizing Map (SOM) algorithm, as an unsupervised learning method, has been applied in anomaly detection due to its capabilities of self-organization and automatic anomaly prediction. However, because the algorithm is initialized randomly, it takes a long time to train a detection model. Besides, Cloud platforms with large-scale virtual machines are prone to performance anomalies due to their highly dynamic and resource-sharing character, which leaves the algorithm with low accuracy and low scalability. To address these problems, an Improved Incremental Self-Organizing Map (IISOM) model is proposed for anomaly detection of virtual machines. In this model, a heuristic-based initialization algorithm and a Weighted Euclidean Distance (WED) algorithm are introduced into SOM to speed up the training process and improve model quality. Meanwhile, a neighborhood-based searching algorithm is presented to accelerate detection by taking into account the large scale and highly dynamic features of virtual machines on a cloud platform. To demonstrate the effectiveness, experiments on the common benchmark KDD Cup dataset and a real dataset have been performed. Results suggest that IISOM has advantages in accuracy and convergence velocity of anomaly detection for virtual machines on cloud platforms. PMID:29117245
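    The two named ingredients, heuristic initialization and a Weighted Euclidean Distance in the best-matching-unit search, can be sketched in a plain SOM training loop. This is not the IISOM algorithm itself; the grid size, learning-rate and neighborhood schedules, and sample-based initialization are assumed choices.

      import numpy as np

      def train_wed_som(X, grid=(8, 8), feat_w=None, epochs=10,
                        lr0=0.5, sigma0=3.0, seed=0):
          """SOM training loop with a Weighted Euclidean Distance (sketch)."""
          rng = np.random.default_rng(seed)
          rows, cols = grid
          n, d = X.shape
          w = np.ones(d) if feat_w is None else np.asarray(feat_w, dtype=float)
          units = X[rng.integers(0, n, rows * cols)].astype(float)  # heuristic init
          coords = np.array([(r, c) for r in range(rows)
                             for c in range(cols)], dtype=float)
          for e in range(epochs):
              lr = lr0 * (1.0 - e / epochs)                 # decaying learning rate
              sigma = sigma0 * (1.0 - e / epochs) + 0.5     # shrinking neighborhood
              for x in X[rng.permutation(n)]:
                  dist = np.sqrt((w * (units - x) ** 2).sum(axis=1))   # WED
                  bmu = int(dist.argmin())                  # best matching unit
                  h = np.exp(-((coords - coords[bmu]) ** 2).sum(axis=1)
                             / (2.0 * sigma ** 2))
                  units += lr * h[:, None] * (x - units)    # pull the neighborhood
          return units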

  18. Validation of daily increments periodicity in otoliths of spotted gar

    Science.gov (United States)

    Snow, Richard A.; Long, James M.; Frenette, Bryan D.

    2017-01-01

    Accurate age and growth information is essential for successful management of fish populations and for understanding early life history. We validated daily increment deposition, including the timing of first ring formation, for spotted gar (Lepisosteus oculatus) through 127 days post hatch. Fry were produced from hatchery-spawned specimens, and up to 10 individuals per week were sacrificed and their otoliths (sagitta, lapillus, and asteriscus) removed for daily age estimation. Daily age estimates for all three otolith pairs were significantly related to known age. The strongest relationships existed for measurements from the sagitta (r² = 0.98) and the lapillus (r² = 0.99), with the asteriscus (r² = 0.95) the lowest. All age prediction models resulted in a slope near unity, indicating that ring deposition occurred approximately daily. Initiation of ring formation varied among otolith types, with deposition beginning 3, 7, and 9 days post hatch for the sagitta, lapillus, and asteriscus, respectively. Results of this study suggest that otoliths are useful for estimating the daily age of spotted gar juveniles; these data may be used to back-calculate hatch dates, estimate early growth rates, and correlate with environmental factors that influence spawning in wild populations. This early life history information will be valuable for better understanding the ecology of this species.

  19. An incremental anomaly detection model for virtual machines.

    Directory of Open Access Journals (Sweden)

    Hancui Zhang

    Full Text Available The Self-Organizing Map (SOM) algorithm, as an unsupervised learning method, has been applied in anomaly detection due to its capabilities of self-organization and automatic anomaly prediction. However, because the algorithm is initialized randomly, it takes a long time to train a detection model. Besides, Cloud platforms with large-scale virtual machines are prone to performance anomalies due to their highly dynamic and resource-sharing character, which leaves the algorithm with low accuracy and low scalability. To address these problems, an Improved Incremental Self-Organizing Map (IISOM) model is proposed for anomaly detection of virtual machines. In this model, a heuristic-based initialization algorithm and a Weighted Euclidean Distance (WED) algorithm are introduced into SOM to speed up the training process and improve model quality. Meanwhile, a neighborhood-based searching algorithm is presented to accelerate detection by taking into account the large scale and highly dynamic features of virtual machines on a cloud platform. To demonstrate the effectiveness, experiments on the common benchmark KDD Cup dataset and a real dataset have been performed. Results suggest that IISOM has advantages in accuracy and convergence velocity of anomaly detection for virtual machines on cloud platforms.

  20. VOLATILITAS RELEVANSI NILAI INCREMENTAL DARI LABA DAN NILAI BUKU

    Directory of Open Access Journals (Sweden)

    B. Linggar Yekti Nugraheni

    2012-03-01

    Full Text Available This research investigates the value relevance volatility pattern of earnings and book value equity. It is predicted that the volatility of value relevance is associated with the time horizon. Using the Ohlson model, the hypotheses developed are: (1) earnings and book value equity are positively associated with the stock price; and (2) there is a decreasing or increasing pattern of incremental value relevance. The sample used in this research is manufacturing companies listed on the ISE (Indonesia Stock Exchange) during the 1998-2007 period. The results show that earnings and book value equity are positively related to the stock price, and that the value relevance of earnings decreased while that of book value equity increased over the observation period.

  1. Automated Dimension Determination for NMF-based Incremental Collaborative Filtering

    Directory of Open Access Journals (Sweden)

    Xiwei Wang

    2015-12-01

    Full Text Available Nonnegative matrix factorization (NMF) based collaborative filtering techniques have achieved great success in product recommendations. It is well known that in NMF the dimensions of the factor matrices have to be determined in advance. Moreover, data is growing fast; thus in some cases the dimensions need to be changed to reduce the approximation error. Recommender systems should be capable of updating new data in a timely manner without sacrificing prediction accuracy. In this paper, we propose an NMF-based data update approach with automated dimension determination for collaborative filtering purposes. The approach can determine the dimensions of the factor matrices and update them automatically. It exploits a nearest-neighborhood-based clustering algorithm to cluster users and items according to their auxiliary information, and uses the clusters as constraints in NMF. The dimensions of the factor matrices are associated with the cluster quantities. When new data becomes available, the incremental clustering algorithm determines whether to increase the number of clusters or merge the existing clusters. Experiments on three different datasets (MovieLens, Sushi, and LibimSeTi) were conducted to examine the proposed approach. The results show that our approach can update the data quickly and provide encouraging prediction accuracy.
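    For background, the factorization being updated is the standard NMF V ≈ WH with nonnegative factors. The sketch below uses the classical Lee-Seung multiplicative updates with a fixed inner dimension k; in the proposed approach k would instead be tied to the cluster quantities and revised as new data arrives.

      import numpy as np

      def nmf(V, k, iters=200, eps=1e-9, seed=0):
          """Plain multiplicative-update NMF: V (m x n, nonnegative) ~= W @ H."""
          rng = np.random.default_rng(seed)
          m, n = V.shape
          W = rng.random((m, k)) + eps
          H = rng.random((k, n)) + eps
          for _ in range(iters):
              H *= (W.T @ V) / (W.T @ W @ H + eps)   # update item factors
              W *= (V @ H.T) / (W @ H @ H.T + eps)   # update user factors
          return W, H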

  2. Incremental Dynamic Analysis of Koyna Dam under Repeated Ground Motions

    Science.gov (United States)

    Zainab Nik Azizan, Nik; Majid, Taksiah A.; Nazri, Fadzli Mohamed; Maity, Damodar; Abdullah, Junaidah

    2018-03-01

    This paper presents the incremental dynamic analysis (IDA) of a concrete gravity dam under single and repeated earthquake loadings to identify the limit state of the dam. Seven ground motions with horizontal and vertical components, based on real repeated earthquakes worldwide, were considered as seismic input in the nonlinear dynamic analysis. All ground motions were converted to response spectra and scaled against the developed elastic response spectrum in order to match the characteristics of the ground motion to the soil type. The scaling depends on the fundamental period T1 of the dam. The Koyna dam was selected as a case study, assuming no sliding and a rigid foundation. IDA curves for the Koyna dam were developed for single and repeated ground motions, and the performance level of the dam was identified. The IDA curve for repeated ground motions is stiffer than that for a single ground motion. The ultimate-state displacement for a single event is 45.59 mm, decreasing to 39.33 mm under repeated events, a reduction of about 14%. This shows that the performance level of the dam under seismic loading depends on the ground motion pattern.

  3. Robust, Causal, and Incremental Approaches to Investigating Linguistic Adaptation

    Science.gov (United States)

    Roberts, Seán G.

    2018-01-01

    This paper discusses the maximum robustness approach for studying cases of adaptation in language. We live in an age where we have more data on more languages than ever before, and more data to link it with from other domains. This should make it easier to test hypotheses involving adaptation, and also to spot new patterns that might be explained by adaptation. However, there is not much discussion of the overall approach to research in this area. There are outstanding questions about how to formalize theories, what the criteria are for directing research and how to integrate results from different methods into a clear assessment of a hypothesis. This paper addresses some of those issues by suggesting an approach which is causal, incremental and robust. It illustrates the approach with reference to a recent claim that dry environments select against the use of precise contrasts in pitch. Study 1 replicates a previous analysis of the link between humidity and lexical tone with an alternative dataset and finds that it is not robust. Study 2 performs an analysis with a continuous measure of tone and finds no significant correlation. Study 3 addresses a more recent analysis of the link between humidity and vowel use and finds that it is robust, though the effect size is small and the robustness of the measurement of vowel use is low. Methodological robustness of the general theory is addressed by suggesting additional approaches including iterated learning, a historical case study, corpus studies, and studying individual speech. PMID:29515487

  4. Numerical Simulation of Incremental Sheet Forming by Simplified Approach

    Science.gov (United States)

    Delamézière, A.; Yu, Y.; Robert, C.; Ayed, L. Ben; Nouari, M.; Batoz, J. L.

    2011-01-01

    The Incremental Sheet Forming (ISF) process can transform a flat metal sheet into a complex 3D part using a hemispherical tool. The final geometry of the product is obtained by the relative movement between this tool and the blank. The main advantage of the process is that the cost of the tool is very low compared to deep drawing with rigid tools. The main disadvantage is the very low velocity of the tool and thus the large amount of time needed to form the part. Classical contact algorithms give good agreement with experimental results, but are time consuming. A Simplified Approach for managing the contact between the tool and the blank in ISF is presented here. The general principle of this approach is to impose the displacement of the nodes in contact with the tool at a given position. On a benchmark part, the CPU time of the present Simplified Approach is significantly reduced compared with a classical simulation performed with Abaqus implicit.

  5. Distribution of incremental static stress caused by earthquakes

    Directory of Open Access Journals (Sweden)

    Y. Y. Kagan

    1994-01-01

    Full Text Available Theoretical calculations, simulations and measurements of the rotation of earthquake focal mechanisms suggest that the stress in earthquake focal zones follows the Cauchy distribution, which is one of the stable probability distributions (with the value of the exponent α equal to 1). We review the properties of the stable distributions and show that the Cauchy distribution is expected to approximate the stress caused by earthquakes occurring over geologically long intervals of a fault zone development. However, the stress caused by recent earthquakes recorded in instrumental catalogues should follow symmetric stable distributions with the value of α significantly less than one. This is explained by a fractal distribution of earthquake hypocentres: the dimension of a hypocentre set, δ, is close to zero for short-term earthquake catalogues and asymptotically approaches 2¼ for long time intervals. We use the Harvard catalogue of seismic moment tensor solutions to investigate the distribution of incremental static stress caused by earthquakes. The stress measured in the focal zone of each event is approximated by stable distributions. In agreement with theoretical considerations, the exponent value of the distribution approaches zero as the time span of an earthquake catalogue (ΔT) decreases. For large stress values α increases. We surmise that this is caused by the increase of δ for small inter-earthquake distances due to location errors.

  6. Noise masking of S-cone increments and decrements.

    Science.gov (United States)

    Wang, Quanhong; Richters, David P; Eskew, Rhea T

    2014-11-12

    S-cone increment and decrement detection thresholds were measured in the presence of bipolar, dynamic noise masks. Noise chromaticities were the L-, M-, and S-cone directions, as well as L-M, L+M, and achromatic (L+M+S) directions. Noise contrast power was varied to measure threshold Energy versus Noise (EvN) functions. S+ and S- thresholds were similarly, and weakly, raised by achromatic noise. However, S+ thresholds were much more elevated by S, L+M, L-M, L- and M-cone noises than were S- thresholds, even though the noises consisted of two symmetric chromatic polarities of equal contrast power. A linear cone combination model accounts for the overall pattern of masking of a single test polarity well. L and M cones have opposite signs in their effects upon raising S+ and S- thresholds. The results strongly indicate that the psychophysical mechanisms responsible for S+ and S- detection, presumably based on S-ON and S-OFF pathways, are distinct, unipolar mechanisms, and that they have different spatiotemporal sampling characteristics, or contrast gains, or both. © 2014 ARVO.

  7. Business Collaboration in Food Networks: Incremental Solution Development

    Directory of Open Access Journals (Sweden)

    Harald Sundmaeker

    2014-10-01

    Full Text Available The paper will present an approach for incremental solution development that is based on the use of the currently developed Internet-based FIspace business collaboration platform. The key element is the clear segmentation of infrastructures that are either internal or external to the collaborating business entity in the food network. On the one hand, the approach makes it possible to differentiate between specific centralised as well as decentralised ways of storing data and hosting IT-based functionalities. The selection of specific data-exchange protocols and data models is facilitated. On the other hand, the supported solution design and subsequent development focus on reusable "software Apps" that can be used on their own and incorporate a clear added value for the business actors. It will be outlined how to push the development and introduction of Apps that do not require basic changes to the existing infrastructure. The paper will present an example based on the development of a set of Apps for the exchange of product-quality-related information in food networks, specifically addressing fresh fruits and vegetables. It combines workflow support for data exchange from farm to retail with the provision of quality feedback information to facilitate business process improvement. Finally, the latest status of the FIspace platform development will be outlined, including key features and potential ways for real users and software developers to use the FIspace platform initiated by science and industry.

  8. Automating the Incremental Evolution of Controllers for Physical Robots.

    Science.gov (United States)

    Faíña, Andrés; Jacobsen, Lars Toft; Risi, Sebastian

    2017-01-01

    Evolutionary robotics is challenged with some key problems that must be solved, or at least mitigated extensively, before it can fulfill some of its promises to deliver highly autonomous and adaptive robots. The reality gap and the ability to transfer phenotypes from simulation to reality constitute one such problem. Another lies in the embodiment of the evolutionary processes, which links to the first, but focuses on how evolution can act on real agents and occur independently from simulation, that is, going from being, as Eiben, Kernbach, & Haasdijk [2012, p. 261] put it, "the evolution of things, rather than just the evolution of digital objects.…" The work presented here investigates how fully autonomous evolution of robot controllers can be realized in hardware, using an industrial robot and a marker-based computer vision system. In particular, this article presents an approach to automate the reconfiguration of the test environment and shows that it is possible, for the first time, to incrementally evolve a neural robot controller for different obstacle avoidance tasks with no human intervention. Importantly, the system offers a high level of robustness and precision that could potentially open up the range of problems amenable to embodied evolution.

  9. An Incremental High-Utility Mining Algorithm with Transaction Insertion

    Directory of Open Access Journals (Sweden)

    Jerry Chun-Wei Lin

    2015-01-01

    Full Text Available Association-rule mining is commonly used to discover useful and meaningful patterns from a very large database. It considers only the occurrence frequencies of items to reveal the relationships among itemsets. Traditional association-rule mining is, however, not suitable in real-world applications, since the items purchased by a customer carry other factors, such as profit or quantity. High-utility mining was designed to overcome this limitation of association-rule mining by considering both the quantity and profit measures. Most high-utility mining algorithms are designed to handle static databases. Few studies handle dynamic high-utility mining with transaction insertion, which otherwise requires database rescans and suffers the combinational explosion of the pattern-growth mechanism. In this paper, an efficient incremental algorithm with transaction insertion is designed to reduce computations without candidate generation, based on utility-list structures. The enumeration tree and the relationships between 2-itemsets are also adopted in the proposed algorithm to speed up the computations. Several experiments are conducted to show the performance of the proposed algorithm in terms of runtime, memory consumption, and number of generated patterns.

  10. A comparative study of velocity increment generation between the rigid body and flexible models of MMET

    Energy Technology Data Exchange (ETDEWEB)

    Ismail, Norilmi Amilia, E-mail: aenorilmi@usm.my [School of Aerospace Engineering, Engineering Campus, Universiti Sains Malaysia, 14300 Nibong Tebal, Pulau Pinang (Malaysia)

    2016-02-01

    The motorized momentum exchange tether (MMET) is capable of generating useful velocity increments through spin–orbit coupling. This study presents a comparison of the velocity increments between the rigid body and flexible models of the MMET. The equations of motion of both models in the time domain are transformed into functions of true anomaly. The equations of motion are integrated, and the responses in terms of the velocity increment of the rigid body and flexible models are compared and analysed. Results show that the initial conditions, eccentricity, and flexibility of the tether have significant effects on the velocity increments of the tether.

  11. Appropriate use of the increment entropy for electrophysiological time series.

    Science.gov (United States)

    Liu, Xiaofeng; Wang, Xue; Zhou, Xu; Jiang, Aimin

    2018-04-01

    The increment entropy (IncrEn) is a new measure for quantifying the complexity of a time series. There are three critical parameters in the IncrEn calculation: N (length of the time series), m (dimensionality), and q (quantifying precision). However, the question of how to choose the most appropriate combination of IncrEn parameters for short datasets has not been extensively explored. The purpose of this research was to provide guidance on choosing suitable IncrEn parameters for short datasets by exploring the effects of varying the parameter values. We used simulated data, epileptic EEG data and cardiac interbeat (RR) data to investigate the effects of the parameters on the calculated IncrEn values. The results reveal that IncrEn is sensitive to changes in m, q and N for short datasets (N ≤ 500). However, IncrEn reaches stability at a data length of N = 1000 with m = 2 and q = 2, and for short datasets (N = 100) it shows better relative consistency with 2 ≤ m ≤ 6 and 2 ≤ q ≤ 8. We suggest that the value of N should be no less than 100. To enable a clear distinction between different classes based on IncrEn, we recommend that m and q take values between 2 and 4. With appropriate parameters, IncrEn enables the effective detection of complexity variations in physiological time series, suggesting that IncrEn should be useful for the analysis of physiological time series in clinical applications. Copyright © 2018 Elsevier Ltd. All rights reserved.
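    A rough sketch of the measure helps fix what m and q control. The quantization below (the sign of each increment plus a magnitude bucketed into q levels relative to the increments' standard deviation) is one plausible reading of "quantifying precision" and may differ in detail from the published definition, so treat it as illustrative only.

      import numpy as np
      from collections import Counter

      def increment_entropy(x, m=2, q=4):
          """IncrEn sketch: entropy of quantized-increment words of length m."""
          v = np.diff(np.asarray(x, dtype=float))
          sd = v.std() or 1.0                               # guard a flat series
          mag = np.minimum(np.floor(np.abs(v) * q / sd), q).astype(int)
          code = np.sign(v).astype(int) * mag               # signed, quantized increments
          words = [tuple(code[i:i + m]) for i in range(len(code) - m + 1)]
          freq = np.asarray(list(Counter(words).values()), dtype=float)
          p = freq / freq.sum()
          return float(-(p * np.log2(p)).sum() / m)         # normalized by word length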

  12. An Incremental Type-2 Meta-Cognitive Extreme Learning Machine.

    Science.gov (United States)

    Pratama, Mahardhika; Zhang, Guangquan; Er, Meng Joo; Anavatti, Sreenatha

    2017-02-01

    Existing extreme learning algorithms have not taken four issues into account: 1) complexity; 2) uncertainty; 3) concept drift; and 4) high dimensionality. A novel incremental type-2 meta-cognitive extreme learning machine (ELM) called evolving type-2 ELM (eT2ELM) is proposed to cope with these four issues in this paper. The eT2ELM presents the three main pillars of human meta-cognition: 1) what-to-learn; 2) how-to-learn; and 3) when-to-learn. The what-to-learn component selects important training samples for model updates by virtue of the online certainty-based active learning method, which renders eT2ELM a semi-supervised classifier. The how-to-learn element develops a synergy between extreme learning theory and the evolving concept, whereby the hidden nodes can be generated and pruned automatically from data streams with no tuning of hidden nodes. The when-to-learn constituent makes use of the standard sample reserved strategy. A generalized interval type-2 fuzzy neural network is also put forward as a cognitive component, in which a hidden node is built upon the interval type-2 multivariate Gaussian function while exploiting a subset of the Chebyshev series in the output node. The efficacy of the proposed eT2ELM is numerically validated on 12 data streams containing various concept drifts. The numerical results are confirmed by thorough statistical tests, in which the eT2ELM demonstrates the most encouraging results in delivering reliable prediction, while sustaining low complexity.

  13. Parsing multiple processes of high temperature impacts on corn/soybean yield using a newly developed CLM-APSIM modeling framework

    Science.gov (United States)

    Peng, B.; Guan, K.; Chen, M.

    2016-12-01

    Future agricultural production faces the grand challenge of higher temperatures under climate change. There are multiple physiological and metabolic processes through which high temperature affects crop yield. Specifically, we consider the following major processes: (1) direct temperature effects on photosynthesis and respiration; (2) sped-up growth rates and the shortening of the growing season; (3) heat stress during the reproductive stage (flowering and grain filling); and (4) high-temperature-induced increases in atmospheric water demand. In this work, we use a newly developed modeling framework (CLM-APSIM) to simulate corn and soybean growth and explicitly parse the above four processes. By combining the strength of CLM in modeling surface biophysical (e.g., hydrology and energy balance) and biogeochemical (e.g., photosynthesis and carbon-nitrogen interactions) processes with that of APSIM in modeling crop phenology and reproductive stress, the newly developed CLM-APSIM modeling framework enables us to diagnose the impacts of high temperature stress through different processes at various phenological stages. Ground measurements from the advanced SoyFACE facility at the University of Illinois are used here to calibrate, validate, and improve the CLM-APSIM framework at the site level. We finally use the CLM-APSIM framework to project crop yield for the whole US Corn Belt under different climate scenarios.

  14. Lactate and ammonia concentration in blood and sweat during incremental cycle ergometer exercise

    NARCIS (Netherlands)

    Ament, W; Huizenga; Mook, GA; Gips, CH; Verkerke, GJ

    It is known that the concentrations of ammonia and lactate in blood increase during incremental exercise. Sweat also contains lactate and ammonia. The aim of the present study was to investigate the physiological response of lactate and ammonia in plasma and sweat during stepwise incremental cycle ergometer exercise.

  15. A new recursive incremental algorithm for building minimal acyclic deterministic finite automata

    NARCIS (Netherlands)

    Watson, B.W.; Martin-Vide, C.; Mitrana, V.

    2003-01-01

    This chapter presents a new algorithm for incrementally building minimal acyclic deterministic finite automata. Such minimal automata are a compact representation of a finite set of words (e.g. in a spell checker). The incremental aspect of such algorithms (where the intermediate automaton is
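    For orientation, the classic incremental construction for lexicographically sorted input (Daciuk, Mihov, Watson and Watson) can be sketched as follows; the chapter's own recursive variant differs in detail. After each word is added, the suffix of the previous word that can no longer be extended is minimized against a registry of unique states.

      class State:
          def __init__(self):
              self.edges = {}       # char -> State
              self.final = False

          def key(self):            # equivalence signature of a minimized state
              return (self.final,
                      tuple((c, id(t)) for c, t in sorted(self.edges.items())))

      def build_min_adfa(words):
          """Minimal acyclic DFA from lexicographically sorted, unique words."""
          register, root = {}, State()

          def replace_or_register(state):
              c = max(state.edges)               # most recently added edge
              child = state.edges[c]
              if child.edges:
                  replace_or_register(child)
              k = child.key()
              if k in register:
                  state.edges[c] = register[k]   # reuse an equivalent state
              else:
                  register[k] = child

          for w in words:
              node, i = root, 0
              while i < len(w) and w[i] in node.edges:   # walk the common prefix
                  node, i = node.edges[w[i]], i + 1
              if node.edges:
                  replace_or_register(node)      # minimize the previous suffix
              for ch in w[i:]:                   # append the new suffix
                  node.edges[ch] = State()
                  node = node.edges[ch]
              node.final = True
          if root.edges:
              replace_or_register(root)
          return root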

  16. Incremental Beliefs of Ability, Achievement Emotions and Learning of Singapore Students

    Science.gov (United States)

    Luo, Wenshu; Lee, Kerry; Ng, Pak Tee; Ong, Joanne Xiao Wei

    2014-01-01

    This study investigated the relationships of students' incremental beliefs of math ability to their achievement emotions, classroom engagement and math achievement. A sample of 273 secondary students in Singapore were administered measures of incremental beliefs of math ability, math enjoyment, pride, boredom and anxiety, as well as math classroom…

  17. Do otolith increments allow correct inferences about age and growth of coral reef fishes?

    Science.gov (United States)

    Booth, D. J.

    2014-03-01

    Otolith increment structure is widely used to estimate the age and growth of marine fishes. Here, I test the accuracy of long-term otolith increment analysis for describing the age and growth characteristics of the lemon damselfish Pomacentrus moluccensis. I compare the number of putative annual otolith increments (as a proxy for actual age) and the widths of these increments (as proxies for somatic growth) with actual tagged-fish length data, based on a 6-year dataset, the longest time course for a coral reef fish. Estimated age from otoliths corresponded closely with actual age in all cases, confirming annual increment formation. However, otolith increment widths were poor proxies for actual growth in length (linear regression r² = 0.44-0.90, n = 6 fish) and were clearly of limited value in estimating annual growth. Up to 60% of the annual growth variation was missed using otolith increments, suggesting that long-term back-calculations of otolith growth characteristics of reef fish populations should be interpreted with caution.

  18. Dental caries increments and related factors in children with type 1 diabetes mellitus.

    Science.gov (United States)

    Siudikiene, J; Machiulskiene, V; Nyvad, B; Tenovuo, J; Nedzelskiene, I

    2008-01-01

    The aim of this study was to analyse possible associations between caries increments and selected caries determinants in children with type 1 diabetes mellitus and their age- and sex-matched non-diabetic controls, over 2 years. A total of 63 (10-15 years old) diabetic and non-diabetic pairs were examined for dental caries, oral hygiene and salivary factors. Salivary flow rates, buffer effect, concentrations of mutans streptococci, lactobacilli, yeasts, total IgA and IgG, protein, albumin, amylase and glucose were analysed. Means of 2-year decayed/missing/filled surface (DMFS) increments were similar in diabetics and their controls. Over the study period, both unstimulated and stimulated salivary flow rates remained significantly lower in diabetic children compared to controls. No differences were observed in the counts of lactobacilli, mutans streptococci or yeast growth during follow-up, whereas salivary IgA, protein and glucose concentrations were higher in diabetics than in controls throughout the 2-year period. Multivariable linear regression analysis showed that children with higher 2-year DMFS increments were older at baseline and had higher salivary glucose concentrations than children with lower 2-year DMFS increments. Likewise, higher 2-year DMFS increments in diabetics versus controls were associated with greater increments in salivary glucose concentrations in diabetics. Higher increments in active caries lesions in diabetics versus controls were associated with greater increments of dental plaque and greater increments of salivary albumin. Our results suggest that, in addition to dental plaque as a common caries risk factor, diabetes-induced changes in salivary glucose and albumin concentrations are indicative of caries development among diabetics. Copyright 2008 S. Karger AG, Basel.

  19. Space-time quantitative source apportionment of soil heavy metal concentration increments.

    Science.gov (United States)

    Yang, Yong; Christakos, George; Guo, Mingwu; Xiao, Lu; Huang, Wei

    2017-04-01

    Assessing space-time trends and detecting the sources of heavy metal accumulation in soils have important consequences for the prevention and treatment of soil heavy metal pollution. In this study, we collected soil samples in the eastern part of the Qingshan district, Wuhan city, Hubei Province, China, during the period 2010-2014. The Cd, Cu, Pb and Zn concentrations in soils exhibited significant accumulation during 2010-2014. The spatiotemporal Kriging technique, based on a quantitative characterization of soil heavy metal concentration variations in terms of non-separable variogram models, was employed to estimate the spatiotemporal soil heavy metal distribution in the study region. Our findings showed that the Cd, Cu, and Zn concentrations have an obvious incremental tendency from the southwestern to the central part of the study region, whereas the Pb concentrations exhibited an obvious tendency from the northern part to the central part of the region. Spatial overlay analysis was then used to obtain absolute and relative concentration increments of adjacent 1- or 5-year periods during 2010-2014. The spatial distribution of soil heavy metal concentration increments showed that the larger increments occurred in the center of the study region. Lastly, principal component analysis combined with the multiple linear regression method (PCA-MLR) was employed to quantify the source apportionment of the soil heavy metal concentration increments in the region. Our results led to the conclusion that the sources of soil heavy metal concentration increments should be ascribed to industry, agriculture and traffic. In particular, 82.5% of the soil heavy metal concentration increment during 2010-2014 was ascribed to industrial/agricultural activity sources. Highlights: ► Spatiotemporal Kriging and spatial overlay analysis were used to obtain the spatial distribution of heavy metal concentration increments in soils. ► PCA-MLR was used to quantify the source apportionment of soil heavy metal concentration increments. Copyright © 2017
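    The PCA-MLR step admits a compact generic sketch. The code below follows the common APCS-MLR recipe (absolute principal component scores regressed on the response), which may differ in detail from the authors' pipeline; the number of factors and the scikit-learn implementation are assumptions.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression
      from sklearn.preprocessing import StandardScaler

      def apcs_mlr(X, y, n_factors=3):
          """Fractional source contributions via APCS-MLR (generic sketch)."""
          scaler = StandardScaler().fit(X)
          pca = PCA(n_components=n_factors).fit(scaler.transform(X))
          t = pca.transform(scaler.transform(X))
          t0 = pca.transform(scaler.transform(np.zeros((1, X.shape[1]))))
          apcs = t - t0                       # absolute principal component scores
          reg = LinearRegression().fit(apcs, y)
          contrib = reg.coef_ * apcs.mean(axis=0)     # mean source contributions
          return contrib / (contrib.sum() + reg.intercept_)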

  20. On the validity of the incremental approach to estimate the impact of cities on air quality

    Science.gov (United States)

    Thunis, Philippe

    2018-01-01

    The question of how much cities are the sources of their own air pollution is not only theoretical as it is critical to the design of effective strategies for urban air quality planning. In this work, we assess the validity of the commonly used incremental approach to estimate the likely impact of cities on their air pollution. With the incremental approach, the city impact (i.e. the concentration change generated by the city emissions) is estimated as the concentration difference between a rural background and an urban background location, also known as the urban increment. We show that the city impact is in reality made up of the urban increment and two additional components and consequently two assumptions need to be fulfilled for the urban increment to be representative of the urban impact. The first assumption is that the rural background location is not influenced by emissions from within the city whereas the second requires that background concentration levels, obtained with zero city emissions, are equal at both locations. Because the urban impact is not measurable, the SHERPA modelling approach, based on a full air quality modelling system, is used in this work to assess the validity of these assumptions for some European cities. Results indicate that for PM2.5, these two assumptions are far from being fulfilled for many large or medium city sizes. For this type of cities, urban increments are largely underestimating city impacts. Although results are in better agreement for NO2, similar issues are met. In many situations the incremental approach is therefore not an adequate estimate of the urban impact on air pollution. This poses issues in terms of interpretation when these increments are used to define strategic options in terms of air quality planning. We finally illustrate the interest of comparing modelled and measured increments to improve our confidence in the model results.

  1. Annual increments, specific gravity and energy of Eucalyptus grandis by gamma-ray attenuation technique

    International Nuclear Information System (INIS)

    Rezende, M.A.; Guerrini, I.A.; Ferraz, E.S.B.

    1990-01-01

    Annual increments in volume, mass and energy of Eucalyptus grandis up to thirteen years of age were determined, taking into account measurements of specific gravity and the calorific value of wood. It was observed that the calorific value of wood decreases slightly, while the specific gravity increases significantly with age. The so-called culmination age for the Annual Volume Increment was determined to be around the fourth year of growth, while for the Annual Mass and Energy Increments it was around the eighth year. These results show that a tree at a particular age may show no significant growth in volume, yet still grow significantly in mass and energy. (author)

  2. Sustained change blindness to incremental scene rotation: a dissociation between explicit change detection and visual memory.

    Science.gov (United States)

    Hollingworth, Andrew; Henderson, John M

    2004-07-01

    In a change detection paradigm, the global orientation of a natural scene was incrementally changed in 1 degree intervals. In Experiments 1 and 2, participants demonstrated sustained change blindness to incremental rotation, often coming to consider a significantly different scene viewpoint as an unchanged continuation of the original view. Experiment 3 showed that participants who failed to detect the incremental rotation nevertheless reliably detected a single-step rotation back to the initial view. Together, these results demonstrate an important dissociation between explicit change detection and visual memory. Following a change, visual memory is updated to reflect the changed state of the environment, even if the change was not detected.

  3. Observers for a class of systems with nonlinearities satisfying an incremental quadratic inequality

    Science.gov (United States)

    Acikmese, Ahmet Behcet; Martin, Corless

    2004-01-01

    We consider the problem of state estimation for nonlinear time-varying systems whose nonlinearities satisfy an incremental quadratic inequality. Observers are presented which guarantee that the state estimation error exponentially converges to zero.

  4. Performance and delay analysis of hybrid ARQ with incremental redundancy over double rayleigh fading channels

    KAUST Repository

    Chelli, Ali; Zedini, Emna; Alouini, Mohamed-Slim; Barry, John R.; Pätzold, Matthias

    2014-01-01

    the performance of HARQ from an information theoretic perspective. Analytical expressions are derived for the ε-outage capacity, the average number of transmissions, and the average transmission rate of HARQ with incremental redundancy, assuming a maximum
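    Such closed-form results are easy to cross-check by simulation. The Monte Carlo sketch below models the double Rayleigh channel as the product of two independent complex Gaussian gains and declares decoding successful once the accumulated mutual information of the incremental-redundancy rounds reaches the initial rate; the rate, SNR and round limit are placeholder values.

      import numpy as np

      def harq_ir_double_rayleigh(snr_db=10.0, rate=2.0, max_rounds=4,
                                  trials=200_000, seed=0):
          """Outage probability and mean transmission count, by simulation."""
          rng = np.random.default_rng(seed)
          snr = 10.0 ** (snr_db / 10.0)
          shape = (trials, max_rounds)
          h1 = (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)
          h2 = (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)
          # accumulated mutual information after each IR round
          info = np.cumsum(np.log2(1.0 + snr * np.abs(h1 * h2) ** 2), axis=1)
          decoded = info >= rate
          outage = 1.0 - decoded[:, -1].mean()
          rounds = np.where(decoded.any(axis=1),
                            decoded.argmax(axis=1) + 1, max_rounds)
          return outage, rounds.mean()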

  5. Incremental Learning of Perceptual Categories for Open-Domain Sketch Recognition

    National Research Council Canada - National Science Library

    Lovett, Andrew; Dehghani, Morteza; Forbus, Kenneth

    2007-01-01

    .... This paper describes an incremental learning technique for opendomain recognition. Our system builds generalizations for categories of objects based upon previous sketches of those objects and uses those generalizations to classify new sketches...

  6. Performance of hybrid-ARQ with incremental redundancy over relay channels

    KAUST Repository

    Chelli, Ali; Alouini, Mohamed-Slim

    2012-01-01

    In this paper, we consider a relay network consisting of a source, a relay, and a destination. The source transmits a message to the destination using hybrid automatic repeat request (HARQ) with incremental redundancy (IR). The relay overhears

  7. Robust flight control using incremental nonlinear dynamic inversion and angular acceleration prediction

    NARCIS (Netherlands)

    Sieberling, S.; Chu, Q.P.; Mulder, J.A.

    2010-01-01

    This paper presents a flight control strategy based on nonlinear dynamic inversion. The approach presented, called incremental nonlinear dynamic inversion, uses properties of general mechanical systems and nonlinear dynamic inversion by feeding back angular accelerations. Theoretically, feedback of

  8. An Environment for Incremental Development of Distributed Extensible Asynchronous Real-time Systems

    Science.gov (United States)

    Ames, Charles K.; Burleigh, Scott; Briggs, Hugh C.; Auernheimer, Brent

    1996-01-01

    Incremental parallel development of distributed real-time systems is difficult. Architectural techniques and software tools developed at the Jet Propulsion Laboratory's (JPL's) Flight System Testbed make feasible the integration of complex systems in various stages of development.

  9. MUNIX and incremental stimulation MUNE in ALS patients and control subjects

    DEFF Research Database (Denmark)

    Furtula, Jasna; Johnsen, Birger; Christensen, Peter Broegger

    2013-01-01

    This study compares the new Motor Unit Number Estimation (MUNE) technique, MUNIX, with the more common incremental stimulation MUNE (IS-MUNE) with respect to reproducibility in healthy subjects and as potential biomarker of disease progression in patients with ALS....

  10. Effect of multiple forming tools on geometrical and mechanical properties in incremental sheet forming

    Science.gov (United States)

    Wernicke, S.; Dang, T.; Gies, S.; Tekkaya, A. E.

    2018-05-01

    The trend toward a higher variety of products requires economical manufacturing processes suitable for the production of prototypes and small batches. In the case of complex hollow-shaped parts, single point incremental forming (SPIF) represents a highly flexible process. The flexibility of this process comes at the cost of a very long process time. To decrease the process time, a new incremental forming approach with multiple forming tools is investigated. The influence of two incremental forming tools on the resulting mechanical and geometrical component properties, compared to SPIF, is presented. Sheets made of EN AW-1050A were formed into frustums of a pyramid using different tool-path strategies. Furthermore, several variations of the tool-path strategy are analyzed. A time saving between 40% and 60% was observed, depending on the tool-path and the radii of the forming tools, while the mechanical properties remained unchanged. This knowledge can increase the cost efficiency of incremental forming processes.

  11. Unified performance analysis of hybrid-ARQ with incremental redundancy over free-space optical channels

    KAUST Repository

    Zedini, Emna; Chelli, Ali; Alouini, Mohamed-Slim

    2014-01-01

    In this paper, we carry out a unified performance analysis of hybrid automatic repeat request (HARQ) with incremental redundancy (IR) from an information theoretic perspective over a point-to-point free-space optical (FSO) system. First, we

  12. Incremental Reconstruction of Urban Environments by Edge-Points Delaunay Triangulation

    OpenAIRE

    Romanoni, Andrea; Matteucci, Matteo

    2016-01-01

    Urban reconstruction from a video captured by a surveying vehicle constitutes a core module of automated mapping. When computational power represents a limited resource and a detailed map is not the primary goal, the reconstruction can be performed incrementally, from a monocular video, carving a 3D Delaunay triangulation of sparse points; this allows online incremental mapping for tasks such as traversability analysis or obstacle avoidance. To exploit the sharp edges of urban landscape, we ...

  13. The Boundary Between Planning and Incremental Budgeting: Empirical Examination in a Publicly-Owned Corporation

    OpenAIRE

    S. K. Lioukas; D. J. Chambers

    1981-01-01

    This paper is a study within the field of public budgeting. It focuses on the capital budget, and it attempts to model and analyze the capital budgeting process using a framework previously developed in the literature of incremental budgeting. Within this framework the paper seeks to determine empirically whether the movement of capital expenditure budgets can be represented as the routine application of incremental adjustments over an existing base of allocations and whether further, forward...

  14. Global Combat Support System - Army Increment 2 (GCSS-A Inc 2)

    Science.gov (United States)

    2016-03-01

    2016 Major Automated Information System Annual Report for the Global Combat Support System - Army Increment 2 (GCSS-A Inc 2). DoD Component: Army.

  15. Deliberate and Crisis Action Planning and Execution Segments Increment 2A (DCAPES Inc 2A)

    Science.gov (United States)

    2016-03-01

    2016 Major Automated Information System Annual Report for the Deliberate and Crisis Action Planning and Execution Segments Increment 2A (DCAPES Inc 2A). DoD Component: Air Force. Acquisition Program Baseline (APB) dated March 9, 2015.

  16. Deliberate and Crisis Action Planning and Execution Segments Increment 2B (DCAPES Inc 2B)

    Science.gov (United States)

    2016-03-01

    2016 Major Automated Information System Annual Report for the Deliberate and Crisis Action Planning and Execution Segments Increment 2B (DCAPES Inc 2B). DoD Component: Air Force.

  17. Applying CLSM to increment core surfaces for histometric analyses: A novel advance in quantitative wood anatomy

    OpenAIRE

    Wei Liang; Ingo Heinrich; Gerhard Helle; I. Dorado Liñán; T. Heinken

    2013-01-01

    A novel procedure has been developed to conduct cell structure measurements on increment core samples of conifers. The procedure combines readily available hardware and software equipment. The essential part of the procedure is the application of a confocal laser scanning microscope (CLSM) which captures images directly from increment cores surfaced with the advanced WSL core-microtome. Cell wall and lumen are displayed with a strong contrast due to the monochrome black and green nature of th...

  18. A System to Derive Optimal Tree Diameter Increment Models from the Eastwide Forest Inventory Data Base (EFIDB)

    Science.gov (United States)

    Don C. Bragg

    2002-01-01

    This article is an introduction to the computer software used by the Potential Relative Increment (PRI) approach to optimal tree diameter growth modeling. These DOS programs extract qualified tree and plot data from the Eastwide Forest Inventory Data Base (EFIDB), calculate relative tree increment, sort for the highest relative increments by diameter class, and...

  19. Estimation of incremental reactivities for multiple day scenarios: an application to ethane and dimethoxymethane

    Science.gov (United States)

    Stockwell, William R.; Geiger, Harald; Becker, Karl H.

    Single-day scenarios are used by definition to calculate incremental reactivities (Carter, J. Air Waste Management Assoc. 44 (1994) 881-899), but even unreactive organic compounds may have a non-negligible effect on ozone concentrations if multiple-day scenarios are considered. The concentrations of unreactive compounds and their products may build up over a multiple-day period, and the oxidation products may be highly reactive or highly unreactive, affecting the overall incremental reactivity of the organic compound. We have developed a method for calculating incremental reactivities over multiple days based on a standard scenario for polluted European conditions. This method was used to estimate maximum incremental reactivities (MIR) and maximum ozone incremental reactivities (MOIR) for ethane and dimethoxymethane for scenarios ranging from 1 to 6 days. It was found that the incremental reactivities increased as the length of the simulation period increased. The MIR of ethane increased faster than the value for dimethoxymethane as the scenarios became longer. The MOIRs of ethane and dimethoxymethane increased, but the change was more modest for scenarios longer than 3 days. The MOIRs of both volatile organic compounds were equal, within the uncertainties of their chemical mechanisms, by the 5-day scenario. These results show that dimethoxymethane has an ozone-forming potential on a per-mass basis that is only somewhat greater than that of ethane if multiple-day scenarios are considered.
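
    The definitional calculation behind these numbers can be sketched as follows (hypothetical ozone values; the study drives this with a full multi-day photochemical scenario, not a toy function).

        def incremental_reactivity(o3_base_ppb, o3_perturbed_ppb, delta_voc_g):
            """Incremental reactivity: ppb of extra ozone per gram of added VOC."""
            return (o3_perturbed_ppb - o3_base_ppb) / delta_voc_g

        # Multi-day behaviour: carryover lets a slowly reacting compound keep
        # contributing, so the per-day IR grows with scenario length.
        base      = [80.0, 84.0, 87.0, 89.0]  # peak O3 per day, base scenario
        perturbed = [80.2, 84.5, 87.9, 90.2]  # same days with 1 g extra ethane
        for day, (b, p) in enumerate(zip(base, perturbed), start=1):
            print(day, incremental_reactivity(b, p, 1.0))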

  20. Broad Learning System: An Effective and Efficient Incremental Learning System Without the Need for Deep Architecture.

    Science.gov (United States)

    Chen, C L Philip; Liu, Zhulin

    2018-01-01

    The Broad Learning System (BLS), which aims to offer an alternative way of learning in deep structure, is proposed in this paper. Deep structures and learning suffer from a time-consuming training process because of the large number of connecting parameters in filters and layers. Moreover, a complete retraining process is required if the structure is not sufficient to model the system. The BLS is established in the form of a flat network, where the original inputs are transferred and placed as "mapped features" in feature nodes and the structure is expanded in the wide sense in the "enhancement nodes." Incremental learning algorithms are developed for fast remodeling in broad expansion, without a retraining process, if the network needs to be expanded. Two incremental learning algorithms are given, one for the increment of the feature nodes (or filters in a deep structure) and one for the increment of the enhancement nodes. The designed model and algorithms are very versatile for rapid model selection. In addition, another incremental learning algorithm is developed for the case in which a system that has already been modeled encounters new incoming inputs. Specifically, the system can be remodeled incrementally without retraining from the beginning. A satisfactory result for model reduction using singular value decomposition is presented to simplify the final structure. Compared with existing deep neural networks, experimental results on the Modified National Institute of Standards and Technology database and the NYU NORB object recognition dataset demonstrate the effectiveness of the proposed BLS.
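
    The flavor of the enhancement-node increment can be sketched with the standard block pseudoinverse update, which adds output weights for the new nodes without retraining the old ones; this is a simplified reading of the idea, and it assumes the new columns are linearly independent of the old ones.

        import numpy as np

        def add_enhancement_nodes(A, A_pinv, W, H_new, Y):
            """Append enhancement-node outputs H_new to the feature matrix A and
            update the output weights W = pinv(A) @ Y without full retraining.
            Assumes the new columns add full rank (the generic case)."""
            D = A_pinv @ H_new     # projection of the new columns on the old ones
            C = H_new - A @ D      # part of H_new not spanned by A
            E = np.linalg.pinv(C)  # C has full column rank by assumption
            A_new_pinv = np.vstack([A_pinv - D @ E, E])
            W_new = np.vstack([W - D @ (E @ Y), E @ Y])
            return np.hstack([A, H_new]), A_new_pinv, W_new

        rng = np.random.default_rng(0)
        A = rng.standard_normal((50, 8)); Y = rng.standard_normal((50, 3))
        W = np.linalg.pinv(A) @ Y
        H_new = rng.standard_normal((50, 4))   # outputs of the new nodes
        A2, A2_pinv, W2 = add_enhancement_nodes(A, np.linalg.pinv(A), W, H_new, Y)
        assert np.allclose(W2, np.linalg.pinv(A2) @ Y)  # matches full recomputation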

  1. Calculation of the increment reduction in spruce stands by charcoal smoke

    Energy Technology Data Exchange (ETDEWEB)

    Guede, J

    1954-01-01

    Chronic damage to spruce trees by charcoal smoke, often hardly noticeable from outward appearance but causing marked reductions of wood increment, can be determined by means of calculations based on increment cores. Sulfurous acid anhydride causes the closure of the stomata of the needles, by which the circulation of water is checked. Assimilation and wood increment are reduced. The cores are taken from uninjured trees belonging to the dominant class. These trees are liable to irregular variations in the trend of growth only through atmospheric influences and disturbances in the circulation of water. The decrease of increment of a stand can be judged by the trend of growth of the basal area of sample trees. Two methods are applied. In the first method, the difference between the mean total increment before the damage was caused and that after it is calculated from the yield table, deriving the site quality classes from the basal area growth of dominant stems. This is possible by using the mean diameter of each age class and the frequency curve of basal area for each site class. In the other method, the reduction of basal area increment of sample trees is measured directly. The total reduction of a stand can be judged by the share of the dominant stem class in the total current growth of the basal area of a sound stand and by the percentage reduction of the sample trees.

  2. MAD parsing and conversion code

    International Nuclear Information System (INIS)

    Mokhov, Dmitri N.

    2000-01-01

    The authors describe design and implementation issues while developing an embeddable MAD language parser. Two working applications of the parser are also described, namely, MAD-> C++ converter and C++ factory. The report contains some relevant details about the parser and examples of converted code. It also describes some of the problems that were encountered and the solutions found for them

  3. Parsing Universal Dependencies without training

    NARCIS (Netherlands)

    Martínez Alonso, Héctor; Agić, Željko; Plank, Barbara; Søgaard, Anders

    2017-01-01

    We propose UDP, the first training-free parser for Universal Dependencies (UD). Our algorithm is based on PageRank and a small set of head attachment rules. It features two-step decoding to guarantee that function words are attached as leaf nodes. The parser requires no training, and it is
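
    As a loose, training-free toy in the same spirit (not the UDP algorithm itself), one can rank words with PageRank over an adjacency graph, take the top-ranked content word as root, and keep function words as leaves; the tag set, graph and attachment rule below are all simplifications.

        FUNCTION_TAGS = {"DET", "ADP", "AUX", "PART"}  # simplified inventory

        def pagerank(n, edges, d=0.85, iters=50):
            """Plain power-iteration PageRank on an undirected word graph."""
            nbrs = [[] for _ in range(n)]
            for a, b in edges:
                nbrs[a].append(b)
                nbrs[b].append(a)
            r = [1.0 / n] * n
            for _ in range(iters):
                r = [(1 - d) / n + d * sum(r[u] / len(nbrs[u]) for u in nbrs[v])
                     for v in range(n)]
            return r

        words = ["the", "dog", "chased", "a", "cat"]
        tags = ["DET", "NOUN", "VERB", "DET", "NOUN"]
        edges = [(i, i + 1) for i in range(len(words) - 1)]  # adjacency graph
        rank = pagerank(len(words), edges)

        content = [i for i, t in enumerate(tags) if t not in FUNCTION_TAGS]
        root = max(content, key=lambda i: rank[i])
        # Attach every other word to the nearest content word, so function
        # words always end up as leaf nodes (the two-step decoding idea).
        heads = {i: (-1 if i == root else
                     min((j for j in content if j != i), key=lambda j: abs(i - j)))
                 for i in range(len(words))}
        print(root, heads)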

  4. Partial dependency parsing for Irish

    OpenAIRE

    Uí Dhonnchadha, Elaine; van Genabith, Josef

    2010-01-01

    In this paper we present a partial dependency parser for Irish, in which Constraint Grammar (CG) rules are used to annotate dependency relations and grammatical functions in unrestricted Irish text. Chunking is performed using a regular-expression grammar which operates on the dependency tagged sentences. As this is the first implementation of a parser for unrestricted Irish text (to our knowledge), there were no guidelines or precedents available. Therefore deciding what constitutes a syntac...

  5. Parsing statistical machine translation output

    NARCIS (Netherlands)

    Carter, S.; Monz, C.; Vetulani, Z.

    2009-01-01

    Despite increasing research into the use of syntax during statistical machine translation, the incorporation of syntax into language models has seen limited success. We present a study of the discriminative abilities of generative syntax-based language models, over and above standard n-gram models,

  6. Stem analysis program (GOAP for evaluating of increment and growth data at individual tree

    Directory of Open Access Journals (Sweden)

    Gafura Aylak Özdemir

    2016-07-01

    Stem analysis is a method for evaluating in detail the increment and growth data of an individual tree over past periods, and it is widely used in various forestry disciplines. The raw data of a stem analysis consist of annual ring counts and measurements performed on cross sections taken from an individual tree by the section method. Evaluating these raw data takes considerable time, so a computer program was developed in this study to perform stem analysis quickly and efficiently. The program, which evaluates the raw stem analysis data numerically and graphically, was written as a macro using the Visual Basic for Applications feature of MS Excel 2013, currently the most widely used spreadsheet. In the program, the height growth model is formed from two different approaches, and individual tree volume based on the section method, cross-sectional area, increments of diameter, height and volume, volume increment percent, and stem form factor at breast height are calculated for the desired period lengths. The calculated values are given in tables. The development of diameter, height, volume, the increments of these variables, volume increment percent, and stem form factor at breast height according to periodic age are given as charts. A stem model showing the development of diameter, height and shape of the individual tree over past periods can also be produced by the program as a chart.
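
    A taste of the bookkeeping such a program automates is sketched below (hypothetical numbers, not GOAP itself): Smalian's formula for section volume, then periodic increments with the volume increment percent.

        import math

        def section_volume(d1_cm, d2_cm, length_m):
            """Smalian's formula: mean of the two end areas times section length."""
            a1 = math.pi * (d1_cm / 200.0) ** 2   # diameter in cm -> area in m^2
            a2 = math.pi * (d2_cm / 200.0) ** 2
            return 0.5 * (a1 + a2) * length_m

        v_section = section_volume(22.0, 16.0, 2.0)   # one 2 m section, m^3

        # Cumulative stem volume (m^3) reconstructed at 5-year ring counts.
        volume_at_age = {5: 0.010, 10: 0.055, 15: 0.140, 20: 0.240}
        ages = sorted(volume_at_age)
        for lo, hi in zip(ages, ages[1:]):
            inc = volume_at_age[hi] - volume_at_age[lo]
            pct = 100.0 * inc / volume_at_age[lo]    # volume increment percent
            print(f"{lo}-{hi} yr: increment {inc:.3f} m^3 ({pct:.0f}%)")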

  7. Atmospheric response to Saharan dust deduced from ECMWF reanalysis (ERA) temperature increments

    Science.gov (United States)

    Kishcha, P.; Alpert, P.; Barkan, J.; Kirchner, I.; Machenhauer, B.

    2003-09-01

    This study focuses on the atmospheric temperature response to dust deduced from a new source of data: the European Re-Analysis (ERA) increments. These increments are the systematic errors of global climate models, generated in the reanalysis procedure. The model errors result not only from the lack of desert dust but also from a complex combination of many kinds of model errors. Over the Sahara desert the lack of dust radiative effect is believed to be a predominant model defect which should significantly affect the increments. This dust effect was examined by considering the correlation between the increments and remotely sensed dust. Comparisons were made between April temporal variations of the ERA analysis increments and the variations of the Total Ozone Mapping Spectrometer aerosol index (AI) between 1979 and 1993. A distinctive structure was identified in the distribution of correlation, composed of three nested areas with high positive correlation (>0.5), low correlation, and high negative correlation. Data from the European Centre for Medium-Range Weather Forecasts (ECMWF) suggest that the PCA (NCA) corresponds mainly to anticyclonic (cyclonic) flow, negative (positive) vorticity and downward (upward) airflow. These findings are associated with the interaction between dust-forced heating/cooling and atmospheric circulation. This paper contributes to a better understanding of dust radiative processes missed in the model.

  8. Numerical simulation of pseudoelastic shape memory alloys using the large time increment method

    Science.gov (United States)

    Gu, Xiaojun; Zhang, Weihong; Zaki, Wael; Moumni, Ziad

    2017-04-01

    The paper presents a numerical implementation of the large time increment (LATIN) method for the simulation of shape memory alloys (SMAs) in the pseudoelastic range. The method was initially proposed as an alternative to the conventional incremental approach for the integration of nonlinear constitutive models. It is adapted here for the simulation of pseudoelastic SMA behavior using the Zaki-Moumni model and is shown to be especially useful in situations where the phase transformation process presents little or no hardening. In these situations, a slight stress variation in a load increment can result in large variations of strain and local state variables, which may lead to difficulties in numerical convergence. In contrast to the conventional incremental method, the LATIN method solves the global equilibrium and local consistency conditions sequentially for the entire loading path. The achieved solution must simultaneously satisfy the conditions of static and kinematic admissibility and consistency after several iterations. The 3D numerical implementation is accomplished using an implicit algorithm and is then used for finite element simulation with the software Abaqus. Computational tests demonstrate the ability of this approach to simulate SMAs presenting flat phase transformation plateaus and subjected to complex loading cases, such as the quasi-static behavior of a stent structure. Some numerical results are contrasted with those obtained using step-by-step incremental integration.

  9. Incremental Validity of the Trait Emotional Intelligence Questionnaire-Short Form (TEIQue-SF).

    Science.gov (United States)

    Siegling, A B; Vesely, Ashley K; Petrides, K V; Saklofske, Donald H

    2015-01-01

    This study examined the incremental validity of the adult short form of the Trait Emotional Intelligence Questionnaire (TEIQue-SF) in predicting 7 construct-relevant criteria beyond the variance explained by the Five-factor model and coping strategies. Additionally, the relative contributions of the questionnaire's 4 subscales were assessed. Two samples of Canadian university students completed the TEIQue-SF, along with measures of the Big Five, coping strategies (Sample 1 only), and emotion-laden criteria. The TEIQue-SF showed consistent incremental effects beyond the Big Five or the Big Five and coping strategies, predicting all 7 criteria examined across the 2 samples. Furthermore, 2 of the 4 TEIQue-SF subscales accounted for the measure's incremental validity. Although the findings provide good support for the validity and utility of the TEIQue-SF, directions for further research are emphasized.

  10. Radical and Incremental Innovation Preferences in Information Technology: An Empirical Study in an Emerging Economy

    Directory of Open Access Journals (Sweden)

    Tarun K. Sen

    2011-11-01

    Innovation in information technology is a primary driver of growth in developed economies. Research indicates that countries go through three stages in the adoption of innovation strategies: buying innovation through global trade; incremental innovation from other countries by enhancing efficiency; and, at the most developed stage, radically innovating independently for competitive advantage. The first two stages of innovation maturity depend more on cross-border trade than the third stage. In this paper, we find that IT professionals in an emerging economy such as India believe in radical innovation over incremental innovation (adaptation) as a growth strategy, even though competitive advantage may rest in adaptation. The results of the study report the preference for innovation strategies among IT professionals in India and its implications for other rapidly growing emerging economies.

  11. Determining frustum depth of 304 stainless steel plates with various diameters and thicknesses by incremental forming

    Energy Technology Data Exchange (ETDEWEB)

    Golabi, Sa'id [University of Kashan, Kashan (Iran, Islamic Republic of); Khazaali, Hossain [Bu-Ali Sina University, Hamedan (Iran, Islamic Republic of)]

    2014-08-15

    Incremental forming is increasingly popular because of its flexibility and cost savings. However, no engineering data are available to manufacturers for forming simple shapes like a frustum by incremental forming, and either expensive experimental tests or finite element analysis (FEA) must be employed to determine the depth of a frustum considering thickness, material, cone diameter, wall angle, feed rate, tool diameter, etc. In this study, the finite element technique, confirmed by experimental study, was employed to develop applicable curves for determining the depth of frustums made from 304 stainless steel (SS304) sheet with various cone angles, thicknesses from 0.3 to 1 mm, and major diameters from 50 to 200 mm using incremental forming. Using these curves, the frustum angle and its depth can be predicted from its thickness and major diameter. The effects of feed rate, vertical pitch and tool diameter on frustum depth and surface quality were also addressed in this study.

  12. EFFECT OF COST INCREMENT DISTRIBUTION PATTERNS ON THE PERFORMANCE OF JIT SUPPLY CHAIN

    Directory of Open Access Journals (Sweden)

    Ayu Bidiawati J.R

    2008-01-01

    Cost is an important consideration in supply chain (SC) optimisation, due to the emphasis placed on cost reduction in order to optimise profit. Some researchers use cost as one of their performance measures, and others propose ways of accurately calculating cost. As a product moves across the SC, its cost also increases. This paper studied the effect of cost increment distribution patterns on the performance of a JIT supply chain. In particular, it is necessary to know whether inventory allocation across the SC needs to be modified to accommodate different cost increment distribution patterns. It was found that the funnel is still the best card distribution pattern for the JIT-SC regardless of the cost increment distribution pattern used.

  13. Incremental Validity of the DSM-5 Section III Personality Disorder Traits With Respect to Psychosocial Impairment.

    Science.gov (United States)

    Simms, Leonard J; Calabrese, William R

    2016-02-01

    Traditional personality disorders (PDs) are associated with significant psychosocial impairment. DSM-5 Section III includes an alternative hybrid personality disorder (PD) classification approach, with both type and trait elements, but relatively little is known about the impairments associated with Section III traits. Our objective was to study the incremental validity of Section III traits--compared to normal-range traits, traditional PD criterion counts, and common psychiatric symptomatology--in predicting psychosocial impairment. To that end, 628 current/recent psychiatric patients completed measures of PD traits, normal-range traits, traditional PD criteria, psychiatric symptomatology, and psychosocial impairments. Hierarchical regressions revealed that Section III PD traits incrementally predicted psychosocial impairment over normal-range personality traits, PD criterion counts, and common psychiatric symptomatology. In contrast, the incremental effects for normal-range traits, PD symptom counts, and common psychiatric symptomatology were substantially smaller than for PD traits. These findings have implications for PD classification and the impairment literature more generally.

  14. The balanced scorecard: an incremental approach model to health care management.

    Science.gov (United States)

    Pineno, Charles J

    2002-01-01

    The balanced scorecard represents a technique used in strategic management to translate an organization's mission and strategy into a comprehensive set of performance measures that provide the framework for implementation of strategic management. This article develops an incremental approach for decision making by formulating a specific balanced scorecard model with an index of nonfinancial as well as financial measures. The incremental approach to costs, including profit contribution analysis and probabilities, allows decisionmakers to assess, for example, how their desire to meet different health care needs will cause changes in service design. This incremental approach to the balanced scorecard may prove to be useful in evaluating the existence of causality relationships between different objective and subjective measures to be included within the balanced scorecard.

  15. Ethical leadership: meta-analytic evidence of criterion-related and incremental validity.

    Science.gov (United States)

    Ng, Thomas W H; Feldman, Daniel C

    2015-05-01

    This study examines the criterion-related and incremental validity of ethical leadership (EL) with meta-analytic data. Across 101 samples published over the last 15 years (N = 29,620), we observed that EL demonstrated acceptable criterion-related validity with variables that tap followers' job attitudes, job performance, and evaluations of their leaders. Further, followers' trust in the leader mediated the relationships of EL with job attitudes and performance. In terms of incremental validity, we found that EL significantly, albeit weakly in some cases, predicted task performance, citizenship behavior, and counterproductive work behavior-even after controlling for the effects of such variables as transformational leadership, use of contingent rewards, management by exception, interactional fairness, and destructive leadership. The article concludes with a discussion of ways to strengthen the incremental validity of EL. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  16. Incremental net social benefit associated with using nuclear-fueled power plants

    International Nuclear Information System (INIS)

    Maoz, I.

    1976-12-01

    The incremental net social benefit (INSB) resulting from nuclear-fueled, rather than coal-fired, electric power generation is assessed. The INSB is defined as the difference between the 'incremental social benefit' (ISB)--caused by the cheaper technology of electric power generation--and the 'incremental social cost' (ISC)--associated with the increased power production induced by the cheaper technology. Section 2 focuses on the theoretical and empirical problems associated with the assessment of the long-run price elasticity of the demand for electricity, and on the theoretical-econometric considerations that lead to reasonable estimates of price elasticities of demand among those provided by recent empirical studies. Section 3 covers the theoretical and empirical difficulties associated with the construction of the long-run social marginal cost (LRSMC) curves for electricity. Sections 4 and 5 discuss the assessment methodology and provide numerical examples for the calculation of the INSB resulting from nuclear-fueled power generation.

  17. Incremental inverse kinematics based vision servo for autonomous robotic capture of non-cooperative space debris

    Science.gov (United States)

    Dong, Gangqi; Zhu, Z. H.

    2016-04-01

    This paper proposes a new incremental inverse kinematics based vision servo approach for robotic manipulators to capture a non-cooperative target autonomously. The target's pose and motion are estimated by a vision system using integrated photogrammetry and an EKF algorithm. Based on the estimated pose and motion of the target, the instantaneous desired position of the end-effector is predicted by inverse kinematics and the robotic manipulator is moved incrementally from its current configuration subject to the joint speed limits. This approach effectively eliminates the multiple-solution problem of inverse kinematics and increases the robustness of the control algorithm. The proposed approach is validated by a hardware-in-the-loop simulation, where the pose and motion of the non-cooperative target are estimated by a real vision system. The simulation results demonstrate the effectiveness and robustness of the proposed estimation approach for the target and of the incremental control strategy for the robotic manipulator.
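
    A much-simplified sketch of the incremental step (a planar 2-link arm standing in for the manipulator; the target position would come from the EKF-based pose predictor) illustrates how clamped Jacobian updates sidestep the multiple closed-form IK solutions.

        import numpy as np

        def incremental_ik_step(q, target_xy, dt, qdot_max):
            """One incremental IK step for a planar 2-link arm (unit link lengths).
            Moves the joints toward the instantaneous desired end-effector
            position, clamped to the joint speed limits; no closed-form IK and
            hence no multiple-solution ambiguity."""
            x = np.array([np.cos(q[0]) + np.cos(q[0] + q[1]),
                          np.sin(q[0]) + np.sin(q[0] + q[1])])
            J = np.array([[-np.sin(q[0]) - np.sin(q[0] + q[1]), -np.sin(q[0] + q[1])],
                          [ np.cos(q[0]) + np.cos(q[0] + q[1]),  np.cos(q[0] + q[1])]])
            qdot = np.linalg.pinv(J) @ (target_xy - x) / dt  # desired joint rates
            qdot = np.clip(qdot, -qdot_max, qdot_max)        # joint speed limits
            return q + qdot * dt

        q = np.array([0.3, 0.5])
        for _ in range(100):   # target would be refreshed by the pose predictor
            q = incremental_ik_step(q, np.array([1.2, 0.8]), dt=0.01, qdot_max=1.0)
        # q steps toward the target at bounded joint speed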

  18. OXYGEN UPTAKE KINETICS DURING INCREMENTAL- AND DECREMENTAL-RAMP CYCLE ERGOMETRY

    Directory of Open Access Journals (Sweden)

    Fadıl Özyener

    2011-09-01

    Full Text Available The pulmonary oxygen uptake (VO2 response to incremental-ramp cycle ergometry typically demonstrates lagged-linear first-order kinetics with a slope of ~10-11 ml·min-1·W-1, both above and below the lactate threshold (ӨL, i.e. there is no discernible VO2 slow component (or "excess" VO2 above ӨL. We were interested in determining whether a reverse ramp profile would yield the same response dynamics. Ten healthy males performed a maximum incremental -ramp (15-30 W·min-1, depending on fitness. On another day, the work rate (WR was increased abruptly to the incremental maximum and then decremented at the same rate of 15-30 W.min-1 (step-decremental ramp. Five subjects also performed a sub-maximal ramp-decremental test from 90% of ӨL. VO2 was determined breath-by-breath from continuous monitoring of respired volumes (turbine and gas concentrations (mass spectrometer. The incremental-ramp VO2-WR slope was 10.3 ± 0.7 ml·min-1·W-1, whereas that of the descending limb of the decremental ramp was 14.2 ± 1.1 ml·min-1·W-1 (p < 0.005. The sub-maximal decremental-ramp slope, however, was only 9. 8 ± 0.9 ml·min-1·W-1: not significantly different from that of the incremental-ramp. This suggests that the VO2 response in the supra-ӨL domain of incremental-ramp exercise manifest not actual, but pseudo, first-order kinetics

  19. Maximal power output during incremental exercise by resistance and endurance trained athletes.

    Science.gov (United States)

    Sakthivelavan, D S; Sumathilatha, S

    2010-01-01

    This study was aimed at comparing the maximal power output by resistance trained and endurance trained athletes during incremental exercise. Thirty male athletes who received resistance training (Group I) and thirty male athletes of similar age group who received endurance training (Group II) for a period of more than 1 year were chosen for the study. Physical parameters were measured and exercise stress testing was done on a cycle ergometer with a portable gas analyzing system. The maximal progressive incremental cycle ergometer power output at peak exercise and carbon dioxide production at VO2max were measured. Highly significant (P biofeedback and perk up the athlete's performance.

  20. BMI and BMI SDS in childhood: annual increments and conditional change

    OpenAIRE

    Brannsether-Ellingsen, Bente; Eide, Geir Egil; Roelants, Mathieu; Bjerknes, Robert; Juliusson, Petur Benedikt

    2016-01-01

    Background: Early detection of abnormal weight gain in childhood may be important for preventive purposes. It is still debated which annual changes in BMI should warrant attention. Aim: To analyse 1-year increments of Body Mass Index (BMI) and standardised BMI (BMI SDS) in childhood and explore conditional change in BMI SDS as an alternative method to evaluate 1-year changes in BMI. Subjects and methods: The distributions of 1-year increments of BMI (kg/m²) and BMI SDS are summarised by...

  1. Learning in Different Modes: The Interaction Between Incremental and Radical Change

    DEFF Research Database (Denmark)

    Petersen, Anders Hedegaard; Boer, Harry; Gertsen, Frank

    2004-01-01

    The objective of the study presented in this article is to contribute to the development of theory on continuous innovation, i.e. the combination of operationally effective exploitation and strategically flexible exploration. A longitudinal case study is presented of the interaction between...... incremental and radical change in a Danish company, observed through the lens of organizational learning. The radical change process is described in five phases, each of which had its own effects on incremental change initiatives in the company. The research identified four factors explaining these effects, all...

  2. A program for the numerical control of a pulse increment system

    Energy Technology Data Exchange (ETDEWEB)

    Gray, D.C.

    1963-08-21

    This report will describe the important features of the development of magnetic tapes for the numerical control of a pulse-increment system consisting of a modified Gorton lathe and its associated control unit developed by L. E. Foley of Equipment Development Service, Engineering Services, General Electric Co., Schenectady, N.Y. Included is a description of CUPID (Control and Utilization of Pulse Increment Devices), a FORTRAN program for the design of these tapes on the IBM 7090 computer, and instructions for its operation.

  3. Incremental Approach to the Technology of Test Design for Industrial Projects

    Directory of Open Access Journals (Sweden)

    P. D. Drobintsev

    2014-01-01

    The paper presents an approach to reducing the effort of developing test suites for industrial software products, based on an incremental technology. The main problems to be solved by the incremental technology are the fully automated design of test scenarios and a significant reduction of test explosion. The proposed approach provides solutions to these problems through the joint work of designer and customer, through the integration of symbolic verification with the automatic generation of test suites, and through the usage of an efficient technology with the toolset VRS/TAT.

  4. Incremental electrohydraulic forming - A new approach for the manufacture of structured multifunctional sheet metal blanks

    Science.gov (United States)

    Djakow, Eugen; Springer, Robert; Homberg, Werner; Piper, Mark; Tran, Julian; Zibart, Alexander; Kenig, Eugeny

    2017-10-01

    Electrohydraulic forming (EHF) processes permit the production of complex, sharp-edged geometries even when high-strength materials are used. Unfortunately, the forming zone is often limited compared to other sheet metal forming processes. The use of a special industrial-robot-based tool setup and an incremental process strategy could provide a promising solution to this problem. This paper describes such an innovative approach using an electrohydraulic incremental forming machine, which can be employed to manufacture the large, multifunctional and complex part geometries in steel, aluminium, magnesium and reinforced plastics that are employed in lightweight constructions or heating elements.

  5. Enthalpy increment measurements of Sr3Zr2O7(s) and Sr4Zr3O10(s)

    International Nuclear Information System (INIS)

    Banerjee, A.; Dash, S.; Prasad, R.; Venugopal, V.

    1998-01-01

    Enthalpy increment measurements on Sr₃Zr₂O₇(s) and Sr₄Zr₃O₁₀(s) were carried out using a Calvet micro-calorimeter. The enthalpy increment values were least-squares analyzed with the constraints that H°(T) - H°(298.15 K) equals zero at 298.15 K and that Cp°(298.15 K) equals the estimated value. The dependence of the enthalpy increment on temperature is given. (orig.)
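
    The constrained fit described here can be sketched generically (hypothetical data, not the paper's values): choosing a polynomial in (T - T0) that vanishes at T0, and fixing its linear coefficient to the estimated Cp°(298.15 K), enforces both constraints by construction.

        import numpy as np

        # Model: H(T) - H(T0) = Cp0*(T - T0) + b*(T - T0)**2 + c*(T - T0)**3.
        # Zero at T0 by construction; slope at T0 equals the estimated Cp0.
        T0, Cp0 = 298.15, 0.230             # J g^-1 K^-1, assumed estimate
        T = np.array([350.0, 450.0, 550.0, 650.0, 750.0])
        H = np.array([12.1, 36.5, 62.8, 90.9, 120.6])   # increments, J g^-1

        t = T - T0
        A = np.column_stack([t**2, t**3])   # only b and c are free parameters
        b, c = np.linalg.lstsq(A, H - Cp0 * t, rcond=None)[0]
        print(f"H(T)-H(T0) = {Cp0}*t + {b:.3e}*t^2 + {c:.3e}*t^3, t = T - {T0} K")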

  6. The Trait Emotional Intelligence Questionnaire: Internal Structure, Convergent, Criterion, and Incremental Validity in an Italian Sample

    Science.gov (United States)

    Andrei, Federica; Smith, Martin M.; Surcinelli, Paola; Baldaro, Bruno; Saklofske, Donald H.

    2016-01-01

    This study investigated the structure and validity of the Italian translation of the Trait Emotional Intelligence Questionnaire. Data were self-reported from 227 participants. Confirmatory factor analysis supported the four-factor structure of the scale. Hierarchical regressions also demonstrated its incremental validity beyond demographics, the…

  7. One Size Does Not Fit All: Managing Radical and Incremental Creativity

    Science.gov (United States)

    Gilson, Lucy L.; Lim, Hyoun Sook; D'Innocenzo, Lauren; Moye, Neta

    2012-01-01

    This research extends creativity theory by re-conceptualizing creativity as a two-dimensional construct (radical and incremental) and examining the differential effects of intrinsic motivation, extrinsic rewards, and supportive supervision on perceptions of creativity. We hypothesize and find two distinct types of creativity that are associated…

  8. Relating annual increments of the endangered Blanding's turtle plastron growth to climate.

    Science.gov (United States)

    Richard, Monik G; Laroque, Colin P; Herman, Thomas B

    2014-05-01

    This research is the first published study to report a relationship between climate variables and plastron growth increments of turtles, in this case the endangered Nova Scotia Blanding's turtle (Emydoidea blandingii). We used techniques and software common to the discipline of dendrochronology to successfully cross-date our growth increment data series, to detrend and average our series of 80 immature Blanding's turtles into one common chronology, and to seek correlations between the chronology and environmental temperature and precipitation variables. Our cross-dated chronology had a series intercorrelation of 0.441 (above 99% confidence interval), an average mean sensitivity of 0.293, and an average unfiltered autocorrelation of 0.377. Our master chronology represented increments from 1975 to 2007 (33 years), with index values ranging from a low of 0.688 in 2006 to a high of 1.303 in 1977. Univariate climate response function analysis on mean monthly air temperature and precipitation values revealed a positive correlation with the previous year's May temperature and current year's August temperature; a negative correlation with the previous year's October temperature; and no significant correlation with precipitation. These techniques for determining growth increment response to environmental variables should be applicable to other turtle species and merit further exploration.
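
    Two of the chronology statistics quoted above have simple textbook definitions; a sketch with a hypothetical index series (not the turtle data):

        # Standard dendrochronology statistics on a detrended index series.

        def mean_sensitivity(x):
            """Average relative difference between consecutive increments."""
            return sum(abs(2.0 * (b - a) / (b + a))
                       for a, b in zip(x, x[1:])) / (len(x) - 1)

        def autocorrelation_lag1(x):
            n = len(x); mu = sum(x) / n
            num = sum((x[t] - mu) * (x[t + 1] - mu) for t in range(n - 1))
            den = sum((v - mu) ** 2 for v in x)
            return num / den

        indices = [1.30, 1.05, 0.92, 1.10, 0.88, 0.69, 1.01, 1.15, 0.97, 1.08]
        print(mean_sensitivity(indices), autocorrelation_lag1(indices))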

  9. An Empirical Analysis of Incremental Capital Structure Decisions Under Managerial Entrenchment

    NARCIS (Netherlands)

    de Jong, A.; Veld, C.H.

    1998-01-01

    We study incremental capital structure decisions of Dutch companies. From 1977 to 1996 these companies made 110 issues of public and private seasoned equity and 137 public issues of straight debt. Managers of Dutch companies are entrenched. For this reason a discrepancy exists between

  10. Volatilities, traded volumes, and the hypothesis of price increments in derivative securities

    Science.gov (United States)

    Lim, Gyuchang; Kim, SooYong; Scalas, Enrico; Kim, Kyungsik

    2007-08-01

    A detrended fluctuation analysis (DFA) is applied to the statistics of Korean treasury bond (KTB) futures, from which the logarithmic increments, volatilities, and traded volumes are estimated over a specific time lag. In this study, the logarithmic increment of futures prices has no long-memory property, while the volatility and the traded volume exhibit long-memory properties. To determine whether the volatility clustering is due to an inherent higher-order correlation not detected by the direct application of the DFA to the logarithmic increments of KTB futures, the original tick data of futures prices were shuffled and a geometric Brownian random walk with the same mean and standard deviation was generated. It was found from a comparison of the three tick data sets that the higher-order correlation inherent in the logarithmic increments leads to volatility clustering. In particular, the result of the DFA on volatilities and traded volumes can be supported by the hypothesis of price changes.
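
    A minimal sketch of the DFA procedure applied in such studies (first-order detrending; the exponent alpha is the slope of log F(n) against log n):

        import numpy as np

        def dfa(x, scales, order=1):
            """Detrended fluctuation analysis: F(n) for each window size n."""
            y = np.cumsum(x - np.mean(x))        # profile (integrated series)
            F = []
            for n in scales:
                m = len(y) // n
                segs = y[: m * n].reshape(m, n)
                t = np.arange(n)
                rms = []
                for seg in segs:
                    coef = np.polyfit(t, seg, order)      # local trend
                    rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
                F.append(np.sqrt(np.mean(rms)))
            return np.array(F)

        rng = np.random.default_rng(1)
        x = rng.standard_normal(4096)            # uncorrelated noise -> alpha ~ 0.5
        scales = np.array([8, 16, 32, 64, 128, 256])
        alpha = np.polyfit(np.log(scales), np.log(dfa(x, scales)), 1)[0]
        print(round(alpha, 2))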

  11. BMI and BMI SDS in childhood: annual increments and conditional change.

    Science.gov (United States)

    Brannsether, Bente; Eide, Geir Egil; Roelants, Mathieu; Bjerknes, Robert; Júlíusson, Pétur Benedikt

    2017-02-01

    Background: Early detection of abnormal weight gain in childhood may be important for preventive purposes. It is still debated which annual changes in BMI should warrant attention. Aim: To analyse 1-year increments of Body Mass Index (BMI) and standardised BMI (BMI SDS) in childhood and explore conditional change in BMI SDS as an alternative method to evaluate 1-year changes in BMI. Subjects and methods: The distributions of 1-year increments of BMI (kg/m²) and BMI SDS are summarised by percentiles. Differences according to sex, age, height, weight, initial BMI and weight status on the BMI and BMI SDS increments were assessed with multiple linear regression. Conditional change in BMI SDS was based on the correlation between annual BMI measurements converted to SDS. Results: BMI increments depended significantly on sex, height, weight and initial BMI. Changes in BMI SDS depended significantly only on the initial BMI SDS. The distribution of conditional change in BMI SDS using a two-correlation model was close to normal (mean = 0.11, SD = 1.02, n = 1167), with 3.2% (2.3-4.4%) of the observations below -2 SD and 2.8% (2.0-4.0%) above +2 SD. Conclusion: Conditional change in BMI SDS can be used to detect unexpectedly large changes in BMI SDS. Although this method requires the use of a computer, it may be clinically useful to detect aberrant weight development.
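
    Under a bivariate-normal reading of the method (the paper's two-correlation model refines this), conditional change standardises the second measurement against the expectation implied by the first; a sketch with an illustrative correlation:

        import math

        def conditional_change(sds1, sds2, r):
            """Change in BMI SDS conditional on the starting value.
            With correlation r between annual SDS measurements,
            (sds2 - r*sds1) / sqrt(1 - r**2) is approximately standard
            normal, so values beyond +/-2 flag unexpectedly large changes.
            r = 0.9 here is illustrative, not the paper's estimate."""
            return (sds2 - r * sds1) / math.sqrt(1.0 - r ** 2)

        print(conditional_change(sds1=0.5, sds2=1.6, r=0.9))  # ~2.64 -> flagged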

  12. Investigating Postgraduate College Admission Interviews: Generalizability Theory Reliability and Incremental Predictive Validity

    Science.gov (United States)

    Arce-Ferrer, Alvaro J.; Castillo, Irene Borges

    2007-01-01

    The use of face-to-face interviews is controversial for college admissions decisions in light of the lack of availability of validity and reliability evidence for most college admission processes. This study investigated reliability and incremental predictive validity of a face-to-face postgraduate college admission interview with a sample of…

  13. Software designs of image processing tasks with incremental refinement of computation.

    Science.gov (United States)

    Anastasia, Davide; Andreopoulos, Yiannis

    2010-08-01

    Software realizations of computationally demanding image processing tasks (e.g., image transforms and convolution) do not currently provide graceful degradation when their clock-cycle budgets are reduced, e.g., when delay deadlines are imposed in a multitasking environment to meet throughput requirements. This is an important obstacle in the quest for full utilization of modern programmable platforms' capabilities, since worst-case considerations must be in place for reasonable quality of results. In this paper, we propose (and make available online) platform-independent software designs performing bitplane-based computation combined with an incremental packing framework in order to realize block transforms, 2-D convolution and frame-by-frame block matching. The proposed framework realizes incremental computation: progressive processing of input-source increments improves the output quality monotonically. Comparisons with the equivalent non-incremental software realization of each algorithm reveal that, for the same precision of the result, the proposed approach can lead to comparable or faster execution, while it can be arbitrarily terminated and provide the result up to the computed precision. Application examples with region-of-interest based incremental computation, task scheduling per frame, and energy-distortion scalability verify that our proposal provides significant performance scalability with graceful degradation.
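
    The bitplane idea can be sketched in a few lines (1-D convolution for brevity; the paper's designs cover block transforms, 2-D convolution and block matching): processing bitplanes from the most significant downward gives a monotonically improving result that is exact once all planes are processed.

        import numpy as np

        def incremental_convolve(x, h, planes_to_process):
            """Convolve a non-negative integer signal with kernel h, one
            bitplane at a time, most significant plane first. Stopping after
            fewer planes trades precision for cycles; all planes is exact."""
            nbits = int(np.max(x)).bit_length()
            out = np.zeros(len(x) + len(h) - 1)
            for k in range(nbits - 1, nbits - 1 - planes_to_process, -1):
                plane = (x >> k) & 1                     # one input bitplane
                out += (2 ** k) * np.convolve(plane, h)  # accumulate its share
            return out

        x = np.array([12, 200, 37, 90, 255, 3], dtype=np.int64)
        h = np.array([0.25, 0.5, 0.25])
        exact = np.convolve(x, h)
        approx = incremental_convolve(x, h, planes_to_process=4)  # top 4 of 8
        print(np.max(np.abs(exact - approx)))  # error bounded by dropped planes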

  14. How to Perform Precise Soil and Sediment Sampling? One solution: The Fine Increment Soil Collector (FISC)

    International Nuclear Information System (INIS)

    Mabit, L.; Toloza, A.; Meusburger, K.; Alewell, C.; Iurian, A-R.; Owens, P.N.

    2014-01-01

    Soil and sediment related research for terrestrial agrienvironmental assessments requires accurate depth incremental sampling to perform detailed analysis of physical, geochemical and biological properties of soil and exposed sediment profiles. Existing equipment does not allow collecting soil/sediment increments at millimetre resolution. The Fine Increment Soil Collector (FISC), developed by the SWMCN Laboratory, allows much greater precision in incremental soil/sediment sampling. It facilitates the easy recovery of collected material by using a simple screw-thread extraction system (see Figure 1). The FISC has been designed specifically to enable standardized scientific investigation of shallow soil/sediment samples. In particular, applications have been developed in two IAEA Coordinated Research Projects (CRPs): CRP D1.20.11 on “Integrated Isotopic Approaches for an Area-wide Precision Conservation to Control the Impacts of Agricultural Practices on Land Degradation and Soil Erosion” and CRP D1.50.15 on “Response to Nuclear Emergencies Affecting Food and Agriculture.”

  15. Per tree estimates with n-tree distance sampling: an application to increment core data

    Science.gov (United States)

    Thomas B. Lynch; Robert F. Wittwer

    2002-01-01

    Per tree estimates using the n trees nearest a point can be obtained by using a ratio of per unit area estimates from n-tree distance sampling. This ratio was used to estimate average age by d.b.h. classes for cottonwood trees (Populus deltoides Bartr. ex Marsh.) on the Cimarron National Grassland. Increment...

  16. Literature Review of Data on the Incremental Costs to Design and Build Low-Energy Buildings

    Energy Technology Data Exchange (ETDEWEB)

    Hunt, W. D.

    2008-05-14

    This document summarizes findings from a literature review into the incremental costs associated with low-energy buildings. The goal of this work is to help establish as firm an analytical foundation as possible for the Building Technology Program's cost-effective net-zero energy goal in the year 2025.

  17. Analogical reasoning: An incremental or insightful process? What cognitive and cortical evidence suggests.

    Science.gov (United States)

    Antonietti, Alessandro; Balconi, Michela

    2010-06-01

    The step-by-step, incremental nature of analogical reasoning can be questioned, since analogy making appears to be an insight-like process. This alternative view of analogical thinking can be integrated in Speed's model, even though the alleged role played by dopaminergic subcortical circuits needs further supporting evidence.

  18. A diameter increment model for Red Fir in California and Southern Oregon

    Science.gov (United States)

    K. Leroy Dolph

    1992-01-01

    Periodic (10-year) diameter increment of individual red fir trees in California and southern Oregon can be predicted from the initial diameter and crown ratio of each tree, site index, percent slope, and aspect of the site. The model actually predicts the natural logarithm of the change in squared diameter inside bark between the start and the end of a 10-year growth period....

  19. Is It that Difficult to Find a Good Preference Order for the Incremental Algorithm?

    Science.gov (United States)

    Krahmer, Emiel; Koolen, Ruud; Theune, Mariet

    2012-01-01

    In a recent article published in this journal (van Deemter, Gatt, van der Sluis, & Power, 2012), the authors criticize the Incremental Algorithm (a well-known algorithm for the generation of referring expressions due to Dale & Reiter, 1995, also in this journal) because of its strong reliance on a pre-determined, domain-dependent Preference Order.…

  20. How to Perform Precise Soil and Sediment Sampling? One solution: The Fine Increment Soil Collector (FISC)

    Energy Technology Data Exchange (ETDEWEB)

    Mabit, L.; Toloza, A. [Soil and Water Management and Crop Nutrition Laboratory, IAEA, Seibersdorf (Austria); Meusburger, K.; Alewell, C. [Environmental Geosciences, Department of Environmental Sciences, University of Basel, Basel (Switzerland); Iurian, A-R. [Babes-Bolyai University, Faculty of Environmental Science and Engineering, Cluj-Napoca (Romania); Owens, P. N. [Environmental Science Program and Quesnel River Research Centre, University of Northern British Columbia, Prince George, British Columbia (Canada)

    2014-07-15

    Soil and sediment related research for terrestrial agrienvironmental assessments requires accurate depth incremental sampling to perform detailed analysis of physical, geochemical and biological properties of soil and exposed sediment profiles. Existing equipment does not allow collecting soil/sediment increments at millimetre resolution. The Fine Increment Soil Collector (FISC), developed by the SWMCN Laboratory, allows much greater precision in incremental soil/sediment sampling. It facilitates the easy recovery of collected material by using a simple screw-thread extraction system (see Figure 1). The FISC has been designed specifically to enable standardized scientific investigation of shallow soil/sediment samples. In particular, applications have been developed in two IAEA Coordinated Research Projects (CRPs): CRP D1.20.11 on “Integrated Isotopic Approaches for an Area-wide Precision Conservation to Control the Impacts of Agricultural Practices on Land Degradation and Soil Erosion” and CRP D1.50.15 on “Response to Nuclear Emergencies Affecting Food and Agriculture.”.

  1. Evaluating growth assumptions using diameter or radial increments in natural even-aged longleaf pine

    Science.gov (United States)

    John C. Gilbert; Ralph S. Meldahl; Jyoti N. Rayamajhi; John S. Kush

    2010-01-01

    When using increment cores to predict future growth, one often assumes future growth is identical to past growth for individual trees. Once this assumption is accepted, a decision has to be made between which growth estimate should be used, constant diameter growth or constant basal area growth. Often, the assumption of constant diameter growth is used due to the ease...
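
    The two assumptions lead to different projections; a small sketch (hypothetical numbers) makes the contrast explicit, since constant basal-area growth implies a declining diameter increment:

        import math

        def project_constant_diameter(d_now, past_diameter_growth):
            """Future diameter if past diameter growth simply repeats."""
            return d_now + past_diameter_growth

        def project_constant_basal_area(d_now, past_diameter_growth):
            """Future diameter if basal-area growth (proportional to d^2)
            repeats instead."""
            d_then = d_now - past_diameter_growth
            ba_growth = d_now ** 2 - d_then ** 2   # in squared-diameter units
            return math.sqrt(d_now ** 2 + ba_growth)

        d, g = 30.0, 4.0   # cm d.b.h. and past-period growth (hypothetical)
        print(project_constant_diameter(d, g))    # 34.0 cm
        print(project_constant_basal_area(d, g))  # ~33.5 cm: same BA, less diameter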

  2. Diagnostic value of triphasic incremental helical CT in early and progressive gastric carcinoma

    International Nuclear Information System (INIS)

    Gao Jianbo; Yan Xuehua; Li Mengtai; Guo Hua; Chen Xuejun; Guan Sheng; Zhang Xiefu; Li Shuxin; Yang Xiaopeng

    2001-01-01

    Objective: To investigate the helical CT enhancement characteristics of gastric carcinoma, and the value of triphasic incremental helical CT of the water-filled stomach in the diagnosis and preoperative staging of gastric carcinoma. Methods: Both double-contrast barium examination and triphasic incremental helical CT of the stomach with the water-filling method were performed in 46 patients with gastric carcinoma. Results: (1) Among these patients, the normal gastric wall exhibited a one-layered structure in 18 patients and a two- or three-layered structure in 28 patients in the arterial and portal venous phases. (2) Two cases of early stomach cancer showed marked enhancement in the arterial and portal venous phases and obvious attenuation of enhancement in the equilibrium phase. In contrast, 32 of the 44 advanced gastric carcinomas showed more marked enhancement in the venous phase than in the arterial phase (t = 4.226, P < 0.05). (3) The overall accuracy of triphasic incremental helical CT in determining TNM stage was 81.0%. Conclusion: Different types of gastric carcinoma have different enhancement features. Triphasic incremental helical CT is more accurate than conventional CT in the preoperative staging of gastric carcinoma.

  3. On the search for an appropriate metric for reaction time to suprathreshold increments and decrements.

    Science.gov (United States)

    Vassilev, Angel; Murzac, Adrian; Zlatkova, Margarita B; Anderson, Roger S

    2009-03-01

    Weber contrast, ΔL/L, is a widely used contrast metric for aperiodic stimuli. Zele, Cao & Pokorny [Zele, A. J., Cao, D., & Pokorny, J. (2007). Threshold units: A correct metric for reaction time? Vision Research, 47, 608-611] found that neither Weber contrast nor its transform to detection-threshold units equates human reaction times in response to luminance increments and decrements under selective rod stimulation. Here we show that their rod reaction times are equated when plotted against the spatial luminance ratio between the stimulus and its background (Lmax/Lmin, the larger over the smaller of the background and stimulus luminances). Similarly, reaction times to parafoveal S-cone selective increments and decrements from our previous studies [Murzac, A. (2004). A comparative study of the temporal characteristics of processing of S-cone incremental and decremental signals. PhD thesis, New Bulgarian University, Sofia; Murzac, A., & Vassilev, A. (2004). Reaction time to S-cone increments and decrements. In: 7th European conference on visual perception, Budapest, August 22-26. Perception, 33, 180 (Abstract)] are better described by the spatial luminance ratio than by Weber contrast. We assume that the type of stimulus detection, by temporal (successive) luminance discrimination, by spatial (simultaneous) luminance discrimination, or by both [Sperling, G., & Sondhi, M. M. (1968). Model for visual luminance discrimination and flicker detection. Journal of the Optical Society of America, 58, 1133-1145], determines the appropriateness of one or the other contrast metric for reaction time.
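
    The two candidate metrics are easy to contrast numerically: Weber contrast is signed and asymmetric about the background, while the spatial luminance ratio treats matched increments and decrements symmetrically (a sketch, not the authors' code):

        def weber_contrast(L_stim, L_bg):
            return (L_stim - L_bg) / L_bg

        def luminance_ratio(L_stim, L_bg):
            """Spatial luminance ratio Lmax/Lmin: larger over smaller."""
            return max(L_stim, L_bg) / min(L_stim, L_bg)

        L_bg = 10.0
        for L in (20.0, 5.0):              # an increment and a decrement
            print(weber_contrast(L, L_bg), luminance_ratio(L, L_bg))
        # increment: Weber +1.0, ratio 2.0; decrement: Weber -0.5, ratio 2.0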

  4. Lead 210 and moss-increment dating of two Finnish Sphagnum hummocks

    International Nuclear Information System (INIS)

    El-Daoushy, F.

    1982-01-01

    A comparison is presented of ²¹⁰Pb dating data with moss-increment dates of selected peat material from Finland. The measurements of ²¹⁰Pb were carried out by determining the granddaughter product ²¹⁰Po by means of isotope dilution. The ages in ²¹⁰Pb yr were calculated using the constant initial concentration and the constant rate of supply models. (U.K.)

  5. Successive 1-Month Weight Increments in Infancy Can Be Used to Screen for Faltering Linear Growth.

    Science.gov (United States)

    Onyango, Adelheid W; Borghi, Elaine; de Onis, Mercedes; Frongillo, Edward A; Victora, Cesar G; Dewey, Kathryn G; Lartey, Anna; Bhandari, Nita; Baerug, Anne; Garza, Cutberto

    2015-12-01

    Linear growth faltering in the first 2 y contributes greatly to a high stunting burden, and prevention is hampered by the limited capacity in primary health care for timely screening and intervention. This study aimed to determine an approach to predicting long-term stunting from consecutive 1-mo weight increments in the first year of life. By using the reference sample of the WHO velocity standards, the analysis explored patterns of consecutive monthly weight increments among healthy infants. Four candidate screening thresholds of successive increments that could predict stunting were considered, and one was selected for further testing. The selected threshold was applied in a cohort of Bangladeshi infants to assess its predictive value for stunting at ages 12 and 24 mo. Between birth and age 12 mo, 72.6% of infants in the WHO sample tracked within 1 SD of their weight and length. The selected screening criterion ("event") was 2 consecutive monthly increments below the 15th percentile. Bangladeshi infants were born relatively small and, on average, tracked downward from approximately age 6 mo. If the screening strategy is effective, the estimated preventable proportion in the group who experienced the event would be 34% at 12 mo and 24% at 24 mo. This analysis offers an approach for frontline workers to identify children at risk of stunting, allowing for timely initiation of preventive measures. It opens avenues for further investigation into evidence-informed application of the WHO growth velocity standards. © 2015 American Society for Nutrition.

  6. Raising Cervical Cancer Awareness: Analysing the Incremental Efficacy of Short Message Service

    Science.gov (United States)

    Lemos, Marina Serra; Rothes, Inês Areal; Oliveira, Filipa; Soares, Luisa

    2017-01-01

    Objective: To evaluate the incremental efficacy of a Short Message Service (SMS) combined with a brief video intervention in increasing the effects of a health education intervention for cervical cancer prevention, over and beyond a video-alone intervention, with respect to key determinants of health behaviour change--knowledge, motivation and…

  7. The Interpersonal Measure of Psychopathy: Construct and Incremental Validity in Male Prisoners

    Science.gov (United States)

    Zolondek, Stacey; Lilienfeld, Scott O.; Patrick, Christopher J.; Fowler, Katherine A.

    2006-01-01

    The authors examined the construct and incremental validity of the Interpersonal Measure of Psychopathy (IM-P), a relatively new instrument designed to detect interpersonal behaviors associated with psychopathy. Observers of videotaped Psychopathy Checklist-Revised (PCL-R) interviews rated male prisoners (N = 93) on the IM-P. The IM-P correlated…

  8. Between structures and norms : Assessing tax increment financing for the Dutch spatial planning toolkit

    NARCIS (Netherlands)

    Root, Liz; Van Der Krabben, Erwin; Spit, Tejo

    2015-01-01

    The aim of the paper is to assess the institutional (mis)fit of tax increment financing for the Dutch spatial planning financial toolkit. By applying an institutionally oriented assessment framework, we analyse the interconnectivity of Dutch municipal finance and spatial planning structures and

  9. Analytic description of the frictionally engaged in-plane bending process incremental swivel bending (ISB)

    Science.gov (United States)

    Frohn, Peter; Engel, Bernd; Groth, Sebastian

    2018-05-01

    Kinematic forming processes shape geometries through the process parameters, allowing more universal process utilization across geometric configurations. The kinematic forming process Incremental Swivel Bending (ISB) bends sheet metal strips or profiles in plane. The sequence for bending an arc increment is composed of the steps clamping, bending, force release and feed. The bending moment is frictionally engaged by two clamping units in a laterally adjustable bending pivot. A minimum clamping force that hinders the material from slipping through the clamping units is a crucial criterion for achieving a well-defined incremental arc. Therefore, an analytic description of a single bent increment is developed in this paper. The bending moment is calculated from the uniaxial stress distribution over the profile's width depending on the bending pivot's position. Using a Coulomb-based friction model, the necessary clamping force is described in dependence of friction, offset, dimensions of the clamping tools and strip thickness as well as material parameters. Boundaries for the uniaxial stress calculation are given in dependence of friction, tools' dimensions and strip thickness. The results indicate that moving the bending pivot to an eccentric position significantly affects the process' bending moment and, hence, the clamping force, which is given in dependence of yield stress and hardening exponent. FE simulations validate the model with satisfactory accordance.
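    As a rough illustration of how such a clamping-force criterion can be evaluated, here is a minimal sketch assuming fully plastic in-plane bending of a rectangular strip without hardening and a single effective friction lever arm per clamping unit; all parameter names and values are hypothetical and not the paper's model:

```python
# Minimal sketch: minimum clamping force for a frictionally engaged
# in-plane bend, assuming fully plastic bending of a rectangular strip
# (no hardening) and one effective friction lever arm per clamping unit.

def min_clamping_force(yield_stress, width, thickness, mu, lever_arm):
    """All units SI. Returns the required normal force per clamping unit (N)."""
    # Fully plastic bending moment of a rectangular cross-section bent
    # in plane (about the thickness axis): M = sigma_y * t * w**2 / 4.
    moment = yield_stress * thickness * width**2 / 4.0
    # Two clamping units transmit the moment through friction mu*N
    # acting at the effective lever arm.
    return moment / (2.0 * mu * lever_arm)

if __name__ == "__main__":
    # Illustrative values: mild steel strip, 40 mm wide, 2 mm thick.
    f = min_clamping_force(yield_stress=250e6, width=0.04,
                           thickness=0.002, mu=0.15, lever_arm=0.05)
    print(f"minimum clamping force per unit: {f/1e3:.1f} kN")
```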

  10. On critical cases in limit theory for stationary increments Lévy driven moving averages

    DEFF Research Database (Denmark)

    Basse-O'Connor, Andreas; Podolskij, Mark

    averages. The limit theory heavily depends on the interplay between the given order of the increments, the considered power, the Blumenthal-Getoor index of the driving pure jump Lévy process L and the behavior of the kernel function g at 0. In this work we will study the critical cases, which were...

  11. TCAM-based High Speed Longest Prefix Matching with Fast Incremental Table Updates

    DEFF Research Database (Denmark)

    Rasmussen, Anders; Kragelund, A.; Berger, Michael Stübert

    2013-01-01

    and consequently a higher throughput of the network search engine, since the TCAM down time caused by incremental updates is eliminated. The LPM scheme is described in HDL for FPGA implementation and compared to an existing scheme for customized CAM circuits. The paper shows that the proposed scheme can process...

  12. Differentiating Major and Incremental New Product Development: The Effects of Functional and Numerical Workforce Flexibility

    NARCIS (Netherlands)

    Kok, R.A.W.; Ligthart, P.E.M.

    2014-01-01

    This study seeks to explain the differential effects of workforce flexibility on incremental and major new product development (NPD). Drawing on the resource-based theory of the firm, human resource management research, and innovation management literature, the authors distinguish two types of

  13. Real Time Implementation of Incremental Fuzzy Logic Controller for Gas Pipeline Corrosion Control

    Directory of Open Access Journals (Sweden)

    Gopalakrishnan Jayapalan

    2014-01-01

    Full Text Available A robust virtual-instrumentation-based fuzzy incremental corrosion controller is presented to protect metallic gas pipelines. The controller output depends on the error and the change in error of the controlled variable. For corrosion control purposes, the pipe-to-soil potential is considered as the process variable. The proposed fuzzy incremental controller is designed using a very simple control rule base and the most natural and unbiased membership functions. The proposed scheme is tested over a wide range of pipe-to-soil potential control. A performance comparison between a conventional proportional-integral controller and the proposed fuzzy incremental controller is made in terms of several performance criteria, such as peak overshoot, settling time, and rise time. The results show that the proposed controller outperforms its conventional counterpart in each case. The designed controller can be switched to auto mode without waiting for the initial polarization to stabilize. Initial start-up curves of the proportional-integral controller and the fuzzy incremental controller are reported. This controller can be used to protect metallic structures such as pipelines, tanks, concrete structures, ships, and offshore structures.
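    A minimal sketch of the general idea of an incremental fuzzy controller, whose inference yields a change in output from the error and the change in error; the 3x3 rule base, membership functions and gains below are invented for illustration and are not the paper's rule base:

```python
# Minimal sketch of an incremental fuzzy controller: the output *change*
# (not the absolute output) is inferred from error and change-in-error.
# Rule base, membership functions and scaling are illustrative only.

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_increment(e, de):
    """Return delta-u for error e and change-in-error de, both scaled to [-1, 1]."""
    labels = {"N": (-2.0, -1.0, 0.0), "Z": (-1.0, 0.0, 1.0), "P": (0.0, 1.0, 2.0)}
    # 3x3 rule base: consequent centers for each (e, de) label pair.
    centers = {("N", "N"): -1.0, ("N", "Z"): -0.7, ("N", "P"): 0.0,
               ("Z", "N"): -0.3, ("Z", "Z"):  0.0, ("Z", "P"): 0.3,
               ("P", "N"):  0.0, ("P", "Z"):  0.7, ("P", "P"): 1.0}
    num = den = 0.0
    for le, pe in labels.items():
        for lde, pde in labels.items():
            w = min(tri(e, *pe), tri(de, *pde))   # rule firing strength
            num += w * centers[(le, lde)]
            den += w
    return num / den if den else 0.0

# Usage: accumulate increments onto the actuator signal.
u, e_prev = 0.0, 0.0
for e in [0.8, 0.5, 0.2, 0.05, -0.1]:          # example error trajectory
    u += 0.1 * fuzzy_increment(e, e - e_prev)  # 0.1 = output scaling gain
    e_prev = e
    print(f"u = {u:+.3f}")
```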

  14. On the Perturb-and-Observe and Incremental Conductance MPPT methods for PV systems

    DEFF Research Database (Denmark)

    Sera, Dezso; Mathe, Laszlo; Kerekes, Tamas

    2013-01-01

    This paper presents a detailed analysis of the two most well-known hill-climbing MPPT algorithms, the Perturb-and-Observe (P&O) and Incremental Conductance (INC). The purpose of the analysis is to clarify some common misconceptions in the literature regarding these two trackers, therefore helping...
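    For orientation, the P&O tracker analysed in the paper reduces to a short hill-climbing loop: perturb the operating voltage and keep the perturbation direction while the measured power rises. A minimal sketch under assumed variable names and step size, not the authors' implementation:

```python
# Minimal sketch of the Perturb-and-Observe (P&O) MPPT loop:
# perturb the operating voltage, observe the power change, and keep
# perturbing in the same direction while power increases.

def perturb_and_observe(v, p, v_prev, p_prev, step=0.5):
    """Return the next voltage reference given two consecutive (V, P) samples."""
    dp = p - p_prev
    dv = v - v_prev
    if dp == 0:
        return v                  # no observed change: hold the operating point
    # If the last perturbation increased power, keep its direction;
    # otherwise reverse it.
    direction = 1.0 if (dp > 0) == (dv > 0) else -1.0
    return v + direction * step

# Usage: v_next = perturb_and_observe(v_meas, v_meas * i_meas, v_last, p_last)
```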

  15. Gradient nanostructured surface of a Cu plate processed by incremental frictional sliding

    DEFF Research Database (Denmark)

    Hong, Chuanshi; Huang, Xiaoxu; Hansen, Niels

    2015-01-01

    The flat surface of a Cu plate was processed by incremental frictional sliding at liquid nitrogen temperature. The surface treatment results in a hardened gradient surface layer as thick as 1 mm in the Cu plate, which contains a nanostructured layer on the top with a boundary spacing of the order...

  16. A gradient surface produced by combined electroplating and incremental frictional sliding

    DEFF Research Database (Denmark)

    Yu, Tianbo; Hong, Chuanshi; Kitamura, K.

    2017-01-01

    A Cu plate was first electroplated with a Ni layer, with a thickness controlled to be between 1 and 2 mu m. The coated surface was then deformed by incremental frictional sliding with liquid nitrogen cooling. The combined treatment led to a multifunctional surface with a gradient in strain...

  17. The effects of the pine processionary moth on the increment of ...

    African Journals Online (AJOL)

    2009-05-18

    May 18, 2009 ... sycophanta L. (Coleoptera: Carabidae) used against the pine processionary moth (Thaumetopoea pityocampa Den. & Schiff.) (Lepidoptera: Thaumetopoeidae) in biological control. T. J. Zool. 30:181-185. Kanat M, Sivrikaya F (2005). Effect of the pine processionary moth on diameter increment of Calabrian ...

  18. Incrementally Detecting Change Types of Spatial Area Object: A Hierarchical Matching Method Considering Change Process

    Directory of Open Access Journals (Sweden)

    Yanhui Wang

    2018-01-01

    Full Text Available Detecting and extracting the change types of spatial area objects can track area objects' spatiotemporal change patterns and provide a change backtracking mechanism for incrementally updating spatial datasets. To address the problems of the high complexity of detection methods, the high redundancy rate of detection factors, and the low degree of automation in the incremental update process, we consider the change process of area objects in an integrated way and propose a hierarchical matching method to detect the nine types of changes of area objects, while minimizing the complexity of the algorithm and the redundancy rate of detection factors. We illustrate in detail the identification, extraction, and database entry of change types, and how we achieve a close connection and organic coupling of incremental information extraction and object type-of-change detection so as to characterize the whole change process. The experimental results show that this method can successfully detect incremental information about area objects in practical applications, with the overall accuracy reaching above 90%, which is much higher than the existing weighted matching method, making it quite feasible and applicable. It helps establish the correspondence between new-version and old-version objects, and facilitates linked update processing and quality control of spatial data.

  19. Combining Compact Representation and Incremental Generation in Large Games with Sequential Strategies

    DEFF Research Database (Denmark)

    Bosansky, Branislav; Xin Jiang, Albert; Tambe, Milind

    2015-01-01

    representation of sequential strategies and linear programming, or by incremental strategy generation of iterative double-oracle methods. In this paper, we present novel hybrid of these two approaches: compact-strategy double-oracle (CS-DO) algorithm that combines the advantages of the compact representation...

  20. A power-driven increment borer for sampling high-density tropical wood

    Czech Academy of Sciences Publication Activity Database

    Krottenthaler, S.; Pitsch, P.; Helle, G.; Locosselli, G. M.; Ceccantini, G.; Altman, Jan; Svoboda, M.; Doležal, Jiří; Schleser, G.; Anhuf, D.

    2015-01-01

    Vol. 36, November (2015), pp. 40-44. ISSN 1125-7865. R&D Projects: GA ČR GAP504/12/1952; GA ČR(CZ) GA14-12262S. Institutional support: RVO:67985939. Keywords: tropical dendrochronology * tree sampling methods * increment cores. Subject RIV: EF - Botanics. Impact factor: 2.107, year: 2015

  1. Substructuring in the implicit simulation of single point incremental sheet forming

    NARCIS (Netherlands)

    Hadoush, A.; van den Boogaard, Antonius H.

    2009-01-01

    This paper presents a direct substructuring method to reduce the computing time of implicit simulations of single point incremental forming (SPIF). Substructuring is used to divide the finite element (FE) mesh into several non-overlapping parts. Based on the hypothesis that plastic deformation is

  2. Incremental Validity of the WJ III COG: Limited Predictive Effects beyond the GIA-E

    Science.gov (United States)

    McGill, Ryan J.; Busse, R. T.

    2015-01-01

    This study is an examination of the incremental validity of Cattell-Horn-Carroll (CHC) broad clusters from the Woodcock-Johnson III Tests of Cognitive Abilities (WJ III COG) for predicting scores on the Woodcock-Johnson III Tests of Achievement (WJ III ACH). The participants were children and adolescents, ages 6-18 (n = 4,722), drawn from the WJ…

  3. Feasibility of Incremental 2-Times Weekly Hemodialysis in Incident Patients With Residual Kidney Function

    Directory of Open Access Journals (Sweden)

    Andrew I. Chin

    2017-09-01

    Discussion: More than 50% of incident HD patients with RKF have adequate kidney urea clearance to be considered for 2-times weekly HD. When ultrafiltration volume and blood pressure stability are additionally taken into account, more than one-fourth of the total cohort could optimally start HD in an incremental fashion.

  4. A Self-Organizing Incremental Neural Network based on local distribution learning.

    Science.gov (United States)

    Xing, Youlu; Shi, Xiaofeng; Shen, Furao; Zhou, Ke; Zhao, Jinxi

    2016-12-01

    In this paper, we propose an unsupervised incremental learning neural network based on local distribution learning, which is called the Local Distribution Self-Organizing Incremental Neural Network (LD-SOINN). The LD-SOINN combines the advantages of incremental learning and matrix learning. It can automatically discover suitable nodes to fit the learning data in an incremental way without a priori knowledge such as the structure of the network. The nodes of the network store rich local information regarding the learning data. The adaptive vigilance parameter guarantees that LD-SOINN is able to add new nodes for new knowledge automatically, and the number of nodes will not grow unlimitedly. While the learning process continues, nodes that are close to each other and have similar principal components are merged to obtain a concise local representation, which we call a relaxation data representation. A density-based denoising process is designed to reduce the influence of noise. Experiments show that the LD-SOINN performs well on both artificial and real-world data. Copyright © 2016 Elsevier Ltd. All rights reserved.
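    The growth rule at the core of such self-organizing incremental networks can be sketched in a few lines; the fixed vigilance threshold below is a simplifying assumption, whereas LD-SOINN adapts the threshold and additionally stores local distribution information and merges nodes:

```python
import numpy as np

# Bare-bones incremental node insertion in the spirit of SOINN:
# if a new sample is farther than a vigilance threshold from the
# nearest node, add it as a new node; otherwise pull the winner
# toward the sample.

class TinyIncrementalNet:
    def __init__(self, vigilance=1.0, lr=0.1):
        self.nodes, self.vigilance, self.lr = [], vigilance, lr

    def learn(self, x):
        x = np.asarray(x, dtype=float)
        if not self.nodes:
            self.nodes.append(x.copy())
            return
        d = [np.linalg.norm(x - n) for n in self.nodes]
        i = int(np.argmin(d))
        if d[i] > self.vigilance:
            self.nodes.append(x.copy())                     # new knowledge: add a node
        else:
            self.nodes[i] += self.lr * (x - self.nodes[i])  # refine the winner

net = TinyIncrementalNet(vigilance=0.8)
for x in np.random.default_rng(0).normal(size=(200, 2)):
    net.learn(x)
print(f"{len(net.nodes)} nodes learned incrementally")
```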

  5. 29 CFR 825.205 - Increments of FMLA leave for intermittent or reduced schedule leave.

    Science.gov (United States)

    2010-07-01

    ... intermittent leave or working a reduced leave schedule to commence or end work mid-way through a shift, such as... per week, but works only 20 hours a week under a reduced leave schedule, the employee's ten hours of...

  6. [Spatiotemporal variation of Populus euphratica's radial increment at lower reaches of Tarim River after ecological water transfer].

    Science.gov (United States)

    An, Hong-Yan; Xu, Hai-Liang; Ye, Mao; Yu, Pu-Ji; Gong, Jun-Jun

    2011-01-01

    Taking Populus euphratica at the lower reaches of the Tarim River as the test object, and using the methods of tree dendrohydrology, this paper studied the spatiotemporal variation of P. euphratica's branch radial increment after ecological water transfer. There was a significant difference in the mean radial increment before and after ecological water transfer. The radial increment after the eco-water transfer increased by 125% compared with that before the water transfer. During the period of ecological water transfer, the radial increment increased with increasing water transfer quantity, and there was a positive correlation between the annual radial increment and the total water transfer quantity (R² = 0.394), suggesting that the radial increment of P. euphratica could be taken as a performance indicator of ecological water transfer. After the ecological water transfer, the radial increment changed greatly with the distance to the river, i.e., it decreased significantly with increasing distance to the river (P = 0.007). The P. euphratica branch radial increment also differed with stream segment (P = 0.017), i.e., the closer to the head-water point (Daxihaizi Reservoir), the greater the branch radial increment. It was considered that the limited effect of the current ecological water transfer could scarcely change the continually deteriorating situation of the lower reaches of the Tarim River.

  7. A maximal incremental effort alters tear osmolarity depending on the fitness level in military helicopter pilots.

    Science.gov (United States)

    Vera, Jesús; Jiménez, Raimundo; Madinabeitia, Iker; Masiulis, Nerijus; Cárdenas, David

    2017-10-01

    Fitness level modulates the physiological responses to exercise for a variety of indices. While intense bouts of exercise have been demonstrated to increase tear osmolarity (Tosm), it is not known whether fitness level can affect the Tosm response to acute exercise. This study aims to compare the effect of a maximal incremental test on Tosm between trained and untrained military helicopter pilots. Nineteen military helicopter pilots (ten trained and nine untrained) performed a maximal incremental test on a treadmill. A tear sample was collected before and after the physical effort to determine the exercise-induced changes in Tosm. The Bayesian statistical analysis demonstrated that Tosm significantly increased from 303.72 ± 6.76 to 310.56 ± 8.80 mmol/L after performance of the maximal incremental test. However, while the untrained group showed an acute Tosm rise (increment of 12.33 mmol/L), the trained group maintained a stable Tosm across the physical effort (increment of 1.45 mmol/L). There was a significant positive linear association between fat indices and Tosm changes (correlation coefficients [r] range: 0.77-0.89), whereas the Tosm changes displayed a negative relationship with cardiorespiratory capacity (VO2 max; r = -0.75) and performance parameters (r = -0.75 for velocity, and r = -0.67 for time to exhaustion). The findings of this study provide evidence that fitness level is a major determinant of the Tosm response to maximal incremental physical effort, showing a fairly linear association with several indices related to fitness level. A high fitness level seems to be beneficial for avoiding Tosm changes as a consequence of intense exercise. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Towards Reliable and Energy-Efficient Incremental Cooperative Communication for Wireless Body Area Networks.

    Science.gov (United States)

    Yousaf, Sidrah; Javaid, Nadeem; Qasim, Umar; Alrajeh, Nabil; Khan, Zahoor Ali; Ahmed, Mansoor

    2016-02-24

    In this study, we analyse incremental cooperative communication for wireless body area networks (WBANs) with different numbers of relays. Energy efficiency (EE) and the packet error rate (PER) are investigated for different schemes. We propose a new cooperative communication scheme with three-stage relaying and compare it to existing schemes. Our proposed scheme provides reliable communication with a lower PER at the cost of surplus energy consumption. Analytical expressions for the EE of the proposed three-stage cooperative communication scheme are also derived, taking into account the effect of the PER. The proposed three-stage incremental cooperation is then implemented in a network layer protocol: enhanced incremental cooperative critical data transmission in emergencies for static WBANs (EInCo-CEStat). Extensive simulations are conducted to validate the proposed scheme. Results of the incremental relay-based cooperative communication protocols are compared to two existing cooperative routing protocols: cooperative critical data transmission in emergencies for static WBANs (Co-CEStat) and InCo-CEStat. The simulation results show that incremental relay-based cooperation is more energy efficient than the existing conventional cooperation protocol, Co-CEStat. The results also reveal that EInCo-CEStat proves to be more reliable, with a lower PER and higher throughput than both of the counterpart protocols. However, InCo-CEStat has less throughput with a greater stability period and network lifetime. Due to the availability of more redundant links, EInCo-CEStat achieves a reduced packet drop rate at the cost of increased energy consumption.

  9. A design of LED adaptive dimming lighting system based on incremental PID controller

    Science.gov (United States)

    He, Xiangyan; Xiao, Zexin; He, Shaojia

    2010-11-01

    As a new-generation energy-saving lighting source, the LED is applied widely in various technology and industry fields, and the requirements on its adaptive lighting technology are increasingly rigorous, especially in automatic on-line detecting systems. In this paper, a closed-loop feedback LED adaptive dimming lighting system based on an incremental PID controller is designed, consisting of a MEGA16 chip as the micro-controller unit (MCU), the BH1750 ambient light sensor with an Inter-Integrated Circuit (I2C) interface, and a constant-current driving circuit. A given value of the light intensity required for the on-line detecting environment is saved to a register of the MCU. The optical intensity, detected by the BH1750 chip in real time, is converted to a digital signal by the chip's AD converter and then transmitted to the MEGA16 through the I2C serial bus. Since the variation law of the light intensity in the on-line detecting environment is usually difficult to establish, an incremental Proportional-Integral-Differential (PID) algorithm is applied in this system. The control variable obtained from the incremental PID determines the duty cycle of the Pulse-Width Modulation (PWM) signal; consequently, the LED's forward current is adjusted by the PWM, and the luminous intensity of the detection environment is stabilized by self-adaptation. The coefficients of the incremental PID were obtained experimentally. Compared with traditional LED dimming systems, this design has the advantages of interference immunity, simple construction, fast response, and high stability, owing to the incremental PID algorithm and the BH1750 chip with the I2C serial bus. It is therefore suitable for adaptive on-line detecting applications.
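    The incremental (velocity-form) PID law referenced here computes a change of the control output from the last three error samples, which is then accumulated into the PWM duty cycle. A minimal sketch with illustrative gains and setpoint (the paper's experimentally tuned coefficients are not given here):

```python
# Minimal sketch of the incremental (velocity-form) PID law:
#   du_k = Kp*(e_k - e_{k-1}) + Ki*e_k + Kd*(e_k - 2*e_{k-1} + e_{k-2})
# The PWM duty cycle is then updated by accumulation: u_k = u_{k-1} + du_k.

class IncrementalPID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.e1 = self.e2 = 0.0          # e_{k-1} and e_{k-2}

    def step(self, error):
        du = (self.kp * (error - self.e1)
              + self.ki * error
              + self.kd * (error - 2 * self.e1 + self.e2))
        self.e2, self.e1 = self.e1, error
        return du

# Usage: regulate a measured lux value toward a setpoint via the PWM duty.
pid, duty, setpoint = IncrementalPID(0.02, 0.01, 0.005), 0.5, 300.0
for lux in [220.0, 250.0, 280.0, 295.0, 301.0]:   # illustrative sensor readings
    duty = min(max(duty + pid.step(setpoint - lux), 0.0), 1.0)
    print(f"duty = {duty:.3f}")
```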

  10. Endogenous-cue prospective memory involving incremental updating of working memory: an fMRI study.

    Science.gov (United States)

    Halahalli, Harsha N; John, John P; Lukose, Ammu; Jain, Sanjeev; Kutty, Bindu M

    2015-11-01

    Prospective memory paradigms are conventionally classified on the basis of event-, time-, or activity-based intention retrieval. In the vast majority of such paradigms, intention retrieval is provoked by some kind of external event. However, prospective memory retrieval cues that prompt intention retrieval in everyday life are commonly endogenous, i.e., linked to a specific imagined retrieval context. We describe herein a novel prospective memory paradigm wherein the endogenous cue is generated by incremental updating of working memory, and investigate the hemodynamic correlates of this task. Eighteen healthy adult volunteers underwent functional magnetic resonance imaging while they performed a prospective memory task where the delayed intention was triggered by an endogenous cue generated by incremental updating of working memory. Working memory and ongoing task control conditions were also administered. The 'endogenous-cue prospective memory condition' with incremental working memory updating was associated with maximum activations in the right rostral prefrontal cortex, and additional activations in the brain regions that constitute the bilateral fronto-parietal network, central and dorsal salience networks as well as the cerebellum. In the working memory control condition, maximal activations were noted in the left dorsal anterior insula. Activation of the bilateral dorsal anterior insula, a component of the central salience network, was found to be unique to this 'endogenous-cue prospective memory task' in comparison to previously reported exogenous- and endogenous-cue prospective memory tasks without incremental working memory updating. Thus, the findings of the present study highlight the important role played by the dorsal anterior insula in the incremental working memory updating that is integral to our endogenous-cue prospective memory task.

  11. Influence of increment thickness on dentin bond strength and light transmission of composite base materials.

    Science.gov (United States)

    Omran, Tarek A; Garoushi, Sufyan; Abdulmajeed, Aous A; Lassila, Lippo V; Vallittu, Pekka K

    2017-06-01

    Bulk-fill resin composites (BFCs) are gaining popularity in restorative dentistry due to the reduced chair time and ease of application. This study aimed to evaluate the influence of increment thickness on dentin bond strength and light transmission of different BFCs and a new discontinuous fiber-reinforced composite. One hundred eighty extracted sound human molars were prepared for a shear bond strength (SBS) test. The teeth were divided into four groups (n = 45) according to the resin composite used: regular particulate filler resin composite: (1) G-ænial Anterior [GA] (control); bulk-fill resin composites: (2) Tetric EvoCeram Bulk Fill [TEBF] and (3) SDR; and discontinuous fiber-reinforced composite: (4) everX Posterior [EXP]. Each group was subdivided according to increment thickness (2, 4, and 6 mm). The irradiance power through the material of all groups/subgroups was quantified (MARC® Resin Calibrator; BlueLight Analytics Inc.). Data were analyzed using two-way ANOVA followed by Tukey's post hoc test. SBS and light irradiance decreased as the increment's height increased (p < 0.05), regardless of the composite used. EXP presented the highest SBS in 2- and 4-mm-thick increments when compared to the other composites, although the differences were not statistically significant (p > 0.05). Light irradiance mean values, arranged in descending order, differed significantly among the composites (p < 0.05). The discontinuous fiber-reinforced composite showed the highest curing light transmission, which was also reflected in improved bonding strength to the underlying dentin surface. The discontinuous fiber-reinforced composite can be applied safely in 4-mm increments, like the other bulk-fill composites, although at 2-mm thickness the investigated composites showed better performance.

  12. A fast implementation of the incremental backprojection algorithms for parallel beam geometries

    International Nuclear Information System (INIS)

    Chen, C.M.; Wang, C.Y.; Cho, Z.H.

    1996-01-01

    Filtered-backprojection algorithms are the most widely used approaches for reconstruction of computed tomographic (CT) images, such as X-ray CT and positron emission tomographic (PET) images. The incremental backprojection algorithm is a fast backprojection approach based on restructuring the Shepp and Logan algorithm. By exploiting the interdependency (in position and value) of adjacent pixels, the incremental algorithm requires only O(N) and O(N²) multiplications, in contrast to O(N²) and O(N³) multiplications for the Shepp and Logan algorithm, in two-dimensional (2-D) and three-dimensional (3-D) backprojections, respectively, for each view, where N is the size of the image in each dimension. In addition, it may reduce the number of additions for each pixel computation. The improvement achieved by the incremental algorithm in practice was, however, not as significant as expected. One of the main reasons is the inevitable visiting of pixels outside the beam in the searching flow scheme originally developed for the incremental algorithm. To optimize the implementation of the incremental algorithm, an efficient scheme, namely the coded searching flow scheme, is proposed in this paper to minimize the overhead caused by searching for all pixels in a beam. The key idea of this scheme is to encode the searching flow for all pixels inside each beam. While backprojecting, all pixels may be visited without any overhead by using the coded searching flow as a priori information. The proposed coded searching flow scheme has been implemented on Sun Sparc 10 and Sun Sparc 20 workstations. The implementation results show that the proposed scheme is 1.45-2.0 times faster than the original searching flow scheme for most cases tested.
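    The pixel interdependency exploited by the incremental algorithm is easy to see in the projection coordinate: for a fixed view angle θ, the detector position of pixel (x+1, y) differs from that of (x, y) by the constant cos θ, so one addition replaces a full recomputation. A minimal sketch of this idea for 2-D parallel-beam backprojection with nearest-neighbour interpolation (the paper's coded searching flow scheme is not reproduced here):

```python
import numpy as np

# Minimal sketch of incremental 2-D parallel-beam backprojection:
# along a row, the detector coordinate t = x*cos(theta) + y*sin(theta)
# changes by the constant cos(theta), so it is updated with one addition
# per pixel instead of being recomputed from scratch.

def backproject(sinogram, angles):
    """sinogram: (n_angles, n_det) filtered projections.
    Returns an (N, N) image, assuming image size N equals n_det."""
    n_det = sinogram.shape[1]
    N = n_det
    image = np.zeros((N, N))
    center = (N - 1) / 2.0
    for proj, theta in zip(sinogram, angles):
        c, s = np.cos(theta), np.sin(theta)
        for y in range(N):
            # detector coordinate of the first pixel (x = 0) in this row
            t = -center * c + (y - center) * s + center
            for x in range(N):
                i = int(round(t))              # nearest-neighbour interpolation
                if 0 <= i < n_det:
                    image[y, x] += proj[i]
                t += c                          # incremental update: one addition
    return image
```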

  13. Potential stocks and increments of woody biomass in the European Union under different management and climate scenarios.

    Science.gov (United States)

    Kindermann, Georg E; Schörghuber, Stefan; Linkosalo, Tapio; Sanchez, Anabel; Rammer, Werner; Seidl, Rupert; Lexer, Manfred J

    2013-02-01

    Forests play an important role in the global carbon flow. They can store carbon and also provide wood, which can substitute for other materials. In the EU27 the standing biomass is steadily increasing. Increments and harvests seem to have reached a plateau between 2005 and 2010. One reason for this plateau is that the forests are getting older. High stand ages have the advantage of typically high carbon stocks and the disadvantage of decreasing increment rates. We investigated how biomass stocks, harvests and increments will develop under different climate scenarios and two management scenarios, one forcing forests to store high biomass amounts and the other aiming at high increment rates and much harvested wood. A management maximising standing biomass will raise the stem wood carbon stocks from 30 tC/ha to 50 tC/ha until 2100. A management maximising increments will lower the stock to 20 tC/ha until 2100. The estimates for the climate scenarios A1b, B1 and E1 differ, but the management target has a much larger effect than the climate scenario. When maximising increments, harvests are 0.4 tC/ha/year higher than under the management maximising standing biomass. The increments until 2040 are close together, but around 2100 the increments when maximising standing biomass are approximately 50% lower than those when maximising increments. Cold regions will benefit from the climate changes in these scenarios by showing higher increments. The results of this study suggest that forest management should maximise increments, not stocks, to be more efficient in the sense of climate change mitigation. This is true especially for regions which already have high carbon stocks in forests, as is the case in many regions in Europe. During the time span 2010-2100 the forests of the EU27 will absorb an additional 1750 million tC if they are managed to maximise increments, compared with a management maximising standing biomass.

  14. Ductility, strength and hardness relation after prior incremental deformation (ratcheting) of austenitic steel

    International Nuclear Information System (INIS)

    Kussmaul, K.; Diem, H.K.; Wachter, O.

    1993-01-01

    Experimental investigations into the stress/strain behavior of the niobium-stabilized austenitic steel with the German designation X6 CrNiNb 18 10 proved that a limited, incrementally applied prior deformation reduces the total deformation capability only by the amount of the prior deformation. In particular, the small changes in the reduction of area showed that the basically ductile deformation behavior is not changed by the type of prior loading. There is a correlation between the amount of deformation and the increase in hardness, and it is possible to correlate the changes in hardness with the changes in material properties. In low cycle fatigue tests with alternating temperature, an incremental increase in total strain (ratcheting) was noted, depending on the strain range applied.

  15. A Batch-Incremental Video Background Estimation Model using Weighted Low-Rank Approximation of Matrices

    KAUST Repository

    Dutta, Aritra

    2017-07-02

    Principal component pursuit (PCP) is a state-of-the-art approach for background estimation problems. Due to their higher computational cost, PCP algorithms, such as robust principal component analysis (RPCA) and its variants, are not feasible in processing high definition videos. To avoid the curse of dimensionality in those algorithms, several methods have been proposed to solve the background estimation problem in an incremental manner. We propose a batch-incremental background estimation model using a special weighted low-rank approximation of matrices. Through experiments with real and synthetic video sequences, we demonstrate that our method is superior to the state-of-the-art background estimation algorithms such as GRASTA, ReProCS, incPCP, and GFL.

  16. Influence of part orientation on the geometric accuracy in robot-based incremental sheet metal forming

    Science.gov (United States)

    Störkle, Denis Daniel; Seim, Patrick; Thyssen, Lars; Kuhlenkötter, Bernd

    2016-10-01

    This article describes new developments in an incremental, robot-based sheet metal forming process ('Roboforming') for the production of sheet metal components in small lot sizes and prototypes. The die-less, kinematic-based generation of the shape is implemented by means of two industrial robots, which are interconnected to form a cooperating robot system. Compared to other incremental sheet metal forming (ISF) machines, this system offers high geometrical form flexibility without the need for any part-dependent tools. The industrial application of ISF is still limited by certain constraints, e.g. the low geometrical accuracy. Responding to these constraints, the authors present the influence of the part orientation and the forming sequence on the geometric accuracy. Their influence is illustrated with the help of various experimental results shown and interpreted within this article.

  17. A simple extension of contraction theory to study incremental stability properties

    DEFF Research Database (Denmark)

    Jouffroy, Jerome

    Contraction theory is a recent tool for studying the stability of nonlinear system trajectories with respect to one another, and it therefore belongs to the class of incremental stability methods. In this paper, we extend the original definition of contraction theory to incorporate...... in an explicit manner the control input of the considered system. Such an extension, called universal contraction, is quite analogous in spirit to the well-known Input-to-State Stability (ISS). It serves as a simple formulation of incremental ISS, external stability, and detectability in a differential setting....... The hierarchical combination result of contraction theory is restated in this framework, and a differential small-gain theorem is derived from results already available in Lyapunov theory....

  18. Job Embeddedness Demonstrates Incremental Validity When Predicting Turnover Intentions for Australian University Employees

    Science.gov (United States)

    Heritage, Brody; Gilbert, Jessica M.; Roberts, Lynne D.

    2016-01-01

    Job embeddedness is a construct that describes the manner in which employees can be enmeshed in their jobs, reducing their turnover intentions. Recent questions regarding the properties of quantitative job embeddedness measures, and their predictive utility, have been raised. Our study compared two competing reflective measures of job embeddedness, examining their convergent, criterion, and incremental validity, as a means of addressing these questions. Cross-sectional quantitative data from 246 Australian university employees (146 academic; 100 professional) were gathered. Our findings indicated that the two compared measures of job embeddedness were convergent when total scale scores were examined. Additionally, job embeddedness was capable of demonstrating criterion and incremental validity, predicting unique variance in turnover intention. However, this finding was not readily apparent with one of the compared job embeddedness measures, which demonstrated comparatively weaker evidence of validity. We discuss the theoretical and applied implications of these findings, noting that job embeddedness has a complementary place among established determinants of turnover intention. PMID:27199817

  19. An approach to eliminate stepped features in multistage incremental sheet forming process: Experimental and FEA analysis

    Energy Technology Data Exchange (ETDEWEB)

    Nirala, Harish Kumar; Jain, Prashant K.; Tandon, Puneet [PDPM Indian Institute of Information Technology, Design and Manufacturing Jabalpur Jabalpur-482005, Madhya Pradesh (India); Roy, J. J.; Samal, M. K. [Bhabha Atomic Research Centre, Mumbai (India)

    2017-02-15

    Incremental sheet forming (ISF) is a recently developed manufacturing technique. In ISF, forming is done by applying the deformation force through the motion of a numerically controlled (NC) single point forming tool on the clamped sheet metal blank. Single point incremental sheet forming (SPISF) is also known as a die-less forming process because no die is required to fabricate a component with this process. It is nowadays widely accepted for rapid manufacturing of sheet metal components. The formability of the SPISF process improves when intermediate stages are added, which is known as the multi-stage SPISF (MSPISF) process. However, the intermediate stages of the MSPISF process generate stepped features during forming. This paper investigates the generation of stepped features with simulation and experimental results. An effective MSPISF strategy is proposed to remove or eliminate these undesirable stepped features.

  20. Enthalpy increment measurements of NaCrO2 using a high temperature Calvet calorimeter

    International Nuclear Information System (INIS)

    Iyer, V.S.; Jayanthi, K.; Ramarao, G.A.; Venugopal, V.; Sood, D.D.

    1991-01-01

    Enthalpy increment measurements on NaCrO2(s) were carried out in the temperature range 323 to 839 K using a high temperature Calvet microcalorimeter. The enthalpy increment values were least-squares fitted as a function of temperature with the constraint that (H°T − H°298) equals zero at 298.15 K, giving: (H°T − H°298) (J/mol) ± 336 = −23515 + 75.364·T(K) + 0.01256·T²(K²), valid from 323 to 839 K. The first derivative of this equation with respect to temperature gives the constant-pressure molar heat capacity of NaCrO2(s): C°p(NaCrO2, s, T) (J/(K·mol)) = 75.364 + 0.02512·T(K). The thermal properties of NaCrO2(s) were calculated using the molar heat capacities from the present study and S°(298 K) from the literature. (orig.)
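    The heat-capacity expression follows from differentiating the enthalpy-increment fit; a quick symbolic check of that step, using the coefficients quoted above:

```python
import sympy as sp

# Verify that differentiating the enthalpy-increment fit
#   H(T) - H(298) = -23515 + 75.364*T + 0.01256*T**2   (J/mol)
# gives the quoted heat capacity Cp = 75.364 + 0.02512*T (J/(K*mol)).
T = sp.symbols("T", positive=True)
H = -23515 + 75.364 * T + 0.01256 * T**2
Cp = sp.diff(H, T)
print(Cp)                 # -> 0.02512*T + 75.364
print(Cp.subs(T, 600.0))  # Cp at 600 K, in J/(K*mol)
```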

  1. Accounting for between-study variation in incremental net benefit in value of information methodology.

    Science.gov (United States)

    Willan, Andrew R; Eckermann, Simon

    2012-10-01

    Previous applications of value of information methods for determining optimal sample size in randomized clinical trials have assumed no between-study variation in mean incremental net benefit. By adopting a hierarchical model, we provide a solution for determining optimal sample size with this assumption relaxed. The solution is illustrated with two examples from the literature. Expected net gain increases with increasing between-study variation, reflecting the increased uncertainty in incremental net benefit and the reduced extent to which data are borrowed from previous evidence. Hence, a trial can become optimal where current evidence would be considered sufficient under the assumption of no between-study variation. However, despite the expected net gain increasing, the optimal sample size in the illustrated examples is relatively insensitive to the amount of between-study variation. Further percentage losses in expected net gain were small even when choosing sample sizes that reflected widely different between-study variation. Copyright © 2011 John Wiley & Sons, Ltd.

  2. Are Fearless Dominance Traits Superfluous in Operationalizing Psychopathy? Incremental Validity and Sex Differences

    Science.gov (United States)

    Murphy, Brett; Lilienfeld, Scott; Skeem, Jennifer; Edens, John

    2016-01-01

    Researchers are vigorously debating whether psychopathic personality includes seemingly adaptive traits, especially social and physical boldness. In a large sample (N = 1565) of adult offenders, we examined the incremental validity of two operationalizations of boldness (Fearless Dominance traits in the Psychopathic Personality Inventory, Lilienfeld & Andrews, 1996; Boldness traits in the Triarchic Model of Psychopathy, Patrick et al., 2009), above and beyond other characteristics of psychopathy, in statistically predicting scores on four psychopathy-related measures, including the Psychopathy Checklist-Revised (PCL-R). The incremental validity added by boldness traits in predicting the PCL-R's representation of psychopathy was especially pronounced for interpersonal traits (e.g., superficial charm, deceitfulness). Our analyses, however, revealed unexpected sex differences in the relevance of these traits to psychopathy, with boldness traits exhibiting reduced importance for psychopathy in women. We discuss the implications of these findings for measurement models of psychopathy. PMID:26866795

  3. Sufficient conditions for a period incrementing big bang bifurcation in one-dimensional maps

    International Nuclear Information System (INIS)

    Avrutin, V; Granados, A; Schanz, M

    2011-01-01

    Typically, big bang bifurcation occurs for one (or higher)-dimensional piecewise-defined discontinuous systems whenever two border collision bifurcation curves collide transversely in the parameter space. At that point, two (feasible) fixed points collide with one boundary in state space and become virtual, and, in the one-dimensional case, the map becomes continuous. Depending on the properties of the map near the codimension-two bifurcation point, there exist different scenarios regarding how the infinite number of periodic orbits are born, mainly the so-called period adding and period incrementing. In our work we prove that, in order to undergo a big bang bifurcation of the period incrementing type, it is sufficient for a piecewise-defined one-dimensional map that the colliding fixed points are attractive and with associated eigenvalues of different signs

  4. Improved incremental conductance method for maximum power point tracking using cuk converter

    Directory of Open Access Journals (Sweden)

    M. Saad Saoud

    2014-03-01

    Full Text Available The Algerian government relies on a strategy focused on the development of inexhaustible resources such as solar energy in order to diversify energy sources and prepare the Algeria of tomorrow: about 40% of the electricity produced for domestic consumption will come from renewable sources by 2030. It is therefore necessary to concentrate our efforts on reducing application costs and increasing performance, which is evaluated and compared through theoretical analysis and digital simulation. This paper presents the simulation of an improved incremental conductance method for maximum power point tracking (MPPT) using a DC-DC Cuk converter. This improved algorithm is used to track MPPs because it performs precise control under rapidly changing atmospheric conditions. Matlab/Simulink was employed for the simulation studies.
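    The incremental conductance criterion rests on the fact that at the maximum power point dP/dV = 0, i.e. dI/dV = -I/V. A minimal sketch of the tracking decision (step size, tolerance and variable names are illustrative assumptions, not the improved algorithm of the paper):

```python
# Minimal sketch of the Incremental Conductance (INC) MPPT decision:
# at the maximum power point dP/dV = 0, hence dI/dV = -I/V.

def inc_mppt(v, i, v_prev, i_prev, step=0.5, tol=1e-3):
    """Return the next voltage reference for the converter."""
    dv, di = v - v_prev, i - i_prev
    if abs(dv) < 1e-9:                    # voltage unchanged since last sample
        if abs(di) < tol:
            return v                      # at the MPP: hold
        return v + step if di > 0 else v - step
    g, g_mpp = di / dv, -i / v            # incremental vs instantaneous conductance
    if abs(g - g_mpp) < tol:
        return v                          # dP/dV ~ 0: hold the operating point
    # g > g_mpp means dP/dV > 0 (left of the MPP): raise the voltage.
    return v + step if g > g_mpp else v - step
```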

  5. The incremental role of trait emotional intelligence on perceived cervical screening barriers.

    Science.gov (United States)

    Costa, Sebastiano; Barberis, Nadia; Larcan, Rosalba; Cuzzocrea, Francesca

    2018-02-13

    Researchers have become increasingly interested in investigating the role of the psychological aspects related to the perception of cervical screening barriers. This study investigates the influence of trait EI on perceived cervical screening barriers. Furthermore, this study investigates the incremental validity of trait EI beyond the Big Five, as well as emotion regulation in the perceived barrier towards the Pap test as revealed in a sample of 206 Italian women that were undergoing cervical screening. Results have shown that trait EI is negatively related to cervical screening barriers. Furthermore, trait EI can be considered as a strong incremental predictor of a woman's perception of screening over and above the Big Five, emotion regulation, age, sexual intercourse experience and past Pap test. Detailed information on the study findings and future research directions are discussed.

  6. An Incremental Time-delay Neural Network for Dynamical Recurrent Associative Memory

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    An incremental time-delay neural network based on synapse growth, suitable for dynamic control and learning in autonomous robots, is proposed to improve the learning and retrieval performance of a dynamical recurrent associative memory architecture. The model allows steady and continuous establishment of associative memory for spatio-temporal regularities and time series in discrete sequences of inputs. The inserted hidden units can be regarded as long-term memories that expand the capacity of the network and may fade away under certain conditions. Preliminary experiments have shown that this incremental network may be a promising approach to endow autonomous robots with the ability to adapt to new data without destroying the learned patterns. The system also benefits from its potential chaotic character for emergence.

  7. Classifier-ensemble incremental-learning procedure for nuclear transient identification at different operational conditions

    Energy Technology Data Exchange (ETDEWEB)

    Baraldi, Piero, E-mail: piero.baraldi@polimi.i [Dipartimento di Energia - Sezione Ingegneria Nucleare, Politecnico di Milano, via Ponzio 34/3, 20133 Milano (Italy); Razavi-Far, Roozbeh [Dipartimento di Energia - Sezione Ingegneria Nucleare, Politecnico di Milano, via Ponzio 34/3, 20133 Milano (Italy); Zio, Enrico [Dipartimento di Energia - Sezione Ingegneria Nucleare, Politecnico di Milano, via Ponzio 34/3, 20133 Milano (Italy); Ecole Centrale Paris-Supelec, Paris (France)

    2011-04-15

    An important requirement for the practical implementation of empirical diagnostic systems is the capability of classifying transients in all plant operational conditions. The present paper proposes an approach based on an ensemble of classifiers for incrementally learning transients under different operational conditions. New classifiers are added to the ensemble where transients occurring in new operational conditions are not satisfactorily classified. The construction of the ensemble is made by bagging; the base classifier is a supervised Fuzzy C Means (FCM) classifier whose outcomes are combined by majority voting. The incremental learning procedure is applied to the identification of simulated transients in the feedwater system of a Boiling Water Reactor (BWR) under different reactor power levels.

  8. Bigger is Better, but at What Cost? Estimating the Economic Value of Incremental Data Assets.

    Science.gov (United States)

    Dalessandro, Brian; Perlich, Claudia; Raeder, Troy

    2014-06-01

    Many firms depend on third-party vendors to supply data for commercial predictive modeling applications. An issue that has received very little attention in the prior research literature is the estimation of a fair price for purchased data. In this work we present a methodology for estimating the economic value of adding incremental data to predictive modeling applications and present two case studies. The methodology starts with estimating the effect that incremental data has on model performance in terms of common classification evaluation metrics. This effect is then translated into economic units, which gives an expected economic value that the firm might realize with the acquisition of a particular data asset. With this estimate a firm can then set a data acquisition price that targets a particular return on investment. This article presents the methodology in full detail and illustrates it in the context of two marketing case studies.
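    The translation from model lift to an ROI-targeted data price can be illustrated with back-of-the-envelope arithmetic; every number below is hypothetical, and the paper develops the estimation in full detail:

```python
# Back-of-the-envelope sketch: translate the performance lift from an
# incremental data asset into an ROI-targeted purchase price.
# Every number below is hypothetical.

conversions_per_year = 50_000   # actions the model influences annually
value_per_conversion = 12.0     # profit per conversion (currency units)
baseline_rate = 0.020           # conversion rate without the new data
enriched_rate = 0.023           # conversion rate with the new data

volume = conversions_per_year / baseline_rate  # prospects scored per year
incremental_value = (enriched_rate - baseline_rate) * volume * value_per_conversion

target_roi = 3.0                # require a 3x return on the data spend
max_price = incremental_value / target_roi
print(f"expected incremental value: {incremental_value:,.0f}")
print(f"max price at {target_roi:.0f}x ROI: {max_price:,.0f}")
```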

  9. Optimization of the single point incremental forming process for titanium sheets by using response surface

    Directory of Open Access Journals (Sweden)

    Saidi Badreddine

    2016-01-01

    Full Text Available The single point incremental forming process is well known to be perfectly suited for prototyping and small series. One of its fields of applicability is the medical area, for the forming of titanium prostheses or titanium medical implants. However, this process is not yet very industrialized, mainly due to its geometrical inaccuracy and its inhomogeneous thickness distribution. Moreover, considerable forces can occur; they must be controlled in order to preserve the tooling. In this paper, a numerical approach is proposed in order to minimize the maximum force reached during the incremental forming of titanium sheets and to maximize the minimal thickness. A response surface methodology is used to find the optimal values of two input parameters of the process, the punch diameter and the vertical step size of the tool path.
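    The response-surface step can be sketched as fitting a full quadratic model in the two process inputs and searching it over the feasible ranges; the observations, ranges and response values below are invented stand-ins for the paper's experimental data:

```python
import numpy as np

# Minimal sketch of a two-factor response surface: fit a full quadratic
# model y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2 by
# least squares, then search it on a grid. x1 = punch diameter (mm),
# x2 = vertical step size (mm); the observations are invented.

x1 = np.array([8.0, 8.0, 12.0, 12.0, 10.0, 10.0, 10.0])
x2 = np.array([0.2, 0.6, 0.2, 0.6, 0.4, 0.2, 0.6])
y = np.array([1.95, 1.70, 2.10, 1.85, 2.05, 2.00, 1.80])  # e.g. max force (kN)

X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Evaluate the fitted surface on a grid and pick the force-minimising inputs.
g1, g2 = np.meshgrid(np.linspace(8, 12, 41), np.linspace(0.2, 0.6, 41))
G = np.column_stack([np.ones(g1.size), g1.ravel(), g2.ravel(),
                     (g1 * g2).ravel(), (g1**2).ravel(), (g2**2).ravel()])
pred = G @ coef
k = int(np.argmin(pred))
print(f"predicted minimum force {pred[k]:.2f} at d={g1.ravel()[k]:.1f} mm, "
      f"step={g2.ravel()[k]:.2f} mm")
```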

  10. Incremental learning of perceptual and conceptual representations and the puzzle of neural repetition suppression.

    Science.gov (United States)

    Gotts, Stephen J

    2016-08-01

    Incremental learning models of long-term perceptual and conceptual knowledge hold that neural representations are gradually acquired over many individual experiences via Hebbian-like activity-dependent synaptic plasticity across cortical connections of the brain. In such models, variation in task relevance of information, anatomic constraints, and the statistics of sensory inputs and motor outputs lead to qualitative alterations in the nature of representations that are acquired. Here, the proposal that behavioral repetition priming and neural repetition suppression effects are empirical markers of incremental learning in the cortex is discussed, and research results that both support and challenge this position are reviewed. Discussion is focused on a recent fMRI-adaptation study from our laboratory that shows decoupling of experience-dependent changes in neural tuning, priming, and repetition suppression, with representational changes that appear to work counter to the explicit task demands. Finally, critical experiments that may help to clarify and resolve current challenges are outlined.

  11. Statistics of a mixed Eulerian-Lagrangian velocity increment in fully developed turbulence

    International Nuclear Information System (INIS)

    Friedrich, R; Kamps, O; Grauer, R; Homann, H

    2009-01-01

    We investigate the relationship between Eulerian and Lagrangian probability density functions obtained from numerical simulations of two-dimensional as well as three-dimensional turbulence. We show that, in contrast to the structure functions of the Lagrangian velocity increment δτv(y) = u(x(y,τ), τ) − u(y, 0), where u(x, t) denotes the Eulerian velocity and x(y, t) the particle path initially starting at x(y, 0) = y, the structure functions of the mixed velocity increment δτw(y) = u(x(y,τ), τ) − u(y, τ) exhibit a wide range of scaling behavior. Similar scaling indices are detected for the structure functions of particles diffusing in frozen turbulent fields. Furthermore, we discuss a connection to the scaling of Eulerian transversal structure functions.
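    Given a sampled velocity signal, the structure functions whose scaling is compared in the paper can be estimated directly; a minimal sketch over a range of lags, with a synthetic Brownian-like signal standing in for the simulation data:

```python
import numpy as np

# Minimal sketch: estimate structure functions S_p(tau) = <|w(t+tau) - w(t)|^p>
# of a sampled velocity signal for several lags. A synthetic random-walk
# signal stands in for the turbulence data.

rng = np.random.default_rng(1)
w = np.cumsum(rng.normal(size=2**16))       # toy Brownian-like signal

def structure_function(w, taus, p):
    return np.array([np.mean(np.abs(w[tau:] - w[:-tau]) ** p) for tau in taus])

taus = np.unique(np.logspace(0, 3, 20).astype(int))
for p in (2, 4):
    Sp = structure_function(w, taus, p)
    # Scaling exponent zeta_p from a log-log fit: S_p ~ tau^zeta_p.
    zeta = np.polyfit(np.log(taus), np.log(Sp), 1)[0]
    print(f"p={p}: fitted scaling exponent ~ {zeta:.2f}")
```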

  12. Weighted tunable clustering in local-world networks with increment behavior

    International Nuclear Information System (INIS)

    Ma, Ying-Hong; Li, Huijia; Zhang, Xiao-Dong

    2010-01-01

    Since some realistic networks are influenced not only by increment behavior but also by a tunable clustering mechanism as new nodes are added, it is interesting to characterize a model for such networks. In this paper, a weighted local-world model, which incorporates increment behavior and a tunable clustering mechanism, is proposed, and its properties, such as the degree distribution and clustering coefficient, are investigated. Numerical simulations fit the model well and display good right-skewed scale-free properties. Furthermore, the correlation of vertices in our model is studied, which shows an assortative property. The epidemic spreading process with a weighted transmission rate on the model shows that the tunable clustering behavior has a great impact on the epidemic dynamics.

  13. Sufficient conditions for a period incrementing big bang bifurcation in one-dimensional maps

    Science.gov (United States)

    Avrutin, V.; Granados, A.; Schanz, M.

    2011-09-01

    Typically, big bang bifurcation occurs for one (or higher)-dimensional piecewise-defined discontinuous systems whenever two border collision bifurcation curves collide transversely in the parameter space. At that point, two (feasible) fixed points collide with one boundary in state space and become virtual, and, in the one-dimensional case, the map becomes continuous. Depending on the properties of the map near the codimension-two bifurcation point, there exist different scenarios regarding how the infinite number of periodic orbits are born, mainly the so-called period adding and period incrementing. In our work we prove that, in order to undergo a big bang bifurcation of the period incrementing type, it is sufficient for a piecewise-defined one-dimensional map that the colliding fixed points are attractive and with associated eigenvalues of different signs.

  14. Classifier-ensemble incremental-learning procedure for nuclear transient identification at different operational conditions

    International Nuclear Information System (INIS)

    Baraldi, Piero; Razavi-Far, Roozbeh; Zio, Enrico

    2011-01-01

    An important requirement for the practical implementation of empirical diagnostic systems is the capability of classifying transients in all plant operational conditions. The present paper proposes an approach based on an ensemble of classifiers for incrementally learning transients under different operational conditions. New classifiers are added to the ensemble where transients occurring in new operational conditions are not satisfactorily classified. The construction of the ensemble is made by bagging; the base classifier is a supervised Fuzzy C Means (FCM) classifier whose outcomes are combined by majority voting. The incremental learning procedure is applied to the identification of simulated transients in the feedwater system of a Boiling Water Reactor (BWR) under different reactor power levels.

  15. A Batch-Incremental Video Background Estimation Model using Weighted Low-Rank Approximation of Matrices

    KAUST Repository

    Dutta, Aritra; Li, Xin; Richtarik, Peter

    2017-01-01

    Principal component pursuit (PCP) is a state-of-the-art approach for background estimation problems. Due to their higher computational cost, PCP algorithms, such as robust principal component analysis (RPCA) and its variants, are not feasible in processing high definition videos. To avoid the curse of dimensionality in those algorithms, several methods have been proposed to solve the background estimation problem in an incremental manner. We propose a batch-incremental background estimation model using a special weighted low-rank approximation of matrices. Through experiments with real and synthetic video sequences, we demonstrate that our method is superior to the state-of-the-art background estimation algorithms such as GRASTA, ReProCS, incPCP, and GFL.

  16. The scope of application of incremental rapid prototyping methods in foundry engineering

    Directory of Open Access Journals (Sweden)

    M. Stankiewicz

    2010-01-01

    The article presents the scope of application of selected incremental Rapid Prototyping methods in the process of manufacturing casting models, casting moulds and casts. The Rapid Prototyping methods (SL, SLA, FDM, 3DP, JS) are predominantly used for the production of models and model sets for casting moulds. The Rapid Tooling methods, such as ZCast-3DP, ProMetalRCT and VoxelJet, enable the fabrication of casting moulds in an incremental process. The application of the RP methods in cast production makes it possible to speed up the prototype preparation process. This is particularly vital for elements of complex shapes. The time required for the manufacture of the model, the mould and the cast proper may vary from a few to several dozen hours.

  17. Incremental validity of the Zulliger and Pfister in the context of drug addiction

    Directory of Open Access Journals (Sweden)

    Renata da Rocha Campos Franco

    2012-04-01

    The objective of this research was to verify the incremental validity of two projective techniques through an assessment of the personality of 20 chemically dependent patients. The personalities of ten Brazilian alcohol-dependent and ten French heroin-dependent patients, who were undergoing drug detoxification treatment in specialized centers in Brazil or France, were evaluated from the perspective of phenomeno-structural psychopathology, a theoretical approach of French origin that understands an individual's mental functioning through the way time and space are experienced. The space-time relationship and the personality were understood through the way the participants expressed themselves verbally and through how they saw and constructed the images of the Zulliger and Pfister projective techniques. The results showed coherence between the information generated by the instruments, demonstrating that the Zulliger and Pfister, when applied together and interpreted by the phenomeno-structural method, were effective in revealing the subjects' experiences of space and time.

  18. Top-down attention based on object representation and incremental memory for knowledge building and inference.

    Science.gov (United States)

    Kim, Bumhwi; Ban, Sang-Woo; Lee, Minho

    2013-10-01

    Humans can efficiently perceive arbitrary visual objects based on an incremental learning mechanism with selective attention. This paper proposes a new task-specific top-down attention model that locates a target object based on its form and color representation, along with a bottom-up saliency based on the relativity of primitive visual features and several memory modules. In the proposed model, top-down bias signals corresponding to the target form and color features are generated; these draw preferential attention to the desired object through the proposed selective attention model in concomitance with the bottom-up saliency process. The object form and color representation and the memory modules have an incremental learning mechanism together with a proper object feature representation scheme. The proposed model includes a Growing Fuzzy Topology Adaptive Resonance Theory (GFTART) network which plays two important roles in object color- and form-biased attention: one is to incrementally learn and memorize color and form features of various objects, and the other is to generate a top-down bias signal to localize a target object by focusing on the candidate local areas. Moreover, the GFTART network can be utilized for knowledge inference, which enables the perception of new unknown objects on the basis of the object form and color features stored in the memory during training. Experimental results show that the proposed model is successful in focusing on the specified target objects, in addition to the incremental representation and memorization of various objects in natural scenes. In addition, the proposed model properly infers new unknown objects based on the form and color features of previously trained objects.

  19. Translation of incremental talk test responses to steady-state exercise training intensity.

    Science.gov (United States)

    Lyon, Ellen; Menke, Miranda; Foster, Carl; Porcari, John P; Gibson, Mark; Bubbers, Terresa

    2014-01-01

    The Talk Test (TT) is a submaximal, incremental exercise test that has been shown to be useful in prescribing exercise training intensity. It is based on a subject's ability to speak comfortably during exercise. This study defined the amount of reduction in absolute workload intensity from an incremental exercise test using the TT needed to give an appropriate absolute training intensity for cardiac rehabilitation patients. Patients in an outpatient rehabilitation program (N = 30) performed an incremental exercise test with the TT given at every 2-minute stage. Patients rated their speech comfort after reciting a standardized paragraph. Anything other than a "yes" response was considered the "equivocal" stage, while all preceding stages were "positive" stages. The last stage with the unequivocally positive ability to speak was the Last Positive (LP) stage, and the two preceding stages were LP-1 and LP-2. Subsequently, three 20-minute steady-state training bouts were performed in random order at the absolute workloads of the LP, LP-1, and LP-2 stages of the incremental test. Speech comfort, heart rate (HR), and rating of perceived exertion (RPE) were recorded every 5 minutes. The 20-minute exercise training bout was completed fully at LP (n = 19), LP-1 (n = 28), and LP-2 (n = 30). Heart rate, RPE, and speech comfort were similar through the LP-1 and LP-2 tests, but the LP stage was markedly more difficult. Steady-state exercise training intensity was easily and appropriately prescribed at the intensities associated with the LP-1 and LP-2 stages of the TT. The LP stage may be too difficult for patients in a cardiac rehabilitation program.

  20. Integrated Personnel and Pay System-Army Increment 2 (IPPS-A Inc 2)

    Science.gov (United States)

    2016-03-01

    2016 Major Automated Information System Annual Report: Integrated Personnel and Pay System-Army Increment 2 (IPPS-A Inc 2). IPPS-A Inc 2 will account for status changes between Active, Reserve, and National Guard components, … system maintenance, and minimizing pay discrepancies. Date assigned: May 2, 2014.

  1. How to set the stage for a full-fledged clinical trial testing 'incremental haemodialysis'.

    Science.gov (United States)

    Casino, Francesco Gaetano; Basile, Carlo

    2017-07-21

    Most people who make the transition to maintenance haemodialysis (HD) therapy are treated with a fixed-dose thrice-weekly HD (3HD/week) regimen, without consideration of their residual kidney function (RKF). The RKF provides an effective and naturally continuous clearance of both small and middle molecules, plays a major role in metabolic homeostasis, nutritional status and cardiovascular health, and aids in fluid management. The RKF is associated with better patient survival and greater health-related quality of life. Its preservation is instrumental to the prescription of incremental (1HD/week to 2HD/week) HD. The recently heightened interest in incremental HD has been hindered by the current limitations of the urea kinetic model (UKM), which tends to overestimate the needed dialysis dose in the presence of a substantial RKF. A recent paper by Casino and Basile suggested a variable target model (VTM), which gives more clinical weight to the RKF and allows less frequent HD treatments at lower RKF, as opposed to the fixed target model, which is based on the wrong concept of clinical equivalence between renal and dialysis clearance. A randomized controlled trial (RCT) enrolling incident patients, comparing incremental HD (prescribed according to the VTM) with the standard 3HD/week schedule, and focused on hard outcomes, such as survival and the health-related quality of life of patients, is urgently needed. The first step in designing such a study is to compute the 'adequacy lines' and the associated fitting equations necessary for the most appropriate allocation of the patients to the two arms and their correct and safe follow-up. In conclusion, the potentially important clinical and financial implications of incremental HD render it highly promising and warrant RCTs. The UKM is the keystone for conducting such studies.

  2. Defense Enterprise Accounting and Management System-Increment 1 (DEAMS Inc 1)

    Science.gov (United States)

    2016-03-01

    2016 Major Automated Information System Annual Report: Defense Enterprise Accounting and Management System-Increment 1 (DEAMS Inc 1). DEAMS is intended to provide … information accurately and in conformance with Generally Accepted Accounting Principles, to comply with Congressional requirements of the Chief Financial … Date assigned: August 17, 2015.

  3. Just-in-Time Technology to Encourage Incremental, Dietary Behavior Change

    OpenAIRE

    Intille, Stephen S.; Kukla, Charles; Farzanfar, Ramesh; Bakr, Waseem

    2003-01-01

    Our multi-disciplinary team is developing mobile computing software that uses “just-in-time” presentation of information to motivate behavior change. Using a participatory design process, preliminary interviews have helped us to establish 10 design goals. We have employed some to create a prototype of a tool that encourages better dietary decision making through incremental, just-in-time motivation at the point of purchase.

  4. A Decision Support System to Choose Optimal Release Cycle Length in Incremental Software Development Environments

    OpenAIRE

    Suman, Avnish Chandra; Mishra, Saraswati; Anand, Abhinav

    2016-01-01

    In the last few years, many software vendors have started delivering projects incrementally with very short release cycles. The best examples of the success of this approach are the Ubuntu operating system, which has a 6-month release cycle, and popular web browsers such as Google Chrome, Opera, and Mozilla Firefox. However, there is very little knowledge available to project managers to validate the chosen release cycle length. We propose a decision support system that...

  5. Detection of milled 100Cr6 steel surface by eddy current and incremental permeance methods

    Czech Academy of Sciences Publication Activity Database

    Perevertov, Oleksiy; Neslušan, M.; Stupakov, Alexandr

    2017-01-01

    Vol. 87, Apr (2017), pp. 15-23 ISSN 0963-8695 R&D Projects: GA ČR GB14-36566G; GA ČR GA13-18993S Institutional support: RVO:68378271 Keywords: Eddy currents * hard milling * incremental permeance * magnetic materials * surface characterization Subject RIV: JB - Sensors, Measurement, Regulation OBOR OECD: Electrical and electronic engineering Impact factor: 2.726, year: 2016

  6. Technological aspects regarding machining the titanium alloys by means of incremental forming

    Directory of Open Access Journals (Sweden)

    Bologa Octavian

    2017-01-01

    Titanium alloys are materials with reduced formability due to their low plasticity. However, today there is high demand for their use in the automotive industry and in the bio-medical industry, for prosthetic devices. This paper presents some technological aspects regarding the workability of titanium alloys by means of incremental forming. The research presented in this paper aimed to demonstrate that parts made from these materials can be formed at room temperature under certain technological conditions.

  7. Diagnostics of blood flow rate in breast tumors during blood pressure increase

    International Nuclear Information System (INIS)

    Pohlodek, K.; Sohn, Ch.

    1998-01-01

    54 patients with tumors of the mammary glands evident on ultrasonography (14 patients with benign tumors and 40 patients with carcinoma) were examined by angiography to assess the blood flow rate in the tumor during an increase of blood pressure. On evaluation of the findings, four characteristic curves were obtained: the first type was typical for malignant tumors; the second type was characteristic of benign findings; and the third and fourth types were non-specific. (authors)

  8. Just-in-Time Technology to Encourage Incremental, Dietary Behavior Change

    Science.gov (United States)

    Intille, Stephen S.; Kukla, Charles; Farzanfar, Ramesh; Bakr, Waseem

    2003-01-01

    Our multi-disciplinary team is developing mobile computing software that uses “just-in-time” presentation of information to motivate behavior change. Using a participatory design process, preliminary interviews have helped us to establish 10 design goals. We have employed some to create a prototype of a tool that encourages better dietary decision making through incremental, just-in-time motivation at the point of purchase. PMID:14728379

  9. Cross-correlation of instantaneous phase increments in pressure-flow fluctuations: Applications to cerebral autoregulation

    Science.gov (United States)

    Chen, Zhi; Hu, Kun; Stanley, H. Eugene; Novak, Vera; Ivanov, Plamen Ch.

    2006-03-01

    We investigate the relationship between the blood flow velocities (BFV) in the middle cerebral arteries and beat-to-beat blood pressure (BP) recorded from a finger in healthy and post-stroke subjects during the quasisteady state after perturbation for four different physiologic conditions: supine rest, head-up tilt, hyperventilation, and CO2 rebreathing in upright position. To evaluate whether instantaneous BP changes in the steady state are coupled with instantaneous changes in the BFV, we compare dynamical patterns in the instantaneous phases of these signals, obtained from the Hilbert transform, as a function of time. We find that in post-stroke subjects the instantaneous phase increments of BP and BFV exhibit well-pronounced patterns that remain stable in time for all four physiologic conditions, while in healthy subjects these patterns are different, less pronounced, and more variable. We propose an approach based on the cross-correlation of the instantaneous phase increments to quantify the coupling between BP and BFV signals. We find that the maximum correlation strength is different for the two groups and for the different conditions. For healthy subjects the amplitude of the cross-correlation between the instantaneous phase increments of BP and BFV is small and attenuates within 3-5 heartbeats. In contrast, for post-stroke subjects, this amplitude is significantly larger and cross-correlations persist up to 20 heartbeats. Further, we show that the instantaneous phase increments of BP and BFV are cross-correlated even within a single heartbeat cycle. We compare the results of our approach with three complementary methods: direct BP-BFV cross-correlation, transfer function analysis, and phase synchronization analysis. Our findings provide insight into the mechanism of cerebral vascular control in healthy subjects, suggesting that this control mechanism may involve rapid adjustments (within a heartbeat) of the cerebral vessels, so that BFV remains steady in
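    A brief sketch of the core computation described above (instantaneous phase increments via the Hilbert transform, then their cross-correlation over lags); signal names are placeholders, and evenly sampled, appropriately filtered beat-to-beat series are assumed.

        import numpy as np
        from scipy.signal import hilbert

        def phase_increments(x):
            """Instantaneous phase increments of a signal via the Hilbert transform."""
            phase = np.unwrap(np.angle(hilbert(x)))
            return np.diff(phase)

        def phase_increment_xcorr(bp, bfv, max_lag=100):
            """Normalized cross-correlation of phase increments of two
            equal-length signals (e.g. blood pressure and flow velocity)."""
            a, b = phase_increments(bp), phase_increments(bfv)
            a = (a - a.mean()) / a.std()
            b = (b - b.mean()) / b.std()
            lags = np.arange(-max_lag, max_lag + 1)
            c = [np.mean(a[max(0, -k):len(a) - max(0, k)] *
                         b[max(0, k):len(b) - max(0, -k)]) for k in lags]
            return lags, np.array(c)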

  10. StreamAR: incremental and active learning with evolving sensory data for activity recognition

    OpenAIRE

    Abdallah, Z.; Gaber, M.; Srinivasan, B.; Krishnaswamy, S.

    2012-01-01

    Activity recognition focuses on inferring current user activities by leveraging sensory data available in today's sensor-rich environments. Supervised learning has been applied pervasively for activity recognition. Typical activity recognition techniques process sensory data with point-by-point approaches. In this paper, we propose a novel cluster-based classification for activity recognition systems, termed StreamAR. The system incorporates incremental and active learning for mining user ...

  11. Development of the Nonstationary Incremental Analysis Update Algorithm for Sequential Data Assimilation System

    Directory of Open Access Journals (Sweden)

    Yoo-Geun Ham

    2016-01-01

    This study introduces a modified version of the incremental analysis updates (IAU), called the nonstationary IAU (NIAU), to improve the assimilation accuracy of the IAU while keeping the continuity of the analysis. Similar to the IAU, the NIAU is designed to add analysis increments at every model time step to improve continuity in intermittent data assimilation. However, unlike the IAU, the NIAU procedure uses time-evolved forcing, obtained with the forward operator, as corrections to the model. The solution of the NIAU is superior, in terms of the accuracy of the analysis field, to that of the forward IAU, in which the analysis is performed at the beginning of the time window for adding the IAU forcing. This is because, in linear systems, the NIAU solution equals that of an intermittent data assimilation method at the end of the assimilation interval. To obtain the filtering property in the NIAU, a forward operator to propagate the increment is reconstructed with only the dominant singular vectors. An illustration of these advantages of the NIAU is given using the simple 40-variable Lorenz model.
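    A toy sketch of the two ideas on a scalar linear model (all names and parameters are illustrative assumptions, not the paper's setup): classical IAU spreads the analysis increment evenly over the window, while an NIAU-style update first propagates each increment slice with the forward operator.

        import numpy as np

        DT, LAM = 0.1, -0.5          # toy linear model dx/dt = LAM * x

        def model_step(x):
            return x + DT * LAM * x  # forward Euler step

        def iau_window(x, increment, n_steps=10):
            """Classical IAU: add increment/n_steps at every model step."""
            for _ in range(n_steps):
                x = model_step(x) + increment / n_steps
            return x

        def niau_window(x, increment, n_steps=10):
            """NIAU-style: each increment slice is first propagated k steps
            by the (linear) forward operator before being added."""
            for k in range(n_steps):
                x = model_step(x) + (1 + DT * LAM) ** k * increment / n_steps
            return x

        x0, inc = 1.0, 0.3
        print(iau_window(x0, inc), niau_window(x0, inc))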

  12. Two-Point Incremental Forming with Partial Die: Theory and Experimentation

    Science.gov (United States)

    Silva, M. B.; Martins, P. A. F.

    2013-04-01

    This paper proposes a new level of understanding of two-point incremental forming (TPIF) with partial die by means of a combined theoretical and experimental investigation. The theoretical developments include an innovative extension of the analytical model for rotational symmetric single point incremental forming (SPIF), originally developed by the authors, to address the influence of the major operating parameters of TPIF and to successfully explain the differences in formability between SPIF and TPIF. The experimental work comprised the mechanical characterization of the material and the determination of its formability limits at necking and fracture by means of circle grid analysis and benchmark incremental sheet forming tests. Results show the adequacy of the proposed analytical model to handle the deformation mechanics of SPIF and TPIF with partial die and demonstrate that neck formation is suppressed in TPIF, so that traditional forming limit curves are inapplicable to describe failure and must be replaced by fracture forming limits derived from ductile damage mechanics. The overall geometric accuracy of sheet metal parts produced by TPIF with partial die is found to be better than that of parts fabricated by SPIF due to smaller elastic recovery upon unloading.

  13. An application of the J-integral to an incremental analysis of blunting crack behavior

    International Nuclear Information System (INIS)

    Merkle, J.G.

    1989-01-01

    This paper describes an analytical approach to estimating the elastic-plastic stresses and strains near the tip of a blunting crack with a finite root radius. Rice's original derivation of the path-independent J-integral considered the possibility of a finite crack tip root radius. For this problem Creager's elastic analysis gives the relation between the stress intensity factor $K_I$ and the near-tip stresses. It can be shown that the relation $K_I^2 = E'J$ holds when the root radius is finite. Recognizing that elastic-plastic behavior is incrementally linear then allows a derivation to be performed for a bielastic specimen having a crack tip region of reduced modulus, and the result differentiated to estimate elastic-plastic behavior. The result is the incremental form of Neuber's equation. This result does not require the assumption of any particular stress-strain relation. However, by assuming a pure power-law stress-strain relation and using Ilyushin's principle, the ordinary deformation theory form of Neuber's equation, $K_\sigma K_\varepsilon = K_t^2$, is obtained. Applications of the incremental form of Neuber's equation have already been made to fatigue and fracture analysis. This paper helps to provide a theoretical basis for these methods previously considered semiempirical. 26 refs., 4 figs
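    For reference, the two relations in the abstract, written out with the stress and strain concentration factors; the incremental counterpart on the second line is the form commonly used in fatigue analysis and is given here only as a sketch, since the abstract does not state the paper's exact expression.

        \[
        K_I^2 = E'J, \qquad K_\sigma K_\varepsilon = K_t^2, \qquad
        K_\sigma = \frac{\sigma}{S}, \quad K_\varepsilon = \frac{\varepsilon}{e}
        \]
        \[
        \Delta\sigma \, \Delta\varepsilon = K_t^2 \, \Delta S \, \Delta e
        \]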

  14. Performance and delay analysis of hybrid ARQ with incremental redundancy over double rayleigh fading channels

    KAUST Repository

    Chelli, Ali

    2014-11-01

    In this paper, we study the performance of hybrid automatic repeat request (HARQ) with incremental redundancy over double Rayleigh channels, a common model for the fading amplitude of vehicle-to-vehicle communication systems. We investigate the performance of HARQ from an information theoretic perspective. Analytical expressions are derived for the ε-outage capacity, the average number of transmissions, and the average transmission rate of HARQ with incremental redundancy assuming a maximum number of HARQ rounds. Moreover, we evaluate the delay experienced by Poisson arriving packets for HARQ with incremental redundancy. We provide analytical expressions for the expected waiting time, the packet's sojourn time in the queue, the average consumed power, and the energy efficiency. In our study, the communication rate per HARQ round is adjusted to the average signal-to-noise ratio (SNR) such that a target outage probability is not exceeded. This setting conforms with communication systems in which a quality of service is expected regardless of the channel conditions. Our analysis underscores the importance of HARQ in improving the spectral efficiency and reliability of communication systems. We demonstrate as well that the explored HARQ scheme achieves full diversity. Additionally, we investigate the tradeoff between energy efficiency and spectral efficiency.
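    A Monte Carlo sketch of the setting described (not the paper's analytical derivation): incremental redundancy accumulates mutual information across rounds over a double-Rayleigh fade, and outage is declared if the target rate is still unmet after the maximum number of rounds. Rates and SNRs below are illustrative.

        import numpy as np

        def harq_ir_double_rayleigh(snr_db=10.0, rate=2.0, max_rounds=4,
                                    n_trials=200_000, seed=0):
            """Estimate outage probability and average number of HARQ rounds."""
            rng = np.random.default_rng(seed)
            snr = 10 ** (snr_db / 10)
            # double Rayleigh: channel amplitude is a product of two Rayleigh fades
            h = rng.rayleigh(np.sqrt(0.5), (n_trials, max_rounds))
            g = rng.rayleigh(np.sqrt(0.5), (n_trials, max_rounds))
            mi = np.log2(1 + snr * (h * g) ** 2)     # mutual information per round
            acc = np.cumsum(mi, axis=1)              # IR accumulates information
            done = acc >= rate
            failed = ~done[:, -1]
            rounds = np.where(failed, max_rounds, done.argmax(axis=1) + 1)
            return failed.mean(), rounds.mean()

        print(harq_ir_double_rayleigh())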

  15. The use of single point incremental forming for customized implants of unicondylar knee arthroplasty: a review

    Directory of Open Access Journals (Sweden)

    Pankaj Kailasrao Bhoyar

    Introduction: Implantable devices have an enormous market. These products are mostly made by traditional manufacturing processes, but for custom-made implants Incremental Sheet Forming is a paramount alternative. Single Point Incremental Forming (SPIF) is a manufacturing process used to form intricate, asymmetrical components. It forms the component by stretching and bending while maintaining the material's crystal structure. The SPIF process can be performed on a conventional Computer Numerical Control (CNC) milling machine. Review: This review paper elaborates on the various manufacturing processes applied to various biocompatible metallic and nonmetallic customised implantable devices. Conclusion: The Ti-6Al-4V alloy is broadly used for biomedical implants, but the vanadium in this alloy is toxic, so the alloy is not well suited for implants. The attention of researchers has turned towards non-toxic, suitable biocompatible materials, and for this reason novel approaches have been developed to enhance the mechanical properties of such materials. The development of the incremental forming technique can improve the formability of existing alloys and may meet the current strict requirements for the performance of dies and punches.

  16. Adults with initial metabolic syndrome have altered muscle deoxygenation during incremental exercise.

    Science.gov (United States)

    Machado, Alessandro da Costa; Barbosa, Thales Coelho; Kluser Sales, Allan Robson; de Souza, Marcio Nogueira; da Nóbrega, Antonio Claudio Lucas; Silva, Bruno Moreira

    2017-02-01

    Reduced aerobic power is independently associated with metabolic syndrome (MetS) incidence and prevalence in adults. This study investigated whether muscle deoxygenation (a proxy of microvascular O2 extraction) during incremental exercise is altered in MetS and associated with reduced peak oxygen consumption (VO2peak). Twelve men with initial MetS (no overt diseases and medication-naive; mean ± SD, age 38 ± 7 years) and 12 healthy controls (HCs) (34 ± 7 years) completed an incremental cycling test to exhaustion, in which pulmonary ventilation and gas exchange (metabolic analyzer), as well as vastus lateralis deoxygenation (near-infrared spectroscopy), were measured. Subjects with MetS, in contrast to HCs, showed lower VO2peak normalized to total lean mass, a similar VO2 response to exercise, and an earlier break point (BP) in muscle deoxygenation. Consequently, the deoxygenation slope from the BP to peak exercise was greater. Furthermore, absolute VO2peak was positively associated with the BP in correlations adjusted for total lean mass. MetS, without overt diseases, altered the kinetics of muscle deoxygenation during incremental exercise, particularly at high-intensity exercise. Therefore, the balance between utilization and delivery of O2 within skeletal muscle is impaired early in the natural history of MetS, which may contribute to the reduction in aerobic power.

  17. Pornographic image recognition and filtering using incremental learning in compressed domain

    Science.gov (United States)

    Zhang, Jing; Wang, Chao; Zhuo, Li; Geng, Wenhao

    2015-11-01

    With the rapid development and popularity of networks, their openness, anonymity, and interactivity have led to the spread and proliferation of pornographic images on the Internet, which do great harm to adolescents' physical and mental health. With the establishment of image compression standards, pornographic images are mainly stored in compressed formats. Therefore, how to efficiently filter pornographic images is one of the challenging issues for information security. A pornographic image recognition and filtering method in the compressed domain is proposed using incremental learning, which includes the following steps: (1) low-resolution (LR) images are first reconstructed from the compressed stream of pornographic images; (2) visual words are created from the LR image to represent the pornographic image; and (3) incremental learning is adopted to continuously adjust the classification rules to recognize new pornographic image samples, after the covering algorithm is utilized to train on and recognize the visual words in order to build the initial classification model of pornographic images. The experimental results show that the proposed pornographic image recognition method using incremental learning achieves a higher recognition rate while requiring less recognition time in the compressed domain.

  18. First UHF Implementation of the Incremental Scheme for Open-Shell Systems.

    Science.gov (United States)

    Anacker, Tony; Tew, David P; Friedrich, Joachim

    2016-01-12

    The incremental scheme makes it possible to compute CCSD(T) correlation energies to high accuracy for large systems. We present the first extension of this fully automated black-box approach to open-shell systems using an Unrestricted Hartree-Fock (UHF) wave function, extending the efficient domain-specific basis set approach to handle open-shell references. We test our approach on a set of organic and metal organic structures and molecular clusters and demonstrate standard deviations from canonical CCSD(T) values of only 1.35 kJ/mol using a triple ζ basis set. We find that the incremental scheme is significantly more cost-effective than the canonical implementation even for relatively small systems and that the ease of parallelization makes it possible to perform high-level calculations on large systems in a few hours on inexpensive computers. We show that the approximations that make our approach widely applicable are significantly smaller than both the basis set incompleteness error and the intrinsic error of the CCSD(T) method, and we further demonstrate that incremental energies can be reliably used in extrapolation schemes to obtain near complete basis set limit CCSD(T) reaction energies for large systems.
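    For context, the incremental scheme referred to here expands the correlation energy in one-, two-, three-body (etc.) increments of orbital domains; a standard, generic form of the expansion (not specific to this paper's UHF variant) is:

        \[
        E_{\text{corr}} = \sum_{i} \varepsilon_i + \sum_{i<j} \Delta\varepsilon_{ij}
        + \sum_{i<j<k} \Delta\varepsilon_{ijk} + \cdots
        \]
        \[
        \Delta\varepsilon_{ij} = \varepsilon_{ij} - \varepsilon_i - \varepsilon_j, \qquad
        \Delta\varepsilon_{ijk} = \varepsilon_{ijk}
        - \Delta\varepsilon_{ij} - \Delta\varepsilon_{ik} - \Delta\varepsilon_{jk}
        - \varepsilon_i - \varepsilon_j - \varepsilon_k
        \]

    where ε_X denotes the correlation energy obtained by correlating only the orbitals of domain(s) X; truncating the series at low order is what makes CCSD(T)-quality energies affordable for large systems.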

  19. Cost of Incremental Expansion of an Existing Family Medicine Residency Program.

    Science.gov (United States)

    Ashkin, Evan A; Newton, Warren P; Toomey, Brian; Lingley, Ronald; Page, Cristen P

    2017-07-01

    Expanding residency training programs to address shortages in the primary care workforce is challenged by the present graduate medical education (GME) environment. The Medicare funding cap on new GME positions and reductions in the Health Resources and Services Administration (HRSA) Teaching Health Center (THC) GME program require innovative solutions to support primary care residency expansion. Sparse literature exists to assist in predicting the actual cost of incremental expansion of a family medicine residency program without federal or state GME support. In 2011, a community health center-academic medical partnership (CHAMP) was formed and created a THC as a training site for expansion of an existing family medicine residency program. The cost of expansion was a critical factor, as no federal GME funding or HRSA THC GME program support was available. Initial start-up costs were supported by a federal grant and local foundations. Careful financial analysis of the expansion has provided actual costs per resident of the incremental expansion of the residency. Results: The CHAMP created a new THC and expanded the residency from eight to ten residents per year. The cost of expansion was approximately $72,000 per resident per year. The cost of incremental expansion of our residency program in the CHAMP model was more than 50% less than the recently reported cost of training in the HRSA THC GME program.

  20. Optimization of Surface Roughness and Wall Thickness in Dieless Incremental Forming Of Aluminum Sheet Using Taguchi

    Science.gov (United States)

    Hamedon, Zamzuri; Kuang, Shea Cheng; Jaafar, Hasnulhadi; Azhari, Azmir

    2018-03-01

    Incremental sheet forming is a versatile sheet metal forming process in which a sheet metal is formed into its final shape by a series of localized deformations without a specialised die. However, it still has many shortcomings that need to be overcome, such as geometric accuracy, surface roughness, formability, and forming speed. This project focuses on minimising the surface roughness of aluminium sheet and improving its thickness uniformity in incremental sheet forming via optimisation of wall angle, feed rate, and step size. In addition, the effects of wall angle, feed rate, and step size on the surface roughness and thickness uniformity of aluminium sheet were investigated. From the results, it was observed that surface roughness and thickness uniformity varied inversely due to the formation of surface waviness. An increase in feed rate and a decrease in step size produced lower surface roughness, while a uniform thickness reduction was obtained by reducing the wall angle and step size. Using Taguchi analysis, the optimum parameters for minimum surface roughness and uniform thickness reduction of aluminium sheet were determined. The findings of this project help to reduce the time needed to optimise surface roughness and thickness uniformity in incremental sheet forming.

  1. Adaptive scallop height tool path generation for robot-based incremental sheet metal forming

    Science.gov (United States)

    Seim, Patrick; Möllensiep, Dennis; Störkle, Denis Daniel; Thyssen, Lars; Kuhlenkötter, Bernd

    2016-10-01

    Incremental sheet metal forming is an emerging process for the production of individualized products or prototypes in low batch sizes and with short times to market. In these processes, the desired shape is produced by the incremental inward motion of the workpiece-independent forming tool in the depth direction and its movement along the contour in the lateral direction. Because the shape is produced this way, tool path generation is a key factor for, e.g., the resulting geometric accuracy, the resulting surface quality, and the working time. This paper presents an innovative tool path generation approach, based on a commercial milling CAM package, that considers both surface quality and working time. The approach offers the ability to define a specific scallop height, as an indicator of surface quality, for specific faces of a component. Moreover, it decreases the working time required for the production of the entire component compared to the use of a commercial software package without this adaptive approach. Different forming experiments have been performed to verify the newly developed tool path generation. Above all, this approach serves to resolve the existing trade-off between working time and surface quality in incremental sheet metal forming.
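    The geometric core of a scallop-height constraint can be sketched briefly: for a ball-end (hemispherical) tool of radius R on a locally flat face, a cusp of height h between adjacent passes fixes the lateral step-over. The function below is an illustrative sketch of that relation, not the paper's CAM implementation.

        import math

        def stepover_for_scallop(tool_radius, scallop_height):
            """Step-over s leaving a cusp of height h between adjacent passes:
            h = R - sqrt(R^2 - (s/2)^2)  =>  s = 2*sqrt(2*R*h - h^2)."""
            R, h = tool_radius, scallop_height
            if not 0 < h < R:
                raise ValueError("scallop height must satisfy 0 < h < R")
            return 2 * math.sqrt(2 * R * h - h * h)

        # e.g. a 5 mm tool radius and a 10 micrometre scallop target on a face
        print(stepover_for_scallop(5.0, 0.01))   # ~0.63 mm step-over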

  2. Incremental View Maintenance for Deductive Graph Databases Using Generalized Discrimination Networks

    Directory of Open Access Journals (Sweden)

    Thomas Beyhl

    2016-12-01

    Nowadays, graph databases are employed when relationships between entities are in the scope of database queries, to avoid the performance-critical join operations of relational databases. Graph queries are used to query and modify graphs stored in graph databases. Graph queries employ graph pattern matching, which is NP-complete for subgraph isomorphism. Graph database views can be employed that keep ready answers, in the form of precalculated graph pattern matches, for frequently stated and complex graph queries, to increase query performance. However, such graph database views must be kept consistent with the graphs stored in the graph database. In this paper, we describe how to use incremental graph pattern matching as a technique for maintaining graph database views. We present an incremental maintenance algorithm for graph database views which works for imperatively and declaratively specified graph queries. The evaluation shows that our maintenance algorithm scales as the number of nodes and edges stored in the graph database increases. Furthermore, our evaluation shows that our approach can outperform existing approaches for the incremental maintenance of graph query results.
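    The flavor of incremental view maintenance can be shown on a toy pattern: materialize all matches of a two-edge path a -> b -> c and, on each edge change, touch only the matches that involve the changed edge. This is a deliberately simplified sketch; it does not model the paper's generalized discrimination networks.

        from collections import defaultdict

        class PathViewMaintainer:
            """Incrementally maintained view: all matches of a -> b -> c."""
            def __init__(self):
                self.succ = defaultdict(set)
                self.pred = defaultdict(set)
                self.matches = set()                  # (a, b, c) triples

            def add_edge(self, u, v):
                # only matches using the new edge (u, v) can appear
                for w in self.pred[u]:
                    self.matches.add((w, u, v))       # (u, v) is the second hop
                for x in self.succ[v]:
                    self.matches.add((u, v, x))       # (u, v) is the first hop
                self.succ[u].add(v)
                self.pred[v].add(u)

            def remove_edge(self, u, v):
                self.succ[u].discard(v)
                self.pred[v].discard(u)
                self.matches = {m for m in self.matches
                                if (m[0], m[1]) != (u, v) and (m[1], m[2]) != (u, v)}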

  3. A numerical analysis on forming limits during spiral and concentric single point incremental forming

    Science.gov (United States)

    Gipiela, M. L.; Amauri, V.; Nikhare, C.; Marcondes, P. V. P.

    2017-01-01

    Sheet metal forming is one of the major manufacturing industries, building numerous parts for the aerospace, automotive and medical industries. Due to the high demand in the vehicle industry and environmental regulations on fuel consumption, researchers are devising new methods to build these parts with energy-efficient sheet metal forming processes, instead of the conventional punch and die, in order to achieve lightweight parts. One of the most recognized manufacturing processes in this category is Single Point Incremental Forming (SPIF). SPIF is a die-less sheet metal forming process in which a single-point tool incrementally forces any single point of the sheet metal into the plastic deformation zone at any process time. In the present work, the finite element method (FEM) is applied to analyze the forming limits of a high-strength low-alloy steel formed by single point incremental forming (SPIF) with spiral and concentric tool paths. SPIF numerical simulations were modeled with 24 and 29 mm cup depths, and the results were compared with Nakajima results obtained by experiments and FEM. It was found that the cup formed with the Nakajima tool failed at 24 mm, while cups formed by SPIF surpassed the limit for both depths with both profiles. It was also noticed that the strains achieved with the concentric profile are lower than those with the spiral profile.

  4. Exploiting Outage and Error Probability of Cooperative Incremental Relaying in Underwater Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Hina Nasir

    2016-07-01

    This paper makes a two-fold contribution for Underwater Wireless Sensor Networks (UWSNs): a performance analysis of incremental relaying in terms of outage and error probability, and, based on the analysis, a proposition of two new cooperative routing protocols. For the first contribution, a three-step procedure is carried out: a system model is presented, the number of available relays is determined, and, based on the cooperative incremental retransmission methodology, closed-form expressions for outage and error probability are derived. For the second contribution, Adaptive Cooperation in Energy (ACE) efficient depth-based routing and Enhanced-ACE (E-ACE) are presented. In the proposed model, a feedback mechanism indicates the success or failure of data transmission. If the direct transmission is successful, there is no need for relaying by cooperative relay nodes. In case of failure, all the available relays retransmit the data one by one until the desired signal quality is achieved at the destination. Simulation results show that ACE and E-ACE significantly improve network performance, i.e., throughput, when compared with other incremental relaying protocols like Cooperative Automatic Repeat reQuest (CARQ). E-ACE and ACE achieve 69% and 63% more throughput, respectively, as compared to CARQ in a harsh underwater environment.

  5. Performance Analysis of Selective Decode-and-Forward Multinode Incremental Relaying with Maximal Ratio Combining

    KAUST Repository

    Hadjtaieb, Amir

    2013-09-12

    In this paper, we propose an incremental multinode relaying protocol with an arbitrary number N of relay nodes that allows an efficient use of the channel spectrum. The destination combines the received signals from the source and the relays using maximal ratio combining (MRC). The transmission ends successfully once the accumulated signal-to-noise ratio (SNR) exceeds a predefined threshold. The number of relays participating in the transmission is adapted to the channel conditions based on feedback from the destination. The use of incremental relaying allows obtaining a higher spectral efficiency. Moreover, the symbol error probability (SEP) performance is enhanced by using MRC at the relays. The use of MRC at the relays implies that each relay overhears the signals from the source and all previous relays and combines them using MRC. The proposed protocol differs from most existing relaying protocols by the fact that it combines both incremental relaying and MRC at the relays for a multinode topology. Our analyses for a decode-and-forward mode show that: (i) compared to existing multinode relaying schemes, the proposed scheme can essentially achieve the same SEP performance but with a smaller average number of time slots; (ii) compared to schemes without MRC at the relays, the proposed scheme can achieve approximately a 3 dB gain.
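    A Monte Carlo sketch of the stopping rule described (relays transmit one by one until the MRC-accumulated SNR at the destination crosses a threshold); Rayleigh fading and perfect decode-and-forward are assumptions of the sketch, and MRC among the relays themselves is not modeled.

        import numpy as np

        def incremental_mrc(snr_th=8.0, avg_snr=5.0, n_relays=4,
                            n_trials=100_000, seed=0):
            """Return outage probability and average number of time slots."""
            rng = np.random.default_rng(seed)
            # slot 0: direct link; slots 1..N: relays (i.i.d. Rayleigh fading,
            # so the per-link instantaneous SNR is exponentially distributed)
            snrs = rng.exponential(avg_snr, size=(n_trials, n_relays + 1))
            acc = np.cumsum(snrs, axis=1)        # MRC accumulates SNR
            done = acc >= snr_th
            slots = np.where(done.any(axis=1), done.argmax(axis=1) + 1,
                             n_relays + 1)
            return (~done[:, -1]).mean(), slots.mean()

        print(incremental_mrc())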

  6. Stable isotope time series and dentin increments elucidate Pleistocene proboscidean paleobiology

    Science.gov (United States)

    Fisher, Daniel; Rountrey, Adam; Smith, Kathlyn; Fox, David

    2010-05-01

    Investigations of stable isotope composition of mineralized tissues have added greatly to our knowledge of past climates and dietary behaviors of organisms, even when they are implemented through 'bulk sampling', in which a single assay yields a single, time-averaged value. Likewise, the practice of 'sclerochronology', which documents periodic structural increments comprising a growth record for accretionary tissues, offers insights into rates of growth and age data at a scale of temporal resolution permitted by the nature of structural increments. We combine both of these approaches to analyze dental tissues of late Pleistocene proboscideans. Tusk dentin typically preserves a record of accretionary growth consisting of histologically distinct increments on daily, approximately weekly, and yearly time scales. Working on polished transverse or longitudinal sections, we mill out a succession of temporally controlled dentin samples bounded by clear structural increments with a known position in the sequence of tusk growth. We further subject each sample (or an aliquot thereof) to multiple compositional analyses - most frequently to assess δ18O and δ13C of hydroxyapatite carbonate, and δ13C and δ15N of collagen. This yields, for each animal and each series of years investigated, a set of parallel compositional time series with a temporal resolution of 1-2 months (or finer if we need additional precision). Patterns in variation of thickness of periodic sub-annual increments yield insight into intra-annual and inter-annual variation of tusk growth rate. This is informative even by itself, but it is still more valuable when coupled with compositional time series. Further, the controls on different stable isotope systems are sufficiently different that the data ensemble yields 'much more than the sum of its parts.' By assessing how compositions and growth rates covary, we monitor with greater confidence changes in local climate, diet, behavior, and health status. We

  7. Influence of the selection from incremental stages on lactate minimum intensity: a pilot study

    Directory of Open Access Journals (Sweden)

    Willian Eiji Miyagi

    2013-09-01

    The purposes of this study were to assess the influence of stage selection from the incremental phase, and of the use of peak lactate after hyperlactatemia induction, on the determination of the lactate minimum intensity (iLACmin). Twelve moderately active university students (23±5 years, 78.3±14.1 kg, 175.3±5.1 cm) performed a maximal incremental test to determine the respiratory compensation point (RCP) (initial intensity of 70 W and increments of 17.5 W every 2 minutes) and a lactate minimum test (induction with the Wingate test; the incremental phase started at 30 W below the RCP with increments of 10 W every 3 minutes) on a cycle ergometer. The iLACmin was determined using a second-order polynomial adjustment applying five exercise stage selections: 1) using all stages (iLACminP); 2) using all stages below and two stages above iLACminP (iLACminA); 3) using two stages below and all stages above iLACminP (iLACminB); 4) using the largest and same possible number of stages below and above iLACminP (iLACminI); 5) using all stages and peak lactate after hyperlactatemia induction (iLACminD). No differences were found between the iLACminP (138.2±30.2 W), iLACminA (139.1±29.1 W), iLACminB (135.3±14.2 W), iLACminI (138.6±20.5 W) and iLACminD (136.7±28.5 W) protocols, and a high level of agreement between these intensities and iLACminP was observed. Oxygen uptake, heart rate, rating of perceived exertion and lactate corresponding to these intensities were not different and were strongly correlated. However, the iLACminB presented the lowest success rate (66.7%). In conclusion, stage selection did not influence the determination of the iLACmin but did modify the success rate.

  8. Quasi-static incremental behavior of granular materials: Elastic-plastic coupling and micro-scale dissipation

    Science.gov (United States)

    Kuhn, Matthew R.; Daouadji, Ali

    2018-05-01

    The paper addresses a common assumption of elastoplastic modeling: that the recoverable, elastic strain increment is unaffected by alterations of the elastic moduli that accompany loading. This assumption is found to be false for a granular material, and discrete element (DEM) simulations demonstrate that granular materials are coupled materials at both micro- and macro-scales. Elasto-plastic coupling at the macro-scale is placed in the context of the thermomechanics framework of Tomasz Hueckel and Hans Ziegler, in which the elastic moduli are altered by irreversible processes during loading. This complex behavior is explored for multi-directional loading probes that follow an initial monotonic loading. An advanced DEM model is used in the study, with non-convex non-spherical particles and two different contact models: a conventional linear-frictional model and an exact implementation of the Hertz-like Cattaneo-Mindlin model. Orthotropic true-triaxial probes were used in the study (i.e., no direct shear strain), with tiny strain increments of 2×10⁻⁶. At the micro-scale, contact movements were monitored during small increments of loading and load-reversal, and the results show that these movements are not reversed by a reversal of strain direction; some contacts that were sliding during a loading increment continue to slide during reversal. The probes show that the coupled part of a strain increment, the difference between the recoverable (elastic) increment and its reversible part, must be considered when partitioning strain increments into elastic and plastic parts. Small increments of irreversible (and plastic) strain and of contact slipping and frictional dissipation occur for all directions of loading, and an elastic domain, if it exists at all, is smaller than the strain increment used in the simulations.

  9. Methods of determining incremental energy costs for economic dispatch and inter-utility interchange in Canadian utilities

    International Nuclear Information System (INIS)

    El-Hawary, M.E.; El-Hawary, F.; Mbamalu, G.A.N.

    1991-01-01

    A questionnaire was mailed to ten Canadian utilities to determine the methods the utilities use to determine the incremental cost of delivering energy at any time. The questionnaire was divided into three parts: generation, transmission and general. The generation section dealt with heat rates, fuel, operation and maintenance, startup and shutdown, and the method of prioritizing and economically evaluating interchange transactions. The transmission section dealt with the inclusion of incremental transmission system maintenance costs and the determination of transmission losses. The general section dealt with incremental cost aspects and various other economic considerations. A summary of the responses to the questionnaire is presented.

  10. PCA/INCREMENT MEMORY interface for analog processors on-line with PC-XT/AT IBM

    International Nuclear Information System (INIS)

    Biri, S.; Buttsev, V.S.; Molnar, J.; Samojlov, V.N.

    1989-01-01

    The functional and operational descriptions of the PCA/INCREMENT MEMORY interface are discussed. The following is accomplished with this unit: connection between the analogue signal processor and the PC; nuclear spectrum acquisition up to 2²⁴−1 counts/channel using an increment or decrement method; and data read/write from or to memory via the PC data bus during spectrum acquisition. The dual-ported memory organization is 4096×24 bit; the increment cycle time at a 4.77 MHz system clock frequency is 1.05 μs. 6 refs.; 2 figs.

  11. Data compression and genomes: a two-dimensional life domain map.

    Science.gov (United States)

    Menconi, Giulia; Benci, Vieri; Buiatti, Marcello

    2008-07-21

    We define the complexity of DNA sequences as the information content per nucleotide, calculated by means of a Lempel-Ziv data compression algorithm. It is possible to use the statistics of the complexity values of the functional regions of different complete genomes to distinguish among genomes of the different domains of life (Archaea, Bacteria and Eukarya). We focus on the distribution function of the complexity of non-coding regions. We show that the three domains may be plotted in separate regions within the two-dimensional space whose axes are the skewness coefficient and the kurtosis coefficient of the aforementioned distribution. Preliminary results on 15 genomes are presented.
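    To illustrate the kind of estimate involved, here is a minimal phrase-counting Lempel-Ziv factorization with a simple per-nucleotide normalization; the paper does not specify which LZ variant or normalization it uses, so both choices below are assumptions.

        import math

        def lz_phrase_count(seq):
            """Left-to-right factorization: each phrase is the shortest prefix
            of the remaining text that has not been produced before."""
            phrases, i = set(), 0
            while i < len(seq):
                j = i + 1
                while seq[i:j] in phrases and j <= len(seq):
                    j += 1
                phrases.add(seq[i:j])
                i = j
            return len(phrases)

        def complexity_per_nucleotide(seq):
            # c * log2(n) / n is one common normalization choice
            n = len(seq)
            return lz_phrase_count(seq) * math.log2(n) / n

        print(complexity_per_nucleotide("ACGTACGTACGTAAAA"))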

  12. Toward translational incremental similarity-based reasoning in breast cancer grading

    Science.gov (United States)

    Tutac, Adina E.; Racoceanu, Daniel; Leow, Wee-Keng; Müller, Henning; Putti, Thomas; Cretu, Vladimir

    2009-02-01

    One of the fundamental issues in bridging the gap between the proliferation of Content-Based Image Retrieval (CBIR) systems in the scientific literature and the deficiency of their usage in the medical community is the characteristic of CBIR of accessing information by images and/or text only. Yet the way physicians reason about patients leads intuitively to a case representation. Hence, a proper solution to overcome this gap is to consider a CBIR approach inspired by Case-Based Reasoning (CBR), which naturally introduces medical knowledge structured by cases. Moreover, in a CBR system the knowledge is incrementally added and learned. The purpose of this study is to initiate a translational solution from CBIR algorithms to clinical practice, using a CBIR/CBR hybrid approach. We therefore advance the idea of translational incremental similarity-based reasoning (TISBR), using combined CBIR and CBR characteristics: incremental learning of medical knowledge, a medical case-based structure of the knowledge (CBR), image usage to retrieve similar cases (CBIR), and the similarity concept (central to both paradigms). For this purpose, three major axes are explored: indexing, case retrieval, and search refinement, applied to Breast Cancer Grading (BCG), a powerful breast cancer prognosis exam. The effectiveness of this strategy is currently being evaluated for indexing, over cases provided by the Pathology Department of Singapore National University Hospital. With its current accuracy, TISBR opens interesting perspectives for complex reasoning in future medical research, paving the way to better knowledge traceability and a better acceptance rate of computer-aided diagnosis assistance among practitioners.

  13. Comparison of different incremental analysis update schemes in a realistic assimilation system with Ensemble Kalman Filter

    Science.gov (United States)

    Yan, Y.; Barth, A.; Beckers, J. M.; Brankart, J. M.; Brasseur, P.; Candille, G.

    2017-07-01

    In this paper, three incremental analysis update schemes (IAU 0, IAU 50 and IAU 100) are compared in the same assimilation experiments with a realistic eddy-permitting primitive equation model of the North Atlantic Ocean using the Ensemble Kalman Filter. The difference between the three IAU schemes lies in the position of the increment update window. The relevance of each IAU scheme is evaluated through analyses of both thermohaline and dynamical variables. The validation of the assimilation results is performed according to both deterministic and probabilistic metrics against different sources of observations. For the deterministic validation, the ensemble mean and the ensemble spread are compared to the observations. For the probabilistic validation, the continuous ranked probability score (CRPS) is used to evaluate the ensemble forecast system in terms of reliability and resolution. The reliability is further decomposed into bias and dispersion by the reduced centred random variable (RCRV) score. The obtained results show that: 1) the IAU 50 scheme has the same performance as the IAU 100 scheme; 2) the IAU 50/100 schemes outperform the IAU 0 scheme in error covariance propagation for thermohaline variables in relatively stable regions, while the IAU 0 scheme outperforms the IAU 50/100 schemes in the estimation of dynamical variables in dynamically active regions; 3) in cases with a sufficient number of observations and good error specification, the impact of the IAU schemes is negligible. The differences between the IAU 0 scheme and the IAU 50/100 schemes are mainly due to the different model integration times and the different instabilities (density inversion, large vertical velocity, etc.) induced by the increment update. The longer model integration time with the IAU 50/100 schemes, especially the free model integration, on the one hand allows a better re-establishment of the equilibrium model state, and on the other hand smooths the strong gradients in dynamically active regions.

  14. Partial and incremental PCMH practice transformation: implications for quality and costs.

    Science.gov (United States)

    Paustian, Michael L; Alexander, Jeffrey A; El Reda, Darline K; Wise, Chris G; Green, Lee A; Fetters, Michael D

    2014-02-01

    To examine the associations between partial and incremental implementation of the Patient Centered Medical Home (PCMH) model and measures of cost and quality of care. We combined validated, self-reported PCMH capabilities data with administrative claims data for a diverse statewide population of 2,432 primary care practices in Michigan. These data were supplemented with contextual data from the Area Resource File. We measured medical home capabilities in place as of June 2009 and change in medical home capabilities implemented between July 2009 and June 2010. Generalized estimating equations were used to estimate the mean effect of these PCMH measures on total medical costs and quality of care delivered in physician practices between July 2009 and June 2010, while controlling for potential practice, patient cohort, physician organization, and practice environment confounders. Based on the observed relationships for partial implementation, full implementation of the PCMH model is associated with a 3.5 percent higher quality composite score, a 5.1 percent higher preventive composite score, and $26.37 lower per member per month medical costs for adults. Full PCMH implementation is also associated with a 12.2 percent higher preventive composite score, but no reductions in costs for pediatric populations. Incremental improvements in PCMH model implementation yielded similar positive effects on quality of care for both adult and pediatric populations but were not associated with cost savings for either population. Estimated effects of the PCMH model on quality and cost of care appear to improve with the degree of PCMH implementation achieved and with incremental improvements in implementation.

  15. Increments and duplication events of enzymes and transcription factors influence metabolic and regulatory diversity in prokaryotes.

    Directory of Open Access Journals (Sweden)

    Mario Alberto Martínez-Núñez

    In this work, the content of enzymes and DNA-binding transcription factors (TFs) in 794 non-redundant prokaryotic genomes was evaluated. The identification of enzymes was based on annotations deposited in the KEGG database as well as in databases of functional domains (COG and PFAM) and structural domains (Superfamily). For the identification of TFs, hidden Markov profiles were constructed based on well-known transcriptional regulatory families. From these analyses we obtained diverse and interesting results, such as a negative rate of incremental change in the number of detected enzymes with respect to genome size. On the contrary, for TFs the rate increased as the complexity of the genome increased. This inversely related behavior shapes the diversity of metabolic and regulatory networks and impacts the availability of enzymes and TFs. Furthermore, the intersection of the derivatives for enzymes and TFs was identified at 9,659 genes; beyond this point, regulatory complexity grows faster than metabolic complexity. In addition, TFs show a low number of duplications, in contrast to the apparently high number of duplications associated with enzymes. Despite the greater number of duplicated enzymes versus TFs, the rate at which duplicates appear is higher for TFs. A lower proportion of enzymes in archaeal genomes (22%) than in bacterial ones (27%) was also found. This low proportion might be compensated by the interconnection between the metabolic pathways in Archaea. A similar proportion was also found for the archaeal TFs, for which the formation of regulatory complexes has been proposed.

  16. Forecasting the summer rainfall in North China using the year-to-year increment approach

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

A new approach to forecasting the year-to-year increment of rainfall in North China in July-August (JA) is proposed. DY is defined as the difference of a variable between the current year and the preceding year (the year-to-year increment), and NR denotes the seasonal mean precipitation rate over North China in JA. After analyzing the atmospheric circulation anomalies associated with the DY of NR, five key predictors for the DY of NR were identified. The prediction model for the DY of NR is established by a multi-linear regression method, and NR is then obtained as the forecasted DY of NR added to the preceding year's observed NR. The prediction model shows a high correlation coefficient (0.8) between the simulated and the observed DY of NR throughout the period 1965-1999, with an average relative root mean square error of 19% for the percentage precipitation rate anomaly over North China. A hindcast for 2000-2007 gives an average relative root mean square error of 21%, and the model reproduces the downward trend of the percentage precipitation rate anomaly over North China during 1965-2006. Current operational models for summer precipitation achieve average forecast scores of only 60%-70%, and the summer rainfall over North China has been especially difficult to forecast; this new approach for predicting the year-to-year increment of the summer precipitation (and hence the summer precipitation itself) therefore has the potential to significantly improve operational forecasting skill.
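
    As an illustration of the increment approach (a minimal sketch with synthetic data; the predictor values, coefficients, and rainfall figures below are placeholders, not the paper's), the model regresses the year-to-year increment DY of NR on the predictors and adds the predicted increment to the preceding year's observation:

```python
# Sketch of the year-to-year increment (DY) forecasting approach; all data
# and coefficients are illustrative placeholders, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1965, 2000)
X = rng.normal(size=(years.size, 5))          # five key predictors, in DY form
beta_true = np.array([0.8, -0.5, 0.3, 0.6, -0.2])
dy_nr = X @ beta_true + rng.normal(scale=0.3, size=years.size)  # DY of NR

# Multi-linear regression on the increments (ordinary least squares).
design = np.column_stack([np.ones(years.size), X])
coef, *_ = np.linalg.lstsq(design, dy_nr, rcond=None)

# Forecast: predicted DY added to the preceding year's observed rainfall.
nr_prev = 120.0                                # preceding year's observed NR (illustrative)
x_new = rng.normal(size=5)                     # predictors for the target year
dy_pred = coef[0] + x_new @ coef[1:]
nr_forecast = nr_prev + dy_pred
print(f"predicted DY = {dy_pred:.2f}, forecast NR = {nr_forecast:.2f}")
```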

  17. The Effects of Cells Temperature Increment and Variations of Irradiation for Monocrystalline Photovoltaic

    Science.gov (United States)

    Fuad Rahman Soeharto, Faishal; Hermawan

    2017-04-01

Photovoltaic cell technology has been developed to help meet the target of 17% renewable energy by 2025, in accordance with Indonesian Government Regulation No. 5 of 2006. Photovoltaic cells are made of semiconductor materials, namely silicon or germanium (p-n junctions). These cells need light from solar irradiation, which carries photon energy, to convert light energy into electrical energy. This differs from a solar heater, which requires the heat (thermal) energy of sunlight and is normally used for drying or for heating water. The photon energy arrives with heat due to black-body radiation, which can lead to temperature increments in photovoltaic cells; an increment of 1°C can decrease the photovoltaic cell voltage by up to 2.3 mV per cell. This research analyzes the effect of rising temperature and variations in irradiation on a monocrystalline photovoltaic module; the variations are analyzed by simulation and by experiment using an experimental module. The test results show that a temperature increment from 25°C to 80°C decreases the output voltage of the photovoltaic module by 4.21 V and reduces the output power by up to 0.7523 W. In addition, the larger the irradiation received by the cell, up to 1000 W/m2, the more output power the cells produce at the same temperature.
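
    As a rough consistency check (an illustration, not a calculation from the paper), the quoted per-cell temperature coefficient can be related to the reported module-level voltage drop, assuming the cells are connected in series:

```python
# Worked arithmetic relating the per-cell coefficient to the module-level
# voltage drop; the series-cell interpretation is an assumption, not a
# figure stated in the paper.
coeff_per_cell = 2.3e-3        # V per degC per cell (from the abstract)
dT = 80.0 - 25.0               # temperature rise in degC
drop_per_cell = coeff_per_cell * dT        # ~0.127 V per cell
cells_implied = 4.21 / drop_per_cell       # cells implied by the 4.21 V module drop
print(f"per-cell drop = {drop_per_cell:.3f} V, implied cell count = {cells_implied:.0f}")
```

    The implied count of roughly 33 series-connected cells is consistent with a small monocrystalline module.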

  18. Incremental health care utilization and costs for acute otitis media in children.

    Science.gov (United States)

    Ahmed, Sameer; Shapiro, Nina L; Bhattacharyya, Neil

    2014-01-01

To determine the incremental health care costs associated with the diagnosis and treatment of acute otitis media (AOM) in children, a cross-sectional analysis of a national health care cost database was performed. Pediatric patients with and without a diagnosis of AOM were compared, adjusting for age, sex, region, race, ethnicity, insurance coverage, and Charlson Comorbidity Index. A total of 8.7 ± 0.4 million children were diagnosed with AOM (10.7 ± 0.4% annually, mean age 5.3 years, 51.3% male) among 81.5 ± 2.3 million children sampled (mean age 8.9 years, 51.3% male). Children with AOM manifested an additional +2.0 office visits, +0.2 emergency department visits, and +1.6 prescription fills (all P < 0.001) per year versus those without AOM, adjusting for demographics and medical comorbidities. Similarly, AOM was associated with an incremental increase in outpatient health care costs of $314 per child annually (P < 0.001) and an increase of $17 in patient medication costs (P < 0.001), but was not associated with an increase in total prescription expenses ($13, P = 0.766). The diagnosis of AOM confers a significant incremental health care utilization burden on both patients and the health care system. With its high prevalence across the United States, pediatric AOM accounts for approximately $2.88 billion in added health care expense annually and is a significant health care utilization concern. © 2013 The American Laryngological, Rhinological and Otological Society, Inc.

  19. An electromyographic-based test for estimating neuromuscular fatigue during incremental treadmill running

    International Nuclear Information System (INIS)

    Camic, Clayton L; Kovacs, Attila J; Hill, Ethan C; Calantoni, Austin M; Yemm, Allison J; Enquist, Evan A; VanDusseldorp, Trisha A

    2014-01-01

The purposes of the present study were twofold: (1) to determine whether the model used for estimating the physical working capacity at the fatigue threshold (PWCFT) from electromyographic (EMG) amplitude data during incremental cycle ergometry could be applied to treadmill running to derive a new neuromuscular fatigue threshold for running, and (2) to compare the running velocities associated with the PWCFT, ventilatory threshold (VT), and respiratory compensation point (RCP). Fifteen college-aged subjects (21.5 ± 1.3 y, 68.7 ± 10.5 kg, 175.9 ± 6.7 cm) performed an incremental treadmill test to exhaustion with bipolar surface EMG signals recorded from the vastus lateralis. There were significant (p < 0.05) mean differences in running velocities between the VT (11.3 ± 1.3 km h-1) and PWCFT (14.0 ± 2.3 km h-1), and between the VT and RCP (14.0 ± 1.8 km h-1), but not between the PWCFT and RCP. The findings indicated that the PWCFT model can be applied to a single continuous, incremental treadmill test to estimate the maximal running velocity that can be maintained prior to the onset of neuromuscular fatigue. In addition, these findings suggest that the PWCFT, like the RCP, may be used to differentiate the heavy from the severe domain of exercise intensity. (paper)
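
    The abstract does not spell out the estimation procedure; the sketch below illustrates the general slope-based logic used by PWCFT-style models (regress EMG amplitude on time within each stage, then average the highest velocity with a non-significant slope and the lowest velocity with a significant positive slope). The data, the averaging rule, and the significance cutoff are assumptions of this sketch, not details from the paper.

```python
# Minimal sketch of a slope-based fatigue-threshold estimate from EMG
# amplitude data, assuming one EMG time series per treadmill stage.
import numpy as np
from scipy import stats

def pwc_ft(stages):
    """stages: list of (velocity_kmh, times_s, emg_amplitude) tuples."""
    nonsig, sig = [], []
    for v, t, emg in stages:
        slope, _, _, p, _ = stats.linregress(t, emg)
        # a significant positive EMG slope is read as accumulating fatigue
        (sig if (p < 0.05 and slope > 0) else nonsig).append(v)
    # average the highest non-fatiguing and lowest fatiguing velocities
    return (max(nonsig) + min(sig)) / 2.0

rng = np.random.default_rng(1)
t = np.arange(0, 120, 1.0)
stages = [(10, t, rng.normal(0.20, 0.02, t.size)),                  # flat EMG
          (13, t, rng.normal(0.30, 0.02, t.size)),                  # flat EMG
          (15, t, 0.30 + 0.001 * t + rng.normal(0, 0.02, t.size))]  # rising EMG
print(f"PWC_FT estimate: {pwc_ft(stages):.1f} km/h")
```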

  20. Blood flow patterns during incremental and steady-state aerobic exercise.

    Science.gov (United States)

    Coovert, Daniel; Evans, LeVisa D; Jarrett, Steven; Lima, Carla; Lima, Natalia; Gurovich, Alvaro N

    2017-05-30

Endothelial shear stress (ESS) is a physiological stimulus for vascular homeostasis and is highly dependent on blood flow patterns. Exercise-induced ESS might be beneficial for vascular health; however, it is unclear what type of ESS aerobic exercise (AX) produces. The aims of this study were to characterize exercise-induced blood flow patterns during incremental and steady-state AX. We expected the blood flow pattern during exercise to be intensity-dependent and bidirectional. Six college-aged students (2 males and 4 females) were recruited to perform 2 exercise tests on a cycle ergometer. First, an 8-12-min incremental test (Test 1) was performed, in which oxygen uptake (VO2), heart rate (HR), blood pressure (BP), and blood lactate (La) were measured at rest and after each 2-min step. Then, at least 48 hr after the first test, a 3-step steady-state exercise test (Test 2) was performed, measuring VO2, HR, BP, and La. The three steps were performed at the following exercise intensities according to La: 0-2 mmol/L, 2-4 mmol/L, and 4-6 mmol/L. During both tests, blood flow patterns were determined by high-definition ultrasound and Doppler on the brachial artery; these measurements allowed blood flow velocities and directions to be determined during exercise. On Test 1, VO2, HR, BP, La, and antegrade blood flow velocity increased significantly in an intensity-dependent manner (repeated measures ANOVA, p < 0.05), whereas retrograde blood flow velocity did not change significantly. On Test 2, all the previous variables increased significantly in an intensity-dependent manner (repeated measures ANOVA, p < 0.05). Blood flow patterns during incremental and steady-state exercise include both antegrade and retrograde blood flow.

  1. Experimental measurement of enthalpy increments of Th0.25Ce0.75O2

    International Nuclear Information System (INIS)

    Babu, R.; Balakrishnan, S.; Ananthasivan, K.; Nagarajan, K.

    2013-01-01

Thorium has been suggested as an alternative fertile material for the nuclear fuel cycle, and as an inert matrix for burning plutonium and for waste disposal. The third stage of India's nuclear power programme envisages utilization of thorium and plutonium as fuel in the Advanced Heavy Water Reactor (AHWR) and Accelerator Driven Sub-critical Systems (ADSS). Solid solutions of ThO2-PuO2 are of importance because of the coexistence of Th with Pu during the breeding cycle. CeO2 is used as a PuO2 analog owing to the similar ionic radii of the cations and the similar physico-chemical properties of the oxides. ThO2 forms a homogeneous solid solution with the cubic fluorite structure when doped with Ce over the entire compositional range. In the development of mixed oxide nuclear fuels, knowledge of the thermodynamic properties of thorium oxide and its mixtures is extremely important for understanding fuel behavior during irradiation and for predicting the performance of the fuel under accident conditions. Thermodynamic functions such as the enthalpy increment and heat capacity of the thoria-ceria solid solution had not previously been measured experimentally. Hence, the enthalpy increments of the thoria-ceria solid solution Th0.25Ce0.75O2 were measured by inverse drop calorimetry in the temperature range 523-1723 K. The measured enthalpy increments were fitted to polynomial functions by the least squares method, and other thermodynamic functions such as the heat capacity, entropy, and Gibbs energy functions were computed in the temperature range 298-1800 K. The reported thermodynamic functions for Th0.25Ce0.75O2 form the first experimental data for this composition, and the heat capacity of (Th,Ce)O2 solid solutions was shown to obey the Neumann-Kopp rule. (author)
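
    As an illustration of the data reduction described (the coefficients a, b, c, d are placeholders, not the fitted values reported in the paper), enthalpy increments of this kind are typically fitted to a polynomial in T, from which the heat capacity follows by differentiation; the Neumann-Kopp rule then provides a composition-weighted check:

```latex
% Sketch of the standard fitting procedure; a, b, c, d are illustrative
% placeholders, not the coefficients measured in the paper.
\[
  H^{\circ}(T) - H^{\circ}(298\,\mathrm{K}) = a + bT + cT^{2} + \frac{d}{T},
  \qquad
  C_{p}(T) = \frac{\mathrm{d}}{\mathrm{d}T}\left[H^{\circ}(T) - H^{\circ}(298\,\mathrm{K})\right]
           = b + 2cT - \frac{d}{T^{2}}.
\]
% Neumann-Kopp rule for the mixed oxide:
\[
  C_{p}\!\left(\mathrm{Th_{0.25}Ce_{0.75}O_{2}}\right)
  \approx 0.25\,C_{p}(\mathrm{ThO_{2}}) + 0.75\,C_{p}(\mathrm{CeO_{2}}).
\]
```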

  2. Enhancing pattern of gastric carcinoma at dynamic incremental CT: correlation with gross and histologic findings

    International Nuclear Information System (INIS)

    Shin, Hong Seop; Lee, Dong Ho; Kim, Yoon Hwa; Ko, Young Tae; Lim, Joo Won; Yoon, Yup

    1996-01-01

To evaluate the enhancing pattern of gastric carcinomas at dynamic incremental CT and to correlate it with pathologic findings, we retrospectively evaluated the enhancement pattern of stomach cancer on dynamic incremental CT in 78 patients. All the lesions had been pathologically proven after surgery. The enhancement pattern was categorized as good or poor in the early phase; as homogeneous, heterogeneous, or ring enhancement; and by the presence or absence of delayed enhancement. There were 16 cases of early gastric cancer (EGC) and 62 cases of advanced gastric cancer (AGC). The Borrmann types of the AGC cases were 1 (n=1), 2 (n=20), 3 (n=32), 4 (n=8), and 5 (n=1). The histologic patterns of AGC were tubular (n=49), signet ring cell (n=10), and mucinous (n=3). The enhancing patterns were compared with gross and histologic findings, and delayed enhancement was correlated with pathologic evidence of desmoplasia. Good enhancement of tumor was seen in 24/41 cases (58.5%) with AGC Borrmann type 3-5, in 6/21 (28.6%) with AGC Borrmann type 1-2, and in 3/16 (18.8%) with EGC (P<.05). By histologic pattern, good enhancement of tumor was seen in 8/10 (80%) with signet ring cell type, in 21/49 (42.9%) with tubular type, and in 1/3 (33.3%) with mucinous type (P<.05). EGC was homogeneously enhanced in 14/16 cases (87.5%), whereas AGC was heterogeneously enhanced in 33/62 (53.2%) (P<.01). There was no significant correlation between delayed enhancement and the presence of desmoplasia. AGC Borrmann types 3-5 and the signet ring cell type tend to show good enhancement, and EGC is more homogeneously enhanced at dynamic incremental CT.

  3. Stable Myoelectric Control of a Hand Prosthesis using Non-Linear Incremental Learning

    Directory of Open Access Journals (Sweden)

    Arjan eGijsberts

    2014-02-01

Full Text Available Stable myoelectric control of hand prostheses remains an open problem. The only successful human-machine interface is surface electromyography, typically allowing control of a few degrees of freedom. Machine learning techniques may have the potential to remove these limitations, but their performance is thus far inadequate: myoelectric signals change over time under the influence of various factors, deteriorating control performance. It is therefore necessary, in the standard approach, to regularly retrain a new model from scratch. We hereby propose a non-linear incremental learning method in which occasional updates with a modest amount of novel training data allow continual adaptation to the changes in the signals. In particular, Incremental Ridge Regression and an approximation of the Gaussian kernel known as Random Fourier Features are combined to predict finger forces from myoelectric signals, both finger-by-finger and grouped in grasping patterns. We show that the approach is effective and practically applicable to this problem by first analyzing its performance while predicting single-finger forces. Surface electromyography and finger forces were collected from 10 intact subjects during four sessions spread over two different days; the results of the analysis show that small incremental updates are indeed effective to maintain a stable level of performance. Subsequently, we employed the same method on-line to teleoperate a humanoid robotic arm equipped with a state-of-the-art commercial prosthetic hand. The subject could reliably grasp, carry and release everyday-life objects, enforcing stable grasping irrespective of the signal changes, hand/arm movements and wrist pronation and supination.
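
    A minimal sketch of the combination the abstract names, incremental ridge regression on Random Fourier Features approximating a Gaussian kernel; the feature dimensions, kernel width, and the simulated EMG-to-force data are placeholders, not the study's settings:

```python
# Incremental ridge regression on Random Fourier Features (RFF); a sketch of
# the approach described in the abstract, with synthetic placeholder data.
import numpy as np

rng = np.random.default_rng(0)
d_in, D, lam, gamma = 8, 300, 1.0, 0.5    # EMG channels, RFF dim, ridge, kernel width

# Random Fourier Features approximating a Gaussian (RBF) kernel:
# k(x, y) = exp(-gamma * ||x - y||^2)  =>  omega ~ N(0, 2*gamma*I).
W = rng.normal(scale=np.sqrt(2 * gamma), size=(d_in, D))
b = rng.uniform(0, 2 * np.pi, size=D)
phi = lambda x: np.sqrt(2.0 / D) * np.cos(x @ W + b)

# Incremental ridge: maintain A = Phi'Phi + lam*I and rhs = Phi'y, so each
# new sample adds only a rank-one term.
A = lam * np.eye(D)
rhs = np.zeros(D)

def update(x, y):
    global A, rhs
    z = phi(x)
    A += np.outer(z, z)
    rhs += y * z

def predict(x):
    w = np.linalg.solve(A, rhs)
    return phi(x) @ w

# Simulated stream: EMG vector -> finger force with slow drift over time.
for t in range(500):
    x = rng.normal(size=d_in)
    y = np.sin(x[0]) + 0.1 * rng.normal() + 1e-3 * t
    update(x, y)
print(f"prediction for a new sample: {predict(rng.normal(size=d_in)):.3f}")
```

    In practice one would update a Cholesky factor of A rather than re-solving at each prediction, but the sketch shows the incremental structure that makes occasional updates cheap.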

  4. The effect of the model posture on the forming quality in the CNC incremental forming

    International Nuclear Information System (INIS)

    Zhu, H; Zhang, W; Bai, J L; Yu, C; Xing, Y F

    2015-01-01

Sheet rupture caused by non-uniform sheet thickness remains a problem in CNC (Computer Numerical Control) incremental forming. Because the forming half-cone angle is determined by the orientation (posture) of the model to be formed, the posture also determines the uniformity of the sheet thickness. Finite element analysis models for two postures of the model were established, and digital simulations were conducted using the ANSYS/LS-DYNA software. The effect of the model's posture on the sheet thickness distribution and the thickness thinning rate was studied by comparing the simulation results of the two finite element analyses. (paper)

  5. A Model Based Approach to Increase the Part Accuracy in Robot Based Incremental Sheet Metal Forming

    International Nuclear Information System (INIS)

    Meier, Horst; Laurischkat, Roman; Zhu Junhong

    2011-01-01

One main influence on the dimensional accuracy in robot-based incremental sheet metal forming results from the compliance of the involved robot structures. Compared to conventional machine tools, the low stiffness of the robot's kinematics results in a significant deviation from the planned tool path and therefore in a shape of insufficient quality. To predict and compensate for these deviations offline, a model-based approach has been developed, consisting of a finite element model to simulate the sheet forming and a multi-body system to model the compliant robot structure. This paper describes the implementation and experimental verification of the multi-body system model and its included compensation method.

  6. B-2 Extremely High Frequency SATCOM and Computer Increment 1 (B-2 EHF Inc 1)

    Science.gov (United States)

    2015-12-01

Confidence Level: the cost estimate for the current APB was quantified at the mean (~55%) confidence level; the APB reflects cost and funding data based on the B-2 EHF Increment I SCP. From the flattened SAR baseline-to-current table, the production estimate changed from 33.624 (SAR baseline) to 28.335 (current estimate), a total change of -5.289 broken down as: Economic -0.350, Quantity +1.381, Schedule +0.375, Engineering 0.000, Estimating -6.075, Other 0.000, Support -0.620.

  7. Statistical Discriminability Estimation for Pattern Classification Based on Neural Incremental Attribute Learning

    DEFF Research Database (Denmark)

    Wang, Ting; Guan, Sheng-Uei; Puthusserypady, Sadasivan

    2014-01-01

    Feature ordering is a significant data preprocessing method in Incremental Attribute Learning (IAL), a novel machine learning approach which gradually trains features according to a given order. Previous research has shown that, similar to feature selection, feature ordering is also important based...... estimation. Moreover, a criterion that summarizes all the produced values of AD is employed with a GA (Genetic Algorithm)-based approach to obtain the optimum feature ordering for classification problems based on neural networks by means of IAL. Compared with the feature ordering obtained by other approaches...

  8. Redesenhando o Mapa Eleitoral do Brasil: uma proposta de reforma política incremental

    Directory of Open Access Journals (Sweden)

    Octavio Amorim Neto

    2011-06-01

Full Text Available This article presents a proposal for an incremental reform of the electoral system of the Chamber of Deputies. The incremental nature of the proposal rests on the assumption that the political system is a complex and delicate architecture, and that abrupt and ambitious changes are therefore more likely to make it worse than to improve it. The proposal maintains the current system of open-list proportional representation, but changes two key variables of this electoral system: it reduces the average district magnitude and establishes a proportional distribution of seats among parties that form electoral coalitions. Operationalizing the reform leads to a redesign of the country's electoral map, with smaller electoral districts within 12 states. The article also presents the results of a simulation exercise based on data from the 2006 elections, whereby the party makeup of the Chamber of Deputies is recalculated according to the rules proposed here.

  9. Developing risk prediction models for kidney injury and assessing incremental value for novel biomarkers.

    Science.gov (United States)

    Kerr, Kathleen F; Meisner, Allison; Thiessen-Philbrook, Heather; Coca, Steven G; Parikh, Chirag R

    2014-08-07

    The field of nephrology is actively involved in developing biomarkers and improving models for predicting patients' risks of AKI and CKD and their outcomes. However, some important aspects of evaluating biomarkers and risk models are not widely appreciated, and statistical methods are still evolving. This review describes some of the most important statistical concepts for this area of research and identifies common pitfalls. Particular attention is paid to metrics proposed within the last 5 years for quantifying the incremental predictive value of a new biomarker. Copyright © 2014 by the American Society of Nephrology.
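
    For instance (an illustration of one such metric, not an example from the review), the incremental predictive value of a biomarker is often summarized as the change in the area under the ROC curve when the biomarker is added to a base clinical model; a real analysis would evaluate on held-out data or by cross-validation to avoid optimism:

```python
# Sketch: incremental AUC of a new biomarker over a base risk model.
# Data are simulated; in practice the models would be fit on patient cohorts.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
clinical = rng.normal(size=(n, 3))             # base clinical covariates
biomarker = rng.normal(size=(n, 1))            # candidate new biomarker
logit = clinical @ [0.8, -0.5, 0.3] + 0.6 * biomarker[:, 0]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # simulated outcomes

base = LogisticRegression().fit(clinical, y)
full = LogisticRegression().fit(np.hstack([clinical, biomarker]), y)

auc_base = roc_auc_score(y, base.predict_proba(clinical)[:, 1])
auc_full = roc_auc_score(y, full.predict_proba(np.hstack([clinical, biomarker]))[:, 1])
print(f"AUC base = {auc_base:.3f}, with biomarker = {auc_full:.3f}, "
      f"increment = {auc_full - auc_base:.3f}")
```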

  10. Incremental benefit of three-dimensional transesophageal echocardiography in the assessment of a primary pericardial hemangioma.

    Science.gov (United States)

    Arisha, Mohammed J; Hsiung, Ming C; Nanda, Navin C; ElKaryoni, Ahmed; Mohamed, Ahmed H; Wei, Jeng

    2017-08-01

Hemangiomas are rarely found in the heart, and pericardial involvement is even rarer. We report a case of primary pericardial hemangioma in which three-dimensional transesophageal echocardiography (3DTEE) provided incremental benefit over standard two-dimensional images. Our case also highlights the importance of systematic cropping of the 3D datasets in making a diagnosis of pericardial hemangioma with a greater degree of certainty. In addition, we provide a literature review of the features of cardiac/pericardial hemangiomas in tabular form. © 2017, Wiley Periodicals, Inc.

  11. The Incremental Information Content of the Cash Flow Statement: An Australian Empirical Investigation

    OpenAIRE

    Hadri Kusuma

    2014-01-01

The general objective of the present study is to investigate and assess the incremental information content of cash flow disclosures as required by AASB 1026 "Statement of Cash Flows". This test addresses the issue of whether a change in cash flow components has the same relationship with security prices as that in earnings. Several previous studies indicate both income and cash flow statements may be mutually exclusive or mutually inclusive statements. The data to test three hypotheses...

  12. Is incremental hemodialysis ready to return on the scene? From empiricism to kinetic modelling.

    Science.gov (United States)

    Basile, Carlo; Casino, Francesco Gaetano; Kalantar-Zadeh, Kamyar

    2017-08-01

Most people who make the transition to maintenance dialysis therapy are treated with a fixed-dose, thrice-weekly hemodialysis regimen without consideration of their residual kidney function (RKF). The RKF provides effective and naturally continuous clearance of both small and middle molecules, plays a major role in metabolic homeostasis, nutritional status, and cardiovascular health, and aids in fluid management. The RKF is associated with better patient survival and greater health-related quality of life, although these effects may be confounded by patient comorbidities. Preservation of the RKF requires a careful approach, including regular monitoring, avoidance of nephrotoxins, gentle control of blood pressure to avoid intradialytic hypotension, and an individualized dialysis prescription, including the consideration of incremental hemodialysis. There is currently no standardized method for applying incremental hemodialysis in practice. Infrequent (once- to twice-weekly) hemodialysis regimens are often used arbitrarily, without knowing which patients would benefit the most from them or how to escalate the dialysis dose as RKF declines over time. The recently heightened interest in incremental hemodialysis has been hindered by the current limitations of the urea kinetic models (UKM), which tend to overestimate the dialysis dose required in the presence of substantial RKF. This is due to an erroneous extrapolation of the equivalence between renal urea clearance (Kru) and dialyser urea clearance (Kd), correctly assumed by the UKM, to the clinical domain: in the kinetic context, each ml/min of Kd clears urea from the blood just as 1 ml/min of Kru does, but by no means should such kinetic equivalence imply that 1 ml/min of Kd is clinically equivalent to 1 ml/min of urea clearance provided by the native kidneys. A recent paper by Casino and Basile suggested a variable target model (VTM) as opposed to the fixed model, because the VTM gives more clinical weight to the RKF and allows ...

  13. Determination of the carbon market incremental payoff considering a stochastic jump-diffusion process

    Directory of Open Access Journals (Sweden)

    Fabio Rodrigo Siqueira Batista

    2013-12-01

Full Text Available The objective of this paper is to verify the robustness of the Least Squares Monte Carlo and Grant, Vora & Weeks methods when used to determine the incremental payoff of the carbon market for renewable electricity generation projects, considering that the behavior of the price of Certified Emission Reductions, otherwise known as carbon credits, may be modeled by a jump-diffusion process. In addition, the paper analyzes particular characteristics, such as the absence of monotonicity, found in the trigger curves obtained with the Grant, Vora & Weeks method when valuing these types of projects.
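
    For background (a generic sketch, not the paper's calibration), a jump-diffusion price process of the Merton type combines geometric Brownian motion with Poisson-driven jumps; all parameter values below are illustrative placeholders:

```python
# Sketch: simulating a Merton-style jump-diffusion path for a carbon-credit
# price; parameters are illustrative, not calibrated to market data.
import numpy as np

rng = np.random.default_rng(0)
S0, mu, sigma = 10.0, 0.05, 0.3            # initial price, drift, diffusion vol
lam, jump_mu, jump_sigma = 0.5, -0.1, 0.2  # jump intensity, log-jump mean/std
T, n = 1.0, 252
dt = T / n

log_s = np.full(n + 1, np.log(S0))
for t in range(n):
    jumps = rng.poisson(lam * dt)          # number of jumps in this step
    # compound Poisson term: sum of `jumps` iid normal log-jump sizes
    jump_term = rng.normal(jump_mu * jumps, jump_sigma * np.sqrt(jumps)) if jumps else 0.0
    log_s[t + 1] = (log_s[t]
                    + (mu - 0.5 * sigma**2) * dt
                    + sigma * np.sqrt(dt) * rng.normal()
                    + jump_term)
S = np.exp(log_s)
print(f"terminal simulated price: {S[-1]:.2f}")
```

    A Least Squares Monte Carlo valuation would then regress continuation values on such simulated states to locate the exercise (trigger) boundary.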

  14. Incremental Adaptive Fuzzy Control for Sensorless Stroke Control of A Halbach-type Linear Oscillatory Motor

    Science.gov (United States)

    Lei, Meizhen; Wang, Liqiang

    2018-01-01

The Halbach-type linear oscillatory motor (HT-LOM) is multi-variable, highly coupled, nonlinear, and uncertain, and a satisfactory result is difficult to obtain with conventional PID control. An incremental adaptive fuzzy controller (IAFC) for stroke tracking is presented, which combines the merits of PID control, a fuzzy inference mechanism, and an adaptive algorithm. An integral operation is added to the conventional fuzzy control algorithm, and the fuzzy scale factor is tuned online according to the load force and the stroke command. The simulation results indicate that the proposed control scheme achieves satisfactory stroke-tracking performance and is robust with respect to parameter variations and external disturbance.
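
    For context, a sketch of the incremental (velocity-form) PID law that such controllers build on; the paper's fuzzy inference and adaptive tuning are reduced here to a single error-dependent scale factor, which, like the gains and the toy plant, is an assumption of this sketch:

```python
# Sketch: incremental (velocity-form) PID update with a crude adaptive scale
# factor standing in for the paper's fuzzy inference; gains are illustrative.
class IncrementalPID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.e1 = self.e2 = 0.0   # previous two errors
        self.u = 0.0              # accumulated control output

    def step(self, error, scale=1.0):
        # du = Kp*(e - e1) + Ki*e + Kd*(e - 2*e1 + e2), scaled adaptively
        du = (self.kp * (error - self.e1)
              + self.ki * error
              + self.kd * (error - 2 * self.e1 + self.e2))
        self.u += scale * du
        self.e2, self.e1 = self.e1, error
        return self.u

pid = IncrementalPID(kp=2.0, ki=0.5, kd=0.1)
setpoint, y = 1.0, 0.0
for _ in range(50):
    e = setpoint - y
    scale = 1.0 + 0.5 * abs(e)    # stand-in for fuzzy online tuning of the scale factor
    u = pid.step(e, scale)
    y += 0.1 * (u - y)            # toy first-order plant, for illustration only
print(f"plant output after 50 steps: {y:.3f}")
```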

  15. Preconditioned dynamic mode decomposition and mode selection algorithms for large datasets using incremental proper orthogonal decomposition

    Science.gov (United States)

    Ohmichi, Yuya

    2017-07-01

    In this letter, we propose a simple and efficient framework of dynamic mode decomposition (DMD) and mode selection for large datasets. The proposed framework explicitly introduces a preconditioning step using an incremental proper orthogonal decomposition (POD) to DMD and mode selection algorithms. By performing the preconditioning step, the DMD and mode selection can be performed with low memory consumption and therefore can be applied to large datasets. Additionally, we propose a simple mode selection algorithm based on a greedy method. The proposed framework is applied to the analysis of three-dimensional flow around a circular cylinder.
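
    A minimal sketch of the two-stage idea: project the snapshots onto a truncated POD basis, then run DMD in the reduced coordinates. The batch SVD below stands in for the incremental POD update the paper uses to bound memory, and the snapshot data are synthetic:

```python
# Sketch: DMD preconditioned by a POD projection (batch SVD here; the paper
# computes the POD incrementally to keep memory low). Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 400, 60, 10                  # state dim, snapshot count, POD rank
X = rng.normal(size=(n, m))            # placeholder snapshot matrix (one per column)

# POD step: truncated basis from the snapshots (the preconditioner).
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Ur = U[:, :r]
Y = Ur.T @ X                           # reduced coordinates, r x m

# DMD on the reduced snapshot pairs (Y1 -> Y2).
Y1, Y2 = Y[:, :-1], Y[:, 1:]
A_r = Y2 @ np.linalg.pinv(Y1)          # reduced linear operator
eigvals, W = np.linalg.eig(A_r)
modes = Ur @ W                         # DMD modes lifted back to full space
print("leading DMD eigenvalues:", np.round(eigvals[:3], 3))
```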

  16. Indexing Density Models for Incremental Learning and Anytime Classification on Data Streams

    DEFF Research Database (Denmark)

    Seidl, Thomas; Assent, Ira; Kranen, Philipp

    2009-01-01

    Classification of streaming data faces three basic challenges: it has to deal with huge amounts of data, the varying time between two stream data items must be used best possible (anytime classification) and additional training data must be incrementally learned (anytime learning) for applying...... to the individual object to be classified) a hierarchy of mixture densities that represent kernel density estimators at successively coarser levels. Our probability density queries together with novel classification improvement strategies provide the necessary information for very effective classification at any...... point of interruption. Moreover, we propose a novel evaluation method for anytime classification using Poisson streams and demonstrate the anytime learning performance of the Bayes tree....

  17. Partial Variance of Increments Method in Solar Wind Observations and Plasma Simulations

    Science.gov (United States)

    Greco, A.; Matthaeus, W. H.; Perri, S.; Osman, K. T.; Servidio, S.; Wan, M.; Dmitruk, P.

    2018-02-01

The method called "PVI" (Partial Variance of Increments) has been increasingly used in the analysis of spacecraft and numerical simulation data since its inception in 2008. The purpose of the method is to study the kinematics and formation of coherent structures in space plasmas, a topic that has gained considerable attention, leading to the development of identification methods, observations, and associated theoretical research based on numerical simulations. This review paper summarizes key features of the method and provides a synopsis of the main results obtained by various groups using it. This will enable new users, or those considering methods of this type, to find details and background collected in one place.
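
    The normalization underlying the method, as commonly defined in the PVI literature, divides the magnitude of the field increment by its rms value: PVI(t, tau) = |dB(t, tau)| / sqrt(<|dB(t, tau)|^2>), with dB(t, tau) = B(t + tau) - B(t). A minimal numeric sketch with a synthetic signal:

```python
# Sketch: computing a PVI series from a (synthetic) magnetic field time series.
# PVI(t, tau) = |dB(t, tau)| / sqrt(<|dB|^2>), with dB = B(t+tau) - B(t).
import numpy as np

rng = np.random.default_rng(0)
B = np.cumsum(rng.normal(size=(10000, 3)), axis=0)   # toy 3-component field
tau = 10                                             # lag in samples

dB = B[tau:] - B[:-tau]                              # vector increments
mag = np.linalg.norm(dB, axis=1)                     # |dB|
pvi = mag / np.sqrt(np.mean(mag**2))                 # normalized increment

# Large PVI values (e.g., > 3) are commonly read as candidate coherent structures.
print(f"fraction of samples with PVI > 3: {np.mean(pvi > 3):.4f}")
```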

  18. Incremental Capacity Analysis of a Lithium-Ion Battery Pack for Different Charging Rates

    DEFF Research Database (Denmark)

    Kalogiannis, Theodoros; Stroe, Daniel-Ioan; Nyborg, Jonas

    2017-01-01

Incremental Capacity Analysis (ICA) is a method used to investigate the capacity state of health of batteries by tracking the electrochemical properties of the cell. It is based on the differentiation of the battery capacity over the battery voltage, for a full or a partial cycle. ... This work presents an in-depth investigation of two battery packs composed of 14 Lithium-ion cells each, for the purpose of evaluating the applicability and the challenges of ICA at the battery-pack level by means of different charging current rates. Also, at a certain charging current, the influence of the temperature on the ICA curves ...
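
    A minimal numeric sketch of the quantity ICA tracks, the incremental capacity dQ/dV computed from a charging curve; the sigmoid charge profile below is a synthetic placeholder, not pack data from the study:

```python
# Sketch: incremental capacity (dQ/dV) curve from a charging profile.
# Voltage/capacity data are synthetic; real data would come from a cycler log.
import numpy as np

# Toy charge curve: capacity (Ah) sampled against cell voltage (V).
V = np.linspace(3.0, 4.2, 500)
Q = 2.5 / (1 + np.exp(-12 * (V - 3.7)))       # sigmoid-like capacity vs voltage

dQdV = np.gradient(Q, V)                      # incremental capacity
v_peak = V[np.argmax(dQdV)]
print(f"IC peak at {v_peak:.2f} V, height {dQdV.max():.2f} Ah/V")
# Shifts and shrinkage of such peaks over ageing are what ICA uses to track
# state of health; noisy data usually needs smoothing before differentiation.
```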

  19. Incremental Risks of Transporting NARM to the LLW Disposal Facility at Hanford

    International Nuclear Information System (INIS)

    Weiner, R.F.

    1999-01-01

This study models the incremental radiological risk of transporting NARM to the Hanford commercial LLW facility, both for incident-free transportation and for possible transportation accidents, compared with the radiological risk of transporting LLW to that facility. Transportation routes are modeled using HIGHWAY 3.1, and risks are modeled using RADTRAN 4. Both annual population doses and risks and annual average individual doses and risks are reported. Three routes to the Hanford site were modeled: from Albany, OR; from Coeur d'Alene, ID (called the Spokane route); and from Seattle, WA. Conservative estimates are used for the RADTRAN inputs, and RADTRAN itself is conservative.

  20. The limit distribution of the maximum increment of a random walk with dependent regularly varying jump sizes

    DEFF Research Database (Denmark)

    Mikosch, Thomas Valentin; Moser, Martin

    2013-01-01

    We investigate the maximum increment of a random walk with heavy-tailed jump size distribution. Here heavy-tailedness is understood as regular variation of the finite-dimensional distributions. The jump sizes constitute a strictly stationary sequence. Using a continuous mapping argument acting...... on the point processes of the normalized jump sizes, we prove that the maximum increment of the random walk converges in distribution to a Fréchet distributed random variable....